From fdkong.jd at gmail.com Thu Jan 2 17:29:57 2020 From: fdkong.jd at gmail.com (Fande Kong) Date: Thu, 2 Jan 2020 16:29:57 -0700 Subject: [petsc-users] Fwd: Moose install troubleshooting help In-Reply-To: References: Message-ID:

Satish,

Do you have any suggestions for this?

Chris,

It may be helpful if you could share the petsc configuration log file with us.

Fande,

---------- Forwarded message ---------
From: Chris Thompson
Date: Tue, Dec 31, 2019 at 9:53 AM
Subject: Moose install troubleshooting help
To: moose-users

Dear All,
I could use some help or a pointer in the right direction.

I have been following the directions at https://mooseframework.inl.gov/getting_started/installation/hpc_install_moose.html All was going well until "make PETSC_DIR=$STACK_SRC/petsc-3.11.4 PETSC_ARCH=linux-opt install". I tried running "make --debug=v PETSC_DIR=$STACK_SRC/petsc-3.11.4 PETSC_ARCH=linux-opt install", but that didn't produce any more helpful information.

This is on a CentOS system running 7.6.

Here is the error / output I am getting.

rogue /usr/local/neapps/moose/stack_temp/petsc-3.11.4 926$ make --debug=v PETSC_DIR=/usr/local/neapps/moose/stack_temp/petsc-3.11.4 PETSC_ARCH=linux-opt install
GNU Make 3.82
Built for x86_64-redhat-linux-gnu
Copyright (C) 2010 Free Software Foundation, Inc.
License GPLv3+: GNU GPL version 3 or later
This is free software: you are free to change and redistribute it.
There is NO WARRANTY, to the extent permitted by law.
Reading makefiles...
Reading makefile `makefile'...
Reading makefile `linux-opt/lib/petsc/conf/petscvariables' (search path) (no ~ expansion)...
Reading makefile `/usr/local/neapps/moose/stack_temp/petsc-3.11.4/lib/petsc/conf/variables' (search path) (no ~ expansion)...
Reading makefile `/usr/local/neapps/moose/stack_temp/petsc-3.11.4/linux-opt/lib/petsc/conf/petscvariables' (search path) (no ~ expansion)...
Reading makefile `/usr/local/neapps/moose/stack_temp/petsc-3.11.4/lib/petsc/conf/rules' (search path) (no ~ expansion)...
Reading makefile `/usr/local/neapps/moose/stack_temp/petsc-3.11.4/linux-opt/lib/petsc/conf/petscrules' (search path) (no ~ expansion)...
Reading makefile `/usr/local/neapps/moose/stack_temp/petsc-3.11.4/lib/petsc/conf/test.common' (search path) (no ~ expansion)...
Updating goal targets....
Considering target file `install'.
File `install' does not exist.
Finished prerequisites of target file `install'.
Must remake target `install'.
Invoking recipe from makefile:250 to update target `install'.
*** Using PETSC_DIR=/usr/local/neapps/moose/stack_temp/petsc-3.11.4 PETSC_ARCH=linux-opt *** *** Installing PETSc at prefix location: /usr/local/neapps/moose/petsc-3.11.4 *** Traceback (most recent call last): File "./config/install.py", line 434, in Installer(sys.argv[1:]).run() File "./config/install.py", line 428, in run self.runcopy() File "./config/install.py", line 407, in runcopy self.installIncludes() File "./config/install.py", line 305, in installIncludes self.copies.extend(self.copytree(self.rootIncludeDir, self.destIncludeDir,exclude = exclude)) File "./config/install.py", line 246, in copytree raise shutil.Error(errors) shutil.Error: ['/usr/local/neapps/moose/stack_temp/petsc-3.11.4/include/petsc', '/usr/local/neapps/moose/petsc-3.11.4/include/petsc', '[\'/usr/local/neapps/moose/stack_temp/petsc-3.11.4/include/petsc/private\', \'/usr/local/neapps/moose/petsc-3.11.4/include/petsc/private\', \'[\\\'/usr/local/neapps/moose/stack_temp/petsc-3.11.4/include/petsc/private/kernels\\\', \\\'/usr/local/neapps/moose/petsc-3.11.4/include/petsc/private/kernels\\\', \\\'[\\\\\\\'/usr/local/neapps/moose/stack_temp/petsc-3.11.4/include/petsc/private/kernels\\\\\\\', \\\\\\\'/usr/local/neapps/moose/petsc-3.11.4/include/petsc/private/kernels\\\\\\\', "[Errno 1] Operation not permitted: \\\\\\\'/usr/local/neapps/moose/petsc-3.11.4/include/petsc/private/kernels\\\\\\\'"]\\\', \\\'/usr/local/neapps/moose/stack_temp/petsc-3.11.4/include/petsc/private\\\', \\\'/usr/local/neapps/moose/petsc-3.11.4/include/petsc/private\\\', "[Errno 1] Operation not permitted: \\\'/usr/local/neapps/moose/petsc-3.11.4/include/petsc/private\\\'"]\', \'/usr/local/neapps/moose/stack_temp/petsc-3.11.4/include/petsc/finclude\', \'/usr/local/neapps/moose/petsc-3.11.4/include/petsc/finclude\', \'[\\\'/usr/local/neapps/moose/stack_temp/petsc-3.11.4/include/petsc/finclude\\\', \\\'/usr/local/neapps/moose/petsc-3.11.4/include/petsc/finclude\\\', "[Errno 1] Operation not permitted: \\\'/usr/local/neapps/moose/petsc-3.11.4/include/petsc/finclude\\\'"]\', \'/usr/local/neapps/moose/stack_temp/petsc-3.11.4/include/petsc\', \'/usr/local/neapps/moose/petsc-3.11.4/include/petsc\', "[Errno 1] Operation not permitted: \'/usr/local/neapps/moose/petsc-3.11.4/include/petsc\'"]', '/usr/local/neapps/moose/stack_temp/petsc-3.11.4/include', '/usr/local/neapps/moose/petsc-3.11.4/include', "[Errno 1] Operation not permitted: '/usr/local/neapps/moose/petsc-3.11.4/include'"] make: *** [install] Error 1 I'm not sure how to proceed with this error. Thank you, Chris -- You received this message because you are subscribed to the Google Groups "moose-users" group. To unsubscribe from this group and stop receiving emails from it, send an email to moose-users+unsubscribe at googlegroups.com. To view this discussion on the web visit https://groups.google.com/d/msgid/moose-users/d7c11fe1-f6aa-4b5f-8746-9ea7bc93de87%40googlegroups.com . -------------- next part -------------- An HTML attachment was scrubbed... URL: From knepley at gmail.com Thu Jan 2 20:04:29 2020 From: knepley at gmail.com (Matthew Knepley) Date: Thu, 2 Jan 2020 21:04:29 -0500 Subject: [petsc-users] Fwd: Moose install troubleshooting help In-Reply-To: References: Message-ID: On Thu, Jan 2, 2020 at 6:31 PM Fande Kong wrote: > > Satish, > > Do you have any suggestions for this? > > Chris, > > It may be helpful if you could share the petsc configuration log file with > us? 
> Fande,
>
> [...]
> [...]
> I'm not sure how to proceed with this error.

It looks like you do not have permissions in the install directory. You may want to run the install command using 'sudo'.

  Thanks,

    Matt

> Thank you,
> Chris
> [...]

-- What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. -- Norbert Wiener https://www.cse.buffalo.edu/~knepley/
-------------- next part -------------- An HTML attachment was scrubbed... URL:
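Concretely, applying Matt's suggestion to the command from the original report would presumably be:

sudo make PETSC_DIR=/usr/local/neapps/moose/stack_temp/petsc-3.11.4 PETSC_ARCH=linux-opt install

(this assumes the account has sudo rights over the /usr/local/neapps prefix; fixing the ownership of /usr/local/neapps/moose/petsc-3.11.4 instead would avoid running make as root).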
From dalcinl at gmail.com Sun Jan 5 04:28:26 2020 From: dalcinl at gmail.com (Lisandro Dalcin) Date: Sun, 5 Jan 2020 13:28:26 +0300 Subject: [petsc-users] KSPComputeOperator in petsc4py In-Reply-To: References: Message-ID:

On Thu, 26 Dec 2019 at 21:36, Mark Cunningham <mark.cunningham at ariacoustics.com> wrote:
> I have an application written in python and using petsc4py. I would like to study the effects of different preconditioners on my eigenvalue spectrum and thought to use KSPComputeOperator to obtain the preconditioned matrix. This, apparently, is not disclosed through the petsc4py implementation.

Could you share the modifications you implemented? This way I could quickly add them to petsc4py to make them available in the next release.

> I found KSP.pyx, where I believe that the KSP object is defined, and added a definition for the function but, after a pip install,

If you "cd" to the top level petsc4py source directory with your modifications, then you may need

$ pip install --no-cache-dir .  # note the final dot "."

otherwise pip may just reinstall petsc4py from a previously built wheel file stored in the pip cache.

-- Lisandro Dalcin ============ Research Scientist Extreme Computing Research Center (ECRC) King Abdullah University of Science and Technology (KAUST) http://ecrc.kaust.edu.sa/
-------------- next part -------------- An HTML attachment was scrubbed... URL:

From mfadams at lbl.gov Tue Jan 7 05:31:13 2020 From: mfadams at lbl.gov (Mark Adams) Date: Tue, 7 Jan 2020 06:31:13 -0500 Subject: [petsc-users] Stopping TS in PostStep Message-ID:

I have a test in a PostStep function in TS and I would like to terminate, with success, when my test is satisfied. What is the best way to do that?
Thanks,
Mark
-------------- next part -------------- An HTML attachment was scrubbed... URL:

From patrick.sanan at gmail.com Tue Jan 7 05:41:35 2020 From: patrick.sanan at gmail.com (Patrick Sanan) Date: Tue, 7 Jan 2020 12:41:35 +0100 Subject: [petsc-users] Stopping TS in PostStep In-Reply-To: References: Message-ID: <2D022C21-8A1D-4B52-A7B9-98E2336FF5A0@gmail.com>

I think the standard approach is

https://www.mcs.anl.gov/petsc/petsc-current/docs/manualpages/TS/TSSetConvergedReason.html

with

https://www.mcs.anl.gov/petsc/petsc-current/docs/manualpages/TS/TS_CONVERGED_USER.html

> On 07.01.2020 at 12:31, Mark Adams wrote:
> I have a test in a PostStep function in TS and I would like to terminate, with success, when my test is satisfied. What is the best way to do that?
> Thanks,
> Mark
-------------- next part -------------- An HTML attachment was scrubbed... URL:

From mfadams at lbl.gov Tue Jan 7 06:54:27 2020 From: mfadams at lbl.gov (Mark Adams) Date: Tue, 7 Jan 2020 07:54:27 -0500 Subject: [petsc-users] Stopping TS in PostStep In-Reply-To: <2D022C21-8A1D-4B52-A7B9-98E2336FF5A0@gmail.com> References: <2D022C21-8A1D-4B52-A7B9-98E2336FF5A0@gmail.com> Message-ID:

Yep, thanks,

On Tue, Jan 7, 2020 at 6:41 AM Patrick Sanan wrote:
> I think the standard approach is
> [...]
-------------- next part -------------- An HTML attachment was scrubbed... URL:
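As a minimal sketch of the pattern Patrick points to (the callback name MyPostStep and the stopping test are hypothetical; TSSetPostStep, TSGetSolution, TSSetConvergedReason, and TS_CONVERGED_USER are actual PETSc API):

PetscErrorCode MyPostStep(TS ts)
{
  PetscErrorCode ierr;
  PetscBool      done = PETSC_FALSE;
  Vec            u;

  PetscFunctionBeginUser;
  ierr = TSGetSolution(ts,&u);CHKERRQ(ierr);
  /* ... evaluate the user-defined stopping test on u, setting done ... */
  if (done) {
    ierr = TSSetConvergedReason(ts,TS_CONVERGED_USER);CHKERRQ(ierr); /* TSSolve returns with success */
  }
  PetscFunctionReturn(0);
}

The callback would be registered once before TSSolve() with TSSetPostStep(ts,MyPostStep).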
From mfadams at lbl.gov Tue Jan 7 08:27:10 2020 From: mfadams at lbl.gov (Mark Adams) Date: Tue, 7 Jan 2020 09:27:10 -0500 Subject: [petsc-users] TS shallow reset Message-ID:

I would like to do a parameter study with a TS solve and want to put TSSolve in a loop. I save the initial conditions in a Vec and copy that into the solution vector after each solve, to get ready for the next one. But TS does not seem to do anything on the second solve. I guess it thinks it is converged. Is there a way to reset the solver without redoing the whole TS?

Maybe set the step number and TSSetConvergedReason?

Thanks,
Mark
-------------- next part -------------- An HTML attachment was scrubbed... URL:

From mfadams at lbl.gov Tue Jan 7 08:59:12 2020 From: mfadams at lbl.gov (Mark Adams) Date: Tue, 7 Jan 2020 09:59:12 -0500 Subject: [petsc-users] [petsc-maint] (no subject) In-Reply-To: <7495091578406650@iva1-ad256f95df1b.qloud-c.yandex.net> References: <6997781578401371@sas2-acef09fc61af.qloud-c.yandex.net> <7495091578406650@iva1-ad256f95df1b.qloud-c.yandex.net> Message-ID:

I'm not sure what the compilers and C++ are doing here

On Tue, Jan 7, 2020 at 9:17 AM Ilya Kudrov wrote:
> However, after configuring
>
> cout<<1. + 1.*PETSC_i<<endl;
>
> outputs (1, 0) instead of (1, 1).
>
> 07.01.2020, 16:01, "Mark Adams":
> > yes, configure with
> >
> > --with-precision=single
> > --with-scalar-type=complex
> >
> > On Tue, Jan 7, 2020 at 7:49 AM Ilya Kudrov wrote:
> > Good day! Is it possible to use complex numbers in single precision in linear solvers? I'd like to decrease time and memory needed for calculation.
> >
> > Best regards, Ilya Kudrov!
-------------- next part -------------- An HTML attachment was scrubbed... URL:

From bsmith at mcs.anl.gov Tue Jan 7 09:14:07 2020 From: bsmith at mcs.anl.gov (Smith, Barry F.) Date: Tue, 7 Jan 2020 15:14:07 +0000 Subject: [petsc-users] TS shallow reset In-Reply-To: References: Message-ID: <782C2042-8B02-4F92-8B17-DFAC90AFACEC@anl.gov>

Do you reset the initial timestep? Otherwise the second solve thinks it is at the end. Also you may need to reset the iteration number

Something like

ierr = TSSetTime(appctx->ts, 0);CHKERRQ(ierr);
ierr = TSSetStepNumber(appctx->ts, 0);CHKERRQ(ierr);
ierr = TSSetTimeStep(appctx->ts, appctx->initial_dt);CHKERRQ(ierr);

> On Jan 7, 2020, at 8:27 AM, Mark Adams wrote:
> I would like to do a parameter study with a TS solve and want to put TSSolve in a loop.
> [...]
> Maybe set the step number and TSSetConvergedReason?

From bsmith at mcs.anl.gov Tue Jan 7 09:21:50 2020 From: bsmith at mcs.anl.gov (Smith, Barry F.) Date: Tue, 7 Jan 2020 15:21:50 +0000 Subject: [petsc-users] [petsc-maint] (no subject) In-Reply-To: References: <6997781578401371@sas2-acef09fc61af.qloud-c.yandex.net> <7495091578406650@iva1-ad256f95df1b.qloud-c.yandex.net> Message-ID:

> On Jan 7, 2020, at 8:59 AM, Mark Adams wrote:
> I'm not sure what the compilers and C++ are doing here
>
> On Tue, Jan 7, 2020 at 9:17 AM Ilya Kudrov wrote:
> However, after configuring
>
> cout<<1. + 1.*PETSC_i<<endl;
>
> outputs (1, 0) instead of (1, 1).

Where after configure? PETSC_i is not defined until after PetscInitialize() is called. Send full example.
Here is the code that defines it:

/*
     Initialized the global complex variable; this is because with
     shared libraries the constructors for global variables
     are not called; at least on IRIX.
*/
#if defined(PETSC_HAVE_COMPLEX)
  {
#if defined(PETSC_CLANGUAGE_CXX) && !defined(PETSC_USE_REAL___FLOAT128)
    PetscComplex ic(0.0,1.0);
    PETSC_i = ic;
#else
    PETSC_i = _Complex_I;
#endif
  }
#endif

Perhaps it is problematic in C++? With single precision? Try

  PetscComplex ic(0.0,1.0);

and see what ic is.

  Barry

> 07.01.2020, 16:01, "Mark Adams":
> yes, configure with
>
> --with-precision=single
> --with-scalar-type=complex
>
> On Tue, Jan 7, 2020 at 7:49 AM Ilya Kudrov wrote:
> Good day! Is it possible to use complex numbers in single precision in linear solvers? I'd like to decrease time and memory needed for calculation.
>
> Best regards, Ilya Kudrov!

From ilyakudrov at yandex.ru Tue Jan 7 09:12:11 2020 From: ilyakudrov at yandex.ru (=?utf-8?B?0JrRg9C00YDQvtCyINCY0LvRjNGP?=) Date: Tue, 07 Jan 2020 18:12:11 +0300 Subject: [petsc-users] [petsc-maint] (no subject) In-Reply-To: References: <6997781578401371@sas2-acef09fc61af.qloud-c.yandex.net> <7495091578406650@iva1-ad256f95df1b.qloud-c.yandex.net> Message-ID: <6971061578409931@sas2-4fe1bb3c0a49.qloud-c.yandex.net> An HTML attachment was scrubbed... URL:

From ilyakudrov at yandex.ru Tue Jan 7 09:25:24 2020 From: ilyakudrov at yandex.ru (=?utf-8?B?0JrRg9C00YDQvtCyINCY0LvRjNGP?=) Date: Tue, 07 Jan 2020 18:25:24 +0300 Subject: [petsc-users] [petsc-maint] (no subject) In-Reply-To: References: <6997781578401371@sas2-acef09fc61af.qloud-c.yandex.net> <7495091578406650@iva1-ad256f95df1b.qloud-c.yandex.net> Message-ID: <6390391578410724@myt6-4218ece6190d.qloud-c.yandex.net> An HTML attachment was scrubbed... URL:

From hongzhang at anl.gov Tue Jan 7 10:06:38 2020 From: hongzhang at anl.gov (Zhang, Hong) Date: Tue, 7 Jan 2020 16:06:38 +0000 Subject: [petsc-users] TS shallow reset In-Reply-To: <782C2042-8B02-4F92-8B17-DFAC90AFACEC@anl.gov> References: <782C2042-8B02-4F92-8B17-DFAC90AFACEC@anl.gov> Message-ID: <410F69B7-7358-4DFA-AE90-3BAC4E6C5A5B@anl.gov>

Normally only the initial time and initial stepsize need to be reset (via TSSetTime and TSSetTimeStep) if you need to solve the ODE repeatedly on the same time interval. If you don't reset these, successive calls to TSSolve will just continue the integration from the previous end point. So if you are solving autonomous ODEs with fixed time steps, resetting the final time may also work.

TSSetMaxTime(ts,1.0)
TSSolve(ts)
TSSetMaxTime(ts,2.0)
TSSolve(ts)

Hong (Mr.)

> On Jan 7, 2020, at 9:14 AM, Smith, Barry F. wrote:
> Do you reset the initial timestep? Otherwise the second solve thinks it is at the end. Also you may need to reset the iteration number
> [...]
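Combining Barry's and Hong's replies, the parameter-study loop might look like the following sketch (u0, initial_dt, nparam, and the parameter update are hypothetical; declarations and TS/Vec creation are elided; the TSSet* calls are the actual API):

for (k = 0; k < nparam; k++) {
  ierr = VecCopy(u0,u);CHKERRQ(ierr);                /* restore the saved initial conditions */
  ierr = TSSetTime(ts,0.0);CHKERRQ(ierr);            /* rewind the integration time */
  ierr = TSSetStepNumber(ts,0);CHKERRQ(ierr);        /* reset the step counter */
  ierr = TSSetTimeStep(ts,initial_dt);CHKERRQ(ierr); /* restore the initial step size */
  /* ... update the study parameter here ... */
  ierr = TSSolve(ts,u);CHKERRQ(ierr);
}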
From ellen.price at cfa.harvard.edu Wed Jan 8 13:49:34 2020 From: ellen.price at cfa.harvard.edu (Ellen Price) Date: Wed, 8 Jan 2020 14:49:34 -0500 Subject: [petsc-users] Problems applying multigrid Message-ID:

Hi PETSc users!

I was playing around with getting faster convergence for my code and decided to give multigrid a try. For some context, I am using the beuler timestepper, so the SNES/KSP/PC hierarchy is contained within that. The problem is defined on a regular grid, a DMDA in dimension 2 with 5 dofs. It has DM_BOUNDARY_GHOSTED boundaries.

The way I understand it, with these options:
-pc_type mg -pc_mg_galerkin both -pc_mg_levels 2
I should be able to at least *try* multigrid on this problem at a basic level without changing anything else (please correct me if I'm wrong!), even if I don't immediately get good convergence.

I know multigrid can be finicky (I've seen a lot of mailing list posts about it), but I've run into an error that might be internal, or at least not obvious. Here is a sample of the output:

[12]PETSC ERROR: Arguments are incompatible
[12]PETSC ERROR: Processor's coarse DMDA must lie over fine DMDA
i_c 314 i_f 628 fine ghost range [499,628]

Does anyone know what would cause this? A quick Google search just brought up the PETSc source code where the error is issued.

Thanks,
Ellen Price
-------------- next part -------------- An HTML attachment was scrubbed... URL:

From bsmith at mcs.anl.gov Wed Jan 8 14:31:24 2020 From: bsmith at mcs.anl.gov (Smith, Barry F.) Date: Wed, 8 Jan 2020 20:31:24 +0000 Subject: [petsc-users] Problems applying multigrid In-Reply-To: References: Message-ID:

Yeah, this is an annoying feature of DMDA and PCMG in PETSc. Some coarse grid ranges and particular parallel layouts won't work with geometric multigrid. You are using 314 on the coarse and 628 on the fine grid. Try changing them by 1 and start with one process.

Barry

> On Jan 8, 2020, at 1:49 PM, Ellen Price wrote:
> Hi PETSc users!
> [...]
> [12]PETSC ERROR: Processor's coarse DMDA must lie over fine DMDA
> i_c 314 i_f 628 fine ghost range [499,628]
> [...]
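A note on why changing the sizes by one can matter (this is an inference from DMDA's refinement rule for non-periodic grids, M_fine = 2*M_coarse - 1, and is not stated in the thread): coarsening 628 fine points gives 314, but refining 314 back gives 2*314 - 1 = 627, not 628, so the two levels cannot line up; with 629 fine points the coarse grid is 315 and 2*315 - 1 = 629 matches exactly.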
From aph at email.arizona.edu Wed Jan 8 16:01:30 2020 From: aph at email.arizona.edu (Anthony Paul Haas) Date: Wed, 8 Jan 2020 15:01:30 -0700 Subject: [petsc-users] PetscOptionsGetBool error Message-ID:

Hello,

I am using Petsc 3.7.6.0 with Fortran code and I am getting a segmentation violation for the following line:

call PetscOptionsGetBool(PETSC_NULL_CHARACTER,"-use_mumps_lu",flg_mumps_lu,flg,self%ierr_ps)

in which:
flg_mumps_lu and flg are defined as PetscBool and
flg_mumps_lu = PETSC_TRUE

Is the option -use_mumps_lu deprecated?

Thanks,

Anthony
-------------- next part -------------- An HTML attachment was scrubbed... URL:

From repepo at gmail.com Wed Jan 8 16:34:39 2020 From: repepo at gmail.com (Santiago Andres Triana) Date: Wed, 8 Jan 2020 23:34:39 +0100 Subject: [petsc-users] killed 9 signal after upgrade from petsc 3.9.4 to 3.12.2 In-Reply-To: References: Message-ID:

Dear Matt, petsc-users:

Finally back after the holidays to try to solve this issue, thanks for your patience!
I compiled the latest petsc (3.12.3) with debugging enabled, the same problem appears: relatively large matrices result in out of memory errors. This is not the case for petsc-3.9.4, all fine there.
This is a non-hermitian, generalized eigenvalue problem, I generate the A and B matrices myself and then I use example 7 (from the slepc tutorial at $SLEPC_DIR/src/eps/examples/tutorials/ex7.c ) to solve the problem:

mpiexec -n 24 valgrind --tool=memcheck -q --num-callers=20 --log-file=valgrind.log.%p ./ex7 -malloc off -f1 A.petsc -f2 B.petsc -eps_nev 1 -eps_target -2.5e-4+1.56524i -eps_target_magnitude -eps_tol 1e-14 $opts

where the $opts variable is:
export opts='-st_type sinvert -st_ksp_type preonly -st_pc_type lu -eps_error_relative ::ascii_info_detail -st_pc_factor_mat_solver_type superlu_dist -mat_superlu_dist_iterrefine 1 -mat_superlu_dist_colperm PARMETIS -mat_superlu_dist_parsymbfact 1 -eps_converged_reason -eps_conv_rel -eps_monitor_conv -eps_true_residual 1'

the output from valgrind (sample from one processor) and from the program are attached.
If it's of any use the matrices are here (might need at least 180 Gb of ram to solve the problem successfully under petsc-3.9.4):

https://www.dropbox.com/s/as9bec9iurjra6r/A.petsc?dl=0
https://www.dropbox.com/s/u2bbmng23rp8l91/B.petsc?dl=0

With petsc-3.9.4 and slepc-3.9.2 I can use matrices up to 10Gb (with 240 Gb ram), but only up to 3Gb with the latest petsc/slepc.
Any suggestions, comments or any other help are very much appreciated!

Cheers,
Santiago

On Mon, Dec 23, 2019 at 11:19 PM Matthew Knepley wrote:
On Mon, Dec 23, 2019 at 3:14 PM Santiago Andres Triana wrote:
Dear all,

After upgrading to petsc 3.12.2 my solver program crashes consistently. Before the upgrade I was using petsc 3.9.4 with no problems.

My application deals with a complex-valued, generalized eigenvalue problem. The matrices involved are relatively large, typically 2 to 10 Gb in size, which is no problem for petsc 3.9.4.

Are you sure that your indices do not exceed 4B? If so, you need to configure using

--with-64-bit-indices

Also, it would be nice if you ran with the debugger so we can get a stack trace for the SEGV.
 Thanks,

   Matt

>> However, after the upgrade I can only obtain solutions when the matrices are small, the solver crashes when the matrices' size exceed about 1.5 Gb:
>>
>> [0]PETSC ERROR: ------------------------------------------------------------------------
>> [0]PETSC ERROR: Caught signal number 15 Terminate: Some process (or the batch system) has told this process to end
>> [0]PETSC ERROR: Try option -start_in_debugger or -on_error_attach_debugger
>> [0]PETSC ERROR: or see https://www.mcs.anl.gov/petsc/documentation/faq.html#valgrind
>> [0]PETSC ERROR: or try http://valgrind.org on GNU/linux and Apple Mac OS X to find memory corruption errors
>> [0]PETSC ERROR: configure using --with-debugging=yes, recompile, link, and run
>> [0]PETSC ERROR: to get more information on the crash.
>>
>> and so on for each cpu.
>>
>> I tried using valgrind and this is the typical output:
>>
>> ==2874== Conditional jump or move depends on uninitialised value(s)
>> ==2874== at 0x4018178: index (in /lib64/ld-2.22.so)
>> ==2874== by 0x400752D: expand_dynamic_string_token (in /lib64/ld-2.22.so)
>> ==2874== by 0x4008009: _dl_map_object (in /lib64/ld-2.22.so)
>> ==2874== by 0x40013E4: map_doit (in /lib64/ld-2.22.so)
>> ==2874== by 0x400EA53: _dl_catch_error (in /lib64/ld-2.22.so)
>> ==2874== by 0x4000ABE: do_preload (in /lib64/ld-2.22.so)
>> ==2874== by 0x4000EC0: handle_ld_preload (in /lib64/ld-2.22.so)
>> ==2874== by 0x40034F0: dl_main (in /lib64/ld-2.22.so)
>> ==2874== by 0x4016274: _dl_sysdep_start (in /lib64/ld-2.22.so)
>> ==2874== by 0x4004A99: _dl_start (in /lib64/ld-2.22.so)
>> ==2874== by 0x40011F7: ??? (in /lib64/ld-2.22.so)
>> ==2874== by 0x12: ???
>> ==2874==
>>
>> These are my configuration options. Identical for both petsc 3.9.4 and 3.12.2:
>>
>> ./configure --with-scalar-type=complex --download-mumps --download-parmetis --download-metis --download-scalapack=1 --download-fblaslapack=1 --with-debugging=0 --download-superlu_dist=1 --download-ptscotch=1 CXXOPTFLAGS='-O3 -march=native' FOPTFLAGS='-O3 -march=native' COPTFLAGS='-O3 -march=native'
>>
>> Thanks in advance for any comments or ideas!
>>
>> Cheers,
>> Santiago

-- What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. -- Norbert Wiener https://www.cse.buffalo.edu/~knepley/
-------------- next part -------------- An HTML attachment was scrubbed... URL:
-------------- next part -------------- A non-text attachment was scrubbed... Name: test1.e6034496 Type: application/octet-stream Size: 42316 bytes Desc: not available URL:
-------------- next part -------------- A non-text attachment was scrubbed... Name: valgrind.log.23361 Type: application/octet-stream Size: 53884 bytes Desc: not available URL:

From jczhang at mcs.anl.gov Wed Jan 8 17:16:10 2020 From: jczhang at mcs.anl.gov (Zhang, Junchao) Date: Wed, 8 Jan 2020 23:16:10 +0000 Subject: [petsc-users] PetscOptionsGetBool error In-Reply-To: References: Message-ID:

A deprecated option won't cause a segfault. From https://www.mcs.anl.gov/petsc/petsc-current/src/dm/label/examples/tutorials/ex1f90.F90.html, it seems you missed the first PETSC_NULL_OPTIONS.
--Junchao Zhang

On Wed, Jan 8, 2020 at 4:02 PM Anthony Paul Haas wrote:
> Hello,
> I am using Petsc 3.7.6.0 with Fortran code and I am getting a segmentation violation for the following line:
>
> call PetscOptionsGetBool(PETSC_NULL_CHARACTER,"-use_mumps_lu",flg_mumps_lu,flg,self%ierr_ps)
>
> Is the option -use_mumps_lu deprecated?
> [...]
-------------- next part -------------- An HTML attachment was scrubbed... URL:
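In other words, the corrected call would presumably read as follows (PETSC_NULL_OPTIONS is the constant used in the petsc-current example Junchao links; on 3.7.6 the placeholder for the options-database argument may be named differently):

call PetscOptionsGetBool(PETSC_NULL_OPTIONS, PETSC_NULL_CHARACTER, &
                         "-use_mumps_lu", flg_mumps_lu, flg, self%ierr_ps)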
From bsmith at mcs.anl.gov Wed Jan 8 21:12:51 2020 From: bsmith at mcs.anl.gov (Smith, Barry F.) Date: Thu, 9 Jan 2020 03:12:51 +0000 Subject: [petsc-users] PetscOptionsGetBool error In-Reply-To: References: Message-ID:

Try the debugger.

> On Jan 8, 2020, at 4:01 PM, Anthony Paul Haas wrote:
> Hello,
> I am using Petsc 3.7.6.0 with Fortran code and I am getting a segmentation violation for the following line:
> [...]

From zonexo at gmail.com Wed Jan 8 21:22:24 2020 From: zonexo at gmail.com (TAY wee-beng) Date: Thu, 9 Jan 2020 11:22:24 +0800 Subject: [petsc-users] Problems with PCMGSetLevels and MatNullSpaceCreate in Fortran Message-ID: <5e3049b6-4877-f115-aa12-471611aa2685@gmail.com>

Hi,

After upgrading to the newer version of PETSc, 3.8.3, I get these errors during compile in VS2008 with Intel Fortran:

call PCMGSetLevels(pc,mg_lvl,PETSC_NULL_OBJECT,ierr)

This name does not have a type, and must have an explicit type. [PETSC_NULL_OBJECT]

call MatNullSpaceCreate(MPI_COMM_WORLD,PETSC_TRUE,0,PETSC_NULL_OBJECT,nullspace,ierr)

There is no matching specific subroutine for this generic subroutine call. [MATNULLSPACECREATE]

So how do I correct these errors?

-- Thank you very much.

Yours sincerely,

================================================
TAY Wee-Beng (Zheng Weiming)
Personal research webpage: http://tayweebeng.wixsite.com/website
Youtube research showcase: https://goo.gl/PtvdwQ
linkedin: https://www.linkedin.com/in/tay-weebeng
================================================
-------------- next part -------------- An HTML attachment was scrubbed... URL:

From bsmith at mcs.anl.gov Wed Jan 8 21:31:24 2020 From: bsmith at mcs.anl.gov (Smith, Barry F.) Date: Thu, 9 Jan 2020 03:31:24 +0000 Subject: [petsc-users] killed 9 signal after upgrade from petsc 3.9.4 to 3.12.2 In-Reply-To: References: Message-ID:

This is extremely worrisome:

==23361== Use of uninitialised value of size 8
==23361== at 0x847E939: gk_randint64 (random.c:99)
==23361== by 0x847EF88: gk_randint32 (random.c:128)
==23361== by 0x81EBF0B: libparmetis__Match_Global (in /space/hpc-home/trianas/petsc-3.12.3/arch-linux2-c-debug/lib/libparmetis.so)

do you get that with PETSc-3.9.4 or only with 3.12.3?

This may result in Parmetis using non-random numbers and then giving back an inappropriate ordering that requires more memory for SuperLU_DIST.

Suggest looking at the code, or running in the debugger to see what is going on there. We use parmetis all the time and don't see this.

Barry

> On Jan 8, 2020, at 4:34 PM, Santiago Andres Triana wrote:
> Dear Matt, petsc-users:
> Finally back after the holidays to try to solve this issue, thanks for your patience!
> [...]
From bsmith at mcs.anl.gov Wed Jan 8 21:33:56 2020 From: bsmith at mcs.anl.gov (Smith, Barry F.) Date: Thu, 9 Jan 2020 03:33:56 +0000 Subject: [petsc-users] Problems with PCMGSetLevels and MatNullSpaceCreate in Fortran In-Reply-To: <5e3049b6-4877-f115-aa12-471611aa2685@gmail.com> References: <5e3049b6-4877-f115-aa12-471611aa2685@gmail.com> Message-ID: <62397B92-E9FB-4F72-B48E-70B2412C9CC8@anl.gov>

https://www.mcs.anl.gov/petsc/documentation/changes/38.html

> On Jan 8, 2020, at 9:22 PM, TAY wee-beng wrote:
> Hi,
> After upgrading to the newer version of PETSc, 3.8.3, I get these errors during compile in VS2008 with Intel Fortran:
> [...]

From stefano.zampini at gmail.com Thu Jan 9 09:25:35 2020 From: stefano.zampini at gmail.com (Stefano Zampini) Date: Thu, 9 Jan 2020 16:25:35 +0100 Subject: [petsc-users] killed 9 signal after upgrade from petsc 3.9.4 to 3.12.2 In-Reply-To: References: Message-ID:

Can you reproduce the issue with smaller matrices? Or with a debug build (i.e. using --with-debugging=1 and compilation flags -O2 -g)?
The only changes in parmetis between the two PETSc releases are these below, but I don't see how they could cause issues

kl-18448:pkg-parmetis szampini$ git log -2
commit ab4fedc6db1f2e3b506be136e3710fcf89ce16ea (HEAD -> master, tag: v4.0.3-p5, origin/master, origin/dalcinl/random, origin/HEAD)
Author: Lisandro Dalcin
Date: Thu May 9 18:44:10 2019 +0300

    GKLib: Make FPRFX##randInRange() portable for 32bit/64bit indices

commit 2b4afc79a79ef063f369c43da2617fdb64746dd7
Author: Lisandro Dalcin
Date: Sat May 4 17:22:19 2019 +0300

    GKlib: Use gk_randint32() to define the RandomInRange() macro

> On Jan 9, 2020, at 4:31 AM, Smith, Barry F. via petsc-users wrote:
> This is extremely worrisome:
> [...]
-------------- next part -------------- An HTML attachment was scrubbed... URL:

From sam.guo at cd-adapco.com Thu Jan 9 12:47:19 2020 From: sam.guo at cd-adapco.com (Sam Guo) Date: Thu, 9 Jan 2020 10:47:19 -0800 Subject: [petsc-users] set petsc matrix using input array Message-ID:

Dear PETSc dev team,
Suppose I already have the matrix in triplet format int[] I, int[] J, double[] A. Is it possible to create a petsc matrix using A without copying? I'd like to avoid the duplicate memory if possible.

Thanks,
Sam
-------------- next part -------------- An HTML attachment was scrubbed... URL:

From bsmith at mcs.anl.gov Thu Jan 9 12:55:35 2020 From: bsmith at mcs.anl.gov (Smith, Barry F.)
Date: Thu, 9 Jan 2020 18:55:35 +0000 Subject: [petsc-users] set petsc matrix using input array In-Reply-To: References: Message-ID:

Since PETSc does not use that format, there of course has to be a time when you have duplicate memory.

Barry

> On Jan 9, 2020, at 12:47 PM, Sam Guo wrote:
> Dear PETSc dev team,
> [...]

From jed at jedbrown.org Thu Jan 9 14:02:08 2020 From: jed at jedbrown.org (Jed Brown) Date: Thu, 09 Jan 2020 13:02:08 -0700 Subject: [petsc-users] set petsc matrix using input array In-Reply-To: References: Message-ID: <87h814mm9r.fsf@jedbrown.org>

Note that PETSc's formats are more space-efficient and faster than the COO (triplet) format. If you can produce triplet chunks instead of the full matrix, you can add them incrementally to reduce the peak memory usage.

Note that many preconditioners use storage similar to (or greater than) a single assembled matrix, so copying (which is done before preconditioner setup) may not increase the peak memory usage (which is all that matters for capability).

"Smith, Barry F. via petsc-users" writes:
> Since PETSc does not use that format, there of course has to be a time when you have duplicate memory.
> [...]
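A sketch of the chunked assembly Jed describes (produce_triplets and nchunks are hypothetical, and the matrix is assumed created and preallocated; MatSetValue and the MatAssembly calls are the actual API):

/* PetscInt c,k,n,nchunks; PetscInt *I,*J; PetscScalar *A; Mat mat; declarations and creation elided */
for (c = 0; c < nchunks; c++) {
  produce_triplets(c,&n,I,J,A); /* hypothetical generator filling one chunk of (I,J,A) triplets */
  for (k = 0; k < n; k++) {
    ierr = MatSetValue(mat,I[k],J[k],A[k],ADD_VALUES);CHKERRQ(ierr); /* duplicate entries are summed */
  }
}
ierr = MatAssemblyBegin(mat,MAT_FINAL_ASSEMBLY);CHKERRQ(ierr);
ierr = MatAssemblyEnd(mat,MAT_FINAL_ASSEMBLY);CHKERRQ(ierr);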
From repepo at gmail.com Thu Jan 9 14:16:18 2020 From: repepo at gmail.com (Santiago Andres Triana) Date: Thu, 9 Jan 2020 21:16:18 +0100 Subject: [petsc-users] killed 9 signal after upgrade from petsc 3.9.4 to 3.12.2 In-Reply-To: References: Message-ID:

Dear all,

I think parmetis is not involved since I still run out of memory if I use the following options:
export opts='-st_type sinvert -st_ksp_type preonly -st_pc_type lu -st_pc_factor_mat_solver_type superlu_dist -eps_true_residual 1'
and issuing:
mpiexec -n 24 ./ex7 -f1 A.petsc -f2 B.petsc -eps_nev 1 -eps_target -4.008e-3+1.57142i $opts -eps_target_magnitude -eps_tol 1e-14 -memory_view

Bottom line is that the memory usage of petsc-3.9.4 / slepc-3.9.2 is much lower than that of the current version. I can only solve relatively small problems using the 3.12 series :( I have an example with smaller matrices that will likely fail in a 32 Gb ram machine with petsc-3.12 but runs just fine with petsc-3.9.

The -memory_view output with petsc-3.9.4 (log 'justfine.log' attached):

Summary of Memory Usage in PETSc
Maximum (over computational time) process memory: total 1.6665e+10 max 7.5674e+08 min 6.4215e+08
Current process memory: total 1.5841e+10 max 7.2881e+08 min 6.0905e+08
Maximum (over computational time) space PetscMalloc()ed: total 3.1290e+09 max 1.5868e+08 min 1.0179e+08
Current space PetscMalloc()ed: total 1.8808e+06 max 7.8368e+04 min 7.8368e+04

with petsc-3.12.2 (log 'toobig.log' attached):

Summary of Memory Usage in PETSc
Maximum (over computational time) process memory: total 3.1564e+10 max 1.3662e+09 min 1.2604e+09
Current process memory: total 3.0355e+10 max 1.3082e+09 min 1.2254e+09
Maximum (over computational time) space PetscMalloc()ed: total 2.7618e+09 max 1.4339e+08 min 8.6493e+07
Current space PetscMalloc()ed: total 3.6127e+06 max 1.5053e+05 min 1.5053e+05

Strangely, monitoring with 'top' I can see *appreciably higher* peak memory use, usually twice what -memory_view ends up reporting, both for petsc-3.9.4 and current. The program usually fails at this peak if not enough ram is available.

The matrices for the example quoted above can be downloaded here (I use slepc's tutorial ex7.c to solve the problem):
https://www.dropbox.com/s/as9bec9iurjra6r/A.petsc?dl=0 (about 600 Mb)
https://www.dropbox.com/s/u2bbmng23rp8l91/B.petsc?dl=0 (about 210 Mb)

I haven't been able to use a debugger successfully since I am using a compute node without the possibility of an xterm ... note that I have no experience using a debugger so any help on that will also be appreciated! Hope I can switch to the current petsc/slepc version for my production runs soon...

Thanks again!
Santiago

On Thu, Jan 9, 2020 at 4:25 PM Stefano Zampini wrote:
> Can you reproduce the issue with smaller matrices? Or with a debug build (i.e. using --with-debugging=1 and compilation flags -O2 -g)?
> [...]
> I tried using valgrind and this is the typical output:
>
> ==2874== Conditional jump or move depends on uninitialised value(s)
> ==2874==    at 0x4018178: index (in /lib64/ld-2.22.so)
> ==2874==    by 0x400752D: expand_dynamic_string_token (in /lib64/ld-2.22.so)
> ==2874==    by 0x4008009: _dl_map_object (in /lib64/ld-2.22.so)
> ==2874==    by 0x40013E4: map_doit (in /lib64/ld-2.22.so)
> ==2874==    by 0x400EA53: _dl_catch_error (in /lib64/ld-2.22.so)
> ==2874==    by 0x4000ABE: do_preload (in /lib64/ld-2.22.so)
> ==2874==    by 0x4000EC0: handle_ld_preload (in /lib64/ld-2.22.so)
> ==2874==    by 0x40034F0: dl_main (in /lib64/ld-2.22.so)
> ==2874==    by 0x4016274: _dl_sysdep_start (in /lib64/ld-2.22.so)
> ==2874==    by 0x4004A99: _dl_start (in /lib64/ld-2.22.so)
> ==2874==    by 0x40011F7: ??? (in /lib64/ld-2.22.so)
> ==2874==    by 0x12: ???
> ==2874==
>
> These are my configuration options. Identical for both petsc 3.9.4 and 3.12.2:
>
> ./configure --with-scalar-type=complex --download-mumps
> --download-parmetis --download-metis --download-scalapack=1
> --download-fblaslapack=1 --with-debugging=0 --download-superlu_dist=1
> --download-ptscotch=1 CXXOPTFLAGS='-O3 -march=native' FOPTFLAGS='-O3
> -march=native' COPTFLAGS='-O3 -march=native'
>
> Thanks in advance for any comments or ideas!
>
> Cheers,
> Santiago
>
> --
> What most experimenters take for granted before they begin their
> experiments is infinitely more interesting than any results to which their
> experiments lead.
> -- Norbert Wiener
>
> https://www.cse.buffalo.edu/~knepley/

From bsmith at mcs.anl.gov Thu Jan 9 14:46:28 2020
From: bsmith at mcs.anl.gov (Smith, Barry F.)
Date: Thu, 9 Jan 2020 20:46:28 +0000
Subject: [petsc-users] killed 9 signal after upgrade from petsc 3.9.4 to 3.12.2 In-Reply-To: References: Message-ID:

with petsc-3.9.4: (log 'justfine.log' attached)

Summary of Memory Usage in PETSc
Maximum (over computational time) process memory: total 1.6665e+10  max 7.5674e+08  min 6.4215e+08
Current process memory:                           total 1.5841e+10  max 7.2881e+08  min 6.0905e+08

Below is the space allocated by PETSc
Maximum (over computational time) space PetscMalloc()ed: total 3.1290e+09  max 1.5868e+08  min 1.0179e+08
Current space PetscMalloc()ed:                           total 1.8808e+06  max 7.8368e+04  min 7.8368e+04

with petsc-3.12.2: (log 'toobig.log' attached)

Summary of Memory Usage in PETSc
Maximum (over computational time) process memory: total 3.1564e+10  max 1.3662e+09  min 1.2604e+09
Current process memory:                           total 3.0355e+10  max 1.3082e+09  min 1.2254e+09

Below is the space allocated by PETSc. Note that the 3.12 total, 2.7618e+09, is actually a bit lower than the 3.1290e+09 of 3.9.4.
Maximum (over computational time) space PetscMalloc()ed: total 2.7618e+09  max 1.4339e+08  min 8.6493e+07
Current space PetscMalloc()ed:                           total 3.6127e+06  max 1.5053e+05  min 1.5053e+05

So it is not PETSc that is allocating more memory than before. Use the Massif option of valgrind to see where the large chunk of memory is actually used in the simulation.
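For reference, a minimal sketch of such a Massif run, reusing the executable, matrices, options variable and process count already quoted in this thread (ms_print is Massif's standard post-processor; treat this as illustrative, not a prescription):

  mpiexec -n 24 valgrind --tool=massif --massif-out-file=massif.out.%p ./ex7 -f1 A.petsc -f2 B.petsc -eps_nev 1 $opts
  ms_print massif.out.<pid> | less

ms_print shows a heap-usage-over-time graph and, for the peak snapshot, the allocation tree, so memory allocated inside SuperLU_DIST or ParMETIS shows up under its own function names rather than being lumped into the process total.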
  Barry

> On Jan 9, 2020, at 2:16 PM, Santiago Andres Triana wrote:
>
> [...]
From dave.mayhem23 at gmail.com Thu Jan 9 15:03:58 2020
From: dave.mayhem23 at gmail.com (Dave May)
Date: Thu, 9 Jan 2020 21:03:58 +0000
Subject: [petsc-users] killed 9 signal after upgrade from petsc 3.9.4 to 3.12.2 In-Reply-To: References: Message-ID:

This kind of issue is difficult to untangle because you have potentially three pieces of software which might have changed between v3.9 and v3.12, namely PETSc, SLEPc and SuperLU_dist. You need to isolate which software component is responsible for the 2x increase in memory.

When I look at the memory usage in the log files, things look very very similar for the raw PETSc objects.

[v3.9]
--- Event Stage 0: Main Stage

              Viewer     4              3         2520     0.
              Matrix    15             15    125236536     0.
              Vector    22             22     19713856     0.
           Index Set    10             10       995280     0.
         Vec Scatter     4              4         4928     0.
          EPS Solver     1              1         2276     0.
  Spectral Transform     1              1          848     0.
       Basis Vectors     1              1         2168     0.
         PetscRandom     1              1          662     0.
              Region     1              1          672     0.
       Direct Solver     1              1        17440     0.
       Krylov Solver     1              1         1176     0.
      Preconditioner     1              1         1000     0.

versus

[v3.12]
--- Event Stage 0: Main Stage

              Viewer     4              3         2520     0.
              Matrix    15             15    125237144     0.
              Vector    22             22     19714528     0.
           Index Set    10             10       995096     0.
         Vec Scatter     4              4         3168     0.
   Star Forest Graph     4              4         3936     0.
          EPS Solver     1              1         2292     0.
  Spectral Transform     1              1          848     0.
       Basis Vectors     1              1         2184     0.
         PetscRandom     1              1          662     0.
              Region     1              1          672     0.
       Direct Solver     1              1        17456     0.
       Krylov Solver     1              1         1400     0.
      Preconditioner     1              1         1000     0.

Certainly there is no apparent factor-2x increase in memory usage in the underlying petsc objects themselves. Furthermore, the counts of creations of petsc objects in toobig.log and justfine.log match, indicating that none of the implementations used in either PETSc or SLEPc have fundamentally changed wrt the usage of the native petsc objects.

It is also curious that VecNorm is called 3 times in "justfine.log" and 19 times in "toobig.log" - although I don't see how that could be related to your problem...

The above at least gives me the impression that the issue of memory increase is likely not coming from PETSc. I just read Barry's useful email, which is even more compelling and also indicates SLEPc is not the likely culprit either, as it uses PetscMalloc() internally.

Some options to identify the problem:

1/ Eliminate SLEPc as a possible culprit by not calling EPSSolve() and rather just calling KSPSolve() with some RHS vector.
* If you still see a 2x increase, switch the preconditioner to using -pc_type bjacobi -ksp_max_it 10 rather than superlu_dist. If the memory usage is good, you can be pretty certain the issue arises internally to superlu_dist.

2/ Leave your code as is and perform your profiling using mumps rather than superlu_dist. This is a less reliable test than 1/ since the mumps implementation used with v3.9 and v3.12 may differ...

Thanks
Dave
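Since the ex7 driver used in this thread always calls EPSSolve(), an option-only approximation of these two tests might look as follows. Every option here already appears elsewhere in this thread except -st_ksp_max_it, the ST-prefixed form of -ksp_max_it; the bjacobi run is only a memory probe and is not expected to converge to the interior eigenvalue:

  # rough form of test 1: take the direct solver out of the picture
  mpiexec -n 24 ./ex7 -f1 A.petsc -f2 B.petsc -eps_nev 1 -st_ksp_type gmres -st_pc_type bjacobi -st_ksp_max_it 10 -memory_view

  # test 2: identical solve, but with MUMPS in place of SuperLU_DIST
  mpiexec -n 24 ./ex7 -f1 A.petsc -f2 B.petsc -eps_nev 1 -st_type sinvert -st_ksp_type preonly -st_pc_type lu -st_pc_factor_mat_solver_type mumps -memory_view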
On Thu, 9 Jan 2020 at 20:17, Santiago Andres Triana wrote:
> [...]

From gautam.bisht at pnnl.gov Thu Jan 9 16:29:37 2020
From: gautam.bisht at pnnl.gov (Bisht, Gautam)
Date: Thu, 9 Jan 2020 22:29:37 +0000
Subject: [petsc-users] DMPlex: Mapping cells before and after partitioning Message-ID: <72290340-E04B-4289-9CAA-01637B5082C4@pnnl.gov>

Hi All,

Here is the situation that I'm running into and am hoping you could provide some guidance. I created a mesh using DMPlexCreateBoxMesh() in which cells are ordered such that cells with increasing x-coordinate come first, followed by cells with increasing y-coordinate, and so forth. Next, I call DMPlexDistribute(), which rearranges the cells after partitioning. How can I map cells after partitioning to cells before partitioning?

Thanks,
-Gautam

From gautam.bisht at pnnl.gov Thu Jan 9 16:51:10 2020
From: gautam.bisht at pnnl.gov (Bisht, Gautam)
Date: Thu, 9 Jan 2020 22:51:10 +0000
Subject: [petsc-users] DMPlex: Mapping cells before and after partitioning In-Reply-To: <875zhkmf0z.fsf@jedbrown.org> References: <875zhkmf0z.fsf@jedbrown.org> Message-ID:

> On Jan 9, 2020, at 2:38 PM, Jed Brown wrote:
>
> "'Bisht, Gautam' via tdycores-dev" writes:
>
>> Hi Matt,
>>
>> Here is the situation that I'm running into and hoping you could provide some guidance. I create a mesh using DMPlexCreateBoxMesh() in which cells are ordered such that cells with increasing x-coordinate come first, followed by cells with increasing y-coordinate, and so forth. Next, I call DMPlexDistribute(), which rearranges the cells after partitioning. How can I map the cell order after partitioning to the cell order before partitioning? (In PFLOTRAN, we call this type of mapping Ghosted-to-Application or Ghosted-to-Natural.)
>
> Do you need to rely on the element number, or would coordinates (of a centroid?) be sufficient for your purposes?

I do need to rely on the element number.
In my case, I have a mapping file that remaps data from one grid onto another grid. Though I'm currently creating a hexahedron mesh, in the future I will be reading in an unstructured grid from a file, for which I cannot rely on coordinates.

> I think DMLabel can be used to track that numbering, but I also think it's brittle and probably best avoided.

I will look into DMLabel.

> Note that you should be able to get columns to appear on the same process after partitioning by setting large weights for the vertical coupling. Alternatively, we could partition a 2D mesh and extrude it (see DMPlexExtrude).

Thanks,
-Gautam

PS: I also asked the same question on the petsc-users mailing list, so I'm including it in this email thread.

From jed at jedbrown.org Thu Jan 9 16:58:09 2020
From: jed at jedbrown.org (Jed Brown)
Date: Thu, 09 Jan 2020 15:58:09 -0700
Subject: [petsc-users] DMPlex: Mapping cells before and after partitioning In-Reply-To: References: <875zhkmf0z.fsf@jedbrown.org> Message-ID: <8736come4e.fsf@jedbrown.org>

"'Bisht, Gautam' via tdycores-dev" writes:

>> Do you need to rely on the element number, or would coordinates (of a centroid?) be sufficient for your purposes?
>
> I do need to rely on the element number. In my case, I have a mapping file that remaps data from one grid onto another grid. Though I'm currently creating a hexahedron mesh, in the future I will be reading in an unstructured grid from a file, for which I cannot rely on coordinates.

How does the mapping file work and how is it generated? We can locate points and create interpolation with unstructured grids.

From gautam.bisht at pnnl.gov Thu Jan 9 17:34:55 2020
From: gautam.bisht at pnnl.gov (Bisht, Gautam)
Date: Thu, 9 Jan 2020 23:34:55 +0000
Subject: [petsc-users] DMPlex: Mapping cells before and after partitioning In-Reply-To: <8736come4e.fsf@jedbrown.org> References: <875zhkmf0z.fsf@jedbrown.org> <8736come4e.fsf@jedbrown.org> Message-ID: <9AB001AF-8857-446A-AE69-E8D6A25CB8FA@pnnl.gov>

> On Jan 9, 2020, at 2:58 PM, Jed Brown wrote:
>
> How does the mapping file work and how is it generated?

In CESM/E3SM, the mapping file is used to map fluxes or states between the grids of two components (e.g. land & atmosphere). The mapping method can be conservative, nearest neighbor, bilinear, etc. While CESM/E3SM uses ESMF_RegridWeightGen to generate the mapping file, I'm using my own MATLAB script to create the mapping file.

I'm surprised that this is not an issue for other codes that are using DMPlex.
E.g., in PFLOTRAN, when a user creates a custom unstructured grid, they can specify a material property for each grid cell. So there should be a way to create a vector scatter that will scatter material properties read in the "application" order (i.e. the order before calling DMPlexDistribute()) to the ghosted order (i.e. the order after calling DMPlexDistribute()).

> We can locate points and create interpolation with unstructured grids.

From knepley at gmail.com Thu Jan 9 18:25:55 2020
From: knepley at gmail.com (Matthew Knepley)
Date: Thu, 9 Jan 2020 14:25:55 -1000
Subject: [petsc-users] DMPlex: Mapping cells before and after partitioning In-Reply-To: <9AB001AF-8857-446A-AE69-E8D6A25CB8FA@pnnl.gov> References: <875zhkmf0z.fsf@jedbrown.org> <8736come4e.fsf@jedbrown.org> <9AB001AF-8857-446A-AE69-E8D6A25CB8FA@pnnl.gov> Message-ID:

On Thu, Jan 9, 2020 at 1:35 PM 'Bisht, Gautam' via tdycores-dev <tdycores-dev at googlegroups.com> wrote:
> [...] So there should be a way to create a vector scatter that will scatter material properties read in the "application" order (before DMPlexDistribute()) to the ghosted order (after DMPlexDistribute()).

We did build something specific for this because some people wanted it. I wish I could purge this from all simulations. It's definitely destructive, but this is the way the world currently is. You want this:

https://www.mcs.anl.gov/petsc/petsc-current/docs/manualpages/DMPLEX/DMPlexNaturalToGlobalBegin.html

Thanks,

Matt
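For reference, a minimal C sketch of the call sequence behind that manual page (error checking omitted; natVec and globVec are illustrative names, and the sketch assumes a PetscSection describing the cell data layout has already been set on dm):

  DM  dm, dmDist = NULL;
  Vec natVec, globVec;

  DMSetUseNatural(dm, PETSC_TRUE);          /* must precede DMPlexDistribute() so the
                                               natural-to-global PetscSF is recorded */
  DMPlexDistribute(dm, 0, NULL, &dmDist);   /* dmDist is the redistributed mesh */
  /* ... destroy dm and continue with dmDist; create globVec with
     DMCreateGlobalVector(); natVec holds the application-ordered data ... */
  DMPlexNaturalToGlobalBegin(dmDist, natVec, globVec);
  DMPlexNaturalToGlobalEnd(dmDist, natVec, globVec);

The reverse direction (DMPlexGlobalToNaturalBegin/End) exists as well, for writing results back out in the original file order.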
From gautam.bisht at pnnl.gov Thu Jan 9 18:57:33 2020
From: gautam.bisht at pnnl.gov (Bisht, Gautam)
Date: Fri, 10 Jan 2020 00:57:33 +0000
Subject: [petsc-users] DMPlex: Mapping cells before and after partitioning In-Reply-To: References: <875zhkmf0z.fsf@jedbrown.org> <8736come4e.fsf@jedbrown.org> <9AB001AF-8857-446A-AE69-E8D6A25CB8FA@pnnl.gov> Message-ID: <7C23ABBA-2F76-4EAB-9834-9391AD77E18B@pnnl.gov>

On Jan 9, 2020, at 4:25 PM, Matthew Knepley wrote:
> [...] You want this:
> https://www.mcs.anl.gov/petsc/petsc-current/docs/manualpages/DMPLEX/DMPlexNaturalToGlobalBegin.html

Perfect. Thanks.

-Gautam
From lukasrazinkovas at gmail.com Fri Jan 10 07:52:53 2020
From: lukasrazinkovas at gmail.com (Lukas Razinkovas)
Date: Fri, 10 Jan 2020 15:52:53 +0200
Subject: [petsc-users] petsc4py mpi matrix size Message-ID:

Hello,

I am trying to use petsc4py and slepc4py for parallel sparse matrix diagonalization. However, I am a bit confused about the matrix size increase when I switch from a single process to multiple processes. For example, a 100 x 100 matrix with 298 nonzero elements consumes 8820 bytes of memory (mat.getInfo()["memory"]); however, on two processes it consumes 20552 bytes of memory, and on four, 33528. My matrix is taken from slepc4py/demo/ex1.py, where the nonzero elements are on three diagonals.

Why does memory usage increase with the number of MPI processes? I thought that each process stores its own rows and that the total should stay the same. Or are some elements stored globally?

Lukas

From bsmith at mcs.anl.gov Fri Jan 10 09:21:35 2020
From: bsmith at mcs.anl.gov (Smith, Barry F.)
Date: Fri, 10 Jan 2020 15:21:35 +0000
Subject: [petsc-users] petsc4py mpi matrix size In-Reply-To: References: Message-ID: <14A24E61-0C01-4BEE-A457-6E88D0FCB172@anl.gov>

Yes, with, for example, MATMPIAIJ, the matrix entries are distributed among the processes; first verify that you are using an MPI matrix, not Seq, since Seq will keep an entire copy on each process.

But the parallel matrices do come with some overhead for metadata. So for small matrices like yours it can seem the memory grows unrealistically. Try a much bigger matrix, say 100 times as big, and look at the memory usage then. You should see that the metadata is now a much smaller percentage of the memory usage.

Also be careful if you use top or other such tools for determining memory usage; since malloc()ed memory is often not returned to the OS, they can indicate much higher memory usage than is really taking place. You can run PETSc with -log_view -log_view_memory to get a good idea of where PETSc is allocating memory and how much.

Barry

> On Jan 10, 2020, at 7:52 AM, Lukas Razinkovas wrote:
>
> [...]
From lukasrazinkovas at gmail.com Fri Jan 10 11:21:52 2020
From: lukasrazinkovas at gmail.com (Lukas Razinkovas)
Date: Fri, 10 Jan 2020 19:21:52 +0200
Subject: [petsc-users] petsc4py mpi matrix size In-Reply-To: <14A24E61-0C01-4BEE-A457-6E88D0FCB172@anl.gov> References: <14A24E61-0C01-4BEE-A457-6E88D0FCB172@anl.gov> Message-ID:

Thank you very much!

I already checked that it is an MPIAIJ matrix, and for the size I use the MatGetInfo routine. You are right: with a matrix of dim 100000x10000 I get sizes

- serial: 5.603 Mb
- 4 proc.: 7.626 Mb
- 36 proc.: 7.834 Mb

That looks fine to me. Thank you again for such a quick response.

I am really impressed with the python interface to petsc and slepc. I think it is missing detailed documentation, and that discouraged me from using it initially, so I was writing C code and then wrapping it with python. I am still confused about, for example, how to set MUMPS parameters from python code, but that is a different topic.

Lukas

On Fri, Jan 10, 2020 at 5:21 PM Smith, Barry F. wrote:
> [...]
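On the MUMPS side question: PETSc forwards MUMPS control parameters through its options database, so one sketch of how to reach them from petsc4py (the script name here is hypothetical, and the specific ICNTL/CNTL values are only examples) is to pass them on the command line,

  mpiexec -n 4 python your_solver.py -mat_mumps_icntl_14 40 -mat_mumps_cntl_1 0.01

or to set the same keys programmatically with PETSc.Options() before the factorization is set up, assuming petsc4py was initialized from sys.argv in the usual way.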
From repepo at gmail.com Fri Jan 10 13:57:31 2020
From: repepo at gmail.com (Santiago Andres Triana)
Date: Fri, 10 Jan 2020 20:57:31 +0100
Subject: [petsc-users] killed 9 signal after upgrade from petsc 3.9.4 to 3.12.2 In-Reply-To: References: Message-ID:

Dear all,

I ran the program with valgrind --tool=massif; the results are cryptic to me ... not sure who's the memory hog! The logs are attached.

The command I used is:

mpiexec -n 24 valgrind --tool=massif --num-callers=20 --log-file=valgrind.log.%p ./ex7 -f1 A.petsc -f2 B.petsc -eps_nev 1 $opts -eps_target -4.008e-3+1.57142i -eps_target_magnitude -eps_tol 1e-14

Is there any possibility to install a version of superlu_dist (or mumps) different from the one that the petsc version automatically downloads?

Thanks!
Santiago

On Thu, Jan 9, 2020 at 10:04 PM Dave May wrote:
> [...]
[Attachments: massif.out.petsc-3.9 (140597 bytes), massif.out.petsc-3.12 (113944 bytes)]

From bsmith at mcs.anl.gov Fri Jan 10 14:19:36 2020
From: bsmith at mcs.anl.gov (Smith, Barry F.)
Date: Fri, 10 Jan 2020 20:19:36 +0000
Subject: [petsc-users] killed 9 signal after upgrade from petsc 3.9.4 to 3.12.2 In-Reply-To: References: Message-ID:

  Can you please try v3.12.3

  There was some funky business mistakenly added related to partitioning that has been fixed in 3.12.3

  Barry

> On Jan 10, 2020, at 1:57 PM, Santiago Andres Triana wrote:
>
> [...]
> * If you still see a 2x increase, switch the preconditioner to using -pc_type bjacobi -ksp_max_it 10 rather than superlu_dist.
> If the memory usage is good, you can be pretty certain the issue arises internally to superlu_dist.
>
> 2/ Leave your code as is and perform your profiling using mumps rather than superlu_dist.
> This is a less reliable test than 1/ since the mumps implementation used with v3.9 and v3.12 may differ...
>
> Thanks
> Dave
>
> On Thu, 9 Jan 2020 at 20:17, Santiago Andres Triana wrote:
> Dear all,
>
> I think parmetis is not involved since I still run out of memory if I use the following options:
> export opts='-st_type sinvert -st_ksp_type preonly -st_pc_type lu -st_pc_factor_mat_solver_type superlu_dist -eps_true_residual 1'
> and issuing:
> mpiexec -n 24 ./ex7 -f1 A.petsc -f2 B.petsc -eps_nev 1 -eps_target -4.008e-3+1.57142i $opts -eps_target_magnitude -eps_tol 1e-14 -memory_view
>
> Bottom line is that the memory usage of petsc-3.9.4 / slepc-3.9.2 is much lower than that of the current version. I can only solve relatively small problems using the 3.12 series :(
> I have an example with smaller matrices that will likely fail in a 32 Gb ram machine with petsc-3.12 but runs just fine with petsc-3.9. The -memory_view output is
>
> with petsc-3.9.4: (log 'justfine.log' attached)
>
> Summary of Memory Usage in PETSc
> Maximum (over computational time) process memory:        total 1.6665e+10 max 7.5674e+08 min 6.4215e+08
> Current process memory:                                  total 1.5841e+10 max 7.2881e+08 min 6.0905e+08
> Maximum (over computational time) space PetscMalloc()ed: total 3.1290e+09 max 1.5868e+08 min 1.0179e+08
> Current space PetscMalloc()ed:                           total 1.8808e+06 max 7.8368e+04 min 7.8368e+04
>
> with petsc-3.12.2: (log 'toobig.log' attached)
>
> Summary of Memory Usage in PETSc
> Maximum (over computational time) process memory:        total 3.1564e+10 max 1.3662e+09 min 1.2604e+09
> Current process memory:                                  total 3.0355e+10 max 1.3082e+09 min 1.2254e+09
> Maximum (over computational time) space PetscMalloc()ed: total 2.7618e+09 max 1.4339e+08 min 8.6493e+07
> Current space PetscMalloc()ed:                           total 3.6127e+06 max 1.5053e+05 min 1.5053e+05
>
> Strangely, monitoring with 'top' I can see *appreciably higher* peak memory use, usually twice what -memory_view ends up reporting, both for petsc-3.9.4 and current. The program usually fails at this peak if not enough ram is available.
>
> The matrices for the example quoted above can be downloaded here (I use slepc's tutorial ex7.c to solve the problem):
> https://www.dropbox.com/s/as9bec9iurjra6r/A.petsc?dl=0 (about 600 Mb)
> https://www.dropbox.com/s/u2bbmng23rp8l91/B.petsc?dl=0 (about 210 Mb)
>
> I haven't been able to use a debugger successfully since I am using a compute node without the possibility of an xterm ... note that I have no experience using a debugger so any help on that will also be appreciated!
> Hope I can switch to the current petsc/slepc version for my production runs soon...
>
> Thanks again!
> Santiago
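(A side note on Dave's suggestion 1/ above: the KSPSolve-only isolation run could look roughly like the following petsc4py sketch; any equivalent C driver would do. The file name A.petsc and the superlu_dist choice come from this thread, and the all-ones right-hand side is an arbitrary stand-in for "some RHS vector". Running it under mpiexec with -memory_view against both the 3.9 and 3.12 builds would show whether the LU factorization alone accounts for the 2x growth.)

    from petsc4py import PETSc

    # Load the system matrix used in the eigenproblem (binary file from the thread).
    viewer = PETSc.Viewer().createBinary('A.petsc', 'r')
    A = PETSc.Mat().create()
    A.load(viewer)

    # Arbitrary right-hand side; we only care about the factorization's memory.
    b = A.createVecRight()
    b.set(1.0)
    x = A.createVecLeft()

    # One LU solve with superlu_dist, no SLEPc involved.
    ksp = PETSc.KSP().create()
    ksp.setOperators(A)
    ksp.setType('preonly')
    pc = ksp.getPC()
    pc.setType('lu')
    pc.setFactorSolverType('superlu_dist')
    ksp.setFromOptions()
    ksp.solve(b, x)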
>
> On Thu, Jan 9, 2020 at 4:25 PM Stefano Zampini wrote:
> Can you reproduce the issue with smaller matrices? Or with a debug build (i.e. using --with-debugging=1 and compilation flags -O2 -g)?
>
> The only changes in parmetis between the two PETSc releases are these below, but I don't see how they could cause issues
>
> kl-18448:pkg-parmetis szampini$ git log -2
> commit ab4fedc6db1f2e3b506be136e3710fcf89ce16ea (HEAD -> master, tag: v4.0.3-p5, origin/master, origin/dalcinl/random, origin/HEAD)
> Author: Lisandro Dalcin
> Date: Thu May 9 18:44:10 2019 +0300
>
>     GKLib: Make FPRFX##randInRange() portable for 32bit/64bit indices
>
> commit 2b4afc79a79ef063f369c43da2617fdb64746dd7
> Author: Lisandro Dalcin
> Date: Sat May 4 17:22:19 2019 +0300
>
>     GKlib: Use gk_randint32() to define the RandomInRange() macro
>
>> On Jan 9, 2020, at 4:31 AM, Smith, Barry F. via petsc-users wrote:
>>
>> This is extremely worrisome:
>>
>> ==23361== Use of uninitialised value of size 8
>> ==23361==    at 0x847E939: gk_randint64 (random.c:99)
>> ==23361==    by 0x847EF88: gk_randint32 (random.c:128)
>> ==23361==    by 0x81EBF0B: libparmetis__Match_Global (in /space/hpc-home/trianas/petsc-3.12.3/arch-linux2-c-debug/lib/libparmetis.so)
>>
>> do you get that with PETSc-3.9.4 or only with 3.12.3?
>>
>> This may result in Parmetis using non-random numbers and then giving back an inappropriate ordering that requires more memory for SuperLU_DIST.
>>
>> Suggest looking at the code, or running in the debugger to see what is going on there. We use parmetis all the time and don't see this.
>>
>>   Barry
>>
>>> On Jan 8, 2020, at 4:34 PM, Santiago Andres Triana wrote:
>>>
>>> Dear Matt, petsc-users:
>>>
>>> Finally back after the holidays to try to solve this issue, thanks for your patience!
>>> I compiled the latest petsc (3.12.3) with debugging enabled, the same problem appears: relatively large matrices result in out of memory errors. This is not the case for petsc-3.9.4, all fine there.
>>> This is a non-Hermitian, generalized eigenvalue problem, I generate the A and B matrices myself and then I use example 7 (from the slepc tutorial at $SLEPC_DIR/src/eps/examples/tutorials/ex7.c) to solve the problem:
>>>
>>> mpiexec -n 24 valgrind --tool=memcheck -q --num-callers=20 --log-file=valgrind.log.%p ./ex7 -malloc off -f1 A.petsc -f2 B.petsc -eps_nev 1 -eps_target -2.5e-4+1.56524i -eps_target_magnitude -eps_tol 1e-14 $opts
>>>
>>> where the $opts variable is:
>>> export opts='-st_type sinvert -st_ksp_type preonly -st_pc_type lu -eps_error_relative ::ascii_info_detail -st_pc_factor_mat_solver_type superlu_dist -mat_superlu_dist_iterrefine 1 -mat_superlu_dist_colperm PARMETIS -mat_superlu_dist_parsymbfact 1 -eps_converged_reason -eps_conv_rel -eps_monitor_conv -eps_true_residual 1'
>>>
>>> the output from valgrind (sample from one processor) and from the program are attached.
>>> If it's of any use the matrices are here (might need at least 180 Gb of ram to solve the problem successfully under petsc-3.9.4):
>>>
>>> https://www.dropbox.com/s/as9bec9iurjra6r/A.petsc?dl=0
>>> https://www.dropbox.com/s/u2bbmng23rp8l91/B.petsc?dl=0
>>>
>>> With petsc-3.9.4 and slepc-3.9.2 I can use matrices up to 10Gb (with 240 Gb ram), but only up to 3Gb with the latest petsc/slepc.
>>> Any suggestions, comments or any other help are very much appreciated!
>>>
>>> Cheers,
>>> Santiago
>>>
>>> On Mon, Dec 23, 2019 at 11:19 PM Matthew Knepley wrote:
>>> On Mon, Dec 23, 2019 at 3:14 PM Santiago Andres Triana wrote:
>>> Dear all,
>>>
>>> After upgrading to petsc 3.12.2 my solver program crashes consistently. Before the upgrade I was using petsc 3.9.4 with no problems.
>>>
>>> My application deals with a complex-valued, generalized eigenvalue problem. The matrices involved are relatively large, typically 2 to 10 Gb in size, which is no problem for petsc 3.9.4.
>>>
>>> Are you sure that your indices do not exceed 4B? If so, you need to configure using
>>>
>>>   --with-64-bit-indices
>>>
>>> Also, it would be nice if you ran with the debugger so we can get a stack trace for the SEGV.
>>>
>>>   Thanks,
>>>
>>>      Matt
>>>
>>> However, after the upgrade I can only obtain solutions when the matrices are small, the solver crashes when the matrices' sizes exceed about 1.5 Gb:
>>>
>>> [0]PETSC ERROR: ------------------------------------------------------------------------
>>> [0]PETSC ERROR: Caught signal number 15 Terminate: Some process (or the batch system) has told this process to end
>>> [0]PETSC ERROR: Try option -start_in_debugger or -on_error_attach_debugger
>>> [0]PETSC ERROR: or see https://www.mcs.anl.gov/petsc/documentation/faq.html#valgrind
>>> [0]PETSC ERROR: or try http://valgrind.org on GNU/linux and Apple Mac OS X to find memory corruption errors
>>> [0]PETSC ERROR: configure using --with-debugging=yes, recompile, link, and run
>>> [0]PETSC ERROR: to get more information on the crash.
>>>
>>> and so on for each cpu.
>>>
>>> I tried using valgrind and this is the typical output:
>>>
>>> ==2874== Conditional jump or move depends on uninitialised value(s)
>>> ==2874==    at 0x4018178: index (in /lib64/ld-2.22.so)
>>> ==2874==    by 0x400752D: expand_dynamic_string_token (in /lib64/ld-2.22.so)
>>> ==2874==    by 0x4008009: _dl_map_object (in /lib64/ld-2.22.so)
>>> ==2874==    by 0x40013E4: map_doit (in /lib64/ld-2.22.so)
>>> ==2874==    by 0x400EA53: _dl_catch_error (in /lib64/ld-2.22.so)
>>> ==2874==    by 0x4000ABE: do_preload (in /lib64/ld-2.22.so)
>>> ==2874==    by 0x4000EC0: handle_ld_preload (in /lib64/ld-2.22.so)
>>> ==2874==    by 0x40034F0: dl_main (in /lib64/ld-2.22.so)
>>> ==2874==    by 0x4016274: _dl_sysdep_start (in /lib64/ld-2.22.so)
>>> ==2874==    by 0x4004A99: _dl_start (in /lib64/ld-2.22.so)
>>> ==2874==    by 0x40011F7: ??? (in /lib64/ld-2.22.so)
>>> ==2874==    by 0x12: ???
>>> ==2874==
>>>
>>> These are my configuration options. Identical for both petsc 3.9.4 and 3.12.2:
>>>
>>> ./configure --with-scalar-type=complex --download-mumps --download-parmetis --download-metis --download-scalapack=1 --download-fblaslapack=1 --with-debugging=0 --download-superlu_dist=1 --download-ptscotch=1 CXXOPTFLAGS='-O3 -march=native' FOPTFLAGS='-O3 -march=native' COPTFLAGS='-O3 -march=native'
>>>
>>> Thanks in advance for any comments or ideas!
>>>
>>> Cheers,
>>> Santiago
>>>
>>> --
>>> What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead.
>>> -- Norbert Wiener
>>>
>>> https://www.cse.buffalo.edu/~knepley/

From knepley at gmail.com  Fri Jan 10 15:14:11 2020
From: knepley at gmail.com (Matthew Knepley)
Date: Fri, 10 Jan 2020 11:14:11 -1000
Subject: [petsc-users] petsc4py mpi matrix size
In-Reply-To: 
References: <14A24E61-0C01-4BEE-A457-6E88D0FCB172@anl.gov>
Message-ID: 

On Fri, Jan 10, 2020 at 7:23 AM Lukas Razinkovas wrote:

> Thank you very much!
>
> I already checked that it's an MPIAIJ matrix, and for the size I use the MatGetInfo routine.
> You are right. With a matrix of dim 100000x10000 I get sizes:
>
> - serial: 5.603 Mb
> - 4 proc.: 7.626 Mb
> - 36 proc.: 7.834 Mb
>
> That looks fine to me. Thank you again for such a quick response.
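(For reference, the check Lukas describes can be reproduced in a few lines of petsc4py; the sketch below builds the same kind of tridiagonal AIJ matrix as the slepc4py demo and prints the figure getInfo() reports. The size n and the matrix entries are illustrative only. The gap between the serial and parallel numbers is the per-process metadata Barry discusses below.)

    from petsc4py import PETSc

    n = 100000
    # Tridiagonal AIJ matrix, as in slepc4py/demo/ex1.py (values illustrative).
    A = PETSc.Mat().createAIJ([n, n], nnz=3, comm=PETSc.COMM_WORLD)
    rstart, rend = A.getOwnershipRange()
    for i in range(rstart, rend):
        cols = [j for j in (i - 1, i, i + 1) if 0 <= j < n]
        A.setValues(i, cols, [1.0] * len(cols))
    A.assemble()

    # getInfo() returns a dict; 'memory' is the byte count PETSc attributes to A.
    info = A.getInfo()
    PETSc.Sys.Print('type: {}, memory: {:g} bytes'.format(A.getType(), info['memory']))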
>
> I am really impressed with the python interface to petsc and slepc.
> I think it is missing detailed documentation, and that discouraged me from using it initially, so I was writing C code and then wrapping it with python. I am still confused about how, for example, to set MUMPS parameters from python code, but that is a different topic.

We would discourage you from setting parameters in the code, and rather use the command line interface to MUMPS parameters. However, you can put that in the code itself using PetscOptionsSetValue().

  Thanks,

     Matt

> Lukas
>
> On Fri, Jan 10, 2020 at 5:21 PM Smith, Barry F. wrote:
>
>>   Yes, with, for example, MATMPIAIJ, the matrix entries are distributed among the processes; first verify that you are using an MPI matrix, not Seq, since Seq will keep an entire copy on each process.
>>
>>   But the parallel matrices do come with some overhead for meta data. So for small matrices like yours it can seem the memory grows unrealistically. Try a much bigger matrix, say 100 times as big, and look at the memory usage then. You should see that the meta data is now a much smaller percentage of the memory usage.
>>
>>   Also be careful if you use top or other such tools for determining memory usage; since malloc()ed memory is often not returned to the OS, they can indicate much higher memory usage than is really taking place. You can run PETSc with -log_view -log_view_memory to get a good idea of where PETSc is allocating memory and how much.
>>
>>   Barry
>>
>> > On Jan 10, 2020, at 7:52 AM, Lukas Razinkovas wrote:
>> >
>> > Hello,
>> >
>> > I am trying to use petsc4py and slepc4py for parallel sparse matrix diagonalization.
>> > However I am a bit confused about the matrix size increase when I switch from a single processor to multiple processors. For example, a 100 x 100 matrix with 298 nonzero elements consumes 8820 bytes of memory (mat.getInfo()["memory"]); however, on two processes it consumes 20552 bytes of memory and on four, 33528. My matrix is taken from the slepc4py/demo/ex1.py, where nonzero elements are on three diagonals.
>> >
>> > Why does memory usage increase with the number of MPI processes?
>> > I thought that each process stores its own rows and it should stay the same. Or are some elements stored globally?
>> >
>> > Lukas

-- 
What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead.
-- Norbert Wiener

https://www.cse.buffalo.edu/~knepley/
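(To illustrate Matthew's last point above: a petsc4py sketch of pushing a MUMPS control into the options database before the solver is configured. The option -mat_mumps_icntl_14, MUMPS's working-space-increase percentage, and the value 40 are only examples; any option that could be given on the command line works the same way.)

    from petsc4py import PETSc

    # Options set programmatically are picked up by setFromOptions()
    # exactly as if they had been given on the command line.
    opts = PETSc.Options()
    opts['mat_mumps_icntl_14'] = 40   # e.g. extra working-space percentage for MUMPS

    ksp = PETSc.KSP().create()
    # ksp.setOperators(A)             # A: the assembled system matrix
    ksp.setType('preonly')
    ksp.getPC().setType('lu')
    ksp.getPC().setFactorSolverType('mumps')
    ksp.setFromOptions()              # consumes -mat_mumps_icntl_14 here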
From repepo at gmail.com  Fri Jan 10 17:04:15 2020
From: repepo at gmail.com (Santiago Andres Triana)
Date: Sat, 11 Jan 2020 00:04:15 +0100
Subject: [petsc-users] killed 9 signal after upgrade from petsc 3.9.4 to 3.12.2
In-Reply-To: 
References: 
Message-ID: 

Hi Barry, petsc-users:

Just updated to petsc-3.12.3 and the performance is about the same as 3.12.2, i.e. about 2x the memory use of petsc-3.9.4

petsc-3.12.3 (uses superlu_dist-6.2.0)

Summary of Memory Usage in PETSc
Maximum (over computational time) process memory:        total 2.9368e+10 max 1.2922e+09 min 1.1784e+09
Current process memory:                                  total 2.8192e+10 max 1.2263e+09 min 1.1456e+09
Maximum (over computational time) space PetscMalloc()ed: total 2.7619e+09 max 1.4339e+08 min 8.6494e+07
Current space PetscMalloc()ed:                           total 3.6127e+06 max 1.5053e+05 min 1.5053e+05

petsc-3.9.4

Summary of Memory Usage in PETSc
Maximum (over computational time) process memory:        total 1.5695e+10 max 7.1985e+08 min 6.0131e+08
Current process memory:                                  total 1.3186e+10 max 6.9240e+08 min 4.2821e+08
Maximum (over computational time) space PetscMalloc()ed: total 3.1290e+09 max 1.5869e+08 min 1.0179e+08
Current space PetscMalloc()ed:                           total 1.8808e+06 max 7.8368e+04 min 7.8368e+04

However, it seems that the culprit is superlu_dist: I recompiled the current petsc/slepc with superlu_dist-5.4.0 (used option --download-superlu_dist=/home/spin/superlu_dist-5.4.0.tar.gz) and the result is this:

petsc-3.12.3 with superlu_dist-5.4.0:

Summary of Memory Usage in PETSc
Maximum (over computational time) process memory:        total 1.5636e+10 max 7.1217e+08 min 5.9963e+08
Current process memory:                                  total 1.3401e+10 max 6.5498e+08 min 4.2626e+08
Maximum (over computational time) space PetscMalloc()ed: total 2.7619e+09 max 1.4339e+08 min 8.6494e+07
Current space PetscMalloc()ed:                           total 3.6127e+06 max 1.5053e+05 min 1.5053e+05

I could not compile petsc-3.12.3 with the exact superlu_dist version that petsc-3.9.4 uses (5.3.0), but will try newer versions to see how they perform ... I guess I should address this issue to the superlu maintainers?

Thanks!
Santiago

On Fri, Jan 10, 2020 at 9:19 PM Smith, Barry F. wrote:

>
>   Can you please try v3.12.3. There was some funky business mistakenly added related to partitioning that has been fixed in 3.12.3.
>
>   Barry
>
> [remainder of the quoted thread trimmed; it repeats the messages above verbatim]

From dave.mayhem23 at gmail.com  Sat Jan 11 01:34:04 2020
From: dave.mayhem23 at gmail.com (Dave May)
Date: Sat, 11 Jan 2020 08:34:04 +0100
Subject: [petsc-users] killed 9 signal after upgrade from petsc 3.9.4 to 3.12.2
In-Reply-To: 
References: 
Message-ID: 

On Sat 11. Jan 2020 at 00:04, Santiago Andres Triana wrote:

> I could not compile petsc-3.12.3 with the exact superlu_dist version that petsc-3.9.4 uses (5.3.0), but will try newer versions to see how they perform ... I guess I should address this issue to the superlu maintainers?

Yes.

> Thanks!
> Santiago

> [remainder of the quoted message trimmed; it repeats Santiago's message above verbatim]

From colin.cotter at imperial.ac.uk  Sat Jan 11 05:35:50 2020
From: colin.cotter at imperial.ac.uk (Cotter, Colin J)
Date: Sat, 11 Jan 2020 11:35:50 +0000
Subject: [petsc-users] Firedrake '20
Message-ID: 

Dear PETSc Users,

Just to let you know about the Firedrake '20 conference, which will be held at the University of Washington on 10 and 11 February, right before SIAM PP20 in the same city, Seattle. Firedrake is a finite element code generation framework with PETSc and DMPlex at the core, so this might be interesting to some of you. This is a reminder to register and submit an abstract by next Wednesday.
Full details and abstract submission are at:
https://www.firedrakeproject.org/firedrake_usa_20.html

all the best
--Colin

Professor Colin Cotter (he/him)
Department of Mathematics
755, Huxley Building
Imperial College London
South Kensington Campus
United Kingdom of Great Britain and Northern Ireland
+44 2075943468

From colin.cotter at imperial.ac.uk  Sat Jan 11 06:00:15 2020
From: colin.cotter at imperial.ac.uk (Cotter, Colin J)
Date: Sat, 11 Jan 2020 12:00:15 +0000
Subject: [petsc-users] Firedrake '20
In-Reply-To: 
References: 
Message-ID: 

Sorry, correction: this is called Firedrake USA, and Firedrake '20 will be in Exeter later in 2020.

________________________________
[original announcement quoted in full; trimmed]

From ys453 at cam.ac.uk  Tue Jan 14 04:44:05 2020
From: ys453 at cam.ac.uk (Y. Shidi)
Date: Tue, 14 Jan 2020 10:44:05 +0000
Subject: [petsc-users] error related to nested vector
Message-ID: <3fb47a31b0ecda2cd7ae06b6e830ac42@cam.ac.uk>

Dear developers,

I have a 2x2 nested matrix and the corresponding nested vector.
When I run the code with field splitting, I get the following errors:

[0]PETSC ERROR: PetscTrFreeDefault() called from VecRestoreArray_Nest() line 678 in /home/ys453/Sources/petsc/src/vec/vec/impls/nest/vecnest.c
[0]PETSC ERROR: Block at address 0x3f95f60 is corrupted; cannot free; may be block not allocated with PetscMalloc()
[0]PETSC ERROR: --------------------- Error Message --------------------------------------------------------------
[0]PETSC ERROR: Memory corruption: http://www.mcs.anl.gov/petsc/documentation/installation.html#valgrind
[0]PETSC ERROR: Bad location or corrupted memory
[0]PETSC ERROR: See http://www.mcs.anl.gov/petsc/documentation/faq.html for trouble shooting.
[0]PETSC ERROR: Petsc Release Version 3.9.3, unknown
[0]PETSC ERROR: 2DPetscSpuriousTest on a arch-linux2-c-debug named merlin by ys453 Tue Jan 14 10:36:53 2020
[0]PETSC ERROR: Configure options --download-scalapack --download-mumps --download-parmetis --download-metis --download-ptscotch --download-superlu_dist --download-hypre
[0]PETSC ERROR: #1 PetscTrFreeDefault() line 269 in /home/ys453/Sources/petsc/src/sys/memory/mtr.c
[0]PETSC ERROR: #2 VecRestoreArray_Nest() line 678 in /home/ys453/Sources/petsc/src/vec/vec/impls/nest/vecnest.c
[0]PETSC ERROR: #3 VecRestoreArrayRead() line 1835 in /home/ys453/Sources/petsc/src/vec/vec/interface/rvector.c
[0]PETSC ERROR: #4 VecRestoreArrayPair() line 511 in /home/ys453/Sources/petsc/include/petscvec.h
[0]PETSC ERROR: #5 VecScatterBegin_SSToSS() line 671 in /home/ys453/Sources/petsc/src/vec/vscat/impls/vscat.c
[0]PETSC ERROR: #6 VecScatterBegin() line 1779 in /home/ys453/Sources/petsc/src/vec/vscat/impls/vscat.c
[0]PETSC ERROR: #7 PCApply_FieldSplit() line 1010 in /home/ys453/Sources/petsc/src/ksp/pc/impls/fieldsplit/fieldsplit.c
[0]PETSC ERROR: #8 PCApply() line 457 in /home/ys453/Sources/petsc/src/ksp/pc/interface/precon.c
[0]PETSC ERROR: #9 KSP_PCApply() line 276 in /home/ys453/Sources/petsc/include/petsc/private/kspimpl.h
[0]PETSC ERROR: #10 KSPFGMRESCycle() line 166 in /home/ys453/Sources/petsc/src/ksp/ksp/impls/gmres/fgmres/fgmres.c
[0]PETSC ERROR: #11 KSPSolve_FGMRES() line 291 in /home/ys453/Sources/petsc/src/ksp/ksp/impls/gmres/fgmres/fgmres.c
[0]PETSC ERROR: #12 KSPSolve() line 669 in /home/ys453/Sources/petsc/src/ksp/ksp/interface/itfunc.c

I am not sure why it happens.

Thank you for your time.

Kind Regards,
Shidi

From knepley at gmail.com  Tue Jan 14 05:53:38 2020
From: knepley at gmail.com (Matthew Knepley)
Date: Tue, 14 Jan 2020 06:53:38 -0500
Subject: [petsc-users] error related to nested vector
In-Reply-To: <3fb47a31b0ecda2cd7ae06b6e830ac42@cam.ac.uk>
References: <3fb47a31b0ecda2cd7ae06b6e830ac42@cam.ac.uk>
Message-ID: 

It says that memory is being overwritten somewhere. You can track this down using valgrind, as it suggests in the error message.

  Thanks,

     Matt

On Tue, Jan 14, 2020 at 5:44 AM Y. Shidi wrote:

> [original message quoted in full; trimmed]
-- 
What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead.
-- Norbert Wiener

https://www.cse.buffalo.edu/~knepley/

From ys453 at cam.ac.uk  Tue Jan 14 05:58:13 2020
From: ys453 at cam.ac.uk (Y. Shidi)
Date: Tue, 14 Jan 2020 11:58:13 +0000
Subject: [petsc-users] error related to nested vector
In-Reply-To: 
References: <3fb47a31b0ecda2cd7ae06b6e830ac42@cam.ac.uk>
Message-ID: 

Thanks Matt,

Instead of constructing the nested vector by VecCreateNest(), I use VecRestoreSubVector() to solve this issue.

But I get problems with the field-splitting method; I use the following options:
#PETSc Option Table entries:
-fieldsplit_0_ksp_converged_reason true
-fieldsplit_0_ksp_rtol 1e-12
-fieldsplit_0_ksp_type cg
-fieldsplit_0_pc_type ilu
-fieldsplit_1_ksp_converged_reason true
-fieldsplit_1_ksp_rtol 1e-12
-fieldsplit_1_ksp_type cg
-fieldsplit_1_pc_type hypre
-ksp_converged_reason
-ksp_rtol 1e-12
-ksp_type fgmres
-ksp_view
-pc_fieldsplit_schur_fact_type upper
-pc_fieldsplit_schur_precondition seflp
-pc_type fieldsplit
#End of PETSc Option Table entries

There are some options that were not used:

WARNING! There are options you set that were not used!
WARNING! could be spelling mistake, etc!
Option left: name:-fieldsplit_0_ksp_converged_reason value: true
Option left: name:-fieldsplit_1_ksp_converged_reason value: true
Option left: name:-pc_fieldsplit_schur_fact_type value: upper
Option left: name:-pc_fieldsplit_schur_precondition value: seflp

Kind Regards,
Shidi

On 2020-01-14 11:53, Matthew Knepley wrote:
> It says that memory is being overwritten somewhere. You can track this down using valgrind, as it suggests in the error message.
>
> [rest of the quoted message trimmed; it repeats the messages above verbatim]

From knepley at gmail.com  Tue Jan 14 06:08:41 2020
From: knepley at gmail.com (Matthew Knepley)
Date: Tue, 14 Jan 2020 07:08:41 -0500
Subject: [petsc-users] error related to nested vector
In-Reply-To: 
References: <3fb47a31b0ecda2cd7ae06b6e830ac42@cam.ac.uk>
Message-ID: 

On Tue, Jan 14, 2020 at 6:58 AM Y. Shidi wrote:
Shidi wrote: > Thanks Matt, > > Instead of constructing the nested vector by VecCreateNest(), I use > VecRestoreSubVector() to solve this issue. > > But I got problems for field splitting method, > I use the following options: > #PETSc Option Table entries: > -fieldsplit_0_ksp_converged_reason true > -fieldsplit_0_ksp_rtol 1e-12 > -fieldsplit_0_ksp_type cg > -fieldsplit_0_pc_type ilu > -fieldsplit_1_ksp_converged_reason true > -fieldsplit_1_ksp_rtol 1e-12 > -fieldsplit_1_ksp_type cg > -fieldsplit_1_pc_type hypre > -ksp_converged_reason > -ksp_rtol 1e-12 > -ksp_type fgmres > -ksp_view > -pc_fieldsplit_schur_fact_type upper > -pc_fieldsplit_schur_precondition seflp > -pc_type fieldsplit > #End of PETSc Option Table entries > > There are some options that were not used. > > WARNING! There are options you set that were not used! > WARNING! could be spelling mistake, etc! > Option left: name:-fieldsplit_0_ksp_converged_reason value: true > Option left: name:-fieldsplit_1_ksp_converged_reason value: true > Option left: name:-pc_fieldsplit_schur_fact_type value: upper > Option left: name:-pc_fieldsplit_schur_precondition value: seflp > PCFIELDSPLIT has to know how to divide your vector into fields. VecNest provides that information since it is divided into the nested pieces. Other ways you can provide this: 1) Use a DM to describe the data layout. This is the best in my opinion. 2) Fix the VecNest. If an overwrite is happening, its best to track it down. 3) Provide an IS to PCFIELDSPLIT using PCFieldSplitSetIS(). 4) Split by block component Thanks, Matt > Kind Regards, > Shidi > > On 2020-01-14 11:53, Matthew Knepley wrote: > > It says that memory is being overwritten somewhere. You can track this > > down using valgrind, > > as it suggests in the error message. > > > > Thanks, > > > > Matt > > > > On Tue, Jan 14, 2020 at 5:44 AM Y. Shidi wrote: > > > >> Dear developers, > >> > >> I have a 2x2 nested matrix and the corresponding nested vector. > >> When I running the code with field splitting, it gets the following > >> errors: > >> > >> [0]PETSC ERROR: PetscTrFreeDefault() called from > >> VecRestoreArray_Nest() > >> line 678 in > >> /home/ys453/Sources/petsc/src/vec/vec/impls/nest/vecnest.c > >> [0]PETSC ERROR: Block at address 0x3f95f60 is corrupted; cannot > >> free; > >> may be block not allocated with PetscMalloc() > >> [0]PETSC ERROR: --------------------- Error Message > >> -------------------------------------------------------------- > >> [0]PETSC ERROR: Memory corruption: > >> > > http://www.mcs.anl.gov/petsc/documentation/installation.html#valgrind > >> [0]PETSC ERROR: Bad location or corrupted memory > >> [0]PETSC ERROR: See > >> http://www.mcs.anl.gov/petsc/documentation/faq.html > >> for trouble shooting. 
> >> [0]PETSC ERROR: Petsc Release Version 3.9.3, unknown > >> [0]PETSC ERROR: 2DPetscSpuriousTest on a arch-linux2-c-debug named > >> merlin by ys453 Tue Jan 14 10:36:53 2020 > >> [0]PETSC ERROR: Configure options --download-scalapack > >> --download-mumps > >> --download-parmetis --download-metis --download-ptscotch > >> --download-superlu_dist --download-hypre > >> [0]PETSC ERROR: #1 PetscTrFreeDefault() line 269 in > >> /home/ys453/Sources/petsc/src/sys/memory/mtr.c > >> [0]PETSC ERROR: #2 VecRestoreArray_Nest() line 678 in > >> /home/ys453/Sources/petsc/src/vec/vec/impls/nest/vecnest.c > >> [0]PETSC ERROR: #3 VecRestoreArrayRead() line 1835 in > >> /home/ys453/Sources/petsc/src/vec/vec/interface/rvector.c > >> [0]PETSC ERROR: #4 VecRestoreArrayPair() line 511 in > >> /home/ys453/Sources/petsc/include/petscvec.h > >> [0]PETSC ERROR: #5 VecScatterBegin_SSToSS() line 671 in > >> /home/ys453/Sources/petsc/src/vec/vscat/impls/vscat.c > >> [0]PETSC ERROR: #6 VecScatterBegin() line 1779 in > >> /home/ys453/Sources/petsc/src/vec/vscat/impls/vscat.c > >> [0]PETSC ERROR: #7 PCApply_FieldSplit() line 1010 in > >> /home/ys453/Sources/petsc/src/ksp/pc/impls/fieldsplit/fieldsplit.c > >> [0]PETSC ERROR: #8 PCApply() line 457 in > >> /home/ys453/Sources/petsc/src/ksp/pc/interface/precon.c > >> [0]PETSC ERROR: #9 KSP_PCApply() line 276 in > >> /home/ys453/Sources/petsc/include/petsc/private/kspimpl.h > >> [0]PETSC ERROR: #10 KSPFGMRESCycle() line 166 in > >> /home/ys453/Sources/petsc/src/ksp/ksp/impls/gmres/fgmres/fgmres.c > >> [0]PETSC ERROR: #11 KSPSolve_FGMRES() line 291 in > >> /home/ys453/Sources/petsc/src/ksp/ksp/impls/gmres/fgmres/fgmres.c > >> [0]PETSC ERROR: #12 KSPSolve() line 669 in > >> /home/ys453/Sources/petsc/src/ksp/ksp/interface/itfunc.c > >> > >> I am not sure why it happens. > >> > >> Thank you for your time. > >> > >> Kind Regards, > >> Shidi > > > > -- > > > > What most experimenters take for granted before they begin their > > experiments is infinitely more interesting than any results to which > > their experiments lead. > > -- Norbert Wiener > > > > https://www.cse.buffalo.edu/~knepley/ [1] > > > > > > Links: > > ------ > > [1] http://www.cse.buffalo.edu/~knepley/ > -- What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. -- Norbert Wiener https://www.cse.buffalo.edu/~knepley/ -------------- next part -------------- An HTML attachment was scrubbed... URL: From jczhang at mcs.anl.gov Tue Jan 14 09:25:29 2020 From: jczhang at mcs.anl.gov (Zhang, Junchao) Date: Tue, 14 Jan 2020 15:25:29 +0000 Subject: [petsc-users] error related to nested vector In-Reply-To: <3fb47a31b0ecda2cd7ae06b6e830ac42@cam.ac.uk> References: <3fb47a31b0ecda2cd7ae06b6e830ac42@cam.ac.uk> Message-ID: Do you have a test example? --Junchao Zhang On Tue, Jan 14, 2020 at 4:44 AM Y. Shidi > wrote: Dear developers, I have a 2x2 nested matrix and the corresponding nested vector. 
When I running the code with field splitting, it gets the following errors: [0]PETSC ERROR: PetscTrFreeDefault() called from VecRestoreArray_Nest() line 678 in /home/ys453/Sources/petsc/src/vec/vec/impls/nest/vecnest.c [0]PETSC ERROR: Block at address 0x3f95f60 is corrupted; cannot free; may be block not allocated with PetscMalloc() [0]PETSC ERROR: --------------------- Error Message -------------------------------------------------------------- [0]PETSC ERROR: Memory corruption: http://www.mcs.anl.gov/petsc/documentation/installation.html#valgrind [0]PETSC ERROR: Bad location or corrupted memory [0]PETSC ERROR: See http://www.mcs.anl.gov/petsc/documentation/faq.html for trouble shooting. [0]PETSC ERROR: Petsc Release Version 3.9.3, unknown [0]PETSC ERROR: 2DPetscSpuriousTest on a arch-linux2-c-debug named merlin by ys453 Tue Jan 14 10:36:53 2020 [0]PETSC ERROR: Configure options --download-scalapack --download-mumps --download-parmetis --download-metis --download-ptscotch --download-superlu_dist --download-hypre [0]PETSC ERROR: #1 PetscTrFreeDefault() line 269 in /home/ys453/Sources/petsc/src/sys/memory/mtr.c [0]PETSC ERROR: #2 VecRestoreArray_Nest() line 678 in /home/ys453/Sources/petsc/src/vec/vec/impls/nest/vecnest.c [0]PETSC ERROR: #3 VecRestoreArrayRead() line 1835 in /home/ys453/Sources/petsc/src/vec/vec/interface/rvector.c [0]PETSC ERROR: #4 VecRestoreArrayPair() line 511 in /home/ys453/Sources/petsc/include/petscvec.h [0]PETSC ERROR: #5 VecScatterBegin_SSToSS() line 671 in /home/ys453/Sources/petsc/src/vec/vscat/impls/vscat.c [0]PETSC ERROR: #6 VecScatterBegin() line 1779 in /home/ys453/Sources/petsc/src/vec/vscat/impls/vscat.c [0]PETSC ERROR: #7 PCApply_FieldSplit() line 1010 in /home/ys453/Sources/petsc/src/ksp/pc/impls/fieldsplit/fieldsplit.c [0]PETSC ERROR: #8 PCApply() line 457 in /home/ys453/Sources/petsc/src/ksp/pc/interface/precon.c [0]PETSC ERROR: #9 KSP_PCApply() line 276 in /home/ys453/Sources/petsc/include/petsc/private/kspimpl.h [0]PETSC ERROR: #10 KSPFGMRESCycle() line 166 in /home/ys453/Sources/petsc/src/ksp/ksp/impls/gmres/fgmres/fgmres.c [0]PETSC ERROR: #11 KSPSolve_FGMRES() line 291 in /home/ys453/Sources/petsc/src/ksp/ksp/impls/gmres/fgmres/fgmres.c [0]PETSC ERROR: #12 KSPSolve() line 669 in /home/ys453/Sources/petsc/src/ksp/ksp/interface/itfunc.c I am not sure why it happens. Thank you for your time. Kind Regards, Shidi -------------- next part -------------- An HTML attachment was scrubbed... URL: From salazardetro1 at llnl.gov Tue Jan 14 14:52:46 2020 From: salazardetro1 at llnl.gov (Salazar De Troya, Miguel) Date: Tue, 14 Jan 2020 20:52:46 +0000 Subject: [petsc-users] Null space for the Stokes laplacian operator Message-ID: GAMG needs the kernel of the operator to build the coarsening spaces. In elasticity, these are the translation and the rotations. If I were to solve the Stokes problem using the Schur complement approach and I wanted to use GAMG for the Laplacian of the velocity block, should I pass the kernel as well? Is this kernel made of translation and rotation (of the velocity)? I haven?t found anything like this in the literature, hence my question. Thanks Miguel Miguel A. Salazar de Troya Postdoctoral Researcher, Lawrence Livermore National Laboratory B141 Rm: 1085-5 Ph: 1(925) 422-6411 -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From knepley at gmail.com Tue Jan 14 15:46:24 2020 From: knepley at gmail.com (Matthew Knepley) Date: Tue, 14 Jan 2020 16:46:24 -0500 Subject: [petsc-users] Null space for the Stokes laplacian operator In-Reply-To: References: Message-ID: On Tue, Jan 14, 2020 at 3:52 PM Salazar De Troya, Miguel via petsc-users < petsc-users at mcs.anl.gov> wrote: > GAMG needs the kernel of the operator to build the coarsening spaces. In > elasticity, these are the translation and the rotations. If I were to solve > the Stokes problem using the Schur complement approach and I wanted to use > GAMG for the Laplacian of the velocity block, should I pass the kernel as > well? Is this kernel made of translation and rotation (of the velocity)? I > haven?t found anything like this in the literature, hence my question. > The kernel should be just the constant functions, which all AMG implementations include by default. You can tell people only solve the Laplacian. Thanks, Matt > Thanks > > Miguel > > > > Miguel A. Salazar de Troya > > Postdoctoral Researcher, Lawrence Livermore National Laboratory > > B141 > > Rm: 1085-5 > > Ph: 1(925) 422-6411 > -- What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. -- Norbert Wiener https://www.cse.buffalo.edu/~knepley/ -------------- next part -------------- An HTML attachment was scrubbed... URL: From epscodes at gmail.com Tue Jan 14 20:55:08 2020 From: epscodes at gmail.com (Xiangdong) Date: Tue, 14 Jan 2020 21:55:08 -0500 Subject: [petsc-users] chowiluviennacl Message-ID: Dear Developers, I have a quick question about the chowiluviennacl. When I tried to use it, I found that it only works for np=1, not np>1. However, in the description of chowiluviennacl.cxx, it says "the ViennaCL Chow-Patel parallel ILU preconditioner". I am wondering whether I am using it correctly. Does chowiluviennacl work for np>1? In addition, are there option keys for the chowiluviennacl one can try? Thank you. Best, Xiangdong -------------- next part -------------- An HTML attachment was scrubbed... URL: From knepley at gmail.com Tue Jan 14 21:04:51 2020 From: knepley at gmail.com (Matthew Knepley) Date: Tue, 14 Jan 2020 22:04:51 -0500 Subject: [petsc-users] chowiluviennacl In-Reply-To: References: Message-ID: On Tue, Jan 14, 2020 at 9:56 PM Xiangdong wrote: > Dear Developers, > > I have a quick question about the chowiluviennacl. When I tried to use it, > I found that it only works for np=1, not np>1. However, in the description > of chowiluviennacl.cxx, it says "the ViennaCL Chow-Patel parallel ILU > preconditioner". > By parallel, this means shared memory parallelism on the GPU. > I am wondering whether I am using it correctly. Does chowiluviennacl work > for np>1? > I do not believe so. I do not see why it could not be extended, but that would mean writing some more code. Thanks, Matt > In addition, are there option keys for the chowiluviennacl one can try? > Thank you. > > Best, > Xiangdong > -- What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. -- Norbert Wiener https://www.cse.buffalo.edu/~knepley/ -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From timothee.nicolas at gmail.com Wed Jan 15 04:03:40 2020 From: timothee.nicolas at gmail.com (=?UTF-8?Q?Timoth=C3=A9e_Nicolas?=) Date: Wed, 15 Jan 2020 11:03:40 +0100 Subject: [petsc-users] SNESSetOptionsPrefix usage Message-ID: Dear PETSc users, I am confused by the usage of SNESSetOptionsPrefix. I understand this is required if you have for example different SNES in your program and want to set different options for them. So for my second snes I wrote call SNESCreate(MPI_COMM_SELF,snes,ierr) call SNESSetOptionsPrefix(snes,'green_',ierr) call SNESSetFromOptions(snes,ierr) etc. Then when launching the program I wanted to monitor that snes so I launched it with the option -green_snes_monitor instead of -snes_monitor. But I keep getting the message WARNING! There are options you set that were not used! WARNING! could be spelling mistake, etc! Option left: name:-green_snes_monitor (no value) What do I miss here? Best regards Timoth?e NICOLAS -------------- next part -------------- An HTML attachment was scrubbed... URL: From dmitry.melnichuk at geosteertech.com Wed Jan 15 07:00:03 2020 From: dmitry.melnichuk at geosteertech.com (=?utf-8?B?0JTQvNC40YLRgNC40Lkg0JzQtdC70YzQvdC40YfRg9C6?=) Date: Wed, 15 Jan 2020 16:00:03 +0300 Subject: [petsc-users] Solver compilation with 64-bit version of PETSc under Windows 10 using Cygwin Message-ID: <35432251579093203@vla3-307f9063dbf4.qloud-c.yandex.net> An HTML attachment was scrubbed... URL: From bsmith at mcs.anl.gov Wed Jan 15 07:59:50 2020 From: bsmith at mcs.anl.gov (Smith, Barry F.) Date: Wed, 15 Jan 2020 13:59:50 +0000 Subject: [petsc-users] SNESSetOptionsPrefix usage In-Reply-To: References: Message-ID: Works for me with PETSc 12, what version of PETSc are you using? program main #include use petsc implicit none ! - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - PetscErrorCode ierr SNES snes1 call PetscInitialize(PETSC_NULL_CHARACTER,ierr) if (ierr .ne. 0) then print*,'Unable to initialize PETSc' stop endif call SNESCreate(MPI_COMM_SELF,snes1,ierr) call SNESSetOptionsPrefix(snes1,'green_',ierr) call SNESSetFromOptions(snes1,ierr) call SNESDestroy(snes1,ierr) call PetscFinalize(ierr) end $ ./ex1f -green_snes_monitor ~/Src/petsc/src/snes/examples/tutorials (oanam/sem-pde-optimization *=) $ ./ex1f -green_snes_monitor -options_left #PETSc Option Table entries: -check_pointer_intensity 0 -green_snes_monitor -malloc_dump -options_left #End of PETSc Option Table entries There are no unused options. ~/Src/petsc/src/snes/examples/tutorials (oanam/sem-pde-optimization *=) > On Jan 15, 2020, at 4:03 AM, Timoth?e Nicolas wrote: > > Dear PETSc users, > > I am confused by the usage of SNESSetOptionsPrefix. I understand this is required if you have for example different SNES in your program and want to set different options for them. > So for my second snes I wrote > > call SNESCreate(MPI_COMM_SELF,snes,ierr) > call SNESSetOptionsPrefix(snes,'green_',ierr) > call SNESSetFromOptions(snes,ierr) > > etc. > > Then when launching the program I wanted to monitor that snes so I launched it with the option -green_snes_monitor instead of -snes_monitor. But I keep getting the message > > WARNING! There are options you set that were not used! > WARNING! could be spelling mistake, etc! > Option left: name:-green_snes_monitor (no value) > > What do I miss here? 
> > Best regards > > Timoth?e NICOLAS From mfadams at lbl.gov Wed Jan 15 07:59:36 2020 From: mfadams at lbl.gov (Mark Adams) Date: Wed, 15 Jan 2020 08:59:36 -0500 Subject: [petsc-users] SNESSetOptionsPrefix usage In-Reply-To: References: Message-ID: I'm guessing a Fortran issue. What version of PETSc are you using? On Wed, Jan 15, 2020 at 8:36 AM Timoth?e Nicolas wrote: > Dear PETSc users, > > I am confused by the usage of SNESSetOptionsPrefix. I understand this is > required if you have for example different SNES in your program and want to > set different options for them. > So for my second snes I wrote > > call SNESCreate(MPI_COMM_SELF,snes,ierr) > > call SNESSetOptionsPrefix(snes,'green_',ierr) > > call SNESSetFromOptions(snes,ierr) > > etc. > > Then when launching the program I wanted to monitor that snes so I > launched it with the option -green_snes_monitor instead of -snes_monitor. > But I keep getting the message > > WARNING! There are options you set that were not used! > > WARNING! could be spelling mistake, etc! > > Option left: name:-green_snes_monitor (no value) > > What do I miss here? > > Best regards > > Timoth?e NICOLAS > -------------- next part -------------- An HTML attachment was scrubbed... URL: From balay at mcs.anl.gov Wed Jan 15 08:14:00 2020 From: balay at mcs.anl.gov (Balay, Satish) Date: Wed, 15 Jan 2020 14:14:00 +0000 Subject: [petsc-users] Solver compilation with 64-bit version of PETSc under Windows 10 using Cygwin In-Reply-To: <35432251579093203@vla3-307f9063dbf4.qloud-c.yandex.net> References: <35432251579093203@vla3-307f9063dbf4.qloud-c.yandex.net> Message-ID: -fdefault-integer-8 is likely to break things [esp with MPI - where 'integer' is used everywhere for ex - MPI_Comm etc - so MPI includes become incompatible with the MPI library with -fdefault-integer-8.] And I'm not sure why you are having to use PetscInt for ierr. All PETSc routines should be suing 'PetscErrorCode for ierr' What version of PETSc are you using? Are you seeing this issue with a PETSc example? Satish On Wed, 15 Jan 2020, ??????? ????????? wrote: > Hello all! > ?At present time I need to compile solver called Defmod (https://bitbucket.org/stali/defmod/wiki/Home),?which is written in Fortran 95. > Defmod uses PETSc for solving linear algebra system. > Solver compilation with?32-bit version of PETSc does not cause any problem.? > But solver compilation with?64-bit version?of PETSc produces an error with size of?ierr?PETSc?variable.? > ? > 1.?For example,?consider the?following?statements written?in Fortran: > ? > ? > PetscErrorCode :: ierr_m > PetscInt :: ierr > ... > ... > call VecDuplicate(Vec_U,Vec_Um,ierr)? > call VecCopy(Vec_U,Vec_Um,ierr) > call VecGetLocalSize(Vec_U,j,ierr) > call VecGetOwnershipRange(Vec_U,j1,j2,ierr_m) > ? > ? > As can be seen first three subroutunes require?ierr?to be size of?INTEGER(8), while the last subroutine (VecGetOwnershipRange) requires?ierr?to be size of?INTEGER(4). > Using the same integer?format gives an error: > ? > There is no specific subroutine for the generic ?vecgetownershiprange? at (1) > ? > 2. Another example is: > ? > ? > call MatAssemblyBegin(Mat_K,Mat_Final_Assembly,ierr) > CHKERRA(ierr) > call MatAssemblyEnd(Mat_K,Mat_Final_Assembly,ierr) > ? > ? > I am not able to define an?appropriate size if?ierr?in?CHKERRA(ierr). If I choose?INTEGER(8), the error "Type mismatch in argument ?ierr? at (1); passed INTEGER(8) to > INTEGER(4)"?occurs. > If I define?ierr??as?INTEGER(4),?the error "Type mismatch in argument ?ierr? 
at (1); passed INTEGER(4) to INTEGER(8)"?appears. > ?? > 3. If I change the sizes of ierr vaiables as error messages require, the compilation completed successfully, but an error occurs when calculating the RHS vector with > following message: > [0]PETSC?ERROR: Out of range index value -4 cannot be negative? > ? > > Command to configure 32-bit version of PETSc?under Windows 10 using Cygwin: > ./configure --with-cc=x86_64-w64-mingw32-gcc --with-cxx=x86_64-w64-mingw32-g++ --with-fc=x86_64-w64-mingw32-gfortran --download-fblaslapack > --with-mpi-include=/cygdrive/c/MPISDK/Include --with-mpi-lib=/cygdrive/c/MPISDK/Lib/libmsmpi.a --with-mpi-mpiexec=/cygdrive/c/MPI/Bin/mpiexec.exe --with-debugging=yes > -CFLAGS='-O2' -CXXFLAGS='-O2' -FFLAGS='-O2 -static-libgfortran -static -lpthread -fno-range-check' --with-shared-libraries=no > ?Command to configure 64-bit version of PETSc?under Windows 10 using Cygwin:./configure --with-cc=x86_64-w64-mingw32-gcc --with-cxx=x86_64-w64-mingw32-g++ > --with-fc=x86_64-w64-mingw32-gfortran --download-fblaslapack --with-mpi-include=/cygdrive/c/MPISDK/Include --with-mpi-lib=/cygdrive/c/MPISDK/Lib/libmsmpi.a > --with-mpi-mpiexec=/cygdrive/c/MPI/Bin/mpiexec.exe --with-debugging=yes -CFLAGS='-O2' -CXXFLAGS='-O2' -FFLAGS='-O2 -static-libgfortran -static -lpthread -fno-range-check > -fdefault-integer-8' --with-shared-libraries=no --with-64-bit-indices --known-64-bit-blas-indices > > ? > Kind regards, > Dmitry Melnichuk > > From epscodes at gmail.com Wed Jan 15 08:35:47 2020 From: epscodes at gmail.com (Xiangdong) Date: Wed, 15 Jan 2020 09:35:47 -0500 Subject: [petsc-users] chowiluviennacl In-Reply-To: References: Message-ID: Can chowiluviennacl do ilu0? I need to solve a tri-diagonal system directly. If I apply the PCILU, I will obtain the exact solution with preonly + pcilu. However, the preonly + chowiluviennacl will not provide the exact solution. Any option keys to set the CHOWILUVIENNACL filling level or dropping off tolerance like the standard ilu? Thank you. Best, Xiangdong On Tue, Jan 14, 2020 at 10:05 PM Matthew Knepley wrote: > On Tue, Jan 14, 2020 at 9:56 PM Xiangdong wrote: > >> Dear Developers, >> >> I have a quick question about the chowiluviennacl. When I tried to use >> it, I found that it only works for np=1, not np>1. However, in the >> description of chowiluviennacl.cxx, it says "the ViennaCL Chow-Patel >> parallel ILU preconditioner". >> > > By parallel, this means shared memory parallelism on the GPU. > > >> I am wondering whether I am using it correctly. Does chowiluviennacl work >> for np>1? >> > > I do not believe so. I do not see why it could not be extended, but that > would mean writing some more code. > > Thanks, > > Matt > > >> In addition, are there option keys for the chowiluviennacl one can try? >> Thank you. >> >> Best, >> Xiangdong >> > > > -- > What most experimenters take for granted before they begin their > experiments is infinitely more interesting than any results to which their > experiments lead. > -- Norbert Wiener > > https://www.cse.buffalo.edu/~knepley/ > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From knepley at gmail.com Wed Jan 15 08:40:56 2020 From: knepley at gmail.com (Matthew Knepley) Date: Wed, 15 Jan 2020 09:40:56 -0500 Subject: [petsc-users] chowiluviennacl In-Reply-To: References: Message-ID: On Wed, Jan 15, 2020 at 9:36 AM Xiangdong wrote: > Can chowiluviennacl do ilu0? > > I need to solve a tri-diagonal system directly. 
If I apply the PCILU, I > will obtain the exact solution with preonly + pcilu. However, the preonly + > chowiluviennacl will not provide the exact solution. Any option keys to set > the CHOWILUVIENNACL filling level or dropping off tolerance like the > standard ilu? > No. However, such a scheme makes less sense here. This algorithm spawns a individual threads for individual elements. Drop tolerance is not less work, it is sparser, but that should not matter for a tridiagonal system. Levels also is not applicable since you have only 1 level. Thanks, Matt > Thank you. > > Best, > Xiangdong > > > > On Tue, Jan 14, 2020 at 10:05 PM Matthew Knepley > wrote: > >> On Tue, Jan 14, 2020 at 9:56 PM Xiangdong wrote: >> >>> Dear Developers, >>> >>> I have a quick question about the chowiluviennacl. When I tried to use >>> it, I found that it only works for np=1, not np>1. However, in the >>> description of chowiluviennacl.cxx, it says "the ViennaCL Chow-Patel >>> parallel ILU preconditioner". >>> >> >> By parallel, this means shared memory parallelism on the GPU. >> >> >>> I am wondering whether I am using it correctly. Does chowiluviennacl >>> work for np>1? >>> >> >> I do not believe so. I do not see why it could not be extended, but that >> would mean writing some more code. >> >> Thanks, >> >> Matt >> >> >>> In addition, are there option keys for the chowiluviennacl one can try? >>> Thank you. >>> >>> Best, >>> Xiangdong >>> >> >> >> -- >> What most experimenters take for granted before they begin their >> experiments is infinitely more interesting than any results to which their >> experiments lead. >> -- Norbert Wiener >> >> https://www.cse.buffalo.edu/~knepley/ >> >> > -- What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. -- Norbert Wiener https://www.cse.buffalo.edu/~knepley/ -------------- next part -------------- An HTML attachment was scrubbed... URL: From timothee.nicolas at gmail.com Wed Jan 15 08:58:11 2020 From: timothee.nicolas at gmail.com (=?UTF-8?Q?Timoth=C3=A9e_Nicolas?=) Date: Wed, 15 Jan 2020 15:58:11 +0100 Subject: [petsc-users] SNESSetOptionsPrefix usage In-Reply-To: References: Message-ID: Hi, thanks for your answer, I'm using Petsc version 3.10.4 Timoth?e Le mer. 15 janv. 2020 ? 14:59, Mark Adams a ?crit : > I'm guessing a Fortran issue. What version of PETSc are you using? > > On Wed, Jan 15, 2020 at 8:36 AM Timoth?e Nicolas < > timothee.nicolas at gmail.com> wrote: > >> Dear PETSc users, >> >> I am confused by the usage of SNESSetOptionsPrefix. I understand this is >> required if you have for example different SNES in your program and want to >> set different options for them. >> So for my second snes I wrote >> >> call SNESCreate(MPI_COMM_SELF,snes,ierr) >> >> call SNESSetOptionsPrefix(snes,'green_',ierr) >> >> call SNESSetFromOptions(snes,ierr) >> >> etc. >> >> Then when launching the program I wanted to monitor that snes so I >> launched it with the option -green_snes_monitor instead of -snes_monitor. >> But I keep getting the message >> >> WARNING! There are options you set that were not used! >> >> WARNING! could be spelling mistake, etc! >> >> Option left: name:-green_snes_monitor (no value) >> >> What do I miss here? >> >> Best regards >> >> Timoth?e NICOLAS >> > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From epscodes at gmail.com Wed Jan 15 08:58:51 2020 From: epscodes at gmail.com (Xiangdong) Date: Wed, 15 Jan 2020 09:58:51 -0500 Subject: [petsc-users] chowiluviennacl In-Reply-To: References: Message-ID: Maybe I am not clear. I want to solve the block tridiagonal system Tx=b a few times with same T but different b. On CPU, I can have it by applying the ILU0 and reuse the factorization. Since it is block tridiagonal, ILU0 would give same results as LU. I am trying to do the same thing on GPU with chowiluviennacl, but found default factorization does not produce the exact factorization for tridiagonal system. Can we tight the drop off tolerance so that it can work as LU for tridiagonal system? Thank you. Xiangdong On Wed, Jan 15, 2020 at 9:41 AM Matthew Knepley wrote: > On Wed, Jan 15, 2020 at 9:36 AM Xiangdong wrote: > >> Can chowiluviennacl do ilu0? >> >> I need to solve a tri-diagonal system directly. If I apply the PCILU, I >> will obtain the exact solution with preonly + pcilu. However, the preonly + >> chowiluviennacl will not provide the exact solution. Any option keys to set >> the CHOWILUVIENNACL filling level or dropping off tolerance like the >> standard ilu? >> > > No. However, such a scheme makes less sense here. This algorithm spawns a > individual threads for individual elements. Drop tolerance > is not less work, it is sparser, but that should not matter for a > tridiagonal system. Levels also is not applicable since you have only 1 > level. > > Thanks, > > Matt > > >> Thank you. >> >> Best, >> Xiangdong >> >> >> >> On Tue, Jan 14, 2020 at 10:05 PM Matthew Knepley >> wrote: >> >>> On Tue, Jan 14, 2020 at 9:56 PM Xiangdong wrote: >>> >>>> Dear Developers, >>>> >>>> I have a quick question about the chowiluviennacl. When I tried to use >>>> it, I found that it only works for np=1, not np>1. However, in the >>>> description of chowiluviennacl.cxx, it says "the ViennaCL Chow-Patel >>>> parallel ILU preconditioner". >>>> >>> >>> By parallel, this means shared memory parallelism on the GPU. >>> >>> >>>> I am wondering whether I am using it correctly. Does chowiluviennacl >>>> work for np>1? >>>> >>> >>> I do not believe so. I do not see why it could not be extended, but that >>> would mean writing some more code. >>> >>> Thanks, >>> >>> Matt >>> >>> >>>> In addition, are there option keys for the chowiluviennacl one can try? >>>> Thank you. >>>> >>>> Best, >>>> Xiangdong >>>> >>> >>> >>> -- >>> What most experimenters take for granted before they begin their >>> experiments is infinitely more interesting than any results to which their >>> experiments lead. >>> -- Norbert Wiener >>> >>> https://www.cse.buffalo.edu/~knepley/ >>> >>> >> > > -- > What most experimenters take for granted before they begin their > experiments is infinitely more interesting than any results to which their > experiments lead. > -- Norbert Wiener > > https://www.cse.buffalo.edu/~knepley/ > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From dmitry.melnichuk at geosteertech.com Wed Jan 15 09:26:11 2020 From: dmitry.melnichuk at geosteertech.com (=?utf-8?B?0JTQvNC40YLRgNC40Lkg0JzQtdC70YzQvdC40YfRg9C6?=) Date: Wed, 15 Jan 2020 18:26:11 +0300 Subject: [petsc-users] Solver compilation with 64-bit version of PETSc under Windows 10 using Cygwin In-Reply-To: References: <35432251579093203@vla3-307f9063dbf4.qloud-c.yandex.net> Message-ID: <42552671579101971@vla3-bebe75876e15.qloud-c.yandex.net> An HTML attachment was scrubbed... 
URL: From bsmith at mcs.anl.gov Wed Jan 15 11:04:09 2020 From: bsmith at mcs.anl.gov (Smith, Barry F.) Date: Wed, 15 Jan 2020 17:04:09 +0000 Subject: [petsc-users] SNESSetOptionsPrefix usage In-Reply-To: References: Message-ID: Should still work. Run in the debugger and put a break point in snessetoptionsprefix_ and see what it is trying to do Barry > On Jan 15, 2020, at 8:58 AM, Timoth?e Nicolas wrote: > > Hi, thanks for your answer, > > I'm using Petsc version 3.10.4 > > Timoth?e > > Le mer. 15 janv. 2020 ? 14:59, Mark Adams a ?crit : > I'm guessing a Fortran issue. What version of PETSc are you using? > > On Wed, Jan 15, 2020 at 8:36 AM Timoth?e Nicolas wrote: > Dear PETSc users, > > I am confused by the usage of SNESSetOptionsPrefix. I understand this is required if you have for example different SNES in your program and want to set different options for them. > So for my second snes I wrote > > call SNESCreate(MPI_COMM_SELF,snes,ierr) > call SNESSetOptionsPrefix(snes,'green_',ierr) > call SNESSetFromOptions(snes,ierr) > > etc. > > Then when launching the program I wanted to monitor that snes so I launched it with the option -green_snes_monitor instead of -snes_monitor. But I keep getting the message > > WARNING! There are options you set that were not used! > WARNING! could be spelling mistake, etc! > Option left: name:-green_snes_monitor (no value) > > What do I miss here? > > Best regards > > Timoth?e NICOLAS From knepley at gmail.com Wed Jan 15 11:40:19 2020 From: knepley at gmail.com (Matthew Knepley) Date: Wed, 15 Jan 2020 12:40:19 -0500 Subject: [petsc-users] chowiluviennacl In-Reply-To: References: Message-ID: On Wed, Jan 15, 2020 at 9:59 AM Xiangdong wrote: > Maybe I am not clear. I want to solve the block tridiagonal system Tx=b a > few times with same T but different b. On CPU, I can have it by applying > the ILU0 and reuse the factorization. Since it is block tridiagonal, ILU0 > would give same results as LU. > > I am trying to do the same thing on GPU with chowiluviennacl, but found > default factorization does not produce the exact factorization for > tridiagonal system. Can we tight the drop off tolerance so that it can work > as LU for tridiagonal system? > There are no options in our implementation. You could look at the ViennaCL manual to see if we missed something. Thanks, Matt > Thank you. > > Xiangdong > > On Wed, Jan 15, 2020 at 9:41 AM Matthew Knepley wrote: > >> On Wed, Jan 15, 2020 at 9:36 AM Xiangdong wrote: >> >>> Can chowiluviennacl do ilu0? >>> >>> I need to solve a tri-diagonal system directly. If I apply the PCILU, I >>> will obtain the exact solution with preonly + pcilu. However, the preonly + >>> chowiluviennacl will not provide the exact solution. Any option keys to set >>> the CHOWILUVIENNACL filling level or dropping off tolerance like the >>> standard ilu? >>> >> >> No. However, such a scheme makes less sense here. This algorithm spawns a >> individual threads for individual elements. Drop tolerance >> is not less work, it is sparser, but that should not matter for a >> tridiagonal system. Levels also is not applicable since you have only 1 >> level. >> >> Thanks, >> >> Matt >> >> >>> Thank you. >>> >>> Best, >>> Xiangdong >>> >>> >>> >>> On Tue, Jan 14, 2020 at 10:05 PM Matthew Knepley >>> wrote: >>> >>>> On Tue, Jan 14, 2020 at 9:56 PM Xiangdong wrote: >>>> >>>>> Dear Developers, >>>>> >>>>> I have a quick question about the chowiluviennacl. When I tried to use >>>>> it, I found that it only works for np=1, not np>1. 
However, in the >>>>> description of chowiluviennacl.cxx, it says "the ViennaCL Chow-Patel >>>>> parallel ILU preconditioner". >>>>> >>>> >>>> By parallel, this means shared memory parallelism on the GPU. >>>> >>>> >>>>> I am wondering whether I am using it correctly. Does chowiluviennacl >>>>> work for np>1? >>>>> >>>> >>>> I do not believe so. I do not see why it could not be extended, but >>>> that would mean writing some more code. >>>> >>>> Thanks, >>>> >>>> Matt >>>> >>>> >>>>> In addition, are there option keys for the chowiluviennacl one can try? >>>>> Thank you. >>>>> >>>>> Best, >>>>> Xiangdong >>>>> >>>> >>>> >>>> -- >>>> What most experimenters take for granted before they begin their >>>> experiments is infinitely more interesting than any results to which their >>>> experiments lead. >>>> -- Norbert Wiener >>>> >>>> https://www.cse.buffalo.edu/~knepley/ >>>> >>>> >>> >> >> -- >> What most experimenters take for granted before they begin their >> experiments is infinitely more interesting than any results to which their >> experiments lead. >> -- Norbert Wiener >> >> https://www.cse.buffalo.edu/~knepley/ >> >> > -- What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. -- Norbert Wiener https://www.cse.buffalo.edu/~knepley/ -------------- next part -------------- An HTML attachment was scrubbed... URL: From knepley at gmail.com Wed Jan 15 11:55:22 2020 From: knepley at gmail.com (Matthew Knepley) Date: Wed, 15 Jan 2020 12:55:22 -0500 Subject: [petsc-users] Solver compilation with 64-bit version of PETSc under Windows 10 using Cygwin In-Reply-To: <42552671579101971@vla3-bebe75876e15.qloud-c.yandex.net> References: <35432251579093203@vla3-307f9063dbf4.qloud-c.yandex.net> <42552671579101971@vla3-bebe75876e15.qloud-c.yandex.net> Message-ID: On Wed, Jan 15, 2020 at 10:26 AM ??????? ????????? < dmitry.melnichuk at geosteertech.com> wrote: > > And I'm not sure why you are having to use PetscInt for ierr. All PETSc > routines should be suing 'PetscErrorCode for ierr' > > If I define *ierr *as *PetscErrorCode *for all subroutines given below > > call VecDuplicate(Vec_U,Vec_Um,ierr) > call VecCopy(Vec_U,Vec_Um,ierr) > call VecGetLocalSize(Vec_U,j,ierr) > call VecGetOwnershipRange(Vec_U,j1,j2,ierr) > > then errors occur with first three subroutines: > *Error: Type mismatch in argument ?z? at (1); passed INTEGER(4) to > INTEGER(8).* > Barry, It looks like the ftn-auto interfaces are using 'integer' for the error code, whereas the ftn-custom is using PetscErrorCode. Could we make the generated ones use integer? Thanks, Matt > Therefore I was forced to define *ierr *as *PetscInt *for VecDuplicate, > VecCopy, VecGetLocalSize subroutines to fix these errors. > Why some subroutines sue 8-bytes integer type of *ierr *(*PetscInt*), > while others - 4-bytes integer type of *ierr *(*PetscErrorCode*) remains > a mystery for me. > > > What version of PETSc are you using? > > version 3.12.2 > > > Are you seeing this issue with a PETSc example? > > I will check it tomorrow and let you know. > > Kind regards, > Dmitry Melnichuk > > > > 15.01.2020, 17:14, "Balay, Satish" : > > -fdefault-integer-8 is likely to break things [esp with MPI - where > 'integer' is used everywhere for ex - MPI_Comm etc - so MPI includes become > incompatible with the MPI library with -fdefault-integer-8.] > > And I'm not sure why you are having to use PetscInt for ierr. 
All PETSc > routines should be suing 'PetscErrorCode for ierr' > > What version of PETSc are you using? Are you seeing this issue with a > PETSc example? > > Satish > > On Wed, 15 Jan 2020, ??????? ????????? wrote: > > > Hello all! > At present time I need to compile solver called Defmod ( > https://bitbucket.org/stali/defmod/wiki/Home), which is written in > Fortran 95. > Defmod uses PETSc for solving linear algebra system. > Solver compilation with 32-bit version of PETSc does not cause any > problem. > But solver compilation with 64-bit version of PETSc produces an error > with size of ierr PETSc variable. > > 1. For example, consider the following statements written in Fortran: > > > PetscErrorCode :: ierr_m > PetscInt :: ierr > ... > ... > call VecDuplicate(Vec_U,Vec_Um,ierr) > call VecCopy(Vec_U,Vec_Um,ierr) > call VecGetLocalSize(Vec_U,j,ierr) > call VecGetOwnershipRange(Vec_U,j1,j2,ierr_m) > > > As can be seen first three subroutunes require ierr to be size > of INTEGER(8), while the last subroutine (VecGetOwnershipRange) > requires ierr to be size of INTEGER(4). > Using the same integer format gives an error: > > There is no specific subroutine for the generic ?vecgetownershiprange? at > (1) > > 2. Another example is: > > > call MatAssemblyBegin(Mat_K,Mat_Final_Assembly,ierr) > CHKERRA(ierr) > call MatAssemblyEnd(Mat_K,Mat_Final_Assembly,ierr) > > > I am not able to define an appropriate size if ierr in CHKERRA(ierr). If > I choose INTEGER(8), the error "Type mismatch in argument ?ierr? at (1); > passed INTEGER(8) to > INTEGER(4)" occurs. > If I define ierr as INTEGER(4), the error "Type mismatch in argument > ?ierr? at (1); passed INTEGER(4) to INTEGER(8)" appears. > > 3. If I change the sizes of ierr vaiables as error messages require, the > compilation completed successfully, but an error occurs when calculating > the RHS vector with > following message: > [0]PETSC ERROR: Out of range index value -4 cannot be negative > > > Command to configure 32-bit version of PETSc under Windows 10 using > Cygwin: > ./configure --with-cc=x86_64-w64-mingw32-gcc > --with-cxx=x86_64-w64-mingw32-g++ --with-fc=x86_64-w64-mingw32-gfortran > --download-fblaslapack > --with-mpi-include=/cygdrive/c/MPISDK/Include > --with-mpi-lib=/cygdrive/c/MPISDK/Lib/libmsmpi.a > --with-mpi-mpiexec=/cygdrive/c/MPI/Bin/mpiexec.exe --with-debugging=yes > -CFLAGS='-O2' -CXXFLAGS='-O2' -FFLAGS='-O2 -static-libgfortran -static > -lpthread -fno-range-check' --with-shared-libraries=no > Command to configure 64-bit version of PETSc under Windows 10 using > Cygwin:./configure --with-cc=x86_64-w64-mingw32-gcc > --with-cxx=x86_64-w64-mingw32-g++ > --with-fc=x86_64-w64-mingw32-gfortran --download-fblaslapack > --with-mpi-include=/cygdrive/c/MPISDK/Include > --with-mpi-lib=/cygdrive/c/MPISDK/Lib/libmsmpi.a > --with-mpi-mpiexec=/cygdrive/c/MPI/Bin/mpiexec.exe --with-debugging=yes > -CFLAGS='-O2' -CXXFLAGS='-O2' -FFLAGS='-O2 -static-libgfortran -static > -lpthread -fno-range-check > -fdefault-integer-8' --with-shared-libraries=no --with-64-bit-indices > --known-64-bit-blas-indices > > > Kind regards, > Dmitry Melnichuk > > > > -- What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. -- Norbert Wiener https://www.cse.buffalo.edu/~knepley/ -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From knepley at gmail.com Wed Jan 15 11:56:14 2020 From: knepley at gmail.com (Matthew Knepley) Date: Wed, 15 Jan 2020 12:56:14 -0500 Subject: [petsc-users] SNESSetOptionsPrefix usage In-Reply-To: References: Message-ID: I think that Mark is suggesting that no command line arguments are getting in. Timothee, Can you use any command line arguments? Thanks, Matt On Wed, Jan 15, 2020 at 12:04 PM Smith, Barry F. via petsc-users < petsc-users at mcs.anl.gov> wrote: > > Should still work. Run in the debugger and put a break point in > snessetoptionsprefix_ and see what it is trying to do > > Barry > > > > On Jan 15, 2020, at 8:58 AM, Timoth?e Nicolas < > timothee.nicolas at gmail.com> wrote: > > > > Hi, thanks for your answer, > > > > I'm using Petsc version 3.10.4 > > > > Timoth?e > > > > Le mer. 15 janv. 2020 ? 14:59, Mark Adams a ?crit : > > I'm guessing a Fortran issue. What version of PETSc are you using? > > > > On Wed, Jan 15, 2020 at 8:36 AM Timoth?e Nicolas < > timothee.nicolas at gmail.com> wrote: > > Dear PETSc users, > > > > I am confused by the usage of SNESSetOptionsPrefix. I understand this is > required if you have for example different SNES in your program and want to > set different options for them. > > So for my second snes I wrote > > > > call SNESCreate(MPI_COMM_SELF,snes,ierr) > > call SNESSetOptionsPrefix(snes,'green_',ierr) > > call SNESSetFromOptions(snes,ierr) > > > > etc. > > > > Then when launching the program I wanted to monitor that snes so I > launched it with the option -green_snes_monitor instead of -snes_monitor. > But I keep getting the message > > > > WARNING! There are options you set that were not used! > > WARNING! could be spelling mistake, etc! > > Option left: name:-green_snes_monitor (no value) > > > > What do I miss here? > > > > Best regards > > > > Timoth?e NICOLAS > > -- What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. -- Norbert Wiener https://www.cse.buffalo.edu/~knepley/ -------------- next part -------------- An HTML attachment was scrubbed... URL: From epscodes at gmail.com Wed Jan 15 12:48:40 2020 From: epscodes at gmail.com (Xiangdong) Date: Wed, 15 Jan 2020 13:48:40 -0500 Subject: [petsc-users] chowiluviennacl In-Reply-To: References: Message-ID: In the ViennaCL manual http://viennacl.sourceforge.net/doc/manual-algorithms.html It did expose two parameters: // configuration of preconditioner: viennacl::linalg::chow_patel_tag chow_patel_ilu_config; chow_patel_ilu_config.sweeps(3); // three nonlinear sweeps chow_patel_ilu_config.jacobi_iters(2); // two Jacobi iterations per triangular 'solve' Rx=r and mentioned that: The number of nonlinear sweeps and Jacobi iterations need to be set problem-specific for best performance. In the PETSc' implementation: viennacl::linalg::chow_patel_tag ilu_tag; ViennaCLAIJMatrix *mat = (ViennaCLAIJMatrix*)gpustruct->mat; ilu->CHOWILUVIENNACL = new viennacl::linalg::chow_patel_ilu_precond >(*mat, ilu_tag); The default is used. Is it possible to expose these two parameters so that user can change it through option keys? Thank you. Xiangdong On Wed, Jan 15, 2020 at 12:40 PM Matthew Knepley wrote: > On Wed, Jan 15, 2020 at 9:59 AM Xiangdong wrote: > >> Maybe I am not clear. I want to solve the block tridiagonal system Tx=b >> a few times with same T but different b. On CPU, I can have it by applying >> the ILU0 and reuse the factorization. 
Since it is block tridiagonal, ILU0 >> would give same results as LU. >> >> I am trying to do the same thing on GPU with chowiluviennacl, but found >> default factorization does not produce the exact factorization for >> tridiagonal system. Can we tight the drop off tolerance so that it can work >> as LU for tridiagonal system? >> > > There are no options in our implementation. You could look at the ViennaCL > manual to see if we missed something. > > Thanks, > > Matt > > >> Thank you. >> >> Xiangdong >> >> On Wed, Jan 15, 2020 at 9:41 AM Matthew Knepley >> wrote: >> >>> On Wed, Jan 15, 2020 at 9:36 AM Xiangdong wrote: >>> >>>> Can chowiluviennacl do ilu0? >>>> >>>> I need to solve a tri-diagonal system directly. If I apply the PCILU, I >>>> will obtain the exact solution with preonly + pcilu. However, the preonly + >>>> chowiluviennacl will not provide the exact solution. Any option keys to set >>>> the CHOWILUVIENNACL filling level or dropping off tolerance like the >>>> standard ilu? >>>> >>> >>> No. However, such a scheme makes less sense here. This algorithm spawns >>> a individual threads for individual elements. Drop tolerance >>> is not less work, it is sparser, but that should not matter for a >>> tridiagonal system. Levels also is not applicable since you have only 1 >>> level. >>> >>> Thanks, >>> >>> Matt >>> >>> >>>> Thank you. >>>> >>>> Best, >>>> Xiangdong >>>> >>>> >>>> >>>> On Tue, Jan 14, 2020 at 10:05 PM Matthew Knepley >>>> wrote: >>>> >>>>> On Tue, Jan 14, 2020 at 9:56 PM Xiangdong wrote: >>>>> >>>>>> Dear Developers, >>>>>> >>>>>> I have a quick question about the chowiluviennacl. When I tried to >>>>>> use it, I found that it only works for np=1, not np>1. However, in the >>>>>> description of chowiluviennacl.cxx, it says "the ViennaCL Chow-Patel >>>>>> parallel ILU preconditioner". >>>>>> >>>>> >>>>> By parallel, this means shared memory parallelism on the GPU. >>>>> >>>>> >>>>>> I am wondering whether I am using it correctly. Does chowiluviennacl >>>>>> work for np>1? >>>>>> >>>>> >>>>> I do not believe so. I do not see why it could not be extended, but >>>>> that would mean writing some more code. >>>>> >>>>> Thanks, >>>>> >>>>> Matt >>>>> >>>>> >>>>>> In addition, are there option keys for the chowiluviennacl one can >>>>>> try? >>>>>> Thank you. >>>>>> >>>>>> Best, >>>>>> Xiangdong >>>>>> >>>>> >>>>> >>>>> -- >>>>> What most experimenters take for granted before they begin their >>>>> experiments is infinitely more interesting than any results to which their >>>>> experiments lead. >>>>> -- Norbert Wiener >>>>> >>>>> https://www.cse.buffalo.edu/~knepley/ >>>>> >>>>> >>>> >>> >>> -- >>> What most experimenters take for granted before they begin their >>> experiments is infinitely more interesting than any results to which their >>> experiments lead. >>> -- Norbert Wiener >>> >>> https://www.cse.buffalo.edu/~knepley/ >>> >>> >> > > -- > What most experimenters take for granted before they begin their > experiments is infinitely more interesting than any results to which their > experiments lead. > -- Norbert Wiener > > https://www.cse.buffalo.edu/~knepley/ > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From mfadams at lbl.gov Wed Jan 15 13:12:10 2020 From: mfadams at lbl.gov (Mark Adams) Date: Wed, 15 Jan 2020 14:12:10 -0500 Subject: [petsc-users] chowiluviennacl In-Reply-To: References: Message-ID: It sounds like you just want a (MPI) parallel GPU exact solver. SuperLU will do that. 
On Wed, Jan 15, 2020 at 1:50 PM Xiangdong wrote: > In the ViennaCL manual > http://viennacl.sourceforge.net/doc/manual-algorithms.html > > It did expose two parameters: > > // configuration of preconditioner: > viennacl::linalg::chow_patel_tag chow_patel_ilu_config; > chow_patel_ilu_config.sweeps(3); // three nonlinear sweeps > chow_patel_ilu_config.jacobi_iters(2); // two Jacobi iterations per > triangular 'solve' Rx=r > > and mentioned that: > The number of nonlinear sweeps and Jacobi iterations need to be set > problem-specific for best performance. > > In the PETSc' implementation: > > viennacl::linalg::chow_patel_tag ilu_tag; > ViennaCLAIJMatrix *mat = (ViennaCLAIJMatrix*)gpustruct->mat; > ilu->CHOWILUVIENNACL = new > viennacl::linalg::chow_patel_ilu_precond > >(*mat, ilu_tag); > > The default is used. Is it possible to expose these two parameters so that > user can change it through option keys? > > Thank you. > > Xiangdong > > On Wed, Jan 15, 2020 at 12:40 PM Matthew Knepley > wrote: > >> On Wed, Jan 15, 2020 at 9:59 AM Xiangdong wrote: >> >>> Maybe I am not clear. I want to solve the block tridiagonal system Tx=b >>> a few times with same T but different b. On CPU, I can have it by applying >>> the ILU0 and reuse the factorization. Since it is block tridiagonal, ILU0 >>> would give same results as LU. >>> >>> I am trying to do the same thing on GPU with chowiluviennacl, but found >>> default factorization does not produce the exact factorization for >>> tridiagonal system. Can we tight the drop off tolerance so that it can work >>> as LU for tridiagonal system? >>> >> >> There are no options in our implementation. You could look at the >> ViennaCL manual to see if we missed something. >> >> Thanks, >> >> Matt >> >> >>> Thank you. >>> >>> Xiangdong >>> >>> On Wed, Jan 15, 2020 at 9:41 AM Matthew Knepley >>> wrote: >>> >>>> On Wed, Jan 15, 2020 at 9:36 AM Xiangdong wrote: >>>> >>>>> Can chowiluviennacl do ilu0? >>>>> >>>>> I need to solve a tri-diagonal system directly. If I apply the PCILU, >>>>> I will obtain the exact solution with preonly + pcilu. However, the >>>>> preonly + chowiluviennacl will not provide the exact solution. Any option >>>>> keys to set the CHOWILUVIENNACL filling level or dropping off tolerance >>>>> like the standard ilu? >>>>> >>>> >>>> No. However, such a scheme makes less sense here. This algorithm spawns >>>> a individual threads for individual elements. Drop tolerance >>>> is not less work, it is sparser, but that should not matter for a >>>> tridiagonal system. Levels also is not applicable since you have only 1 >>>> level. >>>> >>>> Thanks, >>>> >>>> Matt >>>> >>>> >>>>> Thank you. >>>>> >>>>> Best, >>>>> Xiangdong >>>>> >>>>> >>>>> >>>>> On Tue, Jan 14, 2020 at 10:05 PM Matthew Knepley >>>>> wrote: >>>>> >>>>>> On Tue, Jan 14, 2020 at 9:56 PM Xiangdong wrote: >>>>>> >>>>>>> Dear Developers, >>>>>>> >>>>>>> I have a quick question about the chowiluviennacl. When I tried to >>>>>>> use it, I found that it only works for np=1, not np>1. However, in the >>>>>>> description of chowiluviennacl.cxx, it says "the ViennaCL Chow-Patel >>>>>>> parallel ILU preconditioner". >>>>>>> >>>>>> >>>>>> By parallel, this means shared memory parallelism on the GPU. >>>>>> >>>>>> >>>>>>> I am wondering whether I am using it correctly. Does chowiluviennacl >>>>>>> work for np>1? >>>>>>> >>>>>> >>>>>> I do not believe so. I do not see why it could not be extended, but >>>>>> that would mean writing some more code. 
>>>>>> >>>>>> Thanks, >>>>>> >>>>>> Matt >>>>>> >>>>>> >>>>>>> In addition, are there option keys for the chowiluviennacl one can >>>>>>> try? >>>>>>> Thank you. >>>>>>> >>>>>>> Best, >>>>>>> Xiangdong >>>>>>> >>>>>> >>>>>> >>>>>> -- >>>>>> What most experimenters take for granted before they begin their >>>>>> experiments is infinitely more interesting than any results to which their >>>>>> experiments lead. >>>>>> -- Norbert Wiener >>>>>> >>>>>> https://www.cse.buffalo.edu/~knepley/ >>>>>> >>>>>> >>>>> >>>> >>>> -- >>>> What most experimenters take for granted before they begin their >>>> experiments is infinitely more interesting than any results to which their >>>> experiments lead. >>>> -- Norbert Wiener >>>> >>>> https://www.cse.buffalo.edu/~knepley/ >>>> >>>> >>> >> >> -- >> What most experimenters take for granted before they begin their >> experiments is infinitely more interesting than any results to which their >> experiments lead. >> -- Norbert Wiener >> >> https://www.cse.buffalo.edu/~knepley/ >> >> > -------------- next part -------------- An HTML attachment was scrubbed... URL: From balay at mcs.anl.gov Wed Jan 15 13:15:17 2020 From: balay at mcs.anl.gov (Balay, Satish) Date: Wed, 15 Jan 2020 19:15:17 +0000 Subject: [petsc-users] Solver compilation with 64-bit version of PETSc under Windows 10 using Cygwin In-Reply-To: References: <35432251579093203@vla3-307f9063dbf4.qloud-c.yandex.net> <42552671579101971@vla3-bebe75876e15.qloud-c.yandex.net> Message-ID: On Wed, 15 Jan 2020, Matthew Knepley wrote: > On Wed, Jan 15, 2020 at 10:26 AM ??????? ????????? < > dmitry.melnichuk at geosteertech.com> wrote: > > > > And I'm not sure why you are having to use PetscInt for ierr. All PETSc > > routines should be suing 'PetscErrorCode for ierr' > > > > If I define *ierr *as *PetscErrorCode *for all subroutines given below > > > > call VecDuplicate(Vec_U,Vec_Um,ierr) > > call VecCopy(Vec_U,Vec_Um,ierr) > > call VecGetLocalSize(Vec_U,j,ierr) > > call VecGetOwnershipRange(Vec_U,j1,j2,ierr) > > > > then errors occur with first three subroutines: > > *Error: Type mismatch in argument ?z? at (1); passed INTEGER(4) to > > INTEGER(8).* > > > > Barry, > > It looks like the ftn-auto interfaces are using 'integer' for the error > code, whereas the ftn-custom is using PetscErrorCode. > Could we make the generated ones use integer? Well it needs a fix to bfort. But then there are a bunch of other issues wrt MPI - its not clear [to me] how to fix [wrt -fdefault-integer-8] Satish > > Thanks, > > Matt > > > > Therefore I was forced to define *ierr *as *PetscInt *for VecDuplicate, > > VecCopy, VecGetLocalSize subroutines to fix these errors. > > Why some subroutines sue 8-bytes integer type of *ierr *(*PetscInt*), > > while others - 4-bytes integer type of *ierr *(*PetscErrorCode*) remains > > a mystery for me. > > > > > What version of PETSc are you using? > > > > version 3.12.2 > > > > > Are you seeing this issue with a PETSc example? > > > > I will check it tomorrow and let you know. > > > > Kind regards, > > Dmitry Melnichuk > > > > > > > > 15.01.2020, 17:14, "Balay, Satish" : > > > > -fdefault-integer-8 is likely to break things [esp with MPI - where > > 'integer' is used everywhere for ex - MPI_Comm etc - so MPI includes become > > incompatible with the MPI library with -fdefault-integer-8.] > > > > And I'm not sure why you are having to use PetscInt for ierr. All PETSc > > routines should be suing 'PetscErrorCode for ierr' > > > > What version of PETSc are you using? 
Are you seeing this issue with a > > PETSc example? > > > > Satish > > > > On Wed, 15 Jan 2020, ??????? ????????? wrote: > > > > > > Hello all! > > At present time I need to compile solver called Defmod ( > > https://bitbucket.org/stali/defmod/wiki/Home), which is written in > > Fortran 95. > > Defmod uses PETSc for solving linear algebra system. > > Solver compilation with 32-bit version of PETSc does not cause any > > problem. > > But solver compilation with 64-bit version of PETSc produces an error > > with size of ierr PETSc variable. > > > > 1. For example, consider the following statements written in Fortran: > > > > > > PetscErrorCode :: ierr_m > > PetscInt :: ierr > > ... > > ... > > call VecDuplicate(Vec_U,Vec_Um,ierr) > > call VecCopy(Vec_U,Vec_Um,ierr) > > call VecGetLocalSize(Vec_U,j,ierr) > > call VecGetOwnershipRange(Vec_U,j1,j2,ierr_m) > > > > > > As can be seen first three subroutunes require ierr to be size > > of INTEGER(8), while the last subroutine (VecGetOwnershipRange) > > requires ierr to be size of INTEGER(4). > > Using the same integer format gives an error: > > > > There is no specific subroutine for the generic ?vecgetownershiprange? at > > (1) > > > > 2. Another example is: > > > > > > call MatAssemblyBegin(Mat_K,Mat_Final_Assembly,ierr) > > CHKERRA(ierr) > > call MatAssemblyEnd(Mat_K,Mat_Final_Assembly,ierr) > > > > > > I am not able to define an appropriate size if ierr in CHKERRA(ierr). If > > I choose INTEGER(8), the error "Type mismatch in argument ?ierr? at (1); > > passed INTEGER(8) to > > INTEGER(4)" occurs. > > If I define ierr as INTEGER(4), the error "Type mismatch in argument > > ?ierr? at (1); passed INTEGER(4) to INTEGER(8)" appears. > > > > 3. If I change the sizes of ierr vaiables as error messages require, the > > compilation completed successfully, but an error occurs when calculating > > the RHS vector with > > following message: > > [0]PETSC ERROR: Out of range index value -4 cannot be negative > > > > > > Command to configure 32-bit version of PETSc under Windows 10 using > > Cygwin: > > ./configure --with-cc=x86_64-w64-mingw32-gcc > > --with-cxx=x86_64-w64-mingw32-g++ --with-fc=x86_64-w64-mingw32-gfortran > > --download-fblaslapack > > --with-mpi-include=/cygdrive/c/MPISDK/Include > > --with-mpi-lib=/cygdrive/c/MPISDK/Lib/libmsmpi.a > > --with-mpi-mpiexec=/cygdrive/c/MPI/Bin/mpiexec.exe --with-debugging=yes > > -CFLAGS='-O2' -CXXFLAGS='-O2' -FFLAGS='-O2 -static-libgfortran -static > > -lpthread -fno-range-check' --with-shared-libraries=no > > Command to configure 64-bit version of PETSc under Windows 10 using > > Cygwin:./configure --with-cc=x86_64-w64-mingw32-gcc > > --with-cxx=x86_64-w64-mingw32-g++ > > --with-fc=x86_64-w64-mingw32-gfortran --download-fblaslapack > > --with-mpi-include=/cygdrive/c/MPISDK/Include > > --with-mpi-lib=/cygdrive/c/MPISDK/Lib/libmsmpi.a > > --with-mpi-mpiexec=/cygdrive/c/MPI/Bin/mpiexec.exe --with-debugging=yes > > -CFLAGS='-O2' -CXXFLAGS='-O2' -FFLAGS='-O2 -static-libgfortran -static > > -lpthread -fno-range-check > > -fdefault-integer-8' --with-shared-libraries=no --with-64-bit-indices > > --known-64-bit-blas-indices > > > > > > Kind regards, > > Dmitry Melnichuk > > > > > > > > > > From knepley at gmail.com Wed Jan 15 13:20:39 2020 From: knepley at gmail.com (Matthew Knepley) Date: Wed, 15 Jan 2020 14:20:39 -0500 Subject: [petsc-users] Solver compilation with 64-bit version of PETSc under Windows 10 using Cygwin In-Reply-To: References: <35432251579093203@vla3-307f9063dbf4.qloud-c.yandex.net> 
<42552671579101971@vla3-bebe75876e15.qloud-c.yandex.net> Message-ID: On Wed, Jan 15, 2020 at 2:15 PM Balay, Satish wrote: > On Wed, 15 Jan 2020, Matthew Knepley wrote: > > > On Wed, Jan 15, 2020 at 10:26 AM ??????? ????????? < > > dmitry.melnichuk at geosteertech.com> wrote: > > > > > > And I'm not sure why you are having to use PetscInt for ierr. All > PETSc > > > routines should be suing 'PetscErrorCode for ierr' > > > > > > If I define *ierr *as *PetscErrorCode *for all subroutines given below > > > > > > call VecDuplicate(Vec_U,Vec_Um,ierr) > > > call VecCopy(Vec_U,Vec_Um,ierr) > > > call VecGetLocalSize(Vec_U,j,ierr) > > > call VecGetOwnershipRange(Vec_U,j1,j2,ierr) > > > > > > then errors occur with first three subroutines: > > > *Error: Type mismatch in argument ?z? at (1); passed INTEGER(4) to > > > INTEGER(8).* > > > > > > > Barry, > > > > It looks like the ftn-auto interfaces are using 'integer' for the error > > code, whereas the ftn-custom is using PetscErrorCode. > > Could we make the generated ones use integer? > > Well it needs a fix to bfort. But then there are a bunch of other issues > wrt MPI - its not clear [to me] how to fix [wrt -fdefault-integer-8] > Well, we could conversely just change the ftn-custom bindings to 'integer' for the error code. Thanks, Matt > Satish > > > > > Thanks, > > > > Matt > > > > > > > Therefore I was forced to define *ierr *as *PetscInt *for VecDuplicate, > > > VecCopy, VecGetLocalSize subroutines to fix these errors. > > > Why some subroutines sue 8-bytes integer type of *ierr *(*PetscInt*), > > > while others - 4-bytes integer type of *ierr *(*PetscErrorCode*) > remains > > > a mystery for me. > > > > > > > What version of PETSc are you using? > > > > > > version 3.12.2 > > > > > > > Are you seeing this issue with a PETSc example? > > > > > > I will check it tomorrow and let you know. > > > > > > Kind regards, > > > Dmitry Melnichuk > > > > > > > > > > > > 15.01.2020, 17:14, "Balay, Satish" : > > > > > > -fdefault-integer-8 is likely to break things [esp with MPI - where > > > 'integer' is used everywhere for ex - MPI_Comm etc - so MPI includes > become > > > incompatible with the MPI library with -fdefault-integer-8.] > > > > > > And I'm not sure why you are having to use PetscInt for ierr. All PETSc > > > routines should be suing 'PetscErrorCode for ierr' > > > > > > What version of PETSc are you using? Are you seeing this issue with a > > > PETSc example? > > > > > > Satish > > > > > > On Wed, 15 Jan 2020, ??????? ????????? wrote: > > > > > > > > > Hello all! > > > At present time I need to compile solver called Defmod ( > > > https://bitbucket.org/stali/defmod/wiki/Home), which is written in > > > Fortran 95. > > > Defmod uses PETSc for solving linear algebra system. > > > Solver compilation with 32-bit version of PETSc does not cause any > > > problem. > > > But solver compilation with 64-bit version of PETSc produces an error > > > with size of ierr PETSc variable. > > > > > > 1. For example, consider the following statements written in Fortran: > > > > > > > > > PetscErrorCode :: ierr_m > > > PetscInt :: ierr > > > ... > > > ... > > > call VecDuplicate(Vec_U,Vec_Um,ierr) > > > call VecCopy(Vec_U,Vec_Um,ierr) > > > call VecGetLocalSize(Vec_U,j,ierr) > > > call VecGetOwnershipRange(Vec_U,j1,j2,ierr_m) > > > > > > > > > As can be seen first three subroutunes require ierr to be size > > > of INTEGER(8), while the last subroutine (VecGetOwnershipRange) > > > requires ierr to be size of INTEGER(4). 
> > > Using the same integer format gives an error: > > > > > > There is no specific subroutine for the generic > ?vecgetownershiprange? at > > > (1) > > > > > > 2. Another example is: > > > > > > > > > call MatAssemblyBegin(Mat_K,Mat_Final_Assembly,ierr) > > > CHKERRA(ierr) > > > call MatAssemblyEnd(Mat_K,Mat_Final_Assembly,ierr) > > > > > > > > > I am not able to define an appropriate size if ierr in CHKERRA(ierr). > If > > > I choose INTEGER(8), the error "Type mismatch in argument ?ierr? at > (1); > > > passed INTEGER(8) to > > > INTEGER(4)" occurs. > > > If I define ierr as INTEGER(4), the error "Type mismatch in argument > > > ?ierr? at (1); passed INTEGER(4) to INTEGER(8)" appears. > > > > > > 3. If I change the sizes of ierr vaiables as error messages require, > the > > > compilation completed successfully, but an error occurs when > calculating > > > the RHS vector with > > > following message: > > > [0]PETSC ERROR: Out of range index value -4 cannot be negative > > > > > > > > > Command to configure 32-bit version of PETSc under Windows 10 using > > > Cygwin: > > > ./configure --with-cc=x86_64-w64-mingw32-gcc > > > --with-cxx=x86_64-w64-mingw32-g++ --with-fc=x86_64-w64-mingw32-gfortran > > > --download-fblaslapack > > > --with-mpi-include=/cygdrive/c/MPISDK/Include > > > --with-mpi-lib=/cygdrive/c/MPISDK/Lib/libmsmpi.a > > > --with-mpi-mpiexec=/cygdrive/c/MPI/Bin/mpiexec.exe --with-debugging=yes > > > -CFLAGS='-O2' -CXXFLAGS='-O2' -FFLAGS='-O2 -static-libgfortran -static > > > -lpthread -fno-range-check' --with-shared-libraries=no > > > Command to configure 64-bit version of PETSc under Windows 10 using > > > Cygwin:./configure --with-cc=x86_64-w64-mingw32-gcc > > > --with-cxx=x86_64-w64-mingw32-g++ > > > --with-fc=x86_64-w64-mingw32-gfortran --download-fblaslapack > > > --with-mpi-include=/cygdrive/c/MPISDK/Include > > > --with-mpi-lib=/cygdrive/c/MPISDK/Lib/libmsmpi.a > > > --with-mpi-mpiexec=/cygdrive/c/MPI/Bin/mpiexec.exe > --with-debugging=yes > > > -CFLAGS='-O2' -CXXFLAGS='-O2' -FFLAGS='-O2 -static-libgfortran -static > > > -lpthread -fno-range-check > > > -fdefault-integer-8' --with-shared-libraries=no --with-64-bit-indices > > > --known-64-bit-blas-indices > > > > > > > > > Kind regards, > > > Dmitry Melnichuk > > > > > > > > > > > > > > > > > -- What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. -- Norbert Wiener https://www.cse.buffalo.edu/~knepley/ -------------- next part -------------- An HTML attachment was scrubbed... URL: From knepley at gmail.com Wed Jan 15 13:22:38 2020 From: knepley at gmail.com (Matthew Knepley) Date: Wed, 15 Jan 2020 14:22:38 -0500 Subject: [petsc-users] chowiluviennacl In-Reply-To: References: Message-ID: On Wed, Jan 15, 2020 at 1:48 PM Xiangdong wrote: > In the ViennaCL manual > http://viennacl.sourceforge.net/doc/manual-algorithms.html > > It did expose two parameters: > > // configuration of preconditioner: > viennacl::linalg::chow_patel_tag chow_patel_ilu_config; > chow_patel_ilu_config.sweeps(3); // three nonlinear sweeps > chow_patel_ilu_config.jacobi_iters(2); // two Jacobi iterations per > triangular 'solve' Rx=r > > and mentioned that: > The number of nonlinear sweeps and Jacobi iterations need to be set > problem-specific for best performance. 
> > In the PETSc' implementation: > > viennacl::linalg::chow_patel_tag ilu_tag; > ViennaCLAIJMatrix *mat = (ViennaCLAIJMatrix*)gpustruct->mat; > ilu->CHOWILUVIENNACL = new > viennacl::linalg::chow_patel_ilu_precond > >(*mat, ilu_tag); > > The default is used. Is it possible to expose these two parameters so that > user can change it through option keys? > Yes. Do you mind making an issue for it? That way we can better keep track. https://gitlab.com/petsc/petsc/issues Thanks, Matt > Thank you. > > Xiangdong > > On Wed, Jan 15, 2020 at 12:40 PM Matthew Knepley > wrote: > >> On Wed, Jan 15, 2020 at 9:59 AM Xiangdong wrote: >> >>> Maybe I am not clear. I want to solve the block tridiagonal system Tx=b >>> a few times with same T but different b. On CPU, I can have it by applying >>> the ILU0 and reuse the factorization. Since it is block tridiagonal, ILU0 >>> would give same results as LU. >>> >>> I am trying to do the same thing on GPU with chowiluviennacl, but found >>> default factorization does not produce the exact factorization for >>> tridiagonal system. Can we tight the drop off tolerance so that it can work >>> as LU for tridiagonal system? >>> >> >> There are no options in our implementation. You could look at the >> ViennaCL manual to see if we missed something. >> >> Thanks, >> >> Matt >> >> >>> Thank you. >>> >>> Xiangdong >>> >>> On Wed, Jan 15, 2020 at 9:41 AM Matthew Knepley >>> wrote: >>> >>>> On Wed, Jan 15, 2020 at 9:36 AM Xiangdong wrote: >>>> >>>>> Can chowiluviennacl do ilu0? >>>>> >>>>> I need to solve a tri-diagonal system directly. If I apply the PCILU, >>>>> I will obtain the exact solution with preonly + pcilu. However, the >>>>> preonly + chowiluviennacl will not provide the exact solution. Any option >>>>> keys to set the CHOWILUVIENNACL filling level or dropping off tolerance >>>>> like the standard ilu? >>>>> >>>> >>>> No. However, such a scheme makes less sense here. This algorithm spawns >>>> a individual threads for individual elements. Drop tolerance >>>> is not less work, it is sparser, but that should not matter for a >>>> tridiagonal system. Levels also is not applicable since you have only 1 >>>> level. >>>> >>>> Thanks, >>>> >>>> Matt >>>> >>>> >>>>> Thank you. >>>>> >>>>> Best, >>>>> Xiangdong >>>>> >>>>> >>>>> >>>>> On Tue, Jan 14, 2020 at 10:05 PM Matthew Knepley >>>>> wrote: >>>>> >>>>>> On Tue, Jan 14, 2020 at 9:56 PM Xiangdong wrote: >>>>>> >>>>>>> Dear Developers, >>>>>>> >>>>>>> I have a quick question about the chowiluviennacl. When I tried to >>>>>>> use it, I found that it only works for np=1, not np>1. However, in the >>>>>>> description of chowiluviennacl.cxx, it says "the ViennaCL Chow-Patel >>>>>>> parallel ILU preconditioner". >>>>>>> >>>>>> >>>>>> By parallel, this means shared memory parallelism on the GPU. >>>>>> >>>>>> >>>>>>> I am wondering whether I am using it correctly. Does chowiluviennacl >>>>>>> work for np>1? >>>>>>> >>>>>> >>>>>> I do not believe so. I do not see why it could not be extended, but >>>>>> that would mean writing some more code. >>>>>> >>>>>> Thanks, >>>>>> >>>>>> Matt >>>>>> >>>>>> >>>>>>> In addition, are there option keys for the chowiluviennacl one can >>>>>>> try? >>>>>>> Thank you. >>>>>>> >>>>>>> Best, >>>>>>> Xiangdong >>>>>>> >>>>>> >>>>>> >>>>>> -- >>>>>> What most experimenters take for granted before they begin their >>>>>> experiments is infinitely more interesting than any results to which their >>>>>> experiments lead. 
>>>>>> -- Norbert Wiener >>>>>> >>>>>> https://www.cse.buffalo.edu/~knepley/ >>>>>> >>>>>> >>>>> >>>> >>>> -- >>>> What most experimenters take for granted before they begin their >>>> experiments is infinitely more interesting than any results to which their >>>> experiments lead. >>>> -- Norbert Wiener >>>> >>>> https://www.cse.buffalo.edu/~knepley/ >>>> >>>> >>> >> >> -- >> What most experimenters take for granted before they begin their >> experiments is infinitely more interesting than any results to which their >> experiments lead. >> -- Norbert Wiener >> >> https://www.cse.buffalo.edu/~knepley/ >> >> > -- What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. -- Norbert Wiener https://www.cse.buffalo.edu/~knepley/ -------------- next part -------------- An HTML attachment was scrubbed... URL: From bsmith at mcs.anl.gov Wed Jan 15 13:34:31 2020 From: bsmith at mcs.anl.gov (Smith, Barry F.) Date: Wed, 15 Jan 2020 19:34:31 +0000 Subject: [petsc-users] Solver compilation with 64-bit version of PETSc under Windows 10 using Cygwin In-Reply-To: References: <35432251579093203@vla3-307f9063dbf4.qloud-c.yandex.net> <42552671579101971@vla3-bebe75876e15.qloud-c.yandex.net> Message-ID: Working on it now; may be doable > On Jan 15, 2020, at 11:55 AM, Matthew Knepley wrote: > > On Wed, Jan 15, 2020 at 10:26 AM ??????? ????????? wrote: > > And I'm not sure why you are having to use PetscInt for ierr. All PETSc routines should be suing 'PetscErrorCode for ierr' > > If I define ierr as PetscErrorCode for all subroutines given below > > call VecDuplicate(Vec_U,Vec_Um,ierr) > call VecCopy(Vec_U,Vec_Um,ierr) > call VecGetLocalSize(Vec_U,j,ierr) > call VecGetOwnershipRange(Vec_U,j1,j2,ierr) > > then errors occur with first three subroutines: > Error: Type mismatch in argument ?z? at (1); passed INTEGER(4) to INTEGER(8). > > Barry, > > It looks like the ftn-auto interfaces are using 'integer' for the error code, whereas the ftn-custom is using PetscErrorCode. > Could we make the generated ones use integer? > > Thanks, > > Matt > > Therefore I was forced to define ierr as PetscInt for VecDuplicate, VecCopy, VecGetLocalSize subroutines to fix these errors. > Why some subroutines sue 8-bytes integer type of ierr (PetscInt), while others - 4-bytes integer type of ierr (PetscErrorCode) remains a mystery for me. > > > What version of PETSc are you using? > > version 3.12.2 > > > Are you seeing this issue with a PETSc example? > > I will check it tomorrow and let you know. > > Kind regards, > Dmitry Melnichuk > > > > 15.01.2020, 17:14, "Balay, Satish" : > -fdefault-integer-8 is likely to break things [esp with MPI - where 'integer' is used everywhere for ex - MPI_Comm etc - so MPI includes become incompatible with the MPI library with -fdefault-integer-8.] > > And I'm not sure why you are having to use PetscInt for ierr. All PETSc routines should be suing 'PetscErrorCode for ierr' > > What version of PETSc are you using? Are you seeing this issue with a PETSc example? > > Satish > > On Wed, 15 Jan 2020, ??????? ????????? wrote: > > > Hello all! > At present time I need to compile solver called Defmod (https://bitbucket.org/stali/defmod/wiki/Home), which is written in Fortran 95. > Defmod uses PETSc for solving linear algebra system. > Solver compilation with 32-bit version of PETSc does not cause any problem. 
> But solver compilation with 64-bit version of PETSc produces an error with size of ierr PETSc variable. > > 1. For example, consider the following statements written in Fortran: > > > PetscErrorCode :: ierr_m > PetscInt :: ierr > ... > ... > call VecDuplicate(Vec_U,Vec_Um,ierr) > call VecCopy(Vec_U,Vec_Um,ierr) > call VecGetLocalSize(Vec_U,j,ierr) > call VecGetOwnershipRange(Vec_U,j1,j2,ierr_m) > > > As can be seen first three subroutunes require ierr to be size of INTEGER(8), while the last subroutine (VecGetOwnershipRange) requires ierr to be size of INTEGER(4). > Using the same integer format gives an error: > > There is no specific subroutine for the generic ?vecgetownershiprange? at (1) > > 2. Another example is: > > > call MatAssemblyBegin(Mat_K,Mat_Final_Assembly,ierr) > CHKERRA(ierr) > call MatAssemblyEnd(Mat_K,Mat_Final_Assembly,ierr) > > > I am not able to define an appropriate size if ierr in CHKERRA(ierr). If I choose INTEGER(8), the error "Type mismatch in argument ?ierr? at (1); passed INTEGER(8) to > INTEGER(4)" occurs. > If I define ierr as INTEGER(4), the error "Type mismatch in argument ?ierr? at (1); passed INTEGER(4) to INTEGER(8)" appears. > > 3. If I change the sizes of ierr vaiables as error messages require, the compilation completed successfully, but an error occurs when calculating the RHS vector with > following message: > [0]PETSC ERROR: Out of range index value -4 cannot be negative > > > Command to configure 32-bit version of PETSc under Windows 10 using Cygwin: > ./configure --with-cc=x86_64-w64-mingw32-gcc --with-cxx=x86_64-w64-mingw32-g++ --with-fc=x86_64-w64-mingw32-gfortran --download-fblaslapack > --with-mpi-include=/cygdrive/c/MPISDK/Include --with-mpi-lib=/cygdrive/c/MPISDK/Lib/libmsmpi.a --with-mpi-mpiexec=/cygdrive/c/MPI/Bin/mpiexec.exe --with-debugging=yes > -CFLAGS='-O2' -CXXFLAGS='-O2' -FFLAGS='-O2 -static-libgfortran -static -lpthread -fno-range-check' --with-shared-libraries=no > Command to configure 64-bit version of PETSc under Windows 10 using Cygwin:./configure --with-cc=x86_64-w64-mingw32-gcc --with-cxx=x86_64-w64-mingw32-g++ > --with-fc=x86_64-w64-mingw32-gfortran --download-fblaslapack --with-mpi-include=/cygdrive/c/MPISDK/Include --with-mpi-lib=/cygdrive/c/MPISDK/Lib/libmsmpi.a > --with-mpi-mpiexec=/cygdrive/c/MPI/Bin/mpiexec.exe --with-debugging=yes -CFLAGS='-O2' -CXXFLAGS='-O2' -FFLAGS='-O2 -static-libgfortran -static -lpthread -fno-range-check > -fdefault-integer-8' --with-shared-libraries=no --with-64-bit-indices --known-64-bit-blas-indices > > > Kind regards, > Dmitry Melnichuk > > > > > -- > What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. > -- Norbert Wiener > > https://www.cse.buffalo.edu/~knepley/ From epscodes at gmail.com Wed Jan 15 13:39:05 2020 From: epscodes at gmail.com (Xiangdong) Date: Wed, 15 Jan 2020 14:39:05 -0500 Subject: [petsc-users] chowiluviennacl In-Reply-To: References: Message-ID: I just submitted the issue: https://gitlab.com/petsc/petsc/issues/535 What I really want is an exact Block Tri-diagonal solver on GPU. Since for block tridiagonal system, ILU0 would be the same as ILU. So I tried the chowiluviennacl. but I found that the default parameters does not produce the same ILU0 factorization as the CPU ones (PCILU). My guess is that if I increase the number of sweeps chow_patel_ilu_config.sweeps(3), it may give a better result. So the option Keys would be helpful. 
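For concreteness, a sketch of what the requested interface might look like on the PETSc side. This is hypothetical code: the PC_CHOWILUVIENNACL fields "sweeps" and "jacobi_iters" and the -pc_chowiluviennacl_* option names below are invented for illustration and do not exist in PETSc as of 3.12.

typedef struct {
  PetscInt sweeps;        /* nonlinear sweeps, mirroring chow_patel_tag::sweeps()        */
  PetscInt jacobi_iters;  /* Jacobi iterations, mirroring chow_patel_tag::jacobi_iters() */
  /* ... existing member holding the ViennaCL preconditioner object ... */
} PC_CHOWILUVIENNACL;

static PetscErrorCode PCSetFromOptions_CHOWILUVIENNACL(PetscOptionItems *PetscOptionsObject, PC pc)
{
  PC_CHOWILUVIENNACL *ilu = (PC_CHOWILUVIENNACL *)pc->data;
  PetscErrorCode      ierr;

  PetscFunctionBegin;
  ierr = PetscOptionsHead(PetscOptionsObject, "CHOWILUVIENNACL options");CHKERRQ(ierr);
  ierr = PetscOptionsInt("-pc_chowiluviennacl_sweeps", "Number of nonlinear sweeps", "", ilu->sweeps, &ilu->sweeps, NULL);CHKERRQ(ierr);
  ierr = PetscOptionsInt("-pc_chowiluviennacl_jacobi_iters", "Jacobi iterations per triangular solve", "", ilu->jacobi_iters, &ilu->jacobi_iters, NULL);CHKERRQ(ierr);
  ierr = PetscOptionsTail();CHKERRQ(ierr);
  PetscFunctionReturn(0);
}

In the setup routine the stored values would then replace the hard-wired defaults before the factorization is built, along the lines of the snippet quoted above:

  viennacl::linalg::chow_patel_tag ilu_tag;
  ilu_tag.sweeps(ilu->sweeps);
  ilu_tag.jacobi_iters(ilu->jacobi_iters);

With something like this in place, -pc_chowiluviennacl_sweeps 3 -pc_chowiluviennacl_jacobi_iters 2 on the command line would reproduce the ViennaCL manual's example settings.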
Since Mark mentioned the Superlu's GPU feature, can I use superlu or hypre's GPU functionality through PETSc? Thank you. Xiangdong On Wed, Jan 15, 2020 at 2:22 PM Matthew Knepley wrote: > On Wed, Jan 15, 2020 at 1:48 PM Xiangdong wrote: > >> In the ViennaCL manual >> http://viennacl.sourceforge.net/doc/manual-algorithms.html >> >> It did expose two parameters: >> >> // configuration of preconditioner: >> viennacl::linalg::chow_patel_tag chow_patel_ilu_config; >> chow_patel_ilu_config.sweeps(3); // three nonlinear sweeps >> chow_patel_ilu_config.jacobi_iters(2); // two Jacobi iterations per >> triangular 'solve' Rx=r >> >> and mentioned that: >> The number of nonlinear sweeps and Jacobi iterations need to be set >> problem-specific for best performance. >> >> In the PETSc' implementation: >> >> viennacl::linalg::chow_patel_tag ilu_tag; >> ViennaCLAIJMatrix *mat = (ViennaCLAIJMatrix*)gpustruct->mat; >> ilu->CHOWILUVIENNACL = new >> viennacl::linalg::chow_patel_ilu_precond >> >(*mat, ilu_tag); >> >> The default is used. Is it possible to expose these two parameters so >> that user can change it through option keys? >> > > Yes. Do you mind making an issue for it? That way we can better keep track. > > https://gitlab.com/petsc/petsc/issues > > Thanks, > > Matt > > >> Thank you. >> >> Xiangdong >> >> On Wed, Jan 15, 2020 at 12:40 PM Matthew Knepley >> wrote: >> >>> On Wed, Jan 15, 2020 at 9:59 AM Xiangdong wrote: >>> >>>> Maybe I am not clear. I want to solve the block tridiagonal system >>>> Tx=b a few times with same T but different b. On CPU, I can have it by >>>> applying the ILU0 and reuse the factorization. Since it is block >>>> tridiagonal, ILU0 would give same results as LU. >>>> >>>> I am trying to do the same thing on GPU with chowiluviennacl, but found >>>> default factorization does not produce the exact factorization for >>>> tridiagonal system. Can we tight the drop off tolerance so that it can work >>>> as LU for tridiagonal system? >>>> >>> >>> There are no options in our implementation. You could look at the >>> ViennaCL manual to see if we missed something. >>> >>> Thanks, >>> >>> Matt >>> >>> >>>> Thank you. >>>> >>>> Xiangdong >>>> >>>> On Wed, Jan 15, 2020 at 9:41 AM Matthew Knepley >>>> wrote: >>>> >>>>> On Wed, Jan 15, 2020 at 9:36 AM Xiangdong wrote: >>>>> >>>>>> Can chowiluviennacl do ilu0? >>>>>> >>>>>> I need to solve a tri-diagonal system directly. If I apply the PCILU, >>>>>> I will obtain the exact solution with preonly + pcilu. However, the >>>>>> preonly + chowiluviennacl will not provide the exact solution. Any option >>>>>> keys to set the CHOWILUVIENNACL filling level or dropping off tolerance >>>>>> like the standard ilu? >>>>>> >>>>> >>>>> No. However, such a scheme makes less sense here. This algorithm >>>>> spawns a individual threads for individual elements. Drop tolerance >>>>> is not less work, it is sparser, but that should not matter for a >>>>> tridiagonal system. Levels also is not applicable since you have only 1 >>>>> level. >>>>> >>>>> Thanks, >>>>> >>>>> Matt >>>>> >>>>> >>>>>> Thank you. >>>>>> >>>>>> Best, >>>>>> Xiangdong >>>>>> >>>>>> >>>>>> >>>>>> On Tue, Jan 14, 2020 at 10:05 PM Matthew Knepley >>>>>> wrote: >>>>>> >>>>>>> On Tue, Jan 14, 2020 at 9:56 PM Xiangdong >>>>>>> wrote: >>>>>>> >>>>>>>> Dear Developers, >>>>>>>> >>>>>>>> I have a quick question about the chowiluviennacl. When I tried to >>>>>>>> use it, I found that it only works for np=1, not np>1. 
However, in the >>>>>>>> description of chowiluviennacl.cxx, it says "the ViennaCL Chow-Patel >>>>>>>> parallel ILU preconditioner". >>>>>>>> >>>>>>> >>>>>>> By parallel, this means shared memory parallelism on the GPU. >>>>>>> >>>>>>> >>>>>>>> I am wondering whether I am using it correctly. >>>>>>>> Does chowiluviennacl work for np>1? >>>>>>>> >>>>>>> >>>>>>> I do not believe so. I do not see why it could not be extended, but >>>>>>> that would mean writing some more code. >>>>>>> >>>>>>> Thanks, >>>>>>> >>>>>>> Matt >>>>>>> >>>>>>> >>>>>>>> In addition, are there option keys for the chowiluviennacl one can >>>>>>>> try? >>>>>>>> Thank you. >>>>>>>> >>>>>>>> Best, >>>>>>>> Xiangdong >>>>>>>> >>>>>>> >>>>>>> >>>>>>> -- >>>>>>> What most experimenters take for granted before they begin their >>>>>>> experiments is infinitely more interesting than any results to which their >>>>>>> experiments lead. >>>>>>> -- Norbert Wiener >>>>>>> >>>>>>> https://www.cse.buffalo.edu/~knepley/ >>>>>>> >>>>>>> >>>>>> >>>>> >>>>> -- >>>>> What most experimenters take for granted before they begin their >>>>> experiments is infinitely more interesting than any results to which their >>>>> experiments lead. >>>>> -- Norbert Wiener >>>>> >>>>> https://www.cse.buffalo.edu/~knepley/ >>>>> >>>>> >>>> >>> >>> -- >>> What most experimenters take for granted before they begin their >>> experiments is infinitely more interesting than any results to which their >>> experiments lead. >>> -- Norbert Wiener >>> >>> https://www.cse.buffalo.edu/~knepley/ >>> >>> >> > > -- > What most experimenters take for granted before they begin their > experiments is infinitely more interesting than any results to which their > experiments lead. > -- Norbert Wiener > > https://www.cse.buffalo.edu/~knepley/ > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From gautam.bisht at pnnl.gov Wed Jan 15 14:47:25 2020 From: gautam.bisht at pnnl.gov (Bisht, Gautam) Date: Wed, 15 Jan 2020 20:47:25 +0000 Subject: [petsc-users] DMPlex: Mapping cells before and after partitioning In-Reply-To: <7C23ABBA-2F76-4EAB-9834-9391AD77E18B@pnnl.gov> References: <875zhkmf0z.fsf@jedbrown.org> <8736come4e.fsf@jedbrown.org> <9AB001AF-8857-446A-AE69-E8D6A25CB8FA@pnnl.gov> <7C23ABBA-2F76-4EAB-9834-9391AD77E18B@pnnl.gov> Message-ID: <8A7925AE-08F5-4F81-AAA5-B2FDC3D833B0@pnnl.gov> Hi Matt, I?m running into error while using DMPlexNaturalToGlobalBegin/End and am hoping you have some insights in what I?m doing incorrectly. I create a 2x2x2 grid and distribute it across processors (N=1,2). I create a natural and a global vector; and then call DMPlexNaturalToGlobalBegin/End. Here are the two issues: - When N = 1, PETSc complains about DMSetUseNatural() not being called before DMPlexDistribute(), which is certainly not the case. - For N=1 and 2, global vector doesn?t have valid entries. I?m not sure how to create the natural vector and have used DMCreateGlobalVector() to create the natural vector, which could be the issue. Attached is the sample code to reproduce the error and below is the screen output. >make ex_test ++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++ >$PETSC_DIR/$PETSC_ARCH/bin/mpiexec -np 1 ./ex_test Natural vector: Vec Object: 1 MPI processes type: seq 0. 1. 2. 3. 4. 5. 6. 7. 
[0]PETSC ERROR: --------------------- Error Message -------------------------------------------------------------- [0]PETSC ERROR: Object is in wrong state [0]PETSC ERROR: DM global to natural SF was not created. You must call DMSetUseNatural() before DMPlexDistribute(). [0]PETSC ERROR: See https://www.mcs.anl.gov/petsc/documentation/faq.html for trouble shooting. [0]PETSC ERROR: Petsc Development GIT revision: v3.12.2-537-g5f77d1e0e5 GIT Date: 2019-12-21 14:33:27 -0600 [0]PETSC ERROR: ./ex_test on a darwin-gcc8 named WE37411 by bish218 Wed Jan 15 12:34:03 2020 [0]PETSC ERROR: Configure options --with-blaslapack-lib=/System/Library/Frameworks/Accelerate.framework/Versions/Current/Accelerate --download-parmetis=yes --download-metis=yes --with-hdf5-dir=/opt/local --download-zlib --download-exodusii=yes --download-hdf5=yes --download-netcdf=yes --download-pnetcdf=yes --download-hypre=yes --download-mpich=yes --download-mumps=yes --download-scalapack=yes --with-cc=/opt/local/bin/gcc-mp-8 --with-cxx=/opt/local/bin/g++-mp-8 --with-fc=/opt/local/bin/gfortran-mp-8 --download-sowing=1 PETSC_ARCH=darwin-gcc8 [0]PETSC ERROR: #1 DMPlexNaturalToGlobalBegin() line 289 in /Users/bish218/projects/petsc/petsc_v3.12.2/src/dm/impls/plex/plexnatural.c Global vector: Vec Object: 1 MPI processes type: seq 0. 0. 0. 0. 0. 0. 0. 0. Information about the mesh: Rank = 0 local_id = 00; (0.250000, 0.250000, 0.250000); is_local = 1 local_id = 01; (0.750000, 0.250000, 0.250000); is_local = 1 local_id = 02; (0.250000, 0.750000, 0.250000); is_local = 1 local_id = 03; (0.750000, 0.750000, 0.250000); is_local = 1 local_id = 04; (0.250000, 0.250000, 0.750000); is_local = 1 local_id = 05; (0.750000, 0.250000, 0.750000); is_local = 1 local_id = 06; (0.250000, 0.750000, 0.750000); is_local = 1 local_id = 07; (0.750000, 0.750000, 0.750000); is_local = 1 ++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++ >$PETSC_DIR/$PETSC_ARCH/bin/mpiexec -np 2 ./ex_test Natural vector: Vec Object: 2 MPI processes type: mpi Process [0] 0. 1. 2. 3. Process [1] 4. 5. 6. 7. Global vector: Vec Object: 2 MPI processes type: mpi Process [0] 0. 0. 0. 0. Process [1] 0. 0. 0. 0. 
Information about the mesh: Rank = 0 local_id = 00; (0.250000, 0.750000, 0.250000); is_local = 1 local_id = 01; (0.750000, 0.750000, 0.250000); is_local = 1 local_id = 02; (0.250000, 0.750000, 0.750000); is_local = 1 local_id = 03; (0.750000, 0.750000, 0.750000); is_local = 1 local_id = 04; (0.250000, 0.250000, 0.250000); is_local = 0 local_id = 05; (0.750000, 0.250000, 0.250000); is_local = 0 local_id = 06; (0.250000, 0.250000, 0.750000); is_local = 0 local_id = 07; (0.750000, 0.250000, 0.750000); is_local = 0 Rank = 1 local_id = 00; (0.250000, 0.250000, 0.250000); is_local = 1 local_id = 01; (0.750000, 0.250000, 0.250000); is_local = 1 local_id = 02; (0.250000, 0.250000, 0.750000); is_local = 1 local_id = 03; (0.750000, 0.250000, 0.750000); is_local = 1 local_id = 04; (0.250000, 0.750000, 0.250000); is_local = 0 local_id = 05; (0.750000, 0.750000, 0.250000); is_local = 0 local_id = 06; (0.250000, 0.750000, 0.750000); is_local = 0 local_id = 07; (0.750000, 0.750000, 0.750000); is_local = 0 ++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++ -Gautam On Jan 9, 2020, at 4:57 PM, 'Bisht, Gautam' via tdycores-dev > wrote: On Jan 9, 2020, at 4:25 PM, Matthew Knepley > wrote: On Thu, Jan 9, 2020 at 1:35 PM 'Bisht, Gautam' via tdycores-dev > wrote: > On Jan 9, 2020, at 2:58 PM, Jed Brown > wrote: > > "'Bisht, Gautam' via tdycores-dev" > writes: > >>> Do you need to rely on the element number, or would coordinates (of a >>> centroid?) be sufficient for your purposes? >> >> I do need to rely on the element number. In my case, I have a mapping file that remaps data from one grid onto another grid. Though I?m currently creating a hexahedron mesh, in the future I would be reading in an unstructured grid from a file for which I cannot rely on coordinates. > > How does the mapping file work and how is it generated? In CESM/E3SM, the mapping file is used to map fluxes or states between grids of two components (e.g. land & atmosphere). The mapping method can be conservative, nearest neighbor, bilinear, etc. While CESM/E3SM uses ESMF_RegridWeightGen to generate the mapping file, I?m using by own MATLAB script to create the mapping file. I?m surprised that this is not an issue for other codes that are using DMPlex. E.g In PFLOTRAN, when a user creates a custom unstructured grid, they can specify material property for each grid cell. So, there should be a way to create a vectorscatter that will scatter material property read in the ?application?-order (i.e. order before calling DMPlexDistribute() ) to ghosted-order (i.e. order after calling DMPlexDistribute()). We did build something specific for this because some people wanted it. I wish I could purge this from all simulations. Its definitely destructive, but this is the way the world currently is. You want this: https://www.mcs.anl.gov/petsc/petsc-current/docs/manualpages/DMPLEX/DMPlexNaturalToGlobalBegin.html Perfect. Thanks. -Gautam Thanks, Matt > We can locate points and create interpolation with unstructured grids. > > -- > You received this message because you are subscribed to the Google Groups "tdycores-dev" group. > To unsubscribe from this group and stop receiving emails from it, send an email to tdycores-dev+unsubscribe at googlegroups.com. 
> To view this discussion on the web visit https://protect2.fireeye.com/v1/url?k=b265c01b-eed0fed4-b265ea0e-0cc47adc5e60-1707adbf1790c7e4&q=1&e=0962f8e1-9155-4d9c-abdf-2b6481141cd0&u=https%3A%2F%2Fgroups.google.com%2Fd%2Fmsgid%2Ftdycores-dev%2F8736come4e.fsf%2540jedbrown.org. -- You received this message because you are subscribed to the Google Groups "tdycores-dev" group. To unsubscribe from this group and stop receiving emails from it, send an email to tdycores-dev+unsubscribe at googlegroups.com. To view this discussion on the web visit https://groups.google.com/d/msgid/tdycores-dev/9AB001AF-8857-446A-AE69-E8D6A25CB8FA%40pnnl.gov. -- What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. -- Norbert Wiener https://www.cse.buffalo.edu/~knepley/ -- You received this message because you are subscribed to the Google Groups "tdycores-dev" group. To unsubscribe from this group and stop receiving emails from it, send an email to tdycores-dev+unsubscribe at googlegroups.com. To view this discussion on the web visit https://groups.google.com/d/msgid/tdycores-dev/CAMYG4Gm%3DSY%3DyDiYOdBm1j_KZO5NYhu80ZhbFTV23O%2Bv-zVvFnA%40mail.gmail.com. -- You received this message because you are subscribed to the Google Groups "tdycores-dev" group. To unsubscribe from this group and stop receiving emails from it, send an email to tdycores-dev+unsubscribe at googlegroups.com. To view this discussion on the web visit https://groups.google.com/d/msgid/tdycores-dev/7C23ABBA-2F76-4EAB-9834-9391AD77E18B%40pnnl.gov. -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: ex_test.c Type: application/octet-stream Size: 3806 bytes Desc: ex_test.c URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: makefile Type: application/octet-stream Size: 299 bytes Desc: makefile URL: From knepley at gmail.com Wed Jan 15 15:08:42 2020 From: knepley at gmail.com (Matthew Knepley) Date: Wed, 15 Jan 2020 16:08:42 -0500 Subject: [petsc-users] DMPlex: Mapping cells before and after partitioning In-Reply-To: <8A7925AE-08F5-4F81-AAA5-B2FDC3D833B0@pnnl.gov> References: <875zhkmf0z.fsf@jedbrown.org> <8736come4e.fsf@jedbrown.org> <9AB001AF-8857-446A-AE69-E8D6A25CB8FA@pnnl.gov> <7C23ABBA-2F76-4EAB-9834-9391AD77E18B@pnnl.gov> <8A7925AE-08F5-4F81-AAA5-B2FDC3D833B0@pnnl.gov> Message-ID: On Wed, Jan 15, 2020 at 3:47 PM 'Bisht, Gautam' via tdycores-dev < tdycores-dev at googlegroups.com> wrote: > Hi Matt, > > I?m running into error while using DMPlexNaturalToGlobalBegin/End and am > hoping you have some insights in what I?m doing incorrectly. I create a > 2x2x2 grid and distribute it across processors (N=1,2). I create a natural > and a global vector; and then call DMPlexNaturalToGlobalBegin/End. Here are > the two issues: > > - When N = 1, PETSc complains about DMSetUseNatural() not being called > before DMPlexDistribute(), which is certainly not the case. > - For N=1 and 2, global vector doesn?t have valid entries. > > I?m not sure how to create the natural vector and have used > DMCreateGlobalVector() to create the natural vector, which could be the > issue. > > Attached is the sample code to reproduce the error and below is the screen > output. > Cool. I will run it and figure out the problem. 
Thanks, Matt > >make ex_test > > > ++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++ > >$PETSC_DIR/$PETSC_ARCH/bin/mpiexec -np 1 ./ex_test > Natural vector: > > Vec Object: 1 MPI processes > type: seq > 0. > 1. > 2. > 3. > 4. > 5. > 6. > 7. > [0]PETSC ERROR: --------------------- Error Message > -------------------------------------------------------------- > [0]PETSC ERROR: Object is in wrong state > [0]PETSC ERROR: DM global to natural SF was not created. > You must call DMSetUseNatural() before DMPlexDistribute(). > > [0]PETSC ERROR: See https://www.mcs.anl.gov/petsc/documentation/faq.html > for trouble shooting. > [0]PETSC ERROR: Petsc Development GIT revision: v3.12.2-537-g5f77d1e0e5 > GIT Date: 2019-12-21 14:33:27 -0600 > [0]PETSC ERROR: ./ex_test on a darwin-gcc8 named WE37411 by bish218 Wed > Jan 15 12:34:03 2020 > [0]PETSC ERROR: Configure options > --with-blaslapack-lib=/System/Library/Frameworks/Accelerate.framework/Versions/Current/Accelerate > --download-parmetis=yes --download-metis=yes --with-hdf5-dir=/opt/local > --download-zlib --download-exodusii=yes --download-hdf5=yes > --download-netcdf=yes --download-pnetcdf=yes --download-hypre=yes > --download-mpich=yes --download-mumps=yes --download-scalapack=yes > --with-cc=/opt/local/bin/gcc-mp-8 --with-cxx=/opt/local/bin/g++-mp-8 > --with-fc=/opt/local/bin/gfortran-mp-8 --download-sowing=1 > PETSC_ARCH=darwin-gcc8 > [0]PETSC ERROR: #1 DMPlexNaturalToGlobalBegin() line 289 in > /Users/bish218/projects/petsc/petsc_v3.12.2/src/dm/impls/plex/plexnatural.c > > Global vector: > > Vec Object: 1 MPI processes > type: seq > 0. > 0. > 0. > 0. > 0. > 0. > 0. > 0. > > Information about the mesh: > > Rank = 0 > local_id = 00; (0.250000, 0.250000, 0.250000); is_local = 1 > local_id = 01; (0.750000, 0.250000, 0.250000); is_local = 1 > local_id = 02; (0.250000, 0.750000, 0.250000); is_local = 1 > local_id = 03; (0.750000, 0.750000, 0.250000); is_local = 1 > local_id = 04; (0.250000, 0.250000, 0.750000); is_local = 1 > local_id = 05; (0.750000, 0.250000, 0.750000); is_local = 1 > local_id = 06; (0.250000, 0.750000, 0.750000); is_local = 1 > local_id = 07; (0.750000, 0.750000, 0.750000); is_local = 1 > > > ++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++ > > >$PETSC_DIR/$PETSC_ARCH/bin/mpiexec -np 2 ./ex_test > Natural vector: > > Vec Object: 2 MPI processes > type: mpi > Process [0] > 0. > 1. > 2. > 3. > Process [1] > 4. > 5. > 6. > 7. > > Global vector: > > Vec Object: 2 MPI processes > type: mpi > Process [0] > 0. > 0. > 0. > 0. > Process [1] > 0. > 0. > 0. > 0. 
> > Information about the mesh: > > Rank = 0 > local_id = 00; (0.250000, 0.750000, 0.250000); is_local = 1 > local_id = 01; (0.750000, 0.750000, 0.250000); is_local = 1 > local_id = 02; (0.250000, 0.750000, 0.750000); is_local = 1 > local_id = 03; (0.750000, 0.750000, 0.750000); is_local = 1 > local_id = 04; (0.250000, 0.250000, 0.250000); is_local = 0 > local_id = 05; (0.750000, 0.250000, 0.250000); is_local = 0 > local_id = 06; (0.250000, 0.250000, 0.750000); is_local = 0 > local_id = 07; (0.750000, 0.250000, 0.750000); is_local = 0 > > Rank = 1 > local_id = 00; (0.250000, 0.250000, 0.250000); is_local = 1 > local_id = 01; (0.750000, 0.250000, 0.250000); is_local = 1 > local_id = 02; (0.250000, 0.250000, 0.750000); is_local = 1 > local_id = 03; (0.750000, 0.250000, 0.750000); is_local = 1 > local_id = 04; (0.250000, 0.750000, 0.250000); is_local = 0 > local_id = 05; (0.750000, 0.750000, 0.250000); is_local = 0 > local_id = 06; (0.250000, 0.750000, 0.750000); is_local = 0 > local_id = 07; (0.750000, 0.750000, 0.750000); is_local = 0 > > > ++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++ > > > -Gautam > > > > > On Jan 9, 2020, at 4:57 PM, 'Bisht, Gautam' via tdycores-dev < > tdycores-dev at googlegroups.com> wrote: > > > > On Jan 9, 2020, at 4:25 PM, Matthew Knepley wrote: > > On Thu, Jan 9, 2020 at 1:35 PM 'Bisht, Gautam' via tdycores-dev < > tdycores-dev at googlegroups.com> wrote: > > > > On Jan 9, 2020, at 2:58 PM, Jed Brown wrote: > > > > "'Bisht, Gautam' via tdycores-dev" > writes: > > > >>> Do you need to rely on the element number, or would coordinates (of a > >>> centroid?) be sufficient for your purposes? > >> > >> I do need to rely on the element number. In my case, I have a mapping > file that remaps data from one grid onto another grid. Though I?m currently > creating a hexahedron mesh, in the future I would be reading in an > unstructured grid from a file for which I cannot rely on coordinates. > > > > How does the mapping file work and how is it generated? > > In CESM/E3SM, the mapping file is used to map fluxes or states between > grids of two components (e.g. land & atmosphere). The mapping method can be > conservative, nearest neighbor, bilinear, etc. While CESM/E3SM uses > ESMF_RegridWeightGen to generate the mapping file, I?m using by own MATLAB > script to create the mapping file. > > I?m surprised that this is not an issue for other codes that are using > DMPlex. E.g In PFLOTRAN, when a user creates a custom unstructured grid, > they can specify material property for each grid cell. So, there should be > a way to create a vectorscatter that will scatter material property read in > the ?application?-order (i.e. order before calling DMPlexDistribute() ) to > ghosted-order (i.e. order after calling DMPlexDistribute()). > > > We did build something specific for this because some people wanted it. I > wish I could purge this from all simulations. Its > definitely destructive, but this is the way the world currently is. > > You want this: > > > https://www.mcs.anl.gov/petsc/petsc-current/docs/manualpages/DMPLEX/DMPlexNaturalToGlobalBegin.html > > > > Perfect. > > Thanks. > -Gautam > > > > Thanks, > > Matt > > > > We can locate points and create interpolation with unstructured grids. > > > > -- > > You received this message because you are subscribed to the Google > Groups "tdycores-dev" group. > > To unsubscribe from this group and stop receiving emails from it, send > an email to tdycores-dev+unsubscribe at googlegroups.com. 
> > To view this discussion on the web visit > https://protect2.fireeye.com/v1/url?k=b265c01b-eed0fed4-b265ea0e-0cc47adc5e60-1707adbf1790c7e4&q=1&e=0962f8e1-9155-4d9c-abdf-2b6481141cd0&u=https%3A%2F%2Fgroups.google.com%2Fd%2Fmsgid%2Ftdycores-dev%2F8736come4e.fsf%2540jedbrown.org > . -- What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. -- Norbert Wiener https://www.cse.buffalo.edu/~knepley/ From timothee.nicolas at gmail.com Wed Jan 15 16:24:31 2020 From: timothee.nicolas at gmail.com (Timothée Nicolas) Date: Wed, 15 Jan 2020 23:24:31 +0100 Subject: [petsc-users] SNESSetOptionsPrefix usage In-Reply-To: References: Message-ID: I can actually use some command line arguments. My command line arguments read -snes_mf -green_snes_monitor, and the first -snes_mf argument (for the main solver snes) is correctly taken into account. I will try what Barry suggested; I'll tell you if I find the reason. Best regards, thanks for your comments Timothée On Wed, Jan 15, 2020 at 6:56 PM Matthew Knepley wrote: > I think that Mark is suggesting that no command line arguments are getting > in. > > Timothee, > > Can you use any command line arguments? > > Thanks, > > Matt > > On Wed, Jan 15, 2020 at 12:04 PM Smith, Barry F. via petsc-users < > petsc-users at mcs.anl.gov> wrote: > >> >> Should still work.
Run in the debugger and put a break point in >> snessetoptionsprefix_ and see what it is trying to do >> >> Barry >> >> >> > On Jan 15, 2020, at 8:58 AM, Timothée Nicolas < >> timothee.nicolas at gmail.com> wrote: >> > >> > Hi, thanks for your answer, >> > >> > I'm using PETSc version 3.10.4 >> > >> > Timothée >> > >> > On Wed, Jan 15, 2020 at 2:59 PM Mark Adams wrote: >> > I'm guessing a Fortran issue. What version of PETSc are you using? >> > >> > On Wed, Jan 15, 2020 at 8:36 AM Timothée Nicolas < >> timothee.nicolas at gmail.com> wrote: >> > Dear PETSc users, >> > >> > I am confused by the usage of SNESSetOptionsPrefix. I understand this >> is required if you have for example different SNES in your program and want >> to set different options for them. >> > So for my second snes I wrote >> > >> > call SNESCreate(MPI_COMM_SELF,snes,ierr) >> > call SNESSetOptionsPrefix(snes,'green_',ierr) >> > call SNESSetFromOptions(snes,ierr) >> > >> > etc. >> > >> > Then when launching the program I wanted to monitor that snes so I >> launched it with the option -green_snes_monitor instead of -snes_monitor. >> But I keep getting the message >> > >> > WARNING! There are options you set that were not used! >> > WARNING! could be spelling mistake, etc! >> > Option left: name:-green_snes_monitor (no value) >> > >> > What do I miss here? >> > >> > Best regards >> > >> > Timothée NICOLAS >> >> > > -- > What most experimenters take for granted before they begin their > experiments is infinitely more interesting than any results to which their > experiments lead. > -- Norbert Wiener > > https://www.cse.buffalo.edu/~knepley/ From balay at mcs.anl.gov Wed Jan 15 16:26:44 2020 From: balay at mcs.anl.gov (Balay, Satish) Date: Wed, 15 Jan 2020 22:26:44 +0000 Subject: [petsc-users] Solver compilation with 64-bit version of PETSc under Windows 10 using Cygwin In-Reply-To: References: <35432251579093203@vla3-307f9063dbf4.qloud-c.yandex.net> <42552671579101971@vla3-bebe75876e15.qloud-c.yandex.net> Message-ID: I have some changes (incomplete) here - my hack to bfort.

diff --git a/src/bfort/bfort.c b/src/bfort/bfort.c
index 0efe900..31ff154 100644
--- a/src/bfort/bfort.c
+++ b/src/bfort/bfort.c
@@ -1654,7 +1654,7 @@ void PrintDefinition( FILE *fout, int is_function, char *name, int nstrings,
     /* Add a "decl/result(name) for functions */
     if (useFerr) {
-        OutputFortranToken( fout, 7, "integer" );
+        OutputFortranToken( fout, 7, "PetscErrorCode" );
         OutputFortranToken( fout, 1, errArgNameParm);
     } else if (is_function) {
         OutputFortranToken( fout, 7, ArgToFortran( rt->name ) );

And my changes to petsc are on branch balay/fix-ftn-i8/maint Satish On Wed, 15 Jan 2020, Smith, Barry F. via petsc-users wrote: > > Working on it now; may be doable > > > > > On Jan 15, 2020, at 11:55 AM, Matthew Knepley wrote: > > > > On Wed, Jan 15, 2020 at 10:26 AM Dmitry Melnichuk wrote: > > > And I'm not sure why you are having to use PetscInt for ierr. All PETSc routines should be using 'PetscErrorCode' for ierr > > > > If I define ierr as PetscErrorCode for all subroutines given below > > > > call VecDuplicate(Vec_U,Vec_Um,ierr) > > call VecCopy(Vec_U,Vec_Um,ierr) > > call VecGetLocalSize(Vec_U,j,ierr) > > call VecGetOwnershipRange(Vec_U,j1,j2,ierr) > > > > then errors occur with the first three subroutines: > > Error: Type mismatch in argument 'z' at (1); passed INTEGER(4) to INTEGER(8).
> > > > Barry, > > > > It looks like the ftn-auto interfaces are using 'integer' for the error code, whereas the ftn-custom is using PetscErrorCode. > > Could we make the generated ones use integer? > > > > Thanks, > > > > Matt > > > > Therefore I was forced to define ierr as PetscInt for VecDuplicate, VecCopy, VecGetLocalSize subroutines to fix these errors. > > Why some subroutines sue 8-bytes integer type of ierr (PetscInt), while others - 4-bytes integer type of ierr (PetscErrorCode) remains a mystery for me. > > > > > What version of PETSc are you using? > > > > version 3.12.2 > > > > > Are you seeing this issue with a PETSc example? > > > > I will check it tomorrow and let you know. > > > > Kind regards, > > Dmitry Melnichuk > > > > > > > > 15.01.2020, 17:14, "Balay, Satish" : > > -fdefault-integer-8 is likely to break things [esp with MPI - where 'integer' is used everywhere for ex - MPI_Comm etc - so MPI includes become incompatible with the MPI library with -fdefault-integer-8.] > > > > And I'm not sure why you are having to use PetscInt for ierr. All PETSc routines should be suing 'PetscErrorCode for ierr' > > > > What version of PETSc are you using? Are you seeing this issue with a PETSc example? > > > > Satish > > > > On Wed, 15 Jan 2020, ??????? ????????? wrote: > > > > > > Hello all! > > At present time I need to compile solver called Defmod (https://bitbucket.org/stali/defmod/wiki/Home), which is written in Fortran 95. > > Defmod uses PETSc for solving linear algebra system. > > Solver compilation with 32-bit version of PETSc does not cause any problem. > > But solver compilation with 64-bit version of PETSc produces an error with size of ierr PETSc variable. > > > > 1. For example, consider the following statements written in Fortran: > > > > > > PetscErrorCode :: ierr_m > > PetscInt :: ierr > > ... > > ... > > call VecDuplicate(Vec_U,Vec_Um,ierr) > > call VecCopy(Vec_U,Vec_Um,ierr) > > call VecGetLocalSize(Vec_U,j,ierr) > > call VecGetOwnershipRange(Vec_U,j1,j2,ierr_m) > > > > > > As can be seen first three subroutunes require ierr to be size of INTEGER(8), while the last subroutine (VecGetOwnershipRange) requires ierr to be size of INTEGER(4). > > Using the same integer format gives an error: > > > > There is no specific subroutine for the generic ?vecgetownershiprange? at (1) > > > > 2. Another example is: > > > > > > call MatAssemblyBegin(Mat_K,Mat_Final_Assembly,ierr) > > CHKERRA(ierr) > > call MatAssemblyEnd(Mat_K,Mat_Final_Assembly,ierr) > > > > > > I am not able to define an appropriate size if ierr in CHKERRA(ierr). If I choose INTEGER(8), the error "Type mismatch in argument ?ierr? at (1); passed INTEGER(8) to > > INTEGER(4)" occurs. > > If I define ierr as INTEGER(4), the error "Type mismatch in argument ?ierr? at (1); passed INTEGER(4) to INTEGER(8)" appears. > > > > 3. 
If I change the sizes of ierr vaiables as error messages require, the compilation completed successfully, but an error occurs when calculating the RHS vector with > > following message: > > [0]PETSC ERROR: Out of range index value -4 cannot be negative > > > > > > Command to configure 32-bit version of PETSc under Windows 10 using Cygwin: > > ./configure --with-cc=x86_64-w64-mingw32-gcc --with-cxx=x86_64-w64-mingw32-g++ --with-fc=x86_64-w64-mingw32-gfortran --download-fblaslapack > > --with-mpi-include=/cygdrive/c/MPISDK/Include --with-mpi-lib=/cygdrive/c/MPISDK/Lib/libmsmpi.a --with-mpi-mpiexec=/cygdrive/c/MPI/Bin/mpiexec.exe --with-debugging=yes > > -CFLAGS='-O2' -CXXFLAGS='-O2' -FFLAGS='-O2 -static-libgfortran -static -lpthread -fno-range-check' --with-shared-libraries=no > > Command to configure 64-bit version of PETSc under Windows 10 using Cygwin:./configure --with-cc=x86_64-w64-mingw32-gcc --with-cxx=x86_64-w64-mingw32-g++ > > --with-fc=x86_64-w64-mingw32-gfortran --download-fblaslapack --with-mpi-include=/cygdrive/c/MPISDK/Include --with-mpi-lib=/cygdrive/c/MPISDK/Lib/libmsmpi.a > > --with-mpi-mpiexec=/cygdrive/c/MPI/Bin/mpiexec.exe --with-debugging=yes -CFLAGS='-O2' -CXXFLAGS='-O2' -FFLAGS='-O2 -static-libgfortran -static -lpthread -fno-range-check > > -fdefault-integer-8' --with-shared-libraries=no --with-64-bit-indices --known-64-bit-blas-indices > > > > > > Kind regards, > > Dmitry Melnichuk > > > > > > > > > > -- > > What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. > > -- Norbert Wiener > > > > https://www.cse.buffalo.edu/~knepley/ > > From knepley at gmail.com Wed Jan 15 21:05:18 2020 From: knepley at gmail.com (Matthew Knepley) Date: Wed, 15 Jan 2020 22:05:18 -0500 Subject: [petsc-users] DMPlex: Mapping cells before and after partitioning In-Reply-To: References: <875zhkmf0z.fsf@jedbrown.org> <8736come4e.fsf@jedbrown.org> <9AB001AF-8857-446A-AE69-E8D6A25CB8FA@pnnl.gov> <7C23ABBA-2F76-4EAB-9834-9391AD77E18B@pnnl.gov> <8A7925AE-08F5-4F81-AAA5-B2FDC3D833B0@pnnl.gov> Message-ID: On Wed, Jan 15, 2020 at 4:08 PM Matthew Knepley wrote: > On Wed, Jan 15, 2020 at 3:47 PM 'Bisht, Gautam' via tdycores-dev < > tdycores-dev at googlegroups.com> wrote: > >> Hi Matt, >> >> I?m running into error while using DMPlexNaturalToGlobalBegin/End and am >> hoping you have some insights in what I?m doing incorrectly. I create a >> 2x2x2 grid and distribute it across processors (N=1,2). I create a natural >> and a global vector; and then call DMPlexNaturalToGlobalBegin/End. Here are >> the two issues: >> >> - When N = 1, PETSc complains about DMSetUseNatural() not being called >> before DMPlexDistribute(), which is certainly not the case. >> - For N=1 and 2, global vector doesn?t have valid entries. >> >> I?m not sure how to create the natural vector and have used >> DMCreateGlobalVector() to create the natural vector, which could be the >> issue. >> >> Attached is the sample code to reproduce the error and below is the >> screen output. >> > > Cool. I will run it and figure out the problem. > 1) There was bad error reporting there. I am putting the fix in a new branch. It did not check for being on one process. If you run with knepley/fix-dm-g2n-serial It will work correctly in serial. 2) The G2N needs a serial data layout to work, so you have to make a Section _before_ distributing. I need to put that in the docs. I have fixed your example to do this and attached it. 
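In outline, the required ordering is the following (a minimal sketch, not the attached ex_test.c itself; it assumes one scalar unknown per cell and the 3.12-era API such as DMSetLocalSection):

#include <petscdmplex.h>

/* DMSetUseNatural() and the PetscSection (data layout) must both be in
   place BEFORE DMPlexDistribute(), so the migration can record the
   natural (pre-distribution) ordering. */
PetscErrorCode SetupAndDistribute(DM *dm)
{
  DM             dmDist = NULL;
  PetscSection   s;
  PetscInt       cStart, cEnd, c;
  PetscErrorCode ierr;

  PetscFunctionBeginUser;
  ierr = DMSetUseNatural(*dm, PETSC_TRUE);CHKERRQ(ierr);
  /* layout: one dof on every cell */
  ierr = DMPlexGetHeightStratum(*dm, 0, &cStart, &cEnd);CHKERRQ(ierr);
  ierr = PetscSectionCreate(PetscObjectComm((PetscObject)*dm), &s);CHKERRQ(ierr);
  ierr = PetscSectionSetChart(s, cStart, cEnd);CHKERRQ(ierr);
  for (c = cStart; c < cEnd; ++c) {
    ierr = PetscSectionSetDof(s, c, 1);CHKERRQ(ierr);
  }
  ierr = PetscSectionSetUp(s);CHKERRQ(ierr);
  ierr = DMSetLocalSection(*dm, s);CHKERRQ(ierr);
  ierr = PetscSectionDestroy(&s);CHKERRQ(ierr);
  /* only now distribute; the natural SF is built during the migration */
  ierr = DMPlexDistribute(*dm, 0, NULL, &dmDist);CHKERRQ(ierr);
  if (dmDist) {
    ierr = DMDestroy(dm);CHKERRQ(ierr);
    *dm  = dmDist;
  }
  PetscFunctionReturn(0);
}

With the layout known before distribution, DMCreateGlobalVector() and DMPlexNaturalToGlobalBegin/End() then have everything they need, as the runs below show.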
I run it with master *:~/Downloads/tmp/Gautam$ /PETSc3/petsc/bin/mpiexec -n 1 ./ex_test -dm_plex_box_faces 2,2,2 -dm_view DM Object: 1 MPI processes type: plex DM_0x84000000_0 in 3 dimensions: 0-cells: 27 1-cells: 54 2-cells: 36 3-cells: 8 Labels: marker: 1 strata with value/size (1 (72)) Face Sets: 6 strata with value/size (6 (4), 5 (4), 3 (4), 4 (4), 1 (4), 2 (4)) depth: 4 strata with value/size (0 (27), 1 (54), 2 (36), 3 (8)) Field p: adjacency FVM++ Natural vector: Vec Object: 1 MPI processes type: seq 0. 1. 2. 3. 4. 5. 6. 7. Global vector: Vec Object: 1 MPI processes type: seq 0. 1. 2. 3. 4. 5. 6. 7. Information about the mesh: [0] cell = 00; (0.250000, 0.250000, 0.250000); is_local = 1 [0] cell = 01; (0.750000, 0.250000, 0.250000); is_local = 1 [0] cell = 02; (0.250000, 0.750000, 0.250000); is_local = 1 [0] cell = 03; (0.750000, 0.750000, 0.250000); is_local = 1 [0] cell = 04; (0.250000, 0.250000, 0.750000); is_local = 1 [0] cell = 05; (0.750000, 0.250000, 0.750000); is_local = 1 [0] cell = 06; (0.250000, 0.750000, 0.750000); is_local = 1 [0] cell = 07; (0.750000, 0.750000, 0.750000); is_local = 1 master *:~/Downloads/tmp/Gautam$ /PETSc3/petsc/bin/mpiexec -n 2 ./ex_test -dm_plex_box_faces 2,2,2 -dm_view DM Object: Parallel Mesh 2 MPI processes type: plex Parallel Mesh in 3 dimensions: 0-cells: 27 27 1-cells: 54 54 2-cells: 36 36 3-cells: 8 8 Labels: depth: 4 strata with value/size (0 (27), 1 (54), 2 (36), 3 (8)) marker: 1 strata with value/size (1 (72)) Face Sets: 6 strata with value/size (1 (4), 2 (4), 3 (4), 4 (4), 5 (4), 6 (4)) Field p: adjacency FVM++ Natural vector: Vec Object: 2 MPI processes type: mpi Process [0] 0. 1. 2. 3. Process [1] 4. 5. 6. 7. Global vector: Vec Object: 2 MPI processes type: mpi Process [0] 2. 3. 6. 7. Process [1] 0. 1. 4. 5. Information about the mesh: [0] cell = 00; (0.250000, 0.750000, 0.250000); is_local = 1 [0] cell = 01; (0.750000, 0.750000, 0.250000); is_local = 1 [0] cell = 02; (0.250000, 0.750000, 0.750000); is_local = 1 [0] cell = 03; (0.750000, 0.750000, 0.750000); is_local = 1 [0] cell = 04; (0.250000, 0.250000, 0.250000); is_local = 0 [0] cell = 05; (0.750000, 0.250000, 0.250000); is_local = 0 [0] cell = 06; (0.250000, 0.250000, 0.750000); is_local = 0 [0] cell = 07; (0.750000, 0.250000, 0.750000); is_local = 0 [1] cell = 00; (0.250000, 0.250000, 0.250000); is_local = 1 [1] cell = 01; (0.750000, 0.250000, 0.250000); is_local = 1 [1] cell = 02; (0.250000, 0.250000, 0.750000); is_local = 1 [1] cell = 03; (0.750000, 0.250000, 0.750000); is_local = 1 [1] cell = 04; (0.250000, 0.750000, 0.250000); is_local = 0 [1] cell = 05; (0.750000, 0.750000, 0.250000); is_local = 0 [1] cell = 06; (0.250000, 0.750000, 0.750000); is_local = 0 [1] cell = 07; (0.750000, 0.750000, 0.750000); is_local = 0 Thanks, Matt > Thanks, > > Matt > > >> >make ex_test >> >> >> ++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++ >> >$PETSC_DIR/$PETSC_ARCH/bin/mpiexec -np 1 ./ex_test >> Natural vector: >> >> Vec Object: 1 MPI processes >> type: seq >> 0. >> 1. >> 2. >> 3. >> 4. >> 5. >> 6. >> 7. >> [0]PETSC ERROR: --------------------- Error Message >> -------------------------------------------------------------- >> [0]PETSC ERROR: Object is in wrong state >> [0]PETSC ERROR: DM global to natural SF was not created. >> You must call DMSetUseNatural() before DMPlexDistribute(). >> >> [0]PETSC ERROR: See https://www.mcs.anl.gov/petsc/documentation/faq.html >> for trouble shooting. 
>> [0]PETSC ERROR: Petsc Development GIT revision: v3.12.2-537-g5f77d1e0e5 >> GIT Date: 2019-12-21 14:33:27 -0600 >> [0]PETSC ERROR: ./ex_test on a darwin-gcc8 named WE37411 by bish218 Wed >> Jan 15 12:34:03 2020 >> [0]PETSC ERROR: Configure options >> --with-blaslapack-lib=/System/Library/Frameworks/Accelerate.framework/Versions/Current/Accelerate >> --download-parmetis=yes --download-metis=yes --with-hdf5-dir=/opt/local >> --download-zlib --download-exodusii=yes --download-hdf5=yes >> --download-netcdf=yes --download-pnetcdf=yes --download-hypre=yes >> --download-mpich=yes --download-mumps=yes --download-scalapack=yes >> --with-cc=/opt/local/bin/gcc-mp-8 --with-cxx=/opt/local/bin/g++-mp-8 >> --with-fc=/opt/local/bin/gfortran-mp-8 --download-sowing=1 >> PETSC_ARCH=darwin-gcc8 >> [0]PETSC ERROR: #1 DMPlexNaturalToGlobalBegin() line 289 in >> /Users/bish218/projects/petsc/petsc_v3.12.2/src/dm/impls/plex/plexnatural.c >> >> Global vector: >> >> Vec Object: 1 MPI processes >> type: seq >> 0. >> 0. >> 0. >> 0. >> 0. >> 0. >> 0. >> 0. >> >> Information about the mesh: >> >> Rank = 0 >> local_id = 00; (0.250000, 0.250000, 0.250000); is_local = 1 >> local_id = 01; (0.750000, 0.250000, 0.250000); is_local = 1 >> local_id = 02; (0.250000, 0.750000, 0.250000); is_local = 1 >> local_id = 03; (0.750000, 0.750000, 0.250000); is_local = 1 >> local_id = 04; (0.250000, 0.250000, 0.750000); is_local = 1 >> local_id = 05; (0.750000, 0.250000, 0.750000); is_local = 1 >> local_id = 06; (0.250000, 0.750000, 0.750000); is_local = 1 >> local_id = 07; (0.750000, 0.750000, 0.750000); is_local = 1 >> >> >> ++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++ >> >> >$PETSC_DIR/$PETSC_ARCH/bin/mpiexec -np 2 ./ex_test >> Natural vector: >> >> Vec Object: 2 MPI processes >> type: mpi >> Process [0] >> 0. >> 1. >> 2. >> 3. >> Process [1] >> 4. >> 5. >> 6. >> 7. >> >> Global vector: >> >> Vec Object: 2 MPI processes >> type: mpi >> Process [0] >> 0. >> 0. >> 0. >> 0. >> Process [1] >> 0. >> 0. >> 0. >> 0. 
>> >> Information about the mesh: >> >> Rank = 0 >> local_id = 00; (0.250000, 0.750000, 0.250000); is_local = 1 >> local_id = 01; (0.750000, 0.750000, 0.250000); is_local = 1 >> local_id = 02; (0.250000, 0.750000, 0.750000); is_local = 1 >> local_id = 03; (0.750000, 0.750000, 0.750000); is_local = 1 >> local_id = 04; (0.250000, 0.250000, 0.250000); is_local = 0 >> local_id = 05; (0.750000, 0.250000, 0.250000); is_local = 0 >> local_id = 06; (0.250000, 0.250000, 0.750000); is_local = 0 >> local_id = 07; (0.750000, 0.250000, 0.750000); is_local = 0 >> >> Rank = 1 >> local_id = 00; (0.250000, 0.250000, 0.250000); is_local = 1 >> local_id = 01; (0.750000, 0.250000, 0.250000); is_local = 1 >> local_id = 02; (0.250000, 0.250000, 0.750000); is_local = 1 >> local_id = 03; (0.750000, 0.250000, 0.750000); is_local = 1 >> local_id = 04; (0.250000, 0.750000, 0.250000); is_local = 0 >> local_id = 05; (0.750000, 0.750000, 0.250000); is_local = 0 >> local_id = 06; (0.250000, 0.750000, 0.750000); is_local = 0 >> local_id = 07; (0.750000, 0.750000, 0.750000); is_local = 0 >> >> >> ++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++ >> >> >> -Gautam >> >> >> >> >> On Jan 9, 2020, at 4:57 PM, 'Bisht, Gautam' via tdycores-dev < >> tdycores-dev at googlegroups.com> wrote: >> >> >> >> On Jan 9, 2020, at 4:25 PM, Matthew Knepley wrote: >> >> On Thu, Jan 9, 2020 at 1:35 PM 'Bisht, Gautam' via tdycores-dev < >> tdycores-dev at googlegroups.com> wrote: >> >> >> > On Jan 9, 2020, at 2:58 PM, Jed Brown wrote: >> > >> > "'Bisht, Gautam' via tdycores-dev" >> writes: >> > >> >>> Do you need to rely on the element number, or would coordinates (of a >> >>> centroid?) be sufficient for your purposes? >> >> >> >> I do need to rely on the element number. In my case, I have a mapping >> file that remaps data from one grid onto another grid. Though I?m currently >> creating a hexahedron mesh, in the future I would be reading in an >> unstructured grid from a file for which I cannot rely on coordinates. >> > >> > How does the mapping file work and how is it generated? >> >> In CESM/E3SM, the mapping file is used to map fluxes or states between >> grids of two components (e.g. land & atmosphere). The mapping method can be >> conservative, nearest neighbor, bilinear, etc. While CESM/E3SM uses >> ESMF_RegridWeightGen to generate the mapping file, I?m using by own MATLAB >> script to create the mapping file. >> >> I?m surprised that this is not an issue for other codes that are using >> DMPlex. E.g In PFLOTRAN, when a user creates a custom unstructured grid, >> they can specify material property for each grid cell. So, there should be >> a way to create a vectorscatter that will scatter material property read in >> the ?application?-order (i.e. order before calling DMPlexDistribute() ) to >> ghosted-order (i.e. order after calling DMPlexDistribute()). >> >> >> We did build something specific for this because some people wanted it. I >> wish I could purge this from all simulations. Its >> definitely destructive, but this is the way the world currently is. >> >> You want this: >> >> >> https://www.mcs.anl.gov/petsc/petsc-current/docs/manualpages/DMPLEX/DMPlexNaturalToGlobalBegin.html >> >> >> >> Perfect. >> >> Thanks. >> -Gautam >> >> >> >> Thanks, >> >> Matt >> >> >> > We can locate points and create interpolation with unstructured grids. >> > >> > -- >> > You received this message because you are subscribed to the Google >> Groups "tdycores-dev" group. 
>> > To unsubscribe from this group and stop receiving emails from it, send
>> an email to tdycores-dev+unsubscribe at googlegroups.com.
>> > To view this discussion on the web visit
>> https://groups.google.com/d/msgid/tdycores-dev/8736come4e.fsf%40jedbrown.org
>> .
>>
>> --
>> What most experimenters take for granted before they begin their
>> experiments is infinitely more interesting than any results to which their
>> experiments lead.
>> -- Norbert Wiener
>>
>> https://www.cse.buffalo.edu/~knepley/
-------------- next part --------------
A non-text attachment was scrubbed...
Name: ex_test.c
Type: application/octet-stream
Size: 3579 bytes
Desc: not available
URL: 

From timothee.nicolas at gmail.com Thu Jan 16 03:18:08 2020
From: timothee.nicolas at gmail.com (=?UTF-8?Q?Timoth=C3=A9e_Nicolas?=)
Date: Thu, 16 Jan 2020 10:18:08 +0100
Subject: [petsc-users] SNESSetOptionsPrefix usage
In-Reply-To: 
References: 
Message-ID: 

Actually, for the main solver it works. I'm thinking, could it be due to
the fact that the second SNES instance is defined in a routine that is
called somewhere inside the FormFunction of the main SNES?
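(For reference, a minimal sketch of the nested-prefix pattern under
discussion, in Fortran; the calls mirror the fragments quoted later in this
thread, and the variable names are illustrative:)

    SNES :: inner_snes
    PetscErrorCode :: ierr

    ! inner solver, created inside the outer FormFunction
    call SNESCreate(MPI_COMM_SELF, inner_snes, ierr)
    ! every option name for this solver now carries the green_ prefix,
    ! e.g. -green_snes_monitor instead of -snes_monitor
    call SNESSetOptionsPrefix(inner_snes, 'green_', ierr)
    ! without this call the prefixed options are never read
    call SNESSetFromOptions(inner_snes, ierr)
    ! ... SNESSetFunction / SNESSolve / SNESDestroy as usual ...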
We are improving our boundary condition, which becomes quite complex, and we
have a small problem to solve, so I'm trying to handle it with a SNES. So the
two SNES are nested, in a sense.

Timothée

Le mer. 15 janv. 2020 à 23:24, Timothée Nicolas a écrit :

> I can actually use some command line arguments. My line arguments actually
> read
>
> -snes_mf -green_snes_monitor
>
> and the first -snes_mf argument (for the main solver snes) is correctly
> taken into account.
> I will try what Barry suggested, I'll tell you if I find the reason.
>
> Best regards, thanks for your comments
>
> Timothée
>
> Le mer. 15 janv. 2020 à 18:56, Matthew Knepley a écrit :
>
>> I think that Mark is suggesting that no command line arguments are
>> getting in.
>>
>> Timothee,
>>
>> Can you use any command line arguments?
>>
>> Thanks,
>>
>> Matt
>>
>> On Wed, Jan 15, 2020 at 12:04 PM Smith, Barry F. via petsc-users <
>> petsc-users at mcs.anl.gov> wrote:
>>
>>> Should still work. Run in the debugger and put a break point in
>>> snessetoptionsprefix_ and see what it is trying to do.
>>>
>>> Barry
>>>
>>> > On Jan 15, 2020, at 8:58 AM, Timothée Nicolas <
>>> timothee.nicolas at gmail.com> wrote:
>>> >
>>> > Hi, thanks for your answer,
>>> >
>>> > I'm using Petsc version 3.10.4
>>> >
>>> > Timothée
>>> >
>>> > Le mer. 15 janv. 2020 à 14:59, Mark Adams a écrit :
>>> > I'm guessing a Fortran issue. What version of PETSc are you using?
>>> >
>>> > On Wed, Jan 15, 2020 at 8:36 AM Timothée Nicolas <
>>> timothee.nicolas at gmail.com> wrote:
>>> > Dear PETSc users,
>>> >
>>> > I am confused by the usage of SNESSetOptionsPrefix. I understand this
>>> is required if you have for example different SNES in your program and want
>>> to set different options for them.
>>> > So for my second snes I wrote
>>> >
>>> > call SNESCreate(MPI_COMM_SELF,snes,ierr)
>>> > call SNESSetOptionsPrefix(snes,'green_',ierr)
>>> > call SNESSetFromOptions(snes,ierr)
>>> >
>>> > etc.
>>> >
>>> > Then when launching the program I wanted to monitor that snes so I
>>> launched it with the option -green_snes_monitor instead of -snes_monitor.
>>> But I keep getting the message
>>> >
>>> > WARNING! There are options you set that were not used!
>>> > WARNING! could be spelling mistake, etc!
>>> > Option left: name:-green_snes_monitor (no value)
>>> >
>>> > What am I missing here?
>>> >
>>> > Best regards
>>> >
>>> > Timothée NICOLAS
>>
>> --
>> What most experimenters take for granted before they begin their
>> experiments is infinitely more interesting than any results to which their
>> experiments lead.
>> -- Norbert Wiener
>>
>> https://www.cse.buffalo.edu/~knepley/

From jourdon_anthony at hotmail.fr Thu Jan 16 02:52:10 2020
From: jourdon_anthony at hotmail.fr (Anthony Jourdon)
Date: Thu, 16 Jan 2020 08:52:10 +0000
Subject: [petsc-users] DMDA Error
Message-ID: 

Dear Petsc developers,

I need assistance with an error.

I run a code that uses the DMDA related functions. I'm using petsc-3.8.4.

This code used to run very well on a supercomputer with the OS SLES11.
Petsc was built using an intel mpi 5.1.3.223 module and intel mkl version
2016.0.2.181. The code was running with no problem on 1024 and more mpi
ranks.

Recently, the OS of the computer has been updated to RHEL7.
I rebuilt Petsc using the newly available versions of intel mpi (2019U5) and
mkl (2019.0.5.281), which are the same versions for the compilers and mkl.
Since then I have run the exact same code on 8, 16, 24, 48, 512 and 1024 mpi
ranks. Up to 512 mpi ranks there is no problem, but at 1024 ranks an error
related to DMDA appeared. I have copied the first lines of the error stack
here and the full error stack is attached.

[534]PETSC ERROR: #1 PetscGatherMessageLengths() line 120 in /scratch2/dlp/appli_local/SCR/OROGEN/petsc3.8.4_MPI/petsc-3.8.4/src/sys/utils/mpimesg.c
[534]PETSC ERROR: #2 VecScatterCreate_PtoS() line 2288 in /scratch2/dlp/appli_local/SCR/OROGEN/petsc3.8.4_MPI/petsc-3.8.4/src/vec/vec/utils/vpscat.c
[534]PETSC ERROR: #3 VecScatterCreate() line 1462 in /scratch2/dlp/appli_local/SCR/OROGEN/petsc3.8.4_MPI/petsc-3.8.4/src/vec/vec/utils/vscat.c
[534]PETSC ERROR: #4 DMSetUp_DA_3D() line 1042 in /scratch2/dlp/appli_local/SCR/OROGEN/petsc3.8.4_MPI/petsc-3.8.4/src/dm/impls/da/da3.c
[534]PETSC ERROR: #5 DMSetUp_DA() line 25 in /scratch2/dlp/appli_local/SCR/OROGEN/petsc3.8.4_MPI/petsc-3.8.4/src/dm/impls/da/dareg.c
[534]PETSC ERROR: #6 DMSetUp() line 720 in /scratch2/dlp/appli_local/SCR/OROGEN/petsc3.8.4_MPI/petsc-3.8.4/src/dm/interface/dm.c

Thank you for your time,
Sincerely,

Anthony Jourdon
-------------- next part --------------
A non-text attachment was scrubbed...
Name: DMDAError_1024_O0.err
Type: application/octet-stream
Size: 1008621 bytes
Desc: DMDAError_1024_O0.err
URL: 

From knepley at gmail.com Thu Jan 16 07:36:31 2020
From: knepley at gmail.com (Matthew Knepley)
Date: Thu, 16 Jan 2020 08:36:31 -0500
Subject: [petsc-users] SNESSetOptionsPrefix usage
In-Reply-To: 
References: 
Message-ID: 

On Thu, Jan 16, 2020 at 4:18 AM Timothée Nicolas wrote:

> Actually, for the main solver it works. I'm thinking, could it be due to
> the fact that the second SNES instance is defined in a routine that is
> called somewhere inside the FormFunction of the main SNES? We are improving
> our boundary condition, which becomes quite complex, and we have a small
> problem to solve, so I'm trying to handle it with a SNES. So the two SNES
> are nested, in a sense.

As long as SNESSetFromOptions() is being called, it should function
properly. That is why we want to see it in the debugger.

Thanks,

Matt

> Timothée
>
> Le mer. 15 janv. 2020 à 23:24, Timothée Nicolas <
> timothee.nicolas at gmail.com> a écrit :
> ...

--
What most experimenters take for granted before they begin their
experiments is infinitely more interesting than any results to which their
experiments lead.
-- Norbert Wiener

https://www.cse.buffalo.edu/~knepley/

From bsmith at mcs.anl.gov Thu Jan 16 08:38:37 2020
From: bsmith at mcs.anl.gov (Smith, Barry F.)
Date: Thu, 16 Jan 2020 14:38:37 +0000
Subject: [petsc-users] SNESSetOptionsPrefix usage
In-Reply-To: 
References: 
Message-ID: <3DF2D586-B2B6-4FA7-9E26-D3455FBB21E9@mcs.anl.gov>

> On Jan 16, 2020, at 3:18 AM, Timothée Nicolas wrote:
>
> Actually, for the main solver it works. I'm thinking, could it be due to
> the fact that the second SNES instance is defined in a routine that is
> called somewhere inside the FormFunction of the main SNES? We are improving
> our boundary condition, which becomes quite complex, and we have a small
> problem to solve, so I'm trying to handle it with a SNES. So the two SNES
> are nested, in a sense.

This should be fine. We do this. Are you sure the inner SNES is actually
being called? Run with -help | grep green. Does it print a help message for
your green options?

Barry

> Timothée
>
> Le mer. 15 janv. 2020 à 23:24, Timothée Nicolas a écrit :
> ...

From bsmith at mcs.anl.gov Thu Jan 16 08:46:29 2020
From: bsmith at mcs.anl.gov (Smith, Barry F.)
Date: Thu, 16 Jan 2020 14:46:29 +0000
Subject: [petsc-users] DMDA Error
In-Reply-To: 
References: 
Message-ID: <24E6B347-4775-490F-B858-E6ABD9A0F781@anl.gov>

Are you increasing your problem size with the number of ranks, or is it the
same size problem? It could also be an out-of-memory issue. No error message
is printed, which is not standard; it should first print a message saying why
it failed. Are you sure all the libraries were rebuilt?

Run with -malloc_debug; it will go slow but check for memory corruption. If
possible, use valgrind.

Barry

> On Jan 16, 2020, at 2:52 AM, Anthony Jourdon wrote:
>
> Dear Petsc developers,
>
> I need assistance with an error.
>
> I run a code that uses the DMDA related functions. I'm using petsc-3.8.4.
> ...
> Since then I have run the exact same code on 8, 16, 24, 48, 512 and 1024
> mpi ranks. Up to 512 mpi ranks there is no problem, but at 1024 ranks an
> error related to DMDA appeared. I have copied the first lines of the error
> stack here and the full error stack is attached.
> ...
>
> Thank you for your time,
> Sincerely,
>
> Anthony Jourdon

From jczhang at mcs.anl.gov Thu Jan 16 09:49:57 2020
From: jczhang at mcs.anl.gov (Zhang, Junchao)
Date: Thu, 16 Jan 2020 15:49:57 +0000
Subject: [petsc-users] DMDA Error
In-Reply-To: 
References: 
Message-ID: 

It seems the problem is triggered by DMSetUp. You can write a small test
creating the DMDA with the same size as your code, to see if you can
reproduce the problem. If yes, it would be much easier for us to debug it.

--Junchao Zhang

On Thu, Jan 16, 2020 at 7:38 AM Anthony Jourdon wrote:

> Dear Petsc developers,
>
> I need assistance with an error. ...
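(A minimal hedged sketch of such a standalone test, in Fortran, following the
petsc-3.8 DMDA API; the global sizes M, N, P are placeholders to be filled
with the dimensions the failing run uses, everything else is left to PETSc's
defaults:)

    program test_dmda
#include <petsc/finclude/petscdmda.h>
      use petscdmda
      implicit none
      DM :: da
      PetscErrorCode :: ierr
      PetscInt, parameter :: M = 512, N = 512, P = 512  ! placeholder sizes
      PetscInt, parameter :: dof = 1, sw = 1            ! one field, width 1

      call PetscInitialize(PETSC_NULL_CHARACTER, ierr)
      call DMDACreate3d(PETSC_COMM_WORLD, &
           DM_BOUNDARY_NONE, DM_BOUNDARY_NONE, DM_BOUNDARY_NONE, &
           DMDA_STENCIL_BOX, M, N, P, &
           PETSC_DECIDE, PETSC_DECIDE, PETSC_DECIDE, dof, sw, &
           PETSC_NULL_INTEGER, PETSC_NULL_INTEGER, PETSC_NULL_INTEGER, &
           da, ierr)
      call DMSetUp(da, ierr)   ! the call that fails at 1024 ranks
      call DMDestroy(da, ierr)
      call PetscFinalize(ierr)
    end program test_dmda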
From epscodes at gmail.com Thu Jan 16 16:31:57 2020
From: epscodes at gmail.com (Xiangdong)
Date: Thu, 16 Jan 2020 17:31:57 -0500
Subject: [petsc-users] use superlu and hypre's gpu features through PETSc
Message-ID: 

Dear Developers,

From the online documentation, both superlu and hypre have some gpu
functionalities. Can we use these gpu features through PETSc's interface?

Thank you.

Best,
Xiangdong

From bsmith at mcs.anl.gov Thu Jan 16 20:44:29 2020
From: bsmith at mcs.anl.gov (Smith, Barry F.)
Date: Fri, 17 Jan 2020 02:44:29 +0000
Subject: [petsc-users] use superlu and hypre's gpu features through PETSc
In-Reply-To: 
References: 
Message-ID: <52F57EA5-0D30-42F1-8315-D5D86C48E1B7@anl.gov>

That is superlu_dist and hypre.

Yes, but both backends are rather primitive and will be a bit of a struggle
to use.

For superlu_dist you need to get the branch
barry/fix-superlu_dist-py-for-gpus and rebase it against master.

I only recommend trying them if you are adventuresome. Note that PETSc's
GAMG can also utilize the GPU.

Barry

> On Jan 16, 2020, at 4:31 PM, Xiangdong wrote:
>
> Dear Developers,
>
> From the online documentation, both superlu and hypre have some gpu
> functionalities. Can we use these gpu features through PETSc's interface?
>
> Thank you.
>
> Best,
> Xiangdong

From dmitry.melnichuk at geosteertech.com Fri Jan 17 02:40:02 2020
From: dmitry.melnichuk at geosteertech.com (=?utf-8?B?0JTQvNC40YLRgNC40Lkg0JzQtdC70YzQvdC40YfRg9C6?=)
Date: Fri, 17 Jan 2020 11:40:02 +0300
Subject: [petsc-users] Solver compilation with 64-bit version of PETSc under Windows 10 using Cygwin
In-Reply-To: 
References: <35432251579093203@vla3-307f9063dbf4.qloud-c.yandex.net> <42552671579101971@vla3-bebe75876e15.qloud-c.yandex.net>
Message-ID: <3531711579250402@vla4-fbefcb3b0074.qloud-c.yandex.net>

An HTML attachment was scrubbed...
URL: 

From lars.odsater at sintef.no Fri Jan 17 06:00:01 2020
From: lars.odsater at sintef.no (=?iso-8859-1?Q?Lars_Ods=E6ter?=)
Date: Fri, 17 Jan 2020 12:00:01 +0000
Subject: [petsc-users] Cross compilation of PETSc using MXE
Message-ID: 

Dear PETSc users,

First, thanks to the developers for your great effort with the PETSc
library, which I have benefited from several times.

I want to share with you that I recently was able to cross compile PETSc
with the MXE (mxe.cc) cross compiler, and then link it into an application
(also built with the mxe compiler) to produce an executable that I
successfully ran on my Windows computer. In doing this I realized that
there is very little documentation of this on the web, so maybe others
could benefit from my approach. It contains a few hacks that might be
solved more elegantly, but that is probably out of my range.

First, I installed the mxe cross compiler following the tutorial
(mxe.cc/#tutorial):

git clone https://github.com/mxe/mxe.git
make cc
make blas lapack
export PATH=~/code/mxe/usr/bin:$PATH

Then I compiled PETSc (3.11.3 tarball):

wget http://ftp.mcs.anl.gov/pub/petsc/release-snapshots/petsc-3.11.3.tar.gz
gunzip -c petsc-3.11.3.tar.gz | tar -xof -
cd petsc-3.11.3/
./configure PETSC_ARCH=arch-mxe-static \
  --with-mpi=0 --host=i686-w64-mingw32.static \
  --enable-static --disable-shared \
  --with-cc=i686-w64-mingw32.static-gcc \
  --with-cxx=i686-w64-mingw32.static-g++ \
  --with-fc=i686-w64-mingw32.static-gfortran \
  --with-ld=i686-w64-mingw32.static-ld \
  --with-ar=i686-w64-mingw32.static-ar \
  --with-pkg-config=i686-w64-mingw32.static-pkg-config \
  --with-batch --known-64-bit-blas-indices

Next, I did the reconfigure step that was explained in the output from the
call to configure:

* copy 'conftest-arch-mxe-static' to your Windows machine
* Rename it with extension '.exe'
* Run the application in Windows. This generates 'reconfigure-arch-mxe-static.py'
* Copy 'reconfigure-arch-mxe-static.py' back to the Linux machine
* Run the python script:

python reconfigure-arch-mxe-static.py

Now, 'make all' failed to compile, but I did two hacks to mitigate it:

1) In '~/code/petsc-3.11.3/arch-mxe-static/include/petscconf.h' include the
following lines:
#ifndef PETSC_HAVE_DIRECT_H
#define PETSC_HAVE_DIRECT_H 1
#endif

2) In ~/code/petsc-3.11.3/src/sys/error/fp.c, comment out line 405-406:
// elif defined PETSC_HAVE_XMMINTRIN_H
// _MM_SET_EXCEPTION_MASK(_MM_MASK_INEXACT | _MM_MASK_UNDERFLOW);

After this 'make all' ran successfully.
I was finally able to compile my code (linking PETSc) following step 5 of
the mxe tutorial: mxe.cc/#tutorial
Then, simply copy to Windows and double-click.

Best regards,

Lars Hov Odsæter

From knepley at gmail.com Fri Jan 17 06:34:13 2020
From: knepley at gmail.com (Matthew Knepley)
Date: Fri, 17 Jan 2020 07:34:13 -0500
Subject: [petsc-users] Cross compilation of PETSc using MXE
In-Reply-To: 
References: 
Message-ID: 

On Fri, Jan 17, 2020 at 7:00 AM Lars Odsæter wrote:

> Dear PETSc users,
>
> First, thanks to the developers for your great effort with the PETSc
> library, which I have benefited from several times.
> ...

Great! It sounds like we can make a couple of fixes that make this easier:

1) Name the batch executable .exe for this
2) Find direct.h correctly
3) Do not find xmmintrin.h

Can you send me your configure.log so I can see what went wrong on the
header location.

Thanks,

Matt

--
What most experimenters take for granted before they begin their
experiments is infinitely more interesting than any results to which their
experiments lead.
-- Norbert Wiener

https://www.cse.buffalo.edu/~knepley/

From mfadams at lbl.gov Fri Jan 17 07:37:47 2020
From: mfadams at lbl.gov (Mark Adams)
Date: Fri, 17 Jan 2020 08:37:47 -0500
Subject: [petsc-users] use superlu and hypre's gpu features through PETSc
In-Reply-To: <52F57EA5-0D30-42F1-8315-D5D86C48E1B7@anl.gov>
References: <52F57EA5-0D30-42F1-8315-D5D86C48E1B7@anl.gov>
Message-ID: 

Stefano ported hypre to SUMMIT to use CUDA in branch
stefanozampini/hypre-cuda-rebased. It was fragile and performance was poor.

On Thu, Jan 16, 2020 at 9:44 PM Smith, Barry F. via petsc-users <
petsc-users at mcs.anl.gov> wrote:

> That is superlu_dist and hypre.
> ...

From sam.guo at cd-adapco.com Fri Jan 17 12:47:00 2020
From: sam.guo at cd-adapco.com (Sam Guo)
Date: Fri, 17 Jan 2020 10:47:00 -0800
Subject: [petsc-users] checking max iteration reached
Message-ID: 

Dear PETSc dev team,

How to check if the max iterations have been reached? I notice there is
PETSC_ERR_NOT_CONVERGED but I am not sure if this error is issued for max
iterations reached or not.
If yes, how to tell if the max iterations have been reached, since this
error can be issued for many other reasons?
If not, should I check the number of iterations myself?

Thanks,
Sam
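(Matt's reply below points to KSPGetConvergedReason; a minimal sketch of
that check in Fortran, with the solve itself assumed to be set up elsewhere.
KSP_DIVERGED_ITS is the reason code for hitting the iteration limit:)

    KSPConvergedReason :: reason
    PetscErrorCode :: ierr

    call KSPSolve(ksp, b, x, ierr)
    call KSPGetConvergedReason(ksp, reason, ierr)
    ! negative values mean divergence; KSP_DIVERGED_ITS means the
    ! maximum number of iterations (-ksp_max_it) was reached
    if (reason .eq. KSP_DIVERGED_ITS) then
       ! handle the not-converged case here
    endif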
From knepley at gmail.com Fri Jan 17 13:09:51 2020
From: knepley at gmail.com (Matthew Knepley)
Date: Fri, 17 Jan 2020 14:09:51 -0500
Subject: [petsc-users] checking max iteration reached
In-Reply-To: 
References: 
Message-ID: 

You want
https://www.mcs.anl.gov/petsc/petsc-current/docs/manualpages/KSP/KSPGetConvergedReason.html

Thanks,

Matt

On Fri, Jan 17, 2020 at 1:48 PM Sam Guo wrote:

> Dear PETSc dev team,
> How to check if the max iterations have been reached? ...

--
What most experimenters take for granted before they begin their
experiments is infinitely more interesting than any results to which their
experiments lead.
-- Norbert Wiener

https://www.cse.buffalo.edu/~knepley/

From bsmith at mcs.anl.gov Fri Jan 17 15:55:05 2020
From: bsmith at mcs.anl.gov (Smith, Barry F.)
Date: Fri, 17 Jan 2020 21:55:05 +0000
Subject: [petsc-users] Solver compilation with 64-bit version of PETSc under Windows 10 using Cygwin
In-Reply-To: <3531711579250402@vla4-fbefcb3b0074.qloud-c.yandex.net>
References: <35432251579093203@vla3-307f9063dbf4.qloud-c.yandex.net> <42552671579101971@vla3-bebe75876e15.qloud-c.yandex.net> <3531711579250402@vla4-fbefcb3b0074.qloud-c.yandex.net>
Message-ID: <8938687F-DDFF-4141-89DA-E09428A9CD83@mcs.anl.gov>

I am working on it. It requires a large number of fixes; I hope to have it
working by tonight.

Barry

> On Jan 17, 2020, at 2:40 AM, Dmitry Melnichuk wrote:
>
> Thank you for your replies.
>
> Tried to configure the committed version of PETSc from Satish Balay's
> branch (balay/fix-ftn-i8/maint) and ran into the same error when running
> test example ex5f:
>
> call SNESCreate(PETSC_COMM_WORLD,snes,ierr)
>
> Error: Type mismatch in argument 'z' at (1); passed INTEGER(4) to INTEGER(8)
>
> At the moment, some subroutines (such as PetscInitialize, PetscFinalize,
> MatSetValue, VecSetValue) work with the correct size of the variable ierr
> defined as PetscErrorCode, and some do not.
> The following subroutines still require ierr to be of type INTEGER(8):
>
> VecGetSubVector, VecAssemblyBegin, VecAssemblyEnd, VecScatterBegin,
> VecScatterEnd, VecScatterDestroy, VecCreateMPI, VecDuplicate,
> VecZeroEntries, VecAYPX, VecWAXPY
> MatMult, MatDestroy, MatAssemblyBegin, MatAssemblyEnd, MatZeroEntries,
> MatCreateSubMatrix, MatScale, MatDiagonalSet, MatGetDiagonal, MatDuplicate,
> MatSetSizes, MatSetFromOptions
>
> Unfortunately, I'm not sure if this is the only issue that occurs when
> switching to the 64-bit version of PETSc.
> I can set the size of the variables ierr so that the solver compilation
> process completes successfully, but I get the following error when solving
> the linear algebra system with the KSPSolve subroutine:
>
> [0]PETSC ERROR: Caught signal number 11 SEGV: Segmentation Violation,
> probably memory access out of range
>
> Because the solver with the 32-bit version of PETSc works properly, I
> suppose that the cause of the errors (for the 64-bit version of PETSc) is
> the inappropriate size of the variables.
> I compiled PETSc with the flags with-64-bit-indices and
> -fdefault-integer-8.
> Also changed the size of MPI_Integer to MPI_Integer8:
> MPI_Bcast(npart,nnds,MPI_Integer8,0,MPI_Comm_World,ierr).
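(An aside on that last point: MPI itself does no integer promotion, so the
usual fix is to keep the MPI arguments 4-byte rather than promoting the
datatype. A minimal sketch using PETSc's Fortran kinds; MPI_INTEGER4 and all
names here are illustrative assumptions, not taken from this thread:)

    PetscErrorCode :: ierr          ! stays 4-byte, for all PETSc calls
    PetscInt       :: nnds          ! 8-byte under --with-64-bit-indices
    PetscMPIInt    :: count4, root4 ! 4-byte integers for MPI arguments
    integer(kind=4) :: npart4(100)  ! buffer actually described by MPI_INTEGER4

    count4 = int(nnds, kind=4)
    root4  = 0
    call MPI_Bcast(npart4, count4, MPI_INTEGER4, root4, PETSC_COMM_WORLD, ierr)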
> I am probably missing something else.
>
> Kind regards,
> Dmitry Melnichuk
>
> 16.01.2020, 01:26, "Balay, Satish" :
> I have some changes (incomplete) here -
>
> my hack to bfort.
>
> diff --git a/src/bfort/bfort.c b/src/bfort/bfort.c
> index 0efe900..31ff154 100644
> --- a/src/bfort/bfort.c
> +++ b/src/bfort/bfort.c
> @@ -1654,7 +1654,7 @@ void PrintDefinition( FILE *fout, int is_function, char *name, int nstrings,
>
> /* Add a "decl/result(name) for functions */
> if (useFerr) {
> - OutputFortranToken( fout, 7, "integer" );
> + OutputFortranToken( fout, 7, "PetscErrorCode" );
> OutputFortranToken( fout, 1, errArgNameParm);
> } else if (is_function) {
> OutputFortranToken( fout, 7, ArgToFortran( rt->name ) );
>
> And my changes to petsc are on branch balay/fix-ftn-i8/maint
>
> Satish
>
> On Wed, 15 Jan 2020, Smith, Barry F. via petsc-users wrote:
>
> > Working on it now; may be doable
> >
> > > On Jan 15, 2020, at 11:55 AM, Matthew Knepley wrote:
> >
> > On Wed, Jan 15, 2020 at 10:26 AM Dmitry Melnichuk wrote:
> > > And I'm not sure why you are having to use PetscInt for ierr. All
> > > PETSc routines should be using 'PetscErrorCode' for ierr.
> >
> > If I define ierr as PetscErrorCode for all subroutines given below
> >
> > call VecDuplicate(Vec_U,Vec_Um,ierr)
> > call VecCopy(Vec_U,Vec_Um,ierr)
> > call VecGetLocalSize(Vec_U,j,ierr)
> > call VecGetOwnershipRange(Vec_U,j1,j2,ierr)
> >
> > then errors occur with the first three subroutines:
> > Error: Type mismatch in argument 'z' at (1); passed INTEGER(4) to INTEGER(8).
> >
> > Barry,
> >
> > It looks like the ftn-auto interfaces are using 'integer' for the error
> > code, whereas the ftn-custom is using PetscErrorCode.
> > Could we make the generated ones use integer?
> >
> > Thanks,
> >
> > Matt
> >
> > Therefore I was forced to define ierr as PetscInt for the VecDuplicate,
> > VecCopy, VecGetLocalSize subroutines to fix these errors.
> > Why some subroutines use an 8-byte integer type for ierr (PetscInt),
> > while others use a 4-byte integer type (PetscErrorCode), remains a
> > mystery to me.
> >
> > > What version of PETSc are you using?
> >
> > version 3.12.2
> >
> > > Are you seeing this issue with a PETSc example?
> >
> > I will check it tomorrow and let you know.
> >
> > Kind regards,
> > Dmitry Melnichuk
> >
> > 15.01.2020, 17:14, "Balay, Satish" :
> > -fdefault-integer-8 is likely to break things [esp with MPI - where
> > 'integer' is used everywhere for ex - MPI_Comm etc - so MPI includes
> > become incompatible with the MPI library with -fdefault-integer-8.]
> >
> > And I'm not sure why you are having to use PetscInt for ierr. All PETSc
> > routines should be using 'PetscErrorCode' for ierr.
> >
> > What version of PETSc are you using? Are you seeing this issue with a
> > PETSc example?
> >
> > Satish
> >
> > On Wed, 15 Jan 2020, Dmitry Melnichuk wrote:
> >
> > Hello all!
> > At present time I need to compile the solver called Defmod
> > (https://bitbucket.org/stali/defmod/wiki/Home), which is written in
> > Fortran 95.
> > Defmod uses PETSc for solving the linear algebra system.
> > Solver compilation with the 32-bit version of PETSc does not cause any
> > problem.
> > But solver compilation with the 64-bit version of PETSc produces an
> > error with the size of the ierr PETSc variable.
> >
> > 1. For example, consider the following statements written in Fortran:
> >
> > PetscErrorCode :: ierr_m
> > PetscInt :: ierr
> > ...
> > ...
> > call VecDuplicate(Vec_U,Vec_Um,ierr)
> > call VecCopy(Vec_U,Vec_Um,ierr)
> > call VecGetLocalSize(Vec_U,j,ierr)
> > call VecGetOwnershipRange(Vec_U,j1,j2,ierr_m)
> >
> > As can be seen, the first three subroutines require ierr to be of size
> > INTEGER(8), while the last subroutine (VecGetOwnershipRange) requires
> > ierr to be of size INTEGER(4).
> > Using the same integer format gives an error:
> >
> > There is no specific subroutine for the generic 'vecgetownershiprange' at (1)
> >
> > 2. Another example is:
> >
> > call MatAssemblyBegin(Mat_K,Mat_Final_Assembly,ierr)
> > CHKERRA(ierr)
> > call MatAssemblyEnd(Mat_K,Mat_Final_Assembly,ierr)
> >
> > I am not able to define an appropriate size of ierr in CHKERRA(ierr).
> > If I choose INTEGER(8), the error "Type mismatch in argument 'ierr' at
> > (1); passed INTEGER(8) to INTEGER(4)" occurs.
> > If I define ierr as INTEGER(4), the error "Type mismatch in argument
> > 'ierr' at (1); passed INTEGER(4) to INTEGER(8)" appears.
> >
> > 3. If I change the sizes of the ierr variables as the error messages
> > require, the compilation completes successfully, but an error occurs
> > when calculating the RHS vector, with the following message:
> > [0]PETSC ERROR: Out of range index value -4 cannot be negative
> >
> > Command to configure the 32-bit version of PETSc under Windows 10 using Cygwin:
> > ./configure --with-cc=x86_64-w64-mingw32-gcc --with-cxx=x86_64-w64-mingw32-g++ --with-fc=x86_64-w64-mingw32-gfortran --download-fblaslapack --with-mpi-include=/cygdrive/c/MPISDK/Include --with-mpi-lib=/cygdrive/c/MPISDK/Lib/libmsmpi.a --with-mpi-mpiexec=/cygdrive/c/MPI/Bin/mpiexec.exe --with-debugging=yes -CFLAGS='-O2' -CXXFLAGS='-O2' -FFLAGS='-O2 -static-libgfortran -static -lpthread -fno-range-check' --with-shared-libraries=no
> >
> > Command to configure the 64-bit version of PETSc under Windows 10 using Cygwin:
> > ./configure --with-cc=x86_64-w64-mingw32-gcc --with-cxx=x86_64-w64-mingw32-g++ --with-fc=x86_64-w64-mingw32-gfortran --download-fblaslapack --with-mpi-include=/cygdrive/c/MPISDK/Include --with-mpi-lib=/cygdrive/c/MPISDK/Lib/libmsmpi.a --with-mpi-mpiexec=/cygdrive/c/MPI/Bin/mpiexec.exe --with-debugging=yes -CFLAGS='-O2' -CXXFLAGS='-O2' -FFLAGS='-O2 -static-libgfortran -static -lpthread -fno-range-check -fdefault-integer-8' --with-shared-libraries=no --with-64-bit-indices --known-64-bit-blas-indices
> >
> > Kind regards,
> > Dmitry Melnichuk

From gautam.bisht at pnnl.gov Sat Jan 18 13:19:27 2020
From: gautam.bisht at pnnl.gov (Bisht, Gautam)
Date: Sat, 18 Jan 2020 19:19:27 +0000
Subject: [petsc-users] DMPlex: Mapping cells before and after partitioning
In-Reply-To: 
References: <875zhkmf0z.fsf@jedbrown.org> <8736come4e.fsf@jedbrown.org> <9AB001AF-8857-446A-AE69-E8D6A25CB8FA@pnnl.gov> <7C23ABBA-2F76-4EAB-9834-9391AD77E18B@pnnl.gov> <8A7925AE-08F5-4F81-AAA5-B2FDC3D833B0@pnnl.gov>
Message-ID: <3F79926B-D567-4592-8E4E-46D21628D2DF@pnnl.gov>

Hi Matt,

Thanks for the fixes to the example.
-Gautam On Jan 15, 2020, at 7:05 PM, Matthew Knepley > wrote: On Wed, Jan 15, 2020 at 4:08 PM Matthew Knepley > wrote: On Wed, Jan 15, 2020 at 3:47 PM 'Bisht, Gautam' via tdycores-dev > wrote: Hi Matt, I?m running into error while using DMPlexNaturalToGlobalBegin/End and am hoping you have some insights in what I?m doing incorrectly. I create a 2x2x2 grid and distribute it across processors (N=1,2). I create a natural and a global vector; and then call DMPlexNaturalToGlobalBegin/End. Here are the two issues: - When N = 1, PETSc complains about DMSetUseNatural() not being called before DMPlexDistribute(), which is certainly not the case. - For N=1 and 2, global vector doesn?t have valid entries. I?m not sure how to create the natural vector and have used DMCreateGlobalVector() to create the natural vector, which could be the issue. Attached is the sample code to reproduce the error and below is the screen output. Cool. I will run it and figure out the problem. 1) There was bad error reporting there. I am putting the fix in a new branch. It did not check for being on one process. If you run with knepley/fix-dm-g2n-serial It will work correctly in serial. 2) The G2N needs a serial data layout to work, so you have to make a Section _before_ distributing. I need to put that in the docs. I have fixed your example to do this and attached it. I run it with master *:~/Downloads/tmp/Gautam$ /PETSc3/petsc/bin/mpiexec -n 1 ./ex_test -dm_plex_box_faces 2,2,2 -dm_view DM Object: 1 MPI processes type: plex DM_0x84000000_0 in 3 dimensions: 0-cells: 27 1-cells: 54 2-cells: 36 3-cells: 8 Labels: marker: 1 strata with value/size (1 (72)) Face Sets: 6 strata with value/size (6 (4), 5 (4), 3 (4), 4 (4), 1 (4), 2 (4)) depth: 4 strata with value/size (0 (27), 1 (54), 2 (36), 3 (8)) Field p: adjacency FVM++ Natural vector: Vec Object: 1 MPI processes type: seq 0. 1. 2. 3. 4. 5. 6. 7. Global vector: Vec Object: 1 MPI processes type: seq 0. 1. 2. 3. 4. 5. 6. 7. Information about the mesh: [0] cell = 00; (0.250000, 0.250000, 0.250000); is_local = 1 [0] cell = 01; (0.750000, 0.250000, 0.250000); is_local = 1 [0] cell = 02; (0.250000, 0.750000, 0.250000); is_local = 1 [0] cell = 03; (0.750000, 0.750000, 0.250000); is_local = 1 [0] cell = 04; (0.250000, 0.250000, 0.750000); is_local = 1 [0] cell = 05; (0.750000, 0.250000, 0.750000); is_local = 1 [0] cell = 06; (0.250000, 0.750000, 0.750000); is_local = 1 [0] cell = 07; (0.750000, 0.750000, 0.750000); is_local = 1 master *:~/Downloads/tmp/Gautam$ /PETSc3/petsc/bin/mpiexec -n 2 ./ex_test -dm_plex_box_faces 2,2,2 -dm_view DM Object: Parallel Mesh 2 MPI processes type: plex Parallel Mesh in 3 dimensions: 0-cells: 27 27 1-cells: 54 54 2-cells: 36 36 3-cells: 8 8 Labels: depth: 4 strata with value/size (0 (27), 1 (54), 2 (36), 3 (8)) marker: 1 strata with value/size (1 (72)) Face Sets: 6 strata with value/size (1 (4), 2 (4), 3 (4), 4 (4), 5 (4), 6 (4)) Field p: adjacency FVM++ Natural vector: Vec Object: 2 MPI processes type: mpi Process [0] 0. 1. 2. 3. Process [1] 4. 5. 6. 7. Global vector: Vec Object: 2 MPI processes type: mpi Process [0] 2. 3. 6. 7. Process [1] 0. 1. 4. 5. 
Information about the mesh: [0] cell = 00; (0.250000, 0.750000, 0.250000); is_local = 1 [0] cell = 01; (0.750000, 0.750000, 0.250000); is_local = 1 [0] cell = 02; (0.250000, 0.750000, 0.750000); is_local = 1 [0] cell = 03; (0.750000, 0.750000, 0.750000); is_local = 1 [0] cell = 04; (0.250000, 0.250000, 0.250000); is_local = 0 [0] cell = 05; (0.750000, 0.250000, 0.250000); is_local = 0 [0] cell = 06; (0.250000, 0.250000, 0.750000); is_local = 0 [0] cell = 07; (0.750000, 0.250000, 0.750000); is_local = 0 [1] cell = 00; (0.250000, 0.250000, 0.250000); is_local = 1 [1] cell = 01; (0.750000, 0.250000, 0.250000); is_local = 1 [1] cell = 02; (0.250000, 0.250000, 0.750000); is_local = 1 [1] cell = 03; (0.750000, 0.250000, 0.750000); is_local = 1 [1] cell = 04; (0.250000, 0.750000, 0.250000); is_local = 0 [1] cell = 05; (0.750000, 0.750000, 0.250000); is_local = 0 [1] cell = 06; (0.250000, 0.750000, 0.750000); is_local = 0 [1] cell = 07; (0.750000, 0.750000, 0.750000); is_local = 0 Thanks, Matt Thanks, Matt >make ex_test ++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++ >$PETSC_DIR/$PETSC_ARCH/bin/mpiexec -np 1 ./ex_test Natural vector: Vec Object: 1 MPI processes type: seq 0. 1. 2. 3. 4. 5. 6. 7. [0]PETSC ERROR: --------------------- Error Message -------------------------------------------------------------- [0]PETSC ERROR: Object is in wrong state [0]PETSC ERROR: DM global to natural SF was not created. You must call DMSetUseNatural() before DMPlexDistribute(). [0]PETSC ERROR: See https://www.mcs.anl.gov/petsc/documentation/faq.html for trouble shooting. [0]PETSC ERROR: Petsc Development GIT revision: v3.12.2-537-g5f77d1e0e5 GIT Date: 2019-12-21 14:33:27 -0600 [0]PETSC ERROR: ./ex_test on a darwin-gcc8 named WE37411 by bish218 Wed Jan 15 12:34:03 2020 [0]PETSC ERROR: Configure options --with-blaslapack-lib=/System/Library/Frameworks/Accelerate.framework/Versions/Current/Accelerate --download-parmetis=yes --download-metis=yes --with-hdf5-dir=/opt/local --download-zlib --download-exodusii=yes --download-hdf5=yes --download-netcdf=yes --download-pnetcdf=yes --download-hypre=yes --download-mpich=yes --download-mumps=yes --download-scalapack=yes --with-cc=/opt/local/bin/gcc-mp-8 --with-cxx=/opt/local/bin/g++-mp-8 --with-fc=/opt/local/bin/gfortran-mp-8 --download-sowing=1 PETSC_ARCH=darwin-gcc8 [0]PETSC ERROR: #1 DMPlexNaturalToGlobalBegin() line 289 in /Users/bish218/projects/petsc/petsc_v3.12.2/src/dm/impls/plex/plexnatural.c Global vector: Vec Object: 1 MPI processes type: seq 0. 0. 0. 0. 0. 0. 0. 0. Information about the mesh: Rank = 0 local_id = 00; (0.250000, 0.250000, 0.250000); is_local = 1 local_id = 01; (0.750000, 0.250000, 0.250000); is_local = 1 local_id = 02; (0.250000, 0.750000, 0.250000); is_local = 1 local_id = 03; (0.750000, 0.750000, 0.250000); is_local = 1 local_id = 04; (0.250000, 0.250000, 0.750000); is_local = 1 local_id = 05; (0.750000, 0.250000, 0.750000); is_local = 1 local_id = 06; (0.250000, 0.750000, 0.750000); is_local = 1 local_id = 07; (0.750000, 0.750000, 0.750000); is_local = 1 ++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++ >$PETSC_DIR/$PETSC_ARCH/bin/mpiexec -np 2 ./ex_test Natural vector: Vec Object: 2 MPI processes type: mpi Process [0] 0. 1. 2. 3. Process [1] 4. 5. 6. 7. Global vector: Vec Object: 2 MPI processes type: mpi Process [0] 0. 0. 0. 0. Process [1] 0. 0. 0. 0. 
Information about the mesh: Rank = 0 local_id = 00; (0.250000, 0.750000, 0.250000); is_local = 1 local_id = 01; (0.750000, 0.750000, 0.250000); is_local = 1 local_id = 02; (0.250000, 0.750000, 0.750000); is_local = 1 local_id = 03; (0.750000, 0.750000, 0.750000); is_local = 1 local_id = 04; (0.250000, 0.250000, 0.250000); is_local = 0 local_id = 05; (0.750000, 0.250000, 0.250000); is_local = 0 local_id = 06; (0.250000, 0.250000, 0.750000); is_local = 0 local_id = 07; (0.750000, 0.250000, 0.750000); is_local = 0 Rank = 1 local_id = 00; (0.250000, 0.250000, 0.250000); is_local = 1 local_id = 01; (0.750000, 0.250000, 0.250000); is_local = 1 local_id = 02; (0.250000, 0.250000, 0.750000); is_local = 1 local_id = 03; (0.750000, 0.250000, 0.750000); is_local = 1 local_id = 04; (0.250000, 0.750000, 0.250000); is_local = 0 local_id = 05; (0.750000, 0.750000, 0.250000); is_local = 0 local_id = 06; (0.250000, 0.750000, 0.750000); is_local = 0 local_id = 07; (0.750000, 0.750000, 0.750000); is_local = 0 ++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++ -Gautam On Jan 9, 2020, at 4:57 PM, 'Bisht, Gautam' via tdycores-dev > wrote: On Jan 9, 2020, at 4:25 PM, Matthew Knepley > wrote: On Thu, Jan 9, 2020 at 1:35 PM 'Bisht, Gautam' via tdycores-dev > wrote: > On Jan 9, 2020, at 2:58 PM, Jed Brown > wrote: > > "'Bisht, Gautam' via tdycores-dev" > writes: > >>> Do you need to rely on the element number, or would coordinates (of a >>> centroid?) be sufficient for your purposes? >> >> I do need to rely on the element number. In my case, I have a mapping file that remaps data from one grid onto another grid. Though I?m currently creating a hexahedron mesh, in the future I would be reading in an unstructured grid from a file for which I cannot rely on coordinates. > > How does the mapping file work and how is it generated? In CESM/E3SM, the mapping file is used to map fluxes or states between grids of two components (e.g. land & atmosphere). The mapping method can be conservative, nearest neighbor, bilinear, etc. While CESM/E3SM uses ESMF_RegridWeightGen to generate the mapping file, I?m using by own MATLAB script to create the mapping file. I?m surprised that this is not an issue for other codes that are using DMPlex. E.g In PFLOTRAN, when a user creates a custom unstructured grid, they can specify material property for each grid cell. So, there should be a way to create a vectorscatter that will scatter material property read in the ?application?-order (i.e. order before calling DMPlexDistribute() ) to ghosted-order (i.e. order after calling DMPlexDistribute()). We did build something specific for this because some people wanted it. I wish I could purge this from all simulations. Its definitely destructive, but this is the way the world currently is. You want this: https://www.mcs.anl.gov/petsc/petsc-current/docs/manualpages/DMPLEX/DMPlexNaturalToGlobalBegin.html Perfect. Thanks. -Gautam Thanks, Matt > We can locate points and create interpolation with unstructured grids. > > -- > You received this message because you are subscribed to the Google Groups "tdycores-dev" group. > To unsubscribe from this group and stop receiving emails from it, send an email to tdycores-dev+unsubscribe at googlegroups.com. 
> To view this discussion on the web visit
> https://groups.google.com/d/msgid/tdycores-dev/8736come4e.fsf%40jedbrown.org.

--
You received this message because you are subscribed to the Google Groups
"tdycores-dev" group.
To unsubscribe from this group and stop receiving emails from it, send an
email to tdycores-dev+unsubscribe at googlegroups.com.
To view this discussion on the web visit
https://groups.google.com/d/msgid/tdycores-dev/CAMYG4Gn%3DxsVjjN8sX6km8ub%3Djkk8vxiU2DZVEi-4Kpbi_rM-0w%40mail.gmail.com.

From bsmith at mcs.anl.gov Sat Jan 18 22:47:15 2020
From: bsmith at mcs.anl.gov (Smith, Barry F.)
Date: Sun, 19 Jan 2020 04:47:15 +0000
Subject: [petsc-users] Solver compilation with 64-bit version of PETSc under Windows 10 using Cygwin
In-Reply-To: <35432251579093203@vla3-307f9063dbf4.qloud-c.yandex.net>
References: <35432251579093203@vla3-307f9063dbf4.qloud-c.yandex.net>
Message-ID: 

Dmitry,

I have completed and tested the branch
barry/2020-01-15/support-default-integer-8; it is undergoing testing now:
https://gitlab.com/petsc/petsc/merge_requests/2456

Please give it a try.
   Note that MPI has no support for integer promotion, so YOU must ensure that any MPI calls from Fortran pass 4-byte integers, not promoted 8-byte integers.

   I have tested it with recent versions of MPICH and OpenMPI; it is fragile at compile time and may fail to compile with different versions of MPI.

   Good luck,

   Barry

   I do not recommend this approach for integer promotion in Fortran. Just blindly promoting all integers can often lead to problems. I recommend using the kind mechanism of Fortran to ensure that each variable is the type you want; you can recompile with different options to promote the kind-declared variables you wish. Of course this is more intrusive and requires changes to the Fortran code.

> On Jan 15, 2020, at 7:00 AM, Dmitry Melnichuk wrote:
>
> Hello all!
>
> At present I need to compile a solver called Defmod (https://bitbucket.org/stali/defmod/wiki/Home), which is written in Fortran 95.
> Defmod uses PETSc for solving the linear algebra system.
> Compiling the solver with the 32-bit version of PETSc does not cause any problem.
> But compiling it with the 64-bit version of PETSc produces an error with the size of the ierr PETSc variable.
>
> 1. For example, consider the following statements written in Fortran:
>
> PetscErrorCode :: ierr_m
> PetscInt       :: ierr
> ...
> call VecDuplicate(Vec_U,Vec_Um,ierr)
> call VecCopy(Vec_U,Vec_Um,ierr)
> call VecGetLocalSize(Vec_U,j,ierr)
> call VecGetOwnershipRange(Vec_U,j1,j2,ierr_m)
>
> As can be seen, the first three subroutines require ierr to be of size INTEGER(8), while the last subroutine (VecGetOwnershipRange) requires ierr to be of size INTEGER(4).
> Using the same integer format gives an error:
>
> There is no specific subroutine for the generic 'vecgetownershiprange' at (1)
>
> 2. Another example is:
>
> call MatAssemblyBegin(Mat_K,Mat_Final_Assembly,ierr)
> CHKERRA(ierr)
> call MatAssemblyEnd(Mat_K,Mat_Final_Assembly,ierr)
>
> I am not able to define an appropriate size of ierr in CHKERRA(ierr). If I choose INTEGER(8), the error "Type mismatch in argument 'ierr' at (1); passed INTEGER(8) to INTEGER(4)" occurs.
> If I define ierr as INTEGER(4), the error "Type mismatch in argument 'ierr' at (1); passed INTEGER(4) to INTEGER(8)" appears.
>
> 3. If I change the sizes of the ierr variables as the error messages require, the compilation completes successfully, but an error occurs when calculating the RHS vector, with the following message:
>
> [0]PETSC ERROR: Out of range index value -4 cannot be negative
>
> Command to configure 32-bit version of PETSc under Windows 10 using Cygwin:
> ./configure --with-cc=x86_64-w64-mingw32-gcc --with-cxx=x86_64-w64-mingw32-g++ --with-fc=x86_64-w64-mingw32-gfortran --download-fblaslapack --with-mpi-include=/cygdrive/c/MPISDK/Include --with-mpi-lib=/cygdrive/c/MPISDK/Lib/libmsmpi.a --with-mpi-mpiexec=/cygdrive/c/MPI/Bin/mpiexec.exe --with-debugging=yes -CFLAGS='-O2' -CXXFLAGS='-O2' -FFLAGS='-O2 -static-libgfortran -static -lpthread -fno-range-check' --with-shared-libraries=no
>
> Command to configure 64-bit version of PETSc under Windows 10 using Cygwin:
> ./configure --with-cc=x86_64-w64-mingw32-gcc --with-cxx=x86_64-w64-mingw32-g++ --with-fc=x86_64-w64-mingw32-gfortran --download-fblaslapack --with-mpi-include=/cygdrive/c/MPISDK/Include --with-mpi-lib=/cygdrive/c/MPISDK/Lib/libmsmpi.a --with-mpi-mpiexec=/cygdrive/c/MPI/Bin/mpiexec.exe --with-debugging=yes -CFLAGS='-O2' -CXXFLAGS='-O2' -FFLAGS='-O2 -static-libgfortran -static -lpthread -fno-range-check -fdefault-integer-8' --with-shared-libraries=no --with-64-bit-indices --known-64-bit-blas-indices
>
> Kind regards,
> Dmitry Melnichuk

From mfadams at lbl.gov  Sun Jan 19 09:13:52 2020
From: mfadams at lbl.gov (Mark Adams)
Date: Sun, 19 Jan 2020 10:13:52 -0500
Subject: [petsc-users] TS (arkimex) question
Message-ID:

I am using -ts_type arkimex -ts_arkimex_type 1bee -ts_max_snes_failures -1 -ts_rtol 1e-6 -ts_dt 1.e-7

First, Jed gave me these parameters. This is not a DAE, just a fully implicit solve. Advice on parameters welcome.

Second, TS is reporting a large time step (0.0505357) that is wrong.

Third, it repeatedly takes this extra one or two steps (it's a 3-step method) due to SNES failure. I wonder if that can be optimized.

Thanks,
Mark

....
  9 SNES Function norm 1.438395286712e-06
 10 SNES Function norm 8.050454869525e-07
Nonlinear solve converged due to CONVERGED_SNORM_RELATIVE iterations 10
[0] TSAdaptChoose_Basic(): Estimated scaled local truncation error 0.00461254, accepting step of size 0.00304954
600 TS dt 0.0304954 time 0.697817
  0 SNES Function norm 1.018387577463e-02
...
 23 SNES Function norm 6.583420045281e-05
 24 SNES Function norm 5.959294539241e-05
 25 SNES Function norm 5.394347124131e-05
Nonlinear solve did not converge due to DIVERGED_MAX_IT iterations 25
  0 SNES Function norm 1.018387577468e-02
...
 24 SNES Function norm 1.000717662032e-06
 25 SNES Function norm 7.741622573808e-07
Nonlinear solve converged due to CONVERGED_SNORM_RELATIVE iterations 25
  0 SNES Function norm 1.014795904701e-02
...
 15 SNES Function norm 1.334407891279e-06
 16 SNES Function norm 9.148934277015e-07
Nonlinear solve converged due to CONVERGED_SNORM_RELATIVE iterations 16
  0 SNES Function norm 1.016588008759e-02
...
 16 SNES Function norm 9.144418053264e-07
Nonlinear solve converged due to CONVERGED_SNORM_RELATIVE iterations 16
[0] TSAdaptChoose_Basic(): Estimated scaled local truncation error 0.0184347, accepting step of size 0.00762384
601 TS dt 0.0505357 time 0.705441
  0 SNES Function norm 1.014792968017e-02
  1 SNES Function norm 1.026477259201e-03
  2 SNES Function norm 6.170336507030e-04
  3 SNES Function norm 5.433176612554e-04
  4 SNES Function norm 5.196626557375e-04
  5 SNES Function norm 4.977855046309e-04

From jed at jedbrown.org  Sun Jan 19 09:37:36 2020
From: jed at jedbrown.org (Jed Brown)
Date: Sun, 19 Jan 2020 08:37:36 -0700
Subject: [petsc-users] TS (arkimex) question
In-Reply-To:
References:
Message-ID: <8736cb8njz.fsf@jedbrown.org>

Use -ts_adapt_monitor to see the rationale.

Note that 1bee is backward Euler with an extrapolation error estimator (for adaptive control). It's still only first order accurate, and the longer step may be part of your SNES issues.

You can set a maximum time step (-ts_adapt_dt_max), be more aggressive about reducing the time step in response to SNES failure (-ts_adapt_scale_solve_failed), remember that failure for longer before increasing the step again (-ts_adapt_time_step_increase_delay), or increase the time step more gradually when permitted (-ts_adapt_clip).

Mark Adams writes:

> I am using -ts_type arkimex -ts_arkimex_type 1bee -ts_max_snes_failures -1
> -ts_rtol 1e-6 -ts_dt 1.e-7
> [...]
From mfadams at lbl.gov  Sun Jan 19 11:38:23 2020
From: mfadams at lbl.gov (Mark Adams)
Date: Sun, 19 Jan 2020 12:38:23 -0500
Subject: [petsc-users] TS (arkimex) question
In-Reply-To: <8736cb8njz.fsf@jedbrown.org>
References: <8736cb8njz.fsf@jedbrown.org>
Message-ID:

Can you recommend a higher order method that I might try?

On Sun, Jan 19, 2020 at 10:37 AM Jed Brown wrote:

> Use -ts_adapt_monitor to see the rationale.
> [...]
From emconsta at anl.gov  Sun Jan 19 14:48:30 2020
From: emconsta at anl.gov (Constantinescu, Emil M.)
Date: Sun, 19 Jan 2020 20:48:30 +0000
Subject: [petsc-users] TS (arkimex) question
In-Reply-To:
References: <8736cb8njz.fsf@jedbrown.org>
Message-ID:

On 1/19/20 11:38 AM, Mark Adams wrote:
> Can you recommend a higher order method that I might try?

Mark, all of 2e, 3, 4, 5 are high order with really good properties. They have error estimators that are cheaper but less reliable (most of the time they work well enough).

Emil
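For concreteness, an illustrative invocation combining the suggestions in this thread (the option names come from Jed's and Emil's messages; the executable name and the numeric values are placeholders to tune for your problem):

  mpiexec -n 4 ./myapp -ts_type arkimex -ts_arkimex_type 2e \
    -ts_adapt_monitor -ts_adapt_dt_max 1e-2 \
    -ts_adapt_time_step_increase_delay 4 -ts_max_snes_failures -1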
From jed at jedbrown.org  Sun Jan 19 16:07:00 2020
From: jed at jedbrown.org (Jed Brown)
Date: Sun, 19 Jan 2020 15:07:00 -0700
Subject: [petsc-users] TS (arkimex) question
In-Reply-To:
References: <8736cb8njz.fsf@jedbrown.org>
Message-ID: <87muaj6qyj.fsf@jedbrown.org>

"Constantinescu, Emil M." writes:

> On 1/19/20 11:38 AM, Mark Adams wrote:
>> Can you recommend a higher order method that I might try?
>
> Mark, all of 2e, 3, 4, 5 are high order with really good properties. They have error estimators that are cheaper but less reliable (most of the time they work well enough).

BDF2 may also be a good choice. Unless I'm misreading the logs, it seems to allege that SNES convergence is problematic before time discretization error.

Mark, it would help to know more about the dominant modes in the system.

From dmitry.melnichuk at geosteertech.com  Mon Jan 20 04:43:26 2020
From: dmitry.melnichuk at geosteertech.com (Dmitry Melnichuk)
Date: Mon, 20 Jan 2020 13:43:26 +0300
Subject: [petsc-users] Solver compilation with 64-bit version of PETSc under Windows 10 using Cygwin
In-Reply-To:
References: <35432251579093203@vla3-307f9063dbf4.qloud-c.yandex.net>
Message-ID: <12790071579517006@iva7-8a22bc446c12.qloud-c.yandex.net>

An HTML attachment was scrubbed...

From knepley at gmail.com  Mon Jan 20 05:24:18 2020
From: knepley at gmail.com (Matthew Knepley)
Date: Mon, 20 Jan 2020 06:24:18 -0500
Subject: [petsc-users] Solver compilation with 64-bit version of PETSc under Windows 10 using Cygwin
In-Reply-To: <12790071579517006@iva7-8a22bc446c12.qloud-c.yandex.net>
References: <35432251579093203@vla3-307f9063dbf4.qloud-c.yandex.net> <12790071579517006@iva7-8a22bc446c12.qloud-c.yandex.net>
Message-ID:

On Mon, Jan 20, 2020 at 5:43 AM Dmitry Melnichuk wrote:

> Thank you so much for your assistance!
>
> As far as I have been able to find out, the errors "Type mismatch in argument 'ierr'" have been successfully fixed.
> But execution of the command "make PETSC_DIR=/cygdrive/d/... PETSC_ARCH=arch-mswin-c-debug check" leads to the appearance of a Segmentation Violation error.
>
> I compiled PETSc with Microsoft MPI v10.
> Does it make sense to compile PETSc with another MPI implementation (such as MPICH) in order to resolve the issue?

It's not MPI. The problem appears to be your BLAS.

Barry, is this a mismatch with BLAS ints?
   Matt

> Error message:
> Running test examples to verify correct installation
> Using PETSC_DIR=/cygdrive/d/Computational_geomechanics/installation/petsc-barry and PETSC_ARCH=arch-mswin-c-debug
> Possible error running C/C++ src/snes/examples/tutorials/ex19 with 1 MPI process
> See http://www.mcs.anl.gov/petsc/documentation/faq.html
> C:/MPI/Bin/mpiexec.exe: error while loading shared libraries: ?: cannot open shared object file: No such file or directory
> Possible error running C/C++ src/snes/examples/tutorials/ex19 with 2 MPI processes
> See http://www.mcs.anl.gov/petsc/documentation/faq.html
> C:/MPI/Bin/mpiexec.exe: error while loading shared libraries: ?: cannot open shared object file: No such file or directory
> Possible error running Fortran example src/snes/examples/tutorials/ex5f with 1 MPI process
> See http://www.mcs.anl.gov/petsc/documentation/faq.html
> [0]PETSC ERROR: ------------------------------------------------------------------------
> [0]PETSC ERROR: Caught signal number 11 SEGV: Segmentation Violation, probably memory access out of range
> [0]PETSC ERROR: Try option -start_in_debugger or -on_error_attach_debugger
> [0]PETSC ERROR: or see https://www.mcs.anl.gov/petsc/documentation/faq.html#valgrind
> [0]PETSC ERROR: or try http://valgrind.org on GNU/linux and Apple Mac OS X to find memory corruption errors
> [0]PETSC ERROR: likely location of problem given in stack below
> [0]PETSC ERROR: --------------------- Stack Frames ------------------------------------
> [0]PETSC ERROR: Note: The EXACT line numbers in the stack are not available,
> [0]PETSC ERROR:       INSTEAD the line number of the start of the function is given.
> [0]PETSC ERROR: [0] VecNorm_Seq line 221 /cygdrive/d/Computational_geomechanics/installation/petsc-barry/src/vec/vec/impls/seq/bvec2.c
> [0]PETSC ERROR: [0] VecNorm line 213 /cygdrive/d/Computational_geomechanics/installation/petsc-barry/src/vec/vec/interface/rvector.c
> [0]PETSC ERROR: [0] SNESSolve_NEWTONLS line 144 /cygdrive/d/Computational_geomechanics/installation/petsc-barry/src/snes/impls/ls/ls.c
> [0]PETSC ERROR: [0] SNESSolve line 4375 /cygdrive/d/Computational_geomechanics/installation/petsc-barry/src/snes/interface/snes.c
> [0]PETSC ERROR: --------------------- Error Message --------------------------------------------------------------
> [0]PETSC ERROR: Signal received
> [0]PETSC ERROR: See https://www.mcs.anl.gov/petsc/documentation/faq.html for trouble shooting.
> [0]PETSC ERROR: Petsc Development GIT revision: unknown  GIT Date: unknown
> [0]PETSC ERROR: ./ex5f on a arch-mswin-c-debug named DESKTOP-R88IMOB by useruser Mon Jan 20 09:18:34 2020
> [0]PETSC ERROR: Configure options --with-cc=x86_64-w64-mingw32-gcc --with-cxx=x86_64-w64-mingw32-g++ --with-fc=x86_64-w64-mingw32-gfortran --with-mpi-include=/cygdrive/c/MPISDK/Include --with-mpi-lib=/cygdrive/c/MPISDK/Lib/libmsmpi.a --with-mpi-mpiexec=/cygdrive/c/MPI/Bin/mpiexec.exe --with-debugging=yes -CFLAGS=-O2 -CXXFLAGS=-O2 -FFLAGS="-O2 -static-libgfortran -static -lpthread -fno-range-check -fdefault-integer-8" --download-fblaslapack --with-shared-libraries=no --with-64-bit-indices --force
> [0]PETSC ERROR: #1 User provided function() line 0 in unknown file
>
> job aborted:
> [ranks] message
>
> [0] application aborted
> aborting MPI_COMM_WORLD (comm=0x44000000), error 50152059, comm rank 0
>
> ---- error analysis -----
>
> [0] on DESKTOP-R88IMOB
> ./ex5f aborted the job. abort code 50152059
>
> ---- error analysis -----
> Completed test examples
>
> Kind regards,
> Dmitry Melnichuk
>
> 19.01.2020, 07:47, "Smith, Barry F.":
> > [...]
--
What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead.
-- Norbert Wiener

https://www.cse.buffalo.edu/~knepley/

From bsmith at mcs.anl.gov  Mon Jan 20 07:32:04 2020
From: bsmith at mcs.anl.gov (Smith, Barry F.)
Date: Mon, 20 Jan 2020 13:32:04 +0000
Subject: [petsc-users] Solver compilation with 64-bit version of PETSc under Windows 10 using Cygwin
In-Reply-To: <12790071579517006@iva7-8a22bc446c12.qloud-c.yandex.net>
References: <35432251579093203@vla3-307f9063dbf4.qloud-c.yandex.net> <12790071579517006@iva7-8a22bc446c12.qloud-c.yandex.net>
Message-ID:

   First you need to figure out what is triggering:

C:/MPI/Bin/mpiexec.exe: error while loading shared libraries: ?: cannot open shared object file: No such file or directory

   Googling it finds all kinds of suggestions for Linux. But Windows? Maybe the debugger will help.

   Second:

> VecNorm_Seq line 221 /cygdrive/d/Computational_geomechanics/installation/petsc-barry/src/vec/vec/impls/seq/bvec2.c

   The debugger is best to find out what is triggering this. Since it is the C side of things, it would be odd that the Fortran change affects it.

   Barry

> On Jan 20, 2020, at 4:43 AM, Dmitry Melnichuk wrote:
>
> [...]
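The BLAS-integer mismatch Matt asks about above comes down to PetscInt (8 bytes under --with-64-bit-indices) versus PetscBLASInt (whatever width PETSc believes the BLAS/LAPACK was built with). A minimal sketch of the distinction (illustrative fragment, not from the thread; the value of n is arbitrary):

  PetscInt       n = 30;  /* 64-bit when PETSc is configured --with-64-bit-indices */
  PetscBLASInt   bn;      /* the integer width PETSc assumes the BLAS uses */
  PetscErrorCode ierr;

  /* Errors out cleanly if n does not fit in a PetscBLASInt,
     instead of silently truncating before a BLAS call. */
  ierr = PetscBLASIntCast(n, &bn);CHKERRQ(ierr);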
From sam.guo at cd-adapco.com  Mon Jan 20 12:10:05 2020
From: sam.guo at cd-adapco.com (Sam Guo)
Date: Mon, 20 Jan 2020 10:10:05 -0800
Subject: [petsc-users] error handling
Message-ID:

Dear PETSc dev team,
   If a PETSc function returns an error, what's the correct way to clean up PETSc? Particularly, how to clean up the memory?

Thanks,
Sam

From dave.mayhem23 at gmail.com  Mon Jan 20 12:14:34 2020
From: dave.mayhem23 at gmail.com (Dave May)
Date: Mon, 20 Jan 2020 19:14:34 +0100
Subject: [petsc-users] error handling
In-Reply-To:
References:
Message-ID:

On Mon 20. Jan 2020 at 19:11, Sam Guo wrote:

> Dear PETSc dev team,
>    If a PETSc function returns an error, what's the correct way to clean up PETSc?

The answer depends on the error message reported. Send the complete error message and a better answer can be provided.

> Particularly, how to clean up the memory?

Totally depends on the objects which aren't being freed. You need to provide more information.

Thanks,
Dave

From sam.guo at cd-adapco.com  Mon Jan 20 12:39:29 2020
From: sam.guo at cd-adapco.com (Sam Guo)
Date: Mon, 20 Jan 2020 10:39:29 -0800
Subject: [petsc-users] error handling
In-Reply-To:
References:
Message-ID:

I don't have a specific case yet. Currently every call of PETSc is checked; if ierr is not zero, print the error and return. For example:

  Mat            A;    /* problem matrix */
  EPS            eps;  /* eigenproblem solver context */
  EPSType        type;
  PetscReal      error,tol,re,im;
  PetscScalar    kr,ki;
  Vec            xr,xi;
  PetscInt       n=30,i,Istart,Iend,nev,maxit,its,nconv;
  PetscErrorCode ierr;

  ierr = SlepcInitialize(&argc,&argv,(char*)0,help);CHKERRQ(ierr);
  ierr = PetscOptionsGetInt(NULL,NULL,"-n",&n,NULL);CHKERRQ(ierr);
  ierr = PetscPrintf(PETSC_COMM_WORLD,"\n1-D Laplacian Eigenproblem, n=%D\n\n",n);CHKERRQ(ierr);

I am wondering if the memory is lost by calling CHKERRQ.

On Mon, Jan 20, 2020 at 10:14 AM Dave May wrote:

> [...]
From dave.mayhem23 at gmail.com  Mon Jan 20 12:40:49 2020
From: dave.mayhem23 at gmail.com (Dave May)
Date: Mon, 20 Jan 2020 19:40:49 +0100
Subject: [petsc-users] error handling
In-Reply-To:
References:
Message-ID:

On Mon 20. Jan 2020 at 19:39, Sam Guo wrote:

> I am wondering if the memory is lost by calling CHKERRQ.

No.

From sam.guo at cd-adapco.com  Mon Jan 20 12:45:19 2020
From: sam.guo at cd-adapco.com (Sam Guo)
Date: Mon, 20 Jan 2020 10:45:19 -0800
Subject: [petsc-users] error handling
In-Reply-To:
References:
Message-ID:

I only included the first few lines of the SLEPc example. What about the following:

  ierr = MatCreate(PETSC_COMM_WORLD,&A);CHKERRQ(ierr);
  ierr = MatSetSizes(A,PETSC_DECIDE,PETSC_DECIDE,n,n);CHKERRQ(ierr);

Is there any memory lost?

On Mon, Jan 20, 2020 at 10:41 AM Dave May wrote:

> [...]
From sam.guo at cd-adapco.com  Mon Jan 20 12:47:23 2020
From: sam.guo at cd-adapco.com (Sam Guo)
Date: Mon, 20 Jan 2020 10:47:23 -0800
Subject: [petsc-users] error handling
In-Reply-To:
References:
Message-ID:

Can I assume that if there is a MatCreate or VecCreate, I should clean up the memory myself?

From dave.mayhem23 at gmail.com  Mon Jan 20 13:11:24 2020
From: dave.mayhem23 at gmail.com (Dave May)
Date: Mon, 20 Jan 2020 20:11:24 +0100
Subject: [petsc-users] error handling
In-Reply-To:
References:
Message-ID:

On Mon 20. Jan 2020 at 19:47, Sam Guo wrote:

> Can I assume that if there is a MatCreate or VecCreate, I should clean up the memory myself?

Yes. You will need to call the matching Destroy function.
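A minimal sketch of that create/destroy pairing (a fragment under the thread's assumptions; A stands in for any PETSc object):

  Mat            A = NULL;  /* NULL-init keeps the MatDestroy() below safe even if creation never ran */
  PetscErrorCode ierr;

  ierr = MatCreate(PETSC_COMM_WORLD, &A);CHKERRQ(ierr);
  /* ... MatSetSizes(), assembly, use of A ... */
  ierr = MatDestroy(&A);CHKERRQ(ierr);  /* frees the object and resets A to NULL */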
From rtmills at anl.gov  Mon Jan 20 16:41:40 2020
From: rtmills at anl.gov (Mills, Richard Tran)
Date: Mon, 20 Jan 2020 22:41:40 +0000
Subject: [petsc-users] chowiluviennacl
In-Reply-To:
References:
Message-ID:

Hi Xiangdong,

Maybe I am misunderstanding you, but it sounds like you want an exact direct solution, so I don't understand why you are using an incomplete factorization solver for this. SuperLU_DIST (as Mark has suggested) or MUMPS are two such packages that provide MPI-parallel sparse LU factorization. If you need GPU support, SuperLU_DIST has such support. I don't know the status of our support for using the GPU capabilities of this, though -- I assume another developer can chime in regarding this.

Note that the ILU provided by "chowiluviennacl" employs a very different algorithm than the standard PCILU in PETSc, and you shouldn't expect to get the same incomplete factorization. The algorithm is described in this paper by Chow and Patel:

https://www.cc.gatech.edu/~echow/pubs/parilu-sisc.pdf

Best regards,
Richard

On 1/15/20 11:39 AM, Xiangdong wrote:

I just submitted the issue: https://gitlab.com/petsc/petsc/issues/535

What I really want is an exact block tri-diagonal solver on GPU. Since for a block tridiagonal system ILU0 would be the same as ILU, I tried chowiluviennacl, but I found that the default parameters do not produce the same ILU0 factorization as the CPU one (PCILU). My guess is that if I increase the number of sweeps, chow_patel_ilu_config.sweeps(3), it may give a better result. So the option keys would be helpful.

Since Mark mentioned SuperLU's GPU feature, can I use SuperLU's or hypre's GPU functionality through PETSc?

Thank you.
Xiangdong

On Wed, Jan 15, 2020 at 2:22 PM Matthew Knepley wrote:

On Wed, Jan 15, 2020 at 1:48 PM Xiangdong wrote:

In the ViennaCL manual, http://viennacl.sourceforge.net/doc/manual-algorithms.html, it does expose two parameters:

// configuration of preconditioner:
viennacl::linalg::chow_patel_tag chow_patel_ilu_config;
chow_patel_ilu_config.sweeps(3);       // three nonlinear sweeps
chow_patel_ilu_config.jacobi_iters(2); // two Jacobi iterations per triangular 'solve' Rx=r

and mentions that: "The number of nonlinear sweeps and Jacobi iterations need to be set problem-specific for best performance."

In the PETSc implementation:

viennacl::linalg::chow_patel_tag ilu_tag;
ViennaCLAIJMatrix *mat = (ViennaCLAIJMatrix*)gpustruct->mat;
ilu->CHOWILUVIENNACL = new viennacl::linalg::chow_patel_ilu_precond<viennacl::compressed_matrix<PetscScalar> >(*mat, ilu_tag);

the default is used. Is it possible to expose these two parameters so that the user can change them through option keys?

Yes. Do you mind making an issue for it? That way we can better keep track.

https://gitlab.com/petsc/petsc/issues

Thanks,

Matt

Thank you.
Xiangdong

On Wed, Jan 15, 2020 at 12:40 PM Matthew Knepley wrote:

On Wed, Jan 15, 2020 at 9:59 AM Xiangdong wrote:

Maybe I am not clear. I want to solve the block tridiagonal system Tx=b a few times with the same T but different b. On the CPU, I can do this by applying ILU0 and reusing the factorization. Since the system is block tridiagonal, ILU0 gives the same result as LU.

I am trying to do the same thing on the GPU with chowiluviennacl, but found the default factorization does not produce the exact factorization for a tridiagonal system. Can we tighten the drop-off tolerance so that it can work as LU for a tridiagonal system?

There are no options in our implementation. You could look at the ViennaCL manual to see if we missed something.

Thanks,

Matt

Thank you.
Xiangdong

On Wed, Jan 15, 2020 at 9:41 AM Matthew Knepley wrote:

On Wed, Jan 15, 2020 at 9:36 AM Xiangdong wrote:

Can chowiluviennacl do ILU0? I need to solve a tri-diagonal system directly. If I apply PCILU, I obtain the exact solution with preonly + pcilu. However, preonly + chowiluviennacl does not provide the exact solution. Are there any option keys to set the CHOWILUVIENNACL fill level or drop-off tolerance, like the standard ILU?

No. However, such a scheme makes less sense here. This algorithm spawns individual threads for individual elements. A drop tolerance is not less work; it is sparser, but that should not matter for a tridiagonal system. Levels are also not applicable, since you have only 1 level.

Thanks,

Matt

Thank you.
Best,
Xiangdong

On Tue, Jan 14, 2020 at 10:05 PM Matthew Knepley wrote:

On Tue, Jan 14, 2020 at 9:56 PM Xiangdong wrote:

Dear Developers,

I have a quick question about chowiluviennacl. When I tried to use it, I found that it only works for np=1, not np>1. However, the description in chowiluviennacl.cxx says "the ViennaCL Chow-Patel parallel ILU preconditioner".

By parallel, this means shared-memory parallelism on the GPU.

I am wondering whether I am using it correctly. Does chowiluviennacl work for np>1?

I do not believe so. I do not see why it could not be extended, but that would mean writing some more code.

Thanks,

Matt

In addition, are there option keys for chowiluviennacl one can try?

Thank you.
Best,
Xiangdong

--
What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead.
-- Norbert Wiener

https://www.cse.buffalo.edu/~knepley/

From sam.guo at cd-adapco.com  Mon Jan 20 17:28:35 2020
From: sam.guo at cd-adapco.com (Sam Guo)
Date: Mon, 20 Jan 2020 15:28:35 -0800
Subject: [petsc-users] error handling
In-Reply-To:
References:
Message-ID:

Does it hurt to call a Destroy function without calling the Create function? For example:

  Mat A, B;
  PetscErrorCode ierr1, ierr2;

  ierr1 = MatCreate(PETSC_COMM_WORLD,&A);
  if (ierr1 == 0)
  {
    ierr2 = MatCreate(PETSC_COMM_WORLD,&B);
  }
  if (ierr1 != 0 || ierr2 != 0)
  {
    Destroy(&A);
    Destroy(&B); // if ierr1 != 0, MatCreate is not called on B. Does it hurt to call Destroy on B here?
  }

On Mon, Jan 20, 2020 at 11:11 AM Dave May wrote:

> On Mon 20. Jan 2020 at 19:47, Sam Guo wrote:
>
>> Can I assume that if there is a MatCreate or VecCreate, I should clean up the memory myself?
>
> Yes. You will need to call the matching Destroy function.
> [...]
From sam.guo at cd-adapco.com  Mon Jan 20 17:30:24 2020
From: sam.guo at cd-adapco.com (Sam Guo)
Date: Mon, 20 Jan 2020 15:30:24 -0800
Subject: [petsc-users] error handling
In-Reply-To:
References:
Message-ID:

I mean MatDestroy.

From knepley at gmail.com  Mon Jan 20 17:41:08 2020
From: knepley at gmail.com (Matthew Knepley)
Date: Mon, 20 Jan 2020 18:41:08 -0500
Subject: [petsc-users] error handling
In-Reply-To:
References:
Message-ID:

Not if you initialize the pointers to zero: Mat A = NULL.

   Matt

On Mon, Jan 20, 2020 at 6:31 PM Sam Guo wrote:

> I mean MatDestroy.
> > On Mon, Jan 20, 2020 at 3:28 PM Sam Guo wrote: > >> Does it hurt to call Destroy function without calling CreateFunction? For >> example >> Mat A, B; >> >> PetscErrorCode ierr1, ierr2; >> >> ierr1 = MatCreate (PETSC_COMM_WORLD ,&A); >> >> if(ierr1 == 0) >> >> { >> >> ierr2 = MatCreate (PETSC_COMM_WORLD ,&B); >> >> } >> >> if(ierr1 !=0 || ierr2 != 0) >> >> { >> >> Destroy(&A); >> >> Destroy(&B); // if ierr1 !=0, MatCreat is not called on B. Does it hurt to call Destroy B here? >> >> } >> >> >> >> >> On Mon, Jan 20, 2020 at 11:11 AM Dave May >> wrote: >> >>> >>> >>> On Mon 20. Jan 2020 at 19:47, Sam Guo wrote: >>> >>>> Can I assume if there is MatCreat or VecCreate, I should clean up the >>>> memory myself? >>>> >>> >>> Yes. You will need to call the matching Destroy function. >>> >>> >>> >>>> On Mon, Jan 20, 2020 at 10:45 AM Sam Guo wrote: >>>> >>>>> I only include the first few lines of SLEPc example. What about >>>>> following >>>>> ierr = MatCreate(PETSC_COMM_WORLD,&A);CHKERRQ(ierr); >>>>> ierr = MatSetSizes(A,PETSC_DECIDE,PETSC_DECIDE,n,n);CHKERRQ(ierr); >>>>> Is there any memory lost? >>>>> >>>>> On Mon, Jan 20, 2020 at 10:41 AM Dave May >>>>> wrote: >>>>> >>>>>> >>>>>> >>>>>> On Mon 20. Jan 2020 at 19:39, Sam Guo wrote: >>>>>> >>>>>>> I don't have a specific case yet. Currently every call of PETSc is >>>>>>> checked. If ierr is not zero, print the error and return. For example, >>>>>>> Mat A; /* problem matrix */ >>>>>>> EPS eps; /* eigenproblem solver context */ >>>>>>> EPSType type; >>>>>>> PetscReal error,tol,re,im; >>>>>>> PetscScalar kr,ki; Vec xr,xi; 25 >>>>>>> PetscInt n=30,i,Istart,Iend,nev,maxit,its,nconv; >>>>>>> PetscErrorCode ierr; >>>>>>> ierr = SlepcInitialize(&argc,&argv,(char*)0,help);CHKERRQ(ierr); >>>>>>> ierr = PetscOptionsGetInt(NULL,NULL,"-n",&n,NULL);CHKERRQ(ierr); >>>>>>> ierr = PetscPrintf(PETSC_COMM_WORLD,"\n1-D Laplacian >>>>>>> Eigenproblem, n=%D\n\n",n);CHKERRQ(ierr); >>>>>>> >>>>>>> I am wondering if the memory is lost by calling CHKERRQ. >>>>>>> >>>>>> >>>>>> No. >>>>>> >>>>>> >>>>>> >>>>>>> On Mon, Jan 20, 2020 at 10:14 AM Dave May >>>>>>> wrote: >>>>>>> >>>>>>>> >>>>>>>> >>>>>>>> On Mon 20. Jan 2020 at 19:11, Sam Guo >>>>>>>> wrote: >>>>>>>> >>>>>>>>> Dear PETSc dev team, >>>>>>>>> If PETSc function returns an error, what's the correct way to >>>>>>>>> clean PETSc? >>>>>>>>> >>>>>>>> >>>>>>>> The answer depends on the error message reported. Send the complete >>>>>>>> error message and a better answer can be provided. >>>>>>>> >>>>>>>> Particularly how to clean up the memory? >>>>>>>>> >>>>>>>> >>>>>>>> Totally depends on the objects which aren?t being freed. You need >>>>>>>> to provide more information >>>>>>>> >>>>>>>> Thanks >>>>>>>> Dave >>>>>>>> >>>>>>>> >>>>>>>>> Thanks, >>>>>>>>> Sam >>>>>>>>> >>>>>>>> -- What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. -- Norbert Wiener https://www.cse.buffalo.edu/~knepley/ -------------- next part -------------- An HTML attachment was scrubbed... URL: From bsmith at mcs.anl.gov Mon Jan 20 18:06:23 2020 From: bsmith at mcs.anl.gov (Smith, Barry F.) Date: Tue, 21 Jan 2020 00:06:23 +0000 Subject: [petsc-users] error handling In-Reply-To: References: Message-ID: Sam, I am not sure what your goal is but PETSc error return codes are error return codes not exceptions. They mean that something catastrophic happened and there is no recovery. 
Note that PETSc solvers do not return nonzero error codes on failure to converge etc. You call, for example, KSPGetConvergedReason() after a KSP solve to see whether it failed to converge; that is not a catastrophic failure. If a MatCreate() or any other call returns a nonzero ierr, the game is up: you cannot continue running PETSc.

   Barry

> On Jan 20, 2020, at 5:41 PM, Matthew Knepley wrote:
>
> Not if you initialize the pointers to zero: Mat A = NULL.
>
> Matt
>
> [...]

From sam.guo at cd-adapco.com Mon Jan 20 18:32:20 2020 From: sam.guo at cd-adapco.com (Sam Guo) Date: Mon, 20 Jan 2020 16:32:20 -0800 Subject: [petsc-users] error handling In-Reply-To: References: Message-ID:

Hi Barry,

I understand ierr != 0 means something catastrophic. I just want to release all memory before I exit PETSc.

Thanks,
Sam

On Mon, Jan 20, 2020 at 4:06 PM Smith, Barry F. wrote:
>
> Sam,
>
> I am not sure what your goal is, but PETSc error return codes are error return codes, not exceptions. They mean that something catastrophic happened and there is no recovery.
>
> Note that PETSc solvers do not return nonzero error codes on failure to converge etc. You call, for example, KSPGetConvergedReason() after a KSP solve to see whether it failed to converge; that is not a catastrophic failure. If a MatCreate() or any other call returns a nonzero ierr, the game is up: you cannot continue running PETSc.
>
> Barry
>
> [...]
-------------- next part -------------- An HTML attachment was scrubbed...
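A minimal, self-contained sketch that combines the two points made in this thread (an illustration written for this digest, not code from the posts): the handles are NULL-initialized so every Destroy call is safe on any path, including when an early Create fails, and non-convergence is detected with KSPGetConvergedReason() rather than through ierr, since KSPSolve() returns 0 in that case. The small 1-D Laplacian system is only there to make the sketch runnable.

#include <petscksp.h>

int main(int argc, char **argv)
{
  /* NULL-initialize every handle: XxxDestroy() on a NULL handle is a
     no-op, so the cleanup block below is safe no matter how far we got. */
  Mat                A   = NULL;
  Vec                b   = NULL, x = NULL;
  KSP                ksp = NULL;
  KSPConvergedReason reason;
  PetscErrorCode     ierr;
  PetscInt           i, n = 10, Istart, Iend, col[3];
  PetscScalar        v[3];

  ierr = PetscInitialize(&argc, &argv, NULL, NULL); if (ierr) return ierr;

  ierr = MatCreate(PETSC_COMM_WORLD, &A);CHKERRQ(ierr);
  ierr = MatSetSizes(A, PETSC_DECIDE, PETSC_DECIDE, n, n);CHKERRQ(ierr);
  ierr = MatSetFromOptions(A);CHKERRQ(ierr);
  ierr = MatSetUp(A);CHKERRQ(ierr);
  ierr = MatGetOwnershipRange(A, &Istart, &Iend);CHKERRQ(ierr);
  for (i = Istart; i < Iend; i++) {  /* 1-D Laplacian stencil rows */
    col[0] = i - 1; col[1] = i; col[2] = i + 1;
    v[0] = -1.0; v[1] = 2.0; v[2] = -1.0;
    if (i == 0)          { ierr = MatSetValues(A, 1, &i, 2, &col[1], &v[1], INSERT_VALUES);CHKERRQ(ierr); }
    else if (i == n - 1) { ierr = MatSetValues(A, 1, &i, 2, col, v, INSERT_VALUES);CHKERRQ(ierr); }
    else                 { ierr = MatSetValues(A, 1, &i, 3, col, v, INSERT_VALUES);CHKERRQ(ierr); }
  }
  ierr = MatAssemblyBegin(A, MAT_FINAL_ASSEMBLY);CHKERRQ(ierr);
  ierr = MatAssemblyEnd(A, MAT_FINAL_ASSEMBLY);CHKERRQ(ierr);
  ierr = MatCreateVecs(A, &x, &b);CHKERRQ(ierr);
  ierr = VecSet(b, 1.0);CHKERRQ(ierr);

  ierr = KSPCreate(PETSC_COMM_WORLD, &ksp);CHKERRQ(ierr);
  ierr = KSPSetOperators(ksp, A, A);CHKERRQ(ierr);
  ierr = KSPSetFromOptions(ksp);CHKERRQ(ierr);
  ierr = KSPSolve(ksp, b, x);CHKERRQ(ierr);

  /* Non-convergence is not a catastrophic error: ierr is 0 here and
     the outcome is queried separately. */
  ierr = KSPGetConvergedReason(ksp, &reason);CHKERRQ(ierr);
  if (reason < 0) {
    ierr = PetscPrintf(PETSC_COMM_WORLD, "Solve failed: %s\n", KSPConvergedReasons[reason]);CHKERRQ(ierr);
  }

  /* Unconditional cleanup; each call tolerates a NULL or unused handle. */
  ierr = KSPDestroy(&ksp);CHKERRQ(ierr);
  ierr = VecDestroy(&x);CHKERRQ(ierr);
  ierr = VecDestroy(&b);CHKERRQ(ierr);
  ierr = MatDestroy(&A);CHKERRQ(ierr);
  ierr = PetscFinalize();
  return ierr;
}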
From jourdon_anthony at hotmail.fr Tue Jan 21 02:25:13 2020 From: jourdon_anthony at hotmail.fr (Anthony Jourdon) Date: Tue, 21 Jan 2020 08:25:13 +0000 Subject: [petsc-users] DMDA Error In-Reply-To: References: Message-ID:

Hello,

I made a test to try to reproduce the error. To do so I modified the file $PETSC_DIR/src/dm/examples/tests/ex35.c. I attach the file in case of need.

The same error is reproduced for 1024 MPI ranks. I tested two problem sizes (2*512+1 x 2*64+1 x 2*256+1 and 2*1024+1 x 2*128+1 x 2*512+1) and the error occurred in both cases; the first case is also the one I used to run before the OS and MPI updates. I also ran the code with -malloc_debug and nothing more appeared.

I attached the configure command I used to build a debug version of PETSc.

Thank you for your time,
Sincerely,
Anthony Jourdon

________________________________
From: Zhang, Junchao
Sent: Thursday, January 16, 2020 16:49
To: Anthony Jourdon
Cc: petsc-users at mcs.anl.gov
Subject: Re: [petsc-users] DMDA Error

It seems the problem is triggered by DMSetUp. You can write a small test creating the DMDA with the same size as your code, to see if you can reproduce the problem. If yes, it would be much easier for us to debug it.
--Junchao Zhang

On Thu, Jan 16, 2020 at 7:38 AM Anthony Jourdon wrote:

Dear PETSc developers,

I need assistance with an error.

I run a code that uses the DMDA-related functions. I'm using petsc-3.8.4.

This code used to run very well on a supercomputer with the OS SLES11. PETSc was built using an Intel MPI 5.1.3.223 module and Intel MKL version 2016.0.2.181. The code was running with no problem on 1024 and more MPI ranks.

Recently, the OS of the computer was updated to RHEL7. I rebuilt PETSc using the newly available versions of Intel MPI (2019U5) and MKL (2019.0.5.281), which are also the versions used for the compilers and MKL. Since then I have run the exact same code on 8, 16, 24, 48, 512 and 1024 MPI ranks. Below 1024 MPI ranks there is no problem, but at 1024 ranks an error related to DMDA appeared. I snip the first lines of the error stack here; the full error stack is attached.

[534]PETSC ERROR: #1 PetscGatherMessageLengths() line 120 in /scratch2/dlp/appli_local/SCR/OROGEN/petsc3.8.4_MPI/petsc-3.8.4/src/sys/utils/mpimesg.c
[534]PETSC ERROR: #2 VecScatterCreate_PtoS() line 2288 in /scratch2/dlp/appli_local/SCR/OROGEN/petsc3.8.4_MPI/petsc-3.8.4/src/vec/vec/utils/vpscat.c
[534]PETSC ERROR: #3 VecScatterCreate() line 1462 in /scratch2/dlp/appli_local/SCR/OROGEN/petsc3.8.4_MPI/petsc-3.8.4/src/vec/vec/utils/vscat.c
[534]PETSC ERROR: #4 DMSetUp_DA_3D() line 1042 in /scratch2/dlp/appli_local/SCR/OROGEN/petsc3.8.4_MPI/petsc-3.8.4/src/dm/impls/da/da3.c
[534]PETSC ERROR: #5 DMSetUp_DA() line 25 in /scratch2/dlp/appli_local/SCR/OROGEN/petsc3.8.4_MPI/petsc-3.8.4/src/dm/impls/da/dareg.c
[534]PETSC ERROR: #6 DMSetUp() line 720 in /scratch2/dlp/appli_local/SCR/OROGEN/petsc3.8.4_MPI/petsc-3.8.4/src/dm/interface/dm.c

Thank you for your time,
Sincerely,

Anthony Jourdon
-------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: Configure_petsc_debug Type: application/octet-stream Size: 1228 bytes Desc: Configure_petsc_debug URL: -------------- next part -------------- An embedded and charset-unspecified text was scrubbed... Name: ex35.c URL:
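A minimal sketch of the kind of standalone reproducer suggested above (a reconstruction for this digest, not the attached ex35.c; the boundary types, stencil type, dof and stencil width are guesses, since the modified file is only attached). It creates a 3-D DMDA with the first failing global size, 2*512+1 x 2*64+1 x 2*256+1, and calls DMSetUp(), frame #6 of the reported stack. Running it with mpiexec -n 1024 should show whether the failure reproduces in isolation.

#include <petscdmda.h>

int main(int argc, char **argv)
{
  DM             da = NULL;
  PetscErrorCode ierr;
  PetscInt       M = 2*512+1, N = 2*64+1, P = 2*256+1; /* first failing size */

  ierr = PetscInitialize(&argc, &argv, NULL, NULL); if (ierr) return ierr;
  ierr = DMDACreate3d(PETSC_COMM_WORLD,
                      DM_BOUNDARY_NONE, DM_BOUNDARY_NONE, DM_BOUNDARY_NONE,
                      DMDA_STENCIL_BOX,
                      M, N, P,                                  /* global grid */
                      PETSC_DECIDE, PETSC_DECIDE, PETSC_DECIDE, /* ranks per axis */
                      1, 1,                                     /* dof, stencil width */
                      NULL, NULL, NULL, &da);CHKERRQ(ierr);
  ierr = DMSetFromOptions(da);CHKERRQ(ierr);
  /* Since petsc-3.8 the setup is a separate call; this is where the
     reported stack (ending in VecScatterCreate) dies at 1024 ranks. */
  ierr = DMSetUp(da);CHKERRQ(ierr);
  ierr = PetscPrintf(PETSC_COMM_WORLD, "DMSetUp succeeded\n");CHKERRQ(ierr);
  ierr = DMDestroy(&da);CHKERRQ(ierr);
  ierr = PetscFinalize();
  return ierr;
}

-------------- next part -------------- A non-text attachment was scrubbed...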
Name: TEST_DMDA_x512y64z256.err Type: application/octet-stream Size: 11013 bytes Desc: TEST_DMDA_x512y64z256.err URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: TEST_DMDA_x1024y128z512.err Type: application/octet-stream Size: 14998 bytes Desc: TEST_DMDA_x1024y128z512.err URL: From patrick.sanan at gmail.com Tue Jan 21 02:47:39 2020 From: patrick.sanan at gmail.com (Patrick Sanan) Date: Tue, 21 Jan 2020 09:47:39 +0100 Subject: [petsc-users] error handling In-Reply-To: References: Message-ID: Just to clarify: are you using PETSc within some larger application, which you are hoping to continue executing, even after PETSc produces an error? Am Di., 21. Jan. 2020 um 01:33 Uhr schrieb Sam Guo : > Hi Barry, > I understand ierr != 0 means something catastrophic. I just want to > release all memory before I exit PETSc. > > Thanks, > Sam > > On Mon, Jan 20, 2020 at 4:06 PM Smith, Barry F. > wrote: > >> >> Sam, >> >> I am not sure what your goal is but PETSc error return codes are >> error return codes not exceptions. They mean that something catastrophic >> happened and there is no recovery. >> >> Note that PETSc solvers do not return nonzero error codes on failure >> to converge etc. You call, for example, KPSGetConvergedReason() after a KSP >> solve to see if it has failed, this is not a catastrophic failure. If a >> MatCreate() or any other call returns a nonzero ierr the game is up, you >> cannot continue running PETSc. >> >> Barry >> >> >> > On Jan 20, 2020, at 5:41 PM, Matthew Knepley wrote: >> > >> > Not if you initialize the pointers to zero: Mat A = NULL. >> > >> > Matt >> > >> > On Mon, Jan 20, 2020 at 6:31 PM Sam Guo wrote: >> > I mean MatDestroy. >> > >> > On Mon, Jan 20, 2020 at 3:28 PM Sam Guo wrote: >> > Does it hurt to call Destroy function without calling CreateFunction? >> For example >> > Mat A, B; >> > PetscErrorCode ierr1, ierr2; >> > ierr1 = MatCreate(PETSC_COMM_WORLD,&A); >> > if(ierr1 == 0) >> > { >> > ierr2 = MatCreate(PETSC_COMM_WORLD >> > ,&B); >> > >> > } >> > if(ierr1 !=0 || ierr2 != 0) >> > { >> > Destroy(&A); >> > Destroy(&B); // if ierr1 !=0, MatCreat is not called on B. Does it >> hurt to call Destroy B here? >> > } >> > >> > >> > >> > On Mon, Jan 20, 2020 at 11:11 AM Dave May >> wrote: >> > >> > >> > On Mon 20. Jan 2020 at 19:47, Sam Guo wrote: >> > Can I assume if there is MatCreat or VecCreate, I should clean up the >> memory myself? >> > >> > Yes. You will need to call the matching Destroy function. >> > >> > >> > >> > On Mon, Jan 20, 2020 at 10:45 AM Sam Guo wrote: >> > I only include the first few lines of SLEPc example. What about >> following >> > ierr = MatCreate(PETSC_COMM_WORLD,&A);CHKERRQ(ierr); >> > ierr = MatSetSizes(A,PETSC_DECIDE,PETSC_DECIDE,n,n);CHKERRQ(ierr); >> > Is there any memory lost? >> > >> > On Mon, Jan 20, 2020 at 10:41 AM Dave May >> wrote: >> > >> > >> > On Mon 20. Jan 2020 at 19:39, Sam Guo wrote: >> > I don't have a specific case yet. Currently every call of PETSc is >> checked. If ierr is not zero, print the error and return. 
For example, >> > Mat A; /* problem matrix */ >> > EPS eps; /* eigenproblem solver context */ >> > EPSType type; >> > PetscReal error,tol,re,im; >> > PetscScalar kr,ki; Vec xr,xi; 25 >> > PetscInt n=30,i,Istart,Iend,nev,maxit,its,nconv; >> > PetscErrorCode ierr; >> > ierr = SlepcInitialize(&argc,&argv,(char*)0,help);CHKERRQ(ierr); >> > ierr = PetscOptionsGetInt(NULL,NULL,"-n",&n,NULL);CHKERRQ(ierr); >> > ierr = PetscPrintf(PETSC_COMM_WORLD,"\n1-D Laplacian Eigenproblem, >> n=%D\n\n",n);CHKERRQ(ierr); >> > >> > I am wondering if the memory is lost by calling CHKERRQ. >> > >> > No. >> > >> > >> > >> > On Mon, Jan 20, 2020 at 10:14 AM Dave May >> wrote: >> > >> > >> > On Mon 20. Jan 2020 at 19:11, Sam Guo wrote: >> > Dear PETSc dev team, >> > If PETSc function returns an error, what's the correct way to clean >> PETSc? >> > >> > The answer depends on the error message reported. Send the complete >> error message and a better answer can be provided. >> > >> > Particularly how to clean up the memory? >> > >> > Totally depends on the objects which aren?t being freed. You need to >> provide more information >> > >> > Thanks >> > Dave >> > >> > >> > Thanks, >> > Sam >> > >> > >> > -- >> > What most experimenters take for granted before they begin their >> experiments is infinitely more interesting than any results to which their >> experiments lead. >> > -- Norbert Wiener >> > >> > https://www.cse.buffalo.edu/~knepley/ >> >> -------------- next part -------------- An HTML attachment was scrubbed... URL: From dave.mayhem23 at gmail.com Tue Jan 21 03:02:16 2020 From: dave.mayhem23 at gmail.com (Dave May) Date: Tue, 21 Jan 2020 09:02:16 +0000 Subject: [petsc-users] DMDA Error In-Reply-To: References: Message-ID: Hi Anthony, On Tue, 21 Jan 2020 at 08:25, Anthony Jourdon wrote: > Hello, > > I made a test to try to reproduce the error. > To do so I modified the file $PETSC_DIR/src/dm/examples/tests/ex35.c > I attach the file in case of need. > > The same error is reproduced for 1024 mpi ranks. I tested two problem > sizes (2*512+1x2*64+1x2*256+1 and 2*1024+1x2*128+1x2*512+1) and the error > occured for both cases, the first case is also the one I used to run before > the OS and mpi updates. > I also run the code with -malloc_debug and nothing more appeared. > > I attached the configure command I used to build a debug version of petsc. > The error indicates the problem occurs on the bold line below (e.g. within MPI_Isend()) /* Post the Isends with the message length-info */ for (i=0,j=0; i > Thank you for your time, > Sincerly. > Anthony Jourdon > > > ------------------------------ > *De :* Zhang, Junchao > *Envoy? :* jeudi 16 janvier 2020 16:49 > *? :* Anthony Jourdon > *Cc :* petsc-users at mcs.anl.gov > *Objet :* Re: [petsc-users] DMDA Error > > It seems the problem is triggered by DMSetUp. You can write a small test > creating the DMDA with the same size as your code, to see if you can > reproduce the problem. If yes, it would be much easier for us to debug it. > --Junchao Zhang > > > On Thu, Jan 16, 2020 at 7:38 AM Anthony Jourdon < > jourdon_anthony at hotmail.fr> wrote: > > Dear Petsc developer, > > > I need assistance with an error. > > > I run a code that uses the DMDA related functions. I'm using petsc-3.8.4. > > > This code used to run very well on a super computer with the OS SLES11. > > Petsc was built using an intel mpi 5.1.3.223 module and intel mkl version > 2016.0.2.181 > > The code was running with no problem on 1024 and more mpi ranks. 
> > > Recently, the OS of the computer has been updated to RHEL7 > > I rebuilt Petsc using new available versions of intel mpi (2019U5) and mkl > (2019.0.5.281) which are the same versions for compilers and mkl. > > Since then I tested to run the exact same code on 8, 16, 24, 48, 512 and > 1024 mpi ranks. > > Until 1024 mpi ranks no problem, but for 1024 an error related to DMDA > appeared. I snip the first lines of the error stack here and the full error > stack is attached. > > > [534]PETSC ERROR: #1 PetscGatherMessageLengths() line 120 in > /scratch2/dlp/appli_local/SCR/OROGEN/petsc3.8.4_MPI/petsc-3.8.4/src/sys/utils/mpimesg.c > > [534]PETSC ERROR: #2 VecScatterCreate_PtoS() line 2288 in > /scratch2/dlp/appli_local/SCR/OROGEN/petsc3.8.4_MPI/petsc-3.8.4/src/vec/vec/utils/vpscat.c > > [534]PETSC ERROR: #3 VecScatterCreate() line 1462 in > /scratch2/dlp/appli_local/SCR/OROGEN/petsc3.8.4_MPI/petsc-3.8.4/src/vec/vec/utils/vscat.c > > [534]PETSC ERROR: #4 DMSetUp_DA_3D() line 1042 in > /scratch2/dlp/appli_local/SCR/OROGEN/petsc3.8.4_MPI/petsc-3.8.4/src/dm/impls/da/da3.c > > [534]PETSC ERROR: #5 DMSetUp_DA() line 25 in > /scratch2/dlp/appli_local/SCR/OROGEN/petsc3.8.4_MPI/petsc-3.8.4/src/dm/impls/da/dareg.c > > [534]PETSC ERROR: #6 DMSetUp() line 720 in > /scratch2/dlp/appli_local/SCR/OROGEN/petsc3.8.4_MPI/petsc-3.8.4/src/dm/interface/dm.c > > > > Thank you for your time, > > Sincerly, > > > Anthony Jourdon > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From patrick.sanan at gmail.com Tue Jan 21 03:10:22 2020 From: patrick.sanan at gmail.com (Patrick Sanan) Date: Tue, 21 Jan 2020 10:10:22 +0100 Subject: [petsc-users] chowiluviennacl In-Reply-To: References: Message-ID: Just some more background on that algorithm for others reading (which is obviously explained better in the paper, which Richard linked). As others point out, I don't think it fits your use case. The motivation for the Chow-Patel algorithm is the fact that traditional ILU preconditioners don't work well in "fine-grained parallel" environments like GPUs. "Triangularity" is something associated with lots of data dependencies - think about Gaussian elimination - the whole idea is solving one equation at a time, based on the solutions of other equations. The Chow-Patel approach is to approach things in a clever way (solving a set of nonlinear equations describing the individual entries of the factors) to simultaneously compute all the entries of the triangular factors, asynchronously on lots of threads. That doesn't solve the problem of how to solve the resulting triangular systems in parallel, though, so that's done with an iterative approach (that is, you can approximate L^(-1) with a polynomial in L). It's a new approach and thus should be considered experimental. Key to note is that all of this is only explored or implemented on a single node (shared-memory domain), so if you want to use this preconditioner on multiple ranks it needs to be a sub-preconditioner in a block Jacobi, ASM, or other method with "local subsolves". Am Mo., 20. Jan. 2020 um 23:41 Uhr schrieb Mills, Richard Tran via petsc-users : > Hi Xiangdong, > > Maybe I am misunderstanding you, but it sounds like you want an exact > direct solution, so I don't understand why you are using an incomplete > factorization solver for this. SuperLU_DIST (as Mark has suggested) or > MUMPS are two such packages that provide MPI-parallel sparse LU > factorization. If you need GPU support, SuperLU_DIST has such support. 
I > don't know the status of our support for using the GPU capabilities of > this, though -- I assume another developer can chime in regarding this. > > Note that the ILU provided by "chowiluiennacl" employs a very different > algorithm than the standard PCILU in PETSc, and you shouldn't expect to get > the same incomplete factorization. The algorithm is described in this paper > by Chow and Patel: > > https://www.cc.gatech.edu/~echow/pubs/parilu-sisc.pdf > > Best regards, > Richard > On 1/15/20 11:39 AM, Xiangdong wrote: > > I just submitted the issue: https://gitlab.com/petsc/petsc/issues/535 > > What I really want is an exact Block Tri-diagonal solver on GPU. Since for > block tridiagonal system, ILU0 would be the same as ILU. So I tried the > chowiluviennacl. but I found that the default parameters does not produce > the same ILU0 factorization as the CPU ones (PCILU). My guess is that if I > increase the number of sweeps chow_patel_ilu_config.sweeps(3), it may give > a better result. So the option Keys would be helpful. > > Since Mark mentioned the Superlu's GPU feature, can I use superlu or > hypre's GPU functionality through PETSc? > > Thank you. > > Xiangdong > > On Wed, Jan 15, 2020 at 2:22 PM Matthew Knepley wrote: > >> On Wed, Jan 15, 2020 at 1:48 PM Xiangdong wrote: >> >>> In the ViennaCL manual >>> http://viennacl.sourceforge.net/doc/manual-algorithms.html >>> >>> It did expose two parameters: >>> >>> // configuration of preconditioner: >>> viennacl::linalg::chow_patel_tag chow_patel_ilu_config; >>> chow_patel_ilu_config.sweeps(3); // three nonlinear sweeps >>> chow_patel_ilu_config.jacobi_iters(2); // two Jacobi iterations per >>> triangular 'solve' Rx=r >>> >>> and mentioned that: >>> The number of nonlinear sweeps and Jacobi iterations need to be set >>> problem-specific for best performance. >>> >>> In the PETSc' implementation: >>> >>> viennacl::linalg::chow_patel_tag ilu_tag; >>> ViennaCLAIJMatrix *mat = (ViennaCLAIJMatrix*)gpustruct->mat; >>> ilu->CHOWILUVIENNACL = new >>> viennacl::linalg::chow_patel_ilu_precond >>> >(*mat, ilu_tag); >>> >>> The default is used. Is it possible to expose these two parameters so >>> that user can change it through option keys? >>> >> >> Yes. Do you mind making an issue for it? That way we can better keep >> track. >> >> https://gitlab.com/petsc/petsc/issues >> >> Thanks, >> >> Matt >> >> >>> Thank you. >>> >>> Xiangdong >>> >>> On Wed, Jan 15, 2020 at 12:40 PM Matthew Knepley >>> wrote: >>> >>>> On Wed, Jan 15, 2020 at 9:59 AM Xiangdong wrote: >>>> >>>>> Maybe I am not clear. I want to solve the block tridiagonal system >>>>> Tx=b a few times with same T but different b. On CPU, I can have it by >>>>> applying the ILU0 and reuse the factorization. Since it is block >>>>> tridiagonal, ILU0 would give same results as LU. >>>>> >>>>> I am trying to do the same thing on GPU with chowiluviennacl, but >>>>> found default factorization does not produce the exact factorization for >>>>> tridiagonal system. Can we tight the drop off tolerance so that it can work >>>>> as LU for tridiagonal system? >>>>> >>>> >>>> There are no options in our implementation. You could look at the >>>> ViennaCL manual to see if we missed something. >>>> >>>> Thanks, >>>> >>>> Matt >>>> >>>> >>>>> Thank you. >>>>> >>>>> Xiangdong >>>>> >>>>> On Wed, Jan 15, 2020 at 9:41 AM Matthew Knepley >>>>> wrote: >>>>> >>>>>> On Wed, Jan 15, 2020 at 9:36 AM Xiangdong wrote: >>>>>> >>>>>>> Can chowiluviennacl do ilu0? 
>>>>>>> >>>>>>> I need to solve a tri-diagonal system directly. If I apply the >>>>>>> PCILU, I will obtain the exact solution with preonly + pcilu. However, the >>>>>>> preonly + chowiluviennacl will not provide the exact solution. Any option >>>>>>> keys to set the CHOWILUVIENNACL filling level or dropping off tolerance >>>>>>> like the standard ilu? >>>>>>> >>>>>> >>>>>> No. However, such a scheme makes less sense here. This algorithm >>>>>> spawns a individual threads for individual elements. Drop tolerance >>>>>> is not less work, it is sparser, but that should not matter for a >>>>>> tridiagonal system. Levels also is not applicable since you have only 1 >>>>>> level. >>>>>> >>>>>> Thanks, >>>>>> >>>>>> Matt >>>>>> >>>>>> >>>>>>> Thank you. >>>>>>> >>>>>>> Best, >>>>>>> Xiangdong >>>>>>> >>>>>>> >>>>>>> >>>>>>> On Tue, Jan 14, 2020 at 10:05 PM Matthew Knepley >>>>>>> wrote: >>>>>>> >>>>>>>> On Tue, Jan 14, 2020 at 9:56 PM Xiangdong >>>>>>>> wrote: >>>>>>>> >>>>>>>>> Dear Developers, >>>>>>>>> >>>>>>>>> I have a quick question about the chowiluviennacl. When I tried to >>>>>>>>> use it, I found that it only works for np=1, not np>1. However, in the >>>>>>>>> description of chowiluviennacl.cxx, it says "the ViennaCL Chow-Patel >>>>>>>>> parallel ILU preconditioner". >>>>>>>>> >>>>>>>> >>>>>>>> By parallel, this means shared memory parallelism on the GPU. >>>>>>>> >>>>>>>> >>>>>>>>> I am wondering whether I am using it correctly. >>>>>>>>> Does chowiluviennacl work for np>1? >>>>>>>>> >>>>>>>> >>>>>>>> I do not believe so. I do not see why it could not be extended, but >>>>>>>> that would mean writing some more code. >>>>>>>> >>>>>>>> Thanks, >>>>>>>> >>>>>>>> Matt >>>>>>>> >>>>>>>> >>>>>>>>> In addition, are there option keys for the chowiluviennacl one can >>>>>>>>> try? >>>>>>>>> Thank you. >>>>>>>>> >>>>>>>>> Best, >>>>>>>>> Xiangdong >>>>>>>>> >>>>>>>> >>>>>>>> >>>>>>>> -- >>>>>>>> What most experimenters take for granted before they begin their >>>>>>>> experiments is infinitely more interesting than any results to which their >>>>>>>> experiments lead. >>>>>>>> -- Norbert Wiener >>>>>>>> >>>>>>>> https://www.cse.buffalo.edu/~knepley/ >>>>>>>> >>>>>>>> >>>>>>> >>>>>> >>>>>> -- >>>>>> What most experimenters take for granted before they begin their >>>>>> experiments is infinitely more interesting than any results to which their >>>>>> experiments lead. >>>>>> -- Norbert Wiener >>>>>> >>>>>> https://www.cse.buffalo.edu/~knepley/ >>>>>> >>>>>> >>>>> >>>> >>>> -- >>>> What most experimenters take for granted before they begin their >>>> experiments is infinitely more interesting than any results to which their >>>> experiments lead. >>>> -- Norbert Wiener >>>> >>>> https://www.cse.buffalo.edu/~knepley/ >>>> >>>> >>> >> >> -- >> What most experimenters take for granted before they begin their >> experiments is infinitely more interesting than any results to which their >> experiments lead. >> -- Norbert Wiener >> >> https://www.cse.buffalo.edu/~knepley/ >> >> > -------------- next part -------------- An HTML attachment was scrubbed... URL: From ys453 at cam.ac.uk Tue Jan 21 04:02:23 2020 From: ys453 at cam.ac.uk (Y. Shidi) Date: Tue, 21 Jan 2020 10:02:23 +0000 Subject: [petsc-users] Compiling lists all the libraries Message-ID: <0f70d95010e7f9a99a745fa5bf36e3df@cam.ac.uk> Dear developers, I stated to use certain libraries in the Makefile of my code, but it turns out that Petsc automatically uses all the libraries. Some are listed twice. I am wondering how does this happen. 
-Wl,-rpath,/home/ys453/Sources/petsc/arch-linux2-c-debug/lib -L/home/ys453/Sources/petsc/arch-linux2-c-debug/lib -Wl,-rpath,/home/ys453/Sources/petsc/arch-linux2-c-debug/lib -Wl,-rpath,/usr/lib/openmpi/lib -L/usr/lib/openmpi/lib -Wl,-rpath,/usr/lib/gcc/x86_64-linux-gnu/5 -L/usr/lib/gcc/x86_64-linux-gnu/5 -Wl,-rpath,/usr/lib/x86_64-linux-gnu -L/usr/lib/x86_64-linux-gnu -Wl,-rpath,/lib/x86_64-linux-gnu -L/lib/x86_64-linux-gnu -lpetsc -lcmumps -ldmumps -lsmumps -lzmumps -lmumps_common -lpord -lscalapack -lsuperlu_dist -lHYPRE -llapack -lblas -lparmetis -lmetis -lptesmumps -lptscotch -lptscotcherr -lesmumps -lscotch -lscotcherr -lm -lX11 -lstdc++ -ldl -lmpi_usempif08 -lmpi_usempi_ignore_tkr -lmpi_mpifh -lmpi -lgfortran -lm -lgfortran -lm -lgcc_s -lquadmath -lpthread -lrt -lm -lpthread -lz -lstdc++ -ldl -lparmetis -lmetis -lpetsc -lboost_timer -lboost_system

Thank you for your time.

Kind Regards,
Shidi

From timothee.nicolas at gmail.com Tue Jan 21 04:13:03 2020 From: timothee.nicolas at gmail.com (Timothée Nicolas) Date: Tue, 21 Jan 2020 11:13:03 +0100 Subject: [petsc-users] SNESSetOptionsPrefix usage In-Reply-To: <3DF2D586-B2B6-4FA7-9E26-D3455FBB21E9@mcs.anl.gov> References: <3DF2D586-B2B6-4FA7-9E26-D3455FBB21E9@mcs.anl.gov> Message-ID:

Hi,

I am taking a slightly different path in the development, so I'll answer later when I come back to that point. Thanks for your help so far.

Cheers,

Timothée

On Thu, Jan 16, 2020 at 15:38, Smith, Barry F. wrote:
>
> > On Jan 16, 2020, at 3:18 AM, Timothée Nicolas <timothee.nicolas at gmail.com> wrote:
> >
> > Actually, for the main solver it works. I'm thinking, could it be due to the fact that the second SNES instance is defined in a routine that is called somewhere inside the FormFunction of the main SNES? We are improving our boundary condition, which becomes quite complex, and we have a small problem to solve, so I'm trying to handle it with a SNES. So the two SNES are nested, in a sense.
>
> This should be fine. We do this.
>
> Are you sure the inner SNES is actually being called?
>
> Run with -help | grep green: does it print a help message for your green options?
>
> Barry
>
> >
> > Timothée
> >
> > On Wed, Jan 15, 2020 at 23:24, Timothée Nicolas <timothee.nicolas at gmail.com> wrote:
> > I can actually use some command line arguments. My command line arguments actually read
> >
> > -snes_mf -green_snes_monitor
> >
> > and the first -snes_mf argument (for the main solver snes) is correctly taken into account.
> > I will try what Barry suggested, I'll tell you if I find the reason.
> >
> > Best regards, thanks for your comments
> >
> > Timothée
> >
> > On Wed, Jan 15, 2020 at 18:56, Matthew Knepley wrote:
> > I think that Mark is suggesting that no command line arguments are getting in.
> >
> > Timothee,
> >
> > Can you use any command line arguments?
> >
> > Thanks,
> >
> > Matt
> >
> > On Wed, Jan 15, 2020 at 12:04 PM Smith, Barry F. via petsc-users <petsc-users at mcs.anl.gov> wrote:
> >
> > Should still work. Run in the debugger and put a break point in snessetoptionsprefix_ and see what it is trying to do
> >
> > Barry
> >
> >
> > > On Jan 15, 2020, at 8:58 AM, Timothée Nicolas <timothee.nicolas at gmail.com> wrote:
> > >
> > > Hi, thanks for your answer,
> > >
> > > I'm using PETSc version 3.10.4
> > >
> > > Timothée
> > >
> > > On Wed, Jan 15, 2020 at 14:59, Mark Adams wrote:
> > > I'm guessing a Fortran issue. What version of PETSc are you using?
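For reference while the thread is paused, a minimal Fortran sketch of the prefix pattern under discussion (an illustration for this digest, assuming the petsc-3.10-era Fortran module bindings; the variable names are invented). The ordering shown, SNESSetOptionsPrefix() before SNESSetFromOptions(), is the required one, because SNESSetFromOptions() reads the options database using whatever prefix is set at that moment. And as noted above, -green_snes_monitor can only produce output if the inner solve is actually reached; -help | grep green is a quick check that the prefixed options are registered.

      program main
#include <petsc/finclude/petscsnes.h>
      use petscsnes
      implicit none
      SNES           snes_outer, snes_inner
      PetscErrorCode ierr

      call PetscInitialize(PETSC_NULL_CHARACTER, ierr)
      if (ierr /= 0) stop

      ! Main solver: controlled by the unprefixed options
      ! (-snes_mf, -snes_monitor, ...)
      call SNESCreate(PETSC_COMM_WORLD, snes_outer, ierr)
      call SNESSetFromOptions(snes_outer, ierr)

      ! Inner solver, e.g. created inside the FormFunction of the main
      ! SNES: prefix first, then SetFromOptions, so that
      ! -green_snes_monitor and friends are picked up.
      call SNESCreate(PETSC_COMM_SELF, snes_inner, ierr)
      call SNESSetOptionsPrefix(snes_inner, 'green_', ierr)
      call SNESSetFromOptions(snes_inner, ierr)

      call SNESDestroy(snes_inner, ierr)
      call SNESDestroy(snes_outer, ierr)
      call PetscFinalize(ierr)
      end program main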
> > > > > > On Wed, Jan 15, 2020 at 8:36 AM Timoth?e Nicolas < > timothee.nicolas at gmail.com> wrote: > > > Dear PETSc users, > > > > > > I am confused by the usage of SNESSetOptionsPrefix. I understand this > is required if you have for example different SNES in your program and want > to set different options for them. > > > So for my second snes I wrote > > > > > > call SNESCreate(MPI_COMM_SELF,snes,ierr) > > > call SNESSetOptionsPrefix(snes,'green_',ierr) > > > call SNESSetFromOptions(snes,ierr) > > > > > > etc. > > > > > > Then when launching the program I wanted to monitor that snes so I > launched it with the option -green_snes_monitor instead of -snes_monitor. > But I keep getting the message > > > > > > WARNING! There are options you set that were not used! > > > WARNING! could be spelling mistake, etc! > > > Option left: name:-green_snes_monitor (no value) > > > > > > What do I miss here? > > > > > > Best regards > > > > > > Timoth?e NICOLAS > > > > > > > > -- > > What most experimenters take for granted before they begin their > experiments is infinitely more interesting than any results to which their > experiments lead. > > -- Norbert Wiener > > > > https://www.cse.buffalo.edu/~knepley/ > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From dmitry.melnichuk at geosteertech.com Tue Jan 21 04:28:15 2020 From: dmitry.melnichuk at geosteertech.com (=?utf-8?B?0JTQvNC40YLRgNC40Lkg0JzQtdC70YzQvdC40YfRg9C6?=) Date: Tue, 21 Jan 2020 13:28:15 +0300 Subject: [petsc-users] Solver compilation with 64-bit version of PETSc under Windows 10 using Cygwin References: <35432251579093203@vla3-307f9063dbf4.qloud-c.yandex.net> <12790071579517006@iva7-8a22bc446c12.qloud-c.yandex.net> Message-ID: <5007321579602495@sas8-55d8cbf44a35.qloud-c.yandex.net> An HTML attachment was scrubbed... URL: From ys453 at cam.ac.uk Tue Jan 21 04:46:43 2020 From: ys453 at cam.ac.uk (Y. Shidi) Date: Tue, 21 Jan 2020 10:46:43 +0000 Subject: [petsc-users] Compiling lists all the libraries In-Reply-To: <0f70d95010e7f9a99a745fa5bf36e3df@cam.ac.uk> References: <0f70d95010e7f9a99a745fa5bf36e3df@cam.ac.uk> Message-ID: I think because I did include ${PETSC_DIR}/lib/petsc/conf/variables in the Makefile. On 2020-01-21 10:02, Y. Shidi wrote: > Dear developers, > > I stated to use certain libraries in the Makefile of my code, > but it turns out that Petsc automatically uses all the libraries. > Some are listed twice. > I am wondering how does this happen. > > -Wl,-rpath,/home/ys453/Sources/petsc/arch-linux2-c-debug/lib > -L/home/ys453/Sources/petsc/arch-linux2-c-debug/lib > -Wl,-rpath,/home/ys453/Sources/petsc/arch-linux2-c-debug/lib > -Wl,-rpath,/usr/lib/openmpi/lib -L/usr/lib/openmpi/lib > -Wl,-rpath,/usr/lib/gcc/x86_64-linux-gnu/5 > -L/usr/lib/gcc/x86_64-linux-gnu/5 -Wl,-rpath,/usr/lib/x86_64-linux-gnu > -L/usr/lib/x86_64-linux-gnu -Wl,-rpath,/lib/x86_64-linux-gnu > -L/lib/x86_64-linux-gnu -lpetsc -lcmumps -ldmumps -lsmumps -lzmumps > -lmumps_common -lpord -lscalapack -lsuperlu_dist -lHYPRE -llapack > -lblas -lparmetis -lmetis -lptesmumps -lptscotch -lptscotcherr > -lesmumps -lscotch -lscotcherr -lm -lX11 -lstdc++ -ldl -lmpi_usempif08 > -lmpi_usempi_ignore_tkr -lmpi_mpifh -lmpi -lgfortran -lm -lgfortran > -lm -lgcc_s -lquadmath -lpthread -lrt -lm -lpthread -lz -lstdc++ -ldl > -lparmetis -lmetis -lpetsc -lboost_timer -lboost_system > > Thank you for your time. > > Kind Regards, > Shidi From bsmith at mcs.anl.gov Tue Jan 21 07:57:31 2020 From: bsmith at mcs.anl.gov (Smith, Barry F.) 
Date: Tue, 21 Jan 2020 13:57:31 +0000 Subject: [petsc-users] Solver compilation with 64-bit version of PETSc under Windows 10 using Cygwin In-Reply-To: <5007321579602495@sas8-55d8cbf44a35.qloud-c.yandex.net> References: <35432251579093203@vla3-307f9063dbf4.qloud-c.yandex.net> <12790071579517006@iva7-8a22bc446c12.qloud-c.yandex.net> <5007321579602495@sas8-55d8cbf44a35.qloud-c.yandex.net> Message-ID: <553C26B0-C2F9-45F4-9EF8-E443EAE86E49@mcs.anl.gov> I would avoid OpenBLAS it just introduces one new variable that could introduce problems. PetscErrorCode is ALWAYS 32 bit, PetscInt becomes 64 bit with --with-64-bit-indices, PETScMPIInt is ALWAYS 32 bit, PetscBLASInt is usually 32 bit unless you build with a special BLAS that supports 64 bit indices. In theory the ex5f should be fine, we test it all the time with all possible values of the integer. Please redo the ./configure with --with-64-bit-indices --download-fblaslapack and send the configure.log this provides the most useful information on the decisions configure has made. Barry > On Jan 21, 2020, at 4:28 AM, ??????? ????????? wrote: > > > First you need to figure out what is triggering: > > > C:/MPI/Bin/mpiexec.exe: error while loading shared libraries: ?: cannot open shared object file: No such file or directory > > > Googling it finds all kinds of suggestions for Linux. But Windows? Maybe the debugger will help. > > > Second > > VecNorm_Seq line 221 /cygdrive/d/Computational_geomechanics/installation/petsc-barry/src/vec/vec/impls/seq/bvec2.c > > > > Debugger is best to find out what is triggering this. Since it is the C side of things it would be odd that the Fortran change affects it. > > > Barry > > > I am in the process of finding out the causes of these errors. > > I'm inclined to the fact that BLAS has still some influence on what is happening. > Because testing of 32-bit version of PETSc gives such weird error with mpiexec.exe, but Fortran example ex5f completes succeccfully. > > I need to say that my solver compiled with 64-bit version of PETSc failed with Segmentation Violation error (the same as ex5f) when calling KSPSolve(Krylov,Vec_F,Vec_U,ierr). > During the execution KSPSolve appeals to VecNorm_Seq in bvec2.c. Also VecNorm_Seq uses several types of integer: PetscErrorCode, PetscInt, PetscBLASInt. > I suspect that PetscBLASInt may conflict with PetscInt. > Also I noted that execution of KSPSolve() does not even start , so arguments (Krylov,Vec_F,Vec_U,ierr) cannot be passed to KSPSolve(). > (inserted fprint() in the top of KSPSolve and saw no output) > > > So I tried to configure PETSc with --download-fblaslapack --with-64-bit-blas-indices, but got an error that > > fblaslapack does not support -with-64-bit-blas-indices > > Switching to flags --download-openblas -with-64-bit-blas-indices was unsuccessfully too because of error: > > Error during download/extract/detection of OPENBLAS: > Unable to download openblas > Could not execute "['git clone https://github.com/xianyi/OpenBLAS.git /cygdrive/d/Computational_geomechanics/installation/petsc-barry/arch-mswin-c-debug/externalpackages/git.openblas']": > fatal: destination path '/cygdrive/d/Computational_geomechanics/installation/petsc-barry/arch-mswin-c-debug/externalpackages/git.openblas' already exists and is not an empty directory. > Unable to download package OPENBLAS from: git://https://github.com/xianyi/OpenBLAS.git > * If URL specified manually - perhaps there is a typo? 
> * If your network is disconnected - please reconnect and rerun ./configure > * Or perhaps you have a firewall blocking the download > * You can run with --with-packages-download-dir=/adirectory and ./configure will instruct you what packages to download manually > * or you can download the above URL manually, to /yourselectedlocation > and use the configure option: > --download-openblas=/yourselectedlocation > Unable to download openblas > Could not execute "['git clone https://github.com/xianyi/OpenBLAS.git /cygdrive/d/Computational_geomechanics/installation/petsc-barry/arch-mswin-c-debug/externalpackages/git.openblas']": > fatal: destination path '/cygdrive/d/Computational_geomechanics/installation/petsc-barry/arch-mswin-c-debug/externalpackages/git.openblas' already exists and is not an empty directory. > Unable to download package OPENBLAS from: git://https://github.com/xianyi/OpenBLAS.git > * If URL specified manually - perhaps there is a typo? > * If your network is disconnected - please reconnect and rerun ./configure > * Or perhaps you have a firewall blocking the download > * You can run with --with-packages-download-dir=/adirectory and ./configure will instruct you what packages to download manually > * or you can download the above URL manually, to /yourselectedlocation > and use the configure option: > --download-openblas=/yourselectedlocation > Could not locate downloaded package OPENBLAS in /cygdrive/d/Computational_geomechanics/installation/petsc-barry/arch-mswin-c-debug/externalpackages > > But I checked the last location (.../externalpackages) and saw that OpenBLAS downloaded and unzipped. > > > > Kind regards, > Dmitry Melnichuk > > > 20.01.2020, 16:32, "Smith, Barry F." : > > First you need to figure out what is triggering: > > C:/MPI/Bin/mpiexec.exe: error while loading shared libraries: ?: cannot open shared object file: No such file or directory > > Googling it finds all kinds of suggestions for Linux. But Windows? Maybe the debugger will help. > > Second > > > > VecNorm_Seq line 221 /cygdrive/d/Computational_geomechanics/installation/petsc-barry/src/vec/vec/impls/seq/bvec2.c > > Debugger is best to find out what is triggering this. Since it is the C side of things it would be odd that the Fortran change affects it. > > Barry > > > > > > > > On Jan 20, 2020, at 4:43 AM, ??????? ????????? wrote: > > Thank you so much for your assistance! > > As far as I have been able to find out, the errors "Type mismatch in argument ?ierr?" have been successfully fixed. > But execution of command "make PETSC_DIR=/cygdrive/d/... PETSC_ARCH=arch-mswin-c-debug check" leads to the appereance of Segmantation Violation error. > > I compiled PETSc with Microsoft MPI v10. > Does it make sense to compile PETSc with another MPI implementation (such as MPICH) in order to resolve the issue? 
> > Error message: > Running test examples to verify correct installation > Using PETSC_DIR=/cygdrive/d/Computational_geomechanics/installation/petsc-barry and PETSC_ARCH=arch-mswin-c-debug > Possible error running C/C++ src/snes/examples/tutorials/ex19 with 1 MPI process > See http://www.mcs.anl.gov/petsc/documentation/faq.html > C:/MPI/Bin/mpiexec.exe: error while loading shared libraries: ?: cannot open shared object file: No such file or directory > Possible error running C/C++ src/snes/examples/tutorials/ex19 with 2 MPI processes > See http://www.mcs.anl.gov/petsc/documentation/faq.html > C:/MPI/Bin/mpiexec.exe: error while loading shared libraries: ?: cannot open shared object file: No such file or directory > Possible error running Fortran example src/snes/examples/tutorials/ex5f with 1 MPI process > See http://www.mcs.anl.gov/petsc/documentation/faq.html > [0]PETSC ERROR: ------------------------------------------------------------------------ > [0]PETSC ERROR: Caught signal number 11 SEGV: Segmentation Violation, probably memory access out of range > [0]PETSC ERROR: Try option -start_in_debugger or -on_error_attach_debugger > [0]PETSC ERROR: or see https://www.mcs.anl.gov/petsc/documentation/faq.html#valgrind > [0]PETSC ERROR: or try http://valgrind.org on GNU/linux and Apple Mac OS X to find memory corruption errors > [0]PETSC ERROR: likely location of problem given in stack below > [0]PETSC ERROR: --------------------- Stack Frames ------------------------------------ > [0]PETSC ERROR: Note: The EXACT line numbers in the stack are not available, > [0]PETSC ERROR: INSTEAD the line number of the start of the function > [0]PETSC ERROR: is given. > [0]PETSC ERROR: [0] VecNorm_Seq line 221 /cygdrive/d/Computational_geomechanics/installation/petsc-barry/src/vec/vec/impls/seq/bvec2.c > [0]PETSC ERROR: [0] VecNorm line 213 /cygdrive/d/Computational_geomechanics/installation/petsc-barry/src/vec/vec/interface/rvector.c > [0]PETSC ERROR: [0] SNESSolve_NEWTONLS line 144 /cygdrive/d/Computational_geomechanics/installation/petsc-barry/src/snes/impls/ls/ls.c > [0]PETSC ERROR: [0] SNESSolve line 4375 /cygdrive/d/Computational_geomechanics/installation/petsc-barry/src/snes/interface/snes.c > [0]PETSC ERROR: --------------------- Error Message -------------------------------------------------------------- > [0]PETSC ERROR: Signal received > [0]PETSC ERROR: See https://www.mcs.anl.gov/petsc/documentation/faq.html for trouble shooting. > [0]PETSC ERROR: Petsc Development GIT revision: unknown GIT Date: unknown > [0]PETSC ERROR: ./ex5f on a arch-mswin-c-debug named DESKTOP-R88IMOB by useruser Mon Jan 20 09:18:34 2020 > [0]PETSC ERROR: Configure options --with-cc=x86_64-w64-mingw32-gcc --with-cxx=x86_64-w64-mingw32-g++ --with-fc=x86_64-w64-mingw32-gfortran --with-mpi-include=/cygdrive/c/MPISDK/Include --with-mpi-lib=/cygdrive/c/MPISDK/Lib/libmsmpi.a --with-mpi-mpiexec=/cygdrive/c/MPI/Bin/mpiexec.exe --with-debugging=yes -CFLAGS=-O2 -CXXFLAGS=-O2 -FFLAGS="-O2 -static-libgfortran -static -lpthread -fno-range-check -fdefault-integer-8" --download-fblaslapack --with-shared-libraries=no --with-64-bit-indices --force > [0]PETSC ERROR: #1 User provided function() line 0 in unknown file > > job aborted: > [ranks] message > > [0] application aborted > aborting MPI_COMM_WORLD (comm=0x44000000), error 50152059, comm rank 0 > > ---- error analysis ----- > > [0] on DESKTOP-R88IMOB > ./ex5f aborted the job. 
abort code 50152059 > > ---- error analysis ----- > Completed test examples > > Kind regards, > Dmitry Melnichuk > > 19.01.2020, 07:47, "Smith, Barry F." : > > Dmitry, > > I have completed and tested the branch barry/2020-01-15/support-default-integer-8 it is undergoing testing now https://gitlab.com/petsc/petsc/merge_requests/2456 > > Please give it a try. Note that MPI has no support for integer promotion so YOU must insure that any MPI calls from Fortran pass 4 byte integers not promoted 8 byte integers. > > I have tested it with recent versions of MPICH and OpenMPI, it is fragile at compile time and may fail to compile with different versions of MPI. > > Good luck, > > Barry > > I do not recommend this approach for integer promotion in Fortran. Just blindly promoting all integers can often lead to problems. I recommend using the kind mechanism of > Fortran to insure that each variable is the type you want, you can recompile with different options to promote the kind declared variables you wish. Of course this is more intrusive and requires changes to the Fortran code. > > > On Jan 15, 2020, at 7:00 AM, ??????? ????????? wrote: > > Hello all! > > At present time I need to compile solver called Defmod (https://bitbucket.org/stali/defmod/wiki/Home), which is written in Fortran 95. > Defmod uses PETSc for solving linear algebra system. > Solver compilation with 32-bit version of PETSc does not cause any problem. > But solver compilation with 64-bit version of PETSc produces an error with size of ierr PETSc variable. > > 1. For example, consider the following statements written in Fortran: > > > PetscErrorCode :: ierr_m > PetscInt :: ierr > ... > ... > call VecDuplicate(Vec_U,Vec_Um,ierr) > call VecCopy(Vec_U,Vec_Um,ierr) > call VecGetLocalSize(Vec_U,j,ierr) > call VecGetOwnershipRange(Vec_U,j1,j2,ierr_m) > > > As can be seen first three subroutunes require ierr to be size of INTEGER(8), while the last subroutine (VecGetOwnershipRange) requires ierr to be size of INTEGER(4). > Using the same integer format gives an error: > > There is no specific subroutine for the generic ?vecgetownershiprange? at (1) > > 2. Another example is: > > > call MatAssemblyBegin(Mat_K,Mat_Final_Assembly,ierr) > CHKERRA(ierr) > call MatAssemblyEnd(Mat_K,Mat_Final_Assembly,ierr) > > > I am not able to define an appropriate size if ierr in CHKERRA(ierr). If I choose INTEGER(8), the error "Type mismatch in argument ?ierr? at (1); passed INTEGER(8) to INTEGER(4)" occurs. > If I define ierr as INTEGER(4), the error "Type mismatch in argument ?ierr? at (1); passed INTEGER(4) to INTEGER(8)" appears. > > > 3. 
If I change the sizes of ierr vaiables as error messages require, the compilation completed successfully, but an error occurs when calculating the RHS vector with following message: > > [0]PETSC ERROR: Out of range index value -4 cannot be negative > > > Command to configure 32-bit version of PETSc under Windows 10 using Cygwin: > ./configure --with-cc=x86_64-w64-mingw32-gcc --with-cxx=x86_64-w64-mingw32-g++ --with-fc=x86_64-w64-mingw32-gfortran --download-fblaslapack --with-mpi-include=/cygdrive/c/MPISDK/Include --with-mpi-lib=/cygdrive/c/MPISDK/Lib/libmsmpi.a --with-mpi-mpiexec=/cygdrive/c/MPI/Bin/mpiexec.exe --with-debugging=yes -CFLAGS='-O2' -CXXFLAGS='-O2' -FFLAGS='-O2 -static-libgfortran -static -lpthread -fno-range-check' --with-shared-libraries=no > > Command to configure 64-bit version of PETSc under Windows 10 using Cygwin: > ./configure --with-cc=x86_64-w64-mingw32-gcc --with-cxx=x86_64-w64-mingw32-g++ --with-fc=x86_64-w64-mingw32-gfortran --download-fblaslapack --with-mpi-include=/cygdrive/c/MPISDK/Include --with-mpi-lib=/cygdrive/c/MPISDK/Lib/libmsmpi.a --with-mpi-mpiexec=/cygdrive/c/MPI/Bin/mpiexec.exe --with-debugging=yes -CFLAGS='-O2' -CXXFLAGS='-O2' -FFLAGS='-O2 -static-libgfortran -static -lpthread -fno-range-check -fdefault-integer-8' --with-shared-libraries=no --with-64-bit-indices --known-64-bit-blas-indices > > > Kind regards, > Dmitry Melnichuk > > From balay at mcs.anl.gov Tue Jan 21 08:09:00 2020 From: balay at mcs.anl.gov (Balay, Satish) Date: Tue, 21 Jan 2020 14:09:00 +0000 Subject: [petsc-users] Solver compilation with 64-bit version of PETSc under Windows 10 using Cygwin In-Reply-To: <553C26B0-C2F9-45F4-9EF8-E443EAE86E49@mcs.anl.gov> References: <35432251579093203@vla3-307f9063dbf4.qloud-c.yandex.net> <12790071579517006@iva7-8a22bc446c12.qloud-c.yandex.net> <5007321579602495@sas8-55d8cbf44a35.qloud-c.yandex.net> <553C26B0-C2F9-45F4-9EF8-E443EAE86E49@mcs.anl.gov> Message-ID: I would suggest installing regular 32bit int blas/lapack - And then using it with --with-blaslapack-lib option [we don't know what -fdefault-integer-8 does with --download-fblaslapack - if it really creates --known-64-bit-blas-indices variant of blas/lapack or not] Satish On Tue, 21 Jan 2020, Smith, Barry F. via petsc-users wrote: > > I would avoid OpenBLAS it just introduces one new variable that could introduce problems. > > PetscErrorCode is ALWAYS 32 bit, PetscInt becomes 64 bit with --with-64-bit-indices, PETScMPIInt is ALWAYS 32 bit, PetscBLASInt is usually 32 bit unless you build with a special BLAS that supports 64 bit indices. > > In theory the ex5f should be fine, we test it all the time with all possible values of the integer. Please redo the ./configure with --with-64-bit-indices --download-fblaslapack and send the configure.log this provides the most useful information on the decisions configure has made. > > Barry > > > > On Jan 21, 2020, at 4:28 AM, ??????? ????????? wrote: > > > > > First you need to figure out what is triggering: > > > > > C:/MPI/Bin/mpiexec.exe: error while loading shared libraries: ?: cannot open shared object file: No such file or directory > > > > > Googling it finds all kinds of suggestions for Linux. But Windows? Maybe the debugger will help. > > > > > Second > > > VecNorm_Seq line 221 /cygdrive/d/Computational_geomechanics/installation/petsc-barry/src/vec/vec/impls/seq/bvec2.c > > > > > > > Debugger is best to find out what is triggering this. Since it is the C side of things it would be odd that the Fortran change affects it. 
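To make the kind-based alternative discussed in this thread concrete, a small sketch (an editor's illustration for this digest, not code from the posts; save it as a .F90 file so the preprocessor runs): each variable is declared with the PETSc kind that matches its use, so nothing depends on -fdefault-integer-8. In a --with-64-bit-indices build, PetscInt is an 8-byte integer while PetscErrorCode stays 4-byte, and anything passed directly to MPI should be a 4-byte integer such as PetscMPIInt.

program kinds
#include <petsc/finclude/petscvec.h>
  use petscvec
  implicit none
  PetscErrorCode :: ierr        ! always 4-byte, for every PETSc call
  PetscInt       :: n, jlo, jhi ! 8-byte when configured --with-64-bit-indices
  PetscMPIInt    :: rank        ! always 4-byte: MPI expects plain integers
  Vec            :: u

  call PetscInitialize(PETSC_NULL_CHARACTER, ierr)
  if (ierr /= 0) stop
  ! MPI takes default (4-byte) integers; this stays correct because no
  ! blanket integer promotion flag is in effect.
  call MPI_Comm_rank(PETSC_COMM_WORLD, rank, ierr)

  n = 30
  call VecCreateMPI(PETSC_COMM_WORLD, PETSC_DECIDE, n, u, ierr);CHKERRA(ierr)
  call VecGetOwnershipRange(u, jlo, jhi, ierr);CHKERRA(ierr) ! jlo, jhi are PetscInt
  call VecDestroy(u, ierr);CHKERRA(ierr)
  call PetscFinalize(ierr)
end program kinds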
> > > > > Barry > > > > > > I am in the process of finding out the causes of these errors. > > > > I'm inclined to the fact that BLAS has still some influence on what is happening. > > Because testing of 32-bit version of PETSc gives such weird error with mpiexec.exe, but Fortran example ex5f completes succeccfully. > > > > I need to say that my solver compiled with 64-bit version of PETSc failed with Segmentation Violation error (the same as ex5f) when calling KSPSolve(Krylov,Vec_F,Vec_U,ierr). > > During the execution KSPSolve appeals to VecNorm_Seq in bvec2.c. Also VecNorm_Seq uses several types of integer: PetscErrorCode, PetscInt, PetscBLASInt. > > I suspect that PetscBLASInt may conflict with PetscInt. > > Also I noted that execution of KSPSolve() does not even start , so arguments (Krylov,Vec_F,Vec_U,ierr) cannot be passed to KSPSolve(). > > (inserted fprint() in the top of KSPSolve and saw no output) > > > > > > So I tried to configure PETSc with --download-fblaslapack --with-64-bit-blas-indices, but got an error that > > > > fblaslapack does not support -with-64-bit-blas-indices > > > > Switching to flags --download-openblas -with-64-bit-blas-indices was unsuccessfully too because of error: > > > > Error during download/extract/detection of OPENBLAS: > > Unable to download openblas > > Could not execute "['git clone https://github.com/xianyi/OpenBLAS.git /cygdrive/d/Computational_geomechanics/installation/petsc-barry/arch-mswin-c-debug/externalpackages/git.openblas']": > > fatal: destination path '/cygdrive/d/Computational_geomechanics/installation/petsc-barry/arch-mswin-c-debug/externalpackages/git.openblas' already exists and is not an empty directory. > > Unable to download package OPENBLAS from: git://https://github.com/xianyi/OpenBLAS.git > > * If URL specified manually - perhaps there is a typo? > > * If your network is disconnected - please reconnect and rerun ./configure > > * Or perhaps you have a firewall blocking the download > > * You can run with --with-packages-download-dir=/adirectory and ./configure will instruct you what packages to download manually > > * or you can download the above URL manually, to /yourselectedlocation > > and use the configure option: > > --download-openblas=/yourselectedlocation > > Unable to download openblas > > Could not execute "['git clone https://github.com/xianyi/OpenBLAS.git /cygdrive/d/Computational_geomechanics/installation/petsc-barry/arch-mswin-c-debug/externalpackages/git.openblas']": > > fatal: destination path '/cygdrive/d/Computational_geomechanics/installation/petsc-barry/arch-mswin-c-debug/externalpackages/git.openblas' already exists and is not an empty directory. > > Unable to download package OPENBLAS from: git://https://github.com/xianyi/OpenBLAS.git > > * If URL specified manually - perhaps there is a typo? > > * If your network is disconnected - please reconnect and rerun ./configure > > * Or perhaps you have a firewall blocking the download > > * You can run with --with-packages-download-dir=/adirectory and ./configure will instruct you what packages to download manually > > * or you can download the above URL manually, to /yourselectedlocation > > and use the configure option: > > --download-openblas=/yourselectedlocation > > Could not locate downloaded package OPENBLAS in /cygdrive/d/Computational_geomechanics/installation/petsc-barry/arch-mswin-c-debug/externalpackages > > > > But I checked the last location (.../externalpackages) and saw that OpenBLAS downloaded and unzipped. 
> > > > > > > > Kind regards, > > Dmitry Melnichuk > > > > > > 20.01.2020, 16:32, "Smith, Barry F." : > > > > First you need to figure out what is triggering: > > > > C:/MPI/Bin/mpiexec.exe: error while loading shared libraries: ?: cannot open shared object file: No such file or directory > > > > Googling it finds all kinds of suggestions for Linux. But Windows? Maybe the debugger will help. > > > > Second > > > > > > > > VecNorm_Seq line 221 /cygdrive/d/Computational_geomechanics/installation/petsc-barry/src/vec/vec/impls/seq/bvec2.c > > > > Debugger is best to find out what is triggering this. Since it is the C side of things it would be odd that the Fortran change affects it. > > > > Barry > > > > > > > > > > > > > > > > On Jan 20, 2020, at 4:43 AM, ??????? ????????? wrote: > > > > Thank you so much for your assistance! > > > > As far as I have been able to find out, the errors "Type mismatch in argument ?ierr?" have been successfully fixed. > > But execution of command "make PETSC_DIR=/cygdrive/d/... PETSC_ARCH=arch-mswin-c-debug check" leads to the appereance of Segmantation Violation error. > > > > I compiled PETSc with Microsoft MPI v10. > > Does it make sense to compile PETSc with another MPI implementation (such as MPICH) in order to resolve the issue? > > > > Error message: > > Running test examples to verify correct installation > > Using PETSC_DIR=/cygdrive/d/Computational_geomechanics/installation/petsc-barry and PETSC_ARCH=arch-mswin-c-debug > > Possible error running C/C++ src/snes/examples/tutorials/ex19 with 1 MPI process > > See http://www.mcs.anl.gov/petsc/documentation/faq.html > > C:/MPI/Bin/mpiexec.exe: error while loading shared libraries: ?: cannot open shared object file: No such file or directory > > Possible error running C/C++ src/snes/examples/tutorials/ex19 with 2 MPI processes > > See http://www.mcs.anl.gov/petsc/documentation/faq.html > > C:/MPI/Bin/mpiexec.exe: error while loading shared libraries: ?: cannot open shared object file: No such file or directory > > Possible error running Fortran example src/snes/examples/tutorials/ex5f with 1 MPI process > > See http://www.mcs.anl.gov/petsc/documentation/faq.html > > [0]PETSC ERROR: ------------------------------------------------------------------------ > > [0]PETSC ERROR: Caught signal number 11 SEGV: Segmentation Violation, probably memory access out of range > > [0]PETSC ERROR: Try option -start_in_debugger or -on_error_attach_debugger > > [0]PETSC ERROR: or see https://www.mcs.anl.gov/petsc/documentation/faq.html#valgrind > > [0]PETSC ERROR: or try http://valgrind.org on GNU/linux and Apple Mac OS X to find memory corruption errors > > [0]PETSC ERROR: likely location of problem given in stack below > > [0]PETSC ERROR: --------------------- Stack Frames ------------------------------------ > > [0]PETSC ERROR: Note: The EXACT line numbers in the stack are not available, > > [0]PETSC ERROR: INSTEAD the line number of the start of the function > > [0]PETSC ERROR: is given. 
> > [0]PETSC ERROR: [0] VecNorm_Seq line 221 /cygdrive/d/Computational_geomechanics/installation/petsc-barry/src/vec/vec/impls/seq/bvec2.c > > [0]PETSC ERROR: [0] VecNorm line 213 /cygdrive/d/Computational_geomechanics/installation/petsc-barry/src/vec/vec/interface/rvector.c > > [0]PETSC ERROR: [0] SNESSolve_NEWTONLS line 144 /cygdrive/d/Computational_geomechanics/installation/petsc-barry/src/snes/impls/ls/ls.c > > [0]PETSC ERROR: [0] SNESSolve line 4375 /cygdrive/d/Computational_geomechanics/installation/petsc-barry/src/snes/interface/snes.c > > [0]PETSC ERROR: --------------------- Error Message -------------------------------------------------------------- > > [0]PETSC ERROR: Signal received > > [0]PETSC ERROR: See https://www.mcs.anl.gov/petsc/documentation/faq.html for trouble shooting. > > [0]PETSC ERROR: Petsc Development GIT revision: unknown GIT Date: unknown > > [0]PETSC ERROR: ./ex5f on a arch-mswin-c-debug named DESKTOP-R88IMOB by useruser Mon Jan 20 09:18:34 2020 > > [0]PETSC ERROR: Configure options --with-cc=x86_64-w64-mingw32-gcc --with-cxx=x86_64-w64-mingw32-g++ --with-fc=x86_64-w64-mingw32-gfortran --with-mpi-include=/cygdrive/c/MPISDK/Include --with-mpi-lib=/cygdrive/c/MPISDK/Lib/libmsmpi.a --with-mpi-mpiexec=/cygdrive/c/MPI/Bin/mpiexec.exe --with-debugging=yes -CFLAGS=-O2 -CXXFLAGS=-O2 -FFLAGS="-O2 -static-libgfortran -static -lpthread -fno-range-check -fdefault-integer-8" --download-fblaslapack --with-shared-libraries=no --with-64-bit-indices --force > > [0]PETSC ERROR: #1 User provided function() line 0 in unknown file > > > > job aborted: > > [ranks] message > > > > [0] application aborted > > aborting MPI_COMM_WORLD (comm=0x44000000), error 50152059, comm rank 0 > > > > ---- error analysis ----- > > > > [0] on DESKTOP-R88IMOB > > ./ex5f aborted the job. abort code 50152059 > > > > ---- error analysis ----- > > Completed test examples > > > > Kind regards, > > Dmitry Melnichuk > > > > 19.01.2020, 07:47, "Smith, Barry F." : > > > > Dmitry, > > > > I have completed and tested the branch barry/2020-01-15/support-default-integer-8 it is undergoing testing now https://gitlab.com/petsc/petsc/merge_requests/2456 > > > > Please give it a try. Note that MPI has no support for integer promotion so YOU must insure that any MPI calls from Fortran pass 4 byte integers not promoted 8 byte integers. > > > > I have tested it with recent versions of MPICH and OpenMPI, it is fragile at compile time and may fail to compile with different versions of MPI. > > > > Good luck, > > > > Barry > > > > I do not recommend this approach for integer promotion in Fortran. Just blindly promoting all integers can often lead to problems. I recommend using the kind mechanism of > > Fortran to insure that each variable is the type you want, you can recompile with different options to promote the kind declared variables you wish. Of course this is more intrusive and requires changes to the Fortran code. > > > > > > On Jan 15, 2020, at 7:00 AM, ??????? ????????? wrote: > > > > Hello all! > > > > At present time I need to compile solver called Defmod (https://bitbucket.org/stali/defmod/wiki/Home), which is written in Fortran 95. > > Defmod uses PETSc for solving linear algebra system. > > Solver compilation with 32-bit version of PETSc does not cause any problem. > > But solver compilation with 64-bit version of PETSc produces an error with size of ierr PETSc variable. > > > > 1. 
For example, consider the following statements written in Fortran: > > > > > PetscErrorCode :: ierr_m > > PetscInt :: ierr > > ... > > ... > > call VecDuplicate(Vec_U,Vec_Um,ierr) > > call VecCopy(Vec_U,Vec_Um,ierr) > > call VecGetLocalSize(Vec_U,j,ierr) > > call VecGetOwnershipRange(Vec_U,j1,j2,ierr_m) > > > > > > As can be seen, the first three subroutines require ierr to be of size INTEGER(8), while the last subroutine (VecGetOwnershipRange) requires ierr to be of size INTEGER(4). > > Using the same integer format gives an error: > > > > There is no specific subroutine for the generic 'vecgetownershiprange' at (1) > > > > 2. Another example is: > > > > > > call MatAssemblyBegin(Mat_K,Mat_Final_Assembly,ierr) > > CHKERRA(ierr) > > call MatAssemblyEnd(Mat_K,Mat_Final_Assembly,ierr) > > > > > > I am not able to define an appropriate size of ierr in CHKERRA(ierr). If I choose INTEGER(8), the error "Type mismatch in argument 'ierr' at (1); passed INTEGER(8) to INTEGER(4)" occurs. > > If I define ierr as INTEGER(4), the error "Type mismatch in argument 'ierr' at (1); passed INTEGER(4) to INTEGER(8)" appears. > > > > > > 3. If I change the sizes of ierr variables as the error messages require, the compilation completes successfully, but an error occurs when calculating the RHS vector with the following message: > > > > [0]PETSC ERROR: Out of range index value -4 cannot be negative > > > > > > Command to configure 32-bit version of PETSc under Windows 10 using Cygwin: > > ./configure --with-cc=x86_64-w64-mingw32-gcc --with-cxx=x86_64-w64-mingw32-g++ --with-fc=x86_64-w64-mingw32-gfortran --download-fblaslapack --with-mpi-include=/cygdrive/c/MPISDK/Include --with-mpi-lib=/cygdrive/c/MPISDK/Lib/libmsmpi.a --with-mpi-mpiexec=/cygdrive/c/MPI/Bin/mpiexec.exe --with-debugging=yes -CFLAGS='-O2' -CXXFLAGS='-O2' -FFLAGS='-O2 -static-libgfortran -static -lpthread -fno-range-check' --with-shared-libraries=no > > > > Command to configure 64-bit version of PETSc under Windows 10 using Cygwin: > > ./configure --with-cc=x86_64-w64-mingw32-gcc --with-cxx=x86_64-w64-mingw32-g++ --with-fc=x86_64-w64-mingw32-gfortran --download-fblaslapack --with-mpi-include=/cygdrive/c/MPISDK/Include --with-mpi-lib=/cygdrive/c/MPISDK/Lib/libmsmpi.a --with-mpi-mpiexec=/cygdrive/c/MPI/Bin/mpiexec.exe --with-debugging=yes -CFLAGS='-O2' -CXXFLAGS='-O2' -FFLAGS='-O2 -static-libgfortran -static -lpthread -fno-range-check -fdefault-integer-8' --with-shared-libraries=no --with-64-bit-indices --known-64-bit-blas-indices > > > > > > Kind regards, > > Dmitry Melnichuk > > > >
From bsmith at mcs.anl.gov Tue Jan 21 08:35:00 2020 From: bsmith at mcs.anl.gov (Smith, Barry F.)
Date: Tue, 21 Jan 2020 14:35:00 +0000 Subject: [petsc-users] Solver compilation with 64-bit version of PETSc under Windows 10 using Cygwin In-Reply-To: References: <35432251579093203@vla3-307f9063dbf4.qloud-c.yandex.net> <12790071579517006@iva7-8a22bc446c12.qloud-c.yandex.net> <5007321579602495@sas8-55d8cbf44a35.qloud-c.yandex.net> <553C26B0-C2F9-45F4-9EF8-E443EAE86E49@mcs.anl.gov> Message-ID: <6F80E0F7-72E6-4DC8-AF23-3368C6392B1A@mcs.anl.gov> > On Jan 21, 2020, at 8:09 AM, Balay, Satish wrote: > > I would suggest installing a regular 32-bit-int blas/lapack - and then using it with the --with-blaslapack-lib option > > [we don't know what -fdefault-integer-8 does with --download-fblaslapack - if it really creates a --known-64-bit-blas-indices variant of blas/lapack or not] Satish, The intention is that package.py strips out these options before passing them to the external packages, but it is possible I made a mistake and it does not strip them out properly. Barry > > Satish > > On Tue, 21 Jan 2020, Smith, Barry F. via petsc-users wrote: >> [...]
From jczhang at mcs.anl.gov Tue Jan 21 10:20:55 2020 From: jczhang at mcs.anl.gov (Zhang, Junchao) Date: Tue, 21 Jan 2020 16:20:55 +0000 Subject: [petsc-users] DMDA Error In-Reply-To: References: Message-ID: I submitted a job and I am waiting for the result. --Junchao Zhang On Tue, Jan 21, 2020 at 3:03 AM Dave May wrote: Hi Anthony, On Tue, 21 Jan 2020 at 08:25, Anthony Jourdon wrote: Hello, I made a test to try to reproduce the error. To do so I modified the file $PETSC_DIR/src/dm/examples/tests/ex35.c I attach the file in case of need. The same error is reproduced for 1024 MPI ranks. I tested two problem sizes (2*512+1 x 2*64+1 x 2*256+1 and 2*1024+1 x 2*128+1 x 2*512+1) and the error occurred for both cases; the first case is also the one I used to run before the OS and MPI updates. I also ran the code with -malloc_debug and nothing more appeared. I attached the configure command I used to build a debug version of petsc. The error indicates the problem occurs on the line below (e.g. within MPI_Isend()): /* Post the Isends with the message length-info */ for (i=0,j=0; i<...; i++) {...} [the rest of this message was lost in the archive's HTML-to-text conversion] > Sent: Thursday, January 16, 2020 16:49 To: Anthony Jourdon > Cc: petsc-users at mcs.anl.gov > Subject: Re: [petsc-users] DMDA Error It seems the problem is triggered by DMSetUp. You can write a small test creating the DMDA with the same size as your code, to see if you can reproduce the problem. If yes, it would be much easier for us to debug it. --Junchao Zhang On Thu, Jan 16, 2020 at 7:38 AM Anthony Jourdon wrote: Dear PETSc developer, I need assistance with an error. I run a code that uses the DMDA related functions. I'm using petsc-3.8.4. This code used to run very well on a supercomputer with the OS SLES11. Petsc was built using an Intel MPI 5.1.3.223 module and Intel MKL version 2016.0.2.181. The code was running with no problem on 1024 and more MPI ranks.
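A minimal Fortran counterpart of the small reproducer Junchao suggests (the C test Anthony modified is ex35.c; dof = 1 and stencil width 1 are assumptions here, and the grid is the first size Anthony reports):

#include <petsc/finclude/petscdmda.h>
      use petscdmda
      implicit none
      DM             :: da
      PetscErrorCode :: ierr

      call PetscInitialize(PETSC_NULL_CHARACTER,ierr)
      ! 2*512+1 x 2*64+1 x 2*256+1 grid, default parallel decomposition
      call DMDACreate3d(PETSC_COMM_WORLD,                            &
           DM_BOUNDARY_NONE,DM_BOUNDARY_NONE,DM_BOUNDARY_NONE,       &
           DMDA_STENCIL_BOX,1025,129,513,                            &
           PETSC_DECIDE,PETSC_DECIDE,PETSC_DECIDE,1,1,               &
           PETSC_NULL_INTEGER,PETSC_NULL_INTEGER,PETSC_NULL_INTEGER, &
           da,ierr)
      CHKERRA(ierr)
      call DMSetUp(da,ierr); CHKERRA(ierr)    ! the reported failure occurs inside DMSetUp
      call DMDestroy(da,ierr); CHKERRA(ierr)
      call PetscFinalize(ierr)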
Recently, the OS of the computer has been updated to RHEL7 I rebuilt Petsc using new available versions of intel mpi (2019U5) and mkl (2019.0.5.281) which are the same versions for compilers and mkl. Since then I tested to run the exact same code on 8, 16, 24, 48, 512 and 1024 mpi ranks. Until 1024 mpi ranks no problem, but for 1024 an error related to DMDA appeared. I snip the first lines of the error stack here and the full error stack is attached. [534]PETSC ERROR: #1 PetscGatherMessageLengths() line 120 in /scratch2/dlp/appli_local/SCR/OROGEN/petsc3.8.4_MPI/petsc-3.8.4/src/sys/utils/mpimesg.c [534]PETSC ERROR: #2 VecScatterCreate_PtoS() line 2288 in /scratch2/dlp/appli_local/SCR/OROGEN/petsc3.8.4_MPI/petsc-3.8.4/src/vec/vec/utils/vpscat.c [534]PETSC ERROR: #3 VecScatterCreate() line 1462 in /scratch2/dlp/appli_local/SCR/OROGEN/petsc3.8.4_MPI/petsc-3.8.4/src/vec/vec/utils/vscat.c [534]PETSC ERROR: #4 DMSetUp_DA_3D() line 1042 in /scratch2/dlp/appli_local/SCR/OROGEN/petsc3.8.4_MPI/petsc-3.8.4/src/dm/impls/da/da3.c [534]PETSC ERROR: #5 DMSetUp_DA() line 25 in /scratch2/dlp/appli_local/SCR/OROGEN/petsc3.8.4_MPI/petsc-3.8.4/src/dm/impls/da/dareg.c [534]PETSC ERROR: #6 DMSetUp() line 720 in /scratch2/dlp/appli_local/SCR/OROGEN/petsc3.8.4_MPI/petsc-3.8.4/src/dm/interface/dm.c Thank you for your time, Sincerly, Anthony Jourdon -------------- next part -------------- An HTML attachment was scrubbed... URL: From bsmith at mcs.anl.gov Tue Jan 21 10:25:45 2020 From: bsmith at mcs.anl.gov (Smith, Barry F.) Date: Tue, 21 Jan 2020 16:25:45 +0000 Subject: [petsc-users] error handling In-Reply-To: References: Message-ID: <6757A644-17BB-459B-9E43-6BDBA8081ECC@mcs.anl.gov> > On Jan 20, 2020, at 6:32 PM, Sam Guo wrote: > > Hi Barry, > I understand ierr != 0 means something catastrophic. I just want to release all memory before I exit PETSc. In general not possible. If you run with the debug version and -malloc_debug it is possible but because of the unknown error it could be that the releasing of the memory causes a real crash. Is your main concern when you use PETSc for a large problem and it errors because it is "out of memory"? Barry > > Thanks, > Sam > > On Mon, Jan 20, 2020 at 4:06 PM Smith, Barry F. wrote: > > Sam, > > I am not sure what your goal is but PETSc error return codes are error return codes not exceptions. They mean that something catastrophic happened and there is no recovery. > > Note that PETSc solvers do not return nonzero error codes on failure to converge etc. You call, for example, KPSGetConvergedReason() after a KSP solve to see if it has failed, this is not a catastrophic failure. If a MatCreate() or any other call returns a nonzero ierr the game is up, you cannot continue running PETSc. > > Barry > > > > On Jan 20, 2020, at 5:41 PM, Matthew Knepley wrote: > > > > Not if you initialize the pointers to zero: Mat A = NULL. > > > > Matt > > > > On Mon, Jan 20, 2020 at 6:31 PM Sam Guo wrote: > > I mean MatDestroy. > > > > On Mon, Jan 20, 2020 at 3:28 PM Sam Guo wrote: > > Does it hurt to call Destroy function without calling CreateFunction? For example > > Mat A, B; > > PetscErrorCode ierr1, ierr2; > > ierr1 = MatCreate(PETSC_COMM_WORLD,&A); > > if(ierr1 == 0) > > { > > ierr2 = MatCreate(PETSC_COMM_WORLD > > ,&B); > > > > } > > if(ierr1 !=0 || ierr2 != 0) > > { > > Destroy(&A); > > Destroy(&B); // if ierr1 !=0, MatCreat is not called on B. Does it hurt to call Destroy B here? > > } > > > > > > > > On Mon, Jan 20, 2020 at 11:11 AM Dave May wrote: > > > > > > On Mon 20. 
Jan 2020 at 19:47, Sam Guo wrote: > > Can I assume if there is MatCreat or VecCreate, I should clean up the memory myself? > > > > Yes. You will need to call the matching Destroy function. > > > > > > > > On Mon, Jan 20, 2020 at 10:45 AM Sam Guo wrote: > > I only include the first few lines of SLEPc example. What about following > > ierr = MatCreate(PETSC_COMM_WORLD,&A);CHKERRQ(ierr); > > ierr = MatSetSizes(A,PETSC_DECIDE,PETSC_DECIDE,n,n);CHKERRQ(ierr); > > Is there any memory lost? > > > > On Mon, Jan 20, 2020 at 10:41 AM Dave May wrote: > > > > > > On Mon 20. Jan 2020 at 19:39, Sam Guo wrote: > > I don't have a specific case yet. Currently every call of PETSc is checked. If ierr is not zero, print the error and return. For example, > > Mat A; /* problem matrix */ > > EPS eps; /* eigenproblem solver context */ > > EPSType type; > > PetscReal error,tol,re,im; > > PetscScalar kr,ki; Vec xr,xi; 25 > > PetscInt n=30,i,Istart,Iend,nev,maxit,its,nconv; > > PetscErrorCode ierr; > > ierr = SlepcInitialize(&argc,&argv,(char*)0,help);CHKERRQ(ierr); > > ierr = PetscOptionsGetInt(NULL,NULL,"-n",&n,NULL);CHKERRQ(ierr); > > ierr = PetscPrintf(PETSC_COMM_WORLD,"\n1-D Laplacian Eigenproblem, n=%D\n\n",n);CHKERRQ(ierr); > > > > I am wondering if the memory is lost by calling CHKERRQ. > > > > No. > > > > > > > > On Mon, Jan 20, 2020 at 10:14 AM Dave May wrote: > > > > > > On Mon 20. Jan 2020 at 19:11, Sam Guo wrote: > > Dear PETSc dev team, > > If PETSc function returns an error, what's the correct way to clean PETSc? > > > > The answer depends on the error message reported. Send the complete error message and a better answer can be provided. > > > > Particularly how to clean up the memory? > > > > Totally depends on the objects which aren?t being freed. You need to provide more information > > > > Thanks > > Dave > > > > > > Thanks, > > Sam > > > > > > -- > > What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. > > -- Norbert Wiener > > > > https://www.cse.buffalo.edu/~knepley/ > From sam.guo at cd-adapco.com Tue Jan 21 10:49:02 2020 From: sam.guo at cd-adapco.com (Sam Guo) Date: Tue, 21 Jan 2020 08:49:02 -0800 Subject: [petsc-users] error handling In-Reply-To: <6757A644-17BB-459B-9E43-6BDBA8081ECC@mcs.anl.gov> References: <6757A644-17BB-459B-9E43-6BDBA8081ECC@mcs.anl.gov> Message-ID: I use PETSc from my application. Sounds you are saying I just treat ierr!=0 as an system error and no need to call Destroy functions. On Tuesday, January 21, 2020, Smith, Barry F. wrote: > > > > On Jan 20, 2020, at 6:32 PM, Sam Guo wrote: > > > > Hi Barry, > > I understand ierr != 0 means something catastrophic. I just want to > release all memory before I exit PETSc. > > In general not possible. If you run with the debug version and > -malloc_debug it is possible but because of the unknown error it could be > that the releasing of the memory causes a real crash. > > Is your main concern when you use PETSc for a large problem and it > errors because it is "out of memory"? > > Barry > > > > > > Thanks, > > Sam > > > > On Mon, Jan 20, 2020 at 4:06 PM Smith, Barry F. > wrote: > > > > Sam, > > > > I am not sure what your goal is but PETSc error return codes are > error return codes not exceptions. They mean that something catastrophic > happened and there is no recovery. > > > > Note that PETSc solvers do not return nonzero error codes on failure > to converge etc. 
You call, for example, KPSGetConvergedReason() after a KSP > solve to see if it has failed, this is not a catastrophic failure. If a > MatCreate() or any other call returns a nonzero ierr the game is up, you > cannot continue running PETSc. > > > > Barry > > > > > > > On Jan 20, 2020, at 5:41 PM, Matthew Knepley > wrote: > > > > > > Not if you initialize the pointers to zero: Mat A = NULL. > > > > > > Matt > > > > > > On Mon, Jan 20, 2020 at 6:31 PM Sam Guo wrote: > > > I mean MatDestroy. > > > > > > On Mon, Jan 20, 2020 at 3:28 PM Sam Guo wrote: > > > Does it hurt to call Destroy function without calling CreateFunction? > For example > > > Mat A, B; > > > PetscErrorCode ierr1, ierr2; > > > ierr1 = MatCreate(PETSC_COMM_WORLD,&A); > > > if(ierr1 == 0) > > > { > > > ierr2 = MatCreate(PETSC_COMM_WORLD > > > ,&B); > > > > > > } > > > if(ierr1 !=0 || ierr2 != 0) > > > { > > > Destroy(&A); > > > Destroy(&B); // if ierr1 !=0, MatCreat is not called on B. Does it > hurt to call Destroy B here? > > > } > > > > > > > > > > > > On Mon, Jan 20, 2020 at 11:11 AM Dave May > wrote: > > > > > > > > > On Mon 20. Jan 2020 at 19:47, Sam Guo wrote: > > > Can I assume if there is MatCreat or VecCreate, I should clean up the > memory myself? > > > > > > Yes. You will need to call the matching Destroy function. > > > > > > > > > > > > On Mon, Jan 20, 2020 at 10:45 AM Sam Guo > wrote: > > > I only include the first few lines of SLEPc example. What about > following > > > ierr = MatCreate(PETSC_COMM_WORLD,&A);CHKERRQ(ierr); > > > ierr = MatSetSizes(A,PETSC_DECIDE,PETSC_DECIDE,n,n);CHKERRQ(ierr); > > > Is there any memory lost? > > > > > > On Mon, Jan 20, 2020 at 10:41 AM Dave May > wrote: > > > > > > > > > On Mon 20. Jan 2020 at 19:39, Sam Guo wrote: > > > I don't have a specific case yet. Currently every call of PETSc is > checked. If ierr is not zero, print the error and return. For example, > > > Mat A; /* problem matrix */ > > > EPS eps; /* eigenproblem solver context */ > > > EPSType type; > > > PetscReal error,tol,re,im; > > > PetscScalar kr,ki; Vec xr,xi; 25 > > > PetscInt n=30,i,Istart,Iend,nev,maxit,its,nconv; > > > PetscErrorCode ierr; > > > ierr = SlepcInitialize(&argc,&argv,(char*)0,help);CHKERRQ(ierr); > > > ierr = PetscOptionsGetInt(NULL,NULL,"-n",&n,NULL);CHKERRQ(ierr); > > > ierr = PetscPrintf(PETSC_COMM_WORLD,"\n1-D Laplacian Eigenproblem, > n=%D\n\n",n);CHKERRQ(ierr); > > > > > > I am wondering if the memory is lost by calling CHKERRQ. > > > > > > No. > > > > > > > > > > > > On Mon, Jan 20, 2020 at 10:14 AM Dave May > wrote: > > > > > > > > > On Mon 20. Jan 2020 at 19:11, Sam Guo wrote: > > > Dear PETSc dev team, > > > If PETSc function returns an error, what's the correct way to clean > PETSc? > > > > > > The answer depends on the error message reported. Send the complete > error message and a better answer can be provided. > > > > > > Particularly how to clean up the memory? > > > > > > Totally depends on the objects which aren?t being freed. You need to > provide more information > > > > > > Thanks > > > Dave > > > > > > > > > Thanks, > > > Sam > > > > > > > > > -- > > > What most experimenters take for granted before they begin their > experiments is infinitely more interesting than any results to which their > experiments lead. > > > -- Norbert Wiener > > > > > > https://www.cse.buffalo.edu/~knepley/ > > > > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From jeremy at seamplex.com Tue Jan 21 13:02:39 2020 From: jeremy at seamplex.com (Jeremy Theler) Date: Tue, 21 Jan 2020 16:02:39 -0300 Subject: [petsc-users] error handling In-Reply-To: References: <6757A644-17BB-459B-9E43-6BDBA8081ECC@mcs.anl.gov> Message-ID: <273142e7480731ef74d6bfb294e02d9aef74be71.camel@seamplex.com> Dear Sam Probably you are already aware of the following paragraph, but just in case. Quote from https://www.gnu.org/prep/standards/standards.html#Memory-Usage Memory analysis tools such as valgrind can be useful, but don?t complicate a program merely to avoid their false alarms. For example, if memory is used until just before a process exits, don?t free it simply to silence such a tool. Regards -- jeremy theler www.seamplex.com On Tue, 2020-01-21 at 08:49 -0800, Sam Guo wrote: > I use PETSc from my application. Sounds you are saying I just treat > ierr!=0 as an system error and no need to call Destroy functions. > > On Tuesday, January 21, 2020, Smith, Barry F. > wrote: > > > > > On Jan 20, 2020, at 6:32 PM, Sam Guo > > wrote: > > > > > > Hi Barry, > > > I understand ierr != 0 means something catastrophic. I just > > want to release all memory before I exit PETSc. > > > > In general not possible. If you run with the debug version and > > -malloc_debug it is possible but because of the unknown error it > > could be that the releasing of the memory causes a real crash. > > > > Is your main concern when you use PETSc for a large problem and > > it errors because it is "out of memory"? > > > > Barry > > > > > > > > > > Thanks, > > > Sam > > > > > > On Mon, Jan 20, 2020 at 4:06 PM Smith, Barry F. < > > bsmith at mcs.anl.gov> wrote: > > > > > > Sam, > > > > > > I am not sure what your goal is but PETSc error return codes > > are error return codes not exceptions. They mean that something > > catastrophic happened and there is no recovery. > > > > > > Note that PETSc solvers do not return nonzero error codes on > > failure to converge etc. You call, for example, > > KPSGetConvergedReason() after a KSP solve to see if it has failed, > > this is not a catastrophic failure. If a MatCreate() or any other > > call returns a nonzero ierr the game is up, you cannot continue > > running PETSc. > > > > > > Barry > > > > > > > > > > On Jan 20, 2020, at 5:41 PM, Matthew Knepley > > wrote: > > > > > > > > Not if you initialize the pointers to zero: Mat A = NULL. > > > > > > > > Matt > > > > > > > > On Mon, Jan 20, 2020 at 6:31 PM Sam Guo > > wrote: > > > > I mean MatDestroy. > > > > > > > > On Mon, Jan 20, 2020 at 3:28 PM Sam Guo > > wrote: > > > > Does it hurt to call Destroy function without calling > > CreateFunction? For example > > > > Mat A, B; > > > > PetscErrorCode ierr1, ierr2; > > > > ierr1 = MatCreate(PETSC_COMM_WORLD,&A); > > > > if(ierr1 == 0) > > > > { > > > > ierr2 = MatCreate(PETSC_COMM_WORLD > > > > ,&B); > > > > > > > > } > > > > if(ierr1 !=0 || ierr2 != 0) > > > > { > > > > Destroy(&A); > > > > Destroy(&B); // if ierr1 !=0, MatCreat is not called on B. > > Does it hurt to call Destroy B here? > > > > } > > > > > > > > > > > > > > > > On Mon, Jan 20, 2020 at 11:11 AM Dave May < > > dave.mayhem23 at gmail.com> wrote: > > > > > > > > > > > > On Mon 20. Jan 2020 at 19:47, Sam Guo > > wrote: > > > > Can I assume if there is MatCreat or VecCreate, I should clean > > up the memory myself? > > > > > > > > Yes. You will need to call the matching Destroy function. 
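In Fortran, the pattern Dave and Matthew describe would look roughly like this sketch (assuming PETSC_NULL_MAT is available and that, as in C, destroying a null handle is a no-op):

      Mat            :: A, B
      PetscErrorCode :: ierr1, ierr2

      A = PETSC_NULL_MAT            ! null handles make the cleanup below always safe
      B = PETSC_NULL_MAT
      ierr2 = 0
      call MatCreate(PETSC_COMM_WORLD,A,ierr1)
      if (ierr1 == 0) call MatCreate(PETSC_COMM_WORLD,B,ierr2)
      if (ierr1 /= 0 .or. ierr2 /= 0) then
         call MatDestroy(A,ierr1)   ! no-op if A was never created
         call MatDestroy(B,ierr2)
      end if

Per Barry's caveat above, this only releases the handles; after a genuinely nonzero ierr the program still should not continue using PETSc.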
> > > > > > > > > > > > > > > > On Mon, Jan 20, 2020 at 10:45 AM Sam Guo > > wrote: > > > > I only include the first few lines of SLEPc example. What about > > following > > > > ierr = MatCreate(PETSC_COMM_WORLD,&A);CHKERRQ(ierr); > > > > ierr = > > MatSetSizes(A,PETSC_DECIDE,PETSC_DECIDE,n,n);CHKERRQ(ierr); > > > > Is there any memory lost? > > > > > > > > On Mon, Jan 20, 2020 at 10:41 AM Dave May < > > dave.mayhem23 at gmail.com> wrote: > > > > > > > > > > > > On Mon 20. Jan 2020 at 19:39, Sam Guo > > wrote: > > > > I don't have a specific case yet. Currently every call of PETSc > > is checked. If ierr is not zero, print the error and return. For > > example, > > > > Mat A; /* problem matrix */ > > > > EPS eps; /* eigenproblem solver context */ > > > > EPSType type; > > > > PetscReal error,tol,re,im; > > > > PetscScalar kr,ki; Vec xr,xi; 25 > > > > PetscInt n=30,i,Istart,Iend,nev,maxit,its,nconv; > > > > PetscErrorCode ierr; > > > > ierr = > > SlepcInitialize(&argc,&argv,(char*)0,help);CHKERRQ(ierr); > > > > ierr = PetscOptionsGetInt(NULL,NULL,"- > > n",&n,NULL);CHKERRQ(ierr); > > > > ierr = PetscPrintf(PETSC_COMM_WORLD,"\n1-D Laplacian > > Eigenproblem, n=%D\n\n",n);CHKERRQ(ierr); > > > > > > > > I am wondering if the memory is lost by calling CHKERRQ. > > > > > > > > No. > > > > > > > > > > > > > > > > On Mon, Jan 20, 2020 at 10:14 AM Dave May < > > dave.mayhem23 at gmail.com> wrote: > > > > > > > > > > > > On Mon 20. Jan 2020 at 19:11, Sam Guo > > wrote: > > > > Dear PETSc dev team, > > > > If PETSc function returns an error, what's the correct way > > to clean PETSc? > > > > > > > > The answer depends on the error message reported. Send the > > complete error message and a better answer can be provided. > > > > > > > > Particularly how to clean up the memory? > > > > > > > > Totally depends on the objects which aren?t being freed. You > > need to provide more information > > > > > > > > Thanks > > > > Dave > > > > > > > > > > > > Thanks, > > > > Sam > > > > > > > > > > > > -- > > > > What most experimenters take for granted before they begin > > their experiments is infinitely more interesting than any results > > to which their experiments lead. > > > > -- Norbert Wiener > > > > > > > > https://www.cse.buffalo.edu/~knepley/ > > > > > > > From jed at jedbrown.org Tue Jan 21 13:18:41 2020 From: jed at jedbrown.org (Jed Brown) Date: Tue, 21 Jan 2020 12:18:41 -0700 Subject: [petsc-users] error handling In-Reply-To: <273142e7480731ef74d6bfb294e02d9aef74be71.camel@seamplex.com> References: <6757A644-17BB-459B-9E43-6BDBA8081ECC@mcs.anl.gov> <273142e7480731ef74d6bfb294e02d9aef74be71.camel@seamplex.com> Message-ID: <87v9p462jy.fsf@jedbrown.org> Jeremy Theler writes: > Dear Sam > > Probably you are already aware of the following paragraph, but just in > case. Quote from > https://www.gnu.org/prep/standards/standards.html#Memory-Usage > > > Memory analysis tools such as valgrind can be useful, but don?t > complicate a program merely to avoid their false alarms. For example, > if memory is used until just before a process exits, don?t free it > simply to silence such a tool. Off-topic, but I consider this to be bad advice, especially for a library or community project. 
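For the normal (non-error) exit path, the shutdown order in a PETSc Fortran program is simply to destroy whatever was created and then finalize; a minimal sketch with hypothetical objects A and x:

      call MatDestroy(A,ierr); CHKERRA(ierr)
      call VecDestroy(x,ierr); CHKERRA(ierr)
      call PetscFinalize(ierr)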
From sam.guo at cd-adapco.com Tue Jan 21 13:18:16 2020 From: sam.guo at cd-adapco.com (Sam Guo) Date: Tue, 21 Jan 2020 11:18:16 -0800 Subject: [petsc-users] error handling In-Reply-To: <273142e7480731ef74d6bfb294e02d9aef74be71.camel@seamplex.com> References: <6757A644-17BB-459B-9E43-6BDBA8081ECC@mcs.anl.gov> <273142e7480731ef74d6bfb294e02d9aef74be71.camel@seamplex.com> Message-ID: Thanks. On Tue, Jan 21, 2020 at 11:03 AM Jeremy Theler wrote: > Dear Sam > > Probably you are already aware of the following paragraph, but just in > case. Quote from > https://www.gnu.org/prep/standards/standards.html#Memory-Usage > > > Memory analysis tools such as valgrind can be useful, but don?t > complicate a program merely to avoid their false alarms. For example, > if memory is used until just before a process exits, don?t free it > simply to silence such a tool. > > > Regards > -- > jeremy theler > www.seamplex.com > > > > On Tue, 2020-01-21 at 08:49 -0800, Sam Guo wrote: > > I use PETSc from my application. Sounds you are saying I just treat > > ierr!=0 as an system error and no need to call Destroy functions. > > > > On Tuesday, January 21, 2020, Smith, Barry F. > > wrote: > > > > > > > On Jan 20, 2020, at 6:32 PM, Sam Guo > > > wrote: > > > > > > > > Hi Barry, > > > > I understand ierr != 0 means something catastrophic. I just > > > want to release all memory before I exit PETSc. > > > > > > In general not possible. If you run with the debug version and > > > -malloc_debug it is possible but because of the unknown error it > > > could be that the releasing of the memory causes a real crash. > > > > > > Is your main concern when you use PETSc for a large problem and > > > it errors because it is "out of memory"? > > > > > > Barry > > > > > > > > > > > > > > Thanks, > > > > Sam > > > > > > > > On Mon, Jan 20, 2020 at 4:06 PM Smith, Barry F. < > > > bsmith at mcs.anl.gov> wrote: > > > > > > > > Sam, > > > > > > > > I am not sure what your goal is but PETSc error return codes > > > are error return codes not exceptions. They mean that something > > > catastrophic happened and there is no recovery. > > > > > > > > Note that PETSc solvers do not return nonzero error codes on > > > failure to converge etc. You call, for example, > > > KPSGetConvergedReason() after a KSP solve to see if it has failed, > > > this is not a catastrophic failure. If a MatCreate() or any other > > > call returns a nonzero ierr the game is up, you cannot continue > > > running PETSc. > > > > > > > > Barry > > > > > > > > > > > > > On Jan 20, 2020, at 5:41 PM, Matthew Knepley > > > wrote: > > > > > > > > > > Not if you initialize the pointers to zero: Mat A = NULL. > > > > > > > > > > Matt > > > > > > > > > > On Mon, Jan 20, 2020 at 6:31 PM Sam Guo > > > wrote: > > > > > I mean MatDestroy. > > > > > > > > > > On Mon, Jan 20, 2020 at 3:28 PM Sam Guo > > > wrote: > > > > > Does it hurt to call Destroy function without calling > > > CreateFunction? For example > > > > > Mat A, B; > > > > > PetscErrorCode ierr1, ierr2; > > > > > ierr1 = MatCreate(PETSC_COMM_WORLD,&A); > > > > > if(ierr1 == 0) > > > > > { > > > > > ierr2 = MatCreate(PETSC_COMM_WORLD > > > > > ,&B); > > > > > > > > > > } > > > > > if(ierr1 !=0 || ierr2 != 0) > > > > > { > > > > > Destroy(&A); > > > > > Destroy(&B); // if ierr1 !=0, MatCreat is not called on B. > > > Does it hurt to call Destroy B here? 
> > > > > } > > > > > > > > > > > > > > > > > > > > On Mon, Jan 20, 2020 at 11:11 AM Dave May < > > > dave.mayhem23 at gmail.com> wrote: > > > > > > > > > > > > > > > On Mon 20. Jan 2020 at 19:47, Sam Guo > > > wrote: > > > > > Can I assume if there is MatCreat or VecCreate, I should clean > > > up the memory myself? > > > > > > > > > > Yes. You will need to call the matching Destroy function. > > > > > > > > > > > > > > > > > > > > On Mon, Jan 20, 2020 at 10:45 AM Sam Guo > > > wrote: > > > > > I only include the first few lines of SLEPc example. What about > > > following > > > > > ierr = MatCreate(PETSC_COMM_WORLD,&A);CHKERRQ(ierr); > > > > > ierr = > > > MatSetSizes(A,PETSC_DECIDE,PETSC_DECIDE,n,n);CHKERRQ(ierr); > > > > > Is there any memory lost? > > > > > > > > > > On Mon, Jan 20, 2020 at 10:41 AM Dave May < > > > dave.mayhem23 at gmail.com> wrote: > > > > > > > > > > > > > > > On Mon 20. Jan 2020 at 19:39, Sam Guo > > > wrote: > > > > > I don't have a specific case yet. Currently every call of PETSc > > > is checked. If ierr is not zero, print the error and return. For > > > example, > > > > > Mat A; /* problem matrix */ > > > > > EPS eps; /* eigenproblem solver context */ > > > > > EPSType type; > > > > > PetscReal error,tol,re,im; > > > > > PetscScalar kr,ki; Vec xr,xi; 25 > > > > > PetscInt n=30,i,Istart,Iend,nev,maxit,its,nconv; > > > > > PetscErrorCode ierr; > > > > > ierr = > > > SlepcInitialize(&argc,&argv,(char*)0,help);CHKERRQ(ierr); > > > > > ierr = PetscOptionsGetInt(NULL,NULL,"- > > > n",&n,NULL);CHKERRQ(ierr); > > > > > ierr = PetscPrintf(PETSC_COMM_WORLD,"\n1-D Laplacian > > > Eigenproblem, n=%D\n\n",n);CHKERRQ(ierr); > > > > > > > > > > I am wondering if the memory is lost by calling CHKERRQ. > > > > > > > > > > No. > > > > > > > > > > > > > > > > > > > > On Mon, Jan 20, 2020 at 10:14 AM Dave May < > > > dave.mayhem23 at gmail.com> wrote: > > > > > > > > > > > > > > > On Mon 20. Jan 2020 at 19:11, Sam Guo > > > wrote: > > > > > Dear PETSc dev team, > > > > > If PETSc function returns an error, what's the correct way > > > to clean PETSc? > > > > > > > > > > The answer depends on the error message reported. Send the > > > complete error message and a better answer can be provided. > > > > > > > > > > Particularly how to clean up the memory? > > > > > > > > > > Totally depends on the objects which aren?t being freed. You > > > need to provide more information > > > > > > > > > > Thanks > > > > > Dave > > > > > > > > > > > > > > > Thanks, > > > > > Sam > > > > > > > > > > > > > > > -- > > > > > What most experimenters take for granted before they begin > > > their experiments is infinitely more interesting than any results > > > to which their experiments lead. > > > > > -- Norbert Wiener > > > > > > > > > > https://www.cse.buffalo.edu/~knepley/ > > > > > > > > > > > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From bsmith at mcs.anl.gov Tue Jan 21 18:09:01 2020 From: bsmith at mcs.anl.gov (Smith, Barry F.) Date: Wed, 22 Jan 2020 00:09:01 +0000 Subject: [petsc-users] error handling In-Reply-To: References: <6757A644-17BB-459B-9E43-6BDBA8081ECC@mcs.anl.gov> Message-ID: <20C9C0CB-AC8E-4539-B549-981ADE4B56E1@mcs.anl.gov> Yes, it is essentially like a system error. Barry > On Jan 21, 2020, at 10:49 AM, Sam Guo wrote: > > I use PETSc from my application. Sounds you are saying I just treat ierr!=0 as an system error and no need to call Destroy functions. > > On Tuesday, January 21, 2020, Smith, Barry F. 
wrote: > > > > On Jan 20, 2020, at 6:32 PM, Sam Guo wrote: > > > > Hi Barry, > > I understand ierr != 0 means something catastrophic. I just want to release all memory before I exit PETSc. > > In general not possible. If you run with the debug version and -malloc_debug it is possible but because of the unknown error it could be that the releasing of the memory causes a real crash. > > Is your main concern when you use PETSc for a large problem and it errors because it is "out of memory"? > > Barry > > > > > > Thanks, > > Sam > > > > On Mon, Jan 20, 2020 at 4:06 PM Smith, Barry F. wrote: > > > > Sam, > > > > I am not sure what your goal is but PETSc error return codes are error return codes not exceptions. They mean that something catastrophic happened and there is no recovery. > > > > Note that PETSc solvers do not return nonzero error codes on failure to converge etc. You call, for example, KPSGetConvergedReason() after a KSP solve to see if it has failed, this is not a catastrophic failure. If a MatCreate() or any other call returns a nonzero ierr the game is up, you cannot continue running PETSc. > > > > Barry > > > > > > > On Jan 20, 2020, at 5:41 PM, Matthew Knepley wrote: > > > > > > Not if you initialize the pointers to zero: Mat A = NULL. > > > > > > Matt > > > > > > On Mon, Jan 20, 2020 at 6:31 PM Sam Guo wrote: > > > I mean MatDestroy. > > > > > > On Mon, Jan 20, 2020 at 3:28 PM Sam Guo wrote: > > > Does it hurt to call Destroy function without calling CreateFunction? For example > > > Mat A, B; > > > PetscErrorCode ierr1, ierr2; > > > ierr1 = MatCreate(PETSC_COMM_WORLD,&A); > > > if(ierr1 == 0) > > > { > > > ierr2 = MatCreate(PETSC_COMM_WORLD > > > ,&B); > > > > > > } > > > if(ierr1 !=0 || ierr2 != 0) > > > { > > > Destroy(&A); > > > Destroy(&B); // if ierr1 !=0, MatCreat is not called on B. Does it hurt to call Destroy B here? > > > } > > > > > > > > > > > > On Mon, Jan 20, 2020 at 11:11 AM Dave May wrote: > > > > > > > > > On Mon 20. Jan 2020 at 19:47, Sam Guo wrote: > > > Can I assume if there is MatCreat or VecCreate, I should clean up the memory myself? > > > > > > Yes. You will need to call the matching Destroy function. > > > > > > > > > > > > On Mon, Jan 20, 2020 at 10:45 AM Sam Guo wrote: > > > I only include the first few lines of SLEPc example. What about following > > > ierr = MatCreate(PETSC_COMM_WORLD,&A);CHKERRQ(ierr); > > > ierr = MatSetSizes(A,PETSC_DECIDE,PETSC_DECIDE,n,n);CHKERRQ(ierr); > > > Is there any memory lost? > > > > > > On Mon, Jan 20, 2020 at 10:41 AM Dave May wrote: > > > > > > > > > On Mon 20. Jan 2020 at 19:39, Sam Guo wrote: > > > I don't have a specific case yet. Currently every call of PETSc is checked. If ierr is not zero, print the error and return. For example, > > > Mat A; /* problem matrix */ > > > EPS eps; /* eigenproblem solver context */ > > > EPSType type; > > > PetscReal error,tol,re,im; > > > PetscScalar kr,ki; Vec xr,xi; 25 > > > PetscInt n=30,i,Istart,Iend,nev,maxit,its,nconv; > > > PetscErrorCode ierr; > > > ierr = SlepcInitialize(&argc,&argv,(char*)0,help);CHKERRQ(ierr); > > > ierr = PetscOptionsGetInt(NULL,NULL,"-n",&n,NULL);CHKERRQ(ierr); > > > ierr = PetscPrintf(PETSC_COMM_WORLD,"\n1-D Laplacian Eigenproblem, n=%D\n\n",n);CHKERRQ(ierr); > > > > > > I am wondering if the memory is lost by calling CHKERRQ. > > > > > > No. > > > > > > > > > > > > On Mon, Jan 20, 2020 at 10:14 AM Dave May wrote: > > > > > > > > > On Mon 20. 
Jan 2020 at 19:11, Sam Guo wrote: > > > Dear PETSc dev team, > > > If PETSc function returns an error, what's the correct way to clean PETSc? > > > > > > The answer depends on the error message reported. Send the complete error message and a better answer can be provided. > > > > > > Particularly how to clean up the memory? > > > > > > Totally depends on the objects which aren?t being freed. You need to provide more information > > > > > > Thanks > > > Dave > > > > > > > > > Thanks, > > > Sam > > > > > > > > > -- > > > What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. > > > -- Norbert Wiener > > > > > > https://www.cse.buffalo.edu/~knepley/ > > > From jed at jedbrown.org Tue Jan 21 23:24:49 2020 From: jed at jedbrown.org (Jed Brown) Date: Tue, 21 Jan 2020 22:24:49 -0700 Subject: [petsc-users] Compiling lists all the libraries In-Reply-To: <0f70d95010e7f9a99a745fa5bf36e3df@cam.ac.uk> References: <0f70d95010e7f9a99a745fa5bf36e3df@cam.ac.uk> Message-ID: <87a76g3vxa.fsf@jedbrown.org> PETSc normally removes duplicates from its library link lines, but there are some contexts (with static linking) where it's hard to show that it's safe because some projects ship library sets with circular dependencies. This is generally harmless unless you have a specific need to avoid overlinking (e.g., if you're packaging for binary distribution to machines that need to be able to minimally upgrade components). "Y. Shidi" writes: > Dear developers, > > I stated to use certain libraries in the Makefile of my code, > but it turns out that Petsc automatically uses all the libraries. > Some are listed twice. > I am wondering how does this happen. > > -Wl,-rpath,/home/ys453/Sources/petsc/arch-linux2-c-debug/lib > -L/home/ys453/Sources/petsc/arch-linux2-c-debug/lib > -Wl,-rpath,/home/ys453/Sources/petsc/arch-linux2-c-debug/lib > -Wl,-rpath,/usr/lib/openmpi/lib -L/usr/lib/openmpi/lib > -Wl,-rpath,/usr/lib/gcc/x86_64-linux-gnu/5 > -L/usr/lib/gcc/x86_64-linux-gnu/5 -Wl,-rpath,/usr/lib/x86_64-linux-gnu > -L/usr/lib/x86_64-linux-gnu -Wl,-rpath,/lib/x86_64-linux-gnu > -L/lib/x86_64-linux-gnu -lpetsc -lcmumps -ldmumps -lsmumps -lzmumps > -lmumps_common -lpord -lscalapack -lsuperlu_dist -lHYPRE -llapack -lblas > -lparmetis -lmetis -lptesmumps -lptscotch -lptscotcherr -lesmumps > -lscotch -lscotcherr -lm -lX11 -lstdc++ -ldl -lmpi_usempif08 > -lmpi_usempi_ignore_tkr -lmpi_mpifh -lmpi -lgfortran -lm -lgfortran -lm > -lgcc_s -lquadmath -lpthread -lrt -lm -lpthread -lz -lstdc++ -ldl > -lparmetis -lmetis -lpetsc -lboost_timer -lboost_system > > Thank you for your time. > > Kind Regards, > Shidi From dmitry.melnichuk at geosteertech.com Wed Jan 22 03:49:04 2020 From: dmitry.melnichuk at geosteertech.com (=?utf-8?B?0JTQvNC40YLRgNC40Lkg0JzQtdC70YzQvdC40YfRg9C6?=) Date: Wed, 22 Jan 2020 12:49:04 +0300 Subject: [petsc-users] Solver compilation with 64-bit version of PETSc under Windows 10 using Cygwin In-Reply-To: <553C26B0-C2F9-45F4-9EF8-E443EAE86E49@mcs.anl.gov> References: <35432251579093203@vla3-307f9063dbf4.qloud-c.yandex.net> <12790071579517006@iva7-8a22bc446c12.qloud-c.yandex.net> <5007321579602495@sas8-55d8cbf44a35.qloud-c.yandex.net> <553C26B0-C2F9-45F4-9EF8-E443EAE86E49@mcs.anl.gov> Message-ID: <4218091579686544@vla4-9d01d86ae0b7.qloud-c.yandex.net> An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... 
Name: for_petsc_community.rar Type: application/x-rar Size: 180995 bytes Desc: not available URL:
From bsmith at mcs.anl.gov Wed Jan 22 08:53:13 2020 From: bsmith at mcs.anl.gov (Smith, Barry F.) Date: Wed, 22 Jan 2020 14:53:13 +0000 Subject: [petsc-users] Solver compilation with 64-bit version of PETSc under Windows 10 using Cygwin In-Reply-To: <4218091579686544@vla4-9d01d86ae0b7.qloud-c.yandex.net> References: <35432251579093203@vla3-307f9063dbf4.qloud-c.yandex.net> <12790071579517006@iva7-8a22bc446c12.qloud-c.yandex.net> <5007321579602495@sas8-55d8cbf44a35.qloud-c.yandex.net> <553C26B0-C2F9-45F4-9EF8-E443EAE86E49@mcs.anl.gov> <4218091579686544@vla4-9d01d86ae0b7.qloud-c.yandex.net> Message-ID: > On Jan 22, 2020, at 3:49 AM, Dmitry Melnichuk wrote: > > Thank you for your help! > > I ran ./configure with the flags --with-64-bit-indices --download-fblaslapack. > The log files are called configure_fblaslapack_64-bit-indices.log and test_fblaslapack_64-bit-indices.log respectively. > The Fortran test example runs successfully, but the solver does not compile with PETSc correctly: > > > if (j==1) call MatSetValue(Mat_K,j3,j3,f0,Add_Values,ierr_g) So Add_values should be eight bytes but for some reason it is four. Try ADD_VALUES here, it should not matter but. Also try putting #include <petsc/finclude/petscvec.h> with use petscvec (as in ex9f.F90) at the beginning of the routine to make sure ADD_VALUES is defined. Make sure you don't have a local variable named Add_Values > 1 > Error: Type mismatch in argument 'i' at (1); passed INTEGER(4) to INTEGER(8) > > Also changing the ierr_g declaration from PetscErrorCode to PetscInt has no influence on the compilation result. > > > Manual compilation of OpenBLAS and appropriate changes in ./configure solved my problem. > So I attached the associated log files, named configure_openblas_64-bit-indices.log and test_openblas_64-bit-indices.log > > > All operations were performed with the barry/2020-01-15/support-default-integer-8 version of PETSc. > > > Kind regards, > Dmitry Melnichuk > > 21.01.2020, 16:57, "Smith, Barry F.": > [...]
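Put together, Barry's suggestions amount to something like this sketch (names from Dmitry's excerpt; the include path assumes a current PETSc layout):

#include <petsc/finclude/petscmat.h>
      use petscmat
      implicit none
      Mat            :: Mat_K
      PetscInt       :: j3          ! row/column index: PetscInt, 64-bit in this build
      PetscScalar    :: f0
      PetscErrorCode :: ierr_g

      ! ADD_VALUES comes from the PETSc module; a local variable named
      ! Add_Values would shadow it and change the argument's kind
      call MatSetValue(Mat_K,j3,j3,f0,ADD_VALUES,ierr_g)
      CHKERRA(ierr_g)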
> Because testing of the 32-bit version of PETSc gives such a weird error with mpiexec.exe, but the Fortran example ex5f completes successfully.
>
> I need to say that my solver compiled with the 64-bit version of PETSc failed with a Segmentation Violation error (the same as ex5f) when calling KSPSolve(Krylov,Vec_F,Vec_U,ierr).
> During the execution KSPSolve appeals to VecNorm_Seq in bvec2.c. Also VecNorm_Seq uses several types of integer: PetscErrorCode, PetscInt, PetscBLASInt.
> I suspect that PetscBLASInt may conflict with PetscInt.
> Also I noted that execution of KSPSolve() does not even start, so arguments (Krylov,Vec_F,Vec_U,ierr) cannot be passed to KSPSolve().
> (inserted fprint() in the top of KSPSolve and saw no output)
>
> So I tried to configure PETSc with --download-fblaslapack --with-64-bit-blas-indices, but got an error that
>
> fblaslapack does not support -with-64-bit-blas-indices
>
> Switching to the flags --download-openblas -with-64-bit-blas-indices was unsuccessful too because of the error:
>
> Error during download/extract/detection of OPENBLAS:
> Unable to download openblas
> Could not execute "['git clone https://github.com/xianyi/OpenBLAS.git /cygdrive/d/Computational_geomechanics/installation/petsc-barry/arch-mswin-c-debug/externalpackages/git.openblas']":
> fatal: destination path '/cygdrive/d/Computational_geomechanics/installation/petsc-barry/arch-mswin-c-debug/externalpackages/git.openblas' already exists and is not an empty directory.
> Unable to download package OPENBLAS from: git://https://github.com/xianyi/OpenBLAS.git
> * If URL specified manually - perhaps there is a typo?
> * If your network is disconnected - please reconnect and rerun ./configure
> * Or perhaps you have a firewall blocking the download
> * You can run with --with-packages-download-dir=/adirectory and ./configure will instruct you what packages to download manually
> * or you can download the above URL manually, to /yourselectedlocation
> and use the configure option:
> --download-openblas=/yourselectedlocation
> Unable to download openblas
> Could not execute "['git clone https://github.com/xianyi/OpenBLAS.git /cygdrive/d/Computational_geomechanics/installation/petsc-barry/arch-mswin-c-debug/externalpackages/git.openblas']":
> fatal: destination path '/cygdrive/d/Computational_geomechanics/installation/petsc-barry/arch-mswin-c-debug/externalpackages/git.openblas' already exists and is not an empty directory.
> Unable to download package OPENBLAS from: git://https://github.com/xianyi/OpenBLAS.git
> * If URL specified manually - perhaps there is a typo?
> * If your network is disconnected - please reconnect and rerun ./configure
> * Or perhaps you have a firewall blocking the download
> * You can run with --with-packages-download-dir=/adirectory and ./configure will instruct you what packages to download manually
> * or you can download the above URL manually, to /yourselectedlocation
> and use the configure option:
> --download-openblas=/yourselectedlocation
> Could not locate downloaded package OPENBLAS in /cygdrive/d/Computational_geomechanics/installation/petsc-barry/arch-mswin-c-debug/externalpackages
>
> But I checked the last location (.../externalpackages) and saw that OpenBLAS was downloaded and unzipped.
>
> Kind regards,
> Dmitry Melnichuk
>
> 20.01.2020, 16:32, "Smith, Barry F."
: > First you need to figure out what is triggering:
> > C:/MPI/Bin/mpiexec.exe: error while loading shared libraries: ?: cannot open shared object file: No such file or directory
> > Googling it finds all kinds of suggestions for Linux. But Windows? Maybe the debugger will help.
> > Second
> > VecNorm_Seq line 221 /cygdrive/d/Computational_geomechanics/installation/petsc-barry/src/vec/vec/impls/seq/bvec2.c
> > Debugger is best to find out what is triggering this. Since it is the C side of things it would be odd that the Fortran change affects it.
> > Barry
>
> > On Jan 20, 2020, at 4:43 AM, Dmitry Melnichuk wrote:
> > Thank you so much for your assistance!
> > As far as I have been able to find out, the errors "Type mismatch in argument 'ierr'" have been successfully fixed.
> > But execution of the command "make PETSC_DIR=/cygdrive/d/... PETSC_ARCH=arch-mswin-c-debug check" leads to the appearance of a Segmentation Violation error.
> > I compiled PETSc with Microsoft MPI v10.
> > Does it make sense to compile PETSc with another MPI implementation (such as MPICH) in order to resolve the issue?
>
> > Error message:
> Running test examples to verify correct installation
> Using PETSC_DIR=/cygdrive/d/Computational_geomechanics/installation/petsc-barry and PETSC_ARCH=arch-mswin-c-debug
> Possible error running C/C++ src/snes/examples/tutorials/ex19 with 1 MPI process
> See http://www.mcs.anl.gov/petsc/documentation/faq.html
> C:/MPI/Bin/mpiexec.exe: error while loading shared libraries: ?: cannot open shared object file: No such file or directory
> Possible error running C/C++ src/snes/examples/tutorials/ex19 with 2 MPI processes
> See http://www.mcs.anl.gov/petsc/documentation/faq.html
> C:/MPI/Bin/mpiexec.exe: error while loading shared libraries: ?: cannot open shared object file: No such file or directory
> Possible error running Fortran example src/snes/examples/tutorials/ex5f with 1 MPI process
> See http://www.mcs.anl.gov/petsc/documentation/faq.html
> [0]PETSC ERROR: ------------------------------------------------------------------------
> [0]PETSC ERROR: Caught signal number 11 SEGV: Segmentation Violation, probably memory access out of range
> [0]PETSC ERROR: Try option -start_in_debugger or -on_error_attach_debugger
> [0]PETSC ERROR: or see https://www.mcs.anl.gov/petsc/documentation/faq.html#valgrind
> [0]PETSC ERROR: or try http://valgrind.org on GNU/linux and Apple Mac OS X to find memory corruption errors
> [0]PETSC ERROR: likely location of problem given in stack below
> [0]PETSC ERROR: --------------------- Stack Frames ------------------------------------
> [0]PETSC ERROR: Note: The EXACT line numbers in the stack are not available,
> [0]PETSC ERROR: INSTEAD the line number of the start of the function
> [0]PETSC ERROR: is given.
> [0]PETSC ERROR: [0] VecNorm_Seq line 221 /cygdrive/d/Computational_geomechanics/installation/petsc-barry/src/vec/vec/impls/seq/bvec2.c
> [0]PETSC ERROR: [0] VecNorm line 213 /cygdrive/d/Computational_geomechanics/installation/petsc-barry/src/vec/vec/interface/rvector.c
> [0]PETSC ERROR: [0] SNESSolve_NEWTONLS line 144 /cygdrive/d/Computational_geomechanics/installation/petsc-barry/src/snes/impls/ls/ls.c
> [0]PETSC ERROR: [0] SNESSolve line 4375 /cygdrive/d/Computational_geomechanics/installation/petsc-barry/src/snes/interface/snes.c
> [0]PETSC ERROR: --------------------- Error Message --------------------------------------------------------------
> [0]PETSC ERROR: Signal received
> [0]PETSC ERROR: See https://www.mcs.anl.gov/petsc/documentation/faq.html for trouble shooting.
> [0]PETSC ERROR: Petsc Development GIT revision: unknown GIT Date: unknown
> [0]PETSC ERROR: ./ex5f on a arch-mswin-c-debug named DESKTOP-R88IMOB by useruser Mon Jan 20 09:18:34 2020
> [0]PETSC ERROR: Configure options --with-cc=x86_64-w64-mingw32-gcc --with-cxx=x86_64-w64-mingw32-g++ --with-fc=x86_64-w64-mingw32-gfortran --with-mpi-include=/cygdrive/c/MPISDK/Include --with-mpi-lib=/cygdrive/c/MPISDK/Lib/libmsmpi.a --with-mpi-mpiexec=/cygdrive/c/MPI/Bin/mpiexec.exe --with-debugging=yes -CFLAGS=-O2 -CXXFLAGS=-O2 -FFLAGS="-O2 -static-libgfortran -static -lpthread -fno-range-check -fdefault-integer-8" --download-fblaslapack --with-shared-libraries=no --with-64-bit-indices --force
> [0]PETSC ERROR: #1 User provided function() line 0 in unknown file
>
> job aborted:
> [ranks] message
>
> [0] application aborted
> aborting MPI_COMM_WORLD (comm=0x44000000), error 50152059, comm rank 0
>
> ---- error analysis -----
>
> [0] on DESKTOP-R88IMOB
> ./ex5f aborted the job. abort code 50152059
>
> ---- error analysis -----
> Completed test examples
>
> Kind regards,
> Dmitry Melnichuk
>
> 19.01.2020, 07:47, "Smith, Barry F." :
> > Dmitry,
> > I have completed and tested the branch barry/2020-01-15/support-default-integer-8; it is undergoing testing now https://gitlab.com/petsc/petsc/merge_requests/2456
> > Please give it a try. Note that MPI has no support for integer promotion, so YOU must ensure that any MPI calls from Fortran pass 4 byte integers, not promoted 8 byte integers.
> > I have tested it with recent versions of MPICH and OpenMPI; it is fragile at compile time and may fail to compile with different versions of MPI.
> > Good luck,
> > Barry
> > I do not recommend this approach for integer promotion in Fortran. Just blindly promoting all integers can often lead to problems. I recommend using the kind mechanism of Fortran to ensure that each variable is the type you want; you can recompile with different options to promote the kind-declared variables you wish. Of course this is more intrusive and requires changes to the Fortran code.
>
> > On Jan 15, 2020, at 7:00 AM, Dmitry Melnichuk wrote:
> > Hello all!
> > At present I need to compile a solver called Defmod (https://bitbucket.org/stali/defmod/wiki/Home), which is written in Fortran 95.
> > Defmod uses PETSc for solving linear algebra systems.
> > Solver compilation with the 32-bit version of PETSc does not cause any problem.
> > But solver compilation with the 64-bit version of PETSc produces an error with the size of the ierr PETSc variable.
>
> > 1. For example, consider the following statements written in Fortran:
>
> PetscErrorCode :: ierr_m
> PetscInt :: ierr
> ...
> ...
> call VecDuplicate(Vec_U,Vec_Um,ierr)
> call VecCopy(Vec_U,Vec_Um,ierr)
> call VecGetLocalSize(Vec_U,j,ierr)
> call VecGetOwnershipRange(Vec_U,j1,j2,ierr_m)
>
> As can be seen, the first three subroutines require ierr to be of size INTEGER(8), while the last subroutine (VecGetOwnershipRange) requires ierr to be of size INTEGER(4). Using the same integer format gives an error:
>
> There is no specific subroutine for the generic 'vecgetownershiprange' at (1)
>
> 2. Another example is:
>
> call MatAssemblyBegin(Mat_K,Mat_Final_Assembly,ierr)
> CHKERRA(ierr)
> call MatAssemblyEnd(Mat_K,Mat_Final_Assembly,ierr)
>
> I am not able to define an appropriate size of ierr in CHKERRA(ierr). If I choose INTEGER(8), the error "Type mismatch in argument 'ierr' at (1); passed INTEGER(8) to INTEGER(4)" occurs.
> If I define ierr as INTEGER(4), the error "Type mismatch in argument 'ierr' at (1); passed INTEGER(4) to INTEGER(8)" appears.
>
> 3. If I change the sizes of the ierr variables as the error messages require, the compilation completes successfully, but an error occurs when calculating the RHS vector, with the following message:
>
> [0]PETSC ERROR: Out of range index value -4 cannot be negative
>
> Command to configure the 32-bit version of PETSc under Windows 10 using Cygwin:
> ./configure --with-cc=x86_64-w64-mingw32-gcc --with-cxx=x86_64-w64-mingw32-g++ --with-fc=x86_64-w64-mingw32-gfortran --download-fblaslapack --with-mpi-include=/cygdrive/c/MPISDK/Include --with-mpi-lib=/cygdrive/c/MPISDK/Lib/libmsmpi.a --with-mpi-mpiexec=/cygdrive/c/MPI/Bin/mpiexec.exe --with-debugging=yes -CFLAGS='-O2' -CXXFLAGS='-O2' -FFLAGS='-O2 -static-libgfortran -static -lpthread -fno-range-check' --with-shared-libraries=no
>
> Command to configure the 64-bit version of PETSc under Windows 10 using Cygwin:
> ./configure --with-cc=x86_64-w64-mingw32-gcc --with-cxx=x86_64-w64-mingw32-g++ --with-fc=x86_64-w64-mingw32-gfortran --download-fblaslapack --with-mpi-include=/cygdrive/c/MPISDK/Include --with-mpi-lib=/cygdrive/c/MPISDK/Lib/libmsmpi.a --with-mpi-mpiexec=/cygdrive/c/MPI/Bin/mpiexec.exe --with-debugging=yes -CFLAGS='-O2' -CXXFLAGS='-O2' -FFLAGS='-O2 -static-libgfortran -static -lpthread -fno-range-check -fdefault-integer-8' --with-shared-libraries=no --with-64-bit-indices --known-64-bit-blas-indices
>
> Kind regards,
> Dmitry Melnichuk

From st107539 at stud.uni-stuttgart.de Wed Jan 22 09:11:54 2020
From: st107539 at stud.uni-stuttgart.de (Felix Huber)
Date: Wed, 22 Jan 2020 16:11:54 +0100
Subject: [petsc-users] Choosing VecScatter Method in Matrix-Vector Product
Message-ID: <6f510cda-aa82-af7b-fd12-34297edba2b7@stud.uni-stuttgart.de>

Hello,

I currently investigate why our code does not show the expected weak scaling behaviour in a CG solver. Therefore I wanted to try out different communication methods for the VecScatter in the matrix-vector product. However, it seems like PETSc (version 3.7.6) always chooses either MPI_Alltoallv or MPI_Alltoallw when I pass different options via the PETSC_OPTIONS environment variable. Does anybody know, why this doesn't work as I expected?

The matrix is a MPIAIJ matrix and created by a finite element discretization of a 3D Laplacian. Therefore it only communicates with 'neighboring' MPI ranks. Not sure if it helps, but the code is run on a Cray XC40.

I tried the `ssend`, `rsend`, `sendfirst`, `reproduce` and no options from https://www.mcs.anl.gov/petsc/petsc-3.7/docs/manualpages/Vec/VecScatterCreate.html which all result in a MPI_Alltoallv.
When combined with `nopack` the communication uses MPI_Alltoallw. Best regards, Felix From dave.mayhem23 at gmail.com Wed Jan 22 09:29:58 2020 From: dave.mayhem23 at gmail.com (Dave May) Date: Wed, 22 Jan 2020 16:29:58 +0100 Subject: [petsc-users] Choosing VecScatter Method in Matrix-Vector Product In-Reply-To: <6f510cda-aa82-af7b-fd12-34297edba2b7@stud.uni-stuttgart.de> References: <6f510cda-aa82-af7b-fd12-34297edba2b7@stud.uni-stuttgart.de> Message-ID: On Wed 22. Jan 2020 at 16:12, Felix Huber wrote: > Hello, > > I currently investigate why our code does not show the expected weak > scaling behaviour in a CG solver. Can you please send representative log files which characterize the lack of scaling (include the full log_view)? Are you using a KSP/PC configuration which should weak scale? Thanks Dave Therefore I wanted to try out > different communication methods for the VecScatter in the matrix-vector > product. However, it seems like PETSc (version 3.7.6) always chooses > either MPI_Alltoallv or MPI_Alltoallw when I pass different options via > the PETSC_OPTIONS environment variable. Does anybody know, why this > doesn't work as I expected? > > The matrix is a MPIAIJ matrix and created by a finite element > discretization of a 3D Laplacian. Therefore it only communicates with > 'neighboring' MPI ranks. Not sure if it helps, but the code is run on a > Cray XC40. > > I tried the `ssend`, `rsend`, `sendfirst`, `reproduce` and no options > from > > https://www.mcs.anl.gov/petsc/petsc-3.7/docs/manualpages/Vec/VecScatterCreate.html > which all result in a MPI_Alltoallv. When combined with `nopack` the > communication uses MPI_Alltoallw. > > Best regards, > Felix > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From stefano.zampini at gmail.com Wed Jan 22 10:56:38 2020 From: stefano.zampini at gmail.com (Stefano Zampini) Date: Wed, 22 Jan 2020 19:56:38 +0300 Subject: [petsc-users] Choosing VecScatter Method in Matrix-Vector Product In-Reply-To: <6f510cda-aa82-af7b-fd12-34297edba2b7@stud.uni-stuttgart.de> References: <6f510cda-aa82-af7b-fd12-34297edba2b7@stud.uni-stuttgart.de> Message-ID: > On Jan 22, 2020, at 6:11 PM, Felix Huber wrote: > > Hello, > > I currently investigate why our code does not show the expected weak scaling behaviour in a CG solver. Therefore I wanted to try out different communication methods for the VecScatter in the matrix-vector product. However, it seems like PETSc (version 3.7.6) always chooses either MPI_Alltoallv or MPI_Alltoallw when I pass different options via the PETSC_OPTIONS environment variable. Does anybody know, why this doesn't work as I expected? > > The matrix is a MPIAIJ matrix and created by a finite element discretization of a 3D Laplacian. Therefore it only communicates with 'neighboring' MPI ranks. Not sure if it helps, but the code is run on a Cray XC40. > > I tried the `ssend`, `rsend`, `sendfirst`, `reproduce` and no options from https://www.mcs.anl.gov/petsc/petsc-3.7/docs/manualpages/Vec/VecScatterCreate.html which all result in a MPI_Alltoallv. When combined with `nopack` the communication uses MPI_Alltoallw. > > Best regards, > Felix > 3.7.6 is a quite old version. 
You should consider upgrading

From eijkhout at tacc.utexas.edu Wed Jan 22 11:03:08 2020
From: eijkhout at tacc.utexas.edu (Victor Eijkhout)
Date: Wed, 22 Jan 2020 17:03:08 +0000
Subject: Re: [petsc-users] Choosing VecScatter Method in Matrix-Vector Product
In-Reply-To: <6f510cda-aa82-af7b-fd12-34297edba2b7@stud.uni-stuttgart.de>
References: <6f510cda-aa82-af7b-fd12-34297edba2b7@stud.uni-stuttgart.de>
Message-ID:

On , 2020Jan22, at 09:11, Felix Huber > wrote: weak scaling behaviour in a CG solver

Norms and inner products have Log(P) complexity, so you'll never get perfect weak scaling.

Victor.

-------------- next part -------------- An HTML attachment was scrubbed... URL:

From jed at jedbrown.org Wed Jan 22 11:33:27 2020
From: jed at jedbrown.org (Jed Brown)
Date: Wed, 22 Jan 2020 10:33:27 -0700
Subject: Re: [petsc-users] Choosing VecScatter Method in Matrix-Vector Product
In-Reply-To: References: <6f510cda-aa82-af7b-fd12-34297edba2b7@stud.uni-stuttgart.de>
Message-ID: <87blqv2y6w.fsf@jedbrown.org>

Victor Eijkhout writes:
> On , 2020Jan22, at 09:11, Felix Huber > wrote:
> > weak scaling behaviour in a CG solver
> > Norms and inner products have Log(P) complexity, so you'll never get perfect weak scaling.

Allreduce is nearly constant time with hardware collectives on nice networks. The increased cost frequently observed is due to load imbalance causing different processes to enter at different times. https://www.mcs.anl.gov/~fischer/gop/

From jed at jedbrown.org Wed Jan 22 11:36:02 2020
From: jed at jedbrown.org (Jed Brown)
Date: Wed, 22 Jan 2020 10:36:02 -0700
Subject: Re: [petsc-users] Choosing VecScatter Method in Matrix-Vector Product
In-Reply-To: References: <6f510cda-aa82-af7b-fd12-34297edba2b7@stud.uni-stuttgart.de>
Message-ID: <877e1j2y2l.fsf@jedbrown.org>

Stefano Zampini writes:
>> On Jan 22, 2020, at 6:11 PM, Felix Huber wrote:
>>
>> Hello,
>>
>> I currently investigate why our code does not show the expected weak scaling behaviour in a CG solver. Therefore I wanted to try out different communication methods for the VecScatter in the matrix-vector product. However, it seems like PETSc (version 3.7.6) always chooses either MPI_Alltoallv or MPI_Alltoallw when I pass different options via the PETSC_OPTIONS environment variable. Does anybody know, why this doesn't work as I expected?
>>
>> The matrix is a MPIAIJ matrix and created by a finite element discretization of a 3D Laplacian. Therefore it only communicates with 'neighboring' MPI ranks. Not sure if it helps, but the code is run on a Cray XC40.
>>
>> I tried the `ssend`, `rsend`, `sendfirst`, `reproduce` and no options from https://www.mcs.anl.gov/petsc/petsc-3.7/docs/manualpages/Vec/VecScatterCreate.html which all result in a MPI_Alltoallv. When combined with `nopack` the communication uses MPI_Alltoallw.
>>
>> Best regards,
>> Felix
>>
> 3.7.6 is a quite old version. You should consider upgrading

VecScatter has been greatly refactored (and the default implementation is entirely new) since 3.7. Anyway, I'm curious about your configuration and how you determine that MPI_Alltoallv/MPI_Alltoallw is being used. This has never been a default code path, so I suspect something in your environment or code making this happen.

From jeremy at seamplex.com Wed Jan 22 12:27:43 2020
From: jeremy at seamplex.com (Jeremy Theler)
Date: Wed, 22 Jan 2020 15:27:43 -0300
Subject: [petsc-users] Internal product through a matrix norm
Message-ID: <53c2fe45d5785f465cac240d6c17b0c3d2a565fd.camel@seamplex.com>

Sorry for the basic question, but here it goes.
Say I have a vector u and a matrix K and I want to compute the scalar

e = u^T K u

(for example the strain energy if u are displacements and K is the stiffness matrix).

Is there anything better (both in elegance and efficiency) than doing this?

PetscScalar e;
Vec Ku;

VecDuplicate(u, &Ku);
MatMult(K, u, Ku);
VecDot(u, Ku, &e);

--
jeremy theler
www.seamplex.com

From jed at jedbrown.org Wed Jan 22 12:30:20 2020
From: jed at jedbrown.org (Jed Brown)
Date: Wed, 22 Jan 2020 11:30:20 -0700
Subject: Re: [petsc-users] Internal product through a matrix norm
In-Reply-To: <53c2fe45d5785f465cac240d6c17b0c3d2a565fd.camel@seamplex.com>
References: <53c2fe45d5785f465cac240d6c17b0c3d2a565fd.camel@seamplex.com>
Message-ID: <87pnfb1gzn.fsf@jedbrown.org>

Jeremy Theler writes:
> Sorry for the basic question, but here it goes.
> Say I have a vector u and a matrix K and I want to compute the scalar
>
> e = u^T K u
>
> (for example the strain energy if u are displacements and K is the
> stiffness matrix).
>
> Is there anything better (both in elegance and efficiency) than doing
> this?
>
> PetscScalar e;
> Vec Ku;
>
> VecDuplicate(u, &Ku);
> MatMult(K, u, Ku);
> VecDot(u, Ku, &e);

Nope; this is standard.

From mark.cunningham at ariacoustics.com Fri Jan 24 08:24:01 2020
From: mark.cunningham at ariacoustics.com (Mark Cunningham)
Date: Fri, 24 Jan 2020 09:24:01 -0500
Subject: [petsc-users] overset grids
Message-ID:

As a novice PETSc user, can I ask for some advice? I've found some test examples that do similar sorts of things but I would appreciate any suggestions.
I have a matrix generated from overset meshes with a structure like

A = |A0  B01 B0n|
    |B01 A1  B1n|
    | :       : |
    |B0n ... An |

where the Ai are finite difference stencils and the Bij represent interpolation between the grids.
1. If I create a DMDA composite object that defines each of the grid sizes through repeated calls to DMCreate1D (because we are omitting blanked points from the 3D mesh) and create the matrix through DMCreateMatrix, can I still fill the matrix through MatSetValues using the global index? (This is how the code operates now.)
2. If I have to declare the matrix to be matnest and I have to create each of the subblocks individually, how do I cope with some of the Bij being all zero? Must there be a MatSetValues call for each block?
3. The principal reason at the moment for the change is to use diag(A0...An) as the preconditioner matrix and use ILU on the blocks. If I set the preconditioner to be bjacobi and then fetch the subksps, can I then set the subpcs to be ilu?
4. A further complication is we would like to have a boundary condition where the values of the boundary points are defined by an expansion in functions that satisfy the radiation condition. So, I would like to define an index set that identifies the boundary points and define a PCSHELL on A that for the pcsetup phase will do a QR factorization of the auxiliary matrix G, where we're solving Gy = QRy = c, and then on pcapply do the Ry = Q* c triangular solve that will enable me to update the boundary points (y a subset of x, the solution vector). How do I get the PCSHELL to work with the block jacobi strategy?

Thanks for any suggestions.

Mark Cunningham
-------------- next part -------------- An HTML attachment was scrubbed...
URL:

From jczhang at mcs.anl.gov Fri Jan 24 09:52:07 2020
From: jczhang at mcs.anl.gov (Zhang, Junchao)
Date: Fri, 24 Jan 2020 15:52:07 +0000
Subject: Re: [petsc-users] DMDA Error
In-Reply-To: References: Message-ID:

Hello, Anthony
I tried petsc-3.8.4 + icc/gcc + Intel MPI 2019 update 5 + optimized/debug build, and ran with 1024 ranks, but I could not reproduce the error. Maybe you can try these:
* Use the latest petsc + your test example, run with AND without -vecscatter_type mpi1, to see if they can report useful messages.
* Or, use Intel MPI 2019 update 6 to see if this is an Intel MPI bug.

$ cat ex50.c
#include <petscdm.h>
#include <petscdmda.h>

int main(int argc,char **argv)
{
  PetscErrorCode ierr;
  PetscInt size;
  PetscInt X = 1024,Y = 128,Z=512;
  //PetscInt X = 512,Y = 64, Z=256;
  DM da;

  ierr = PetscInitialize(&argc,&argv,(char*)0,NULL);if (ierr) return ierr;
  ierr = MPI_Comm_size(PETSC_COMM_WORLD,&size);CHKERRQ(ierr);
  ierr = DMDACreate3d(PETSC_COMM_WORLD,DM_BOUNDARY_NONE,DM_BOUNDARY_NONE,DM_BOUNDARY_NONE,DMDA_STENCIL_BOX,2*X+1,2*Y+1,2*Z+1,PETSC_DECIDE,PETSC_DECIDE,PETSC_DECIDE,3,2,NULL,NULL,NULL,&da);CHKERRQ(ierr);
  ierr = DMSetFromOptions(da);CHKERRQ(ierr);
  ierr = DMSetUp(da);CHKERRQ(ierr);

  ierr = PetscPrintf(PETSC_COMM_WORLD,"Running with %D MPI ranks\n",size);CHKERRQ(ierr);

  ierr = DMDestroy(&da);CHKERRQ(ierr);
  ierr = PetscFinalize();
  return ierr;
}

$ ldd ex50
linux-vdso.so.1 => (0x00007ffdbcd43000)
libpetsc.so.3.8 => /home/jczhang/petsc/linux-intel-opt/lib/libpetsc.so.3.8 (0x00002afd27e51000)
libX11.so.6 => /lib64/libX11.so.6 (0x00002afd2a811000)
libifport.so.5 => /blues/gpfs/home/software/spack-0.10.1/opt/spack/linux-centos7-x86_64/gcc-4.8.5/intel-parallel-studio-cluster.2019.5-zqvneipqa4u52iwlyy5kx4hbsfnspz6g/compilers_and_libraries_2019.5.281/linux/compiler/lib/intel64_lin/libifport.so.5 (0x00002afd2ab4f000)
libmpicxx.so.12 => /blues/gpfs/home/software/spack-0.10.1/opt/spack/linux-centos7-x86_64/gcc-4.8.5/intel-parallel-studio-cluster.2019.5-zqvneipqa4u52iwlyy5kx4hbsfnspz6g/compilers_and_libraries_2019.5.281/linux/mpi/intel64/lib/libmpicxx.so.12 (0x00002afd2ad7d000)
libdl.so.2 => /lib64/libdl.so.2 (0x00002afd2af9d000)
libmpifort.so.12 => /blues/gpfs/home/software/spack-0.10.1/opt/spack/linux-centos7-x86_64/gcc-4.8.5/intel-parallel-studio-cluster.2019.5-zqvneipqa4u52iwlyy5kx4hbsfnspz6g/compilers_and_libraries_2019.5.281/linux/mpi/intel64/lib/libmpifort.so.12 (0x00002afd2b1a1000)
libmpi.so.12 => /blues/gpfs/home/software/spack-0.10.1/opt/spack/linux-centos7-x86_64/gcc-4.8.5/intel-parallel-studio-cluster.2019.5-zqvneipqa4u52iwlyy5kx4hbsfnspz6g/compilers_and_libraries_2019.5.281/linux/mpi/intel64/lib/release/libmpi.so.12 (0x00002afd2b55f000)
librt.so.1 => /lib64/librt.so.1 (0x00002afd2c564000)
libpthread.so.0 => /lib64/libpthread.so.0 (0x00002afd2c76c000)
libimf.so => /blues/gpfs/home/software/spack-0.10.1/opt/spack/linux-centos7-x86_64/gcc-4.8.5/intel-parallel-studio-cluster.2019.5-zqvneipqa4u52iwlyy5kx4hbsfnspz6g/compilers_and_libraries_2019.5.281/linux/compiler/lib/intel64_lin/libimf.so (0x00002afd2c988000)
libsvml.so => /blues/gpfs/home/software/spack-0.10.1/opt/spack/linux-centos7-x86_64/gcc-4.8.5/intel-parallel-studio-cluster.2019.5-zqvneipqa4u52iwlyy5kx4hbsfnspz6g/compilers_and_libraries_2019.5.281/linux/compiler/lib/intel64_lin/libsvml.so (0x00002afd2d00d000)
libirng.so => /blues/gpfs/home/software/spack-0.10.1/opt/spack/linux-centos7-x86_64/gcc-4.8.5/intel-parallel-studio-cluster.2019.5-zqvneipqa4u52iwlyy5kx4hbsfnspz6g/compilers_and_libraries_2019.5.281/linux/compiler/lib/intel64_lin/libirng.so (0x00002afd2ea99000)
libm.so.6 => /lib64/libm.so.6 (0x00002afd2ee04000)
libcilkrts.so.5 => /blues/gpfs/home/software/spack-0.10.1/opt/spack/linux-centos7-x86_64/gcc-4.8.5/intel-parallel-studio-cluster.2019.5-zqvneipqa4u52iwlyy5kx4hbsfnspz6g/compilers_and_libraries_2019.5.281/linux/compiler/lib/intel64_lin/libcilkrts.so.5 (0x00002afd2f106000)
libstdc++.so.6 => /blues/gpfs/home/software/spack-0.10.1/opt/spack/linux-centos7-x86_64/gcc-4.8.5/intel-parallel-studio-cluster.2019.5-zqvneipqa4u52iwlyy5kx4hbsfnspz6g/clck/2019.5/lib/intel64/libstdc++.so.6 (0x00002afd2f343000)
libgcc_s.so.1 => /blues/gpfs/home/software/spack-0.10.1/opt/spack/linux-centos7-x86_64/gcc-4.8.5/intel-parallel-studio-cluster.2019.5-zqvneipqa4u52iwlyy5kx4hbsfnspz6g/clck/2019.5/lib/intel64/libgcc_s.so.1 (0x00002afd2f655000)
libirc.so => /blues/gpfs/home/software/spack-0.10.1/opt/spack/linux-centos7-x86_64/gcc-4.8.5/intel-parallel-studio-cluster.2019.5-zqvneipqa4u52iwlyy5kx4hbsfnspz6g/compilers_and_libraries_2019.5.281/linux/compiler/lib/intel64_lin/libirc.so (0x00002afd2f86b000)
libc.so.6 => /lib64/libc.so.6 (0x00002afd2fadd000)
libintlc.so.5 => /blues/gpfs/home/software/spack-0.10.1/opt/spack/linux-centos7-x86_64/gcc-4.8.5/intel-parallel-studio-cluster.2019.5-zqvneipqa4u52iwlyy5kx4hbsfnspz6g/compilers_and_libraries_2019.5.281/linux/compiler/lib/intel64_lin/libintlc.so.5 (0x00002afd2feaa000)
libxcb.so.1 => /lib64/libxcb.so.1 (0x00002afd3011c000)
/lib64/ld-linux-x86-64.so.2 (0x00002afd27c2d000)
libfabric.so.1 => /blues/gpfs/home/software/spack-0.10.1/opt/spack/linux-centos7-x86_64/gcc-4.8.5/intel-parallel-studio-cluster.2019.5-zqvneipqa4u52iwlyy5kx4hbsfnspz6g/compilers_and_libraries_2019.5.281/linux/mpi/intel64/libfabric/lib/libfabric.so.1 (0x00002afd30344000)
libXau.so.6 => /lib64/libXau.so.6 (0x00002afd3057c000)

--Junchao Zhang

On Tue, Jan 21, 2020 at 2:25 AM Anthony Jourdon > wrote:
Hello,
I made a test to try to reproduce the error. To do so I modified the file $PETSC_DIR/src/dm/examples/tests/ex35.c. I attach the file in case of need.
The same error is reproduced for 1024 mpi ranks. I tested two problem sizes (2*512+1x2*64+1x2*256+1 and 2*1024+1x2*128+1x2*512+1) and the error occurred for both cases; the first case is also the one I used to run before the OS and mpi updates.
I also ran the code with -malloc_debug and nothing more appeared.
I attached the configure command I used to build a debug version of petsc.
Thank you for your time,
Sincerely,
Anthony Jourdon
________________________________
From: Zhang, Junchao >
Sent: Thursday, January 16, 2020 16:49
To: Anthony Jourdon >
Cc: petsc-users at mcs.anl.gov >
Subject: Re: [petsc-users] DMDA Error

It seems the problem is triggered by DMSetUp. You can write a small test creating the DMDA with the same size as your code, to see if you can reproduce the problem. If yes, it would be much easier for us to debug it.
--Junchao Zhang

On Thu, Jan 16, 2020 at 7:38 AM Anthony Jourdon > wrote:
Dear Petsc developer,
I need assistance with an error.
I run a code that uses the DMDA related functions. I'm using petsc-3.8.4.
This code used to run very well on a super computer with the OS SLES11. Petsc was built using an intel mpi 5.1.3.223 module and intel mkl version 2016.0.2.181. The code was running with no problem on 1024 and more mpi ranks.
Recently, the OS of the computer has been updated to RHEL7. I rebuilt Petsc using newly available versions of intel mpi (2019U5) and mkl (2019.0.5.281), which are the same versions for compilers and mkl.
Since then I tested to run the exact same code on 8, 16, 24, 48, 512 and 1024 mpi ranks. Below 1024 mpi ranks there was no problem, but for 1024 an error related to DMDA appeared. I snip the first lines of the error stack here and the full error stack is attached.

[534]PETSC ERROR: #1 PetscGatherMessageLengths() line 120 in /scratch2/dlp/appli_local/SCR/OROGEN/petsc3.8.4_MPI/petsc-3.8.4/src/sys/utils/mpimesg.c
[534]PETSC ERROR: #2 VecScatterCreate_PtoS() line 2288 in /scratch2/dlp/appli_local/SCR/OROGEN/petsc3.8.4_MPI/petsc-3.8.4/src/vec/vec/utils/vpscat.c
[534]PETSC ERROR: #3 VecScatterCreate() line 1462 in /scratch2/dlp/appli_local/SCR/OROGEN/petsc3.8.4_MPI/petsc-3.8.4/src/vec/vec/utils/vscat.c
[534]PETSC ERROR: #4 DMSetUp_DA_3D() line 1042 in /scratch2/dlp/appli_local/SCR/OROGEN/petsc3.8.4_MPI/petsc-3.8.4/src/dm/impls/da/da3.c
[534]PETSC ERROR: #5 DMSetUp_DA() line 25 in /scratch2/dlp/appli_local/SCR/OROGEN/petsc3.8.4_MPI/petsc-3.8.4/src/dm/impls/da/dareg.c
[534]PETSC ERROR: #6 DMSetUp() line 720 in /scratch2/dlp/appli_local/SCR/OROGEN/petsc3.8.4_MPI/petsc-3.8.4/src/dm/interface/dm.c

Thank you for your time,
Sincerely,
Anthony Jourdon
-------------- next part -------------- An HTML attachment was scrubbed... URL:

From yjwu16 at gmail.com Mon Jan 27 06:01:45 2020
From: yjwu16 at gmail.com (Yingjie Wu)
Date: Mon, 27 Jan 2020 20:01:45 +0800
Subject: [petsc-users] Problem in using TS objects
Message-ID:

Dear PETSc developers
Hi,
Recently, I am using PETSc to write a one-dimensional hydrodynamics solver. At present, I use the SNES object to complete the development of the steady-state (excluding time term) program, and the result is very good. I want to continue with the dynamic solver, but have some problems. As shown in the attachment, I solve three conservation equations (partial differential equations), and use finite differences to discretize the one-dimensional mesh. The three main variables I solve for are velocity V, pressure P, and energy E. Therefore, I have the following question when writing transient programs using TS objects:

1. Three equations correspond to three variables. I used *TSSetIFunction* to set the residual equation. In theory, I can use *Vec u_t* (time derivative of the state vector) to set the time term. But the time term in *Eq1* is the time partial derivative of the density *\rho*. The density *\rho* is a function of energy E and pressure P. How to set this time term, which is not a main variable, but an intermediate variable (*\rho(E, P)*) which is composed of two main variables?

(In the equations, the direction of the mesh is z. g represents the acceleration of gravity. f_vis for resistance pressure drop, Q for heat source)

I need some advice; examples would be even better.

Thanks,
Yingjie
-------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: Equation.pdf Type: application/pdf Size: 25984 bytes Desc: not available URL:

From mfadams at lbl.gov Mon Jan 27 07:30:49 2020
From: mfadams at lbl.gov (Mark Adams)
Date: Mon, 27 Jan 2020 08:30:49 -0500
Subject: Re: [petsc-users] overset grids
In-Reply-To: References: Message-ID:

On Fri, Jan 24, 2020 at 9:25 AM Mark Cunningham < mark.cunningham at ariacoustics.com> wrote:
> As a novice PETSc user, can I ask for some advice? I've found some test examples that do similar sorts of things but I would appreciate any suggestions.
> I have a matrix generated from overset meshes with a structure like
> A = |A0  B01 B0n|
>     |B01 A1  B1n|
>     | :       : |
>     |B0n ...
An |
> where the Ai are finite difference stencils and the Bij represent interpolation between the grids.
> 1. If I create a DMDA composite object that defines each of the grid sizes through repeated calls to DMCreate1D (because we are omitting blanked points from the 3D mesh) and create the matrix through DMCreateMatrix, can I still fill the matrix through MatSetValues using the global index?

You might just want to use an AIJ matrix if you are sparse (omitting points)

> (This is how the code operates now.)
> 2. If I have to declare the matrix to be matnest and I have to create each of the subblocks individually, how do I cope with some of the Bij being all zero?

A sparse matrix (AIJ) can be empty.

> Must there be a MatSetValues call for each block?

With AIJ, yes. You need to keep track of indexing.

> 3. The principal reason at the moment for the change is to use diag(A0...An) as the preconditioner matrix and use ILU on the blocks. If I set the preconditioner to be bjacobi and then fetch the subksps, can I then set the subpcs to be ilu?

ILU is local, not MPI parallel, so you will get a diag block structure by default. So ILU is bjacobi with something on the blocks (eg, ILU), but you can specify bjacobi and then specify the PC on each block, not the KSP (I think). The KSP is global, but there may be a way to use a KSP PC if you really want to.

> 4. A further complication is we would like to have a boundary condition where the values of the boundary points are defined by an expansion in functions that satisfy the radiation condition. So, I would like to define an index set that identifies the boundary points and define a PCSHELL on A that for the pcsetup phase will do a QR factorization of the auxiliary matrix G, where we're solving Gy = QRy = c, and then on pcapply do the Ry = Q* c triangular solve that will enable me to update the boundary points (y a subset of x, the solution vector). How do I get the PCSHELL to work with the block jacobi strategy?

Block Jacobi has a sub PC for each block. You can set these manually or with the command line (eg, -pc_type bjacobi -sub_pc_type ilu); a sketch of the manual route follows this message. You can register your shell PC, but you probably don't want to jump through these hoops at this point, so use things like KSPGetPC, PCGetSubBlocks (something like that). And keep digging down until you get to the block PCs.

> Thanks for any suggestions.
>
> Mark Cunningham
-------------- next part -------------- An HTML attachment was scrubbed... URL:
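[A minimal sketch, in C, of the block-Jacobi-with-ILU setup described in point 3 above. It assumes a parallel Mat A and Vecs b and x already exist; the function name is illustrative and error handling is abbreviated. The same configuration is available from the command line as -pc_type bjacobi -sub_pc_type ilu.]

#include <petscksp.h>

/* Solve A x = b with block-Jacobi preconditioning and ILU on each
   local diagonal block.  Sketch only; not from the original thread. */
PetscErrorCode SolveWithBJacobiILU(Mat A, Vec b, Vec x)
{
  KSP            ksp, *subksp;
  PC             pc, subpc;
  PetscInt       i, nlocal, first;
  PetscErrorCode ierr;

  ierr = KSPCreate(PETSC_COMM_WORLD, &ksp);CHKERRQ(ierr);
  ierr = KSPSetOperators(ksp, A, A);CHKERRQ(ierr);
  ierr = KSPGetPC(ksp, &pc);CHKERRQ(ierr);
  ierr = PCSetType(pc, PCBJACOBI);CHKERRQ(ierr);
  ierr = KSPSetFromOptions(ksp);CHKERRQ(ierr);
  /* The sub-KSPs only exist after setup, so set up first ... */
  ierr = KSPSetUp(ksp);CHKERRQ(ierr);
  /* ... then fetch the local blocks and configure each one */
  ierr = PCBJacobiGetSubKSP(pc, &nlocal, &first, &subksp);CHKERRQ(ierr);
  for (i = 0; i < nlocal; i++) {
    ierr = KSPSetType(subksp[i], KSPPREONLY);CHKERRQ(ierr);
    ierr = KSPGetPC(subksp[i], &subpc);CHKERRQ(ierr);
    ierr = PCSetType(subpc, PCILU);CHKERRQ(ierr);
  }
  ierr = KSPSolve(ksp, b, x);CHKERRQ(ierr);
  ierr = KSPDestroy(&ksp);CHKERRQ(ierr);
  return 0;
}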
From jed at jedbrown.org Mon Jan 27 07:54:33 2020
From: jed at jedbrown.org (Jed Brown)
Date: Mon, 27 Jan 2020 06:54:33 -0700
Subject: Re: [petsc-users] Problem in using TS objects
In-Reply-To: References: Message-ID: <87wo9drome.fsf@jedbrown.org>

Yingjie Wu writes:
> Dear PETSc developers
> Hi,
> Recently, I am using PETSc to write a one-dimensional hydrodynamics solver. At present, I use the SNES object to complete the development of the steady-state (excluding time term) program, and the result is very good. I want to continue with the dynamic solver, but have some problems. As shown in the attachment, I solve three conservation equations (partial differential equations), and use finite differences to discretize the one-dimensional mesh. The three main variables I solve for are velocity V, pressure P, and energy E. Therefore, I have the following question when writing transient programs using TS objects:
>
> 1. Three equations correspond to three variables. I used *TSSetIFunction* to set the residual equation. In theory, I can use *Vec u_t* (time derivative of the state vector) to set the time term. But the time term in *Eq1* is the time partial derivative of the density *\rho*. The density *\rho* is a function of energy E and pressure P. How to set this time term, which is not a main variable, but an intermediate variable (*\rho(E, P)*) which is composed of two main variables?

The standard answer to this problem is to use the chain rule to expand the time derivative in terms of the state variables.

\rho_P P_t + \rho_E E_t + (\rho V)_z = 0.

The trouble with this is that you will generally lose discrete conservation unless the function \rho(P,E) is sufficiently simple, e.g., linear. Conservation error will converge to zero, but only at the order of time discretization, versus being exact (up to machine epsilon or algebraic solver tolerances).

In the conservation law community, the standard answer to this is to insist on using conservative variables for the state. This requires that equations of state be invertible, e.g.,

P(\rho, \rho e) = ...,

perhaps by way of an implicit solve (implicitly-defined constitutive models are used in a number of domains). Sometimes this has negative effects on the conditioning of the algebraic systems; that can generally be repaired by preconditioning.

A method that can be used with any time integrator and preconditioner is to expand the equations into a DAE, writing

(conservative) - eq_of_state(primitive) = 0
(conservative)_t - f(primitive) = 0

This increases the number of degrees of freedom in your system and may add solver complexity (though it can typically be done at low overhead).

As chance would have it, I'll soon be submitting a merge request that will let you set a callback to define "transient" variables (your conservative variables in this case) as a function of the state variables. Methods like BDF will then become conservative with a simple implementation (your IFunction will get time derivative of conserved variables as an input), though you'll still be expected to handle the chain rule correctly in IJacobian. This transformation won't work for explicit methods or Runge-Kutta methods.

> (In the equations, the direction of the mesh is z. g represents the acceleration of gravity. f_vis for resistance pressure drop, Q for heat source)
>
> I need some advice; examples would be even better.
>
> Thanks,
> Yingjie
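[A minimal sketch, in C, of the chain-rule IFunction Jed describes above for Eq1. It assumes a serial 1D grid, a real-scalar PETSc build, interleaved (V,P,E) unknowns per node, and hypothetical placeholder functions Rho/DRhoDP/DRhoDE standing in for the user's equation of state; only the mass-equation residual is filled in, and boundary rows are left trivial.]

#include <petscts.h>

typedef struct { PetscInt nz; PetscReal dz; } AppCtx;

/* Placeholders for the equation of state and its partial derivatives */
static PetscScalar Rho(PetscScalar P, PetscScalar E)    { return P + E; }
static PetscScalar DRhoDP(PetscScalar P, PetscScalar E) { return 1.0; }
static PetscScalar DRhoDE(PetscScalar P, PetscScalar E) { return 1.0; }

/* Residual F(t,U,U_t) for Eq1: rho_P P_t + rho_E E_t + d(rho V)/dz = 0 */
static PetscErrorCode IFunction(TS ts, PetscReal t, Vec U, Vec Udot, Vec F, void *ctx)
{
  AppCtx            *app = (AppCtx*)ctx;
  const PetscScalar *u, *udot;
  PetscScalar       *f;
  PetscInt          i, last = app->nz - 1;
  PetscErrorCode    ierr;

  PetscFunctionBeginUser;
  ierr = VecGetArrayRead(U, &u);CHKERRQ(ierr);
  ierr = VecGetArrayRead(Udot, &udot);CHKERRQ(ierr);
  ierr = VecGetArray(F, &f);CHKERRQ(ierr);
  for (i = 1; i < last; i++) {
    PetscScalar P  = u[3*i+1],    E  = u[3*i+2];
    PetscScalar Pt = udot[3*i+1], Et = udot[3*i+2];
    PetscScalar Vp = u[3*(i+1)], Pp = u[3*(i+1)+1], Ep = u[3*(i+1)+2];
    PetscScalar Vm = u[3*(i-1)], Pm = u[3*(i-1)+1], Em = u[3*(i-1)+2];
    /* chain rule for the density time term + central difference in z */
    f[3*i] = DRhoDP(P,E)*Pt + DRhoDE(P,E)*Et
           + (Rho(Pp,Ep)*Vp - Rho(Pm,Em)*Vm)/(2.0*app->dz);
    f[3*i+1] = 0.0; /* momentum residual (Eq2) goes here */
    f[3*i+2] = 0.0; /* energy residual (Eq3) goes here */
  }
  /* boundary residuals omitted in this sketch */
  f[0] = f[1] = f[2] = 0.0;
  f[3*last] = f[3*last+1] = f[3*last+2] = 0.0;
  ierr = VecRestoreArrayRead(U, &u);CHKERRQ(ierr);
  ierr = VecRestoreArrayRead(Udot, &udot);CHKERRQ(ierr);
  ierr = VecRestoreArray(F, &f);CHKERRQ(ierr);
  PetscFunctionReturn(0);
}
/* Registered with: TSSetIFunction(ts, NULL, IFunction, &app); */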
This showed that Alltoall is being used for the communication, which I found surprising. With PETSc 3.11 the Alltoall calls are replaced by MPI_Start(all) and MPI_Wait(all), which sounds more reasonable to me. > This has never been a default code path, so I suspect > something in your environment or code making this happen. I attached some log files for some PETSc 3.7 runs on 1,19 and 115 nodes (24 cores each) which suggest polynomial scaling (vs logarithmic scaling). Could it be some installation setting of the PETSc version? (I use a preinstalled PETSc) > Can you please send representative log files which characterize the > lack of scaling (include the full log_view)? "Stage 1: activation" is the stage of interest, as it wraps the KSPSolve. The number of unkowns per rank is very small in the measurement, so most of the time should be communication. However, I just noticed, that the stage also contains an additional setup step which might be the reason why the MatMul takes longer than the KSPSolve. I can repeat the measurements if necessary. I should add, that I put a MPI_Barrier before the KSPSolve, to avoid any previous work imbalance to effect the KSPSolve call. Best regards, Felix -------------- next part -------------- A non-text attachment was scrubbed... Name: _petsc307-barrier.log Type: text/x-log Size: 42693 bytes Desc: not available URL: From david.knezevic at akselos.com Mon Jan 27 10:57:35 2020 From: david.knezevic at akselos.com (David Knezevic) Date: Mon, 27 Jan 2020 11:57:35 -0500 Subject: [petsc-users] Does MatCopy require Mats to have the same communicator? Message-ID: I have a case where I'd like to copy a Mat defined on COMM_WORLD to a new Mat defined on some sub-communicator. Does MatCopy support this, or would I have to write a custom copy operation? I see here that MatConvert requires identical communicators, but I don't see any mention of this for MatCopy, so I wanted to check. Thanks, David -------------- next part -------------- An HTML attachment was scrubbed... URL: From mfadams at lbl.gov Mon Jan 27 11:49:40 2020 From: mfadams at lbl.gov (Mark Adams) Date: Mon, 27 Jan 2020 12:49:40 -0500 Subject: [petsc-users] Does MatCopy require Mats to have the same communicator? In-Reply-To: References: Message-ID: Maybe I'm not understanding you -- matCopy does not have your sub-communicator so how would it create a Mat with it ... You probably want to use MatGetSubmatrix. This is general, but the output will have the same communicator, but the idle processors will be empty, if that is what you specify. Now you just need to replace the communicator with your sub communicator. Not sure how to do this but now you have your data in the right place at least. Oh, there is a method to create a Mat with a sub communicator with non-empty rows. Now you use this and use the communicator in the new matrix as your sub-communicator. https://www.mcs.anl.gov/petsc/petsc-current/docs/manualpages/Mat/MatMPIAdjCreateNonemptySubcommMat.html On Mon, Jan 27, 2020 at 11:59 AM David Knezevic wrote: > I have a case where I'd like to copy a Mat defined on COMM_WORLD to a new > Mat defined on some sub-communicator. Does MatCopy support this, or would I > have to write a custom copy operation? > > I see here > > that MatConvert requires identical communicators, but I don't see any > mention of this for MatCopy, so I wanted to check. > > Thanks, > David > -------------- next part -------------- An HTML attachment was scrubbed... 
URL:

From david.knezevic at akselos.com Mon Jan 27 12:03:03 2020
From: david.knezevic at akselos.com (David Knezevic)
Date: Mon, 27 Jan 2020 13:03:03 -0500
Subject: Re: [petsc-users] Does MatCopy require Mats to have the same communicator?
In-Reply-To: References: Message-ID:

> Maybe I'm not understanding you -- matCopy does not have your
> sub-communicator so how would it create a Mat with it ...

MatCopy has the two Mats, so I thought I could set up the copy to be based on the sub-communicator and MatCopy might have been set up to get the necessary communicator from the copy Mat... that was just a guess on my part and presumably not how it actually works...

> You probably want to use MatGetSubmatrix. This is general, but
> the output will have the same communicator, with the idle processors
> empty, if that is what you specify.
> Now you just need to replace the communicator with your sub communicator.
> Not sure how to do this but now you have your data in the right place at
> least.
>
> Oh, there is a method to create a Mat with a sub communicator with
> non-empty rows. Now you use this and use the communicator in the new matrix
> as your sub-communicator.
>
> https://www.mcs.anl.gov/petsc/petsc-current/docs/manualpages/Mat/MatMPIAdjCreateNonemptySubcommMat.html

OK, thanks, that's helpful. Though Randall Mackie also just sent me this link: https://www.mcs.anl.gov/petsc/petsc-current/docs/manualpages/Mat/MatCreateRedundantMatrix.html

It seems like MatCreateRedundantMatrix is what I want to use, so I'll try that out.

Best,
David

> On Mon, Jan 27, 2020 at 11:59 AM David Knezevic <
> david.knezevic at akselos.com> wrote:
>
>> I have a case where I'd like to copy a Mat defined on COMM_WORLD to a new
>> Mat defined on some sub-communicator. Does MatCopy support this, or would I
>> have to write a custom copy operation?
>>
>> I see here
>>
>> that MatConvert requires identical communicators, but I don't see any
>> mention of this for MatCopy, so I wanted to check.
>>
>> Thanks,
>> David
>>
-------------- next part -------------- An HTML attachment was scrubbed... URL:
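[A minimal sketch, in C, of the MatCreateRedundantMatrix route discussed above; the wrapper name is illustrative and error handling is abbreviated. Passing MPI_COMM_NULL lets PETSc split the matrix's communicator into nsubcomm sub-communicators itself; on repeated calls with the same layout, MAT_REUSE_MATRIX can be passed instead of MAT_INITIAL_MATRIX.]

#include <petscmat.h>

/* Make nsubcomm redundant copies of a COMM_WORLD matrix A, each living
   on a sub-communicator created by PETSc.  Sketch only. */
PetscErrorCode CopyToSubcomms(Mat A, PetscInt nsubcomm, Mat *Ared)
{
  PetscErrorCode ierr;

  ierr = MatCreateRedundantMatrix(A, nsubcomm, MPI_COMM_NULL, MAT_INITIAL_MATRIX, Ared);CHKERRQ(ierr);
  /* (*Ared)'s communicator is the sub-communicator; query it with
     PetscObjectComm((PetscObject)*Ared) if needed. */
  return 0;
}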
From jczhang at mcs.anl.gov Mon Jan 27 13:50:27 2020
From: jczhang at mcs.anl.gov (Zhang, Junchao)
Date: Mon, 27 Jan 2020 19:50:27 +0000
Subject: Re: [petsc-users] Choosing VecScatter Method in Matrix-Vector Product
In-Reply-To: References: <6f510cda-aa82-af7b-fd12-34297edba2b7@stud.uni-stuttgart.de> <877e1j2y2l.fsf@jedbrown.org>
Message-ID:

--Junchao Zhang

On Mon, Jan 27, 2020 at 10:09 AM Felix Huber > wrote:
Thank you all for your reply!

> Are you using a KSP/PC configuration which should weak scale?
Yes, the system is solved with KSPSolve. There is no preconditioner yet, but I fixed the number of CG iterations to 3 to ensure an apples to apples comparison during the scaling measurements.

>> VecScatter has been greatly refactored (and the default implementation
>> is entirely new) since 3.7.
I now tried to use PETSc 3.11 and the code runs fine. The communication seems to show a better weak scaling behavior now. I'll see if we can just upgrade to 3.11.

> Anyway, I'm curious about your
> configuration and how you determine that MPI_Alltoallv/MPI_Alltoallw is
> being used.
I used the Extrae profiler which intercepts all MPI calls and logs them into a file. This showed that Alltoall is being used for the communication, which I found surprising. With PETSc 3.11 the Alltoall calls are replaced by MPI_Start(all) and MPI_Wait(all), which sounds more reasonable to me.

> This has never been a default code path, so I suspect
> something in your environment or code making this happen.
I attached some log files for some PETSc 3.7 runs on 1, 19 and 115 nodes (24 cores each) which suggest polynomial scaling (vs logarithmic scaling). Could it be some installation setting of the PETSc version? (I use a preinstalled PETSc)

I checked petsc 3.7.6 and did not think the vecscatter type could be set at configure time. Anyway, upgrading petsc is preferred. If that is not possible, we can work together to see what happened.

> Can you please send representative log files which characterize the
> lack of scaling (include the full log_view)?
"Stage 1: activation" is the stage of interest, as it wraps the KSPSolve. The number of unknowns per rank is very small in the measurement, so most of the time should be communication. However, I just noticed that the stage also contains an additional setup step which might be the reason why the MatMul takes longer than the KSPSolve. I can repeat the measurements if necessary. I should add that I put a MPI_Barrier before the KSPSolve, to keep any previous work imbalance from affecting the KSPSolve call.

You can use -log_sync, which adds an MPI_Barrier at the beginning of each event. Compare log_view files with and without -log_sync. If an event has much higher %T without -log_sync than with -log_sync, it means the code is not balanced. Alternatively, you can look at the Ratio column in the log file without -log_sync.

Best regards,
Felix
-------------- next part -------------- An HTML attachment was scrubbed... URL:

From yjwu16 at gmail.com Mon Jan 27 20:38:44 2020
From: yjwu16 at gmail.com (Yingjie Wu)
Date: Tue, 28 Jan 2020 10:38:44 +0800
Subject: Re: [petsc-users] Problem in using TS objects
In-Reply-To: <87wo9drome.fsf@jedbrown.org>
References: <87wo9drome.fsf@jedbrown.org>
Message-ID:

Hi Jed,
Thank you very much for your detailed answer. It helps me a lot. I'm going to use the "chain rule" method to solve my problem first, and may continue to try different ways later. For your reply, I have the following questions:

1. The method of transforming equations into a DAE, for my problem, does it mean taking \rho as a variable and adding its constitutive equation as a residual equation?
f(\rho) = \rho - subroutine_compute_rho(P,E)

2. I hope to use finite differences to construct the Jacobian matrix first (this means that I may not provide the IJacobian function). Is the previous switch "-snes_fd" still available for transient calculation?

3. For the new merge request you have mentioned, where should I look to follow its new developments?

Thanks again for your help.
Yingjie

Jed Brown wrote on Mon, Jan 27, 2020 at 9:54 PM:
> Yingjie Wu writes:
>
> > Dear PETSc developers
> > Hi,
> > Recently, I am using PETSc to write a one-dimensional hydrodynamics solver. At present, I use the SNES object to complete the development of the steady-state (excluding time term) program, and the result is very good. I want to continue with the dynamic solver, but have some problems. As shown in the attachment, I solve three conservation equations (partial differential equations), and use finite differences to discretize the one-dimensional mesh. The three main variables I solve for are velocity V, pressure P, and energy E. Therefore, I have the following question when writing transient programs using TS objects:
> >
> > 1. Three equations correspond to three variables. I used *TSSetIFunction*
> > to set the residual equation.
> > In theory, I can use *Vec u_t* (time derivative of the state vector) to set the time term. But the time term in *Eq1* is the time partial derivative of the density *\rho*. The density *\rho* is a function of energy E and pressure P. How to set this time term, which is not a main variable, but an intermediate variable (*\rho(E, P)*) which is composed of two main variables?
>
> The standard answer to this problem is to use the chain rule to expand
> the time derivative in terms of the state variables.
>
> \rho_P P_t + \rho_E E_t + (\rho V)_z = 0.
>
> The trouble with this is that you will generally lose discrete
> conservation unless the function \rho(P,E) is sufficiently simple, e.g.,
> linear. Conservation error will converge to zero, but only at the order
> of time discretization, versus being exact (up to machine epsilon or
> algebraic solver tolerances).
>
> In the conservation law community, the standard answer to this is to
> insist on using conservative variables for the state. This requires
> that equations of state be invertible, e.g.,
>
> P(\rho, \rho e) = ...,
>
> perhaps by way of an implicit solve (implicitly-defined constitutive
> models are used in a number of domains). Sometimes this has negative
> effects on the conditioning of the algebraic systems; that can generally
> be repaired by preconditioning.
>
> A method that can be used with any time integrator and preconditioner is
> to expand the equations into a DAE, writing
>
> (conservative) - eq_of_state(primitive) = 0
> (conservative)_t - f(primitive) = 0
>
> This increases the number of degrees of freedom in your system and may
> add solver complexity (though it can typically be done at low overhead).
>
> As chance would have it, I'll soon be submitting a merge request that
> will let you set a callback to define "transient" variables (your
> conservative variables in this case) as a function of the state
> variables. Methods like BDF will then become conservative with a simple
> implementation (your IFunction will get time derivative of conserved
> variables as an input), though you'll still be expected to handle the
> chain rule correctly in IJacobian. This transformation won't work for
> explicit methods or Runge-Kutta methods.
>
> > (In the equations, the direction of the mesh is z. g represents the
> > acceleration of gravity. f_vis for resistance pressure drop, Q for heat
> > source)
> >
> > I need some advice; examples would be even better.
> >
> > Thanks,
> > Yingjie
-------------- next part -------------- An HTML attachment was scrubbed... URL:

From gautam.bisht at pnnl.gov Mon Jan 27 20:57:42 2020
From: gautam.bisht at pnnl.gov (Bisht, Gautam)
Date: Tue, 28 Jan 2020 02:57:42 +0000
Subject: Re: [petsc-users] DMPlex: Mapping cells before and after partitioning
In-Reply-To: <3F79926B-D567-4592-8E4E-46D21628D2DF@pnnl.gov>
References: <875zhkmf0z.fsf@jedbrown.org> <8736come4e.fsf@jedbrown.org> <9AB001AF-8857-446A-AE69-E8D6A25CB8FA@pnnl.gov> <7C23ABBA-2F76-4EAB-9834-9391AD77E18B@pnnl.gov> <8A7925AE-08F5-4F81-AAA5-B2FDC3D833B0@pnnl.gov> <3F79926B-D567-4592-8E4E-46D21628D2DF@pnnl.gov>
Message-ID: <042A2165-BACC-41C0-8603-C4319C1A5176@pnnl.gov>

Hi Matt,

Could you issue an MR to get changes from knepley/fix-dm-g2n-serial into PETSc master? I'm having some trouble during PETSc installation with exodusii using your branch, but the master seems fine.

Thanks,
-Gautam.

On Jan 18, 2020, at 11:19 AM, 'Bisht, Gautam' via tdycores-dev > wrote:

Hi Matt,

Thanks for the fixes to the example.
-Gautam

On Jan 15, 2020, at 7:05 PM, Matthew Knepley > wrote:

On Wed, Jan 15, 2020 at 4:08 PM Matthew Knepley > wrote:
On Wed, Jan 15, 2020 at 3:47 PM 'Bisht, Gautam' via tdycores-dev > wrote:
Hi Matt,

I'm running into an error while using DMPlexNaturalToGlobalBegin/End and am hoping you have some insights in what I'm doing incorrectly. I create a 2x2x2 grid and distribute it across processors (N=1,2). I create a natural and a global vector; and then call DMPlexNaturalToGlobalBegin/End. Here are the two issues:

- When N = 1, PETSc complains about DMSetUseNatural() not being called before DMPlexDistribute(), which is certainly not the case.
- For N=1 and 2, the global vector doesn't have valid entries.

I'm not sure how to create the natural vector and have used DMCreateGlobalVector() to create the natural vector, which could be the issue.

Attached is the sample code to reproduce the error and below is the screen output.

Cool. I will run it and figure out the problem.

1) There was bad error reporting there. I am putting the fix in a new branch. It did not check for being on one process. If you run with knepley/fix-dm-g2n-serial, it will work correctly in serial.

2) The G2N needs a serial data layout to work, so you have to make a Section _before_ distributing. I need to put that in the docs. I have fixed your example to do this and attached it. I run it with master

*:~/Downloads/tmp/Gautam$ /PETSc3/petsc/bin/mpiexec -n 1 ./ex_test -dm_plex_box_faces 2,2,2 -dm_view
DM Object: 1 MPI processes type: plex DM_0x84000000_0 in 3 dimensions: 0-cells: 27 1-cells: 54 2-cells: 36 3-cells: 8 Labels: marker: 1 strata with value/size (1 (72)) Face Sets: 6 strata with value/size (6 (4), 5 (4), 3 (4), 4 (4), 1 (4), 2 (4)) depth: 4 strata with value/size (0 (27), 1 (54), 2 (36), 3 (8)) Field p: adjacency FVM++
Natural vector: Vec Object: 1 MPI processes type: seq 0. 1. 2. 3. 4. 5. 6. 7.
Global vector: Vec Object: 1 MPI processes type: seq 0. 1. 2. 3. 4. 5. 6. 7.
Information about the mesh: [0] cell = 00; (0.250000, 0.250000, 0.250000); is_local = 1 [0] cell = 01; (0.750000, 0.250000, 0.250000); is_local = 1 [0] cell = 02; (0.250000, 0.750000, 0.250000); is_local = 1 [0] cell = 03; (0.750000, 0.750000, 0.250000); is_local = 1 [0] cell = 04; (0.250000, 0.250000, 0.750000); is_local = 1 [0] cell = 05; (0.750000, 0.250000, 0.750000); is_local = 1 [0] cell = 06; (0.250000, 0.750000, 0.750000); is_local = 1 [0] cell = 07; (0.750000, 0.750000, 0.750000); is_local = 1

master *:~/Downloads/tmp/Gautam$ /PETSc3/petsc/bin/mpiexec -n 2 ./ex_test -dm_plex_box_faces 2,2,2 -dm_view
DM Object: Parallel Mesh 2 MPI processes type: plex Parallel Mesh in 3 dimensions: 0-cells: 27 27 1-cells: 54 54 2-cells: 36 36 3-cells: 8 8 Labels: depth: 4 strata with value/size (0 (27), 1 (54), 2 (36), 3 (8)) marker: 1 strata with value/size (1 (72)) Face Sets: 6 strata with value/size (1 (4), 2 (4), 3 (4), 4 (4), 5 (4), 6 (4)) Field p: adjacency FVM++
Natural vector: Vec Object: 2 MPI processes type: mpi Process [0] 0. 1. 2. 3. Process [1] 4. 5. 6. 7.
Global vector: Vec Object: 2 MPI processes type: mpi Process [0] 2. 3. 6. 7. Process [1] 0. 1. 4. 5.
Information about the mesh: [0] cell = 00; (0.250000, 0.750000, 0.250000); is_local = 1 [0] cell = 01; (0.750000, 0.750000, 0.250000); is_local = 1 [0] cell = 02; (0.250000, 0.750000, 0.750000); is_local = 1 [0] cell = 03; (0.750000, 0.750000, 0.750000); is_local = 1 [0] cell = 04; (0.250000, 0.250000, 0.250000); is_local = 0 [0] cell = 05; (0.750000, 0.250000, 0.250000); is_local = 0 [0] cell = 06; (0.250000, 0.250000, 0.750000); is_local = 0 [0] cell = 07; (0.750000, 0.250000, 0.750000); is_local = 0 [1] cell = 00; (0.250000, 0.250000, 0.250000); is_local = 1 [1] cell = 01; (0.750000, 0.250000, 0.250000); is_local = 1 [1] cell = 02; (0.250000, 0.250000, 0.750000); is_local = 1 [1] cell = 03; (0.750000, 0.250000, 0.750000); is_local = 1 [1] cell = 04; (0.250000, 0.750000, 0.250000); is_local = 0 [1] cell = 05; (0.750000, 0.750000, 0.250000); is_local = 0 [1] cell = 06; (0.250000, 0.750000, 0.750000); is_local = 0 [1] cell = 07; (0.750000, 0.750000, 0.750000); is_local = 0 Thanks, Matt Thanks, Matt >make ex_test ++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++ >$PETSC_DIR/$PETSC_ARCH/bin/mpiexec -np 1 ./ex_test Natural vector: Vec Object: 1 MPI processes type: seq 0. 1. 2. 3. 4. 5. 6. 7. [0]PETSC ERROR: --------------------- Error Message -------------------------------------------------------------- [0]PETSC ERROR: Object is in wrong state [0]PETSC ERROR: DM global to natural SF was not created. You must call DMSetUseNatural() before DMPlexDistribute(). [0]PETSC ERROR: See https://www.mcs.anl.gov/petsc/documentation/faq.html for trouble shooting. [0]PETSC ERROR: Petsc Development GIT revision: v3.12.2-537-g5f77d1e0e5 GIT Date: 2019-12-21 14:33:27 -0600 [0]PETSC ERROR: ./ex_test on a darwin-gcc8 named WE37411 by bish218 Wed Jan 15 12:34:03 2020 [0]PETSC ERROR: Configure options --with-blaslapack-lib=/System/Library/Frameworks/Accelerate.framework/Versions/Current/Accelerate --download-parmetis=yes --download-metis=yes --with-hdf5-dir=/opt/local --download-zlib --download-exodusii=yes --download-hdf5=yes --download-netcdf=yes --download-pnetcdf=yes --download-hypre=yes --download-mpich=yes --download-mumps=yes --download-scalapack=yes --with-cc=/opt/local/bin/gcc-mp-8 --with-cxx=/opt/local/bin/g++-mp-8 --with-fc=/opt/local/bin/gfortran-mp-8 --download-sowing=1 PETSC_ARCH=darwin-gcc8 [0]PETSC ERROR: #1 DMPlexNaturalToGlobalBegin() line 289 in /Users/bish218/projects/petsc/petsc_v3.12.2/src/dm/impls/plex/plexnatural.c Global vector: Vec Object: 1 MPI processes type: seq 0. 0. 0. 0. 0. 0. 0. 0. Information about the mesh: Rank = 0 local_id = 00; (0.250000, 0.250000, 0.250000); is_local = 1 local_id = 01; (0.750000, 0.250000, 0.250000); is_local = 1 local_id = 02; (0.250000, 0.750000, 0.250000); is_local = 1 local_id = 03; (0.750000, 0.750000, 0.250000); is_local = 1 local_id = 04; (0.250000, 0.250000, 0.750000); is_local = 1 local_id = 05; (0.750000, 0.250000, 0.750000); is_local = 1 local_id = 06; (0.250000, 0.750000, 0.750000); is_local = 1 local_id = 07; (0.750000, 0.750000, 0.750000); is_local = 1 ++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++ >$PETSC_DIR/$PETSC_ARCH/bin/mpiexec -np 2 ./ex_test Natural vector: Vec Object: 2 MPI processes type: mpi Process [0] 0. 1. 2. 3. Process [1] 4. 5. 6. 7. Global vector: Vec Object: 2 MPI processes type: mpi Process [0] 0. 0. 0. 0. Process [1] 0. 0. 0. 0. 
Information about the mesh:

Rank = 0
local_id = 00; (0.250000, 0.750000, 0.250000); is_local = 1
local_id = 01; (0.750000, 0.750000, 0.250000); is_local = 1
local_id = 02; (0.250000, 0.750000, 0.750000); is_local = 1
local_id = 03; (0.750000, 0.750000, 0.750000); is_local = 1
local_id = 04; (0.250000, 0.250000, 0.250000); is_local = 0
local_id = 05; (0.750000, 0.250000, 0.250000); is_local = 0
local_id = 06; (0.250000, 0.250000, 0.750000); is_local = 0
local_id = 07; (0.750000, 0.250000, 0.750000); is_local = 0

Rank = 1
local_id = 00; (0.250000, 0.250000, 0.250000); is_local = 1
local_id = 01; (0.750000, 0.250000, 0.250000); is_local = 1
local_id = 02; (0.250000, 0.250000, 0.750000); is_local = 1
local_id = 03; (0.750000, 0.250000, 0.750000); is_local = 1
local_id = 04; (0.250000, 0.750000, 0.250000); is_local = 0
local_id = 05; (0.750000, 0.750000, 0.250000); is_local = 0
local_id = 06; (0.250000, 0.750000, 0.750000); is_local = 0
local_id = 07; (0.750000, 0.750000, 0.750000); is_local = 0

++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++

-Gautam

On Jan 9, 2020, at 4:57 PM, 'Bisht, Gautam' via tdycores-dev > wrote:

On Jan 9, 2020, at 4:25 PM, Matthew Knepley > wrote:

On Thu, Jan 9, 2020 at 1:35 PM 'Bisht, Gautam' via tdycores-dev > wrote:

> On Jan 9, 2020, at 2:58 PM, Jed Brown > wrote:
>
> "'Bisht, Gautam' via tdycores-dev" > writes:
>
>>> Do you need to rely on the element number, or would coordinates (of a
>>> centroid?) be sufficient for your purposes?
>>
>> I do need to rely on the element number. In my case, I have a mapping
>> file that remaps data from one grid onto another grid. Though I'm
>> currently creating a hexahedron mesh, in the future I would be reading in
>> an unstructured grid from a file for which I cannot rely on coordinates.
>
> How does the mapping file work and how is it generated?

In CESM/E3SM, the mapping file is used to map fluxes or states between the grids of two components (e.g. land & atmosphere). The mapping method can be conservative, nearest neighbor, bilinear, etc. While CESM/E3SM uses ESMF_RegridWeightGen to generate the mapping file, I'm using my own MATLAB script to create the mapping file.

I'm surprised that this is not an issue for other codes that are using DMPlex. E.g., in PFLOTRAN, when a user creates a custom unstructured grid, they can specify a material property for each grid cell. So, there should be a way to create a VecScatter that will scatter material properties read in the "application" order (i.e. the order before calling DMPlexDistribute()) to the ghosted order (i.e. the order after calling DMPlexDistribute()).

We did build something specific for this because some people wanted it. I wish I could purge this from all simulations. It's definitely destructive, but this is the way the world currently is.

You want this:

https://www.mcs.anl.gov/petsc/petsc-current/docs/manualpages/DMPLEX/DMPlexNaturalToGlobalBegin.html

Perfect.

Thanks.
-Gautam

Thanks,

Matt

> We can locate points and create interpolation with unstructured grids.
>
> --
> You received this message because you are subscribed to the Google Groups "tdycores-dev" group.
> To unsubscribe from this group and stop receiving emails from it, send an email to tdycores-dev+unsubscribe at googlegroups.com.
> To view this discussion on the web visit https://groups.google.com/d/msgid/tdycores-dev/8736come4e.fsf%40jedbrown.org.

--
You received this message because you are subscribed to the Google Groups "tdycores-dev" group.
To unsubscribe from this group and stop receiving emails from it, send an email to tdycores-dev+unsubscribe at googlegroups.com.
To view this discussion on the web visit https://groups.google.com/d/msgid/tdycores-dev/9AB001AF-8857-446A-AE69-E8D6A25CB8FA%40pnnl.gov.

--
What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead.
-- Norbert Wiener

https://www.cse.buffalo.edu/~knepley/

--
You received this message because you are subscribed to the Google Groups "tdycores-dev" group.
To unsubscribe from this group and stop receiving emails from it, send an email to tdycores-dev+unsubscribe at googlegroups.com.
To view this discussion on the web visit https://groups.google.com/d/msgid/tdycores-dev/CAMYG4Gm%3DSY%3DyDiYOdBm1j_KZO5NYhu80ZhbFTV23O%2Bv-zVvFnA%40mail.gmail.com.

--
You received this message because you are subscribed to the Google Groups "tdycores-dev" group.
To unsubscribe from this group and stop receiving emails from it, send an email to tdycores-dev+unsubscribe at googlegroups.com.
To view this discussion on the web visit https://groups.google.com/d/msgid/tdycores-dev/7C23ABBA-2F76-4EAB-9834-9391AD77E18B%40pnnl.gov.

--
You received this message because you are subscribed to the Google Groups "tdycores-dev" group.
To unsubscribe from this group and stop receiving emails from it, send an email to tdycores-dev+unsubscribe at googlegroups.com.
To view this discussion on the web visit https://groups.google.com/d/msgid/tdycores-dev/8A7925AE-08F5-4F81-AAA5-B2FDC3D833B0%40pnnl.gov.

--
What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead.
-- Norbert Wiener

https://www.cse.buffalo.edu/~knepley/

--
What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead.
-- Norbert Wiener

https://www.cse.buffalo.edu/~knepley/

--
You received this message because you are subscribed to the Google Groups "tdycores-dev" group.
To unsubscribe from this group and stop receiving emails from it, send an email to tdycores-dev+unsubscribe at googlegroups.com.
To view this discussion on the web visit https://groups.google.com/d/msgid/tdycores-dev/CAMYG4Gn%3DxsVjjN8sX6km8ub%3Djkk8vxiU2DZVEi-4Kpbi_rM-0w%40mail.gmail.com.

--
You received this message because you are subscribed to the Google Groups "tdycores-dev" group.
To unsubscribe from this group and stop receiving emails from it, send an email to tdycores-dev+unsubscribe at googlegroups.com.
To view this discussion on the web visit https://groups.google.com/d/msgid/tdycores-dev/3F79926B-D567-4592-8E4E-46D21628D2DF%40pnnl.gov.
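The sequencing Matt describes in the thread above, DMSetUseNatural() and a PetscSection set up before DMPlexDistribute(), can be collected into a short standalone sketch. This is illustrative only (it is not the ex_test attachment from the thread): it assumes one scalar unknown per cell, and it uses the petsc-3.12-era signatures of DMPlexCreateBoxMesh() and DMSetLocalSection() (the latter was called DMSetSection() in slightly older releases).

#include <petscdmplex.h>

int main(int argc, char **argv)
{
  DM             dm, dmDist = NULL;
  PetscSection   s;
  Vec            natVec, globVec;
  PetscInt       faces[3] = {2, 2, 2};
  PetscInt       c, cStart, cEnd;
  PetscErrorCode ierr;

  ierr = PetscInitialize(&argc, &argv, NULL, NULL);if (ierr) return ierr;
  ierr = DMPlexCreateBoxMesh(PETSC_COMM_WORLD, 3, PETSC_FALSE, faces, NULL, NULL, NULL, PETSC_TRUE, &dm);CHKERRQ(ierr);
  ierr = DMSetUseNatural(dm, PETSC_TRUE);CHKERRQ(ierr);  /* must come before DMPlexDistribute() */
  /* One dof per cell, laid out while the mesh is still in natural order */
  ierr = DMPlexGetHeightStratum(dm, 0, &cStart, &cEnd);CHKERRQ(ierr);
  ierr = PetscSectionCreate(PETSC_COMM_WORLD, &s);CHKERRQ(ierr);
  ierr = PetscSectionSetChart(s, cStart, cEnd);CHKERRQ(ierr);
  for (c = cStart; c < cEnd; ++c) {ierr = PetscSectionSetDof(s, c, 1);CHKERRQ(ierr);}
  ierr = PetscSectionSetUp(s);CHKERRQ(ierr);
  ierr = DMSetLocalSection(dm, s);CHKERRQ(ierr);
  ierr = PetscSectionDestroy(&s);CHKERRQ(ierr);
  ierr = DMPlexDistribute(dm, 0, NULL, &dmDist);CHKERRQ(ierr);
  if (dmDist) {ierr = DMDestroy(&dm);CHKERRQ(ierr); dm = dmDist;}
  ierr = DMCreateGlobalVector(dm, &globVec);CHKERRQ(ierr);
  ierr = VecDuplicate(globVec, &natVec);CHKERRQ(ierr);
  /* ... fill natVec in the pre-distribution (natural) cell order ... */
  ierr = DMPlexNaturalToGlobalBegin(dm, natVec, globVec);CHKERRQ(ierr);
  ierr = DMPlexNaturalToGlobalEnd(dm, natVec, globVec);CHKERRQ(ierr);
  ierr = VecDestroy(&natVec);CHKERRQ(ierr);
  ierr = VecDestroy(&globVec);CHKERRQ(ierr);
  ierr = DMDestroy(&dm);CHKERRQ(ierr);
  ierr = PetscFinalize();
  return ierr;
}

The reverse map, DMPlexGlobalToNaturalBegin/End(), works the same way once the natural SF has been created by the distribute call.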
From jed at jedbrown.org Mon Jan 27 22:57:03 2020
From: jed at jedbrown.org (Jed Brown)
Date: Mon, 27 Jan 2020 21:57:03 -0700
Subject: [petsc-users] Problem in using TS objects
In-Reply-To: References: <87wo9drome.fsf@jedbrown.org>
Message-ID: <87pnf4qiu8.fsf@jedbrown.org>

Yingjie Wu writes:

> Hi Jed,
> Thank you very much for your detailed answer. It helps me a lot.
> I'm going to use the "chain rule" method to solve my problem first, and
> may continue to try different ways later.
> For your reply, I have the following questions:
> 1. For the method of transforming the equations into a DAE: for my
> problem, does it mean taking \rho as a variable and adding its
> constitutive equation as a residual equation?
> f(\rho) = \rho - subroutine_compute_rho(P,E)

Yes.

> 2. I hope to use finite differences to construct the Jacobian matrix
> first (this means that I may not provide the IJacobian function). Is the
> previous switch "-snes_fd" still available for transient calculations?

Yes.

> 3. For the new merge request you have mentioned, where should I look to
> follow its progress?

You can enable notifications for this issue (see right sidebar) to get
updates on my forthcoming MR.

https://gitlab.com/petsc/petsc/issues/547

From knepley at gmail.com Tue Jan 28 05:12:23 2020
From: knepley at gmail.com (Matthew Knepley)
Date: Tue, 28 Jan 2020 06:12:23 -0500
Subject: [petsc-users] DMPlex: Mapping cells before and after partitioning
In-Reply-To: <042A2165-BACC-41C0-8603-C4319C1A5176@pnnl.gov>
References: <875zhkmf0z.fsf@jedbrown.org> <8736come4e.fsf@jedbrown.org> <9AB001AF-8857-446A-AE69-E8D6A25CB8FA@pnnl.gov> <7C23ABBA-2F76-4EAB-9834-9391AD77E18B@pnnl.gov> <8A7925AE-08F5-4F81-AAA5-B2FDC3D833B0@pnnl.gov> <3F79926B-D567-4592-8E4E-46D21628D2DF@pnnl.gov> <042A2165-BACC-41C0-8603-C4319C1A5176@pnnl.gov>
Message-ID:

On Mon, Jan 27, 2020 at 9:57 PM 'Bisht, Gautam' via tdycores-dev <
tdycores-dev at googlegroups.com> wrote:

> Hi Matt,
>
> Could you issue a MR to get the changes from knepley/fix-dm-g2n-serial into
> PETSc master? I'm having some trouble during PETSc installation
> with exodusii using your branch, but master seems fine.
>

Yep, just submitted the MR. Should be in soon.

  Thanks,

    Matt

> Thanks,
> -Gautam.
>
> On Jan 18, 2020, at 11:19 AM, 'Bisht, Gautam' via tdycores-dev <
> tdycores-dev at googlegroups.com> wrote:
>
> Hi Matt,
>
> Thanks for the fixes to the example.
>
> -Gautam
>
> On Jan 15, 2020, at 7:05 PM, Matthew Knepley wrote:
>
> On Wed, Jan 15, 2020 at 4:08 PM Matthew Knepley wrote:
>
>> On Wed, Jan 15, 2020 at 3:47 PM 'Bisht, Gautam' via tdycores-dev <
>> tdycores-dev at googlegroups.com> wrote:
>>
>>> Hi Matt,
>>>
>>> I'm running into an error while using DMPlexNaturalToGlobalBegin/End and
>>> am hoping you have some insights into what I'm doing incorrectly. I create
>>> a 2x2x2 grid and distribute it across processors (N=1,2). I create a
>>> natural and a global vector, and then call DMPlexNaturalToGlobalBegin/End.
>>> Here are the two issues:
>>>
>>> - When N = 1, PETSc complains about DMSetUseNatural() not being called
>>> before DMPlexDistribute(), which is certainly not the case.
>>> - For N=1 and 2, the global vector doesn't have valid entries.
>>>
>>> I'm not sure how to create the natural vector and have used
>>> DMCreateGlobalVector() to create the natural vector, which could be the
>>> issue.
>>>
>>> Attached is the sample code to reproduce the error and below is the
>>> screen output.
>>>
>>
>> Cool. I will run it and figure out the problem.
>>
>> > > 1) There was bad error reporting there. I am putting the fix in a new > branch. It did not check for being on one process. If you run with > > knepley/fix-dm-g2n-serial > > It will work correctly in serial. > > 2) The G2N needs a serial data layout to work, so you have to make a > Section _before_ distributing. I need to put that in the docs. I have > fixed your example to do this and attached it. I run it with > > master *:~/Downloads/tmp/Gautam$ /PETSc3/petsc/bin/mpiexec -n 1 > ./ex_test -dm_plex_box_faces 2,2,2 -dm_view > DM Object: 1 MPI processes > type: plex > DM_0x84000000_0 in 3 dimensions: > 0-cells: 27 > 1-cells: 54 > 2-cells: 36 > 3-cells: 8 > Labels: > marker: 1 strata with value/size (1 (72)) > Face Sets: 6 strata with value/size (6 (4), 5 (4), 3 (4), 4 (4), 1 (4), > 2 (4)) > depth: 4 strata with value/size (0 (27), 1 (54), 2 (36), 3 (8)) > Field p: > adjacency FVM++ > Natural vector: > > Vec Object: 1 MPI processes > type: seq > 0. > 1. > 2. > 3. > 4. > 5. > 6. > 7. > > Global vector: > > Vec Object: 1 MPI processes > type: seq > 0. > 1. > 2. > 3. > 4. > 5. > 6. > 7. > > Information about the mesh: > [0] cell = 00; (0.250000, 0.250000, 0.250000); is_local = 1 > [0] cell = 01; (0.750000, 0.250000, 0.250000); is_local = 1 > [0] cell = 02; (0.250000, 0.750000, 0.250000); is_local = 1 > [0] cell = 03; (0.750000, 0.750000, 0.250000); is_local = 1 > [0] cell = 04; (0.250000, 0.250000, 0.750000); is_local = 1 > [0] cell = 05; (0.750000, 0.250000, 0.750000); is_local = 1 > [0] cell = 06; (0.250000, 0.750000, 0.750000); is_local = 1 > [0] cell = 07; (0.750000, 0.750000, 0.750000); is_local = 1 > > master *:~/Downloads/tmp/Gautam$ /PETSc3/petsc/bin/mpiexec -n 2 ./ex_test > -dm_plex_box_faces 2,2,2 -dm_view > DM Object: Parallel Mesh 2 MPI processes > type: plex > Parallel Mesh in 3 dimensions: > 0-cells: 27 27 > 1-cells: 54 54 > 2-cells: 36 36 > 3-cells: 8 8 > Labels: > depth: 4 strata with value/size (0 (27), 1 (54), 2 (36), 3 (8)) > marker: 1 strata with value/size (1 (72)) > Face Sets: 6 strata with value/size (1 (4), 2 (4), 3 (4), 4 (4), 5 (4), > 6 (4)) > Field p: > adjacency FVM++ > Natural vector: > > Vec Object: 2 MPI processes > type: mpi > Process [0] > 0. > 1. > 2. > 3. > Process [1] > 4. > 5. > 6. > 7. > > Global vector: > > Vec Object: 2 MPI processes > type: mpi > Process [0] > 2. > 3. > 6. > 7. > Process [1] > 0. > 1. > 4. > 5. 
> > Information about the mesh: > [0] cell = 00; (0.250000, 0.750000, 0.250000); is_local = 1 > [0] cell = 01; (0.750000, 0.750000, 0.250000); is_local = 1 > [0] cell = 02; (0.250000, 0.750000, 0.750000); is_local = 1 > [0] cell = 03; (0.750000, 0.750000, 0.750000); is_local = 1 > [0] cell = 04; (0.250000, 0.250000, 0.250000); is_local = 0 > [0] cell = 05; (0.750000, 0.250000, 0.250000); is_local = 0 > [0] cell = 06; (0.250000, 0.250000, 0.750000); is_local = 0 > [0] cell = 07; (0.750000, 0.250000, 0.750000); is_local = 0 > [1] cell = 00; (0.250000, 0.250000, 0.250000); is_local = 1 > [1] cell = 01; (0.750000, 0.250000, 0.250000); is_local = 1 > [1] cell = 02; (0.250000, 0.250000, 0.750000); is_local = 1 > [1] cell = 03; (0.750000, 0.250000, 0.750000); is_local = 1 > [1] cell = 04; (0.250000, 0.750000, 0.250000); is_local = 0 > [1] cell = 05; (0.750000, 0.750000, 0.250000); is_local = 0 > [1] cell = 06; (0.250000, 0.750000, 0.750000); is_local = 0 > > [1] cell = 07; (0.750000, 0.750000, 0.750000); is_local = 0 > > Thanks, > > Matt > > >> Thanks, >> >> Matt >> >> >>> >make ex_test >>> >>> >>> ++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++ >>> >$PETSC_DIR/$PETSC_ARCH/bin/mpiexec -np 1 ./ex_test >>> Natural vector: >>> >>> Vec Object: 1 MPI processes >>> type: seq >>> 0. >>> 1. >>> 2. >>> 3. >>> 4. >>> 5. >>> 6. >>> 7. >>> [0]PETSC ERROR: --------------------- Error Message >>> -------------------------------------------------------------- >>> [0]PETSC ERROR: Object is in wrong state >>> [0]PETSC ERROR: DM global to natural SF was not created. >>> You must call DMSetUseNatural() before DMPlexDistribute(). >>> >>> [0]PETSC ERROR: See https://www.mcs.anl.gov/petsc/documentation/faq.html >>> >>> for trouble shooting. >>> [0]PETSC ERROR: Petsc Development GIT revision: v3.12.2-537-g5f77d1e0e5 >>> GIT Date: 2019-12-21 14:33:27 -0600 >>> [0]PETSC ERROR: ./ex_test on a darwin-gcc8 named WE37411 by bish218 Wed >>> Jan 15 12:34:03 2020 >>> [0]PETSC ERROR: Configure options >>> --with-blaslapack-lib=/System/Library/Frameworks/Accelerate.framework/Versions/Current/Accelerate >>> --download-parmetis=yes --download-metis=yes --with-hdf5-dir=/opt/local >>> --download-zlib --download-exodusii=yes --download-hdf5=yes >>> --download-netcdf=yes --download-pnetcdf=yes --download-hypre=yes >>> --download-mpich=yes --download-mumps=yes --download-scalapack=yes >>> --with-cc=/opt/local/bin/gcc-mp-8 --with-cxx=/opt/local/bin/g++-mp-8 >>> --with-fc=/opt/local/bin/gfortran-mp-8 --download-sowing=1 >>> PETSC_ARCH=darwin-gcc8 >>> [0]PETSC ERROR: #1 DMPlexNaturalToGlobalBegin() line 289 in >>> /Users/bish218/projects/petsc/petsc_v3.12.2/src/dm/impls/plex/plexnatural.c >>> >>> Global vector: >>> >>> Vec Object: 1 MPI processes >>> type: seq >>> 0. >>> 0. >>> 0. >>> 0. >>> 0. >>> 0. >>> 0. >>> 0. 
>>> >>> Information about the mesh: >>> >>> Rank = 0 >>> local_id = 00; (0.250000, 0.250000, 0.250000); is_local = 1 >>> local_id = 01; (0.750000, 0.250000, 0.250000); is_local = 1 >>> local_id = 02; (0.250000, 0.750000, 0.250000); is_local = 1 >>> local_id = 03; (0.750000, 0.750000, 0.250000); is_local = 1 >>> local_id = 04; (0.250000, 0.250000, 0.750000); is_local = 1 >>> local_id = 05; (0.750000, 0.250000, 0.750000); is_local = 1 >>> local_id = 06; (0.250000, 0.750000, 0.750000); is_local = 1 >>> local_id = 07; (0.750000, 0.750000, 0.750000); is_local = 1 >>> >>> >>> ++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++ >>> >>> >$PETSC_DIR/$PETSC_ARCH/bin/mpiexec -np 2 ./ex_test >>> Natural vector: >>> >>> Vec Object: 2 MPI processes >>> type: mpi >>> Process [0] >>> 0. >>> 1. >>> 2. >>> 3. >>> Process [1] >>> 4. >>> 5. >>> 6. >>> 7. >>> >>> Global vector: >>> >>> Vec Object: 2 MPI processes >>> type: mpi >>> Process [0] >>> 0. >>> 0. >>> 0. >>> 0. >>> Process [1] >>> 0. >>> 0. >>> 0. >>> 0. >>> >>> Information about the mesh: >>> >>> Rank = 0 >>> local_id = 00; (0.250000, 0.750000, 0.250000); is_local = 1 >>> local_id = 01; (0.750000, 0.750000, 0.250000); is_local = 1 >>> local_id = 02; (0.250000, 0.750000, 0.750000); is_local = 1 >>> local_id = 03; (0.750000, 0.750000, 0.750000); is_local = 1 >>> local_id = 04; (0.250000, 0.250000, 0.250000); is_local = 0 >>> local_id = 05; (0.750000, 0.250000, 0.250000); is_local = 0 >>> local_id = 06; (0.250000, 0.250000, 0.750000); is_local = 0 >>> local_id = 07; (0.750000, 0.250000, 0.750000); is_local = 0 >>> >>> Rank = 1 >>> local_id = 00; (0.250000, 0.250000, 0.250000); is_local = 1 >>> local_id = 01; (0.750000, 0.250000, 0.250000); is_local = 1 >>> local_id = 02; (0.250000, 0.250000, 0.750000); is_local = 1 >>> local_id = 03; (0.750000, 0.250000, 0.750000); is_local = 1 >>> local_id = 04; (0.250000, 0.750000, 0.250000); is_local = 0 >>> local_id = 05; (0.750000, 0.750000, 0.250000); is_local = 0 >>> local_id = 06; (0.250000, 0.750000, 0.750000); is_local = 0 >>> local_id = 07; (0.750000, 0.750000, 0.750000); is_local = 0 >>> >>> >>> ++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++ >>> >>> >>> -Gautam >>> >>> >>> >>> >>> On Jan 9, 2020, at 4:57 PM, 'Bisht, Gautam' via tdycores-dev < >>> tdycores-dev at googlegroups.com> wrote: >>> >>> >>> >>> On Jan 9, 2020, at 4:25 PM, Matthew Knepley wrote: >>> >>> On Thu, Jan 9, 2020 at 1:35 PM 'Bisht, Gautam' via tdycores-dev < >>> tdycores-dev at googlegroups.com> wrote: >>> >>> >>> > On Jan 9, 2020, at 2:58 PM, Jed Brown wrote: >>> > >>> > "'Bisht, Gautam' via tdycores-dev" >>> writes: >>> > >>> >>> Do you need to rely on the element number, or would coordinates (of a >>> >>> centroid?) be sufficient for your purposes? >>> >> >>> >> I do need to rely on the element number. In my case, I have a >>> mapping file that remaps data from one grid onto another grid. Though I?m >>> currently creating a hexahedron mesh, in the future I would be reading in >>> an unstructured grid from a file for which I cannot rely on coordinates. >>> > >>> > How does the mapping file work and how is it generated? >>> >>> In CESM/E3SM, the mapping file is used to map fluxes or states between >>> grids of two components (e.g. land & atmosphere). The mapping method can be >>> conservative, nearest neighbor, bilinear, etc. 
While CESM/E3SM uses >>> ESMF_RegridWeightGen to generate the mapping file, I?m using by own MATLAB >>> script to create the mapping file. >>> >>> I?m surprised that this is not an issue for other codes that are using >>> DMPlex. E.g In PFLOTRAN, when a user creates a custom unstructured grid, >>> they can specify material property for each grid cell. So, there should be >>> a way to create a vectorscatter that will scatter material property read in >>> the ?application?-order (i.e. order before calling DMPlexDistribute() ) to >>> ghosted-order (i.e. order after calling DMPlexDistribute()). >>> >>> >>> We did build something specific for this because some people wanted it. >>> I wish I could purge this from all simulations. Its >>> definitely destructive, but this is the way the world currently is. >>> >>> You want this: >>> >>> >>> https://www.mcs.anl.gov/petsc/petsc-current/docs/manualpages/DMPLEX/DMPlexNaturalToGlobalBegin.html >>> >>> >>> >>> Perfect. >>> >>> Thanks. >>> -Gautam >>> >>> >>> >>> Thanks, >>> >>> Matt >>> >>> >>> > We can locate points and create interpolation with unstructured grids. >>> > >>> > -- >>> > You received this message because you are subscribed to the Google >>> Groups "tdycores-dev" group. >>> > To unsubscribe from this group and stop receiving emails from it, send >>> an email to tdycores-dev+unsubscribe at googlegroups.com. >>> > To view this discussion on the web visit >>> https://protect2.fireeye.com/v1/url?k=b265c01b-eed0fed4-b265ea0e-0cc47adc5e60-1707adbf1790c7e4&q=1&e=0962f8e1-9155-4d9c-abdf-2b6481141cd0&u=https%3A%2F%2Fgroups.google.com%2Fd%2Fmsgid%2Ftdycores-dev%2F8736come4e.fsf%2540jedbrown.org >>> . >>> >>> -- >>> You received this message because you are subscribed to the Google >>> Groups "tdycores-dev" group. >>> To unsubscribe from this group and stop receiving emails from it, send >>> an email to tdycores-dev+unsubscribe at googlegroups.com. >>> To view this discussion on the web visit >>> https://groups.google.com/d/msgid/tdycores-dev/9AB001AF-8857-446A-AE69-E8D6A25CB8FA%40pnnl.gov >>> >>> . >>> >>> >>> >>> -- >>> What most experimenters take for granted before they begin their >>> experiments is infinitely more interesting than any results to which their >>> experiments lead. >>> -- Norbert Wiener >>> >>> https://www.cse.buffalo.edu/~knepley/ >>> >>> >>> -- >>> You received this message because you are subscribed to the Google >>> Groups "tdycores-dev" group. >>> To unsubscribe from this group and stop receiving emails from it, send >>> an email to tdycores-dev+unsubscribe at googlegroups.com. >>> To view this discussion on the web visit >>> https://groups.google.com/d/msgid/tdycores-dev/CAMYG4Gm%3DSY%3DyDiYOdBm1j_KZO5NYhu80ZhbFTV23O%2Bv-zVvFnA%40mail.gmail.com >>> >>> . >>> >>> >>> >>> -- >>> You received this message because you are subscribed to the Google >>> Groups "tdycores-dev" group. >>> To unsubscribe from this group and stop receiving emails from it, send >>> an email to tdycores-dev+unsubscribe at googlegroups.com. >>> To view this discussion on the web visit >>> https://groups.google.com/d/msgid/tdycores-dev/7C23ABBA-2F76-4EAB-9834-9391AD77E18B%40pnnl.gov >>> >>> . >>> >>> >>> >>> -- >>> You received this message because you are subscribed to the Google >>> Groups "tdycores-dev" group. >>> To unsubscribe from this group and stop receiving emails from it, send >>> an email to tdycores-dev+unsubscribe at googlegroups.com. 
>>> To view this discussion on the web visit
>>> https://groups.google.com/d/msgid/tdycores-dev/8A7925AE-08F5-4F81-AAA5-B2FDC3D833B0%40pnnl.gov.
>>
>> --
>> What most experimenters take for granted before they begin their
>> experiments is infinitely more interesting than any results to which their
>> experiments lead.
>> -- Norbert Wiener
>>
>> https://www.cse.buffalo.edu/~knepley/
>
> --
> What most experimenters take for granted before they begin their
> experiments is infinitely more interesting than any results to which their
> experiments lead.
> -- Norbert Wiener
>
> https://www.cse.buffalo.edu/~knepley/
>
> --
> You received this message because you are subscribed to the Google Groups
> "tdycores-dev" group.
> To unsubscribe from this group and stop receiving emails from it, send an
> email to tdycores-dev+unsubscribe at googlegroups.com.
> To view this discussion on the web visit
> https://groups.google.com/d/msgid/tdycores-dev/CAMYG4Gn%3DxsVjjN8sX6km8ub%3Djkk8vxiU2DZVEi-4Kpbi_rM-0w%40mail.gmail.com.
>
> --
> You received this message because you are subscribed to the Google Groups
> "tdycores-dev" group.
> To unsubscribe from this group and stop receiving emails from it, send an
> email to tdycores-dev+unsubscribe at googlegroups.com.
> To view this discussion on the web visit
> https://groups.google.com/d/msgid/tdycores-dev/3F79926B-D567-4592-8E4E-46D21628D2DF%40pnnl.gov.
>
> --
> You received this message because you are subscribed to the Google Groups
> "tdycores-dev" group.
> To unsubscribe from this group and stop receiving emails from it, send an
> email to tdycores-dev+unsubscribe at googlegroups.com.
> To view this discussion on the web visit
> https://groups.google.com/d/msgid/tdycores-dev/042A2165-BACC-41C0-8603-C4319C1A5176%40pnnl.gov.

--
What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead.
-- Norbert Wiener

https://www.cse.buffalo.edu/~knepley/

From knepley at gmail.com Tue Jan 28 08:55:38 2020
From: knepley at gmail.com (Matthew Knepley)
Date: Tue, 28 Jan 2020 09:55:38 -0500
Subject: [petsc-users] overset grids
In-Reply-To: References: Message-ID:

On Fri, Jan 24, 2020 at 9:25 AM Mark Cunningham <
mark.cunningham at ariacoustics.com> wrote:

> As a novice PETSc user, can I ask for some advice? I've found some test
> examples that do similar sorts of things but I would appreciate any
> suggestions.
> I have a matrix generated from overset meshes with a structure like
>
>   A = | A0  B01 ... B0n |
>       | B01 A1  ... B1n |
>       |  :           :  |
>       | B0n ...     An  |
>
> where the Ai are finite difference stencils and the Bij represent
> interpolation between the grids.
> 1. If I create a DMDA composite object that defines each of the grid sizes
> through repeated calls to DMCreate1D (because we are omitting blanked
> points from the 3D mesh) and create the matrix through DMCreateMatrix, can
> I still fill the matrix through MatSetValues using the global index?
> (This is how the code operates now.)
>

I cannot understand this question as is. First, I think you mean a
DMComposite object, made from many DMDA objects. I do not understand using
DMDACreate1D() here. However, DMComposite does not understand how to couple
grids together. Thus DMCreateMatrix() will give back a block diagonal
matrix, with no allocated nonzeros between grids. You can put them in
yourself, _or_ you can just insert nonzeros, incurring the overhead of
reallocating (which can be large).
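The second option above can be sketched in a few lines. This is a hedged illustration, not code from the thread: A is the DMCreateMatrix() result, and nrows/rows/ncols/cols/vals are hypothetical placeholders for one batch of inter-grid interpolation coefficients in global indices.

#include <petscmat.h>

/* Permit new nonzero locations in a matrix whose preallocation only
   covered the diagonal blocks, then insert the Bij coupling entries. */
static PetscErrorCode InsertCouplings(Mat A, PetscInt nrows, const PetscInt rows[],
                                      PetscInt ncols, const PetscInt cols[],
                                      const PetscScalar vals[])
{
  PetscErrorCode ierr;

  PetscFunctionBeginUser;
  ierr = MatSetOption(A, MAT_NEW_NONZERO_LOCATIONS, PETSC_TRUE);CHKERRQ(ierr);
  ierr = MatSetOption(A, MAT_NEW_NONZERO_ALLOCATION_ERR, PETSC_FALSE);CHKERRQ(ierr);
  ierr = MatSetValues(A, nrows, rows, ncols, cols, vals, INSERT_VALUES);CHKERRQ(ierr);
  ierr = MatAssemblyBegin(A, MAT_FINAL_ASSEMBLY);CHKERRQ(ierr);
  ierr = MatAssemblyEnd(A, MAT_FINAL_ASSEMBLY);CHKERRQ(ierr);
  PetscFunctionReturn(0);
}

Preallocating the Bij pattern up front avoids the reallocation penalty entirely, at the cost of computing the coupling stencil before assembly.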
> 2. If I have to declare the matrix to be matnest and I have to create each
> of the subblocks individually, how do I cope with some of the Bij being all
> zero? Must there be a MatSetValues call for each block?
>

You can leave some blocks as NULL in MatNest.

> 3. The principal reason at the moment for the change is to use
> diag(A0...An) as the preconditioning matrix and use ILU on the blocks. If I
> set the preconditioner to be bjacobi and then fetch the subksps, can I then
> set the subpcs to be ilu?
>

Yes, if you want block ILU(0) you can get this using

  -pc_type bjacobi -sub_pc_type ilu

I am not sure that BJACOBI makes the blocks match the MatNest layout. I
will have to check. If not, this is easy to do by hand by getting out the
ISes for the blocks.

> 4. A further complication is we would like to have a boundary condition
> where the values of the boundary points are defined by an expansion in
> functions that satisfy the radiation condition. So, I would like to define
> an index set that identifies the boundary points and define a PCSHELL on A
> that for the pcsetup phase will do a QR factorization of the auxiliary
> matrix G, where we're solving Gy = QRy = c, and then on pcapply do the
> Ry = Q* c triangular solve that will enable me to update the boundary
> points (y a subset of x, the solution vector). How do I get the PCSHELL to
> work with the block jacobi strategy?
>

I am not sure I would do things this way, but maybe I do not understand
well enough. Suppose we magically have the boundary values. Then we can
just call

https://www.mcs.anl.gov/petsc/petsc-current/docs/manualpages/Mat/MatZeroRowsIS.html

with your IS to set the rows of constrained unknowns to 1 and set the
boundary values in the rhs b. So, it seems like you can just calculate the
boundary values using your QR, and then insert them using the function
above.

  Thanks,

    Matt

> Thanks for any suggestions.
>
> Mark Cunningham

--
What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead.
-- Norbert Wiener

https://www.cse.buffalo.edu/~knepley/
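For reference, the option pair suggested above (-pc_type bjacobi -sub_pc_type ilu) can also be applied programmatically. This is a minimal hedged sketch of a helper that assumes the KSP already has its operators set; it is not code from this thread.

#include <petscksp.h>

/* Block Jacobi with ILU on each local block, set in code rather than
   via the options database. */
static PetscErrorCode SetBlockILU(KSP ksp)
{
  PC             pc, subpc;
  KSP           *subksp;
  PetscInt       i, nlocal, first;
  PetscErrorCode ierr;

  PetscFunctionBeginUser;
  ierr = KSPGetPC(ksp, &pc);CHKERRQ(ierr);
  ierr = PCSetType(pc, PCBJACOBI);CHKERRQ(ierr);
  ierr = KSPSetUp(ksp);CHKERRQ(ierr);          /* the sub-KSPs exist only after setup */
  ierr = PCBJacobiGetSubKSP(pc, &nlocal, &first, &subksp);CHKERRQ(ierr);
  for (i = 0; i < nlocal; ++i) {
    ierr = KSPGetPC(subksp[i], &subpc);CHKERRQ(ierr);
    ierr = PCSetType(subpc, PCILU);CHKERRQ(ierr);
  }
  PetscFunctionReturn(0);
}

If the default block layout (one block per process) does not match the per-grid blocks, PCBJacobiSetTotalBlocks() can be called before setup to change it.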
From jourdon_anthony at hotmail.fr Wed Jan 29 02:54:06 2020
From: jourdon_anthony at hotmail.fr (Anthony Jourdon)
Date: Wed, 29 Jan 2020 08:54:06 +0000
Subject: [petsc-users] DMDA Error
In-Reply-To: References: ,
Message-ID:

Hello Junchao,

Thank you for your answer! Unfortunately the computer is having big performance troubles and is currently in maintenance. Maybe the error I am encountering is related to these issues. Anyway, as soon as I can I'll test what you proposed. I'll, of course, keep you informed about how it goes.

Thank you very much.
Sincerely,

Anthony Jourdon

________________________________
From: Zhang, Junchao
Sent: Friday, January 24, 2020, 4:52 PM
To: Anthony Jourdon
Cc: petsc-users at mcs.anl.gov
Subject: Re: [petsc-users] DMDA Error

Hello, Anthony

I tried petsc-3.8.4 + icc/gcc + Intel MPI 2019 update 5 + optimized/debug build, and ran with 1024 ranks, but I could not reproduce the error. Maybe you can try these:

* Use the latest petsc + your test example, run with AND without -vecscatter_type mpi1, to see if they can report useful messages.
* Or, use Intel MPI 2019 update 6 to see if this is an Intel MPI bug.

$ cat ex50.c
#include <petscsys.h>
#include <petscdmda.h>
int main(int argc,char **argv)
{
  PetscErrorCode ierr;
  PetscInt size;
  PetscInt X = 1024,Y = 128,Z=512;
  //PetscInt X = 512,Y = 64, Z=256;
  DM da;

  ierr = PetscInitialize(&argc,&argv,(char*)0,NULL);if (ierr) return ierr;
  ierr = MPI_Comm_size(PETSC_COMM_WORLD,&size);CHKERRQ(ierr);
  ierr = DMDACreate3d(PETSC_COMM_WORLD,DM_BOUNDARY_NONE,DM_BOUNDARY_NONE,DM_BOUNDARY_NONE,DMDA_STENCIL_BOX,2*X+1,2*Y+1,2*Z+1,PETSC_DECIDE,PETSC_DECIDE,PETSC_DECIDE,3,2,NULL,NULL,NULL,&da);CHKERRQ(ierr);
  ierr = DMSetFromOptions(da);CHKERRQ(ierr);
  ierr = DMSetUp(da);CHKERRQ(ierr);
  ierr = PetscPrintf(PETSC_COMM_WORLD,"Running with %D MPI ranks\n",size);CHKERRQ(ierr);
  ierr = DMDestroy(&da);CHKERRQ(ierr);
  ierr = PetscFinalize();
  return ierr;
}

(The two scrubbed header lines have been restored as <petscsys.h> and <petscdmda.h>, the standard includes for a DMDA test of this kind.)

$ldd ex50
linux-vdso.so.1 => (0x00007ffdbcd43000)
libpetsc.so.3.8 => /home/jczhang/petsc/linux-intel-opt/lib/libpetsc.so.3.8 (0x00002afd27e51000)
libX11.so.6 => /lib64/libX11.so.6 (0x00002afd2a811000)
libifport.so.5 => /blues/gpfs/home/software/spack-0.10.1/opt/spack/linux-centos7-x86_64/gcc-4.8.5/intel-parallel-studio-cluster.2019.5-zqvneipqa4u52iwlyy5kx4hbsfnspz6g/compilers_and_libraries_2019.5.281/linux/compiler/lib/intel64_lin/libifport.so.5 (0x00002afd2ab4f000)
libmpicxx.so.12 => /blues/gpfs/home/software/spack-0.10.1/opt/spack/linux-centos7-x86_64/gcc-4.8.5/intel-parallel-studio-cluster.2019.5-zqvneipqa4u52iwlyy5kx4hbsfnspz6g/compilers_and_libraries_2019.5.281/linux/mpi/intel64/lib/libmpicxx.so.12 (0x00002afd2ad7d000)
libdl.so.2 => /lib64/libdl.so.2 (0x00002afd2af9d000)
libmpifort.so.12 => /blues/gpfs/home/software/spack-0.10.1/opt/spack/linux-centos7-x86_64/gcc-4.8.5/intel-parallel-studio-cluster.2019.5-zqvneipqa4u52iwlyy5kx4hbsfnspz6g/compilers_and_libraries_2019.5.281/linux/mpi/intel64/lib/libmpifort.so.12 (0x00002afd2b1a1000)
libmpi.so.12 => /blues/gpfs/home/software/spack-0.10.1/opt/spack/linux-centos7-x86_64/gcc-4.8.5/intel-parallel-studio-cluster.2019.5-zqvneipqa4u52iwlyy5kx4hbsfnspz6g/compilers_and_libraries_2019.5.281/linux/mpi/intel64/lib/release/libmpi.so.12 (0x00002afd2b55f000)
librt.so.1 => /lib64/librt.so.1 (0x00002afd2c564000)
libpthread.so.0 => /lib64/libpthread.so.0 (0x00002afd2c76c000)
libimf.so => /blues/gpfs/home/software/spack-0.10.1/opt/spack/linux-centos7-x86_64/gcc-4.8.5/intel-parallel-studio-cluster.2019.5-zqvneipqa4u52iwlyy5kx4hbsfnspz6g/compilers_and_libraries_2019.5.281/linux/compiler/lib/intel64_lin/libimf.so (0x00002afd2c988000)
libsvml.so => /blues/gpfs/home/software/spack-0.10.1/opt/spack/linux-centos7-x86_64/gcc-4.8.5/intel-parallel-studio-cluster.2019.5-zqvneipqa4u52iwlyy5kx4hbsfnspz6g/compilers_and_libraries_2019.5.281/linux/compiler/lib/intel64_lin/libsvml.so (0x00002afd2d00d000)
libirng.so => /blues/gpfs/home/software/spack-0.10.1/opt/spack/linux-centos7-x86_64/gcc-4.8.5/intel-parallel-studio-cluster.2019.5-zqvneipqa4u52iwlyy5kx4hbsfnspz6g/compilers_and_libraries_2019.5.281/linux/compiler/lib/intel64_lin/libirng.so (0x00002afd2ea99000)
libm.so.6 => /lib64/libm.so.6 (0x00002afd2ee04000)
libcilkrts.so.5 => /blues/gpfs/home/software/spack-0.10.1/opt/spack/linux-centos7-x86_64/gcc-4.8.5/intel-parallel-studio-cluster.2019.5-zqvneipqa4u52iwlyy5kx4hbsfnspz6g/compilers_and_libraries_2019.5.281/linux/compiler/lib/intel64_lin/libcilkrts.so.5 (0x00002afd2f106000)
libstdc++.so.6 => /blues/gpfs/home/software/spack-0.10.1/opt/spack/linux-centos7-x86_64/gcc-4.8.5/intel-parallel-studio-cluster.2019.5-zqvneipqa4u52iwlyy5kx4hbsfnspz6g/clck/2019.5/lib/intel64/libstdc++.so.6 (0x00002afd2f343000)
libgcc_s.so.1 => /blues/gpfs/home/software/spack-0.10.1/opt/spack/linux-centos7-x86_64/gcc-4.8.5/intel-parallel-studio-cluster.2019.5-zqvneipqa4u52iwlyy5kx4hbsfnspz6g/clck/2019.5/lib/intel64/libgcc_s.so.1 (0x00002afd2f655000)
libirc.so => /blues/gpfs/home/software/spack-0.10.1/opt/spack/linux-centos7-x86_64/gcc-4.8.5/intel-parallel-studio-cluster.2019.5-zqvneipqa4u52iwlyy5kx4hbsfnspz6g/compilers_and_libraries_2019.5.281/linux/compiler/lib/intel64_lin/libirc.so (0x00002afd2f86b000)
libc.so.6 => /lib64/libc.so.6 (0x00002afd2fadd000)
libintlc.so.5 => /blues/gpfs/home/software/spack-0.10.1/opt/spack/linux-centos7-x86_64/gcc-4.8.5/intel-parallel-studio-cluster.2019.5-zqvneipqa4u52iwlyy5kx4hbsfnspz6g/compilers_and_libraries_2019.5.281/linux/compiler/lib/intel64_lin/libintlc.so.5 (0x00002afd2feaa000)
libxcb.so.1 => /lib64/libxcb.so.1 (0x00002afd3011c000)
/lib64/ld-linux-x86-64.so.2 (0x00002afd27c2d000)
libfabric.so.1 => /blues/gpfs/home/software/spack-0.10.1/opt/spack/linux-centos7-x86_64/gcc-4.8.5/intel-parallel-studio-cluster.2019.5-zqvneipqa4u52iwlyy5kx4hbsfnspz6g/compilers_and_libraries_2019.5.281/linux/mpi/intel64/libfabric/lib/libfabric.so.1 (0x00002afd30344000)
libXau.so.6 => /lib64/libXau.so.6 (0x00002afd3057c000)

--Junchao Zhang

On Tue, Jan 21, 2020 at 2:25 AM Anthony Jourdon > wrote:

Hello,

I made a test to try to reproduce the error. To do so I modified the file $PETSC_DIR/src/dm/examples/tests/ex35.c. I attach the file in case of need. The same error is reproduced for 1024 MPI ranks. I tested two problem sizes (2*512+1 x 2*64+1 x 2*256+1 and 2*1024+1 x 2*128+1 x 2*512+1) and the error occurred for both cases; the first case is also the one I used to run before the OS and MPI updates. I also ran the code with -malloc_debug and nothing more appeared. I attached the configure command I used to build a debug version of petsc.

Thank you for your time,
Sincerely,

Anthony Jourdon

________________________________
From: Zhang, Junchao
Sent: Thursday, January 16, 2020, 4:49 PM
To: Anthony Jourdon
Cc: petsc-users at mcs.anl.gov
Subject: Re: [petsc-users] DMDA Error

It seems the problem is triggered by DMSetUp. You can write a small test creating the DMDA with the same size as your code, to see if you can reproduce the problem. If yes, it would be much easier for us to debug it.

--Junchao Zhang

On Thu, Jan 16, 2020 at 7:38 AM Anthony Jourdon > wrote:

Dear PETSc developers,

I need assistance with an error. I run a code that uses the DMDA related functions. I'm using petsc-3.8.4. This code used to run very well on a supercomputer with the OS SLES11. Petsc was built using an intel mpi 5.1.3.223 module and intel mkl version 2016.0.2.181. The code was running with no problem on 1024 and more MPI ranks.

Recently, the OS of the computer has been updated to RHEL7. I rebuilt Petsc using the newly available versions of Intel MPI (2019U5) and MKL (2019.0.5.281), with matching compiler and MKL versions. Since then I have tested the exact same code on 8, 16, 24, 48, 512 and 1024 MPI ranks. Below 1024 MPI ranks there was no problem, but at 1024 an error related to DMDA appeared. I snip the first lines of the error stack here and the full error stack is attached.
[534]PETSC ERROR: #1 PetscGatherMessageLengths() line 120 in /scratch2/dlp/appli_local/SCR/OROGEN/petsc3.8.4_MPI/petsc-3.8.4/src/sys/utils/mpimesg.c
[534]PETSC ERROR: #2 VecScatterCreate_PtoS() line 2288 in /scratch2/dlp/appli_local/SCR/OROGEN/petsc3.8.4_MPI/petsc-3.8.4/src/vec/vec/utils/vpscat.c
[534]PETSC ERROR: #3 VecScatterCreate() line 1462 in /scratch2/dlp/appli_local/SCR/OROGEN/petsc3.8.4_MPI/petsc-3.8.4/src/vec/vec/utils/vscat.c
[534]PETSC ERROR: #4 DMSetUp_DA_3D() line 1042 in /scratch2/dlp/appli_local/SCR/OROGEN/petsc3.8.4_MPI/petsc-3.8.4/src/dm/impls/da/da3.c
[534]PETSC ERROR: #5 DMSetUp_DA() line 25 in /scratch2/dlp/appli_local/SCR/OROGEN/petsc3.8.4_MPI/petsc-3.8.4/src/dm/impls/da/dareg.c
[534]PETSC ERROR: #6 DMSetUp() line 720 in /scratch2/dlp/appli_local/SCR/OROGEN/petsc3.8.4_MPI/petsc-3.8.4/src/dm/interface/dm.c

Thank you for your time,
Sincerely,

Anthony Jourdon

From knepley at gmail.com Wed Jan 29 07:52:15 2020
From: knepley at gmail.com (Matthew Knepley)
Date: Wed, 29 Jan 2020 08:52:15 -0500
Subject: [petsc-users] DMPlex: Mapping cells before and after partitioning
In-Reply-To: References: <875zhkmf0z.fsf@jedbrown.org> <8736come4e.fsf@jedbrown.org> <9AB001AF-8857-446A-AE69-E8D6A25CB8FA@pnnl.gov> <7C23ABBA-2F76-4EAB-9834-9391AD77E18B@pnnl.gov> <8A7925AE-08F5-4F81-AAA5-B2FDC3D833B0@pnnl.gov> <3F79926B-D567-4592-8E4E-46D21628D2DF@pnnl.gov> <042A2165-BACC-41C0-8603-C4319C1A5176@pnnl.gov>
Message-ID:

On Tue, Jan 28, 2020 at 6:12 AM Matthew Knepley wrote:

> On Mon, Jan 27, 2020 at 9:57 PM 'Bisht, Gautam' via tdycores-dev <
> tdycores-dev at googlegroups.com> wrote:
>
>> Hi Matt,
>>
>> Could you issue a MR to get the changes from knepley/fix-dm-g2n-serial
>> into PETSc master? I'm having some trouble during PETSc installation
>> with exodusii using your branch, but master seems fine.
>>
>
> Yep, just submitted the MR. Should be in soon.
>

I forgot to say, this is now merged.

  Thanks,

    Matt

> Thanks,
> -Gautam.
>
> On Jan 18, 2020, at 11:19 AM, 'Bisht, Gautam' via tdycores-dev <
> tdycores-dev at googlegroups.com> wrote:
>
> Hi Matt,
>
> Thanks for the fixes to the example.
>
> -Gautam
>
> On Jan 15, 2020, at 7:05 PM, Matthew Knepley wrote:
>
> On Wed, Jan 15, 2020 at 4:08 PM Matthew Knepley wrote:
>
>> On Wed, Jan 15, 2020 at 3:47 PM 'Bisht, Gautam' via tdycores-dev <
>> tdycores-dev at googlegroups.com> wrote:
>>
>>> Hi Matt,
>>>
>>> I'm running into an error while using DMPlexNaturalToGlobalBegin/End and
>>> am hoping you have some insights into what I'm doing incorrectly. I create
>>> a 2x2x2 grid and distribute it across processors (N=1,2). I create a
>>> natural and a global vector, and then call DMPlexNaturalToGlobalBegin/End.
>>> Here are the two issues:
>>>
>>> - When N = 1, PETSc complains about DMSetUseNatural() not being called
>>> before DMPlexDistribute(), which is certainly not the case.
>>> - For N=1 and 2, the global vector doesn't have valid entries.
>>>
>>> I'm not sure how to create the natural vector and have used
>>> DMCreateGlobalVector() to create the natural vector, which could be the
>>> issue.
>>>
>>> Attached is the sample code to reproduce the error and below is the
>>> screen output.
>>>
>>
>> Cool. I will run it and figure out the problem.
>>
>
> 1) There was bad error reporting there. I am putting the fix in a new
> branch. It did not check for being on one process.
If you run with >> >> knepley/fix-dm-g2n-serial >> >> It will work correctly in serial. >> >> 2) The G2N needs a serial data layout to work, so you have to make a >> Section _before_ distributing. I need to put that in the docs. I have >> fixed your example to do this and attached it. I run it with >> >> master *:~/Downloads/tmp/Gautam$ /PETSc3/petsc/bin/mpiexec -n 1 >> ./ex_test -dm_plex_box_faces 2,2,2 -dm_view >> DM Object: 1 MPI processes >> type: plex >> DM_0x84000000_0 in 3 dimensions: >> 0-cells: 27 >> 1-cells: 54 >> 2-cells: 36 >> 3-cells: 8 >> Labels: >> marker: 1 strata with value/size (1 (72)) >> Face Sets: 6 strata with value/size (6 (4), 5 (4), 3 (4), 4 (4), 1 >> (4), 2 (4)) >> depth: 4 strata with value/size (0 (27), 1 (54), 2 (36), 3 (8)) >> Field p: >> adjacency FVM++ >> Natural vector: >> >> Vec Object: 1 MPI processes >> type: seq >> 0. >> 1. >> 2. >> 3. >> 4. >> 5. >> 6. >> 7. >> >> Global vector: >> >> Vec Object: 1 MPI processes >> type: seq >> 0. >> 1. >> 2. >> 3. >> 4. >> 5. >> 6. >> 7. >> >> Information about the mesh: >> [0] cell = 00; (0.250000, 0.250000, 0.250000); is_local = 1 >> [0] cell = 01; (0.750000, 0.250000, 0.250000); is_local = 1 >> [0] cell = 02; (0.250000, 0.750000, 0.250000); is_local = 1 >> [0] cell = 03; (0.750000, 0.750000, 0.250000); is_local = 1 >> [0] cell = 04; (0.250000, 0.250000, 0.750000); is_local = 1 >> [0] cell = 05; (0.750000, 0.250000, 0.750000); is_local = 1 >> [0] cell = 06; (0.250000, 0.750000, 0.750000); is_local = 1 >> [0] cell = 07; (0.750000, 0.750000, 0.750000); is_local = 1 >> >> master *:~/Downloads/tmp/Gautam$ /PETSc3/petsc/bin/mpiexec -n 2 ./ex_test >> -dm_plex_box_faces 2,2,2 -dm_view >> DM Object: Parallel Mesh 2 MPI processes >> type: plex >> Parallel Mesh in 3 dimensions: >> 0-cells: 27 27 >> 1-cells: 54 54 >> 2-cells: 36 36 >> 3-cells: 8 8 >> Labels: >> depth: 4 strata with value/size (0 (27), 1 (54), 2 (36), 3 (8)) >> marker: 1 strata with value/size (1 (72)) >> Face Sets: 6 strata with value/size (1 (4), 2 (4), 3 (4), 4 (4), 5 >> (4), 6 (4)) >> Field p: >> adjacency FVM++ >> Natural vector: >> >> Vec Object: 2 MPI processes >> type: mpi >> Process [0] >> 0. >> 1. >> 2. >> 3. >> Process [1] >> 4. >> 5. >> 6. >> 7. >> >> Global vector: >> >> Vec Object: 2 MPI processes >> type: mpi >> Process [0] >> 2. >> 3. >> 6. >> 7. >> Process [1] >> 0. >> 1. >> 4. >> 5. 
>> >> Information about the mesh: >> [0] cell = 00; (0.250000, 0.750000, 0.250000); is_local = 1 >> [0] cell = 01; (0.750000, 0.750000, 0.250000); is_local = 1 >> [0] cell = 02; (0.250000, 0.750000, 0.750000); is_local = 1 >> [0] cell = 03; (0.750000, 0.750000, 0.750000); is_local = 1 >> [0] cell = 04; (0.250000, 0.250000, 0.250000); is_local = 0 >> [0] cell = 05; (0.750000, 0.250000, 0.250000); is_local = 0 >> [0] cell = 06; (0.250000, 0.250000, 0.750000); is_local = 0 >> [0] cell = 07; (0.750000, 0.250000, 0.750000); is_local = 0 >> [1] cell = 00; (0.250000, 0.250000, 0.250000); is_local = 1 >> [1] cell = 01; (0.750000, 0.250000, 0.250000); is_local = 1 >> [1] cell = 02; (0.250000, 0.250000, 0.750000); is_local = 1 >> [1] cell = 03; (0.750000, 0.250000, 0.750000); is_local = 1 >> [1] cell = 04; (0.250000, 0.750000, 0.250000); is_local = 0 >> [1] cell = 05; (0.750000, 0.750000, 0.250000); is_local = 0 >> [1] cell = 06; (0.250000, 0.750000, 0.750000); is_local = 0 >> >> [1] cell = 07; (0.750000, 0.750000, 0.750000); is_local = 0 >> >> Thanks, >> >> Matt >> >> >>> Thanks, >>> >>> Matt >>> >>> >>>> >make ex_test >>>> >>>> >>>> ++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++ >>>> >$PETSC_DIR/$PETSC_ARCH/bin/mpiexec -np 1 ./ex_test >>>> Natural vector: >>>> >>>> Vec Object: 1 MPI processes >>>> type: seq >>>> 0. >>>> 1. >>>> 2. >>>> 3. >>>> 4. >>>> 5. >>>> 6. >>>> 7. >>>> [0]PETSC ERROR: --------------------- Error Message >>>> -------------------------------------------------------------- >>>> [0]PETSC ERROR: Object is in wrong state >>>> [0]PETSC ERROR: DM global to natural SF was not created. >>>> You must call DMSetUseNatural() before DMPlexDistribute(). >>>> >>>> [0]PETSC ERROR: See >>>> https://www.mcs.anl.gov/petsc/documentation/faq.html >>>> >>>> for trouble shooting. >>>> [0]PETSC ERROR: Petsc Development GIT revision: v3.12.2-537-g5f77d1e0e5 >>>> GIT Date: 2019-12-21 14:33:27 -0600 >>>> [0]PETSC ERROR: ./ex_test on a darwin-gcc8 named WE37411 by bish218 Wed >>>> Jan 15 12:34:03 2020 >>>> [0]PETSC ERROR: Configure options >>>> --with-blaslapack-lib=/System/Library/Frameworks/Accelerate.framework/Versions/Current/Accelerate >>>> --download-parmetis=yes --download-metis=yes --with-hdf5-dir=/opt/local >>>> --download-zlib --download-exodusii=yes --download-hdf5=yes >>>> --download-netcdf=yes --download-pnetcdf=yes --download-hypre=yes >>>> --download-mpich=yes --download-mumps=yes --download-scalapack=yes >>>> --with-cc=/opt/local/bin/gcc-mp-8 --with-cxx=/opt/local/bin/g++-mp-8 >>>> --with-fc=/opt/local/bin/gfortran-mp-8 --download-sowing=1 >>>> PETSC_ARCH=darwin-gcc8 >>>> [0]PETSC ERROR: #1 DMPlexNaturalToGlobalBegin() line 289 in >>>> /Users/bish218/projects/petsc/petsc_v3.12.2/src/dm/impls/plex/plexnatural.c >>>> >>>> Global vector: >>>> >>>> Vec Object: 1 MPI processes >>>> type: seq >>>> 0. >>>> 0. >>>> 0. >>>> 0. >>>> 0. >>>> 0. >>>> 0. >>>> 0. 
>>>> >>>> Information about the mesh: >>>> >>>> Rank = 0 >>>> local_id = 00; (0.250000, 0.250000, 0.250000); is_local = 1 >>>> local_id = 01; (0.750000, 0.250000, 0.250000); is_local = 1 >>>> local_id = 02; (0.250000, 0.750000, 0.250000); is_local = 1 >>>> local_id = 03; (0.750000, 0.750000, 0.250000); is_local = 1 >>>> local_id = 04; (0.250000, 0.250000, 0.750000); is_local = 1 >>>> local_id = 05; (0.750000, 0.250000, 0.750000); is_local = 1 >>>> local_id = 06; (0.250000, 0.750000, 0.750000); is_local = 1 >>>> local_id = 07; (0.750000, 0.750000, 0.750000); is_local = 1 >>>> >>>> >>>> ++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++ >>>> >>>> >$PETSC_DIR/$PETSC_ARCH/bin/mpiexec -np 2 ./ex_test >>>> Natural vector: >>>> >>>> Vec Object: 2 MPI processes >>>> type: mpi >>>> Process [0] >>>> 0. >>>> 1. >>>> 2. >>>> 3. >>>> Process [1] >>>> 4. >>>> 5. >>>> 6. >>>> 7. >>>> >>>> Global vector: >>>> >>>> Vec Object: 2 MPI processes >>>> type: mpi >>>> Process [0] >>>> 0. >>>> 0. >>>> 0. >>>> 0. >>>> Process [1] >>>> 0. >>>> 0. >>>> 0. >>>> 0. >>>> >>>> Information about the mesh: >>>> >>>> Rank = 0 >>>> local_id = 00; (0.250000, 0.750000, 0.250000); is_local = 1 >>>> local_id = 01; (0.750000, 0.750000, 0.250000); is_local = 1 >>>> local_id = 02; (0.250000, 0.750000, 0.750000); is_local = 1 >>>> local_id = 03; (0.750000, 0.750000, 0.750000); is_local = 1 >>>> local_id = 04; (0.250000, 0.250000, 0.250000); is_local = 0 >>>> local_id = 05; (0.750000, 0.250000, 0.250000); is_local = 0 >>>> local_id = 06; (0.250000, 0.250000, 0.750000); is_local = 0 >>>> local_id = 07; (0.750000, 0.250000, 0.750000); is_local = 0 >>>> >>>> Rank = 1 >>>> local_id = 00; (0.250000, 0.250000, 0.250000); is_local = 1 >>>> local_id = 01; (0.750000, 0.250000, 0.250000); is_local = 1 >>>> local_id = 02; (0.250000, 0.250000, 0.750000); is_local = 1 >>>> local_id = 03; (0.750000, 0.250000, 0.750000); is_local = 1 >>>> local_id = 04; (0.250000, 0.750000, 0.250000); is_local = 0 >>>> local_id = 05; (0.750000, 0.750000, 0.250000); is_local = 0 >>>> local_id = 06; (0.250000, 0.750000, 0.750000); is_local = 0 >>>> local_id = 07; (0.750000, 0.750000, 0.750000); is_local = 0 >>>> >>>> >>>> ++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++ >>>> >>>> >>>> -Gautam >>>> >>>> >>>> >>>> >>>> On Jan 9, 2020, at 4:57 PM, 'Bisht, Gautam' via tdycores-dev < >>>> tdycores-dev at googlegroups.com> wrote: >>>> >>>> >>>> >>>> On Jan 9, 2020, at 4:25 PM, Matthew Knepley wrote: >>>> >>>> On Thu, Jan 9, 2020 at 1:35 PM 'Bisht, Gautam' via tdycores-dev < >>>> tdycores-dev at googlegroups.com> wrote: >>>> >>>> >>>> > On Jan 9, 2020, at 2:58 PM, Jed Brown wrote: >>>> > >>>> > "'Bisht, Gautam' via tdycores-dev" >>>> writes: >>>> > >>>> >>> Do you need to rely on the element number, or would coordinates (of >>>> a >>>> >>> centroid?) be sufficient for your purposes? >>>> >> >>>> >> I do need to rely on the element number. In my case, I have a >>>> mapping file that remaps data from one grid onto another grid. Though I?m >>>> currently creating a hexahedron mesh, in the future I would be reading in >>>> an unstructured grid from a file for which I cannot rely on coordinates. >>>> > >>>> > How does the mapping file work and how is it generated? >>>> >>>> In CESM/E3SM, the mapping file is used to map fluxes or states between >>>> grids of two components (e.g. land & atmosphere). The mapping method can be >>>> conservative, nearest neighbor, bilinear, etc. 
While CESM/E3SM uses >>>> ESMF_RegridWeightGen to generate the mapping file, I?m using by own MATLAB >>>> script to create the mapping file. >>>> >>>> I?m surprised that this is not an issue for other codes that are using >>>> DMPlex. E.g In PFLOTRAN, when a user creates a custom unstructured grid, >>>> they can specify material property for each grid cell. So, there should be >>>> a way to create a vectorscatter that will scatter material property read in >>>> the ?application?-order (i.e. order before calling DMPlexDistribute() ) to >>>> ghosted-order (i.e. order after calling DMPlexDistribute()). >>>> >>>> >>>> We did build something specific for this because some people wanted it. >>>> I wish I could purge this from all simulations. Its >>>> definitely destructive, but this is the way the world currently is. >>>> >>>> You want this: >>>> >>>> >>>> https://www.mcs.anl.gov/petsc/petsc-current/docs/manualpages/DMPLEX/DMPlexNaturalToGlobalBegin.html >>>> >>>> >>>> >>>> Perfect. >>>> >>>> Thanks. >>>> -Gautam >>>> >>>> >>>> >>>> Thanks, >>>> >>>> Matt >>>> >>>> >>>> > We can locate points and create interpolation with unstructured grids. >>>> > >>>> > -- >>>> > You received this message because you are subscribed to the Google >>>> Groups "tdycores-dev" group. >>>> > To unsubscribe from this group and stop receiving emails from it, >>>> send an email to tdycores-dev+unsubscribe at googlegroups.com. >>>> > To view this discussion on the web visit >>>> https://protect2.fireeye.com/v1/url?k=b265c01b-eed0fed4-b265ea0e-0cc47adc5e60-1707adbf1790c7e4&q=1&e=0962f8e1-9155-4d9c-abdf-2b6481141cd0&u=https%3A%2F%2Fgroups.google.com%2Fd%2Fmsgid%2Ftdycores-dev%2F8736come4e.fsf%2540jedbrown.org >>>> . >>>> >>>> -- >>>> You received this message because you are subscribed to the Google >>>> Groups "tdycores-dev" group. >>>> To unsubscribe from this group and stop receiving emails from it, send >>>> an email to tdycores-dev+unsubscribe at googlegroups.com. >>>> To view this discussion on the web visit >>>> https://groups.google.com/d/msgid/tdycores-dev/9AB001AF-8857-446A-AE69-E8D6A25CB8FA%40pnnl.gov >>>> >>>> . >>>> >>>> >>>> >>>> -- >>>> What most experimenters take for granted before they begin their >>>> experiments is infinitely more interesting than any results to which their >>>> experiments lead. >>>> -- Norbert Wiener >>>> >>>> https://www.cse.buffalo.edu/~knepley/ >>>> >>>> >>>> -- >>>> You received this message because you are subscribed to the Google >>>> Groups "tdycores-dev" group. >>>> To unsubscribe from this group and stop receiving emails from it, send >>>> an email to tdycores-dev+unsubscribe at googlegroups.com. >>>> To view this discussion on the web visit >>>> https://groups.google.com/d/msgid/tdycores-dev/CAMYG4Gm%3DSY%3DyDiYOdBm1j_KZO5NYhu80ZhbFTV23O%2Bv-zVvFnA%40mail.gmail.com >>>> >>>> . >>>> >>>> >>>> >>>> -- >>>> You received this message because you are subscribed to the Google >>>> Groups "tdycores-dev" group. >>>> To unsubscribe from this group and stop receiving emails from it, send >>>> an email to tdycores-dev+unsubscribe at googlegroups.com. >>>> To view this discussion on the web visit >>>> https://groups.google.com/d/msgid/tdycores-dev/7C23ABBA-2F76-4EAB-9834-9391AD77E18B%40pnnl.gov >>>> >>>> . >>>> >>>> >>>> >>>> -- >>>> You received this message because you are subscribed to the Google >>>> Groups "tdycores-dev" group. >>>> To unsubscribe from this group and stop receiving emails from it, send >>>> an email to tdycores-dev+unsubscribe at googlegroups.com. 
--
What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead.
-- Norbert Wiener

https://www.cse.buffalo.edu/~knepley/

From fdkong.jd at gmail.com Thu Jan 30 14:49:47 2020
From: fdkong.jd at gmail.com (Fande Kong)
Date: Thu, 30 Jan 2020 13:49:47 -0700
Subject: [petsc-users] Fwd: Running moose/scripts/update_and_rebuild_petsc.sh on HPC
In-Reply-To: <1a976f38-4944-425f-af72-f5ce7ce3ac85@googlegroups.com>
References: <277eb13a-0590-4b1a-a089-09a7c35efd83@googlegroups.com> <641b2c64-0e88-47e7-b33a-e2528287095a@googlegroups.com> <2bf174ba-a994-45f8-a661-454458a6ffa3@googlegroups.com> <0a8315b7-185e-44c9-b1d3-d3b8f52939d4@googlegroups.com> <2c9e5abd-bd4f-4b95-b2ea-8aa6a993d5fb@googlegroups.com> <0b4c29ac-2261-404a-84f6-5e8e28e1c51f@googlegroups.com> <095881e4-592d-427a-ad84-6cbe5fb8fe2e@googlegroups.com> <1a976f38-4944-425f-af72-f5ce7ce3ac85@googlegroups.com>
Message-ID:

Hi All,

It looks like a bug to me. PETSc was still trying to detect lgrind even though we set "--with-lgrind=0". The configuration log is attached. Is there any way to disable lgrind detection?
Thanks,

Fande

---------- Forwarded message ---------
From: Tomas Mondragon
Date: Thu, Jan 30, 2020 at 9:54 AM
Subject: Re: Running moose/scripts/update_and_rebuild_petsc.sh on HPC
To: moose-users

Configuration log is attached

--
You received this message because you are subscribed to the Google Groups
"moose-users" group.
To unsubscribe from this group and stop receiving emails from it, send an
email to moose-users+unsubscribe at googlegroups.com.
To view this discussion on the web visit
https://groups.google.com/d/msgid/moose-users/1a976f38-4944-425f-af72-f5ce7ce3ac85%40googlegroups.com
.
-------------- next part --------------
An HTML attachment was scrubbed...
URL:
-------------- next part --------------
================================================================================
================================================================================
Starting configure run at Fri, 24 Jan 2020 17:01:13 -0600
Configure Options: --configModules=PETSc.Configure --optionsModule=config.compilerOptions --download-hypre=downloaded_thirdParty_tarballs/hypre-93baaa8c9.tar.gz --with-debugging=no --with-shared-libraries=1 --download-fblaslapack=downloaded_thirdParty_tarballs/fblaslapack-v3.4.2-p2.tar.gz --download-metis=downloaded_thirdParty_tarballs/metis-v5.1.0-p7.tar.gz --download-ptscotch=downloaded_thirdParty_tarballs/scotch-v6.0.9.tar.gz --download-parmetis=downloaded_thirdParty_tarballs/parmetis-v4.0.3.tar.gz --download-superlu_dist=downloaded_thirdParty_tarballs/superLU-DIST-v6.2.0.tar.gz --download-mumps=downloaded_thirdParty_tarballs/mumps-v5.2.1-p2.tar.gz --download-scalapack=downloaded_thirdParty_tarballs/scalapack-v2.1.0-p1.tar.gz --download-slepc=downloaded_thirdParty_tarballs/slepc-bf89b9d.tar.gz --with-mpi=1 --with-cxx-dialect=C++11 --with-fortran-bindings=0 --with-sowing=0 --with-batch=1 --with-cudac=0 --with-lgrind=0
Working directory: /p/work2/tmondrag/moose/petsc
Machine platform: ('Linux', 'r20i4n17', '4.12.14-95.29-default', '#1 SMP Thu Aug 1 15:34:33 UTC 2019 (47e48a4)', 'x86_64', 'x86_64')
Python version: 2.7.13 (default, Jan 11 2017, 10:56:06) [GCC]
================================================================================
================================================================================
TEST configureExternalPackagesDir from config.framework(/p/work2/tmondrag/moose/petsc/config/BuildSystem/config/framework.py:911)
TESTING: configureExternalPackagesDir from config.framework(/p/work2/tmondrag/moose/petsc/config/BuildSystem/config/framework.py:911)
================================================================================
TEST configureDebuggers from config.utilities.debuggers(/p/work2/tmondrag/moose/petsc/config/BuildSystem/config/utilities/debuggers.py:21)
TESTING: configureDebuggers from config.utilities.debuggers(/p/work2/tmondrag/moose/petsc/config/BuildSystem/config/utilities/debuggers.py:21)
Find a default debugger and determine its arguments
Running Executable WITHOUT threads to time it out
Executing: uname -s
stdout: Linux
Checking for program /app/unsupported/COST/git/2.4.4/gnu//bin/gdb...not found
Checking for program /app/unsupported/COST/tcltk/8.6.4/gnu//bin/gdb...not found
Checking for program /usr/local/krb5/bin/gdb...not found
Checking for program /p/home/apps/hpe/mpt-2.19/bin/gdb...not found
Checking for program /p/home/apps/gnu_compiler/7.2.0/bin/gdb...not found
Checking for program /opt/clmgr/sbin/gdb...not found
Checking for program /opt/clmgr/bin/gdb...not found
Checking for program /opt/sgi/sbin/gdb...not found
Checking for program /opt/sgi/bin/gdb...not found
Checking for program /usr/local/bin/gdb...not found
Checking for program /usr/bin/gdb...found
Defined make macro "GDB" to "/usr/bin/gdb"
Defined "USE_DEBUGGER" to ""gdb""
Running Executable WITHOUT threads to time it out
Executing: uname -s
stdout: Linux
Defined make macro "DSYMUTIL" to "true"
child config.utilities.debuggers 0.009440
================================================================================
TEST configureDirectories from PETSc.options.petscdir(/p/work2/tmondrag/moose/petsc/config/PETSc/options/petscdir.py:23)
TESTING: configureDirectories from PETSc.options.petscdir(/p/work2/tmondrag/moose/petsc/config/PETSc/options/petscdir.py:23)
Checks PETSC_DIR and sets if not set
Version Information:
#define PETSC_VERSION_RELEASE 0
#define PETSC_VERSION_MAJOR 3
#define PETSC_VERSION_MINOR 12
#define PETSC_VERSION_SUBMINOR 3
#define PETSC_VERSION_PATCH 0
#define PETSC_VERSION_DATE "unknown"
#define PETSC_VERSION_GIT "unknown"
#define PETSC_VERSION_DATE_GIT "unknown"
#define PETSC_VERSION_EQ(MAJOR,MINOR,SUBMINOR) \
#define PETSC_VERSION_ PETSC_VERSION_EQ
#define PETSC_VERSION_LT(MAJOR,MINOR,SUBMINOR) \
#define PETSC_VERSION_LE(MAJOR,MINOR,SUBMINOR) \
#define PETSC_VERSION_GT(MAJOR,MINOR,SUBMINOR) \
#define PETSC_VERSION_GE(MAJOR,MINOR,SUBMINOR) \
child PETSc.options.petscdir 0.004710
================================================================================
TEST getDatafilespath from PETSc.options.dataFilesPath(/p/work2/tmondrag/moose/petsc/config/PETSc/options/dataFilesPath.py:29)
TESTING: getDatafilespath from PETSc.options.dataFilesPath(/p/work2/tmondrag/moose/petsc/config/PETSc/options/dataFilesPath.py:29)
Checks what DATAFILESPATH should be
child PETSc.options.dataFilesPath 0.000588
================================================================================
TEST configureGit from config.sourceControl(/p/work2/tmondrag/moose/petsc/config/BuildSystem/config/sourceControl.py:24)
TESTING: configureGit from config.sourceControl(/p/work2/tmondrag/moose/petsc/config/BuildSystem/config/sourceControl.py:24)
Find the Git executable
Checking for program /app/unsupported/COST/git/2.4.4/gnu//bin/git...found
Defined make macro "GIT" to "git"
Running Executable WITHOUT threads to time it out
Executing: git --version
stdout: git version 2.4.4
child config.sourceControl 0.004862
================================================================================
TEST configureInstallationMethod from PETSc.options.petscclone(/p/work2/tmondrag/moose/petsc/config/PETSc/options/petscclone.py:20)
TESTING: configureInstallationMethod from PETSc.options.petscclone(/p/work2/tmondrag/moose/petsc/config/PETSc/options/petscclone.py:20)
lib/petsc/bin/maint exists.
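(Aside on the Version Information block above: those petscversion.h macros are what downstream code such as MOOSE typically uses to guard version-dependent API calls. A minimal, hypothetical guard, with placeholder branches, looks like this:)

/* version_guard.c: illustrative only; the printed branches stand in
 * for real version-dependent code paths. */
#include <petscsys.h>   /* pulls in petscversion.h */

int main(int argc, char **argv)
{
  PetscErrorCode ierr;

  ierr = PetscInitialize(&argc, &argv, NULL, NULL); if (ierr) return ierr;
#if PETSC_VERSION_GE(3,12,0)
  ierr = PetscPrintf(PETSC_COMM_WORLD, "using the 3.12+ code path\n");CHKERRQ(ierr);
#else
  ierr = PetscPrintf(PETSC_COMM_WORLD, "using the pre-3.12 fallback\n");CHKERRQ(ierr);
#endif
  return PetscFinalize();
}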
This appears to be a repository clone .git directory exists Running Executable WITHOUT threads to time it out Executing: ['git', 'describe', '--match=v*'] stdout: v3.12.3-632-gaf591a4 Running Executable WITHOUT threads to time it out Executing: ['git', 'log', '-1', '--pretty=format:%H'] stdout: af591a4651ec8b1ecff5c03491370c0e4ac5c3a7 Running Executable WITHOUT threads to time it out Executing: ['git', 'log', '-1', '--pretty=format:%ci'] stdout: 2020-01-24 13:29:59 -0600 Running Executable WITHOUT threads to time it out Executing: ['git', 'branch'] stdout: * (HEAD detached from 5ea3abf) master Defined "VERSION_GIT" to ""v3.12.3-632-gaf591a4"" Defined "VERSION_DATE_GIT" to ""2020-01-24 13:29:59 -0600"" Defined "VERSION_BRANCH_GIT" to ""(HEAD detached from 5ea3abf)"" child PETSc.options.petscclone 0.053460 ================================================================================ TEST setNativeArchitecture from PETSc.options.arch(/p/work2/tmondrag/moose/petsc/config/PETSc/options/arch.py:31) TESTING: setNativeArchitecture from PETSc.options.arch(/p/work2/tmondrag/moose/petsc/config/PETSc/options/arch.py:31) ================================================================================ TEST configureArchitecture from PETSc.options.arch(/p/work2/tmondrag/moose/petsc/config/PETSc/options/arch.py:43) TESTING: configureArchitecture from PETSc.options.arch(/p/work2/tmondrag/moose/petsc/config/PETSc/options/arch.py:43) Checks PETSC_ARCH and sets if not set No previous hashfile found Setting hashfile: arch-moose/lib/petsc/conf/configure-hash Deleting configure hash file: arch-moose/lib/petsc/conf/configure-hash Unable to delete configure hash file: arch-moose/lib/petsc/conf/configure-hash child PETSc.options.arch 0.077474 ================================================================================ TEST setInstallDir from PETSc.options.installDir(/p/work2/tmondrag/moose/petsc/config/PETSc/options/installDir.py:35) TESTING: setInstallDir from PETSc.options.installDir(/p/work2/tmondrag/moose/petsc/config/PETSc/options/installDir.py:35) setup installDir to either prefix or if that is not set to PETSC_DIR/PETSC_ARCH Defined make macro "PREFIXDIR" to "/p/home/tmondrag/WORK/moose/petsc/arch-moose" ================================================================================ TEST saveReconfigure from PETSc.options.installDir(/p/work2/tmondrag/moose/petsc/config/PETSc/options/installDir.py:80) TESTING: saveReconfigure from PETSc.options.installDir(/p/work2/tmondrag/moose/petsc/config/PETSc/options/installDir.py:80) ================================================================================ TEST cleanConfDir from PETSc.options.installDir(/p/work2/tmondrag/moose/petsc/config/PETSc/options/installDir.py:73) TESTING: cleanConfDir from PETSc.options.installDir(/p/work2/tmondrag/moose/petsc/config/PETSc/options/installDir.py:73) ================================================================================ TEST configureInstallDir from PETSc.options.installDir(/p/work2/tmondrag/moose/petsc/config/PETSc/options/installDir.py:57) TESTING: configureInstallDir from PETSc.options.installDir(/p/work2/tmondrag/moose/petsc/config/PETSc/options/installDir.py:57) Makes installDir subdirectories if it does not exist for both prefix install location and PETSc work install location Changed persistence directory to /p/home/tmondrag/WORK/moose/petsc/arch-moose/lib/petsc/conf ================================================================================ TEST restoreReconfigure from 
PETSc.options.installDir(/p/work2/tmondrag/moose/petsc/config/PETSc/options/installDir.py:93) TESTING: restoreReconfigure from PETSc.options.installDir(/p/work2/tmondrag/moose/petsc/config/PETSc/options/installDir.py:93) child PETSc.options.installDir 0.004305 ================================================================================ TEST setExternalPackagesDir from PETSc.options.externalpackagesdir(/p/work2/tmondrag/moose/petsc/config/PETSc/options/externalpackagesdir.py:15) TESTING: setExternalPackagesDir from PETSc.options.externalpackagesdir(/p/work2/tmondrag/moose/petsc/config/PETSc/options/externalpackagesdir.py:15) ================================================================================ TEST cleanExternalpackagesDir from PETSc.options.externalpackagesdir(/p/work2/tmondrag/moose/petsc/config/PETSc/options/externalpackagesdir.py:22) TESTING: cleanExternalpackagesDir from PETSc.options.externalpackagesdir(/p/work2/tmondrag/moose/petsc/config/PETSc/options/externalpackagesdir.py:22) child PETSc.options.externalpackagesdir 0.000327 ================================================================================ TEST configureCLanguage from PETSc.options.languages(/p/work2/tmondrag/moose/petsc/config/PETSc/options/languages.py:27) TESTING: configureCLanguage from PETSc.options.languages(/p/work2/tmondrag/moose/petsc/config/PETSc/options/languages.py:27) Choose whether to compile the PETSc library using a C or C++ compiler C language is C Defined "CLANGUAGE_C" to "1" Defined make macro "CLANGUAGE" to "C" child PETSc.options.languages 0.001302 ================================================================================ TEST printEnvVariables from config.setCompilers(/p/work2/tmondrag/moose/petsc/config/BuildSystem/config/setCompilers.py:1718) TESTING: printEnvVariables from config.setCompilers(/p/work2/tmondrag/moose/petsc/config/BuildSystem/config/setCompilers.py:1718) **** printenv **** CPPFLAGS=-I/app/unsupported/COST/tcltk/8.6.4/gnu//include I_MPI_CC=gcc ARCHIVE_HOST=gold.erdc.hpc.mil LESS=-M -I -R PBS_NODENUM=0 ENVIRONMENT=BATCH PBS_O_LANG=en_US.UTF-8 XKEYSYMDB=/usr/X11R6/lib/X11/XKeysymDB BASH_FUNC_module%%=() { eval `/usr/share/Modules/$MODULE_VERSION/bin/modulecmd bash $*` } BC_ACCELERATOR_NODE_CORES=28 KRB5HOME=/usr/local/krb5 SHELL=/bin/bash MPI_LAUNCH_TIMEOUT=300 BC_STANDARD_NODE_CORES=36 XDG_DATA_DIRS=/usr/share BC_NODE_TYPE=STANDARD HISTSIZE=1000 CSI_HOME=/app LESS_ADVANCED_PREPROCESSOR=no MANPATH=/app/unsupported/COST/tcltk/8.6.4/gnu//man:/usr/local/krb5/share/man:/p/home/apps/hpe/mpt-2.19/man:/p/home/apps/gnu_compiler/7.2.0/share/man:/usr/local/man:/usr/share/man:/opt/c3/man:/opt/pbs/default/share/man:/opt/clmgr/man:/opt/sgi/share/man:/app/unsupported/local/man:/usr/share/catman:/usr/catman:/usr/man BC_BIGMEM_NODE_CORES=32 JAVA_HOME=/usr/lib64/jvm/java PROFILEREAD=true ARCHIVE_HOME=/erdc1/tmondrag FPATH=/p/home/apps/hpe/mpt-2.19/include MPIF90_F90=gfortran SDK_HOME=/usr/lib64/jvm/java PBS_NODEFILE=/var/spool/PBS/aux/3741873.jim10 CXX=g++ HOSTNAME=r20i4n17 PBS_QUEUE=itl PETSC_ARCH=arch-moose PBS_JOBID=3741873.jim10 MAIL=/var/spool/mail/tmondrag MACHTYPE=x86_64-suse-linux BC_PHI_NODE_CORES=0 JAVA_ROOT=/usr/lib64/jvm/java MINICOM=-c on CSHEDIT=emacs LESSOPEN=lessopen.sh %s PBS_O_LOGNAME=tmondrag CPATH=/p/home/apps/hpe/mpt-2.19/include:/p/home/apps/gnu_compiler/7.2.0/include USER=tmondrag INPUTRC=/etc/inputrc I_MPI_F77=gfortran PET_HOME=/app/unsupported/PETtools/CE CENTER=/gpfs/cwfs/tmondrag LANGUAGE=en_US.UTF-8 SHLVL=2 ALSA_CONFIG_PATH=/etc/alsa-pulse.conf 
CPU=x86_64 PETSC_DIR=/p/home/tmondrag/WORK/moose/scripts/../petsc PBS_O_PATH=/app/unsupported/COST/git/2.4.4/gnu//bin:/app/unsupported/COST/tcltk/8.6.4/gnu//bin:/usr/local/krb5/bin:/p/home/apps/hpe/mpt-2.19/bin:/p/home/apps/gnu_compiler/7.2.0/bin:/opt/clmgr/sbin:/opt/clmgr/bin:/opt/sgi/sbin:/opt/sgi/bin:/usr/local/krb5/bin:/usr/local/krb5/libexec:/usr/local/bin:/usr/bin:/bin:/usr/bin/X11:/usr/games:/opt/c3/bin:/opt/pbs/default/bin:/sbin:/bin:/pbs/SLB:/app/unsupported/local/bin:/app/mpiutil MODULESHOME=/usr/share/Modules/3.2.10 PBS_JOBCOOKIE=0000000076EA4204000000003DDB8E59 JAVA_BINDIR=/usr/lib64/jvm/java/bin INFOPATH=/p/home/apps/gnu_compiler/7.2.0/share/info DAAC_HOME=/app/DAAC PBS_O_HOST=jim07.ib0.ice-x.erdc.hpc.mil PROJECTS_HOME=/app/unsupported PKG_CONFIG_PATH=/app/unsupported/COST/tcltk/8.6.4/gnu//lib/pkgconfig GPG_TTY=not a tty XNLSPATH=/usr/X11R6/lib/X11/nls PBS_O_MAIL=/var/mail/tmondrag TMPDIR=/p/work2/tmondrag MODULEPATH=/app/unsupported/COST/modules:/p/home/apps/modulefiles/devel:/p/home/apps/modulefiles/apps:/p/home/apps/modulefiles/unsupported:/usr/share/modules:/usr/share/Modules/$MODULE_VERSION/modulefiles:/usr/share/modules/modulefiles:/p/home/tmondrag/my_modules BC_HOST=jim PBS_O_SHELL=/bin/bash XDG_CONFIG_DIRS=/etc/xdg I_MPI_CXX=g++ OMP_NUM_THREADS=1 PBS_JOBDIR=/p/home/tmondrag CMAKE_LIBRARY_PATH=/app/unsupported/COST/git/2.4.4/gnu//lib:/app/unsupported/COST/tcltk/8.6.4/gnu//lib COST_MODULES_DIR=/app/unsupported/COST/modules BC_NODE_ALLOC=1 COLORTERM=1 QEMU_AUDIO_DRV=pa _LMFILES_=/p/home/apps/modulefiles/devel/compiler/gcc/7.2.0:/p/home/apps/modulefiles/devel/mpi/sgimpt/2.19:/p/home/apps/modulefiles/unsupported/costinit:/app/unsupported/COST/modules/tcltk/gnu/8.6.4:/app/unsupported/COST/modules/git/gnu/2.4.4 PAGER=less ACCOUNT=ERDCV9100CDAG MODULE_VERSION=3.2.10 I_MPI_F90=gfortran HOME=/p/home/tmondrag LD_LIBRARY_PATH=/app/unsupported/COST/dependencies/openssl/1.0.2n/gnu/lib:/app/unsupported/COST/dependencies/curl/7.43.0/gnu/lib:/app/unsupported/COST/git/2.4.4/gnu//lib:/app/unsupported/COST/tcltk/8.6.4/gnu//lib:/p/home/apps/hpe/mpt-2.19/lib:/p/home/apps/gnu_compiler/7.2.0/lib64 LANG=en_US.UTF-8 LIBRARY_PATH=/p/home/apps/hpe/mpt-2.19/lib KRB5_HOME=/usr/local/krb5 NCPUS=1 G_BROKEN_FILENAMES=1 SAMPLES_HOME=/usr/local/cac/Example_Codes F90=gfortran PBS_O_WORKDIR=/p/home/tmondrag/WORK/moose _=./configure OSTYPE=linux PBS_TASKNUM=1 PBS_ACCOUNT=ERDCV9100CDAG PBS_O_SYSTEM=Linux BC_MPI_TASKS_ALLOC=1 BC_CORES_PER_NODE=36 CC=gcc NNTPSERVER=news G_FILENAME_ENCODING=@locale,UTF-8,ISO-8859-15,CP1252 HOST=r20i4n17 MPT_VERSION=2.19 GIT_HOME=/app/unsupported/COST/git/2.4.4/gnu/ F77=gfortran FROM_HEADER= LESSCLOSE=lessclose.sh %s %s PBS_MOMPORT=15003 C3_RSH=ssh -oConnectTimeout=10 -oForwardX11=no MPICXX_CXX=g++ JRE_HOME=/usr/lib64/jvm/java/jre MORE=-sl WORKDIR2=/p/work1/tmondrag PBS_O_HOME=/p/home/tmondrag HOSTTYPE=x86_64 LOGNAME=tmondrag BC_MEM_PER_NODE=121856 PATH=/app/unsupported/COST/git/2.4.4/gnu//bin:/app/unsupported/COST/tcltk/8.6.4/gnu//bin:/usr/local/krb5/bin:/p/home/apps/hpe/mpt-2.19/bin:/p/home/apps/gnu_compiler/7.2.0/bin:/opt/clmgr/sbin:/opt/clmgr/bin:/opt/sgi/sbin:/opt/sgi/bin:/usr/local/bin:/usr/bin:/bin:/usr/games:/opt/c3/bin:/opt/pbs/default/bin:/sbin:/bin:/pbs/SLB:/app/unsupported/local/bin:/app/mpiutil PETTT_HOME=/app/unsupported/PETtools/CE PBS_ENVIRONMENT=PBS_BATCH CMAKE_INCLUDE_PATH=/app/unsupported/COST/tcltk/8.6.4/gnu//include SDL_AUDIODRIVER=pulse LDFLAGS=-L/app/unsupported/COST/git/2.4.4/gnu//lib -L/app/unsupported/COST/tcltk/8.6.4/gnu//lib MPICC_CC=gcc 
COST_HOME=/app/unsupported/COST
MODULE_VERSION_STACK=3.2.10
I_MPI_FC=gfortran
MPI_ROOT=/p/home/apps/hpe/mpt-2.19
USE_PCM_DB=2
LESSKEY=/etc/lesskey.bin
TZ=US/Central
JDK_HOME=/usr/lib64/jvm/java
WORKDIR=/p/work2/tmondrag
WINDOWMANAGER=
AUDIODRIVER=pulseaudio
PYTHONSTARTUP=/etc/pythonstart
OLDPWD=/p/home/tmondrag/WORK/moose
LOADEDMODULES=compiler/gcc/7.2.0:mpi/sgimpt/2.19:costinit:tcltk/gnu/8.6.4:git/gnu/2.4.4
PBS_O_QUEUE=itl
PWD=/p/home/tmondrag/WORK/moose/petsc
PBS_JOBNAME=PETSc_rebuild
================================================================================
TEST resetEnvCompilers from config.setCompilers(/p/work2/tmondrag/moose/petsc/config/BuildSystem/config/setCompilers.py:1725)
TESTING: resetEnvCompilers from config.setCompilers(/p/work2/tmondrag/moose/petsc/config/BuildSystem/config/setCompilers.py:1725)
===============================================================================
***** WARNING: CC (set to gcc) found in environment variables - ignoring
use ./configure CC=$CC if you really want to use that value ******
===============================================================================
===============================================================================
***** WARNING: CXX (set to g++) found in environment variables - ignoring
use ./configure CXX=$CXX if you really want to use that value ******
===============================================================================
===============================================================================
***** WARNING: F77 (set to gfortran) found in environment variables - ignoring
use ./configure F77=$F77 if you really want to use that value ******
===============================================================================
===============================================================================
***** WARNING: F90 (set to gfortran) found in environment variables - ignoring
use ./configure F90=$F90 if you really want to use that value ******
===============================================================================
===============================================================================
***** WARNING: CPPFLAGS (set to -I/app/unsupported/COST/tcltk/8.6.4/gnu//include) found in environment variables - ignoring
use ./configure CPPFLAGS=$CPPFLAGS if you really want to use that value ******
===============================================================================
===============================================================================
***** WARNING: LDFLAGS (set to -L/app/unsupported/COST/git/2.4.4/gnu//lib -L/app/unsupported/COST/tcltk/8.6.4/gnu//lib) found in environment variables - ignoring
use ./configure LDFLAGS=$LDFLAGS if you really want to use that value ******
===============================================================================
================================================================================
TEST checkEnvCompilers from config.setCompilers(/p/work2/tmondrag/moose/petsc/config/BuildSystem/config/setCompilers.py:1755)
TESTING: checkEnvCompilers from config.setCompilers(/p/work2/tmondrag/moose/petsc/config/BuildSystem/config/setCompilers.py:1755)
================================================================================
TEST checkMPICompilerOverride from config.setCompilers(/p/work2/tmondrag/moose/petsc/config/BuildSystem/config/setCompilers.py:1688)
TESTING: checkMPICompilerOverride from config.setCompilers(/p/work2/tmondrag/moose/petsc/config/BuildSystem/config/setCompilers.py:1688)
Check if --with-mpi-dir is used along with CC CXX or FC
compiler options. This usually prevents mpi compilers from being used - so issue a warning ================================================================================ TEST requireMpiLdPath from config.setCompilers(/p/work2/tmondrag/moose/petsc/config/BuildSystem/config/setCompilers.py:1709) TESTING: requireMpiLdPath from config.setCompilers(/p/work2/tmondrag/moose/petsc/config/BuildSystem/config/setCompilers.py:1709) OpenMPI wrappers require LD_LIBRARY_PATH set ================================================================================ TEST checkInitialFlags from config.setCompilers(/p/work2/tmondrag/moose/petsc/config/BuildSystem/config/setCompilers.py:458) TESTING: checkInitialFlags from config.setCompilers(/p/work2/tmondrag/moose/petsc/config/BuildSystem/config/setCompilers.py:458) Initialize the compiler and linker flags Initialized CFLAGS to Initialized CFLAGS to Initialized LDFLAGS to Initialized CUDAFLAGS to Initialized CUDAFLAGS to Initialized LDFLAGS to Initialized CXXFLAGS to Initialized CXX_CXXFLAGS to Initialized LDFLAGS to Initialized FFLAGS to Initialized FFLAGS to Initialized LDFLAGS to Initialized CPPFLAGS to Initialized FPPFLAGS to Initialized CUDAPPFLAGS to -Wno-deprecated-gpu-targets Initialized CXXPPFLAGS to Initialized CC_LINKER_FLAGS to [] Initialized CXX_LINKER_FLAGS to [] Initialized FC_LINKER_FLAGS to [] Initialized CUDAC_LINKER_FLAGS to [] Initialized sharedLibraryFlags to [] Initialized dynamicLibraryFlags to [] ================================================================================ TEST checkCCompiler from config.setCompilers(/p/work2/tmondrag/moose/petsc/config/BuildSystem/config/setCompilers.py:628) TESTING: checkCCompiler from config.setCompilers(/p/work2/tmondrag/moose/petsc/config/BuildSystem/config/setCompilers.py:628) Locate a functional C compiler Checking for program /app/unsupported/COST/git/2.4.4/gnu//bin/mpicc...not found Checking for program /app/unsupported/COST/tcltk/8.6.4/gnu//bin/mpicc...not found Checking for program /usr/local/krb5/bin/mpicc...not found Checking for program /p/home/apps/hpe/mpt-2.19/bin/mpicc...found Defined make macro "CC" to "mpicc" All intermediate test results are stored in /p/work2/tmondrag/petsc-N5i8ny All intermediate test results are stored in /p/work2/tmondrag/petsc-N5i8ny/config.setCompilers Running Executable WITHOUT threads to time it out Executing: mpicc -c -o /p/work2/tmondrag/petsc-N5i8ny/config.setCompilers/conftest.o -I/p/work2/tmondrag/petsc-N5i8ny/config.setCompilers /p/work2/tmondrag/petsc-N5i8ny/config.setCompilers/conftest.c Successful compile: Source: #include "confdefs.h" #include "conffix.h" int main() { ; return 0; } Running Executable WITHOUT threads to time it out Executing: mpicc -c -o /p/work2/tmondrag/petsc-N5i8ny/config.setCompilers/conftest.o -I/p/work2/tmondrag/petsc-N5i8ny/config.setCompilers /p/work2/tmondrag/petsc-N5i8ny/config.setCompilers/conftest.c Successful compile: Source: #include "confdefs.h" #include "conffix.h" int main() { ; return 0; } Running Executable WITHOUT threads to time it out Executing: mpicc -o /p/work2/tmondrag/petsc-N5i8ny/config.setCompilers/conftest /p/work2/tmondrag/petsc-N5i8ny/config.setCompilers/conftest.o Running Executable WITHOUT threads to time it out Executing: mpicc -c -o /p/work2/tmondrag/petsc-N5i8ny/config.setCompilers/conftest.o -I/p/work2/tmondrag/petsc-N5i8ny/config.setCompilers /p/work2/tmondrag/petsc-N5i8ny/config.setCompilers/conftest.c Successful compile: Source: #include "confdefs.h" #include "conffix.h" int main() { ; return 
0; } Running Executable WITHOUT threads to time it out Executing: mpicc -o /p/work2/tmondrag/petsc-N5i8ny/config.setCompilers/conftest /p/work2/tmondrag/petsc-N5i8ny/config.setCompilers/conftest.o -lpetsc-ufod4vtr9mqHvKIQiVAm Possible ERROR while running linker: exit code 1 stderr: /usr/bin/ld: cannot find -lpetsc-ufod4vtr9mqHvKIQiVAm collect2: error: ld returned 1 exit status Running Executable WITHOUT threads to time it out Executing: mpicc --version stdout: gcc (GCC) 7.2.0 Copyright (C) 2017 Free Software Foundation, Inc. This is free software; see the source for copying conditions. There is NO warranty; not even for MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. Since MPI c compiler starts with mpi, force searches for other compilers to only look for MPI compilers ================================================================================ TEST checkCPreprocessor from config.setCompilers(/p/work2/tmondrag/moose/petsc/config/BuildSystem/config/setCompilers.py:667) TESTING: checkCPreprocessor from config.setCompilers(/p/work2/tmondrag/moose/petsc/config/BuildSystem/config/setCompilers.py:667) Locate a functional C preprocessor Checking for program /app/unsupported/COST/git/2.4.4/gnu//bin/mpicc...not found Checking for program /app/unsupported/COST/tcltk/8.6.4/gnu//bin/mpicc...not found Checking for program /usr/local/krb5/bin/mpicc...not found Checking for program /p/home/apps/hpe/mpt-2.19/bin/mpicc...found Defined make macro "CPP" to "mpicc -E" Preprocessing source: #include "confdefs.h" #include "conffix.h" #include Running Executable WITHOUT threads to time it out Executing: mpicc -E -I/p/work2/tmondrag/petsc-N5i8ny/config.setCompilers /p/work2/tmondrag/petsc-N5i8ny/config.setCompilers/conftest.c Preprocess stderr before filtering:: Preprocess stderr after filtering:: ================================================================================ TEST checkCUDACompiler from config.setCompilers(/p/work2/tmondrag/moose/petsc/config/BuildSystem/config/setCompilers.py:703) TESTING: checkCUDACompiler from config.setCompilers(/p/work2/tmondrag/moose/petsc/config/BuildSystem/config/setCompilers.py:703) Locate a functional CUDA compiler ================================================================================ TEST checkCUDAPreprocessor from config.setCompilers(/p/work2/tmondrag/moose/petsc/config/BuildSystem/config/setCompilers.py:743) TESTING: checkCUDAPreprocessor from config.setCompilers(/p/work2/tmondrag/moose/petsc/config/BuildSystem/config/setCompilers.py:743) Locate a functional CUDA preprocessor ================================================================================ TEST checkCxxCompiler from config.setCompilers(/p/work2/tmondrag/moose/petsc/config/BuildSystem/config/setCompilers.py:842) TESTING: checkCxxCompiler from config.setCompilers(/p/work2/tmondrag/moose/petsc/config/BuildSystem/config/setCompilers.py:842) Locate a functional Cxx compiler Checking for program /app/unsupported/COST/git/2.4.4/gnu//bin/mpicxx...not found Checking for program /app/unsupported/COST/tcltk/8.6.4/gnu//bin/mpicxx...not found Checking for program /usr/local/krb5/bin/mpicxx...not found Checking for program /p/home/apps/hpe/mpt-2.19/bin/mpicxx...found Defined make macro "CXX" to "mpicxx" Running Executable WITHOUT threads to time it out Executing: mpicxx -c -o /p/work2/tmondrag/petsc-N5i8ny/config.setCompilers/conftest.o -I/p/work2/tmondrag/petsc-N5i8ny/config.setCompilers /p/work2/tmondrag/petsc-N5i8ny/config.setCompilers/conftest.cc Successful compile: Source: #include 
"confdefs.h" #include "conffix.h" int main() { ; return 0; } Running Executable WITHOUT threads to time it out Executing: mpicxx -c -o /p/work2/tmondrag/petsc-N5i8ny/config.setCompilers/conftest.o -I/p/work2/tmondrag/petsc-N5i8ny/config.setCompilers /p/work2/tmondrag/petsc-N5i8ny/config.setCompilers/conftest.cc Successful compile: Source: #include "confdefs.h" #include "conffix.h" int main() { ; return 0; } Running Executable WITHOUT threads to time it out Executing: mpicxx -o /p/work2/tmondrag/petsc-N5i8ny/config.setCompilers/conftest /p/work2/tmondrag/petsc-N5i8ny/config.setCompilers/conftest.o Running Executable WITHOUT threads to time it out Executing: mpicxx -c -o /p/work2/tmondrag/petsc-N5i8ny/config.setCompilers/conftest.o -I/p/work2/tmondrag/petsc-N5i8ny/config.setCompilers /p/work2/tmondrag/petsc-N5i8ny/config.setCompilers/conftest.cc Successful compile: Source: #include "confdefs.h" #include "conffix.h" int main() { ; return 0; } Running Executable WITHOUT threads to time it out Executing: mpicxx -o /p/work2/tmondrag/petsc-N5i8ny/config.setCompilers/conftest /p/work2/tmondrag/petsc-N5i8ny/config.setCompilers/conftest.o -lpetsc-ufod4vtr9mqHvKIQiVAm Possible ERROR while running linker: exit code 1 stderr: /usr/bin/ld: cannot find -lpetsc-ufod4vtr9mqHvKIQiVAm collect2: error: ld returned 1 exit status Running Executable WITHOUT threads to time it out Executing: mpicxx --version stdout: g++ (GCC) 7.2.0 Copyright (C) 2017 Free Software Foundation, Inc. This is free software; see the source for copying conditions. There is NO warranty; not even for MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. ================================================================================ TEST checkCxxPreprocessor from config.setCompilers(/p/work2/tmondrag/moose/petsc/config/BuildSystem/config/setCompilers.py:883) TESTING: checkCxxPreprocessor from config.setCompilers(/p/work2/tmondrag/moose/petsc/config/BuildSystem/config/setCompilers.py:883) Locate a functional Cxx preprocessor Checking for program /app/unsupported/COST/git/2.4.4/gnu//bin/mpicxx...not found Checking for program /app/unsupported/COST/tcltk/8.6.4/gnu//bin/mpicxx...not found Checking for program /usr/local/krb5/bin/mpicxx...not found Checking for program /p/home/apps/hpe/mpt-2.19/bin/mpicxx...found Defined make macro "CXXPP" to "mpicxx -E" Preprocessing source: #include "confdefs.h" #include "conffix.h" #include Running Executable WITHOUT threads to time it out Executing: mpicxx -E -I/p/work2/tmondrag/petsc-N5i8ny/config.setCompilers /p/work2/tmondrag/petsc-N5i8ny/config.setCompilers/conftest.cc Preprocess stderr before filtering:: Preprocess stderr after filtering:: ================================================================================ TEST checkFortranCompiler from config.setCompilers(/p/work2/tmondrag/moose/petsc/config/BuildSystem/config/setCompilers.py:984) TESTING: checkFortranCompiler from config.setCompilers(/p/work2/tmondrag/moose/petsc/config/BuildSystem/config/setCompilers.py:984) Locate a functional Fortran compiler Checking for program /app/unsupported/COST/git/2.4.4/gnu//bin/mpif90...not found Checking for program /app/unsupported/COST/tcltk/8.6.4/gnu//bin/mpif90...not found Checking for program /usr/local/krb5/bin/mpif90...not found Checking for program /p/home/apps/hpe/mpt-2.19/bin/mpif90...found Defined make macro "FC" to "mpif90" Running Executable WITHOUT threads to time it out Executing: mpif90 -c -o /p/work2/tmondrag/petsc-N5i8ny/config.setCompilers/conftest.o 
-I/p/work2/tmondrag/petsc-N5i8ny/config.setCompilers /p/work2/tmondrag/petsc-N5i8ny/config.setCompilers/conftest.F90 Successful compile: Source: program main end Running Executable WITHOUT threads to time it out Executing: mpif90 -c -o /p/work2/tmondrag/petsc-N5i8ny/config.setCompilers/conftest.o -I/p/work2/tmondrag/petsc-N5i8ny/config.setCompilers /p/work2/tmondrag/petsc-N5i8ny/config.setCompilers/conftest.F90 Successful compile: Source: program main end Running Executable WITHOUT threads to time it out Executing: mpif90 -o /p/work2/tmondrag/petsc-N5i8ny/config.setCompilers/conftest /p/work2/tmondrag/petsc-N5i8ny/config.setCompilers/conftest.o Running Executable WITHOUT threads to time it out Executing: mpif90 -c -o /p/work2/tmondrag/petsc-N5i8ny/config.setCompilers/conftest.o -I/p/work2/tmondrag/petsc-N5i8ny/config.setCompilers /p/work2/tmondrag/petsc-N5i8ny/config.setCompilers/conftest.F90 Successful compile: Source: program main end Running Executable WITHOUT threads to time it out Executing: mpif90 -o /p/work2/tmondrag/petsc-N5i8ny/config.setCompilers/conftest /p/work2/tmondrag/petsc-N5i8ny/config.setCompilers/conftest.o -lpetsc-ufod4vtr9mqHvKIQiVAm Possible ERROR while running linker: exit code 1 stderr: /usr/bin/ld: cannot find -lpetsc-ufod4vtr9mqHvKIQiVAm collect2: error: ld returned 1 exit status Running Executable WITHOUT threads to time it out Executing: mpif90 --version stdout: GNU Fortran (GCC) 7.2.0 Copyright (C) 2017 Free Software Foundation, Inc. This is free software; see the source for copying conditions. There is NO warranty; not even for MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. ================================================================================ TEST checkFortranPreprocessor from config.setCompilers(/p/work2/tmondrag/moose/petsc/config/BuildSystem/config/setCompilers.py:1021) TESTING: checkFortranPreprocessor from config.setCompilers(/p/work2/tmondrag/moose/petsc/config/BuildSystem/config/setCompilers.py:1021) Locate a functional Fortran preprocessor Checking for program /app/unsupported/COST/git/2.4.4/gnu//bin/mpif90...not found Checking for program /app/unsupported/COST/tcltk/8.6.4/gnu//bin/mpif90...not found Checking for program /usr/local/krb5/bin/mpif90...not found Checking for program /p/home/apps/hpe/mpt-2.19/bin/mpif90...found Defined make macro "FPP" to "mpif90 -E" Deleting "FPP" Checking for program /app/unsupported/COST/git/2.4.4/gnu//bin/mpif90...not found Checking for program /app/unsupported/COST/tcltk/8.6.4/gnu//bin/mpif90...not found Checking for program /usr/local/krb5/bin/mpif90...not found Checking for program /p/home/apps/hpe/mpt-2.19/bin/mpif90...found Defined make macro "FPP" to "mpif90 --use cpp32" Deleting "FPP" ================================================================================ TEST checkFortranComments from config.setCompilers(/p/work2/tmondrag/moose/petsc/config/BuildSystem/config/setCompilers.py:1043) TESTING: checkFortranComments from config.setCompilers(/p/work2/tmondrag/moose/petsc/config/BuildSystem/config/setCompilers.py:1043) Make sure fortran comment "!" works Running Executable WITHOUT threads to time it out Executing: mpif90 -c -o /p/work2/tmondrag/petsc-N5i8ny/config.setCompilers/conftest.o -I/p/work2/tmondrag/petsc-N5i8ny/config.setCompilers /p/work2/tmondrag/petsc-N5i8ny/config.setCompilers/conftest.F90 Successful compile: Source: program main ! comment end Fortran comments can use ! 
in column 1 ================================================================================ TEST checkLargeFileIO from config.setCompilers(/p/work2/tmondrag/moose/petsc/config/BuildSystem/config/setCompilers.py:1174) TESTING: checkLargeFileIO from config.setCompilers(/p/work2/tmondrag/moose/petsc/config/BuildSystem/config/setCompilers.py:1174) ================================================================================ TEST checkArchiver from config.setCompilers(/p/work2/tmondrag/moose/petsc/config/BuildSystem/config/setCompilers.py:1273) TESTING: checkArchiver from config.setCompilers(/p/work2/tmondrag/moose/petsc/config/BuildSystem/config/setCompilers.py:1273) Check that the archiver exists and can make a library usable by the compiler Running Executable WITHOUT threads to time it out Executing: ar -V stdout: GNU ar (GNU Binutils; SUSE Linux Enterprise 12) 2.32.0.20190909-9.36 Copyright (C) 2019 Free Software Foundation, Inc. This program is free software; you may redistribute it under the terms of the GNU General Public License version 3 or (at your option) any later version. This program has absolutely no warranty. Running Executable WITHOUT threads to time it out Executing: ar -V stdout: GNU ar (GNU Binutils; SUSE Linux Enterprise 12) 2.32.0.20190909-9.36 Copyright (C) 2019 Free Software Foundation, Inc. This program is free software; you may redistribute it under the terms of the GNU General Public License version 3 or (at your option) any later version. This program has absolutely no warranty. Defined make macro "FAST_AR_FLAGS" to "Scq" Running Executable WITHOUT threads to time it out Executing: mpicc -c -o /p/work2/tmondrag/petsc-N5i8ny/config.setCompilers/conftest.o -I/p/work2/tmondrag/petsc-N5i8ny/config.setCompilers /p/work2/tmondrag/petsc-N5i8ny/config.setCompilers/conftest.c Successful compile: Source: #include "confdefs.h" #include "conffix.h" int foo(int a) { return a+1; } Checking for program /app/unsupported/COST/git/2.4.4/gnu//bin/ar...not found Checking for program /app/unsupported/COST/tcltk/8.6.4/gnu//bin/ar...not found Checking for program /usr/local/krb5/bin/ar...not found Checking for program /p/home/apps/hpe/mpt-2.19/bin/ar...not found Checking for program /p/home/apps/gnu_compiler/7.2.0/bin/ar...not found Checking for program /opt/clmgr/sbin/ar...not found Checking for program /opt/clmgr/bin/ar...not found Checking for program /opt/sgi/sbin/ar...not found Checking for program /opt/sgi/bin/ar...not found Checking for program /usr/local/bin/ar...not found Checking for program /usr/bin/ar...found Defined make macro "AR" to "/usr/bin/ar" Checking for program /app/unsupported/COST/git/2.4.4/gnu//bin/ranlib...not found Checking for program /app/unsupported/COST/tcltk/8.6.4/gnu//bin/ranlib...not found Checking for program /usr/local/krb5/bin/ranlib...not found Checking for program /p/home/apps/hpe/mpt-2.19/bin/ranlib...not found Checking for program /p/home/apps/gnu_compiler/7.2.0/bin/ranlib...not found Checking for program /opt/clmgr/sbin/ranlib...not found Checking for program /opt/clmgr/bin/ranlib...not found Checking for program /opt/sgi/sbin/ranlib...not found Checking for program /opt/sgi/bin/ranlib...not found Checking for program /usr/local/bin/ranlib...not found Checking for program /usr/bin/ranlib...found Defined make macro "RANLIB" to "/usr/bin/ranlib -c" Running Executable WITHOUT threads to time it out Executing: /usr/bin/ar cr /p/work2/tmondrag/petsc-N5i8ny/config.setCompilers/libconf1.a /p/work2/tmondrag/petsc-N5i8ny/config.setCompilers/conf1.o 
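(Aside for readers following the log: the archiver probe running at this point compiles a one-function object, archives it with ar, runs ranlib over the archive (which fails just below with the -c flag and is retried without it), and finally links a test main against the archive. Consolidated into one illustrative file, with the actual shell steps as comments, the check amounts to:)

/* archiver_probe.c: illustrative reconstruction of configure's test;
 * in the log the two halves live in separate files, roughly:
 *   mpicc -c conf1.c
 *   /usr/bin/ar cr libconf1.a conf1.o
 *   /usr/bin/ranlib libconf1.a
 *   mpicc -o conftest conftest.c -L. -lconf1
 */

/* contents of conf1.c: the archived library member */
int foo(int a)
{
  return a + 1;
}

/* contents of conftest.c: links against the archive, proving that
   ar and ranlib produced a usable library */
extern int foo(int);

int main(void)
{
  int b = foo(1);
  if (b) {}
  return 0;
}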
Running Executable WITHOUT threads to time it out
Executing: /usr/bin/ranlib -c /p/work2/tmondrag/petsc-N5i8ny/config.setCompilers/libconf1.a
Possible ERROR while running ranlib: stderr: /usr/bin/ranlib: invalid option -- 'c'
Ranlib is not functional with your archiver. Try --with-ranlib=true if ranlib is unnecessary.
Running Executable WITHOUT threads to time it out
Executing: ar -V
stdout: GNU ar (GNU Binutils; SUSE Linux Enterprise 12) 2.32.0.20190909-9.36
Copyright (C) 2019 Free Software Foundation, Inc.
This program is free software; you may redistribute it under the terms of the GNU General Public License version 3 or (at your option) any later version. This program has absolutely no warranty.
Running Executable WITHOUT threads to time it out
Executing: ar -V
stdout: GNU ar (GNU Binutils; SUSE Linux Enterprise 12) 2.32.0.20190909-9.36
Copyright (C) 2019 Free Software Foundation, Inc.
This program is free software; you may redistribute it under the terms of the GNU General Public License version 3 or (at your option) any later version. This program has absolutely no warranty.
Defined make macro "FAST_AR_FLAGS" to "Scq"
Running Executable WITHOUT threads to time it out
Executing: mpicc -c -o /p/work2/tmondrag/petsc-N5i8ny/config.setCompilers/conftest.o -I/p/work2/tmondrag/petsc-N5i8ny/config.setCompilers /p/work2/tmondrag/petsc-N5i8ny/config.setCompilers/conftest.c
Successful compile:
Source:
#include "confdefs.h"
#include "conffix.h"
int foo(int a) { return a+1; }
Checking for program /app/unsupported/COST/git/2.4.4/gnu//bin/ar...not found
Checking for program /app/unsupported/COST/tcltk/8.6.4/gnu//bin/ar...not found
Checking for program /usr/local/krb5/bin/ar...not found
Checking for program /p/home/apps/hpe/mpt-2.19/bin/ar...not found
Checking for program /p/home/apps/gnu_compiler/7.2.0/bin/ar...not found
Checking for program /opt/clmgr/sbin/ar...not found
Checking for program /opt/clmgr/bin/ar...not found
Checking for program /opt/sgi/sbin/ar...not found
Checking for program /opt/sgi/bin/ar...not found
Checking for program /usr/local/bin/ar...not found
Checking for program /usr/bin/ar...found
Defined make macro "AR" to "/usr/bin/ar"
Checking for program /app/unsupported/COST/git/2.4.4/gnu//bin/ranlib...not found
Checking for program /app/unsupported/COST/tcltk/8.6.4/gnu//bin/ranlib...not found
Checking for program /usr/local/krb5/bin/ranlib...not found
Checking for program /p/home/apps/hpe/mpt-2.19/bin/ranlib...not found
Checking for program /p/home/apps/gnu_compiler/7.2.0/bin/ranlib...not found
Checking for program /opt/clmgr/sbin/ranlib...not found
Checking for program /opt/clmgr/bin/ranlib...not found
Checking for program /opt/sgi/sbin/ranlib...not found
Checking for program /opt/sgi/bin/ranlib...not found
Checking for program /usr/local/bin/ranlib...not found
Checking for program /usr/bin/ranlib...found
Defined make macro "RANLIB" to "/usr/bin/ranlib"
Running Executable WITHOUT threads to time it out
Executing: /usr/bin/ar cr /p/work2/tmondrag/petsc-N5i8ny/config.setCompilers/libconf1.a /p/work2/tmondrag/petsc-N5i8ny/config.setCompilers/conf1.o
Running Executable WITHOUT threads to time it out
Executing: /usr/bin/ranlib /p/work2/tmondrag/petsc-N5i8ny/config.setCompilers/libconf1.a
Running Executable WITHOUT threads to time it out
Executing: mpicc -c -o /p/work2/tmondrag/petsc-N5i8ny/config.setCompilers/conftest.o -I/p/work2/tmondrag/petsc-N5i8ny/config.setCompilers /p/work2/tmondrag/petsc-N5i8ny/config.setCompilers/conftest.c
Successful compile:
Source:
#include "confdefs.h"
#include "conffix.h" extern int foo(int); int main() { int b = foo(1); if (b); ; return 0; } Running Executable WITHOUT threads to time it out Executing: mpicc -o /p/work2/tmondrag/petsc-N5i8ny/config.setCompilers/conftest /p/work2/tmondrag/petsc-N5i8ny/config.setCompilers/conftest.o -L/p/work2/tmondrag/petsc-N5i8ny/config.setCompilers -lconf1 Defined make macro "AR_FLAGS" to "cr" Defined make macro "AR_LIB_SUFFIX" to "a" ================================================================================ TEST checkSharedLinker from config.setCompilers(/p/work2/tmondrag/moose/petsc/config/BuildSystem/config/setCompilers.py:1387) TESTING: checkSharedLinker from config.setCompilers(/p/work2/tmondrag/moose/petsc/config/BuildSystem/config/setCompilers.py:1387) Check that the linker can produce shared libraries Running Executable WITHOUT threads to time it out Executing: uname -s stdout: Linux Checking shared linker mpicc using flags ['-shared'] Checking for program /app/unsupported/COST/git/2.4.4/gnu//bin/mpicc...not found Checking for program /app/unsupported/COST/tcltk/8.6.4/gnu//bin/mpicc...not found Checking for program /usr/local/krb5/bin/mpicc...not found Checking for program /p/home/apps/hpe/mpt-2.19/bin/mpicc...found Defined make macro "LD_SHARED" to "mpicc" Trying C compiler flag Running Executable WITHOUT threads to time it out Executing: mpicc -c -o /p/work2/tmondrag/petsc-N5i8ny/config.setCompilers/conftest.o -I/p/work2/tmondrag/petsc-N5i8ny/config.setCompilers /p/work2/tmondrag/petsc-N5i8ny/config.setCompilers/conftest.c Successful compile: Source: #include "confdefs.h" #include "conffix.h" int main() { ; return 0; } Running Executable WITHOUT threads to time it out Executing: mpicc -c -o /p/work2/tmondrag/petsc-N5i8ny/config.setCompilers/conftest.o -I/p/work2/tmondrag/petsc-N5i8ny/config.setCompilers /p/work2/tmondrag/petsc-N5i8ny/config.setCompilers/conftest.c Successful compile: Source: #include "confdefs.h" #include "conffix.h" int main() { ; return 0; } Running Executable WITHOUT threads to time it out Executing: mpicc -o /p/work2/tmondrag/petsc-N5i8ny/config.setCompilers/conftest -shared /p/work2/tmondrag/petsc-N5i8ny/config.setCompilers/conftest.o Valid C linker flag -shared Running Executable WITHOUT threads to time it out Executing: mpicc -c -o /p/work2/tmondrag/petsc-N5i8ny/config.setCompilers/conftest.o -I/p/work2/tmondrag/petsc-N5i8ny/config.setCompilers /p/work2/tmondrag/petsc-N5i8ny/config.setCompilers/conftest.c Successful compile: Source: #include "confdefs.h" #include "conffix.h" #include int (*fprintf_ptr)(FILE*,const char*,...) = fprintf; void foo(void){ fprintf_ptr(stdout,"hello"); return; } void bar(void){foo();} Running Executable WITHOUT threads to time it out Executing: mpicc -o /p/work2/tmondrag/petsc-N5i8ny/config.setCompilers/libconftest.so -shared /p/work2/tmondrag/petsc-N5i8ny/config.setCompilers/conftest.o Possible ERROR while running linker: exit code 1 stderr: /usr/bin/ld: /p/work2/tmondrag/petsc-N5i8ny/config.setCompilers/conftest.o: relocation R_X86_64_PC32 against symbol `fprintf_ptr' can not be used when making a shared object; recompile with -fPIC /usr/bin/ld: final link failed: nonrepresentable section on output collect2: error: ld returned 1 exit status Rejected C compiler flag because it was not compatible with shared linker mpicc using flags ['-shared'] Running Executable WITHOUT threads to time it out Executing: mpicc --help | head -n 20 stdout: Usage: gcc [options] file... Options: -pass-exit-codes Exit with highest error code from a phase. 
--help Display this information. --target-help Display target specific command line options. --help={common|optimizers|params|target|warnings|[^]{joined|separate|undocumented}}[,...]. Display specific types of command line options. (Use '-v --help' to display command line options of sub-processes). --version Display compiler version information. -dumpspecs Display all of the built in spec strings. -dumpversion Display the version of the compiler. -dumpmachine Display the compiler's target processor. -print-search-dirs Display the directories in the compiler's search path. -print-libgcc-file-name Display the name of the compiler's companion library. -print-file-name= Display the full path to library . -print-prog-name= Display the full path to compiler component . -print-multiarch Display the target's normalized GNU triplet, used as a component in the library path. -print-multi-directory Display the root directory for versions of libgcc. -print-multi-lib Display the mapping between command line options and Trying C compiler flag -fPIC Running Executable WITHOUT threads to time it out Executing: mpicc -c -o /p/work2/tmondrag/petsc-N5i8ny/config.setCompilers/conftest.o -I/p/work2/tmondrag/petsc-N5i8ny/config.setCompilers -fPIC /p/work2/tmondrag/petsc-N5i8ny/config.setCompilers/conftest.c Successful compile: Source: #include "confdefs.h" #include "conffix.h" int main() { ; return 0; } Added C compiler flag -fPIC Running Executable WITHOUT threads to time it out Executing: mpicc -c -o /p/work2/tmondrag/petsc-N5i8ny/config.setCompilers/conftest.o -I/p/work2/tmondrag/petsc-N5i8ny/config.setCompilers -fPIC /p/work2/tmondrag/petsc-N5i8ny/config.setCompilers/conftest.c Successful compile: Source: #include "confdefs.h" #include "conffix.h" int main() { ; return 0; } Running Executable WITHOUT threads to time it out Executing: mpicc -o /p/work2/tmondrag/petsc-N5i8ny/config.setCompilers/conftest -shared -fPIC /p/work2/tmondrag/petsc-N5i8ny/config.setCompilers/conftest.o Valid C linker flag -shared Running Executable WITHOUT threads to time it out Executing: mpicc -c -o /p/work2/tmondrag/petsc-N5i8ny/config.setCompilers/conftest.o -I/p/work2/tmondrag/petsc-N5i8ny/config.setCompilers -fPIC /p/work2/tmondrag/petsc-N5i8ny/config.setCompilers/conftest.c Successful compile: Source: #include "confdefs.h" #include "conffix.h" #include int (*fprintf_ptr)(FILE*,const char*,...) 
= fprintf; void foo(void){ fprintf_ptr(stdout,"hello"); return; } void bar(void){foo();} Running Executable WITHOUT threads to time it out Executing: mpicc -o /p/work2/tmondrag/petsc-N5i8ny/config.setCompilers/libconftest.so -shared -fPIC /p/work2/tmondrag/petsc-N5i8ny/config.setCompilers/conftest.o Running Executable WITHOUT threads to time it out Executing: mpicc -c -o /p/work2/tmondrag/petsc-N5i8ny/config.setCompilers/conftest.o -I/p/work2/tmondrag/petsc-N5i8ny/config.setCompilers -fPIC /p/work2/tmondrag/petsc-N5i8ny/config.setCompilers/conftest.c Successful compile: Source: #include "confdefs.h" #include "conffix.h" int foo(void); int main() { int ret = foo(); if (ret) {} ; return 0; } Running Executable WITHOUT threads to time it out Executing: mpicc -o /p/work2/tmondrag/petsc-N5i8ny/config.setCompilers/conftest -fPIC /p/work2/tmondrag/petsc-N5i8ny/config.setCompilers/conftest.o -L/p/work2/tmondrag/petsc-N5i8ny/config.setCompilers -lconftest Using shared linker mpicc with flags ['-shared'] and library extension so Running Executable WITHOUT threads to time it out Executing: uname -s stdout: Linux Running Executable WITHOUT threads to time it out Executing: uname -s stdout: Linux ================================================================================ TEST checkPIC from config.setCompilers(/p/work2/tmondrag/moose/petsc/config/BuildSystem/config/setCompilers.py:1121) TESTING: checkPIC from config.setCompilers(/p/work2/tmondrag/moose/petsc/config/BuildSystem/config/setCompilers.py:1121) Determine the PIC option for each compiler Trying Cxx for PIC code without any compiler flag Running Executable WITHOUT threads to time it out Executing: mpicxx -c -o /p/work2/tmondrag/petsc-N5i8ny/config.setCompilers/conftest.o -I/p/work2/tmondrag/petsc-N5i8ny/config.setCompilers /p/work2/tmondrag/petsc-N5i8ny/config.setCompilers/conftest.cc Successful compile: Source: #include "confdefs.h" #include "conffix.h" int main() { ; return 0; } Running Executable WITHOUT threads to time it out Executing: mpicxx -c -o /p/work2/tmondrag/petsc-N5i8ny/config.setCompilers/conftest.o -I/p/work2/tmondrag/petsc-N5i8ny/config.setCompilers /p/work2/tmondrag/petsc-N5i8ny/config.setCompilers/conftest.cc Successful compile: Source: #include "confdefs.h" #include "conffix.h" #include int (*fprintf_ptr)(FILE*,const char*,...) = fprintf; void foo(void){ fprintf_ptr(stdout,"hello"); return; } void bar(void){foo();} Running Executable WITHOUT threads to time it out Executing: mpicc -o /p/work2/tmondrag/petsc-N5i8ny/config.setCompilers/libconftest.so -shared -fPIC /p/work2/tmondrag/petsc-N5i8ny/config.setCompilers/conftest.o Possible ERROR while running linker: exit code 1 stderr: /usr/bin/ld: /p/work2/tmondrag/petsc-N5i8ny/config.setCompilers/conftest.o: relocation R_X86_64_PC32 against symbol `fprintf_ptr' can not be used when making a shared object; recompile with -fPIC /usr/bin/ld: final link failed: nonrepresentable section on output collect2: error: ld returned 1 exit status Rejected Cxx compiler flag because shared linker cannot handle it Running Executable WITHOUT threads to time it out Executing: mpicxx --help | head -n 20 stdout: Usage: g++ [options] file... Options: -pass-exit-codes Exit with highest error code from a phase. --help Display this information. --target-help Display target specific command line options. --help={common|optimizers|params|target|warnings|[^]{joined|separate|undocumented}}[,...]. Display specific types of command line options. 
(Use '-v --help' to display command line options of sub-processes). --version Display compiler version information. -dumpspecs Display all of the built in spec strings. -dumpversion Display the version of the compiler. -dumpmachine Display the compiler's target processor. -print-search-dirs Display the directories in the compiler's search path. -print-libgcc-file-name Display the name of the compiler's companion library. -print-file-name= Display the full path to library . -print-prog-name= Display the full path to compiler component . -print-multiarch Display the target's normalized GNU triplet, used as a component in the library path. -print-multi-directory Display the root directory for versions of libgcc. -print-multi-lib Display the mapping between command line options and Trying Cxx compiler flag -fPIC for PIC code Running Executable WITHOUT threads to time it out Executing: mpicxx -c -o /p/work2/tmondrag/petsc-N5i8ny/config.setCompilers/conftest.o -I/p/work2/tmondrag/petsc-N5i8ny/config.setCompilers -fPIC /p/work2/tmondrag/petsc-N5i8ny/config.setCompilers/conftest.cc Successful compile: Source: #include "confdefs.h" #include "conffix.h" int main() { ; return 0; } Added Cxx compiler flag -fPIC Running Executable WITHOUT threads to time it out Executing: mpicxx -c -o /p/work2/tmondrag/petsc-N5i8ny/config.setCompilers/conftest.o -I/p/work2/tmondrag/petsc-N5i8ny/config.setCompilers -fPIC /p/work2/tmondrag/petsc-N5i8ny/config.setCompilers/conftest.cc Successful compile: Source: #include "confdefs.h" #include "conffix.h" #include int (*fprintf_ptr)(FILE*,const char*,...) = fprintf; void foo(void){ fprintf_ptr(stdout,"hello"); return; } void bar(void){foo();} Running Executable WITHOUT threads to time it out Executing: mpicc -o /p/work2/tmondrag/petsc-N5i8ny/config.setCompilers/libconftest.so -shared -fPIC /p/work2/tmondrag/petsc-N5i8ny/config.setCompilers/conftest.o Accepted Cxx compiler flag -fPIC for PIC code Trying FC for PIC code without any compiler flag Running Executable WITHOUT threads to time it out Executing: mpif90 -c -o /p/work2/tmondrag/petsc-N5i8ny/config.setCompilers/conftest.o -I/p/work2/tmondrag/petsc-N5i8ny/config.setCompilers /p/work2/tmondrag/petsc-N5i8ny/config.setCompilers/conftest.F90 Successful compile: Source: program main end Running Executable WITHOUT threads to time it out Executing: mpif90 -c -o /p/work2/tmondrag/petsc-N5i8ny/config.setCompilers/conftest.o -I/p/work2/tmondrag/petsc-N5i8ny/config.setCompilers /p/work2/tmondrag/petsc-N5i8ny/config.setCompilers/conftest.F90 Successful compile: Source: function foo(a) real:: a,x,bar common /xx/ x x=a foo = bar(x) end Running Executable WITHOUT threads to time it out Executing: mpicc -o /p/work2/tmondrag/petsc-N5i8ny/config.setCompilers/libconftest.so -shared -fPIC /p/work2/tmondrag/petsc-N5i8ny/config.setCompilers/conftest.o Possible ERROR while running linker: exit code 1 stderr: /usr/bin/ld: /p/work2/tmondrag/petsc-N5i8ny/config.setCompilers/conftest.o: relocation R_X86_64_PC32 against symbol `xx_' can not be used when making a shared object; recompile with -fPIC /usr/bin/ld: final link failed: nonrepresentable section on output collect2: error: ld returned 1 exit status Rejected FC compiler flag because shared linker cannot handle it Running Executable WITHOUT threads to time it out Executing: mpif90 --help | head -n 20 stdout: Usage: gfortran [options] file... Options: -pass-exit-codes Exit with highest error code from a phase. --help Display this information. 
--target-help Display target specific command line options. --help={common|optimizers|params|target|warnings|[^]{joined|separate|undocumented}}[,...]. Display specific types of command line options. (Use '-v --help' to display command line options of sub-processes). --version Display compiler version information. -dumpspecs Display all of the built in spec strings. -dumpversion Display the version of the compiler. -dumpmachine Display the compiler's target processor. -print-search-dirs Display the directories in the compiler's search path. -print-libgcc-file-name Display the name of the compiler's companion library. -print-file-name= Display the full path to library . -print-prog-name= Display the full path to compiler component . -print-multiarch Display the target's normalized GNU triplet, used as a component in the library path. -print-multi-directory Display the root directory for versions of libgcc. -print-multi-lib Display the mapping between command line options and Trying FC compiler flag -fPIC for PIC code Running Executable WITHOUT threads to time it out Executing: mpif90 -c -o /p/work2/tmondrag/petsc-N5i8ny/config.setCompilers/conftest.o -I/p/work2/tmondrag/petsc-N5i8ny/config.setCompilers -fPIC /p/work2/tmondrag/petsc-N5i8ny/config.setCompilers/conftest.F90 Successful compile: Source: program main end Added FC compiler flag -fPIC Running Executable WITHOUT threads to time it out Executing: mpif90 -c -o /p/work2/tmondrag/petsc-N5i8ny/config.setCompilers/conftest.o -I/p/work2/tmondrag/petsc-N5i8ny/config.setCompilers -fPIC /p/work2/tmondrag/petsc-N5i8ny/config.setCompilers/conftest.F90 Successful compile: Source: function foo(a) real:: a,x,bar common /xx/ x x=a foo = bar(x) end Running Executable WITHOUT threads to time it out Executing: mpicc -o /p/work2/tmondrag/petsc-N5i8ny/config.setCompilers/libconftest.so -shared -fPIC /p/work2/tmondrag/petsc-N5i8ny/config.setCompilers/conftest.o Accepted FC compiler flag -fPIC for PIC code ================================================================================ TEST checkSharedLinkerPaths from config.setCompilers(/p/work2/tmondrag/moose/petsc/config/BuildSystem/config/setCompilers.py:1499) TESTING: checkSharedLinkerPaths from config.setCompilers(/p/work2/tmondrag/moose/petsc/config/BuildSystem/config/setCompilers.py:1499) Determine the shared linker path options - IRIX: -rpath - Linux, OSF: -Wl,-rpath, - Solaris: -R - FreeBSD: -Wl,-R, Running Executable WITHOUT threads to time it out Executing: uname -s stdout: Linux Running Executable WITHOUT threads to time it out Executing: mpicc -V Trying C linker flag -Wl,-rpath, Running Executable WITHOUT threads to time it out Executing: mpicc -c -o /p/work2/tmondrag/petsc-N5i8ny/config.setCompilers/conftest.o -I/p/work2/tmondrag/petsc-N5i8ny/config.setCompilers -fPIC /p/work2/tmondrag/petsc-N5i8ny/config.setCompilers/conftest.c Successful compile: Source: #include "confdefs.h" #include "conffix.h" int main() { ; return 0; } Running Executable WITHOUT threads to time it out Executing: mpicc -o /p/work2/tmondrag/petsc-N5i8ny/config.setCompilers/conftest -Wl,-rpath,/p/work2/tmondrag/moose/petsc -fPIC /p/work2/tmondrag/petsc-N5i8ny/config.setCompilers/conftest.o Valid C linker flag -Wl,-rpath,/p/work2/tmondrag/moose/petsc Running Executable WITHOUT threads to time it out Executing: uname -s stdout: Linux Running Executable WITHOUT threads to time it out Executing: mpicc -V Trying Cxx linker flag -Wl,-rpath, Running Executable WITHOUT threads to time it out Executing: mpicxx -c -o 
/p/work2/tmondrag/petsc-N5i8ny/config.setCompilers/conftest.o -I/p/work2/tmondrag/petsc-N5i8ny/config.setCompilers -fPIC /p/work2/tmondrag/petsc-N5i8ny/config.setCompilers/conftest.cc Successful compile: Source: #include "confdefs.h" #include "conffix.h" int main() { ; return 0; } Running Executable WITHOUT threads to time it out Executing: mpicxx -o /p/work2/tmondrag/petsc-N5i8ny/config.setCompilers/conftest -Wl,-rpath,/p/work2/tmondrag/moose/petsc /p/work2/tmondrag/petsc-N5i8ny/config.setCompilers/conftest.o Valid Cxx linker flag -Wl,-rpath,/p/work2/tmondrag/moose/petsc Running Executable WITHOUT threads to time it out Executing: uname -s stdout: Linux Running Executable WITHOUT threads to time it out Executing: mpicc -V Trying FC linker flag -Wl,-rpath, Running Executable WITHOUT threads to time it out Executing: mpif90 -c -o /p/work2/tmondrag/petsc-N5i8ny/config.setCompilers/conftest.o -I/p/work2/tmondrag/petsc-N5i8ny/config.setCompilers -fPIC /p/work2/tmondrag/petsc-N5i8ny/config.setCompilers/conftest.F90 Successful compile: Source: program main end Running Executable WITHOUT threads to time it out Executing: mpif90 -o /p/work2/tmondrag/petsc-N5i8ny/config.setCompilers/conftest -Wl,-rpath,/p/work2/tmondrag/moose/petsc -fPIC /p/work2/tmondrag/petsc-N5i8ny/config.setCompilers/conftest.o Valid FC linker flag -Wl,-rpath,/p/work2/tmondrag/moose/petsc ================================================================================ TEST checkLibC from config.setCompilers(/p/work2/tmondrag/moose/petsc/config/BuildSystem/config/setCompilers.py:1534) TESTING: checkLibC from config.setCompilers(/p/work2/tmondrag/moose/petsc/config/BuildSystem/config/setCompilers.py:1534) Test whether we need to explicitly include libc in shared linking - Mac OSX requires an explicit reference to libc for shared linking Running Executable WITHOUT threads to time it out Executing: mpicc -c -o /p/work2/tmondrag/petsc-N5i8ny/config.setCompilers/conftest.o -I/p/work2/tmondrag/petsc-N5i8ny/config.setCompilers -fPIC /p/work2/tmondrag/petsc-N5i8ny/config.setCompilers/conftest.c Successful compile: Source: #include "confdefs.h" #include "conffix.h" #include int foo(void) {void *chunk = malloc(31); free(chunk); return 0;} Running Executable WITHOUT threads to time it out Executing: mpicc -o /p/work2/tmondrag/petsc-N5i8ny/config.setCompilers/libconftest.so -shared -fPIC /p/work2/tmondrag/petsc-N5i8ny/config.setCompilers/conftest.o Shared linking does not require an explicit libc reference ================================================================================ TEST checkDynamicLinker from config.setCompilers(/p/work2/tmondrag/moose/petsc/config/BuildSystem/config/setCompilers.py:1580) TESTING: checkDynamicLinker from config.setCompilers(/p/work2/tmondrag/moose/petsc/config/BuildSystem/config/setCompilers.py:1580) Check that the linker can dynamicaly load shared libraries Checking for header: dlfcn.h All intermediate test results are stored in /p/work2/tmondrag/petsc-N5i8ny/config.headers Preprocessing source: #include "confdefs.h" #include "conffix.h" #include Running Executable WITHOUT threads to time it out Executing: mpicc -E -I/p/work2/tmondrag/petsc-N5i8ny/config.setCompilers -I/p/work2/tmondrag/petsc-N5i8ny/config.headers /p/work2/tmondrag/petsc-N5i8ny/config.headers/conftest.c Preprocess stderr before filtering:: Preprocess stderr after filtering:: Defined "HAVE_DLFCN_H" to "1" Checking for functions [dlopen dlsym dlclose] in library ['dl'] [] All intermediate test results are stored in 
/p/work2/tmondrag/petsc-N5i8ny/config.libraries Running Executable WITHOUT threads to time it out Executing: mpicc -c -o /p/work2/tmondrag/petsc-N5i8ny/config.libraries/conftest.o -I/p/work2/tmondrag/petsc-N5i8ny/config.libraries -I/p/work2/tmondrag/petsc-N5i8ny/config.setCompilers -fPIC /p/work2/tmondrag/petsc-N5i8ny/config.libraries/conftest.c Successful compile: Source: #include "confdefs.h" #include "conffix.h" /* Override any gcc2 internal prototype to avoid an error. */ char dlopen(); static void _check_dlopen() { dlopen(); } char dlsym(); static void _check_dlsym() { dlsym(); } char dlclose(); static void _check_dlclose() { dlclose(); } int main() { _check_dlopen(); _check_dlsym(); _check_dlclose();; return 0; } Running Executable WITHOUT threads to time it out Executing: mpicc -o /p/work2/tmondrag/petsc-N5i8ny/config.libraries/conftest -fPIC /p/work2/tmondrag/petsc-N5i8ny/config.libraries/conftest.o -ldl Defined "HAVE_LIBDL" to "1" Adding ['dl'] to LIBS Running Executable WITHOUT threads to time it out Executing: uname -s stdout: Linux Checking dynamic linker mpicc using flags ['-shared'] Checking for program /app/unsupported/COST/git/2.4.4/gnu//bin/mpicc...not found Checking for program /app/unsupported/COST/tcltk/8.6.4/gnu//bin/mpicc...not found Checking for program /usr/local/krb5/bin/mpicc...not found Checking for program /p/home/apps/hpe/mpt-2.19/bin/mpicc...found Defined make macro "DYNAMICLINKER" to "mpicc" Running Executable WITHOUT threads to time it out Executing: mpicc -c -o /p/work2/tmondrag/petsc-N5i8ny/config.setCompilers/conftest.o -I/p/work2/tmondrag/petsc-N5i8ny/config.libraries -I/p/work2/tmondrag/petsc-N5i8ny/config.setCompilers -fPIC /p/work2/tmondrag/petsc-N5i8ny/config.setCompilers/conftest.c Successful compile: Source: #include "confdefs.h" #include "conffix.h" int main() { ; return 0; } Running Executable WITHOUT threads to time it out Executing: mpicc -o /p/work2/tmondrag/petsc-N5i8ny/config.setCompilers/conftest -shared -fPIC /p/work2/tmondrag/petsc-N5i8ny/config.setCompilers/conftest.o -ldl Valid C linker flag -shared Running Executable WITHOUT threads to time it out Executing: mpicc -c -o /p/work2/tmondrag/petsc-N5i8ny/config.setCompilers/conftest.o -I/p/work2/tmondrag/petsc-N5i8ny/config.libraries -I/p/work2/tmondrag/petsc-N5i8ny/config.setCompilers -fPIC /p/work2/tmondrag/petsc-N5i8ny/config.setCompilers/conftest.c Successful compile: Source: #include "confdefs.h" #include "conffix.h" #include int foo(void) {printf("test");return 0;} Running Executable WITHOUT threads to time it out Executing: mpicc -o /p/work2/tmondrag/petsc-N5i8ny/config.setCompilers/libconftest.so -shared -fPIC /p/work2/tmondrag/petsc-N5i8ny/config.setCompilers/conftest.o -ldl Running Executable WITHOUT threads to time it out Executing: mpicc -c -o /p/work2/tmondrag/petsc-N5i8ny/config.setCompilers/conftest.o -I/p/work2/tmondrag/petsc-N5i8ny/config.libraries -I/p/work2/tmondrag/petsc-N5i8ny/config.setCompilers -fPIC /p/work2/tmondrag/petsc-N5i8ny/config.setCompilers/conftest.c Successful compile: Source: #include "confdefs.h" #include "conffix.h" #include #include int main() { void *handle = dlopen("/p/work2/tmondrag/petsc-N5i8ny/config.setCompilers/libconftest.so", 0); int (*foo)(void) = (int (*)(void)) dlsym(handle, "foo"); if (!foo) { printf("Could not load symbol\n"); return -1; } if ((*foo)()) { printf("Invalid return from foo()\n"); return -1; } if (dlclose(handle)) { printf("Could not close library\n"); return -1; } ; return 0; } Running Executable WITHOUT threads to time it 
out Executing: mpicc -o /p/work2/tmondrag/petsc-N5i8ny/config.setCompilers/conftest -fPIC /p/work2/tmondrag/petsc-N5i8ny/config.setCompilers/conftest.o -ldl Using dynamic linker mpicc with flags ['-shared'] and library extension so ================================================================================ TEST output from config.setCompilers(/p/work2/tmondrag/moose/petsc/config/BuildSystem/config/setCompilers.py:1629) TESTING: output from config.setCompilers(/p/work2/tmondrag/moose/petsc/config/BuildSystem/config/setCompilers.py:1629) Output module data as defines and substitutions Substituting "CC" with "mpicc" Substituting "CFLAGS" with " -fPIC" Defined make macro "CC_LINKER_SLFLAG" to "-Wl,-rpath," Substituting "CPP" with "mpicc -E" Substituting "CPPFLAGS" with "" Substituting "CXX" with "mpicxx" Substituting "CXX_CXXFLAGS" with " -fPIC" Substituting "CXXFLAGS" with "" Substituting "CXX_LINKER_SLFLAG" with "-Wl,-rpath," Substituting "CXXPP" with "mpicxx -E" Substituting "CXXPPFLAGS" with "" Substituting "FC" with "mpif90" Substituting "FFLAGS" with " -fPIC" Defined make macro "FC_LINKER_SLFLAG" to "-Wl,-rpath," Substituting "LDFLAGS" with "" Substituting "LIBS" with "-ldl " Substituting "SHARED_LIBRARY_FLAG" with "-shared" child config.setCompilers 7.550401 ================================================================================ TEST checkSharedDynamicPicOptions from PETSc.options.sharedLibraries(/p/work2/tmondrag/moose/petsc/config/PETSc/options/sharedLibraries.py:36) TESTING: checkSharedDynamicPicOptions from PETSc.options.sharedLibraries(/p/work2/tmondrag/moose/petsc/config/PETSc/options/sharedLibraries.py:36) ================================================================================ TEST configureSharedLibraries from PETSc.options.sharedLibraries(/p/work2/tmondrag/moose/petsc/config/PETSc/options/sharedLibraries.py:52) TESTING: configureSharedLibraries from PETSc.options.sharedLibraries(/p/work2/tmondrag/moose/petsc/config/PETSc/options/sharedLibraries.py:52) Checks whether shared libraries should be used, for which you must - Specify --with-shared-libraries - Have found a working shared linker Defines PETSC_USE_SHARED_LIBRARIES if they are used Running Executable WITHOUT threads to time it out Executing: uname -s stdout: Linux Defined make rule "shared_arch" with dependencies "shared_linux" and code [] Defined make macro "SONAME_FUNCTION" to "$(1).so.$(2)" Defined make macro "SL_LINKER_FUNCTION" to "-shared -Wl,-soname,$(call SONAME_FUNCTION,$(notdir $(1)),$(2))" Defined make macro "BUILDSHAREDLIB" to "yes" Defined "USE_SHARED_LIBRARIES" to "1" ================================================================================ TEST configureDynamicLibraries from PETSc.options.sharedLibraries(/p/work2/tmondrag/moose/petsc/config/PETSc/options/sharedLibraries.py:94) TESTING: configureDynamicLibraries from PETSc.options.sharedLibraries(/p/work2/tmondrag/moose/petsc/config/PETSc/options/sharedLibraries.py:94) Checks whether dynamic loading is available (with dlfcn.h and libdl) Defined "HAVE_DYNAMIC_LIBRARIES" to "1" ================================================================================ TEST configureSerializedFunctions from PETSc.options.sharedLibraries(/p/work2/tmondrag/moose/petsc/config/PETSc/options/sharedLibraries.py:100) TESTING: configureSerializedFunctions from PETSc.options.sharedLibraries(/p/work2/tmondrag/moose/petsc/config/PETSc/options/sharedLibraries.py:100) Defines PETSC_SERIALIZE_FUNCTIONS if they are used Requires shared libraries child 
PETSc.options.sharedLibraries 0.007385 ================================================================================ TEST configureIndexSize from PETSc.options.indexTypes(/p/work2/tmondrag/moose/petsc/config/PETSc/options/indexTypes.py:30) TESTING: configureIndexSize from PETSc.options.indexTypes(/p/work2/tmondrag/moose/petsc/config/PETSc/options/indexTypes.py:30) Defined make macro "PETSC_INDEX_SIZE" to "32" child PETSc.options.indexTypes 0.000857 ================================================================================ TEST configureCompilerFlags from config.compilerFlags(/p/work2/tmondrag/moose/petsc/config/BuildSystem/config/compilerFlags.py:72) TESTING: configureCompilerFlags from config.compilerFlags(/p/work2/tmondrag/moose/petsc/config/BuildSystem/config/compilerFlags.py:72) Get the default compiler flags Defined make macro "C_VERSION" to "gcc (GCC) 7.2.0" Defined make macro "MPICC_SHOW" to "gcc -I/p/home/apps/hpe/mpt-2.19/include -lpthread /usr/lib64/libcpuset.so.1 /usr/lib64/libbitmask.so.1 -L/p/home/apps/hpe/mpt-2.19/lib -lmpi" Trying C compiler flag -Wall Running Executable WITHOUT threads to time it out Executing: mpicc -c -o /p/work2/tmondrag/petsc-N5i8ny/config.setCompilers/conftest.o -I/p/work2/tmondrag/petsc-N5i8ny/config.libraries -I/p/work2/tmondrag/petsc-N5i8ny/config.setCompilers -fPIC -Wall /p/work2/tmondrag/petsc-N5i8ny/config.setCompilers/conftest.c Successful compile: Source: #include "confdefs.h" #include "conffix.h" int main() { ; return 0; } Added C compiler flag -Wall Trying C compiler flag -Wwrite-strings Running Executable WITHOUT threads to time it out Executing: mpicc -c -o /p/work2/tmondrag/petsc-N5i8ny/config.setCompilers/conftest.o -I/p/work2/tmondrag/petsc-N5i8ny/config.libraries -I/p/work2/tmondrag/petsc-N5i8ny/config.setCompilers -fPIC -Wall -Wwrite-strings /p/work2/tmondrag/petsc-N5i8ny/config.setCompilers/conftest.c Successful compile: Source: #include "confdefs.h" #include "conffix.h" int main() { ; return 0; } Added C compiler flag -Wwrite-strings Trying C compiler flag -Wno-strict-aliasing Running Executable WITHOUT threads to time it out Executing: mpicc -c -o /p/work2/tmondrag/petsc-N5i8ny/config.setCompilers/conftest.o -I/p/work2/tmondrag/petsc-N5i8ny/config.libraries -I/p/work2/tmondrag/petsc-N5i8ny/config.setCompilers -fPIC -Wall -Wwrite-strings -Wno-strict-aliasing /p/work2/tmondrag/petsc-N5i8ny/config.setCompilers/conftest.c Successful compile: Source: #include "confdefs.h" #include "conffix.h" int main() { ; return 0; } Added C compiler flag -Wno-strict-aliasing Trying C compiler flag -Wno-unknown-pragmas Running Executable WITHOUT threads to time it out Executing: mpicc -c -o /p/work2/tmondrag/petsc-N5i8ny/config.setCompilers/conftest.o -I/p/work2/tmondrag/petsc-N5i8ny/config.libraries -I/p/work2/tmondrag/petsc-N5i8ny/config.setCompilers -fPIC -Wall -Wwrite-strings -Wno-strict-aliasing -Wno-unknown-pragmas /p/work2/tmondrag/petsc-N5i8ny/config.setCompilers/conftest.c Successful compile: Source: #include "confdefs.h" #include "conffix.h" int main() { ; return 0; } Added C compiler flag -Wno-unknown-pragmas Trying C compiler flag -fstack-protector Running Executable WITHOUT threads to time it out Executing: mpicc -c -o /p/work2/tmondrag/petsc-N5i8ny/config.setCompilers/conftest.o -I/p/work2/tmondrag/petsc-N5i8ny/config.libraries -I/p/work2/tmondrag/petsc-N5i8ny/config.setCompilers -fPIC -Wall -Wwrite-strings -Wno-strict-aliasing -Wno-unknown-pragmas -fstack-protector /p/work2/tmondrag/petsc-N5i8ny/config.setCompilers/conftest.c 
Successful compile: Source: #include "confdefs.h" #include "conffix.h" int main() { ; return 0; } Added C compiler flag -fstack-protector Trying C compiler flag -mfp16-format=ieee Running Executable WITHOUT threads to time it out Executing: mpicc -c -o /p/work2/tmondrag/petsc-N5i8ny/config.setCompilers/conftest.o -I/p/work2/tmondrag/petsc-N5i8ny/config.libraries -I/p/work2/tmondrag/petsc-N5i8ny/config.setCompilers -fPIC -Wall -Wwrite-strings -Wno-strict-aliasing -Wno-unknown-pragmas -fstack-protector -mfp16-format=ieee /p/work2/tmondrag/petsc-N5i8ny/config.setCompilers/conftest.c Possible ERROR while running compiler: exit code 1 stderr: gcc: error: unrecognized command line option ‘-mfp16-format=ieee’ Source: #include "confdefs.h" #include "conffix.h" int main() { ; return 0; } Rejecting compiler flag -mfp16-format=ieee due to nonzero status from link Rejecting compiler flag -mfp16-format=ieee due to gcc: error: unrecognized command line option ‘-mfp16-format=ieee’ PETSc Error: No output file produced Rejected C compiler flag -mfp16-format=ieee Trying C compiler flag -fvisibility=hidden Running Executable WITHOUT threads to time it out Executing: mpicc -c -o /p/work2/tmondrag/petsc-N5i8ny/config.setCompilers/conftest.o -I/p/work2/tmondrag/petsc-N5i8ny/config.libraries -I/p/work2/tmondrag/petsc-N5i8ny/config.setCompilers -fPIC -Wall -Wwrite-strings -Wno-strict-aliasing -Wno-unknown-pragmas -fstack-protector -fvisibility=hidden /p/work2/tmondrag/petsc-N5i8ny/config.setCompilers/conftest.c Successful compile: Source: #include "confdefs.h" #include "conffix.h" int main() { ; return 0; } Added C compiler flag -fvisibility=hidden Defined make macro "MPICC_SHOW" to "gcc -I/p/home/apps/hpe/mpt-2.19/include -lpthread /usr/lib64/libcpuset.so.1 /usr/lib64/libbitmask.so.1 -L/p/home/apps/hpe/mpt-2.19/lib -lmpi" Trying C compiler flag -g Running Executable WITHOUT threads to time it out Executing: mpicc -c -o /p/work2/tmondrag/petsc-N5i8ny/config.setCompilers/conftest.o -I/p/work2/tmondrag/petsc-N5i8ny/config.libraries -I/p/work2/tmondrag/petsc-N5i8ny/config.setCompilers -fPIC -Wall -Wwrite-strings -Wno-strict-aliasing -Wno-unknown-pragmas -fstack-protector -fvisibility=hidden -g /p/work2/tmondrag/petsc-N5i8ny/config.setCompilers/conftest.c Successful compile: Source: #include "confdefs.h" #include "conffix.h" int main() { ; return 0; } Added C compiler flag -g Trying C compiler flag -O Running Executable WITHOUT threads to time it out Executing: mpicc -c -o /p/work2/tmondrag/petsc-N5i8ny/config.setCompilers/conftest.o -I/p/work2/tmondrag/petsc-N5i8ny/config.libraries -I/p/work2/tmondrag/petsc-N5i8ny/config.setCompilers -fPIC -Wall -Wwrite-strings -Wno-strict-aliasing -Wno-unknown-pragmas -fstack-protector -fvisibility=hidden -g -O /p/work2/tmondrag/petsc-N5i8ny/config.setCompilers/conftest.c Successful compile: Source: #include "confdefs.h" #include "conffix.h" int main() { ; return 0; } Added C compiler flag -O Defined make macro "Cxx_VERSION" to "g++ (GCC) 7.2.0" Defined make macro "MPICXX_SHOW" to "g++ -I/p/home/apps/hpe/mpt-2.19/include -lpthread /usr/lib64/libcpuset.so.1 /usr/lib64/libbitmask.so.1 -L/p/home/apps/hpe/mpt-2.19/lib -lmpi++ -lmpi" Trying Cxx compiler flag -Wall Running Executable WITHOUT threads to time it out Executing: mpicxx -c -o /p/work2/tmondrag/petsc-N5i8ny/config.setCompilers/conftest.o -I/p/work2/tmondrag/petsc-N5i8ny/config.setCompilers -Wall -fPIC /p/work2/tmondrag/petsc-N5i8ny/config.setCompilers/conftest.cc Successful compile: Source: #include "confdefs.h"
#include "conffix.h" int main() { ; return 0; } Added Cxx compiler flag -Wall Trying Cxx compiler flag -Wwrite-strings Running Executable WITHOUT threads to time it out Executing: mpicxx -c -o /p/work2/tmondrag/petsc-N5i8ny/config.setCompilers/conftest.o -I/p/work2/tmondrag/petsc-N5i8ny/config.setCompilers -Wall -Wwrite-strings -fPIC /p/work2/tmondrag/petsc-N5i8ny/config.setCompilers/conftest.cc Successful compile: Source: #include "confdefs.h" #include "conffix.h" int main() { ; return 0; } Added Cxx compiler flag -Wwrite-strings Trying Cxx compiler flag -Wno-strict-aliasing Running Executable WITHOUT threads to time it out Executing: mpicxx -c -o /p/work2/tmondrag/petsc-N5i8ny/config.setCompilers/conftest.o -I/p/work2/tmondrag/petsc-N5i8ny/config.setCompilers -Wall -Wwrite-strings -Wno-strict-aliasing -fPIC /p/work2/tmondrag/petsc-N5i8ny/config.setCompilers/conftest.cc Successful compile: Source: #include "confdefs.h" #include "conffix.h" int main() { ; return 0; } Added Cxx compiler flag -Wno-strict-aliasing Trying Cxx compiler flag -Wno-unknown-pragmas Running Executable WITHOUT threads to time it out Executing: mpicxx -c -o /p/work2/tmondrag/petsc-N5i8ny/config.setCompilers/conftest.o -I/p/work2/tmondrag/petsc-N5i8ny/config.setCompilers -Wall -Wwrite-strings -Wno-strict-aliasing -Wno-unknown-pragmas -fPIC /p/work2/tmondrag/petsc-N5i8ny/config.setCompilers/conftest.cc Successful compile: Source: #include "confdefs.h" #include "conffix.h" int main() { ; return 0; } Added Cxx compiler flag -Wno-unknown-pragmas Trying Cxx compiler flag -fstack-protector Running Executable WITHOUT threads to time it out Executing: mpicxx -c -o /p/work2/tmondrag/petsc-N5i8ny/config.setCompilers/conftest.o -I/p/work2/tmondrag/petsc-N5i8ny/config.setCompilers -Wall -Wwrite-strings -Wno-strict-aliasing -Wno-unknown-pragmas -fstack-protector -fPIC /p/work2/tmondrag/petsc-N5i8ny/config.setCompilers/conftest.cc Successful compile: Source: #include "confdefs.h" #include "conffix.h" int main() { ; return 0; } Added Cxx compiler flag -fstack-protector Trying Cxx compiler flag -fvisibility=hidden Running Executable WITHOUT threads to time it out Executing: mpicxx -c -o /p/work2/tmondrag/petsc-N5i8ny/config.setCompilers/conftest.o -I/p/work2/tmondrag/petsc-N5i8ny/config.setCompilers -Wall -Wwrite-strings -Wno-strict-aliasing -Wno-unknown-pragmas -fstack-protector -fvisibility=hidden -fPIC /p/work2/tmondrag/petsc-N5i8ny/config.setCompilers/conftest.cc Successful compile: Source: #include "confdefs.h" #include "conffix.h" int main() { ; return 0; } Added Cxx compiler flag -fvisibility=hidden Defined make macro "MPICXX_SHOW" to "g++ -I/p/home/apps/hpe/mpt-2.19/include -lpthread /usr/lib64/libcpuset.so.1 /usr/lib64/libbitmask.so.1 -L/p/home/apps/hpe/mpt-2.19/lib -lmpi++ -lmpi" Trying Cxx compiler flag -g Running Executable WITHOUT threads to time it out Executing: mpicxx -c -o /p/work2/tmondrag/petsc-N5i8ny/config.setCompilers/conftest.o -I/p/work2/tmondrag/petsc-N5i8ny/config.setCompilers -Wall -Wwrite-strings -Wno-strict-aliasing -Wno-unknown-pragmas -fstack-protector -fvisibility=hidden -g -fPIC /p/work2/tmondrag/petsc-N5i8ny/config.setCompilers/conftest.cc Successful compile: Source: #include "confdefs.h" #include "conffix.h" int main() { ; return 0; } Added Cxx compiler flag -g Trying Cxx compiler flag -O Running Executable WITHOUT threads to time it out Executing: mpicxx -c -o /p/work2/tmondrag/petsc-N5i8ny/config.setCompilers/conftest.o -I/p/work2/tmondrag/petsc-N5i8ny/config.setCompilers -Wall -Wwrite-strings 
-Wno-strict-aliasing -Wno-unknown-pragmas -fstack-protector -fvisibility=hidden -g -O -fPIC /p/work2/tmondrag/petsc-N5i8ny/config.setCompilers/conftest.cc Successful compile: Source: #include "confdefs.h" #include "conffix.h" int main() { ; return 0; } Added Cxx compiler flag -O Defined make macro "FC_VERSION" to "GNU Fortran (GCC) 7.2.0" Defined make macro "MPIFC_SHOW" to "gfortran -I/p/home/apps/hpe/mpt-2.19/include -I/p/home/apps/gnu_compiler/7.2.0/include -I/p/home/apps/hpe/mpt-2.19/include -lpthread /usr/lib64/libcpuset.so.1 /usr/lib64/libbitmask.so.1 -L/p/home/apps/hpe/mpt-2.19/lib -lmpi" Trying FC compiler flag -Wall Running Executable WITHOUT threads to time it out Executing: mpif90 -c -o /p/work2/tmondrag/petsc-N5i8ny/config.setCompilers/conftest.o -I/p/work2/tmondrag/petsc-N5i8ny/config.setCompilers -fPIC -Wall /p/work2/tmondrag/petsc-N5i8ny/config.setCompilers/conftest.F90 Successful compile: Source: program main end Added FC compiler flag -Wall Trying FC compiler flag -ffree-line-length-0 Running Executable WITHOUT threads to time it out Executing: mpif90 -c -o /p/work2/tmondrag/petsc-N5i8ny/config.setCompilers/conftest.o -I/p/work2/tmondrag/petsc-N5i8ny/config.setCompilers -fPIC -Wall -ffree-line-length-0 /p/work2/tmondrag/petsc-N5i8ny/config.setCompilers/conftest.F90 Successful compile: Source: program main end Added FC compiler flag -ffree-line-length-0 Trying FC compiler flag -Wno-unused-dummy-argument Running Executable WITHOUT threads to time it out Executing: mpif90 -c -o /p/work2/tmondrag/petsc-N5i8ny/config.setCompilers/conftest.o -I/p/work2/tmondrag/petsc-N5i8ny/config.setCompilers -fPIC -Wall -ffree-line-length-0 -Wno-unused-dummy-argument /p/work2/tmondrag/petsc-N5i8ny/config.setCompilers/conftest.F90 Successful compile: Source: program main end Added FC compiler flag -Wno-unused-dummy-argument Defined make macro "MPIFC_SHOW" to "gfortran -I/p/home/apps/hpe/mpt-2.19/include -I/p/home/apps/gnu_compiler/7.2.0/include -I/p/home/apps/hpe/mpt-2.19/include -lpthread /usr/lib64/libcpuset.so.1 /usr/lib64/libbitmask.so.1 -L/p/home/apps/hpe/mpt-2.19/lib -lmpi" Trying FC compiler flag -g Running Executable WITHOUT threads to time it out Executing: mpif90 -c -o /p/work2/tmondrag/petsc-N5i8ny/config.setCompilers/conftest.o -I/p/work2/tmondrag/petsc-N5i8ny/config.setCompilers -fPIC -Wall -ffree-line-length-0 -Wno-unused-dummy-argument -g /p/work2/tmondrag/petsc-N5i8ny/config.setCompilers/conftest.F90 Successful compile: Source: program main end Added FC compiler flag -g Trying FC compiler flag -O Running Executable WITHOUT threads to time it out Executing: mpif90 -c -o /p/work2/tmondrag/petsc-N5i8ny/config.setCompilers/conftest.o -I/p/work2/tmondrag/petsc-N5i8ny/config.setCompilers -fPIC -Wall -ffree-line-length-0 -Wno-unused-dummy-argument -g -O /p/work2/tmondrag/petsc-N5i8ny/config.setCompilers/conftest.F90 Successful compile: Source: program main end Added FC compiler flag -O child config.compilerFlags 1.136165 ================================================================================ TEST checkRestrict from config.compilers(/p/work2/tmondrag/moose/petsc/config/BuildSystem/config/compilers.py:138) TESTING: checkRestrict from config.compilers(/p/work2/tmondrag/moose/petsc/config/BuildSystem/config/compilers.py:138) Check for the C/CXX restrict keyword Running Executable WITHOUT threads to time it out Executing: mpicc -V All intermediate test results are stored in /p/work2/tmondrag/petsc-N5i8ny/config.compilers Running Executable WITHOUT threads to time it out Executing: 
mpicc -c -o /p/work2/tmondrag/petsc-N5i8ny/config.compilers/conftest.o -I/p/work2/tmondrag/petsc-N5i8ny/config.libraries -I/p/work2/tmondrag/petsc-N5i8ny/config.setCompilers -I/p/work2/tmondrag/petsc-N5i8ny/config.compilers -fPIC -Wall -Wwrite-strings -Wno-strict-aliasing -Wno-unknown-pragmas -fstack-protector -fvisibility=hidden -g -O /p/work2/tmondrag/petsc-N5i8ny/config.compilers/conftest.c Possible ERROR while running compiler: stderr: /p/work2/tmondrag/petsc-N5i8ny/config.compilers/conftest.c: In function ‘main’: /p/work2/tmondrag/petsc-N5i8ny/config.compilers/conftest.c:5:20: warning: unused variable ‘x’ [-Wunused-variable] float * __restrict x;; ^ Source: #include "confdefs.h" #include "conffix.h" int main() { float * __restrict x;; return 0; } compilers: Set C restrict keyword to __restrict Defined "C_RESTRICT" to "__restrict" ================================================================================ TEST checkCFormatting from config.compilers(/p/work2/tmondrag/moose/petsc/config/BuildSystem/config/compilers.py:391) TESTING: checkCFormatting from config.compilers(/p/work2/tmondrag/moose/petsc/config/BuildSystem/config/compilers.py:391) Activate format string checking if using the GNU compilers ================================================================================ TEST checkCInline from config.compilers(/p/work2/tmondrag/moose/petsc/config/BuildSystem/config/compilers.py:108) TESTING: checkCInline from config.compilers(/p/work2/tmondrag/moose/petsc/config/BuildSystem/config/compilers.py:108) Check for C inline keyword Running Executable WITHOUT threads to time it out Executing: mpicc -c -o /p/work2/tmondrag/petsc-N5i8ny/config.compilers/conftest.o -I/p/work2/tmondrag/petsc-N5i8ny/config.libraries -I/p/work2/tmondrag/petsc-N5i8ny/config.setCompilers -I/p/work2/tmondrag/petsc-N5i8ny/config.compilers -fPIC -Wall -Wwrite-strings -Wno-strict-aliasing -Wno-unknown-pragmas -fstack-protector -fvisibility=hidden -g -O /p/work2/tmondrag/petsc-N5i8ny/config.compilers/conftest.c Successful compile: Source: #include "confdefs.h" #include "conffix.h" static inline int foo(int a) {return a;} int main() { foo(1);; return 0; } compilers: Set C Inline keyword to inline Defined "C_INLINE" to "inline" ================================================================================ TEST checkDynamicLoadFlag from config.compilers(/p/work2/tmondrag/moose/petsc/config/BuildSystem/config/compilers.py:402) TESTING: checkDynamicLoadFlag from config.compilers(/p/work2/tmondrag/moose/petsc/config/BuildSystem/config/compilers.py:402) Checks that dlopen() takes RTLD_XXX, and defines PETSC_HAVE_RTLD_XXX if it does Running Executable WITHOUT threads to time it out Executing: mpicc -c -o /p/work2/tmondrag/petsc-N5i8ny/config.compilers/conftest.o -I/p/work2/tmondrag/petsc-N5i8ny/config.libraries -I/p/work2/tmondrag/petsc-N5i8ny/config.setCompilers -I/p/work2/tmondrag/petsc-N5i8ny/config.compilers -fPIC -Wall -Wwrite-strings -Wno-strict-aliasing -Wno-unknown-pragmas -fstack-protector -fvisibility=hidden -g -O /p/work2/tmondrag/petsc-N5i8ny/config.compilers/conftest.c Successful compile: Source: #include "confdefs.h" #include "conffix.h" #include <dlfcn.h> char *libname; int main() { dlopen(libname, RTLD_LAZY);dlopen(libname, RTLD_NOW);dlopen(libname, RTLD_LOCAL);dlopen(libname, RTLD_GLOBAL); ; return 0; } Running Executable WITHOUT threads to time it out Executing: mpicc -o /p/work2/tmondrag/petsc-N5i8ny/config.compilers/conftest -fPIC -Wall -Wwrite-strings -Wno-strict-aliasing -Wno-unknown-pragmas
-fstack-protector -fvisibility=hidden -g -O /p/work2/tmondrag/petsc-N5i8ny/config.compilers/conftest.o -ldl Defined "HAVE_RTLD_LAZY" to "1" Defined "HAVE_RTLD_NOW" to "1" Defined "HAVE_RTLD_LOCAL" to "1" Defined "HAVE_RTLD_GLOBAL" to "1" ================================================================================ TEST checkCLibraries from config.compilers(/p/work2/tmondrag/moose/petsc/config/BuildSystem/config/compilers.py:202) TESTING: checkCLibraries from config.compilers(/p/work2/tmondrag/moose/petsc/config/BuildSystem/config/compilers.py:202) Determines the libraries needed to link with C compiled code Running Executable WITHOUT threads to time it out Executing: mpicc -c -o /p/work2/tmondrag/petsc-N5i8ny/config.compilers/conftest.o -I/p/work2/tmondrag/petsc-N5i8ny/config.libraries -I/p/work2/tmondrag/petsc-N5i8ny/config.setCompilers -I/p/work2/tmondrag/petsc-N5i8ny/config.compilers -fPIC -Wall -Wwrite-strings -Wno-strict-aliasing -Wno-unknown-pragmas -fstack-protector -fvisibility=hidden -g -O /p/work2/tmondrag/petsc-N5i8ny/config.compilers/conftest.c Successful compile: Source: #include "confdefs.h" #include "conffix.h" #include void asub(void) {char s[16];printf("testing %s",s);} Running Executable WITHOUT threads to time it out Executing: mpif90 -c -o /p/work2/tmondrag/petsc-N5i8ny/config.compilers/conftest.o -I/p/work2/tmondrag/petsc-N5i8ny/config.setCompilers -I/p/work2/tmondrag/petsc-N5i8ny/config.compilers -fPIC -Wall -ffree-line-length-0 -Wno-unused-dummy-argument -g -O /p/work2/tmondrag/petsc-N5i8ny/config.compilers/conftest.F90 Successful compile: Source: program main print*,'testing' stop end Running Executable WITHOUT threads to time it out Executing: mpif90 -o /p/work2/tmondrag/petsc-N5i8ny/config.compilers/conftest -fPIC -Wall -ffree-line-length-0 -Wno-unused-dummy-argument -g -O /p/work2/tmondrag/petsc-N5i8ny/config.compilers/conftest.o /p/work2/tmondrag/petsc-N5i8ny/config.compilers/confc.o -ldl C libraries are not needed when using Fortran linker Running Executable WITHOUT threads to time it out Executing: mpicc -c -o /p/work2/tmondrag/petsc-N5i8ny/config.compilers/conftest.o -I/p/work2/tmondrag/petsc-N5i8ny/config.libraries -I/p/work2/tmondrag/petsc-N5i8ny/config.setCompilers -I/p/work2/tmondrag/petsc-N5i8ny/config.compilers -fPIC -Wall -Wwrite-strings -Wno-strict-aliasing -Wno-unknown-pragmas -fstack-protector -fvisibility=hidden -g -O /p/work2/tmondrag/petsc-N5i8ny/config.compilers/conftest.c Successful compile: Source: #include "confdefs.h" #include "conffix.h" #include void asub(void) {char s[16];printf("testing %s",s);} Running Executable WITHOUT threads to time it out Executing: mpicxx -c -o /p/work2/tmondrag/petsc-N5i8ny/config.compilers/conftest.o -I/p/work2/tmondrag/petsc-N5i8ny/config.setCompilers -I/p/work2/tmondrag/petsc-N5i8ny/config.compilers -Wall -Wwrite-strings -Wno-strict-aliasing -Wno-unknown-pragmas -fstack-protector -fvisibility=hidden -g -O -fPIC /p/work2/tmondrag/petsc-N5i8ny/config.compilers/conftest.cc Successful compile: Source: #include "confdefs.h" #include "conffix.h" int main(int argc,char **args) {return 0;} Running Executable WITHOUT threads to time it out Executing: mpicxx -o /p/work2/tmondrag/petsc-N5i8ny/config.compilers/conftest -Wall -Wwrite-strings -Wno-strict-aliasing -Wno-unknown-pragmas -fstack-protector -fvisibility=hidden -g -O /p/work2/tmondrag/petsc-N5i8ny/config.compilers/conftest.o /p/work2/tmondrag/petsc-N5i8ny/config.compilers/confc.o -ldl C libraries are not needed when using C++ linker 
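[Aside, for readers skimming the log: checkCLibraries works by compiling a tiny C object that pulls in the C runtime and then linking it with the Fortran and C++ drivers; because both links succeed here with no extra flags, configure records that no additional C libraries are needed. A minimal sketch of the C half of that probe, using illustrative file names rather than the literal conftest paths:

    /* cside.c -- illustrative stand-in for configure's C conftest.
     * Compile:             mpicc -c cside.c
     * Cross-language link: mpif90 -o probe fmain.o cside.o
     * A clean link means the Fortran driver already supplies
     * everything the C I/O runtime needs. */
    #include <stdio.h>

    void asub(void)
    {
        char s[16] = "ok";
        printf("testing %s\n", s); /* forces a reference into libc I/O */
    }
]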
================================================================================ TEST checkDependencyGenerationFlag from config.compilers(/p/work2/tmondrag/moose/petsc/config/BuildSystem/config/compilers.py:1345) TESTING: checkDependencyGenerationFlag from config.compilers(/p/work2/tmondrag/moose/petsc/config/BuildSystem/config/compilers.py:1345) Check if -MMD works for dependency generation, and add it if it does Trying C compiler flag -MMD -MP Defined make macro "C_DEPFLAGS" to "-MMD -MP" Running Executable WITHOUT threads to time it out Executing: mpicc -c -o /p/work2/tmondrag/petsc-N5i8ny/config.setCompilers/conftest.o -I/p/work2/tmondrag/petsc-N5i8ny/config.libraries -I/p/work2/tmondrag/petsc-N5i8ny/config.setCompilers -I/p/work2/tmondrag/petsc-N5i8ny/config.compilers -fPIC -Wall -Wwrite-strings -Wno-strict-aliasing -Wno-unknown-pragmas -fstack-protector -fvisibility=hidden -g -O -MMD -MP /p/work2/tmondrag/petsc-N5i8ny/config.setCompilers/conftest.c Successful compile: Source: #include "confdefs.h" #include "conffix.h" int main() { ; return 0; } Trying Cxx compiler flag -MMD -MP Defined make macro "CXX_DEPFLAGS" to "-MMD -MP" Running Executable WITHOUT threads to time it out Executing: mpicxx -c -o /p/work2/tmondrag/petsc-N5i8ny/config.setCompilers/conftest.o -I/p/work2/tmondrag/petsc-N5i8ny/config.compilers -I/p/work2/tmondrag/petsc-N5i8ny/config.setCompilers -Wall -Wwrite-strings -Wno-strict-aliasing -Wno-unknown-pragmas -fstack-protector -fvisibility=hidden -g -O -fPIC -MMD -MP /p/work2/tmondrag/petsc-N5i8ny/config.setCompilers/conftest.cc Successful compile: Source: #include "confdefs.h" #include "conffix.h" int main() { ; return 0; } ================================================================================ TEST checkC99Flag from config.compilers(/p/work2/tmondrag/moose/petsc/config/BuildSystem/config/compilers.py:1390) TESTING: checkC99Flag from config.compilers(/p/work2/tmondrag/moose/petsc/config/BuildSystem/config/compilers.py:1390) Check for -std=c99 or equivalent flag Running Executable WITHOUT threads to time it out Executing: mpicc -c -o /p/work2/tmondrag/petsc-N5i8ny/config.setCompilers/conftest.o -I/p/work2/tmondrag/petsc-N5i8ny/config.libraries -I/p/work2/tmondrag/petsc-N5i8ny/config.setCompilers -I/p/work2/tmondrag/petsc-N5i8ny/config.compilers -fPIC -Wall -Wwrite-strings -Wno-strict-aliasing -Wno-unknown-pragmas -fstack-protector -fvisibility=hidden -g -O /p/work2/tmondrag/petsc-N5i8ny/config.setCompilers/conftest.c Possible ERROR while running compiler: stderr: /p/work2/tmondrag/petsc-N5i8ny/config.setCompilers/conftest.c: In function ‘main’: /p/work2/tmondrag/petsc-N5i8ny/config.setCompilers/conftest.c:7:11: warning: variable ‘x’ set but not used [-Wunused-but-set-variable] float x[2],y; ^ Source: #include "confdefs.h" #include "conffix.h" #include <float.h> int main() { float x[2],y; y = FLT_ROUNDS; // c++ comment int j = 2; for (int i=0; i<2; i++){ x[i] = i*j*y; } ; return 0; } Accepted C99 compile flag: Defined "HAVE_C99" to "1" ================================================================================ TEST checkRestrict from config.compilers(/p/work2/tmondrag/moose/petsc/config/BuildSystem/config/compilers.py:138) TESTING: checkRestrict from config.compilers(/p/work2/tmondrag/moose/petsc/config/BuildSystem/config/compilers.py:138) Check for the C/CXX restrict keyword Running Executable WITHOUT threads to time it out Executing: mpicc -V Running Executable WITHOUT threads to time it out Executing: mpicxx -c -o /p/work2/tmondrag/petsc-N5i8ny/config.compilers/conftest.o -I/p/work2/tmondrag/petsc-N5i8ny/config.setCompilers -I/p/work2/tmondrag/petsc-N5i8ny/config.compilers -Wall -Wwrite-strings -Wno-strict-aliasing -Wno-unknown-pragmas -fstack-protector -fvisibility=hidden -g -O -fPIC /p/work2/tmondrag/petsc-N5i8ny/config.compilers/conftest.cc Possible ERROR while running compiler: stderr: /p/work2/tmondrag/petsc-N5i8ny/config.compilers/conftest.cc: In function ‘int main()’: /p/work2/tmondrag/petsc-N5i8ny/config.compilers/conftest.cc:5:20: warning: unused variable ‘x’ [-Wunused-variable] float * __restrict x;; ^ Source: #include "confdefs.h" #include "conffix.h" int main() { float * __restrict x;; return 0; } compilers: Set Cxx restrict keyword to __restrict Defined "CXX_RESTRICT" to "__restrict" ================================================================================ TEST checkCxxOptionalExtensions from config.compilers(/p/work2/tmondrag/moose/petsc/config/BuildSystem/config/compilers.py:421) TESTING: checkCxxOptionalExtensions from config.compilers(/p/work2/tmondrag/moose/petsc/config/BuildSystem/config/compilers.py:421) Check whether the C++ compiler (IBM xlC, OSF5) need special flag for .c files which contain C++ Running Executable WITHOUT threads to time it out Executing: mpicxx -c -o /p/work2/tmondrag/petsc-N5i8ny/config.setCompilers/conftest.o -I/p/work2/tmondrag/petsc-N5i8ny/config.setCompilers -I/p/work2/tmondrag/petsc-N5i8ny/config.compilers -Wall -Wwrite-strings -Wno-strict-aliasing -Wno-unknown-pragmas -fstack-protector -fvisibility=hidden -g -O -fPIC /p/work2/tmondrag/petsc-N5i8ny/config.setCompilers/conftest.c Successful compile: Source: #include "confdefs.h" #include "conffix.h" int main() { class somename { int i; };; return 0; } ================================================================================ TEST checkCxxInline from config.compilers(/p/work2/tmondrag/moose/petsc/config/BuildSystem/config/compilers.py:123) TESTING: checkCxxInline from config.compilers(/p/work2/tmondrag/moose/petsc/config/BuildSystem/config/compilers.py:123) Check for C++ inline keyword Running Executable WITHOUT threads to time it out Executing: mpicxx -c -o /p/work2/tmondrag/petsc-N5i8ny/config.compilers/conftest.o -I/p/work2/tmondrag/petsc-N5i8ny/config.setCompilers -I/p/work2/tmondrag/petsc-N5i8ny/config.compilers -Wall -Wwrite-strings -Wno-strict-aliasing -Wno-unknown-pragmas -fstack-protector -fvisibility=hidden -g -O -fPIC /p/work2/tmondrag/petsc-N5i8ny/config.compilers/conftest.cc Successful compile: Source: #include "confdefs.h" #include "conffix.h" static inline int foo(int a) {return a;} int main() { foo(1);; return 0; } compilers: Set Cxx Inline keyword to inline Defined "CXX_INLINE" to "inline"
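[For context, the C99 probe whose warning appears above deliberately stacks three C99-only constructs into one function, so a C89-only compiler would trip over at least one of them; gcc here accepts them with no extra flag, which is why the accepted flag is empty. A standalone rendering of that test (the unused-but-set warning on x is expected and harmless, since the object is only compiled, never run):

    /* c99probe.c -- build check: mpicc -c c99probe.c
     * (a strict C89 compiler would need -std=c99 or equivalent) */
    #include <float.h>

    int main(void)
    {
      float x[2], y;
      y = FLT_ROUNDS;               /* FLT_ROUNDS lives in <float.h> */
      // C++-style line comment: rejected by strict C89
      int j = 2;                    /* declaration after a statement: C99 */
      for (int i = 0; i < 2; i++) { /* loop-scope declaration: C99 */
        x[i] = i * j * y;
      }
      return 0;
    }
]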
================================================================================ TEST checkCxxLibraries from config.compilers(/p/work2/tmondrag/moose/petsc/config/BuildSystem/config/compilers.py:533) TESTING: checkCxxLibraries from config.compilers(/p/work2/tmondrag/moose/petsc/config/BuildSystem/config/compilers.py:533) Determines the libraries needed to link with C++ from C and Fortran Running Executable WITHOUT threads to time it out Executing: mpicxx -c -o /p/work2/tmondrag/petsc-N5i8ny/config.compilers/conftest.o -I/p/work2/tmondrag/petsc-N5i8ny/config.setCompilers -I/p/work2/tmondrag/petsc-N5i8ny/config.compilers -Wall -Wwrite-strings -Wno-strict-aliasing -Wno-unknown-pragmas -fstack-protector -fvisibility=hidden -g -O -fPIC /p/work2/tmondrag/petsc-N5i8ny/config.compilers/conftest.cc Successful compile: Source: #include "confdefs.h" #include "conffix.h" #include #include void asub(void) {std::vector v; try { throw 20; } catch (int e) { std::cout << "An exception occurred"; }} Running Executable WITHOUT threads to time it out Executing: mpicc -c -o /p/work2/tmondrag/petsc-N5i8ny/config.compilers/conftest.o -I/p/work2/tmondrag/petsc-N5i8ny/config.libraries -I/p/work2/tmondrag/petsc-N5i8ny/config.setCompilers -I/p/work2/tmondrag/petsc-N5i8ny/config.compilers -fPIC -Wall -Wwrite-strings -Wno-strict-aliasing -Wno-unknown-pragmas -fstack-protector -fvisibility=hidden -g -O /p/work2/tmondrag/petsc-N5i8ny/config.compilers/conftest.c Successful compile: Source: #include "confdefs.h" #include "conffix.h" int main(int argc,char **args) {return 0;} Running Executable WITHOUT threads to time it out Executing: mpicc -o /p/work2/tmondrag/petsc-N5i8ny/config.compilers/conftest -fPIC -Wall -Wwrite-strings -Wno-strict-aliasing -Wno-unknown-pragmas -fstack-protector -fvisibility=hidden -g -O /p/work2/tmondrag/petsc-N5i8ny/config.compilers/conftest.o /p/work2/tmondrag/petsc-N5i8ny/config.compilers/confc.o -ldl Possible ERROR while running linker: exit code 1 stderr: /usr/bin/ld: /p/work2/tmondrag/petsc-N5i8ny/config.compilers/confc.o: in function `asub()': /p/work2/tmondrag/petsc-N5i8ny/config.compilers/conftest.cc:7: undefined reference to `__cxa_allocate_exception' /usr/bin/ld: /p/work2/tmondrag/petsc-N5i8ny/config.compilers/conftest.cc:7: undefined reference to `typeinfo for int' /usr/bin/ld: /p/work2/tmondrag/petsc-N5i8ny/config.compilers/conftest.cc:7: undefined reference to `__cxa_throw' /usr/bin/ld: /p/work2/tmondrag/petsc-N5i8ny/config.compilers/conftest.cc:7: undefined reference to `__cxa_begin_catch' /usr/bin/ld: /p/work2/tmondrag/petsc-N5i8ny/config.compilers/conftest.cc:7: undefined reference to `std::cout' /usr/bin/ld: /p/work2/tmondrag/petsc-N5i8ny/config.compilers/conftest.cc:7: undefined reference to `std::basic_ostream >& std::operator<< >(std::basic_ostream >&, char const*)' /usr/bin/ld: /p/work2/tmondrag/petsc-N5i8ny/config.compilers/conftest.cc:7: undefined reference to `__cxa_end_catch' /usr/bin/ld: /p/work2/tmondrag/petsc-N5i8ny/config.compilers/conftest.cc:7: undefined reference to `__cxa_end_catch' /usr/bin/ld: /p/work2/tmondrag/petsc-N5i8ny/config.compilers/confc.o: in function `__static_initialization_and_destruction_0': /p/home/apps/gnu_compiler/7.2.0/include/c++/7.2.0/iostream:74: undefined reference to `std::ios_base::Init::Init()' /usr/bin/ld: /p/home/apps/gnu_compiler/7.2.0/include/c++/7.2.0/iostream:74: undefined reference to `std::ios_base::Init::~Init()' /usr/bin/ld: /p/work2/tmondrag/petsc-N5i8ny/config.compilers/confc.o:(.data.DW.ref._ZTIi[DW.ref._ZTIi]+0x0): 
undefined reference to `typeinfo for int' /usr/bin/ld: /p/work2/tmondrag/petsc-N5i8ny/config.compilers/confc.o:(.data.DW.ref.__gxx_personality_v0[DW.ref.__gxx_personality_v0]+0x0): undefined reference to `__gxx_personality_v0' collect2: error: ld returned 1 exit status Running Executable WITHOUT threads to time it out Executing: uname -s stdout: Linux Running Executable WITHOUT threads to time it out Executing: mpicxx -c -o /p/work2/tmondrag/petsc-N5i8ny/config.compilers/conftest.o -I/p/work2/tmondrag/petsc-N5i8ny/config.setCompilers -I/p/work2/tmondrag/petsc-N5i8ny/config.compilers -Wall -Wwrite-strings -Wno-strict-aliasing -Wno-unknown-pragmas -fstack-protector -fvisibility=hidden -g -O -fPIC /p/work2/tmondrag/petsc-N5i8ny/config.compilers/conftest.cc Successful compile: Source: #include "confdefs.h" #include "conffix.h" #include #include void asub(void) {std::vector v; try { throw 20; } catch (int e) { std::cout << "An exception occurred"; }} Running Executable WITHOUT threads to time it out Executing: mpicc -c -o /p/work2/tmondrag/petsc-N5i8ny/config.compilers/conftest.o -I/p/work2/tmondrag/petsc-N5i8ny/config.libraries -I/p/work2/tmondrag/petsc-N5i8ny/config.setCompilers -I/p/work2/tmondrag/petsc-N5i8ny/config.compilers -fPIC -Wall -Wwrite-strings -Wno-strict-aliasing -Wno-unknown-pragmas -fstack-protector -fvisibility=hidden -g -O /p/work2/tmondrag/petsc-N5i8ny/config.compilers/conftest.c Successful compile: Source: #include "confdefs.h" #include "conffix.h" int main(int argc,char **args) {return 0;} Running Executable WITHOUT threads to time it out Executing: mpicc -o /p/work2/tmondrag/petsc-N5i8ny/config.compilers/conftest -fPIC -Wall -Wwrite-strings -Wno-strict-aliasing -Wno-unknown-pragmas -fstack-protector -fvisibility=hidden -g -O /p/work2/tmondrag/petsc-N5i8ny/config.compilers/conftest.o /p/work2/tmondrag/petsc-N5i8ny/config.compilers/confc.o -lstdc++ -ldl compilers: C++ requires -lstdc++ to link with C compiler Running Executable WITHOUT threads to time it out Executing: mpicxx -c -o /p/work2/tmondrag/petsc-N5i8ny/config.compilers/conftest.o -I/p/work2/tmondrag/petsc-N5i8ny/config.setCompilers -I/p/work2/tmondrag/petsc-N5i8ny/config.compilers -Wall -Wwrite-strings -Wno-strict-aliasing -Wno-unknown-pragmas -fstack-protector -fvisibility=hidden -g -O -fPIC /p/work2/tmondrag/petsc-N5i8ny/config.compilers/conftest.cc Successful compile: Source: #include "confdefs.h" #include "conffix.h" #include #include void asub(void) {std::vector v; try { throw 20; } catch (int e) { std::cout << "An exception occurred"; }} Running Executable WITHOUT threads to time it out Executing: mpif90 -c -o /p/work2/tmondrag/petsc-N5i8ny/config.compilers/conftest.o -I/p/work2/tmondrag/petsc-N5i8ny/config.setCompilers -I/p/work2/tmondrag/petsc-N5i8ny/config.compilers -fPIC -Wall -ffree-line-length-0 -Wno-unused-dummy-argument -g -O /p/work2/tmondrag/petsc-N5i8ny/config.compilers/conftest.F90 Successful compile: Source: program main print*,'testing' stop end Running Executable WITHOUT threads to time it out Executing: mpif90 -o /p/work2/tmondrag/petsc-N5i8ny/config.compilers/conftest -fPIC -Wall -ffree-line-length-0 -Wno-unused-dummy-argument -g -O /p/work2/tmondrag/petsc-N5i8ny/config.compilers/conftest.o /p/work2/tmondrag/petsc-N5i8ny/config.compilers/confc.o -lstdc++ -ldl C++ libraries are not needed when using FC linker ================================================================================ TEST checkCxxDialect from 
config.compilers(/p/work2/tmondrag/moose/petsc/config/BuildSystem/config/compilers.py:449) TESTING: checkCxxDialect from config.compilers(/p/work2/tmondrag/moose/petsc/config/BuildSystem/config/compilers.py:449) Determine the Cxx dialect supported by the compiler [and correspoding compiler option - if any]. -with-cxx-dialect can take options: auto: use highest dialect configure can determine cxx17: [future?] cxx14: gnu++14 or c++14 cxx11: gnu++11 or c++11 0: disable CxxDialect check and use compiler default checkCxxDialect: checking CXX11 with flag: Defined "HAVE_CXX_DIALECT_CXX11" to "1" Running Executable WITHOUT threads to time it out Executing: mpicxx -c -o /p/work2/tmondrag/petsc-N5i8ny/config.setCompilers/conftest.o -I/p/work2/tmondrag/petsc-N5i8ny/config.setCompilers -I/p/work2/tmondrag/petsc-N5i8ny/config.compilers -Wall -Wwrite-strings -Wno-strict-aliasing -Wno-unknown-pragmas -fstack-protector -fvisibility=hidden -g -O -fPIC /p/work2/tmondrag/petsc-N5i8ny/config.setCompilers/conftest.cc Successful compile: Source: #include "confdefs.h" #include "conffix.h" #include #include #include template constexpr T Cubed( T x ) { return x*x*x; } int main() { std::random_device rd; std::mt19937 mt(rd()); std::normal_distribution dist(0,1); const double x = dist(mt); std::cout << x; ; return 0; } Running Executable WITHOUT threads to time it out Executing: mpicxx -V ================================================================================ TEST checkFortranNameMangling from config.compilers(/p/work2/tmondrag/moose/petsc/config/BuildSystem/config/compilers.py:838) TESTING: checkFortranNameMangling from config.compilers(/p/work2/tmondrag/moose/petsc/config/BuildSystem/config/compilers.py:838) Checks Fortran name mangling, and defines HAVE_FORTRAN_UNDERSCORE, HAVE_FORTRAN_NOUNDERSCORE, HAVE_FORTRAN_CAPS, or HAVE_FORTRAN_STDCALL Testing Fortran mangling type underscore with code void d1chk_(void){return;} Running Executable WITHOUT threads to time it out Executing: mpicc -c -o /p/work2/tmondrag/petsc-N5i8ny/config.compilers/conftest.o -I/p/work2/tmondrag/petsc-N5i8ny/config.libraries -I/p/work2/tmondrag/petsc-N5i8ny/config.setCompilers -I/p/work2/tmondrag/petsc-N5i8ny/config.compilers -fPIC -Wall -Wwrite-strings -Wno-strict-aliasing -Wno-unknown-pragmas -fstack-protector -fvisibility=hidden -g -O /p/work2/tmondrag/petsc-N5i8ny/config.compilers/conftest.c Successful compile: Source: #include "confdefs.h" #include "conffix.h" void d1chk_(void){return;} Running Executable WITHOUT threads to time it out Executing: mpif90 -c -o /p/work2/tmondrag/petsc-N5i8ny/config.compilers/conftest.o -I/p/work2/tmondrag/petsc-N5i8ny/config.setCompilers -I/p/work2/tmondrag/petsc-N5i8ny/config.compilers -fPIC -Wall -ffree-line-length-0 -Wno-unused-dummy-argument -g -O /p/work2/tmondrag/petsc-N5i8ny/config.compilers/conftest.F90 Successful compile: Source: program main call d1chk() end Running Executable WITHOUT threads to time it out Executing: mpif90 -o /p/work2/tmondrag/petsc-N5i8ny/config.compilers/conftest -fPIC -Wall -ffree-line-length-0 -Wno-unused-dummy-argument -g -O /p/work2/tmondrag/petsc-N5i8ny/config.compilers/conftest.o /p/work2/tmondrag/petsc-N5i8ny/config.compilers/confc.o -lstdc++ -ldl compilers: Fortran name mangling is underscore Defined "HAVE_FORTRAN_UNDERSCORE" to "1" Running Executable WITHOUT threads to time it out Executing: mpif90 --version stdout: GNU Fortran (GCC) 7.2.0 Copyright (C) 2017 Free Software Foundation, Inc. This is free software; see the source for copying conditions. 
There is NO warranty; not even for MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. Defined "FORTRAN_CHARLEN_T" to "int" ================================================================================ TEST checkFortranNameManglingDouble from config.compilers(/p/work2/tmondrag/moose/petsc/config/BuildSystem/config/compilers.py:879) TESTING: checkFortranNameManglingDouble from config.compilers(/p/work2/tmondrag/moose/petsc/config/BuildSystem/config/compilers.py:879) Checks if symbols containing an underscore append an extra underscore, and defines HAVE_FORTRAN_UNDERSCORE_UNDERSCORE if necessary Running Executable WITHOUT threads to time it out Executing: mpicc -c -o /p/work2/tmondrag/petsc-N5i8ny/config.compilers/conftest.o -I/p/work2/tmondrag/petsc-N5i8ny/config.libraries -I/p/work2/tmondrag/petsc-N5i8ny/config.setCompilers -I/p/work2/tmondrag/petsc-N5i8ny/config.compilers -fPIC -Wall -Wwrite-strings -Wno-strict-aliasing -Wno-unknown-pragmas -fstack-protector -fvisibility=hidden -g -O /p/work2/tmondrag/petsc-N5i8ny/config.compilers/conftest.c Successful compile: Source: #include "confdefs.h" #include "conffix.h" void d1_chk__(void){return;} Running Executable WITHOUT threads to time it out Executing: mpif90 -c -o /p/work2/tmondrag/petsc-N5i8ny/config.compilers/conftest.o -I/p/work2/tmondrag/petsc-N5i8ny/config.setCompilers -I/p/work2/tmondrag/petsc-N5i8ny/config.compilers -fPIC -Wall -ffree-line-length-0 -Wno-unused-dummy-argument -g -O /p/work2/tmondrag/petsc-N5i8ny/config.compilers/conftest.F90 Successful compile: Source: program main call d1_chk() end Running Executable WITHOUT threads to time it out Executing: mpif90 -o /p/work2/tmondrag/petsc-N5i8ny/config.compilers/conftest -fPIC -Wall -ffree-line-length-0 -Wno-unused-dummy-argument -g -O /p/work2/tmondrag/petsc-N5i8ny/config.compilers/conftest.o /p/work2/tmondrag/petsc-N5i8ny/config.compilers/confc.o -lstdc++ -ldl Possible ERROR while running linker: exit code 1 stderr: /usr/bin/ld: /p/work2/tmondrag/petsc-N5i8ny/config.compilers/conftest.o: in function `MAIN__': /p/work2/tmondrag/petsc-N5i8ny/config.compilers/conftest.F90:2: undefined reference to `d1_chk_' collect2: error: ld returned 1 exit status ================================================================================ TEST checkFortranLibraries from config.compilers(/p/work2/tmondrag/moose/petsc/config/BuildSystem/config/compilers.py:889) TESTING: checkFortranLibraries from config.compilers(/p/work2/tmondrag/moose/petsc/config/BuildSystem/config/compilers.py:889) Substitutes for FLIBS the libraries needed to link with Fortran This macro is intended to be used in those situations when it is necessary to mix, e.g. C++ and Fortran 77, source code into a single program or shared library. For example, if object files from a C++ and Fortran 77 compiler must be linked together, then the C++ compiler/linker must be used for linking (since special C++-ish things need to happen at link time like calling global constructors, instantiating templates, enabling exception support, etc.). However, the Fortran 77 intrinsic and run-time libraries must be linked in as well, but the C++ compiler/linker does not know how to add these Fortran 77 libraries. This code was translated from the autoconf macro which was packaged in its current form by Matthew D. Langston . However, nearly all of this macro came from the OCTAVE_FLIBS macro in octave-2.0.13/aclocal.m4, and full credit should go to John W. Eaton for writing this extremely useful macro. 
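[The failure that motivates this check appears just below: a Fortran object that merely prints references _gfortran_st_write and friends, which mpicc does not link by default, so configure re-links with mpif90 -v, reads the driver's real link line, and harvests the runtime libraries (on this machine, -lgfortran and -lm) to append whenever C or C++ performs the final link. A sketch of the C side of that probe; the file names and commands in the comments are illustrative, not the literal conftest paths:

    /* cmain.c -- the C half of the mixed-language link probe.
     * Fortran half (fsub.F90): a subroutine containing print*,'testing'
     *   mpif90 -c fsub.F90
     *   mpicc  -o probe cmain.c fsub.o                  (fails: _gfortran_st_write)
     *   mpicc  -o probe cmain.c fsub.o -lgfortran -lm   (links cleanly)
     * The -lgfortran -lm pair is what configure extracts from the
     * verbose gfortran link line shown further down. */
    int main(int argc, char **argv)
    {
        (void)argc;
        (void)argv;
        return 0;
    }
]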
Running Executable WITHOUT threads to time it out Executing: mpif90 -c -o /p/work2/tmondrag/petsc-N5i8ny/config.compilers/conftest.o -I/p/work2/tmondrag/petsc-N5i8ny/config.setCompilers -I/p/work2/tmondrag/petsc-N5i8ny/config.compilers -fPIC -Wall -ffree-line-length-0 -Wno-unused-dummy-argument -g -O /p/work2/tmondrag/petsc-N5i8ny/config.compilers/conftest.F90 Possible ERROR while running compiler: stderr: /p/home/apps/hpe/mpt-2.19/include/mpif.h:561:54: integer MPI_STATUSES_IGNORE(MPI_STATUS_SIZE,1) 1 Warning: Unused variable ‘mpi_statuses_ignore’ declared at (1) [-Wunused-variable] Source: program main #include call MPI_Allreduce() end Running Executable WITHOUT threads to time it out Executing: mpif90 -o /p/work2/tmondrag/petsc-N5i8ny/config.compilers/conftest -fPIC -Wall -ffree-line-length-0 -Wno-unused-dummy-argument -g -O /p/work2/tmondrag/petsc-N5i8ny/config.compilers/conftest.o -lstdc++ -ldl Running Executable WITHOUT threads to time it out Executing: mpif90 -c -o /p/work2/tmondrag/petsc-N5i8ny/config.compilers/conftest.o -I/p/work2/tmondrag/petsc-N5i8ny/config.setCompilers -I/p/work2/tmondrag/petsc-N5i8ny/config.compilers -fPIC -Wall -ffree-line-length-0 -Wno-unused-dummy-argument -g -O /p/work2/tmondrag/petsc-N5i8ny/config.compilers/conftest.F90 Successful compile: Source: subroutine asub() print*,'testing' call MPI_Allreduce() return end Running Executable WITHOUT threads to time it out Executing: mpicc -c -o /p/work2/tmondrag/petsc-N5i8ny/config.compilers/conftest.o -I/p/work2/tmondrag/petsc-N5i8ny/config.libraries -I/p/work2/tmondrag/petsc-N5i8ny/config.setCompilers -I/p/work2/tmondrag/petsc-N5i8ny/config.compilers -fPIC -Wall -Wwrite-strings -Wno-strict-aliasing -Wno-unknown-pragmas -fstack-protector -fvisibility=hidden -g -O /p/work2/tmondrag/petsc-N5i8ny/config.compilers/conftest.c Successful compile: Source: #include "confdefs.h" #include "conffix.h" int main(int argc,char **args) {return 0;} Running Executable WITHOUT threads to time it out Executing: mpicc -o /p/work2/tmondrag/petsc-N5i8ny/config.compilers/conftest -fPIC -Wall -Wwrite-strings -Wno-strict-aliasing -Wno-unknown-pragmas -fstack-protector -fvisibility=hidden -g -O /p/work2/tmondrag/petsc-N5i8ny/config.compilers/conftest.o /p/work2/tmondrag/petsc-N5i8ny/config.compilers/confc.o -lstdc++ -ldl Possible ERROR while running linker: exit code 1 stderr: /usr/bin/ld: /p/work2/tmondrag/petsc-N5i8ny/config.compilers/confc.o: in function `asub_': /p/work2/tmondrag/petsc-N5i8ny/config.compilers/conftest.F90:2: undefined reference to `_gfortran_st_write' /usr/bin/ld: /p/work2/tmondrag/petsc-N5i8ny/config.compilers/conftest.F90:2: undefined reference to `_gfortran_transfer_character_write' /usr/bin/ld: /p/work2/tmondrag/petsc-N5i8ny/config.compilers/conftest.F90:2: undefined reference to `_gfortran_st_write_done' collect2: error: ld returned 1 exit status Fortran code cannot directly be linked with C linker, therefore will determine needed Fortran libraries Running Executable WITHOUT threads to time it out Executing: mpif90 -c -o /p/work2/tmondrag/petsc-N5i8ny/config.compilers/conftest.o -I/p/work2/tmondrag/petsc-N5i8ny/config.setCompilers -I/p/work2/tmondrag/petsc-N5i8ny/config.compilers -fPIC -Wall -ffree-line-length-0 -Wno-unused-dummy-argument -g -O /p/work2/tmondrag/petsc-N5i8ny/config.compilers/conftest.F90 Successful compile: Source: program main end Running Executable WITHOUT threads to time it out Executing: mpif90 -o /p/work2/tmondrag/petsc-N5i8ny/config.compilers/conftest -v -fPIC -Wall -ffree-line-length-0
-Wno-unused-dummy-argument -g -O /p/work2/tmondrag/petsc-N5i8ny/config.compilers/conftest.o -lstdc++ -ldl Possible ERROR while running linker: stderr: Driving: gfortran -o /p/work2/tmondrag/petsc-N5i8ny/config.compilers/conftest -v -fPIC -Wall -ffree-line-length-0 -Wno-unused-dummy-argument -g -O /p/work2/tmondrag/petsc-N5i8ny/config.compilers/conftest.o -lstdc++ -ldl -I/p/home/apps/hpe/mpt-2.19/include -I/p/home/apps/gnu_compiler/7.2.0/include -I/p/home/apps/hpe/mpt-2.19/include -lpthread /usr/lib64/libcpuset.so.1 /usr/lib64/libbitmask.so.1 -L/p/home/apps/hpe/mpt-2.19/lib -lmpi -l gfortran -l m -shared-libgcc Using built-in specs. COLLECT_GCC=gfortran COLLECT_LTO_WRAPPER=/p/home/apps/gnu_compiler/7.2.0/libexec/gcc/x86_64-pc-linux-gnu/7.2.0/lto-wrapper Target: x86_64-pc-linux-gnu Configured with: /p/home/u4immtww/gcc-7.2.0/configure --prefix=/p/home/apps/gnu_compiler/7.2.0 --enable-languages=c,c++,fortran,go Thread model: posix gcc version 7.2.0 (GCC) Reading specs from /p/home/apps/gnu_compiler/7.2.0/lib/gcc/x86_64-pc-linux-gnu/7.2.0/../../../../lib64/libgfortran.spec rename spec lib to liborig COLLECT_GCC_OPTIONS='-o' '/p/work2/tmondrag/petsc-N5i8ny/config.compilers/conftest' '-v' '-fPIC' '-Wall' '-ffree-line-length-0' '-Wno-unused-dummy-argument' '-g' '-O' '-I' '/p/home/apps/hpe/mpt-2.19/include' '-I' '/p/home/apps/gnu_compiler/7.2.0/include' '-I' '/p/home/apps/hpe/mpt-2.19/include' '-L/p/home/apps/hpe/mpt-2.19/lib' '-shared-libgcc' '-mtune=generic' '-march=x86-64' COMPILER_PATH=/p/home/apps/gnu_compiler/7.2.0/libexec/gcc/x86_64-pc-linux-gnu/7.2.0/:/p/home/apps/gnu_compiler/7.2.0/libexec/gcc/x86_64-pc-linux-gnu/7.2.0/:/p/home/apps/gnu_compiler/7.2.0/libexec/gcc/x86_64-pc-linux-gnu/:/p/home/apps/gnu_compiler/7.2.0/lib/gcc/x86_64-pc-linux-gnu/7.2.0/:/p/home/apps/gnu_compiler/7.2.0/lib/gcc/x86_64-pc-linux-gnu/ LIBRARY_PATH=/p/home/apps/gnu_compiler/7.2.0/lib/gcc/x86_64-pc-linux-gnu/7.2.0/:/p/home/apps/gnu_compiler/7.2.0/lib/gcc/x86_64-pc-linux-gnu/7.2.0/../../../../lib64/:/lib/../lib64/:/usr/lib/../lib64/:/p/home/apps/hpe/mpt-2.19/lib/:/p/home/apps/gnu_compiler/7.2.0/lib/gcc/x86_64-pc-linux-gnu/7.2.0/../../../:/lib/:/usr/lib/ COLLECT_GCC_OPTIONS='-o' '/p/work2/tmondrag/petsc-N5i8ny/config.compilers/conftest' '-v' '-fPIC' '-Wall' '-ffree-line-length-0' '-Wno-unused-dummy-argument' '-g' '-O' '-I' '/p/home/apps/hpe/mpt-2.19/include' '-I' '/p/home/apps/gnu_compiler/7.2.0/include' '-I' '/p/home/apps/hpe/mpt-2.19/include' '-L/p/home/apps/hpe/mpt-2.19/lib' '-shared-libgcc' '-mtune=generic' '-march=x86-64' /p/home/apps/gnu_compiler/7.2.0/libexec/gcc/x86_64-pc-linux-gnu/7.2.0/collect2 -plugin /p/home/apps/gnu_compiler/7.2.0/libexec/gcc/x86_64-pc-linux-gnu/7.2.0/liblto_plugin.so -plugin-opt=/p/home/apps/gnu_compiler/7.2.0/libexec/gcc/x86_64-pc-linux-gnu/7.2.0/lto-wrapper -plugin-opt=-fresolution=/p/work2/tmondrag/cceWXJkS.res -plugin-opt=-pass-through=-lgcc_s -plugin-opt=-pass-through=-lgcc -plugin-opt=-pass-through=-lquadmath -plugin-opt=-pass-through=-lm -plugin-opt=-pass-through=-lgcc_s -plugin-opt=-pass-through=-lgcc -plugin-opt=-pass-through=-lc -plugin-opt=-pass-through=-lgcc_s -plugin-opt=-pass-through=-lgcc --eh-frame-hdr -m elf_x86_64 -dynamic-linker /lib64/ld-linux-x86-64.so.2 -o /p/work2/tmondrag/petsc-N5i8ny/config.compilers/conftest /usr/lib/../lib64/crt1.o /usr/lib/../lib64/crti.o /p/home/apps/gnu_compiler/7.2.0/lib/gcc/x86_64-pc-linux-gnu/7.2.0/crtbegin.o -L/p/home/apps/hpe/mpt-2.19/lib -L/p/home/apps/gnu_compiler/7.2.0/lib/gcc/x86_64-pc-linux-gnu/7.2.0 
-L/p/home/apps/gnu_compiler/7.2.0/lib/gcc/x86_64-pc-linux-gnu/7.2.0/../../../../lib64 -L/lib/../lib64 -L/usr/lib/../lib64 -L/p/home/apps/hpe/mpt-2.19/lib -L/p/home/apps/gnu_compiler/7.2.0/lib/gcc/x86_64-pc-linux-gnu/7.2.0/../../.. /p/work2/tmondrag/petsc-N5i8ny/config.compilers/conftest.o -lstdc++ -ldl -lpthread /usr/lib64/libcpuset.so.1 /usr/lib64/libbitmask.so.1 -lmpi -lgfortran -lm -lgcc_s -lgcc -lquadmath -lm -lgcc_s -lgcc -lc -lgcc_s -lgcc /p/home/apps/gnu_compiler/7.2.0/lib/gcc/x86_64-pc-linux-gnu/7.2.0/crtend.o /usr/lib/../lib64/crtn.o COLLECT_GCC_OPTIONS='-o' '/p/work2/tmondrag/petsc-N5i8ny/config.compilers/conftest' '-v' '-fPIC' '-Wall' '-ffree-line-length-0' '-Wno-unused-dummy-argument' '-g' '-O' '-I' '/p/home/apps/hpe/mpt-2.19/include' '-I' '/p/home/apps/gnu_compiler/7.2.0/include' '-I' '/p/home/apps/hpe/mpt-2.19/include' '-L/p/home/apps/hpe/mpt-2.19/lib' '-shared-libgcc' '-mtune=generic' '-march=x86-64' compilers: Checking arg Driving: compilers: Unknown arg Driving: compilers: Checking arg gfortran compilers: Unknown arg gfortran compilers: Checking arg -o compilers: Unknown arg -o compilers: Checking arg /p/work2/tmondrag/petsc-N5i8ny/config.compilers/conftest compilers: Unknown arg /p/work2/tmondrag/petsc-N5i8ny/config.compilers/conftest compilers: Checking arg -v compilers: Unknown arg -v compilers: Checking arg -fPIC compilers: Unknown arg -fPIC compilers: Checking arg -Wall compilers: Unknown arg -Wall compilers: Checking arg -ffree-line-length-0 compilers: Unknown arg -ffree-line-length-0 compilers: Checking arg -Wno-unused-dummy-argument compilers: Unknown arg -Wno-unused-dummy-argument compilers: Checking arg -g compilers: Unknown arg -g compilers: Checking arg -O compilers: Unknown arg -O compilers: Checking arg /p/work2/tmondrag/petsc-N5i8ny/config.compilers/conftest.o compilers: Unknown arg /p/work2/tmondrag/petsc-N5i8ny/config.compilers/conftest.o compilers: Checking arg -lstdc++ compilers: Found library: -lstdc++ compilers: Checking arg -ldl compilers: Found library: -ldl compilers: Checking arg -I/p/home/apps/hpe/mpt-2.19/include compilers: Found include directory: /p/home/apps/hpe/mpt-2.19/include compilers: Checking arg -I/p/home/apps/gnu_compiler/7.2.0/include compilers: Found include directory: /p/home/apps/gnu_compiler/7.2.0/include compilers: Checking arg -I/p/home/apps/hpe/mpt-2.19/include compilers: Found include directory: /p/home/apps/hpe/mpt-2.19/include compilers: Checking arg -lpthread compilers: Found library: -lpthread compilers: Checking arg /usr/lib64/libcpuset.so.1 compilers: Unknown arg /usr/lib64/libcpuset.so.1 compilers: Checking arg /usr/lib64/libbitmask.so.1 compilers: Unknown arg /usr/lib64/libbitmask.so.1 compilers: Checking arg -L/p/home/apps/hpe/mpt-2.19/lib compilers: Found library directory: -L/p/home/apps/hpe/mpt-2.19/lib compilers: Checking arg -lmpi compilers: Found library: -lmpi compilers: Checking arg -l compilers: Found canonical library: -lgfortran compilers: Checking arg -l compilers: Found canonical library: -lm compilers: Checking arg -shared-libgcc compilers: Unknown arg -shared-libgcc compilers: Checking arg Using compilers: Unknown arg Using compilers: Checking arg built-in compilers: Unknown arg built-in compilers: Checking arg specs. compilers: Unknown arg specs. 
compilers: Checking arg COLLECT_GCC=gfortran compilers: Unknown arg COLLECT_GCC=gfortran compilers: Checking arg COLLECT_LTO_WRAPPER=/p/home/apps/gnu_compiler/7.2.0/libexec/gcc/x86_64-pc-linux-gnu/7.2.0/lto-wrapper compilers: Unknown arg COLLECT_LTO_WRAPPER=/p/home/apps/gnu_compiler/7.2.0/libexec/gcc/x86_64-pc-linux-gnu/7.2.0/lto-wrapper compilers: Checking arg Target: compilers: Unknown arg Target: compilers: Checking arg x86_64-pc-linux-gnu compilers: Unknown arg x86_64-pc-linux-gnu compilers: Checking arg Configured compilers: Unknown arg Configured compilers: Checking arg with: compilers: Unknown arg with: compilers: Checking arg /p/home/u4immtww/gcc-7.2.0/configure compilers: Unknown arg /p/home/u4immtww/gcc-7.2.0/configure compilers: Checking arg --prefix=/p/home/apps/gnu_compiler/7.2.0 compilers: Unknown arg --prefix=/p/home/apps/gnu_compiler/7.2.0 compilers: Checking arg --enable-languages=c,c++,fortran,go compilers: Unknown arg --enable-languages=c,c++,fortran,go compilers: Checking arg Thread compilers: Unknown arg Thread compilers: Checking arg model: compilers: Unknown arg model: compilers: Checking arg posix compilers: Unknown arg posix compilers: Checking arg gcc compilers: Unknown arg gcc compilers: Checking arg version compilers: Unknown arg version compilers: Checking arg 7.2.0 compilers: Unknown arg 7.2.0 compilers: Checking arg (GCC) compilers: Unknown arg (GCC) compilers: Checking arg Reading compilers: Unknown arg Reading compilers: Checking arg specs compilers: Unknown arg specs compilers: Checking arg from compilers: Unknown arg from compilers: Checking arg /p/home/apps/gnu_compiler/7.2.0/lib/gcc/x86_64-pc-linux-gnu/7.2.0/../../../../lib64/libgfortran.spec compilers: Unknown arg /p/home/apps/gnu_compiler/7.2.0/lib/gcc/x86_64-pc-linux-gnu/7.2.0/../../../../lib64/libgfortran.spec compilers: Checking arg rename compilers: Unknown arg rename compilers: Checking arg spec compilers: Unknown arg spec compilers: Checking arg lib compilers: Unknown arg lib compilers: Checking arg to compilers: Unknown arg to compilers: Checking arg liborig compilers: Unknown arg liborig compilers: Checking arg COLLECT_GCC_OPTIONS= compilers: Unknown arg COLLECT_GCC_OPTIONS= compilers: Checking arg COMPILER_PATH=/p/home/apps/gnu_compiler/7.2.0/libexec/gcc/x86_64-pc-linux-gnu/7.2.0/:/p/home/apps/gnu_compiler/7.2.0/libexec/gcc/x86_64-pc-linux-gnu/7.2.0/:/p/home/apps/gnu_compiler/7.2.0/libexec/gcc/x86_64-pc-linux-gnu/:/p/home/apps/gnu_compiler/7.2.0/lib/gcc/x86_64-pc-linux-gnu/7.2.0/:/p/home/apps/gnu_compiler/7.2.0/lib/gcc/x86_64-pc-linux-gnu/ compilers: Skipping arg COMPILER_PATH=/p/home/apps/gnu_compiler/7.2.0/libexec/gcc/x86_64-pc-linux-gnu/7.2.0/:/p/home/apps/gnu_compiler/7.2.0/libexec/gcc/x86_64-pc-linux-gnu/7.2.0/:/p/home/apps/gnu_compiler/7.2.0/libexec/gcc/x86_64-pc-linux-gnu/:/p/home/apps/gnu_compiler/7.2.0/lib/gcc/x86_64-pc-linux-gnu/7.2.0/:/p/home/apps/gnu_compiler/7.2.0/lib/gcc/x86_64-pc-linux-gnu/ compilers: Checking arg LIBRARY_PATH=/p/home/apps/gnu_compiler/7.2.0/lib/gcc/x86_64-pc-linux-gnu/7.2.0/:/p/home/apps/gnu_compiler/7.2.0/lib/gcc/x86_64-pc-linux-gnu/7.2.0/../../../../lib64/:/lib/../lib64/:/usr/lib/../lib64/:/p/home/apps/hpe/mpt-2.19/lib/:/p/home/apps/gnu_compiler/7.2.0/lib/gcc/x86_64-pc-linux-gnu/7.2.0/../../../:/lib/:/usr/lib/ compilers: Skipping arg 
LIBRARY_PATH=/p/home/apps/gnu_compiler/7.2.0/lib/gcc/x86_64-pc-linux-gnu/7.2.0/:/p/home/apps/gnu_compiler/7.2.0/lib/gcc/x86_64-pc-linux-gnu/7.2.0/../../../../lib64/:/lib/../lib64/:/usr/lib/../lib64/:/p/home/apps/hpe/mpt-2.19/lib/:/p/home/apps/gnu_compiler/7.2.0/lib/gcc/x86_64-pc-linux-gnu/7.2.0/../../../:/lib/:/usr/lib/ compilers: Checking arg COLLECT_GCC_OPTIONS= compilers: Unknown arg COLLECT_GCC_OPTIONS= compilers: Checking arg /p/home/apps/gnu_compiler/7.2.0/libexec/gcc/x86_64-pc-linux-gnu/7.2.0/collect2 compilers: Unknown arg /p/home/apps/gnu_compiler/7.2.0/libexec/gcc/x86_64-pc-linux-gnu/7.2.0/collect2 compilers: Checking arg -plugin compilers: Unknown arg -plugin compilers: Checking arg /p/home/apps/gnu_compiler/7.2.0/libexec/gcc/x86_64-pc-linux-gnu/7.2.0/liblto_plugin.so compilers: Unknown arg /p/home/apps/gnu_compiler/7.2.0/libexec/gcc/x86_64-pc-linux-gnu/7.2.0/liblto_plugin.so compilers: Checking arg -plugin-opt=/p/home/apps/gnu_compiler/7.2.0/libexec/gcc/x86_64-pc-linux-gnu/7.2.0/lto-wrapper compilers: Unknown arg -plugin-opt=/p/home/apps/gnu_compiler/7.2.0/libexec/gcc/x86_64-pc-linux-gnu/7.2.0/lto-wrapper compilers: Checking arg -plugin-opt=-fresolution=/p/work2/tmondrag/cceWXJkS.res compilers: Unknown arg -plugin-opt=-fresolution=/p/work2/tmondrag/cceWXJkS.res compilers: Checking arg -plugin-opt=-pass-through=-lgcc_s compilers: Unknown arg -plugin-opt=-pass-through=-lgcc_s compilers: Checking arg -plugin-opt=-pass-through=-lgcc compilers: Unknown arg -plugin-opt=-pass-through=-lgcc compilers: Checking arg -plugin-opt=-pass-through=-lquadmath compilers: Unknown arg -plugin-opt=-pass-through=-lquadmath compilers: Checking arg -plugin-opt=-pass-through=-lm compilers: Unknown arg -plugin-opt=-pass-through=-lm compilers: Checking arg -plugin-opt=-pass-through=-lgcc_s compilers: Unknown arg -plugin-opt=-pass-through=-lgcc_s compilers: Checking arg -plugin-opt=-pass-through=-lgcc compilers: Unknown arg -plugin-opt=-pass-through=-lgcc compilers: Checking arg -plugin-opt=-pass-through=-lc compilers: Unknown arg -plugin-opt=-pass-through=-lc compilers: Checking arg -plugin-opt=-pass-through=-lgcc_s compilers: Unknown arg -plugin-opt=-pass-through=-lgcc_s compilers: Checking arg -plugin-opt=-pass-through=-lgcc compilers: Unknown arg -plugin-opt=-pass-through=-lgcc compilers: Checking arg --eh-frame-hdr compilers: Unknown arg --eh-frame-hdr compilers: Checking arg -m compilers: Unknown arg -m compilers: Checking arg elf_x86_64 compilers: Unknown arg elf_x86_64 compilers: Checking arg -dynamic-linker compilers: Unknown arg -dynamic-linker compilers: Checking arg /lib64/ld-linux-x86-64.so.2 compilers: Unknown arg /lib64/ld-linux-x86-64.so.2 compilers: Checking arg -o compilers: Unknown arg -o compilers: Checking arg /p/work2/tmondrag/petsc-N5i8ny/config.compilers/conftest compilers: Unknown arg /p/work2/tmondrag/petsc-N5i8ny/config.compilers/conftest compilers: Checking arg /usr/lib/../lib64/crt1.o compilers: Unknown arg /usr/lib/../lib64/crt1.o compilers: Checking arg /usr/lib/../lib64/crti.o compilers: Unknown arg /usr/lib/../lib64/crti.o compilers: Checking arg /p/home/apps/gnu_compiler/7.2.0/lib/gcc/x86_64-pc-linux-gnu/7.2.0/crtbegin.o compilers: Unknown arg /p/home/apps/gnu_compiler/7.2.0/lib/gcc/x86_64-pc-linux-gnu/7.2.0/crtbegin.o compilers: Checking arg -L/p/home/apps/hpe/mpt-2.19/lib compilers: Already in lflags so skipping: -L/p/home/apps/hpe/mpt-2.19/lib compilers: Checking arg -L/p/home/apps/gnu_compiler/7.2.0/lib/gcc/x86_64-pc-linux-gnu/7.2.0 compilers: Found library directory: 
-L/p/home/apps/gnu_compiler/7.2.0/lib/gcc/x86_64-pc-linux-gnu/7.2.0 compilers: Checking arg -L/p/home/apps/gnu_compiler/7.2.0/lib/gcc/x86_64-pc-linux-gnu/7.2.0/../../../../lib64 compilers: Found library directory: -L/p/home/apps/gnu_compiler/7.2.0/lib64 compilers: Checking arg -L/lib/../lib64 compilers: Checking arg -L/usr/lib/../lib64 compilers: Checking arg -L/p/home/apps/hpe/mpt-2.19/lib compilers: Already in lflags so skipping: -L/p/home/apps/hpe/mpt-2.19/lib compilers: Checking arg -L/p/home/apps/gnu_compiler/7.2.0/lib/gcc/x86_64-pc-linux-gnu/7.2.0/../../.. compilers: Found library directory: -L/p/home/apps/gnu_compiler/7.2.0/lib compilers: Checking arg /p/work2/tmondrag/petsc-N5i8ny/config.compilers/conftest.o compilers: Unknown arg /p/work2/tmondrag/petsc-N5i8ny/config.compilers/conftest.o compilers: Checking arg -lstdc++ compilers: Already in lflags: -lstdc++ compilers: Checking arg -ldl compilers: Already in lflags: -ldl compilers: Checking arg -lpthread compilers: Already in lflags: -lpthread compilers: Checking arg /usr/lib64/libcpuset.so.1 compilers: Unknown arg /usr/lib64/libcpuset.so.1 compilers: Checking arg /usr/lib64/libbitmask.so.1 compilers: Unknown arg /usr/lib64/libbitmask.so.1 compilers: Checking arg -lmpi compilers: Already in lflags: -lmpi compilers: Checking arg -lgfortran compilers: Found library: -lgfortran compilers: Checking arg -lm compilers: Found library: -lm compilers: Checking arg -lgcc_s compilers: Found library: -lgcc_s compilers: Checking arg -lgcc compilers: Found system library therefore skipping: -lgcc compilers: Checking arg -lquadmath compilers: Found library: -lquadmath compilers: Checking arg -lm compilers: Already in lflags: -lm compilers: Checking arg -lgcc_s compilers: Already in lflags: -lgcc_s compilers: Checking arg -lgcc compilers: Found system library therefore skipping: -lgcc compilers: Checking arg -lc compilers: Found system library therefore skipping: -lc compilers: Checking arg -lgcc_s compilers: Already in lflags: -lgcc_s compilers: Checking arg -lgcc compilers: Found system library therefore skipping: -lgcc compilers: Checking arg /p/home/apps/gnu_compiler/7.2.0/lib/gcc/x86_64-pc-linux-gnu/7.2.0/crtend.o compilers: Unknown arg /p/home/apps/gnu_compiler/7.2.0/lib/gcc/x86_64-pc-linux-gnu/7.2.0/crtend.o compilers: Checking arg /usr/lib/../lib64/crtn.o compilers: Unknown arg /usr/lib/../lib64/crtn.o compilers: Checking arg COLLECT_GCC_OPTIONS= compilers: Unknown arg COLLECT_GCC_OPTIONS= compilers: Libraries needed to link Fortran code with the C linker: ['-lstdc++', '-ldl', '-lpthread', '-Wl,-rpath,/p/home/apps/hpe/mpt-2.19/lib', '-L/p/home/apps/hpe/mpt-2.19/lib', '-lmpi', '-lgfortran', '-lm', '-Wl,-rpath,/p/home/apps/gnu_compiler/7.2.0/lib/gcc/x86_64-pc-linux-gnu/7.2.0', '-L/p/home/apps/gnu_compiler/7.2.0/lib/gcc/x86_64-pc-linux-gnu/7.2.0', '-Wl,-rpath,/p/home/apps/gnu_compiler/7.2.0/lib64', '-L/p/home/apps/gnu_compiler/7.2.0/lib64', '-Wl,-rpath,/p/home/apps/gnu_compiler/7.2.0/lib', '-L/p/home/apps/gnu_compiler/7.2.0/lib', '-lgfortran', '-lm', '-lgcc_s', '-lquadmath'] compilers: Libraries needed to link Fortran main with the C linker: [] compilers: Check that Fortran libraries can be used with C as the linker Running Executable WITHOUT threads to time it out Executing: mpicc -c -o /p/work2/tmondrag/petsc-N5i8ny/config.setCompilers/conftest.o -I/p/work2/tmondrag/petsc-N5i8ny/config.libraries -I/p/work2/tmondrag/petsc-N5i8ny/config.setCompilers -I/p/work2/tmondrag/petsc-N5i8ny/config.compilers -fPIC -Wall -Wwrite-strings 
-Wno-strict-aliasing -Wno-unknown-pragmas -fstack-protector -fvisibility=hidden -g -O /p/work2/tmondrag/petsc-N5i8ny/config.setCompilers/conftest.c Successful compile: Source: #include "confdefs.h" #include "conffix.h" int main() { ; return 0; } Running Executable WITHOUT threads to time it out Executing: mpicc -c -o /p/work2/tmondrag/petsc-N5i8ny/config.setCompilers/conftest.o -I/p/work2/tmondrag/petsc-N5i8ny/config.libraries -I/p/work2/tmondrag/petsc-N5i8ny/config.setCompilers -I/p/work2/tmondrag/petsc-N5i8ny/config.compilers -fPIC -Wall -Wwrite-strings -Wno-strict-aliasing -Wno-unknown-pragmas -fstack-protector -fvisibility=hidden -g -O /p/work2/tmondrag/petsc-N5i8ny/config.setCompilers/conftest.c Successful compile: Source: #include "confdefs.h" #include "conffix.h" int main() { ; return 0; } Running Executable WITHOUT threads to time it out Executing: mpicc -o /p/work2/tmondrag/petsc-N5i8ny/config.setCompilers/conftest -fPIC -Wall -Wwrite-strings -Wno-strict-aliasing -Wno-unknown-pragmas -fstack-protector -fvisibility=hidden -g -O /p/work2/tmondrag/petsc-N5i8ny/config.setCompilers/conftest.o -lstdc++ -ldl -lpthread -Wl,-rpath,/p/home/apps/hpe/mpt-2.19/lib -L/p/home/apps/hpe/mpt-2.19/lib -lmpi -lgfortran -lm -Wl,-rpath,/p/home/apps/gnu_compiler/7.2.0/lib/gcc/x86_64-pc-linux-gnu/7.2.0 -L/p/home/apps/gnu_compiler/7.2.0/lib/gcc/x86_64-pc-linux-gnu/7.2.0 -Wl,-rpath,/p/home/apps/gnu_compiler/7.2.0/lib64 -L/p/home/apps/gnu_compiler/7.2.0/lib64 -Wl,-rpath,/p/home/apps/gnu_compiler/7.2.0/lib -L/p/home/apps/gnu_compiler/7.2.0/lib -lgfortran -lm -lgcc_s -lquadmath -lstdc++ -ldl Running Executable WITHOUT threads to time it out Executing: mpicc -c -o /p/work2/tmondrag/petsc-N5i8ny/config.setCompilers/conftest.o -I/p/work2/tmondrag/petsc-N5i8ny/config.libraries -I/p/work2/tmondrag/petsc-N5i8ny/config.setCompilers -I/p/work2/tmondrag/petsc-N5i8ny/config.compilers -fPIC -Wall -Wwrite-strings -Wno-strict-aliasing -Wno-unknown-pragmas -fstack-protector -fvisibility=hidden -g -O /p/work2/tmondrag/petsc-N5i8ny/config.setCompilers/conftest.c Successful compile: Source: #include "confdefs.h" #include "conffix.h" int main() { ; return 0; } Running Executable WITHOUT threads to time it out Executing: mpicc -o /p/work2/tmondrag/petsc-N5i8ny/config.setCompilers/conftest -fPIC -Wall -Wwrite-strings -Wno-strict-aliasing -Wno-unknown-pragmas -fstack-protector -fvisibility=hidden -g -O /p/work2/tmondrag/petsc-N5i8ny/config.setCompilers/conftest.o -lstdc++ -ldl -lpthread -Wl,-rpath,/p/home/apps/hpe/mpt-2.19/lib -L/p/home/apps/hpe/mpt-2.19/lib -lmpi -lgfortran -lm -Wl,-rpath,/p/home/apps/gnu_compiler/7.2.0/lib/gcc/x86_64-pc-linux-gnu/7.2.0 -L/p/home/apps/gnu_compiler/7.2.0/lib/gcc/x86_64-pc-linux-gnu/7.2.0 -Wl,-rpath,/p/home/apps/gnu_compiler/7.2.0/lib64 -L/p/home/apps/gnu_compiler/7.2.0/lib64 -Wl,-rpath,/p/home/apps/gnu_compiler/7.2.0/lib -L/p/home/apps/gnu_compiler/7.2.0/lib -lgfortran -lm -lgcc_s -lquadmath -lstdc++ -ldl -lpetsc-ufod4vtr9mqHvKIQiVAm Possible ERROR while running linker: exit code 1 stderr: /usr/bin/ld: cannot find -lpetsc-ufod4vtr9mqHvKIQiVAm collect2: error: ld returned 1 exit status compilers: Check that Fortran libraries can be used with C++ as linker compilers: Fortran libraries can be used from C++ Running Executable WITHOUT threads to time it out Executing: mpicxx -c -o /p/work2/tmondrag/petsc-N5i8ny/config.setCompilers/conftest.o -I/p/work2/tmondrag/petsc-N5i8ny/config.setCompilers -I/p/work2/tmondrag/petsc-N5i8ny/config.compilers -Wall -Wwrite-strings -Wno-strict-aliasing 
-Wno-unknown-pragmas -fstack-protector -fvisibility=hidden -g -O -fPIC /p/work2/tmondrag/petsc-N5i8ny/config.setCompilers/conftest.cc Successful compile: Source: #include "confdefs.h" #include "conffix.h" int main() { ; return 0; } Running Executable WITHOUT threads to time it out Executing: mpicxx -c -o /p/work2/tmondrag/petsc-N5i8ny/config.setCompilers/conftest.o -I/p/work2/tmondrag/petsc-N5i8ny/config.setCompilers -I/p/work2/tmondrag/petsc-N5i8ny/config.compilers -Wall -Wwrite-strings -Wno-strict-aliasing -Wno-unknown-pragmas -fstack-protector -fvisibility=hidden -g -O -fPIC /p/work2/tmondrag/petsc-N5i8ny/config.setCompilers/conftest.cc Successful compile: Source: #include "confdefs.h" #include "conffix.h" int main() { ; return 0; } Running Executable WITHOUT threads to time it out Executing: mpicxx -o /p/work2/tmondrag/petsc-N5i8ny/config.setCompilers/conftest -Wall -Wwrite-strings -Wno-strict-aliasing -Wno-unknown-pragmas -fstack-protector -fvisibility=hidden -g -O /p/work2/tmondrag/petsc-N5i8ny/config.setCompilers/conftest.o -lstdc++ -ldl -lpthread -Wl,-rpath,/p/home/apps/hpe/mpt-2.19/lib -L/p/home/apps/hpe/mpt-2.19/lib -lmpi -lgfortran -lm -Wl,-rpath,/p/home/apps/gnu_compiler/7.2.0/lib/gcc/x86_64-pc-linux-gnu/7.2.0 -L/p/home/apps/gnu_compiler/7.2.0/lib/gcc/x86_64-pc-linux-gnu/7.2.0 -Wl,-rpath,/p/home/apps/gnu_compiler/7.2.0/lib64 -L/p/home/apps/gnu_compiler/7.2.0/lib64 -Wl,-rpath,/p/home/apps/gnu_compiler/7.2.0/lib -L/p/home/apps/gnu_compiler/7.2.0/lib -lgfortran -lm -lgcc_s -lquadmath -lstdc++ -ldl Running Executable WITHOUT threads to time it out Executing: mpicxx -c -o /p/work2/tmondrag/petsc-N5i8ny/config.setCompilers/conftest.o -I/p/work2/tmondrag/petsc-N5i8ny/config.setCompilers -I/p/work2/tmondrag/petsc-N5i8ny/config.compilers -Wall -Wwrite-strings -Wno-strict-aliasing -Wno-unknown-pragmas -fstack-protector -fvisibility=hidden -g -O -fPIC /p/work2/tmondrag/petsc-N5i8ny/config.setCompilers/conftest.cc Successful compile: Source: #include "confdefs.h" #include "conffix.h" int main() { ; return 0; } Running Executable WITHOUT threads to time it out Executing: mpicxx -o /p/work2/tmondrag/petsc-N5i8ny/config.setCompilers/conftest -Wall -Wwrite-strings -Wno-strict-aliasing -Wno-unknown-pragmas -fstack-protector -fvisibility=hidden -g -O /p/work2/tmondrag/petsc-N5i8ny/config.setCompilers/conftest.o -lstdc++ -ldl -lpthread -Wl,-rpath,/p/home/apps/hpe/mpt-2.19/lib -L/p/home/apps/hpe/mpt-2.19/lib -lmpi -lgfortran -lm -Wl,-rpath,/p/home/apps/gnu_compiler/7.2.0/lib/gcc/x86_64-pc-linux-gnu/7.2.0 -L/p/home/apps/gnu_compiler/7.2.0/lib/gcc/x86_64-pc-linux-gnu/7.2.0 -Wl,-rpath,/p/home/apps/gnu_compiler/7.2.0/lib64 -L/p/home/apps/gnu_compiler/7.2.0/lib64 -Wl,-rpath,/p/home/apps/gnu_compiler/7.2.0/lib -L/p/home/apps/gnu_compiler/7.2.0/lib -lgfortran -lm -lgcc_s -lquadmath -lstdc++ -ldl -lpetsc-ufod4vtr9mqHvKIQiVAm Possible ERROR while running linker: exit code 1 stderr: /usr/bin/ld: cannot find -lpetsc-ufod4vtr9mqHvKIQiVAm collect2: error: ld returned 1 exit status ================================================================================ TEST checkFortranLinkingCxx from config.compilers(/p/work2/tmondrag/moose/petsc/config/BuildSystem/config/compilers.py:1310) TESTING: checkFortranLinkingCxx from config.compilers(/p/work2/tmondrag/moose/petsc/config/BuildSystem/config/compilers.py:1310) Check that Fortran can link C++ libraries Running Executable WITHOUT threads to time it out Executing: mpicxx -c -o /p/work2/tmondrag/petsc-N5i8ny/config.compilers/conftest.o 
-I/p/work2/tmondrag/petsc-N5i8ny/config.setCompilers -I/p/work2/tmondrag/petsc-N5i8ny/config.compilers -Wall -Wwrite-strings -Wno-strict-aliasing -Wno-unknown-pragmas -fstack-protector -fvisibility=hidden -g -O -fPIC /p/work2/tmondrag/petsc-N5i8ny/config.compilers/conftest.cc Successful compile: Source: #include "confdefs.h" #include "conffix.h" extern "C" void d1chk_(void); void foo(void){d1chk_();} Running Executable WITHOUT threads to time it out Executing: mpicxx -c -o /p/work2/tmondrag/petsc-N5i8ny/config.compilers/conftest.o -I/p/work2/tmondrag/petsc-N5i8ny/config.setCompilers -I/p/work2/tmondrag/petsc-N5i8ny/config.compilers -Wall -Wwrite-strings -Wno-strict-aliasing -Wno-unknown-pragmas -fstack-protector -fvisibility=hidden -g -O -fPIC /p/work2/tmondrag/petsc-N5i8ny/config.compilers/conftest.cc Successful compile: Source: #include "confdefs.h" #include "conffix.h" extern "C" void d1chk_(void); void d1chk_(void){return;} Running Executable WITHOUT threads to time it out Executing: mpif90 -c -o /p/work2/tmondrag/petsc-N5i8ny/config.compilers/conftest.o -I/p/work2/tmondrag/petsc-N5i8ny/config.setCompilers -I/p/work2/tmondrag/petsc-N5i8ny/config.compilers -fPIC -Wall -ffree-line-length-0 -Wno-unused-dummy-argument -g -O /p/work2/tmondrag/petsc-N5i8ny/config.compilers/conftest.F90 Successful compile: Source: program main call d1chk() end Running Executable WITHOUT threads to time it out Executing: mpif90 -o /p/work2/tmondrag/petsc-N5i8ny/config.compilers/conftest -fPIC -Wall -ffree-line-length-0 -Wno-unused-dummy-argument -g -O /p/work2/tmondrag/petsc-N5i8ny/config.compilers/conftest.o /p/work2/tmondrag/petsc-N5i8ny/config.compilers/cxxobj.o /p/work2/tmondrag/petsc-N5i8ny/config.compilers/confc.o -lstdc++ -ldl compilers: Fortran can link C++ functions ================================================================================ TEST setupFrameworkCompilers from config.compilers(/p/work2/tmondrag/moose/petsc/config/BuildSystem/config/compilers.py:1455) TESTING: setupFrameworkCompilers from config.compilers(/p/work2/tmondrag/moose/petsc/config/BuildSystem/config/compilers.py:1455) child config.compilers 5.133946 ================================================================================ TEST configureClosure from config.utilities.closure(/p/work2/tmondrag/moose/petsc/config/BuildSystem/config/utilities/closure.py:17) TESTING: configureClosure from config.utilities.closure(/p/work2/tmondrag/moose/petsc/config/BuildSystem/config/utilities/closure.py:17) Determine if Apple ^close syntax is supported in C All intermediate test results are stored in /p/work2/tmondrag/petsc-N5i8ny/config.utilities.closure Running Executable WITHOUT threads to time it out Executing: mpicc -c -o /p/work2/tmondrag/petsc-N5i8ny/config.utilities.closure/conftest.o -I/p/work2/tmondrag/petsc-N5i8ny/config.libraries -I/p/work2/tmondrag/petsc-N5i8ny/config.setCompilers -I/p/work2/tmondrag/petsc-N5i8ny/config.compilers -I/p/work2/tmondrag/petsc-N5i8ny/config.utilities.closure -fPIC -Wall -Wwrite-strings -Wno-strict-aliasing -Wno-unknown-pragmas -fstack-protector -fvisibility=hidden -g -O /p/work2/tmondrag/petsc-N5i8ny/config.utilities.closure/conftest.c Possible ERROR while running compiler: exit code 1 stderr: /p/work2/tmondrag/petsc-N5i8ny/config.utilities.closure/conftest.c: In function 'main': /p/work2/tmondrag/petsc-N5i8ny/config.utilities.closure/conftest.c:6:6: error: expected identifier or '(' before '^'
token int (^closure)(int);; ^ Source: #include "confdefs.h" #include "conffix.h" #include int main() { int (^closure)(int);; return 0; } Compile failed inside link child config.utilities.closure 0.035873 ================================================================================ TEST checkFortranTypeSizes from config.compilersFortran(/p/work2/tmondrag/moose/petsc/config/BuildSystem/config/compilersFortran.py:56) TESTING: checkFortranTypeSizes from config.compilersFortran(/p/work2/tmondrag/moose/petsc/config/BuildSystem/config/compilersFortran.py:56) Check whether real*8 is supported and suggest flags which will allow support All intermediate test results are stored in /p/work2/tmondrag/petsc-N5i8ny/config.compilersFortran Running Executable WITHOUT threads to time it out Executing: mpif90 -c -o /p/work2/tmondrag/petsc-N5i8ny/config.compilersFortran/conftest.o -I/p/work2/tmondrag/petsc-N5i8ny/config.setCompilers -I/p/work2/tmondrag/petsc-N5i8ny/config.compilers -I/p/work2/tmondrag/petsc-N5i8ny/config.compilersFortran -fPIC -Wall -ffree-line-length-0 -Wno-unused-dummy-argument -g -O /p/work2/tmondrag/petsc-N5i8ny/config.compilersFortran/conftest.F90 Possible ERROR while running compiler: stderr: /p/work2/tmondrag/petsc-N5i8ny/config.compilersFortran/conftest.F90:2:21: real*8 variable 1 Warning: Unused variable 'variable' declared at (1) [-Wunused-variable] Source: program main real*8 variable end ================================================================================ TEST checkFortranPreprocessor from config.compilersFortran(/p/work2/tmondrag/moose/petsc/config/BuildSystem/config/compilersFortran.py:73) TESTING: checkFortranPreprocessor from config.compilersFortran(/p/work2/tmondrag/moose/petsc/config/BuildSystem/config/compilersFortran.py:73) Determine if Fortran handles preprocessing properly Running Executable WITHOUT threads to time it out Executing: mpif90 -c -o /p/work2/tmondrag/petsc-N5i8ny/config.setCompilers/conftest.o -I/p/work2/tmondrag/petsc-N5i8ny/config.compilers -I/p/work2/tmondrag/petsc-N5i8ny/config.compilersFortran -I/p/work2/tmondrag/petsc-N5i8ny/config.setCompilers -fPIC -Wall -ffree-line-length-0 -Wno-unused-dummy-argument -g -O /p/work2/tmondrag/petsc-N5i8ny/config.setCompilers/conftest.F90 Successful compile: Source: program main #define dummy dummy #ifndef dummy fooey #endif end compilers: Fortran uses preprocessor ================================================================================ TEST checkFortranDefineCompilerOption from config.compilersFortran(/p/work2/tmondrag/moose/petsc/config/BuildSystem/config/compilersFortran.py:96) TESTING: checkFortranDefineCompilerOption from config.compilersFortran(/p/work2/tmondrag/moose/petsc/config/BuildSystem/config/compilersFortran.py:96) Check if -WF,-Dfoobar or -Dfoobar is the compiler option to define a macro Running Executable WITHOUT threads to time it out Executing: mpif90 -c -o /p/work2/tmondrag/petsc-N5i8ny/config.setCompilers/conftest.o -I/p/work2/tmondrag/petsc-N5i8ny/config.compilers -I/p/work2/tmondrag/petsc-N5i8ny/config.compilersFortran -I/p/work2/tmondrag/petsc-N5i8ny/config.setCompilers -fPIC -Wall -ffree-line-length-0 -Wno-unused-dummy-argument -g -O -DTesting /p/work2/tmondrag/petsc-N5i8ny/config.setCompilers/conftest.F90 Successful compile: Source: program main #define dummy dummy #ifndef Testing fooey #endif end Defined make macro "FC_DEFINE_FLAG" to "-D" compilers: Fortran uses -D for defining macro ================================================================================
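[A note on reading this log: the "Possible ERROR" entries are configure probes, not build failures. The configureClosure failure above is expected on Linux: the test asks whether the C compiler accepts Apple's block (^) syntax, gcc rejects it, and configure simply records the feature as unavailable. A minimal standalone version of that probe, assuming only a C compiler (the confdefs.h/conffix.h wrappers of the real conftest are omitted here):

    /* Apple-blocks probe, reduced to a self-contained file.  clang -fblocks
       accepts this; plain gcc fails with "expected identifier or '(' before
       '^' token" -- the same benign error recorded in the log above. */
    int main(void)
    {
      int (^closure)(int);   /* declares a block (closure-typed) variable */
      (void)closure;         /* reference it so -Wall stays quiet */
      return 0;
    }
]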
TEST checkFortran90 from config.compilersFortran(/p/work2/tmondrag/moose/petsc/config/BuildSystem/config/compilersFortran.py:146) TESTING: checkFortran90 from config.compilersFortran(/p/work2/tmondrag/moose/petsc/config/BuildSystem/config/compilersFortran.py:146) Determine whether the Fortran compiler handles F90 Running Executable WITHOUT threads to time it out Executing: mpif90 -c -o /p/work2/tmondrag/petsc-N5i8ny/config.compilersFortran/conftest.o -I/p/work2/tmondrag/petsc-N5i8ny/config.compilers -I/p/work2/tmondrag/petsc-N5i8ny/config.setCompilers -I/p/work2/tmondrag/petsc-N5i8ny/config.compilersFortran -fPIC -Wall -ffree-line-length-0 -Wno-unused-dummy-argument -g -O /p/work2/tmondrag/petsc-N5i8ny/config.compilersFortran/conftest.F90 Successful compile: Source: program main INTEGER, PARAMETER :: int = SELECTED_INT_KIND(8) INTEGER (KIND=int) :: ierr ierr = 1 end Running Executable WITHOUT threads to time it out Executing: mpif90 -o /p/work2/tmondrag/petsc-N5i8ny/config.compilersFortran/conftest -fPIC -Wall -ffree-line-length-0 -Wno-unused-dummy-argument -g -O /p/work2/tmondrag/petsc-N5i8ny/config.compilersFortran/conftest.o -lstdc++ -ldl Defined "USING_F90" to "1" Fortran compiler supports F90 ================================================================================ TEST checkFortran90FreeForm from config.compilersFortran(/p/work2/tmondrag/moose/petsc/config/BuildSystem/config/compilersFortran.py:159) TESTING: checkFortran90FreeForm from config.compilersFortran(/p/work2/tmondrag/moose/petsc/config/BuildSystem/config/compilersFortran.py:159) Determine whether the Fortran compiler handles F90FreeForm We also require that the compiler handles lines longer than 132 characters Running Executable WITHOUT threads to time it out Executing: mpif90 -c -o /p/work2/tmondrag/petsc-N5i8ny/config.compilersFortran/conftest.o -I/p/work2/tmondrag/petsc-N5i8ny/config.compilers -I/p/work2/tmondrag/petsc-N5i8ny/config.setCompilers -I/p/work2/tmondrag/petsc-N5i8ny/config.compilersFortran -fPIC -Wall -ffree-line-length-0 -Wno-unused-dummy-argument -g -O /p/work2/tmondrag/petsc-N5i8ny/config.compilersFortran/conftest.F90 Successful compile: Source: program main INTEGER, PARAMETER :: int = SELECTED_INT_KIND(8); INTEGER (KIND=int) :: ierr; ierr = 1 end Running Executable WITHOUT threads to time it out Executing: mpif90 -o /p/work2/tmondrag/petsc-N5i8ny/config.compilersFortran/conftest -fPIC -Wall -ffree-line-length-0 -Wno-unused-dummy-argument -g -O /p/work2/tmondrag/petsc-N5i8ny/config.compilersFortran/conftest.o -lstdc++ -ldl Defined "USING_F90FREEFORM" to "1" Fortran compiler supports F90FreeForm ================================================================================ TEST checkFortran2003 from config.compilersFortran(/p/work2/tmondrag/moose/petsc/config/BuildSystem/config/compilersFortran.py:173) TESTING: checkFortran2003 from config.compilersFortran(/p/work2/tmondrag/moose/petsc/config/BuildSystem/config/compilersFortran.py:173) Determine whether the Fortran compiler handles F2003 Running Executable WITHOUT threads to time it out Executing: mpif90 -c -o /p/work2/tmondrag/petsc-N5i8ny/config.compilersFortran/conftest.o -I/p/work2/tmondrag/petsc-N5i8ny/config.compilers -I/p/work2/tmondrag/petsc-N5i8ny/config.setCompilers -I/p/work2/tmondrag/petsc-N5i8ny/config.compilersFortran -fPIC -Wall -ffree-line-length-0 -Wno-unused-dummy-argument -g -O /p/work2/tmondrag/petsc-N5i8ny/config.compilersFortran/conftest.F90 Successful compile: Source: module Base_module type, public :: base_type integer :: A 
contains procedure, public :: Print => BasePrint end type base_type contains subroutine BasePrint(this) class(base_type) :: this end subroutine BasePrint end module Base_module program main use,intrinsic :: iso_c_binding Type(C_Ptr),Dimension(:),Pointer :: CArray character(kind=c_char),pointer :: nullc => null() character(kind=c_char,len=5),dimension(:),pointer::list1 allocate(list1(5)) CArray = (/(c_loc(list1(i)),i=1,5),c_loc(nullc)/) end Running Executable WITHOUT threads to time it out Executing: mpif90 -o /p/work2/tmondrag/petsc-N5i8ny/config.compilersFortran/conftest -fPIC -Wall -ffree-line-length-0 -Wno-unused-dummy-argument -g -O /p/work2/tmondrag/petsc-N5i8ny/config.compilersFortran/conftest.o -lstdc++ -ldl Defined "USING_F2003" to "1" Fortran compiler supports F2003 ================================================================================ TEST checkFortran90Array from config.compilersFortran(/p/work2/tmondrag/moose/petsc/config/BuildSystem/config/compilersFortran.py:208) TESTING: checkFortran90Array from config.compilersFortran(/p/work2/tmondrag/moose/petsc/config/BuildSystem/config/compilersFortran.py:208) Check for F90 array interfaces Running Executable WITHOUT threads to time it out Executing: mpif90 -V compilers: Using --with-batch, so guess that F90 uses a single argument for array pointers ================================================================================ TEST checkFortranModuleInclude from config.compilersFortran(/p/work2/tmondrag/moose/petsc/config/BuildSystem/config/compilersFortran.py:295) TESTING: checkFortranModuleInclude from config.compilersFortran(/p/work2/tmondrag/moose/petsc/config/BuildSystem/config/compilersFortran.py:295) Figures out what flag is used to specify the include path for Fortran modules Running Executable WITHOUT threads to time it out Executing: mpif90 -c -o /p/work2/tmondrag/petsc-N5i8ny/config.compilersFortran/conftest.o -I/p/work2/tmondrag/petsc-N5i8ny/config.compilers -I/p/work2/tmondrag/petsc-N5i8ny/config.setCompilers -I/p/work2/tmondrag/petsc-N5i8ny/config.compilersFortran -fPIC -Wall -ffree-line-length-0 -Wno-unused-dummy-argument -g -O /p/work2/tmondrag/petsc-N5i8ny/config.compilersFortran/conftest.F90 Successful compile: Source: module configtest integer testint parameter (testint = 42) end module configtest Running Executable WITHOUT threads to time it out Executing: mpif90 -c -o /p/work2/tmondrag/petsc-N5i8ny/config.compilersFortran/conftest.o -I/p/work2/tmondrag/petsc-N5i8ny/config.compilers -I/p/work2/tmondrag/petsc-N5i8ny/config.setCompilers -I/p/work2/tmondrag/petsc-N5i8ny/config.compilersFortran -I/p/work2/tmondrag/petsc-N5i8ny/config.compilersFortran/confdir -fPIC -Wall -ffree-line-length-0 -Wno-unused-dummy-argument -g -O /p/work2/tmondrag/petsc-N5i8ny/config.compilersFortran/conftest.F90 Successful compile: Source: program main use configtest write(*,*) testint end Running Executable WITHOUT threads to time it out Executing: mpif90 -o /p/work2/tmondrag/petsc-N5i8ny/config.compilersFortran/conftest -I/p/work2/tmondrag/petsc-N5i8ny/config.compilersFortran/confdir -fPIC -Wall -ffree-line-length-0 -Wno-unused-dummy-argument -g -O /p/work2/tmondrag/petsc-N5i8ny/config.compilersFortran/conftest.o /p/work2/tmondrag/petsc-N5i8ny/config.compilersFortran/configtest.o -lstdc++ -ldl compilers: Fortran module include flag -I found ================================================================================ TEST checkFortranModuleOutput from 
config.compilersFortran(/p/work2/tmondrag/moose/petsc/config/BuildSystem/config/compilersFortran.py:361) TESTING: checkFortranModuleOutput from config.compilersFortran(/p/work2/tmondrag/moose/petsc/config/BuildSystem/config/compilersFortran.py:361) Figures out what flag is used to specify the include path for Fortran modules Running Executable WITHOUT threads to time it out Executing: mpif90 -c -o /p/work2/tmondrag/petsc-N5i8ny/config.compilersFortran/conftest.o -I/p/work2/tmondrag/petsc-N5i8ny/config.compilers -I/p/work2/tmondrag/petsc-N5i8ny/config.setCompilers -I/p/work2/tmondrag/petsc-N5i8ny/config.compilersFortran -module /p/work2/tmondrag/petsc-N5i8ny/config.compilersFortran/confdir -fPIC -Wall -ffree-line-length-0 -Wno-unused-dummy-argument -g -O /p/work2/tmondrag/petsc-N5i8ny/config.compilersFortran/conftest.F90 Possible ERROR while running compiler: exit code 1 stderr: gfortran: error: unrecognized command line option '-module'; did you mean '-mhle'? Source: module configtest integer testint parameter (testint = 42) end module configtest compilers: Fortran module output flag -module compile failed Running Executable WITHOUT threads to time it out Executing: mpif90 -c -o /p/work2/tmondrag/petsc-N5i8ny/config.compilersFortran/conftest.o -I/p/work2/tmondrag/petsc-N5i8ny/config.compilers -I/p/work2/tmondrag/petsc-N5i8ny/config.setCompilers -I/p/work2/tmondrag/petsc-N5i8ny/config.compilersFortran -module:/p/work2/tmondrag/petsc-N5i8ny/config.compilersFortran/confdir -fPIC -Wall -ffree-line-length-0 -Wno-unused-dummy-argument -g -O /p/work2/tmondrag/petsc-N5i8ny/config.compilersFortran/conftest.F90 Possible ERROR while running compiler: exit code 1 stderr: gfortran: error: unrecognized command line option '-module:/p/work2/tmondrag/petsc-N5i8ny/config.compilersFortran/confdir' Source: module configtest integer testint parameter (testint = 42) end module configtest compilers: Fortran module output flag -module: compile failed Running Executable WITHOUT threads to time it out Executing: mpif90 -c -o /p/work2/tmondrag/petsc-N5i8ny/config.compilersFortran/conftest.o -I/p/work2/tmondrag/petsc-N5i8ny/config.compilers -I/p/work2/tmondrag/petsc-N5i8ny/config.setCompilers -I/p/work2/tmondrag/petsc-N5i8ny/config.compilersFortran -fmod=/p/work2/tmondrag/petsc-N5i8ny/config.compilersFortran/confdir -fPIC -Wall -ffree-line-length-0 -Wno-unused-dummy-argument -g -O /p/work2/tmondrag/petsc-N5i8ny/config.compilersFortran/conftest.F90 Possible ERROR while running compiler: exit code 1 stderr: gfortran: error: unrecognized command line option '-fmod=/p/work2/tmondrag/petsc-N5i8ny/config.compilersFortran/confdir'
Source: module configtest integer testint parameter (testint = 42) end module configtest compilers: Fortran module output flag -fmod= compile failed Running Executable WITHOUT threads to time it out Executing: mpif90 -c -o /p/work2/tmondrag/petsc-N5i8ny/config.compilersFortran/conftest.o -I/p/work2/tmondrag/petsc-N5i8ny/config.compilers -I/p/work2/tmondrag/petsc-N5i8ny/config.setCompilers -I/p/work2/tmondrag/petsc-N5i8ny/config.compilersFortran -J/p/work2/tmondrag/petsc-N5i8ny/config.compilersFortran/confdir -fPIC -Wall -ffree-line-length-0 -Wno-unused-dummy-argument -g -O /p/work2/tmondrag/petsc-N5i8ny/config.compilersFortran/conftest.F90 Successful compile: Source: module configtest integer testint parameter (testint = 42) end module configtest compilers: Fortran module output flag -J found ================================================================================ TEST checkFortranTypeStar from config.compilersFortran(/p/work2/tmondrag/moose/petsc/config/BuildSystem/config/compilersFortran.py:135) TESTING: checkFortranTypeStar from config.compilersFortran(/p/work2/tmondrag/moose/petsc/config/BuildSystem/config/compilersFortran.py:135) Determine whether the Fortran compiler handles type(*) Running Executable WITHOUT threads to time it out Executing: mpif90 -c -o /p/work2/tmondrag/petsc-N5i8ny/config.compilersFortran/conftest.o -I/p/work2/tmondrag/petsc-N5i8ny/config.compilers -I/p/work2/tmondrag/petsc-N5i8ny/config.setCompilers -I/p/work2/tmondrag/petsc-N5i8ny/config.compilersFortran -fPIC -Wall -ffree-line-length-0 -Wno-unused-dummy-argument -g -O /p/work2/tmondrag/petsc-N5i8ny/config.compilersFortran/conftest.F90 Successful compile: Source: program main interface subroutine a(b) type(*) :: b(:) end subroutine end interface end Defined "HAVE_FORTRAN_TYPE_STAR" to "1" Fortran compiler supports type(*) ================================================================================ TEST checkFortranTypeInitialize from config.compilersFortran(/p/work2/tmondrag/moose/petsc/config/BuildSystem/config/compilersFortran.py:125) TESTING: checkFortranTypeInitialize from config.compilersFortran(/p/work2/tmondrag/moose/petsc/config/BuildSystem/config/compilersFortran.py:125) Determines if PETSc objects in Fortran are initialized by default (doesn't work with common blocks) Defined "FORTRAN_TYPE_INITIALIZE" to " = -2" Initializing Fortran objects ================================================================================ TEST configureFortranFlush from config.compilersFortran(/p/work2/tmondrag/moose/petsc/config/BuildSystem/config/compilersFortran.py:116) TESTING: configureFortranFlush from config.compilersFortran(/p/work2/tmondrag/moose/petsc/config/BuildSystem/config/compilersFortran.py:116) Running Executable WITHOUT threads to time it out Executing: mpif90 -c -o /p/work2/tmondrag/petsc-N5i8ny/config.compilersFortran/conftest.o -I/p/work2/tmondrag/petsc-N5i8ny/config.compilers -I/p/work2/tmondrag/petsc-N5i8ny/config.setCompilers -I/p/work2/tmondrag/petsc-N5i8ny/config.compilersFortran -fPIC -Wall -ffree-line-length-0 -Wno-unused-dummy-argument -g -O /p/work2/tmondrag/petsc-N5i8ny/config.compilersFortran/conftest.F90 Successful compile: Source: program main call flush(6) end Running Executable WITHOUT threads to time it out Executing: mpif90 -o /p/work2/tmondrag/petsc-N5i8ny/config.compilersFortran/conftest -fPIC -Wall -ffree-line-length-0 -Wno-unused-dummy-argument -g -O /p/work2/tmondrag/petsc-N5i8ny/config.compilersFortran/conftest.o -lstdc++ -ldl Defined "HAVE_FORTRAN_FLUSH" to "1" 
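[Each successful probe ends with a line like Defined "HAVE_FORTRAN_FLUSH" to "1". Those definitions are written into the generated configuration header and then guarded in source code; the same happens for the header checks further down (HAVE_SYS_WAIT_H, HAVE_TIME_H, and so on). A hedged sketch of the consuming side, using the generic autoconf-style idiom rather than PETSc's exact headers (PETSc itself prefixes the macros, e.g. PETSC_HAVE_..., in petscconf.h):

    /* Illustrative only: how a configure-defined feature macro is used.
       "config.h" stands in for whatever generated header a project
       installs; the macro name below appears in this log. */
    #include "config.h"
    #ifdef HAVE_SYS_WAIT_H
    #include <sys/wait.h>    /* included only when configure proved it exists */
    #endif
    int main(void) { return 0; }
]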
================================================================================ TEST checkDependencyGenerationFlag from config.compilersFortran(/p/work2/tmondrag/moose/petsc/config/BuildSystem/config/compilersFortran.py:406) TESTING: checkDependencyGenerationFlag from config.compilersFortran(/p/work2/tmondrag/moose/petsc/config/BuildSystem/config/compilersFortran.py:406) Check if -MMD works for dependency generation, and add it if it does Trying FC compiler flag -MMD -MP Defined make macro "FC_DEPFLAGS" to "-MMD -MP" Running Executable WITHOUT threads to time it out Executing: mpif90 -c -o /p/work2/tmondrag/petsc-N5i8ny/config.setCompilers/conftest.o -I/p/work2/tmondrag/petsc-N5i8ny/config.compilers -I/p/work2/tmondrag/petsc-N5i8ny/config.setCompilers -I/p/work2/tmondrag/petsc-N5i8ny/config.compilersFortran -fPIC -Wall -ffree-line-length-0 -Wno-unused-dummy-argument -g -O -MMD -MP /p/work2/tmondrag/petsc-N5i8ny/config.setCompilers/conftest.F90 Successful compile: Source: program main end child config.compilersFortran 2.903161 ================================================================================ TEST checkStdC from config.headers(/p/work2/tmondrag/moose/petsc/config/BuildSystem/config/headers.py:105) TESTING: checkStdC from config.headers(/p/work2/tmondrag/moose/petsc/config/BuildSystem/config/headers.py:105) Running Executable WITHOUT threads to time it out Executing: mpicc -c -o /p/work2/tmondrag/petsc-N5i8ny/config.headers/conftest.o -I/p/work2/tmondrag/petsc-N5i8ny/config.libraries -I/p/work2/tmondrag/petsc-N5i8ny/config.setCompilers -I/p/work2/tmondrag/petsc-N5i8ny/config.compilers -I/p/work2/tmondrag/petsc-N5i8ny/config.utilities.closure -I/p/work2/tmondrag/petsc-N5i8ny/config.headers -fPIC -Wall -Wwrite-strings -Wno-strict-aliasing -Wno-unknown-pragmas -fstack-protector -fvisibility=hidden -g -O /p/work2/tmondrag/petsc-N5i8ny/config.headers/conftest.c Successful compile: Source: #include "confdefs.h" #include "conffix.h" #include #include #include #include int main() { ; return 0; } Preprocessing source: #include "confdefs.h" #include "conffix.h" #include Running Executable WITHOUT threads to time it out Executing: mpicc -E -I/p/work2/tmondrag/petsc-N5i8ny/config.setCompilers -I/p/work2/tmondrag/petsc-N5i8ny/config.headers /p/work2/tmondrag/petsc-N5i8ny/config.headers/conftest.c Preprocessing source: #include "confdefs.h" #include "conffix.h" #include Running Executable WITHOUT threads to time it out Executing: mpicc -E -I/p/work2/tmondrag/petsc-N5i8ny/config.setCompilers -I/p/work2/tmondrag/petsc-N5i8ny/config.headers /p/work2/tmondrag/petsc-N5i8ny/config.headers/conftest.c ================================================================================ TEST checkStat from config.headers(/p/work2/tmondrag/moose/petsc/config/BuildSystem/config/headers.py:138) TESTING: checkStat from config.headers(/p/work2/tmondrag/moose/petsc/config/BuildSystem/config/headers.py:138) Checks whether stat file-mode macros are broken, and defines STAT_MACROS_BROKEN if they are Preprocessing source: #include "confdefs.h" #include "conffix.h" #include #include #if defined(S_ISBLK) && defined(S_IFDIR) # if S_ISBLK (S_IFDIR) You lose. # endif #endif #if defined(S_ISBLK) && defined(S_IFCHR) # if S_ISBLK (S_IFCHR) You lose. # endif #endif #if defined(S_ISLNK) && defined(S_IFREG) # if S_ISLNK (S_IFREG) You lose. # endif #endif #if defined(S_ISSOCK) && defined(S_IFREG) # if S_ISSOCK (S_IFREG) You lose. 
# endif #endif Running Executable WITHOUT threads to time it out Executing: mpicc -E -I/p/work2/tmondrag/petsc-N5i8ny/config.setCompilers -I/p/work2/tmondrag/petsc-N5i8ny/config.headers /p/work2/tmondrag/petsc-N5i8ny/config.headers/conftest.c ================================================================================ TEST checkSysWait from config.headers(/p/work2/tmondrag/moose/petsc/config/BuildSystem/config/headers.py:173) TESTING: checkSysWait from config.headers(/p/work2/tmondrag/moose/petsc/config/BuildSystem/config/headers.py:173) Check for POSIX.1 compatible sys/wait.h, and defines HAVE_SYS_WAIT_H if found Running Executable WITHOUT threads to time it out Executing: mpicc -c -o /p/work2/tmondrag/petsc-N5i8ny/config.headers/conftest.o -I/p/work2/tmondrag/petsc-N5i8ny/config.libraries -I/p/work2/tmondrag/petsc-N5i8ny/config.setCompilers -I/p/work2/tmondrag/petsc-N5i8ny/config.compilers -I/p/work2/tmondrag/petsc-N5i8ny/config.utilities.closure -I/p/work2/tmondrag/petsc-N5i8ny/config.headers -fPIC -Wall -Wwrite-strings -Wno-strict-aliasing -Wno-unknown-pragmas -fstack-protector -fvisibility=hidden -g -O /p/work2/tmondrag/petsc-N5i8ny/config.headers/conftest.c Successful compile: Source: #include "confdefs.h" #include "conffix.h" #include #include #ifndef WEXITSTATUS #define WEXITSTATUS(stat_val) ((unsigned)(stat_val) >> 8) #endif #ifndef WIFEXITED #define WIFEXITED(stat_val) (((stat_val) & 255) == 0) #endif int main() { int s; wait (&s); s = WIFEXITED (s) ? WEXITSTATUS (s) : 1; ; return 0; } Defined "HAVE_SYS_WAIT_H" to "1" ================================================================================ TEST checkTime from config.headers(/p/work2/tmondrag/moose/petsc/config/BuildSystem/config/headers.py:195) TESTING: checkTime from config.headers(/p/work2/tmondrag/moose/petsc/config/BuildSystem/config/headers.py:195) Checks if you can safely include both and , and if so defines TIME_WITH_SYS_TIME Checking for header: time.h Preprocessing source: #include "confdefs.h" #include "conffix.h" #include Running Executable WITHOUT threads to time it out Executing: mpicc -E -I/p/work2/tmondrag/petsc-N5i8ny/config.setCompilers -I/p/work2/tmondrag/petsc-N5i8ny/config.headers /p/work2/tmondrag/petsc-N5i8ny/config.headers/conftest.c Preprocess stderr before filtering:: Preprocess stderr after filtering:: Defined "HAVE_TIME_H" to "1" Checking for header: sys/time.h Preprocessing source: #include "confdefs.h" #include "conffix.h" #include Running Executable WITHOUT threads to time it out Executing: mpicc -E -I/p/work2/tmondrag/petsc-N5i8ny/config.setCompilers -I/p/work2/tmondrag/petsc-N5i8ny/config.headers /p/work2/tmondrag/petsc-N5i8ny/config.headers/conftest.c Preprocess stderr before filtering:: Preprocess stderr after filtering:: Defined "HAVE_SYS_TIME_H" to "1" ================================================================================ TEST checkMath from config.headers(/p/work2/tmondrag/moose/petsc/config/BuildSystem/config/headers.py:201) TESTING: checkMath from config.headers(/p/work2/tmondrag/moose/petsc/config/BuildSystem/config/headers.py:201) Checks for the math headers and defines Checking for header: math.h Preprocessing source: #include "confdefs.h" #include "conffix.h" #include Running Executable WITHOUT threads to time it out Executing: mpicc -E -I/p/work2/tmondrag/petsc-N5i8ny/config.setCompilers -I/p/work2/tmondrag/petsc-N5i8ny/config.headers /p/work2/tmondrag/petsc-N5i8ny/config.headers/conftest.c Preprocess stderr before filtering:: Preprocess stderr after filtering:: 
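[The header checks in this stretch of the log all use a single probe: a three-line conftest.c that includes the candidate header is run through the preprocessor only (mpicc -E), and HAVE_<NAME>_H is defined exactly when that exits cleanly. The dos.h, io.h, and machine/endian.h "Possible ERROR" entries below are therefore expected on Linux, where those headers simply do not exist. A sketch of the probe source, assuming setjmp.h as the header under test (the list archiver stripped the angle-bracketed names from the log, so the include target is reconstructed from the "Checking for header:" lines):

    /* Shape of configure's header-existence probe; it is only
       preprocessed (mpicc -E), never compiled or linked.  confdefs.h
       and conffix.h are small files configure generates alongside it. */
    #include "confdefs.h"
    #include "conffix.h"
    #include <setjmp.h>    /* the header whose existence is being tested */
]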
Running Executable WITHOUT threads to time it out Executing: mpicc -c -o /p/work2/tmondrag/petsc-N5i8ny/config.headers/conftest.o -I/p/work2/tmondrag/petsc-N5i8ny/config.libraries -I/p/work2/tmondrag/petsc-N5i8ny/config.setCompilers -I/p/work2/tmondrag/petsc-N5i8ny/config.compilers -I/p/work2/tmondrag/petsc-N5i8ny/config.utilities.closure -I/p/work2/tmondrag/petsc-N5i8ny/config.headers -fPIC -Wall -Wwrite-strings -Wno-strict-aliasing -Wno-unknown-pragmas -fstack-protector -fvisibility=hidden -g -O /p/work2/tmondrag/petsc-N5i8ny/config.headers/conftest.c Successful compile: Source: #include "confdefs.h" #include "conffix.h" #include int main() { double pi = M_PI; if (pi); ; return 0; } Found math #defines, like M_PI ================================================================================ TEST check from config.headers(/p/work2/tmondrag/moose/petsc/config/BuildSystem/config/headers.py:77) TESTING: check from config.headers(/p/work2/tmondrag/moose/petsc/config/BuildSystem/config/headers.py:77) Checks for "header", and defines HAVE_"header" if found Checking for header: setjmp.h Preprocessing source: #include "confdefs.h" #include "conffix.h" #include Running Executable WITHOUT threads to time it out Executing: mpicc -E -I/p/work2/tmondrag/petsc-N5i8ny/config.setCompilers -I/p/work2/tmondrag/petsc-N5i8ny/config.headers /p/work2/tmondrag/petsc-N5i8ny/config.headers/conftest.c Preprocess stderr before filtering:: Preprocess stderr after filtering:: Defined "HAVE_SETJMP_H" to "1" ================================================================================ TEST check from config.headers(/p/work2/tmondrag/moose/petsc/config/BuildSystem/config/headers.py:77) TESTING: check from config.headers(/p/work2/tmondrag/moose/petsc/config/BuildSystem/config/headers.py:77) Checks for "header", and defines HAVE_"header" if found Checking for header: dos.h Preprocessing source: #include "confdefs.h" #include "conffix.h" #include Running Executable WITHOUT threads to time it out Executing: mpicc -E -I/p/work2/tmondrag/petsc-N5i8ny/config.setCompilers -I/p/work2/tmondrag/petsc-N5i8ny/config.headers /p/work2/tmondrag/petsc-N5i8ny/config.headers/conftest.c Possible ERROR while running preprocessor: exit code 1 stdout: # 1 "/p/work2/tmondrag/petsc-N5i8ny/config.headers/conftest.c" # 1 "" # 1 "" # 31 "" # 1 "/usr/include/stdc-predef.h" 1 3 4 # 32 "" 2 # 1 "/p/work2/tmondrag/petsc-N5i8ny/config.headers/conftest.c" # 1 "/p/work2/tmondrag/petsc-N5i8ny/config.headers/confdefs.h" 1 # 2 "/p/work2/tmondrag/petsc-N5i8ny/config.headers/conftest.c" 2 # 1 "/p/work2/tmondrag/petsc-N5i8ny/config.headers/conffix.h" 1 # 3 "/p/work2/tmondrag/petsc-N5i8ny/config.headers/conftest.c" 2 stderr: /p/work2/tmondrag/petsc-N5i8ny/config.headers/conftest.c:3:10: fatal error: dos.h: No such file or directory #include ^~~~~~~ compilation terminated. Source: #include "confdefs.h" #include "conffix.h" #include Preprocess stderr before filtering:/p/work2/tmondrag/petsc-N5i8ny/config.headers/conftest.c:3:10: fatal error: dos.h: No such file or directory #include ^~~~~~~ compilation terminated. 
: Preprocess stderr after filtering:/p/work2/tmondrag/petsc-N5i8ny/config.headers/conftest.c:3:10: fatal error: dos.h: No such file or directory #include ^~~~~~~compilation terminated.: ================================================================================ TEST check from config.headers(/p/work2/tmondrag/moose/petsc/config/BuildSystem/config/headers.py:77) TESTING: check from config.headers(/p/work2/tmondrag/moose/petsc/config/BuildSystem/config/headers.py:77) Checks for "header", and defines HAVE_"header" if found Checking for header: fcntl.h Preprocessing source: #include "confdefs.h" #include "conffix.h" #include Running Executable WITHOUT threads to time it out Executing: mpicc -E -I/p/work2/tmondrag/petsc-N5i8ny/config.setCompilers -I/p/work2/tmondrag/petsc-N5i8ny/config.headers /p/work2/tmondrag/petsc-N5i8ny/config.headers/conftest.c Preprocess stderr before filtering:: Preprocess stderr after filtering:: Defined "HAVE_FCNTL_H" to "1" ================================================================================ TEST check from config.headers(/p/work2/tmondrag/moose/petsc/config/BuildSystem/config/headers.py:77) TESTING: check from config.headers(/p/work2/tmondrag/moose/petsc/config/BuildSystem/config/headers.py:77) Checks for "header", and defines HAVE_"header" if found Checking for header: float.h Preprocessing source: #include "confdefs.h" #include "conffix.h" #include Running Executable WITHOUT threads to time it out Executing: mpicc -E -I/p/work2/tmondrag/petsc-N5i8ny/config.setCompilers -I/p/work2/tmondrag/petsc-N5i8ny/config.headers /p/work2/tmondrag/petsc-N5i8ny/config.headers/conftest.c Preprocess stderr before filtering:: Preprocess stderr after filtering:: Defined "HAVE_FLOAT_H" to "1" ================================================================================ TEST check from config.headers(/p/work2/tmondrag/moose/petsc/config/BuildSystem/config/headers.py:77) TESTING: check from config.headers(/p/work2/tmondrag/moose/petsc/config/BuildSystem/config/headers.py:77) Checks for "header", and defines HAVE_"header" if found Checking for header: io.h Preprocessing source: #include "confdefs.h" #include "conffix.h" #include Running Executable WITHOUT threads to time it out Executing: mpicc -E -I/p/work2/tmondrag/petsc-N5i8ny/config.setCompilers -I/p/work2/tmondrag/petsc-N5i8ny/config.headers /p/work2/tmondrag/petsc-N5i8ny/config.headers/conftest.c Possible ERROR while running preprocessor: exit code 1 stdout: # 1 "/p/work2/tmondrag/petsc-N5i8ny/config.headers/conftest.c" # 1 "" # 1 "" # 31 "" # 1 "/usr/include/stdc-predef.h" 1 3 4 # 32 "" 2 # 1 "/p/work2/tmondrag/petsc-N5i8ny/config.headers/conftest.c" # 1 "/p/work2/tmondrag/petsc-N5i8ny/config.headers/confdefs.h" 1 # 2 "/p/work2/tmondrag/petsc-N5i8ny/config.headers/conftest.c" 2 # 1 "/p/work2/tmondrag/petsc-N5i8ny/config.headers/conffix.h" 1 # 3 "/p/work2/tmondrag/petsc-N5i8ny/config.headers/conftest.c" 2 stderr: /p/work2/tmondrag/petsc-N5i8ny/config.headers/conftest.c:3:10: fatal error: io.h: No such file or directory #include ^~~~~~ compilation terminated. Source: #include "confdefs.h" #include "conffix.h" #include Preprocess stderr before filtering:/p/work2/tmondrag/petsc-N5i8ny/config.headers/conftest.c:3:10: fatal error: io.h: No such file or directory #include ^~~~~~ compilation terminated. 
: Preprocess stderr after filtering:/p/work2/tmondrag/petsc-N5i8ny/config.headers/conftest.c:3:10: fatal error: io.h: No such file or directory #include ^~~~~~compilation terminated.: ================================================================================ TEST check from config.headers(/p/work2/tmondrag/moose/petsc/config/BuildSystem/config/headers.py:77) TESTING: check from config.headers(/p/work2/tmondrag/moose/petsc/config/BuildSystem/config/headers.py:77) Checks for "header", and defines HAVE_"header" if found Checking for header: malloc.h Preprocessing source: #include "confdefs.h" #include "conffix.h" #include Running Executable WITHOUT threads to time it out Executing: mpicc -E -I/p/work2/tmondrag/petsc-N5i8ny/config.setCompilers -I/p/work2/tmondrag/petsc-N5i8ny/config.headers /p/work2/tmondrag/petsc-N5i8ny/config.headers/conftest.c Preprocess stderr before filtering:: Preprocess stderr after filtering:: Defined "HAVE_MALLOC_H" to "1" ================================================================================ TEST check from config.headers(/p/work2/tmondrag/moose/petsc/config/BuildSystem/config/headers.py:77) TESTING: check from config.headers(/p/work2/tmondrag/moose/petsc/config/BuildSystem/config/headers.py:77) Checks for "header", and defines HAVE_"header" if found Checking for header: pwd.h Preprocessing source: #include "confdefs.h" #include "conffix.h" #include Running Executable WITHOUT threads to time it out Executing: mpicc -E -I/p/work2/tmondrag/petsc-N5i8ny/config.setCompilers -I/p/work2/tmondrag/petsc-N5i8ny/config.headers /p/work2/tmondrag/petsc-N5i8ny/config.headers/conftest.c Preprocess stderr before filtering:: Preprocess stderr after filtering:: Defined "HAVE_PWD_H" to "1" ================================================================================ TEST check from config.headers(/p/work2/tmondrag/moose/petsc/config/BuildSystem/config/headers.py:77) TESTING: check from config.headers(/p/work2/tmondrag/moose/petsc/config/BuildSystem/config/headers.py:77) Checks for "header", and defines HAVE_"header" if found Checking for header: strings.h Preprocessing source: #include "confdefs.h" #include "conffix.h" #include Running Executable WITHOUT threads to time it out Executing: mpicc -E -I/p/work2/tmondrag/petsc-N5i8ny/config.setCompilers -I/p/work2/tmondrag/petsc-N5i8ny/config.headers /p/work2/tmondrag/petsc-N5i8ny/config.headers/conftest.c Preprocess stderr before filtering:: Preprocess stderr after filtering:: Defined "HAVE_STRINGS_H" to "1" ================================================================================ TEST check from config.headers(/p/work2/tmondrag/moose/petsc/config/BuildSystem/config/headers.py:77) TESTING: check from config.headers(/p/work2/tmondrag/moose/petsc/config/BuildSystem/config/headers.py:77) Checks for "header", and defines HAVE_"header" if found Checking for header: unistd.h Preprocessing source: #include "confdefs.h" #include "conffix.h" #include Running Executable WITHOUT threads to time it out Executing: mpicc -E -I/p/work2/tmondrag/petsc-N5i8ny/config.setCompilers -I/p/work2/tmondrag/petsc-N5i8ny/config.headers /p/work2/tmondrag/petsc-N5i8ny/config.headers/conftest.c Preprocess stderr before filtering:: Preprocess stderr after filtering:: Defined "HAVE_UNISTD_H" to "1" ================================================================================ TEST check from config.headers(/p/work2/tmondrag/moose/petsc/config/BuildSystem/config/headers.py:77) TESTING: check from 
Checking for header: sys/sysinfo.h    -> Defined "HAVE_SYS_SYSINFO_H" to "1"
Checking for header: machine/endian.h -> fatal error: machine/endian.h: No such file or directory
Checking for header: sys/param.h      -> Defined "HAVE_SYS_PARAM_H" to "1"
Checking for header: sys/procfs.h     -> Defined "HAVE_SYS_PROCFS_H" to "1"
Checking for header: sys/resource.h   -> Defined "HAVE_SYS_RESOURCE_H" to "1"
Checking for header: sys/systeminfo.h -> fatal error: sys/systeminfo.h: No such file or directory
Checking for header: sys/times.h      -> Defined "HAVE_SYS_TIMES_H" to "1"
Checking for header: sys/utsname.h    -> Defined "HAVE_SYS_UTSNAME_H" to "1"
Checking for header: sys/socket.h     -> Defined "HAVE_SYS_SOCKET_H" to "1"
Checking for header: sys/wait.h   -> Defined "HAVE_SYS_WAIT_H" to "1"
Checking for header: netinet/in.h -> Defined "HAVE_NETINET_IN_H" to "1"
Checking for header: netdb.h      -> Defined "HAVE_NETDB_H" to "1"
Checking for header: Direct.h     -> fatal error: Direct.h: No such file or directory
Checking for header: time.h       -> Defined "HAVE_TIME_H" to "1"
Checking for header: Ws2tcpip.h   -> fatal error: Ws2tcpip.h: No such file or directory
Checking for header: sys/types.h  -> Defined "HAVE_SYS_TYPES_H" to "1"
Checking for header: WindowsX.h   -> fatal error: WindowsX.h: No such file or directory
Checking for header: float.h      -> Defined "HAVE_FLOAT_H" to "1"
Checking for header: ieeefp.h     -> fatal error: ieeefp.h: No such file or directory
Checking for header: stdint.h     -> Defined "HAVE_STDINT_H" to "1"
Checking for header: pthread.h    -> Defined "HAVE_PTHREAD_H" to "1"
Checking for header: inttypes.h   -> Defined "HAVE_INTTYPES_H" to "1"
Checking for header: immintrin.h  -> Defined "HAVE_IMMINTRIN_H" to "1"
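Every "No such file or directory" in this stretch is expected on a Linux system: io.h, Direct.h, Ws2tcpip.h and WindowsX.h are Windows headers, machine/endian.h is BSD/macOS, and sys/systeminfo.h and ieeefp.h are Solaris-style. Configure probes them anyway for portability and simply leaves the HAVE_ macro undefined on a miss, so none of these failures indicate a problem. A quick sanity check over the results transcribed from this chunk:

    # Results transcribed from the log lines above (True = header found).
    results = {
        "sys/wait.h": True, "netinet/in.h": True, "netdb.h": True,
        "Direct.h": False, "time.h": True, "Ws2tcpip.h": False,
        "sys/types.h": True, "WindowsX.h": False, "float.h": True,
        "ieeefp.h": False, "stdint.h": True, "pthread.h": True,
        "inttypes.h": True, "immintrin.h": True,
    }
    missing = sorted(h for h, found in results.items() if not found)
    # Every miss is a non-Linux header, as expected on this machine:
    print(missing)  # ['Direct.h', 'WindowsX.h', 'Ws2tcpip.h', 'ieeefp.h']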
config.headers(/p/work2/tmondrag/moose/petsc/config/BuildSystem/config/headers.py:77) Checks for "header", and defines HAVE_"header" if found Checking for header: zmmintrin.h Preprocessing source: #include "confdefs.h" #include "conffix.h" #include Running Executable WITHOUT threads to time it out Executing: mpicc -E -I/p/work2/tmondrag/petsc-N5i8ny/config.setCompilers -I/p/work2/tmondrag/petsc-N5i8ny/config.headers /p/work2/tmondrag/petsc-N5i8ny/config.headers/conftest.c Possible ERROR while running preprocessor: exit code 1 stdout: # 1 "/p/work2/tmondrag/petsc-N5i8ny/config.headers/conftest.c" # 1 "" # 1 "" # 31 "" # 1 "/usr/include/stdc-predef.h" 1 3 4 # 32 "" 2 # 1 "/p/work2/tmondrag/petsc-N5i8ny/config.headers/conftest.c" # 1 "/p/work2/tmondrag/petsc-N5i8ny/config.headers/confdefs.h" 1 # 2 "/p/work2/tmondrag/petsc-N5i8ny/config.headers/conftest.c" 2 # 1 "/p/work2/tmondrag/petsc-N5i8ny/config.headers/conffix.h" 1 # 3 "/p/work2/tmondrag/petsc-N5i8ny/config.headers/conftest.c" 2 stderr: /p/work2/tmondrag/petsc-N5i8ny/config.headers/conftest.c:3:10: fatal error: zmmintrin.h: No such file or directory #include ^~~~~~~~~~~~~ compilation terminated. Source: #include "confdefs.h" #include "conffix.h" #include Preprocess stderr before filtering:/p/work2/tmondrag/petsc-N5i8ny/config.headers/conftest.c:3:10: fatal error: zmmintrin.h: No such file or directory #include ^~~~~~~~~~~~~ compilation terminated. : Preprocess stderr after filtering:/p/work2/tmondrag/petsc-N5i8ny/config.headers/conftest.c:3:10: fatal error: zmmintrin.h: No such file or directory #include ^~~~~~~~~~~~~compilation terminated.: ================================================================================ TEST check from config.headers(/p/work2/tmondrag/moose/petsc/config/BuildSystem/config/headers.py:77) TESTING: check from config.headers(/p/work2/tmondrag/moose/petsc/config/BuildSystem/config/headers.py:77) Checks for "header", and defines HAVE_"header" if found Checking for header: setjmp.h Preprocessing source: #include "confdefs.h" #include "conffix.h" #include Running Executable WITHOUT threads to time it out Executing: mpicc -E -I/p/work2/tmondrag/petsc-N5i8ny/config.setCompilers -I/p/work2/tmondrag/petsc-N5i8ny/config.headers /p/work2/tmondrag/petsc-N5i8ny/config.headers/conftest.c Preprocess stderr before filtering:: Preprocess stderr after filtering:: Defined "HAVE_SETJMP_H" to "1" ================================================================================ TEST check from config.headers(/p/work2/tmondrag/moose/petsc/config/BuildSystem/config/headers.py:77) TESTING: check from config.headers(/p/work2/tmondrag/moose/petsc/config/BuildSystem/config/headers.py:77) Checks for "header", and defines HAVE_"header" if found Checking for header: dos.h Preprocessing source: #include "confdefs.h" #include "conffix.h" #include Running Executable WITHOUT threads to time it out Executing: mpicc -E -I/p/work2/tmondrag/petsc-N5i8ny/config.setCompilers -I/p/work2/tmondrag/petsc-N5i8ny/config.headers /p/work2/tmondrag/petsc-N5i8ny/config.headers/conftest.c Possible ERROR while running preprocessor: exit code 1 stdout: # 1 "/p/work2/tmondrag/petsc-N5i8ny/config.headers/conftest.c" # 1 "" # 1 "" # 31 "" # 1 "/usr/include/stdc-predef.h" 1 3 4 # 32 "" 2 # 1 "/p/work2/tmondrag/petsc-N5i8ny/config.headers/conftest.c" # 1 "/p/work2/tmondrag/petsc-N5i8ny/config.headers/confdefs.h" 1 # 2 "/p/work2/tmondrag/petsc-N5i8ny/config.headers/conftest.c" 2 # 1 "/p/work2/tmondrag/petsc-N5i8ny/config.headers/conffix.h" 1 # 3 
"/p/work2/tmondrag/petsc-N5i8ny/config.headers/conftest.c" 2 stderr: /p/work2/tmondrag/petsc-N5i8ny/config.headers/conftest.c:3:10: fatal error: dos.h: No such file or directory #include ^~~~~~~ compilation terminated. Source: #include "confdefs.h" #include "conffix.h" #include Preprocess stderr before filtering:/p/work2/tmondrag/petsc-N5i8ny/config.headers/conftest.c:3:10: fatal error: dos.h: No such file or directory #include ^~~~~~~ compilation terminated. : Preprocess stderr after filtering:/p/work2/tmondrag/petsc-N5i8ny/config.headers/conftest.c:3:10: fatal error: dos.h: No such file or directory #include ^~~~~~~compilation terminated.: ================================================================================ TEST check from config.headers(/p/work2/tmondrag/moose/petsc/config/BuildSystem/config/headers.py:77) TESTING: check from config.headers(/p/work2/tmondrag/moose/petsc/config/BuildSystem/config/headers.py:77) Checks for "header", and defines HAVE_"header" if found Checking for header: fcntl.h Preprocessing source: #include "confdefs.h" #include "conffix.h" #include Running Executable WITHOUT threads to time it out Executing: mpicc -E -I/p/work2/tmondrag/petsc-N5i8ny/config.setCompilers -I/p/work2/tmondrag/petsc-N5i8ny/config.headers /p/work2/tmondrag/petsc-N5i8ny/config.headers/conftest.c Preprocess stderr before filtering:: Preprocess stderr after filtering:: Defined "HAVE_FCNTL_H" to "1" ================================================================================ TEST check from config.headers(/p/work2/tmondrag/moose/petsc/config/BuildSystem/config/headers.py:77) TESTING: check from config.headers(/p/work2/tmondrag/moose/petsc/config/BuildSystem/config/headers.py:77) Checks for "header", and defines HAVE_"header" if found Checking for header: float.h Preprocessing source: #include "confdefs.h" #include "conffix.h" #include Running Executable WITHOUT threads to time it out Executing: mpicc -E -I/p/work2/tmondrag/petsc-N5i8ny/config.setCompilers -I/p/work2/tmondrag/petsc-N5i8ny/config.headers /p/work2/tmondrag/petsc-N5i8ny/config.headers/conftest.c Preprocess stderr before filtering:: Preprocess stderr after filtering:: Defined "HAVE_FLOAT_H" to "1" ================================================================================ TEST check from config.headers(/p/work2/tmondrag/moose/petsc/config/BuildSystem/config/headers.py:77) TESTING: check from config.headers(/p/work2/tmondrag/moose/petsc/config/BuildSystem/config/headers.py:77) Checks for "header", and defines HAVE_"header" if found Checking for header: io.h Preprocessing source: #include "confdefs.h" #include "conffix.h" #include Running Executable WITHOUT threads to time it out Executing: mpicc -E -I/p/work2/tmondrag/petsc-N5i8ny/config.setCompilers -I/p/work2/tmondrag/petsc-N5i8ny/config.headers /p/work2/tmondrag/petsc-N5i8ny/config.headers/conftest.c Possible ERROR while running preprocessor: exit code 1 stdout: # 1 "/p/work2/tmondrag/petsc-N5i8ny/config.headers/conftest.c" # 1 "" # 1 "" # 31 "" # 1 "/usr/include/stdc-predef.h" 1 3 4 # 32 "" 2 # 1 "/p/work2/tmondrag/petsc-N5i8ny/config.headers/conftest.c" # 1 "/p/work2/tmondrag/petsc-N5i8ny/config.headers/confdefs.h" 1 # 2 "/p/work2/tmondrag/petsc-N5i8ny/config.headers/conftest.c" 2 # 1 "/p/work2/tmondrag/petsc-N5i8ny/config.headers/conffix.h" 1 # 3 "/p/work2/tmondrag/petsc-N5i8ny/config.headers/conftest.c" 2 stderr: /p/work2/tmondrag/petsc-N5i8ny/config.headers/conftest.c:3:10: fatal error: io.h: No such file or directory #include ^~~~~~ compilation 
terminated. Source: #include "confdefs.h" #include "conffix.h" #include Preprocess stderr before filtering:/p/work2/tmondrag/petsc-N5i8ny/config.headers/conftest.c:3:10: fatal error: io.h: No such file or directory #include ^~~~~~ compilation terminated. : Preprocess stderr after filtering:/p/work2/tmondrag/petsc-N5i8ny/config.headers/conftest.c:3:10: fatal error: io.h: No such file or directory #include ^~~~~~compilation terminated.: ================================================================================ TEST check from config.headers(/p/work2/tmondrag/moose/petsc/config/BuildSystem/config/headers.py:77) TESTING: check from config.headers(/p/work2/tmondrag/moose/petsc/config/BuildSystem/config/headers.py:77) Checks for "header", and defines HAVE_"header" if found Checking for header: malloc.h Preprocessing source: #include "confdefs.h" #include "conffix.h" #include Running Executable WITHOUT threads to time it out Executing: mpicc -E -I/p/work2/tmondrag/petsc-N5i8ny/config.setCompilers -I/p/work2/tmondrag/petsc-N5i8ny/config.headers /p/work2/tmondrag/petsc-N5i8ny/config.headers/conftest.c Preprocess stderr before filtering:: Preprocess stderr after filtering:: Defined "HAVE_MALLOC_H" to "1" ================================================================================ TEST check from config.headers(/p/work2/tmondrag/moose/petsc/config/BuildSystem/config/headers.py:77) TESTING: check from config.headers(/p/work2/tmondrag/moose/petsc/config/BuildSystem/config/headers.py:77) Checks for "header", and defines HAVE_"header" if found Checking for header: pwd.h Preprocessing source: #include "confdefs.h" #include "conffix.h" #include Running Executable WITHOUT threads to time it out Executing: mpicc -E -I/p/work2/tmondrag/petsc-N5i8ny/config.setCompilers -I/p/work2/tmondrag/petsc-N5i8ny/config.headers /p/work2/tmondrag/petsc-N5i8ny/config.headers/conftest.c Preprocess stderr before filtering:: Preprocess stderr after filtering:: Defined "HAVE_PWD_H" to "1" ================================================================================ TEST check from config.headers(/p/work2/tmondrag/moose/petsc/config/BuildSystem/config/headers.py:77) TESTING: check from config.headers(/p/work2/tmondrag/moose/petsc/config/BuildSystem/config/headers.py:77) Checks for "header", and defines HAVE_"header" if found Checking for header: strings.h Preprocessing source: #include "confdefs.h" #include "conffix.h" #include Running Executable WITHOUT threads to time it out Executing: mpicc -E -I/p/work2/tmondrag/petsc-N5i8ny/config.setCompilers -I/p/work2/tmondrag/petsc-N5i8ny/config.headers /p/work2/tmondrag/petsc-N5i8ny/config.headers/conftest.c Preprocess stderr before filtering:: Preprocess stderr after filtering:: Defined "HAVE_STRINGS_H" to "1" ================================================================================ TEST check from config.headers(/p/work2/tmondrag/moose/petsc/config/BuildSystem/config/headers.py:77) TESTING: check from config.headers(/p/work2/tmondrag/moose/petsc/config/BuildSystem/config/headers.py:77) Checks for "header", and defines HAVE_"header" if found Checking for header: unistd.h Preprocessing source: #include "confdefs.h" #include "conffix.h" #include Running Executable WITHOUT threads to time it out Executing: mpicc -E -I/p/work2/tmondrag/petsc-N5i8ny/config.setCompilers -I/p/work2/tmondrag/petsc-N5i8ny/config.headers /p/work2/tmondrag/petsc-N5i8ny/config.headers/conftest.c Preprocess stderr before filtering:: Preprocess stderr after filtering:: Defined "HAVE_UNISTD_H" to 
"1" ================================================================================ TEST check from config.headers(/p/work2/tmondrag/moose/petsc/config/BuildSystem/config/headers.py:77) TESTING: check from config.headers(/p/work2/tmondrag/moose/petsc/config/BuildSystem/config/headers.py:77) Checks for "header", and defines HAVE_"header" if found Checking for header: sys/sysinfo.h Preprocessing source: #include "confdefs.h" #include "conffix.h" #include Running Executable WITHOUT threads to time it out Executing: mpicc -E -I/p/work2/tmondrag/petsc-N5i8ny/config.setCompilers -I/p/work2/tmondrag/petsc-N5i8ny/config.headers /p/work2/tmondrag/petsc-N5i8ny/config.headers/conftest.c Preprocess stderr before filtering:: Preprocess stderr after filtering:: Defined "HAVE_SYS_SYSINFO_H" to "1" ================================================================================ TEST check from config.headers(/p/work2/tmondrag/moose/petsc/config/BuildSystem/config/headers.py:77) TESTING: check from config.headers(/p/work2/tmondrag/moose/petsc/config/BuildSystem/config/headers.py:77) Checks for "header", and defines HAVE_"header" if found Checking for header: machine/endian.h Preprocessing source: #include "confdefs.h" #include "conffix.h" #include Running Executable WITHOUT threads to time it out Executing: mpicc -E -I/p/work2/tmondrag/petsc-N5i8ny/config.setCompilers -I/p/work2/tmondrag/petsc-N5i8ny/config.headers /p/work2/tmondrag/petsc-N5i8ny/config.headers/conftest.c Possible ERROR while running preprocessor: exit code 1 stdout: # 1 "/p/work2/tmondrag/petsc-N5i8ny/config.headers/conftest.c" # 1 "" # 1 "" # 31 "" # 1 "/usr/include/stdc-predef.h" 1 3 4 # 32 "" 2 # 1 "/p/work2/tmondrag/petsc-N5i8ny/config.headers/conftest.c" # 1 "/p/work2/tmondrag/petsc-N5i8ny/config.headers/confdefs.h" 1 # 2 "/p/work2/tmondrag/petsc-N5i8ny/config.headers/conftest.c" 2 # 1 "/p/work2/tmondrag/petsc-N5i8ny/config.headers/conffix.h" 1 # 3 "/p/work2/tmondrag/petsc-N5i8ny/config.headers/conftest.c" 2 stderr: /p/work2/tmondrag/petsc-N5i8ny/config.headers/conftest.c:3:10: fatal error: machine/endian.h: No such file or directory #include ^~~~~~~~~~~~~~~~~~ compilation terminated. Source: #include "confdefs.h" #include "conffix.h" #include Preprocess stderr before filtering:/p/work2/tmondrag/petsc-N5i8ny/config.headers/conftest.c:3:10: fatal error: machine/endian.h: No such file or directory #include ^~~~~~~~~~~~~~~~~~ compilation terminated. 
: Preprocess stderr after filtering:/p/work2/tmondrag/petsc-N5i8ny/config.headers/conftest.c:3:10: fatal error: machine/endian.h: No such file or directory #include ^~~~~~~~~~~~~~~~~~compilation terminated.: ================================================================================ TEST check from config.headers(/p/work2/tmondrag/moose/petsc/config/BuildSystem/config/headers.py:77) TESTING: check from config.headers(/p/work2/tmondrag/moose/petsc/config/BuildSystem/config/headers.py:77) Checks for "header", and defines HAVE_"header" if found Checking for header: sys/param.h Preprocessing source: #include "confdefs.h" #include "conffix.h" #include Running Executable WITHOUT threads to time it out Executing: mpicc -E -I/p/work2/tmondrag/petsc-N5i8ny/config.setCompilers -I/p/work2/tmondrag/petsc-N5i8ny/config.headers /p/work2/tmondrag/petsc-N5i8ny/config.headers/conftest.c Preprocess stderr before filtering:: Preprocess stderr after filtering:: Defined "HAVE_SYS_PARAM_H" to "1" ================================================================================ TEST check from config.headers(/p/work2/tmondrag/moose/petsc/config/BuildSystem/config/headers.py:77) TESTING: check from config.headers(/p/work2/tmondrag/moose/petsc/config/BuildSystem/config/headers.py:77) Checks for "header", and defines HAVE_"header" if found Checking for header: sys/procfs.h Preprocessing source: #include "confdefs.h" #include "conffix.h" #include Running Executable WITHOUT threads to time it out Executing: mpicc -E -I/p/work2/tmondrag/petsc-N5i8ny/config.setCompilers -I/p/work2/tmondrag/petsc-N5i8ny/config.headers /p/work2/tmondrag/petsc-N5i8ny/config.headers/conftest.c Preprocess stderr before filtering:: Preprocess stderr after filtering:: Defined "HAVE_SYS_PROCFS_H" to "1" ================================================================================ TEST check from config.headers(/p/work2/tmondrag/moose/petsc/config/BuildSystem/config/headers.py:77) TESTING: check from config.headers(/p/work2/tmondrag/moose/petsc/config/BuildSystem/config/headers.py:77) Checks for "header", and defines HAVE_"header" if found Checking for header: sys/resource.h Preprocessing source: #include "confdefs.h" #include "conffix.h" #include Running Executable WITHOUT threads to time it out Executing: mpicc -E -I/p/work2/tmondrag/petsc-N5i8ny/config.setCompilers -I/p/work2/tmondrag/petsc-N5i8ny/config.headers /p/work2/tmondrag/petsc-N5i8ny/config.headers/conftest.c Preprocess stderr before filtering:: Preprocess stderr after filtering:: Defined "HAVE_SYS_RESOURCE_H" to "1" ================================================================================ TEST check from config.headers(/p/work2/tmondrag/moose/petsc/config/BuildSystem/config/headers.py:77) TESTING: check from config.headers(/p/work2/tmondrag/moose/petsc/config/BuildSystem/config/headers.py:77) Checks for "header", and defines HAVE_"header" if found Checking for header: sys/systeminfo.h Preprocessing source: #include "confdefs.h" #include "conffix.h" #include Running Executable WITHOUT threads to time it out Executing: mpicc -E -I/p/work2/tmondrag/petsc-N5i8ny/config.setCompilers -I/p/work2/tmondrag/petsc-N5i8ny/config.headers /p/work2/tmondrag/petsc-N5i8ny/config.headers/conftest.c Possible ERROR while running preprocessor: exit code 1 stdout: # 1 "/p/work2/tmondrag/petsc-N5i8ny/config.headers/conftest.c" # 1 "" # 1 "" # 31 "" # 1 "/usr/include/stdc-predef.h" 1 3 4 # 32 "" 2 # 1 "/p/work2/tmondrag/petsc-N5i8ny/config.headers/conftest.c" # 1 
"/p/work2/tmondrag/petsc-N5i8ny/config.headers/confdefs.h" 1 # 2 "/p/work2/tmondrag/petsc-N5i8ny/config.headers/conftest.c" 2 # 1 "/p/work2/tmondrag/petsc-N5i8ny/config.headers/conffix.h" 1 # 3 "/p/work2/tmondrag/petsc-N5i8ny/config.headers/conftest.c" 2 stderr: /p/work2/tmondrag/petsc-N5i8ny/config.headers/conftest.c:3:10: fatal error: sys/systeminfo.h: No such file or directory #include ^~~~~~~~~~~~~~~~~~ compilation terminated. Source: #include "confdefs.h" #include "conffix.h" #include Preprocess stderr before filtering:/p/work2/tmondrag/petsc-N5i8ny/config.headers/conftest.c:3:10: fatal error: sys/systeminfo.h: No such file or directory #include ^~~~~~~~~~~~~~~~~~ compilation terminated. : Preprocess stderr after filtering:/p/work2/tmondrag/petsc-N5i8ny/config.headers/conftest.c:3:10: fatal error: sys/systeminfo.h: No such file or directory #include ^~~~~~~~~~~~~~~~~~compilation terminated.: ================================================================================ TEST check from config.headers(/p/work2/tmondrag/moose/petsc/config/BuildSystem/config/headers.py:77) TESTING: check from config.headers(/p/work2/tmondrag/moose/petsc/config/BuildSystem/config/headers.py:77) Checks for "header", and defines HAVE_"header" if found Checking for header: sys/times.h Preprocessing source: #include "confdefs.h" #include "conffix.h" #include Running Executable WITHOUT threads to time it out Executing: mpicc -E -I/p/work2/tmondrag/petsc-N5i8ny/config.setCompilers -I/p/work2/tmondrag/petsc-N5i8ny/config.headers /p/work2/tmondrag/petsc-N5i8ny/config.headers/conftest.c Preprocess stderr before filtering:: Preprocess stderr after filtering:: Defined "HAVE_SYS_TIMES_H" to "1" ================================================================================ TEST check from config.headers(/p/work2/tmondrag/moose/petsc/config/BuildSystem/config/headers.py:77) TESTING: check from config.headers(/p/work2/tmondrag/moose/petsc/config/BuildSystem/config/headers.py:77) Checks for "header", and defines HAVE_"header" if found Checking for header: sys/utsname.h Preprocessing source: #include "confdefs.h" #include "conffix.h" #include Running Executable WITHOUT threads to time it out Executing: mpicc -E -I/p/work2/tmondrag/petsc-N5i8ny/config.setCompilers -I/p/work2/tmondrag/petsc-N5i8ny/config.headers /p/work2/tmondrag/petsc-N5i8ny/config.headers/conftest.c Preprocess stderr before filtering:: Preprocess stderr after filtering:: Defined "HAVE_SYS_UTSNAME_H" to "1" ================================================================================ TEST check from config.headers(/p/work2/tmondrag/moose/petsc/config/BuildSystem/config/headers.py:77) TESTING: check from config.headers(/p/work2/tmondrag/moose/petsc/config/BuildSystem/config/headers.py:77) Checks for "header", and defines HAVE_"header" if found Checking for header: sys/socket.h Preprocessing source: #include "confdefs.h" #include "conffix.h" #include Running Executable WITHOUT threads to time it out Executing: mpicc -E -I/p/work2/tmondrag/petsc-N5i8ny/config.setCompilers -I/p/work2/tmondrag/petsc-N5i8ny/config.headers /p/work2/tmondrag/petsc-N5i8ny/config.headers/conftest.c Preprocess stderr before filtering:: Preprocess stderr after filtering:: Defined "HAVE_SYS_SOCKET_H" to "1" ================================================================================ TEST check from config.headers(/p/work2/tmondrag/moose/petsc/config/BuildSystem/config/headers.py:77) TESTING: check from 
config.headers(/p/work2/tmondrag/moose/petsc/config/BuildSystem/config/headers.py:77) Checks for "header", and defines HAVE_"header" if found Checking for header: sys/wait.h Preprocessing source: #include "confdefs.h" #include "conffix.h" #include Running Executable WITHOUT threads to time it out Executing: mpicc -E -I/p/work2/tmondrag/petsc-N5i8ny/config.setCompilers -I/p/work2/tmondrag/petsc-N5i8ny/config.headers /p/work2/tmondrag/petsc-N5i8ny/config.headers/conftest.c Preprocess stderr before filtering:: Preprocess stderr after filtering:: Defined "HAVE_SYS_WAIT_H" to "1" ================================================================================ TEST check from config.headers(/p/work2/tmondrag/moose/petsc/config/BuildSystem/config/headers.py:77) TESTING: check from config.headers(/p/work2/tmondrag/moose/petsc/config/BuildSystem/config/headers.py:77) Checks for "header", and defines HAVE_"header" if found Checking for header: netinet/in.h Preprocessing source: #include "confdefs.h" #include "conffix.h" #include Running Executable WITHOUT threads to time it out Executing: mpicc -E -I/p/work2/tmondrag/petsc-N5i8ny/config.setCompilers -I/p/work2/tmondrag/petsc-N5i8ny/config.headers /p/work2/tmondrag/petsc-N5i8ny/config.headers/conftest.c Preprocess stderr before filtering:: Preprocess stderr after filtering:: Defined "HAVE_NETINET_IN_H" to "1" ================================================================================ TEST check from config.headers(/p/work2/tmondrag/moose/petsc/config/BuildSystem/config/headers.py:77) TESTING: check from config.headers(/p/work2/tmondrag/moose/petsc/config/BuildSystem/config/headers.py:77) Checks for "header", and defines HAVE_"header" if found Checking for header: netdb.h Preprocessing source: #include "confdefs.h" #include "conffix.h" #include Running Executable WITHOUT threads to time it out Executing: mpicc -E -I/p/work2/tmondrag/petsc-N5i8ny/config.setCompilers -I/p/work2/tmondrag/petsc-N5i8ny/config.headers /p/work2/tmondrag/petsc-N5i8ny/config.headers/conftest.c Preprocess stderr before filtering:: Preprocess stderr after filtering:: Defined "HAVE_NETDB_H" to "1" ================================================================================ TEST check from config.headers(/p/work2/tmondrag/moose/petsc/config/BuildSystem/config/headers.py:77) TESTING: check from config.headers(/p/work2/tmondrag/moose/petsc/config/BuildSystem/config/headers.py:77) Checks for "header", and defines HAVE_"header" if found Checking for header: Direct.h Preprocessing source: #include "confdefs.h" #include "conffix.h" #include Running Executable WITHOUT threads to time it out Executing: mpicc -E -I/p/work2/tmondrag/petsc-N5i8ny/config.setCompilers -I/p/work2/tmondrag/petsc-N5i8ny/config.headers /p/work2/tmondrag/petsc-N5i8ny/config.headers/conftest.c Possible ERROR while running preprocessor: exit code 1 stdout: # 1 "/p/work2/tmondrag/petsc-N5i8ny/config.headers/conftest.c" # 1 "" # 1 "" # 31 "" # 1 "/usr/include/stdc-predef.h" 1 3 4 # 32 "" 2 # 1 "/p/work2/tmondrag/petsc-N5i8ny/config.headers/conftest.c" # 1 "/p/work2/tmondrag/petsc-N5i8ny/config.headers/confdefs.h" 1 # 2 "/p/work2/tmondrag/petsc-N5i8ny/config.headers/conftest.c" 2 # 1 "/p/work2/tmondrag/petsc-N5i8ny/config.headers/conffix.h" 1 # 3 "/p/work2/tmondrag/petsc-N5i8ny/config.headers/conftest.c" 2 stderr: /p/work2/tmondrag/petsc-N5i8ny/config.headers/conftest.c:3:10: fatal error: Direct.h: No such file or directory #include ^~~~~~~~~~ compilation terminated. 
Source: #include "confdefs.h" #include "conffix.h" #include Preprocess stderr before filtering:/p/work2/tmondrag/petsc-N5i8ny/config.headers/conftest.c:3:10: fatal error: Direct.h: No such file or directory #include ^~~~~~~~~~ compilation terminated. : Preprocess stderr after filtering:/p/work2/tmondrag/petsc-N5i8ny/config.headers/conftest.c:3:10: fatal error: Direct.h: No such file or directory #include ^~~~~~~~~~compilation terminated.: ================================================================================ TEST check from config.headers(/p/work2/tmondrag/moose/petsc/config/BuildSystem/config/headers.py:77) TESTING: check from config.headers(/p/work2/tmondrag/moose/petsc/config/BuildSystem/config/headers.py:77) Checks for "header", and defines HAVE_"header" if found Checking for header: time.h Preprocessing source: #include "confdefs.h" #include "conffix.h" #include Running Executable WITHOUT threads to time it out Executing: mpicc -E -I/p/work2/tmondrag/petsc-N5i8ny/config.setCompilers -I/p/work2/tmondrag/petsc-N5i8ny/config.headers /p/work2/tmondrag/petsc-N5i8ny/config.headers/conftest.c Preprocess stderr before filtering:: Preprocess stderr after filtering:: Defined "HAVE_TIME_H" to "1" ================================================================================ TEST check from config.headers(/p/work2/tmondrag/moose/petsc/config/BuildSystem/config/headers.py:77) TESTING: check from config.headers(/p/work2/tmondrag/moose/petsc/config/BuildSystem/config/headers.py:77) Checks for "header", and defines HAVE_"header" if found Checking for header: Ws2tcpip.h Preprocessing source: #include "confdefs.h" #include "conffix.h" #include Running Executable WITHOUT threads to time it out Executing: mpicc -E -I/p/work2/tmondrag/petsc-N5i8ny/config.setCompilers -I/p/work2/tmondrag/petsc-N5i8ny/config.headers /p/work2/tmondrag/petsc-N5i8ny/config.headers/conftest.c Possible ERROR while running preprocessor: exit code 1 stdout: # 1 "/p/work2/tmondrag/petsc-N5i8ny/config.headers/conftest.c" # 1 "" # 1 "" # 31 "" # 1 "/usr/include/stdc-predef.h" 1 3 4 # 32 "" 2 # 1 "/p/work2/tmondrag/petsc-N5i8ny/config.headers/conftest.c" # 1 "/p/work2/tmondrag/petsc-N5i8ny/config.headers/confdefs.h" 1 # 2 "/p/work2/tmondrag/petsc-N5i8ny/config.headers/conftest.c" 2 # 1 "/p/work2/tmondrag/petsc-N5i8ny/config.headers/conffix.h" 1 # 3 "/p/work2/tmondrag/petsc-N5i8ny/config.headers/conftest.c" 2 stderr: /p/work2/tmondrag/petsc-N5i8ny/config.headers/conftest.c:3:10: fatal error: Ws2tcpip.h: No such file or directory #include ^~~~~~~~~~~~ compilation terminated. Source: #include "confdefs.h" #include "conffix.h" #include Preprocess stderr before filtering:/p/work2/tmondrag/petsc-N5i8ny/config.headers/conftest.c:3:10: fatal error: Ws2tcpip.h: No such file or directory #include ^~~~~~~~~~~~ compilation terminated. 
: Preprocess stderr after filtering:/p/work2/tmondrag/petsc-N5i8ny/config.headers/conftest.c:3:10: fatal error: Ws2tcpip.h: No such file or directory #include ^~~~~~~~~~~~compilation terminated.: ================================================================================ TEST check from config.headers(/p/work2/tmondrag/moose/petsc/config/BuildSystem/config/headers.py:77) TESTING: check from config.headers(/p/work2/tmondrag/moose/petsc/config/BuildSystem/config/headers.py:77) Checks for "header", and defines HAVE_"header" if found Checking for header: sys/types.h Preprocessing source: #include "confdefs.h" #include "conffix.h" #include Running Executable WITHOUT threads to time it out Executing: mpicc -E -I/p/work2/tmondrag/petsc-N5i8ny/config.setCompilers -I/p/work2/tmondrag/petsc-N5i8ny/config.headers /p/work2/tmondrag/petsc-N5i8ny/config.headers/conftest.c Preprocess stderr before filtering:: Preprocess stderr after filtering:: Defined "HAVE_SYS_TYPES_H" to "1" ================================================================================ TEST check from config.headers(/p/work2/tmondrag/moose/petsc/config/BuildSystem/config/headers.py:77) TESTING: check from config.headers(/p/work2/tmondrag/moose/petsc/config/BuildSystem/config/headers.py:77) Checks for "header", and defines HAVE_"header" if found Checking for header: WindowsX.h Preprocessing source: #include "confdefs.h" #include "conffix.h" #include Running Executable WITHOUT threads to time it out Executing: mpicc -E -I/p/work2/tmondrag/petsc-N5i8ny/config.setCompilers -I/p/work2/tmondrag/petsc-N5i8ny/config.headers /p/work2/tmondrag/petsc-N5i8ny/config.headers/conftest.c Possible ERROR while running preprocessor: exit code 1 stdout: # 1 "/p/work2/tmondrag/petsc-N5i8ny/config.headers/conftest.c" # 1 "" # 1 "" # 31 "" # 1 "/usr/include/stdc-predef.h" 1 3 4 # 32 "" 2 # 1 "/p/work2/tmondrag/petsc-N5i8ny/config.headers/conftest.c" # 1 "/p/work2/tmondrag/petsc-N5i8ny/config.headers/confdefs.h" 1 # 2 "/p/work2/tmondrag/petsc-N5i8ny/config.headers/conftest.c" 2 # 1 "/p/work2/tmondrag/petsc-N5i8ny/config.headers/conffix.h" 1 # 3 "/p/work2/tmondrag/petsc-N5i8ny/config.headers/conftest.c" 2 stderr: /p/work2/tmondrag/petsc-N5i8ny/config.headers/conftest.c:3:10: fatal error: WindowsX.h: No such file or directory #include ^~~~~~~~~~~~ compilation terminated. Source: #include "confdefs.h" #include "conffix.h" #include Preprocess stderr before filtering:/p/work2/tmondrag/petsc-N5i8ny/config.headers/conftest.c:3:10: fatal error: WindowsX.h: No such file or directory #include ^~~~~~~~~~~~ compilation terminated. 
: Preprocess stderr after filtering:/p/work2/tmondrag/petsc-N5i8ny/config.headers/conftest.c:3:10: fatal error: WindowsX.h: No such file or directory #include ^~~~~~~~~~~~compilation terminated.: ================================================================================ TEST check from config.headers(/p/work2/tmondrag/moose/petsc/config/BuildSystem/config/headers.py:77) TESTING: check from config.headers(/p/work2/tmondrag/moose/petsc/config/BuildSystem/config/headers.py:77) Checks for "header", and defines HAVE_"header" if found Checking for header: float.h Preprocessing source: #include "confdefs.h" #include "conffix.h" #include Running Executable WITHOUT threads to time it out Executing: mpicc -E -I/p/work2/tmondrag/petsc-N5i8ny/config.setCompilers -I/p/work2/tmondrag/petsc-N5i8ny/config.headers /p/work2/tmondrag/petsc-N5i8ny/config.headers/conftest.c Preprocess stderr before filtering:: Preprocess stderr after filtering:: Defined "HAVE_FLOAT_H" to "1" ================================================================================ TEST check from config.headers(/p/work2/tmondrag/moose/petsc/config/BuildSystem/config/headers.py:77) TESTING: check from config.headers(/p/work2/tmondrag/moose/petsc/config/BuildSystem/config/headers.py:77) Checks for "header", and defines HAVE_"header" if found Checking for header: ieeefp.h Preprocessing source: #include "confdefs.h" #include "conffix.h" #include Running Executable WITHOUT threads to time it out Executing: mpicc -E -I/p/work2/tmondrag/petsc-N5i8ny/config.setCompilers -I/p/work2/tmondrag/petsc-N5i8ny/config.headers /p/work2/tmondrag/petsc-N5i8ny/config.headers/conftest.c Possible ERROR while running preprocessor: exit code 1 stdout: # 1 "/p/work2/tmondrag/petsc-N5i8ny/config.headers/conftest.c" # 1 "" # 1 "" # 31 "" # 1 "/usr/include/stdc-predef.h" 1 3 4 # 32 "" 2 # 1 "/p/work2/tmondrag/petsc-N5i8ny/config.headers/conftest.c" # 1 "/p/work2/tmondrag/petsc-N5i8ny/config.headers/confdefs.h" 1 # 2 "/p/work2/tmondrag/petsc-N5i8ny/config.headers/conftest.c" 2 # 1 "/p/work2/tmondrag/petsc-N5i8ny/config.headers/conffix.h" 1 # 3 "/p/work2/tmondrag/petsc-N5i8ny/config.headers/conftest.c" 2 stderr: /p/work2/tmondrag/petsc-N5i8ny/config.headers/conftest.c:3:10: fatal error: ieeefp.h: No such file or directory #include ^~~~~~~~~~ compilation terminated. Source: #include "confdefs.h" #include "conffix.h" #include Preprocess stderr before filtering:/p/work2/tmondrag/petsc-N5i8ny/config.headers/conftest.c:3:10: fatal error: ieeefp.h: No such file or directory #include ^~~~~~~~~~ compilation terminated. 
: Preprocess stderr after filtering:/p/work2/tmondrag/petsc-N5i8ny/config.headers/conftest.c:3:10: fatal error: ieeefp.h: No such file or directory #include ^~~~~~~~~~compilation terminated.: ================================================================================ TEST check from config.headers(/p/work2/tmondrag/moose/petsc/config/BuildSystem/config/headers.py:77) TESTING: check from config.headers(/p/work2/tmondrag/moose/petsc/config/BuildSystem/config/headers.py:77) Checks for "header", and defines HAVE_"header" if found Checking for header: stdint.h Preprocessing source: #include "confdefs.h" #include "conffix.h" #include Running Executable WITHOUT threads to time it out Executing: mpicc -E -I/p/work2/tmondrag/petsc-N5i8ny/config.setCompilers -I/p/work2/tmondrag/petsc-N5i8ny/config.headers /p/work2/tmondrag/petsc-N5i8ny/config.headers/conftest.c Preprocess stderr before filtering:: Preprocess stderr after filtering:: Defined "HAVE_STDINT_H" to "1" ================================================================================ TEST check from config.headers(/p/work2/tmondrag/moose/petsc/config/BuildSystem/config/headers.py:77) TESTING: check from config.headers(/p/work2/tmondrag/moose/petsc/config/BuildSystem/config/headers.py:77) Checks for "header", and defines HAVE_"header" if found Checking for header: pthread.h Preprocessing source: #include "confdefs.h" #include "conffix.h" #include Running Executable WITHOUT threads to time it out Executing: mpicc -E -I/p/work2/tmondrag/petsc-N5i8ny/config.setCompilers -I/p/work2/tmondrag/petsc-N5i8ny/config.headers /p/work2/tmondrag/petsc-N5i8ny/config.headers/conftest.c Preprocess stderr before filtering:: Preprocess stderr after filtering:: Defined "HAVE_PTHREAD_H" to "1" ================================================================================ TEST check from config.headers(/p/work2/tmondrag/moose/petsc/config/BuildSystem/config/headers.py:77) TESTING: check from config.headers(/p/work2/tmondrag/moose/petsc/config/BuildSystem/config/headers.py:77) Checks for "header", and defines HAVE_"header" if found Checking for header: inttypes.h Preprocessing source: #include "confdefs.h" #include "conffix.h" #include Running Executable WITHOUT threads to time it out Executing: mpicc -E -I/p/work2/tmondrag/petsc-N5i8ny/config.setCompilers -I/p/work2/tmondrag/petsc-N5i8ny/config.headers /p/work2/tmondrag/petsc-N5i8ny/config.headers/conftest.c Preprocess stderr before filtering:: Preprocess stderr after filtering:: Defined "HAVE_INTTYPES_H" to "1" ================================================================================ TEST check from config.headers(/p/work2/tmondrag/moose/petsc/config/BuildSystem/config/headers.py:77) TESTING: check from config.headers(/p/work2/tmondrag/moose/petsc/config/BuildSystem/config/headers.py:77) Checks for "header", and defines HAVE_"header" if found Checking for header: immintrin.h Preprocessing source: #include "confdefs.h" #include "conffix.h" #include Running Executable WITHOUT threads to time it out Executing: mpicc -E -I/p/work2/tmondrag/petsc-N5i8ny/config.setCompilers -I/p/work2/tmondrag/petsc-N5i8ny/config.headers /p/work2/tmondrag/petsc-N5i8ny/config.headers/conftest.c Preprocess stderr before filtering:: Preprocess stderr after filtering:: Defined "HAVE_IMMINTRIN_H" to "1" ================================================================================ TEST check from config.headers(/p/work2/tmondrag/moose/petsc/config/BuildSystem/config/headers.py:77) TESTING: check from 
================================================================================
TEST check from config.headers(/p/work2/tmondrag/moose/petsc/config/BuildSystem/config/headers.py:77)
TESTING: check from config.headers(/p/work2/tmondrag/moose/petsc/config/BuildSystem/config/headers.py:77)
  Checks for "header", and defines HAVE_"header" if found
Checking for header: zmmintrin.h
Preprocessing source:
#include "confdefs.h"
#include "conffix.h"
#include <zmmintrin.h>
Running Executable WITHOUT threads to time it out
Executing: mpicc -E -I/p/work2/tmondrag/petsc-N5i8ny/config.setCompilers -I/p/work2/tmondrag/petsc-N5i8ny/config.headers /p/work2/tmondrag/petsc-N5i8ny/config.headers/conftest.c
Possible ERROR while running preprocessor: exit code 1
stdout:
# 1 "/p/work2/tmondrag/petsc-N5i8ny/config.headers/conftest.c"
# 1 "<built-in>"
# 1 "<command-line>"
# 31 "<command-line>"
# 1 "/usr/include/stdc-predef.h" 1 3 4
# 32 "<command-line>" 2
# 1 "/p/work2/tmondrag/petsc-N5i8ny/config.headers/conftest.c"
# 1 "/p/work2/tmondrag/petsc-N5i8ny/config.headers/confdefs.h" 1
# 2 "/p/work2/tmondrag/petsc-N5i8ny/config.headers/conftest.c" 2
# 1 "/p/work2/tmondrag/petsc-N5i8ny/config.headers/conffix.h" 1
# 3 "/p/work2/tmondrag/petsc-N5i8ny/config.headers/conftest.c" 2
stderr:
/p/work2/tmondrag/petsc-N5i8ny/config.headers/conftest.c:3:10: fatal error: zmmintrin.h: No such file or directory
 #include <zmmintrin.h>
          ^~~~~~~~~~~~~
compilation terminated.
Source:
#include "confdefs.h"
#include "conffix.h"
#include <zmmintrin.h>
Preprocess stderr before filtering: /p/work2/tmondrag/petsc-N5i8ny/config.headers/conftest.c:3:10: fatal error: zmmintrin.h: No such file or directory: compilation terminated.
Preprocess stderr after filtering: /p/work2/tmondrag/petsc-N5i8ny/config.headers/conftest.c:3:10: fatal error: zmmintrin.h: No such file or directory: compilation terminated.
================================================================================
TEST checkRecursiveMacros from config.headers(/p/work2/tmondrag/moose/petsc/config/BuildSystem/config/headers.py:216)
TESTING: checkRecursiveMacros from config.headers(/p/work2/tmondrag/moose/petsc/config/BuildSystem/config/headers.py:216)
  Checks that the preprocessor allows recursive macros, and if not defines HAVE_BROKEN_RECURSIVE_MACRO
Running Executable WITHOUT threads to time it out
Executing: mpicc -c -o /p/work2/tmondrag/petsc-N5i8ny/config.headers/conftest.o -I/p/work2/tmondrag/petsc-N5i8ny/config.libraries -I/p/work2/tmondrag/petsc-N5i8ny/config.setCompilers -I/p/work2/tmondrag/petsc-N5i8ny/config.compilers -I/p/work2/tmondrag/petsc-N5i8ny/config.utilities.closure -I/p/work2/tmondrag/petsc-N5i8ny/config.headers -fPIC -Wall -Wwrite-strings -Wno-strict-aliasing -Wno-unknown-pragmas -fstack-protector -fvisibility=hidden -g -O /p/work2/tmondrag/petsc-N5i8ny/config.headers/conftest.c
Successful compile:
Source:
#include "confdefs.h"
#include "conffix.h"
void a(int i, int j) {}
#define a(b) a(b,__LINE__)
int main() {
a(0);
;
  return 0;
}
child config.headers 2.168840
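The checkRecursiveMacros test above relies on a guarantee from C99 6.10.3.4: a function-like macro name is not re-expanded inside its own expansion. A self-contained sketch of the same probe:

    /* recursive_macro.c -- a(0) expands once to a(0, __LINE__) and stops,
       because the macro 'a' is not re-expanded inside its own expansion;
       a preprocessor that loops here would earn HAVE_BROKEN_RECURSIVE_MACRO */
    void a(int i, int j) { (void)i; (void)j; }
    #define a(b) a(b, __LINE__)
    int main(void) { a(0); return 0; }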
================================================================================
TEST configureCacheDetails from config.utilities.cacheDetails(/p/work2/tmondrag/moose/petsc/config/BuildSystem/config/utilities/cacheDetails.py:78)
TESTING: configureCacheDetails from config.utilities.cacheDetails(/p/work2/tmondrag/moose/petsc/config/BuildSystem/config/utilities/cacheDetails.py:78)
  Try to determine the size and associativity of the cache.
All intermediate test results are stored in /p/work2/tmondrag/petsc-N5i8ny/config.utilities.cacheDetails
Running Executable WITHOUT threads to time it out
Executing: mpicc -c -o /p/work2/tmondrag/petsc-N5i8ny/config.utilities.cacheDetails/conftest.o -I/p/work2/tmondrag/petsc-N5i8ny/config.libraries -I/p/work2/tmondrag/petsc-N5i8ny/config.setCompilers -I/p/work2/tmondrag/petsc-N5i8ny/config.utilities.closure -I/p/work2/tmondrag/petsc-N5i8ny/config.compilers -I/p/work2/tmondrag/petsc-N5i8ny/config.utilities.cacheDetails -I/p/work2/tmondrag/petsc-N5i8ny/config.headers -fPIC -Wall -Wwrite-strings -Wno-strict-aliasing -Wno-unknown-pragmas -fstack-protector -fvisibility=hidden -g -O /p/work2/tmondrag/petsc-N5i8ny/config.utilities.cacheDetails/conftest.c
Successful compile:
Source:
#include "confdefs.h"
#include "conffix.h"
#include <unistd.h>
long getconf_LEVEL1_DCACHE_LINESIZE() { long val = sysconf(_SC_LEVEL1_DCACHE_LINESIZE); return (16 <= val && val <= 2147483647) ? val : 32; }
int main() {
;
  return 0;
}
Skipping determination of LEVEL1_DCACHE_LINESIZE in batch mode, using default 32
Defined "LEVEL1_DCACHE_LINESIZE" to "32"
child config.utilities.cacheDetails 0.053318
================================================================================
TEST check_struct_sigaction from config.types(/p/work2/tmondrag/moose/petsc/config/BuildSystem/config/types.py:43)
TESTING: check_struct_sigaction from config.types(/p/work2/tmondrag/moose/petsc/config/BuildSystem/config/types.py:43)
  Checks if "struct sigaction" exists in signal.h. This check is for C89 check.
Checking for type: struct sigaction
All intermediate test results are stored in /p/work2/tmondrag/petsc-N5i8ny/config.types
Running Executable WITHOUT threads to time it out
Executing: mpicc -c -o /p/work2/tmondrag/petsc-N5i8ny/config.types/conftest.o -I/p/work2/tmondrag/petsc-N5i8ny/config.libraries -I/p/work2/tmondrag/petsc-N5i8ny/config.setCompilers -I/p/work2/tmondrag/petsc-N5i8ny/config.utilities.closure -I/p/work2/tmondrag/petsc-N5i8ny/config.compilers -I/p/work2/tmondrag/petsc-N5i8ny/config.utilities.cacheDetails -I/p/work2/tmondrag/petsc-N5i8ny/config.headers -I/p/work2/tmondrag/petsc-N5i8ny/config.types -fPIC -Wall -Wwrite-strings -Wno-strict-aliasing -Wno-unknown-pragmas -fstack-protector -fvisibility=hidden -g -O /p/work2/tmondrag/petsc-N5i8ny/config.types/conftest.c
Possible ERROR while running compiler: stderr:
/p/work2/tmondrag/petsc-N5i8ny/config.types/conftest.c: In function 'main':
/p/work2/tmondrag/petsc-N5i8ny/config.types/conftest.c:11:18: warning: unused variable 'a' [-Wunused-variable]
 struct sigaction a;;
                  ^
Source:
#include "confdefs.h"
#include "conffix.h"
#include
#include
#include
#include <signal.h>
int main() {
struct sigaction a;;
  return 0;
}
struct sigaction found
Defined "HAVE_STRUCT_SIGACTION" to "1"
================================================================================
TEST check__int64 from config.types(/p/work2/tmondrag/moose/petsc/config/BuildSystem/config/types.py:49)
TESTING: check__int64 from config.types(/p/work2/tmondrag/moose/petsc/config/BuildSystem/config/types.py:49)
  Checks if __int64 exists. This is primarily for windows.
Checking for type: __int64
Running Executable WITHOUT threads to time it out
Executing: mpicc -c -o /p/work2/tmondrag/petsc-N5i8ny/config.types/conftest.o -I/p/work2/tmondrag/petsc-N5i8ny/config.libraries -I/p/work2/tmondrag/petsc-N5i8ny/config.setCompilers -I/p/work2/tmondrag/petsc-N5i8ny/config.utilities.closure -I/p/work2/tmondrag/petsc-N5i8ny/config.compilers -I/p/work2/tmondrag/petsc-N5i8ny/config.utilities.cacheDetails -I/p/work2/tmondrag/petsc-N5i8ny/config.headers -I/p/work2/tmondrag/petsc-N5i8ny/config.types -fPIC -Wall -Wwrite-strings -Wno-strict-aliasing -Wno-unknown-pragmas -fstack-protector -fvisibility=hidden -g -O /p/work2/tmondrag/petsc-N5i8ny/config.types/conftest.c
Possible ERROR while running compiler: exit code 1
stderr:
/p/work2/tmondrag/petsc-N5i8ny/config.types/conftest.c: In function 'main':
/p/work2/tmondrag/petsc-N5i8ny/config.types/conftest.c:11:1: error: unknown type name '__int64'; did you mean '__int64_t'?
 __int64 a;;
 ^~~~~~~
 __int64_t
/p/work2/tmondrag/petsc-N5i8ny/config.types/conftest.c:11:9: warning: unused variable 'a' [-Wunused-variable]
 __int64 a;;
         ^
Source:
#include "confdefs.h"
#include "conffix.h"
#include
#include
#include
int main() {
__int64 a;;
  return 0;
}
__int64 found
================================================================================
TEST checkSizeTypes from config.types(/p/work2/tmondrag/moose/petsc/config/BuildSystem/config/types.py:55)
TESTING: checkSizeTypes from config.types(/p/work2/tmondrag/moose/petsc/config/BuildSystem/config/types.py:55)
  Checks for types associated with sizes, such as size_t.
Checking for type: size_t
Running Executable WITHOUT threads to time it out
Executing: mpicc -c -o /p/work2/tmondrag/petsc-N5i8ny/config.types/conftest.o -I/p/work2/tmondrag/petsc-N5i8ny/config.libraries -I/p/work2/tmondrag/petsc-N5i8ny/config.setCompilers -I/p/work2/tmondrag/petsc-N5i8ny/config.utilities.closure -I/p/work2/tmondrag/petsc-N5i8ny/config.compilers -I/p/work2/tmondrag/petsc-N5i8ny/config.utilities.cacheDetails -I/p/work2/tmondrag/petsc-N5i8ny/config.headers -I/p/work2/tmondrag/petsc-N5i8ny/config.types -fPIC -Wall -Wwrite-strings -Wno-strict-aliasing -Wno-unknown-pragmas -fstack-protector -fvisibility=hidden -g -O /p/work2/tmondrag/petsc-N5i8ny/config.types/conftest.c
Possible ERROR while running compiler: stderr:
/p/work2/tmondrag/petsc-N5i8ny/config.types/conftest.c: In function 'main':
/p/work2/tmondrag/petsc-N5i8ny/config.types/conftest.c:11:8: warning: unused variable 'a' [-Wunused-variable]
 size_t a;;
        ^
Source:
#include "confdefs.h"
#include "conffix.h"
#include
#include
#include
int main() {
size_t a;;
  return 0;
}
size_t found
================================================================================
TEST checkFileTypes from config.types(/p/work2/tmondrag/moose/petsc/config/BuildSystem/config/types.py:65)
TESTING: checkFileTypes from config.types(/p/work2/tmondrag/moose/petsc/config/BuildSystem/config/types.py:65)
  Checks for types associated with files, such as mode_t, off_t, etc.
Checking for type: mode_t
Running Executable WITHOUT threads to time it out
Executing: mpicc -c -o /p/work2/tmondrag/petsc-N5i8ny/config.types/conftest.o -I/p/work2/tmondrag/petsc-N5i8ny/config.libraries -I/p/work2/tmondrag/petsc-N5i8ny/config.setCompilers -I/p/work2/tmondrag/petsc-N5i8ny/config.utilities.closure -I/p/work2/tmondrag/petsc-N5i8ny/config.compilers -I/p/work2/tmondrag/petsc-N5i8ny/config.utilities.cacheDetails -I/p/work2/tmondrag/petsc-N5i8ny/config.headers -I/p/work2/tmondrag/petsc-N5i8ny/config.types -fPIC -Wall -Wwrite-strings -Wno-strict-aliasing -Wno-unknown-pragmas -fstack-protector -fvisibility=hidden -g -O /p/work2/tmondrag/petsc-N5i8ny/config.types/conftest.c
Possible ERROR while running compiler: stderr:
/p/work2/tmondrag/petsc-N5i8ny/config.types/conftest.c: In function 'main':
/p/work2/tmondrag/petsc-N5i8ny/config.types/conftest.c:11:8: warning: unused variable 'a' [-Wunused-variable]
 mode_t a;;
        ^
Source:
#include "confdefs.h"
#include "conffix.h"
#include
#include
#include
int main() {
mode_t a;;
  return 0;
}
mode_t found
Checking for type: off_t
Running Executable WITHOUT threads to time it out
Executing: mpicc -c -o /p/work2/tmondrag/petsc-N5i8ny/config.types/conftest.o -I/p/work2/tmondrag/petsc-N5i8ny/config.libraries -I/p/work2/tmondrag/petsc-N5i8ny/config.setCompilers -I/p/work2/tmondrag/petsc-N5i8ny/config.utilities.closure -I/p/work2/tmondrag/petsc-N5i8ny/config.compilers -I/p/work2/tmondrag/petsc-N5i8ny/config.utilities.cacheDetails -I/p/work2/tmondrag/petsc-N5i8ny/config.headers -I/p/work2/tmondrag/petsc-N5i8ny/config.types -fPIC -Wall -Wwrite-strings -Wno-strict-aliasing -Wno-unknown-pragmas -fstack-protector -fvisibility=hidden -g -O /p/work2/tmondrag/petsc-N5i8ny/config.types/conftest.c
Possible ERROR while running compiler: stderr:
/p/work2/tmondrag/petsc-N5i8ny/config.types/conftest.c: In function 'main':
/p/work2/tmondrag/petsc-N5i8ny/config.types/conftest.c:11:7: warning: unused variable 'a' [-Wunused-variable]
 off_t a;;
       ^
Source:
#include "confdefs.h"
#include "conffix.h"
#include
#include
#include
int main() {
off_t a;;
  return 0;
}
off_t found
================================================================================
TEST checkIntegerTypes from config.types(/p/work2/tmondrag/moose/petsc/config/BuildSystem/config/types.py:60)
TESTING: checkIntegerTypes from config.types(/p/work2/tmondrag/moose/petsc/config/BuildSystem/config/types.py:60)
  Checks for types associated with integers, such as int32_t.
Checking for type: int32_t
Running Executable WITHOUT threads to time it out
Executing: mpicc -c -o /p/work2/tmondrag/petsc-N5i8ny/config.types/conftest.o -I/p/work2/tmondrag/petsc-N5i8ny/config.libraries -I/p/work2/tmondrag/petsc-N5i8ny/config.setCompilers -I/p/work2/tmondrag/petsc-N5i8ny/config.utilities.closure -I/p/work2/tmondrag/petsc-N5i8ny/config.compilers -I/p/work2/tmondrag/petsc-N5i8ny/config.utilities.cacheDetails -I/p/work2/tmondrag/petsc-N5i8ny/config.headers -I/p/work2/tmondrag/petsc-N5i8ny/config.types -fPIC -Wall -Wwrite-strings -Wno-strict-aliasing -Wno-unknown-pragmas -fstack-protector -fvisibility=hidden -g -O /p/work2/tmondrag/petsc-N5i8ny/config.types/conftest.c
Possible ERROR while running compiler: stderr:
/p/work2/tmondrag/petsc-N5i8ny/config.types/conftest.c: In function 'main':
/p/work2/tmondrag/petsc-N5i8ny/config.types/conftest.c:11:9: warning: unused variable 'a' [-Wunused-variable]
 int32_t a;;
         ^
Source:
#include "confdefs.h"
#include "conffix.h"
#include
#include
#include
int main() {
int32_t a;;
  return 0;
}
int32_t found
================================================================================
TEST checkPID from config.types(/p/work2/tmondrag/moose/petsc/config/BuildSystem/config/types.py:71)
TESTING: checkPID from config.types(/p/work2/tmondrag/moose/petsc/config/BuildSystem/config/types.py:71)
  Checks for pid_t, and defines it if necessary
Checking for type: pid_t
Running Executable WITHOUT threads to time it out
Executing: mpicc -c -o /p/work2/tmondrag/petsc-N5i8ny/config.types/conftest.o -I/p/work2/tmondrag/petsc-N5i8ny/config.libraries -I/p/work2/tmondrag/petsc-N5i8ny/config.setCompilers -I/p/work2/tmondrag/petsc-N5i8ny/config.utilities.closure -I/p/work2/tmondrag/petsc-N5i8ny/config.compilers -I/p/work2/tmondrag/petsc-N5i8ny/config.utilities.cacheDetails -I/p/work2/tmondrag/petsc-N5i8ny/config.headers -I/p/work2/tmondrag/petsc-N5i8ny/config.types -fPIC -Wall -Wwrite-strings -Wno-strict-aliasing -Wno-unknown-pragmas -fstack-protector -fvisibility=hidden -g -O /p/work2/tmondrag/petsc-N5i8ny/config.types/conftest.c
Possible ERROR while running compiler: stderr:
/p/work2/tmondrag/petsc-N5i8ny/config.types/conftest.c: In function 'main':
/p/work2/tmondrag/petsc-N5i8ny/config.types/conftest.c:11:7: warning: unused variable 'a' [-Wunused-variable]
 pid_t a;;
       ^
Source:
#include "confdefs.h"
#include "conffix.h"
#include
#include
#include
int main() {
pid_t a;;
  return 0;
}
pid_t found
================================================================================
TEST checkUID from config.types(/p/work2/tmondrag/moose/petsc/config/BuildSystem/config/types.py:75)
TESTING: checkUID from config.types(/p/work2/tmondrag/moose/petsc/config/BuildSystem/config/types.py:75)
  Checks for uid_t and gid_t, and defines them if necessary
Preprocessing source:
#include "confdefs.h"
#include "conffix.h"
#include <sys/types.h>
Running Executable WITHOUT threads to time it out
Executing: mpicc -E -I/p/work2/tmondrag/petsc-N5i8ny/config.setCompilers -I/p/work2/tmondrag/petsc-N5i8ny/config.headers -I/p/work2/tmondrag/petsc-N5i8ny/config.types /p/work2/tmondrag/petsc-N5i8ny/config.types/conftest.c
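All of the "Checking for type: T" tests above share one shape: declare a variable of the probed type and see whether the compile succeeds (the unused-variable warnings are expected noise, not failures). A sketch for one of them (the file name type_probe.c is hypothetical):

    /* type_probe.c -- compiles if and only if the probed type exists */
    #include <sys/types.h>
    int main(void) { mode_t a; (void)a; return 0; }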
================================================================================
TEST checkC99Complex from config.types(/p/work2/tmondrag/moose/petsc/config/BuildSystem/config/types.py:82)
TESTING: checkC99Complex from config.types(/p/work2/tmondrag/moose/petsc/config/BuildSystem/config/types.py:82)
  Check for complex numbers in <complex.h> in C99 std
  Note that since PETSc source code uses _Complex we test specifically for that, not complex
Running Executable WITHOUT threads to time it out
Executing: mpicc -c -o /p/work2/tmondrag/petsc-N5i8ny/config.types/conftest.o -I/p/work2/tmondrag/petsc-N5i8ny/config.libraries -I/p/work2/tmondrag/petsc-N5i8ny/config.setCompilers -I/p/work2/tmondrag/petsc-N5i8ny/config.utilities.closure -I/p/work2/tmondrag/petsc-N5i8ny/config.compilers -I/p/work2/tmondrag/petsc-N5i8ny/config.utilities.cacheDetails -I/p/work2/tmondrag/petsc-N5i8ny/config.headers -I/p/work2/tmondrag/petsc-N5i8ny/config.types -fPIC -Wall -Wwrite-strings -Wno-strict-aliasing -Wno-unknown-pragmas -fstack-protector -fvisibility=hidden -g -O /p/work2/tmondrag/petsc-N5i8ny/config.types/conftest.c
Possible ERROR while running compiler: stderr:
/p/work2/tmondrag/petsc-N5i8ny/config.types/conftest.c: In function 'main':
/p/work2/tmondrag/petsc-N5i8ny/config.types/conftest.c:6:17: warning: variable 'x' set but not used [-Wunused-but-set-variable]
 double _Complex x;
                 ^
Source:
#include "confdefs.h"
#include "conffix.h"
#include <complex.h>
int main() {
double _Complex x;
 x = I;
;
  return 0;
}
Running Executable WITHOUT threads to time it out
Executing: mpicc -c -o /p/work2/tmondrag/petsc-N5i8ny/config.types/conftest.o -I/p/work2/tmondrag/petsc-N5i8ny/config.libraries -I/p/work2/tmondrag/petsc-N5i8ny/config.setCompilers -I/p/work2/tmondrag/petsc-N5i8ny/config.utilities.closure -I/p/work2/tmondrag/petsc-N5i8ny/config.compilers -I/p/work2/tmondrag/petsc-N5i8ny/config.utilities.cacheDetails -I/p/work2/tmondrag/petsc-N5i8ny/config.headers -I/p/work2/tmondrag/petsc-N5i8ny/config.types -fPIC -Wall -Wwrite-strings -Wno-strict-aliasing -Wno-unknown-pragmas -fstack-protector -fvisibility=hidden -g -O /p/work2/tmondrag/petsc-N5i8ny/config.types/conftest.c
Possible ERROR while running compiler: stderr:
/p/work2/tmondrag/petsc-N5i8ny/config.types/conftest.c: In function 'main':
/p/work2/tmondrag/petsc-N5i8ny/config.types/conftest.c:6:17: warning: variable 'x' set but not used [-Wunused-but-set-variable]
 double _Complex x;
                 ^
Source:
#include "confdefs.h"
#include "conffix.h"
#include <complex.h>
int main() {
double _Complex x;
 x = I;
;
  return 0;
}
Running Executable WITHOUT threads to time it out
Executing: mpicc -o /p/work2/tmondrag/petsc-N5i8ny/config.types/conftest -fPIC -Wall -Wwrite-strings -Wno-strict-aliasing -Wno-unknown-pragmas -fstack-protector -fvisibility=hidden -g -O /p/work2/tmondrag/petsc-N5i8ny/config.types/conftest.o -lstdc++ -ldl
Defined "HAVE_C99_COMPLEX" to "1"
================================================================================
TEST checkCxxComplex from config.types(/p/work2/tmondrag/moose/petsc/config/BuildSystem/config/types.py:93)
TESTING: checkCxxComplex from config.types(/p/work2/tmondrag/moose/petsc/config/BuildSystem/config/types.py:93)
  Check for complex numbers in namespace std
Running Executable WITHOUT threads to time it out
Executing: mpicxx -c -o /p/work2/tmondrag/petsc-N5i8ny/config.types/conftest.o -I/p/work2/tmondrag/petsc-N5i8ny/config.setCompilers -I/p/work2/tmondrag/petsc-N5i8ny/config.compilers -I/p/work2/tmondrag/petsc-N5i8ny/config.types -Wall -Wwrite-strings -Wno-strict-aliasing -Wno-unknown-pragmas -fstack-protector -fvisibility=hidden -g -O -fPIC /p/work2/tmondrag/petsc-N5i8ny/config.types/conftest.cc
Successful compile:
Source:
#include "confdefs.h"
#include "conffix.h"
#include <complex>
int main() {
std::complex<double> x;
;
  return 0;
}
Running Executable WITHOUT threads to time it out
Executing: mpicxx -o /p/work2/tmondrag/petsc-N5i8ny/config.types/conftest -Wall -Wwrite-strings -Wno-strict-aliasing -Wno-unknown-pragmas -fstack-protector -fvisibility=hidden -g -O /p/work2/tmondrag/petsc-N5i8ny/config.types/conftest.o -lstdc++ -ldl
Defined "HAVE_CXX_COMPLEX" to "1"
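The two complex-number probes are deliberately separate: PETSc's C sources use the C99 _Complex keyword while its C++ sources use std::complex, so configure must confirm both. A sketch of the C99 side (file name c99_complex.c is hypothetical):

    /* c99_complex.c -- the HAVE_C99_COMPLEX probe in miniature */
    #include <complex.h>
    int main(void) { double _Complex x = I; (void)x; return 0; }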
================================================================================
TEST checkFortranKind from config.types(/p/work2/tmondrag/moose/petsc/config/BuildSystem/config/types.py:114)
TESTING: checkFortranKind from config.types(/p/work2/tmondrag/moose/petsc/config/BuildSystem/config/types.py:114)
  Checks whether selected_int_kind etc work USE_FORTRANKIND
Running Executable WITHOUT threads to time it out
Executing: mpif90 -c -o /p/work2/tmondrag/petsc-N5i8ny/config.types/conftest.o -I/p/work2/tmondrag/petsc-N5i8ny/config.compilers -I/p/work2/tmondrag/petsc-N5i8ny/config.types -I/p/work2/tmondrag/petsc-N5i8ny/config.setCompilers -I/p/work2/tmondrag/petsc-N5i8ny/config.compilersFortran -fPIC -Wall -ffree-line-length-0 -Wno-unused-dummy-argument -g -O /p/work2/tmondrag/petsc-N5i8ny/config.types/conftest.F90
Possible ERROR while running compiler: stderr:
/p/work2/tmondrag/petsc-N5i8ny/config.types/conftest.F90:4:43:
       real(kind=selected_real_kind(10)) d
                                           1
Warning: Unused variable 'd' declared at (1) [-Wunused-variable]
/p/work2/tmondrag/petsc-N5i8ny/config.types/conftest.F90:3:45:
       integer(kind=selected_int_kind(10)) i
                                             1
Warning: Unused variable 'i' declared at (1) [-Wunused-variable]
Source:
      program main
      integer(kind=selected_int_kind(10)) i
      real(kind=selected_real_kind(10)) d
      end
Defined "USE_FORTRANKIND" to "1"
================================================================================
TEST checkConst from config.types(/p/work2/tmondrag/moose/petsc/config/BuildSystem/config/types.py:126)
TESTING: checkConst from config.types(/p/work2/tmondrag/moose/petsc/config/BuildSystem/config/types.py:126)
  Checks for working const, and if not found defines it to empty string
Running Executable WITHOUT threads to time it out
Executing: mpicc -c -o /p/work2/tmondrag/petsc-N5i8ny/config.types/conftest.o -I/p/work2/tmondrag/petsc-N5i8ny/config.libraries -I/p/work2/tmondrag/petsc-N5i8ny/config.setCompilers -I/p/work2/tmondrag/petsc-N5i8ny/config.utilities.closure -I/p/work2/tmondrag/petsc-N5i8ny/config.compilers -I/p/work2/tmondrag/petsc-N5i8ny/config.utilities.cacheDetails -I/p/work2/tmondrag/petsc-N5i8ny/config.headers -I/p/work2/tmondrag/petsc-N5i8ny/config.types -fPIC -Wall -Wwrite-strings -Wno-strict-aliasing -Wno-unknown-pragmas -fstack-protector -fvisibility=hidden -g -O /p/work2/tmondrag/petsc-N5i8ny/config.types/conftest.c
Possible ERROR while running compiler: stderr:
/p/work2/tmondrag/petsc-N5i8ny/config.types/conftest.c: In function 'main':
/p/work2/tmondrag/petsc-N5i8ny/config.types/conftest.c:25:5: warning: this 'if' clause does not guard... [-Wmisleading-indentation]
     if (x[0]);
     ^~
/p/work2/tmondrag/petsc-N5i8ny/config.types/conftest.c:26:5: note: ...this statement, but the latter is misleadingly indented as if it were guarded by the 'if'
     { /* SCO 3.2v4 cc rejects this. */
     ^
/p/work2/tmondrag/petsc-N5i8ny/config.types/conftest.c:25:10: warning: 'x[0]' is used uninitialized in this function [-Wuninitialized]
     if (x[0]);
        ~^~~
/p/work2/tmondrag/petsc-N5i8ny/config.types/conftest.c:30:9: warning: 't' is used uninitialized in this function [-Wuninitialized]
     *t++ = 0;
     ~^~
/p/work2/tmondrag/petsc-N5i8ny/config.types/conftest.c:46:25: warning: 'b' is used uninitialized in this function [-Wuninitialized]
     struct s *b; b->j = 5;
                  ~~~~~^~~
Source:
#include "confdefs.h"
#include "conffix.h"
int main() {
/* Ultrix mips cc rejects this. */
typedef int charset[2];
const charset x;
/* SunOS 4.1.1 cc rejects this. */
char const *const *ccp;
char **p;
/* NEC SVR4.0.2 mips cc rejects this. */
struct point {int x, y;};
static struct point const zero = {0,0};
/* AIX XL C 1.02.0.0 rejects this. It does not let you subtract one const X* pointer from another in an arm of an if-expression whose if-part is not a constant expression */
const char *g = "string";
ccp = &g + (g ? g-g : 0);
/* HPUX 7.0 cc rejects these. */
++ccp;
p = (char**) ccp;
ccp = (char const *const *) p;
/* This section avoids unused variable warnings */
if (zero.x);
if (x[0]);
{ /* SCO 3.2v4 cc rejects this. */
  char *t;
  char const *s = 0 ? (char *) 0 : (char const *) 0;
  *t++ = 0;
  if (*s);
}
{ /* Someone thinks the Sun supposedly-ANSI compiler will reject this. */
  int x[] = {25, 17};
  const int *foo = &x[0];
  ++foo;
}
{ /* Sun SC1.0 ANSI compiler rejects this -- but not the above. */
  typedef const int *iptr;
  iptr p = 0;
  ++p;
}
{ /* AIX XL C 1.02.0.0 rejects this saying "k.c", line 2.27: 1506-025 (S) Operand must be a modifiable lvalue. */
  struct s { int j; const int *ap[3]; };
  struct s *b; b->j = 5;
}
{ /* ULTRIX-32 V3.1 (Rev 9) vcc rejects this */
  const int foo = 10;
  /* Get rid of unused variable warning */
  if (foo);
}
;
  return 0;
}
================================================================================
TEST checkSizeof from config.types(/p/work2/tmondrag/moose/petsc/config/BuildSystem/config/types.py:182)
TESTING: checkSizeof from config.types(/p/work2/tmondrag/moose/petsc/config/BuildSystem/config/types.py:182)
  Determines the size of type "typeName", and defines SIZEOF_"typeName" to be the size
Checking for size of type: short
Running Executable WITHOUT threads to time it out
Executing: mpicc -c -o /p/work2/tmondrag/petsc-N5i8ny/config.types/conftest.o -I/p/work2/tmondrag/petsc-N5i8ny/config.libraries -I/p/work2/tmondrag/petsc-N5i8ny/config.setCompilers -I/p/work2/tmondrag/petsc-N5i8ny/config.utilities.closure -I/p/work2/tmondrag/petsc-N5i8ny/config.compilers -I/p/work2/tmondrag/petsc-N5i8ny/config.utilities.cacheDetails -I/p/work2/tmondrag/petsc-N5i8ny/config.headers -I/p/work2/tmondrag/petsc-N5i8ny/config.types -fPIC -Wall -Wwrite-strings -Wno-strict-aliasing -Wno-unknown-pragmas -fstack-protector -fvisibility=hidden -g -O /p/work2/tmondrag/petsc-N5i8ny/config.types/conftest.c
Successful compile:
Source:
#include "confdefs.h"
#include "conffix.h"
#include
#include
#include
#include
char assert_sizeof[(sizeof(short)==2)*2-1];
Defined "SIZEOF_SHORT" to "2"
================================================================================
TEST checkSizeof from config.types(/p/work2/tmondrag/moose/petsc/config/BuildSystem/config/types.py:182)
TESTING: checkSizeof from config.types(/p/work2/tmondrag/moose/petsc/config/BuildSystem/config/types.py:182)
  Determines the size of type "typeName", and defines SIZEOF_"typeName" to be the size
Checking for size of type: int
Running Executable WITHOUT threads to time it out
Executing: mpicc -c -o /p/work2/tmondrag/petsc-N5i8ny/config.types/conftest.o -I/p/work2/tmondrag/petsc-N5i8ny/config.libraries -I/p/work2/tmondrag/petsc-N5i8ny/config.setCompilers -I/p/work2/tmondrag/petsc-N5i8ny/config.utilities.closure -I/p/work2/tmondrag/petsc-N5i8ny/config.compilers -I/p/work2/tmondrag/petsc-N5i8ny/config.utilities.cacheDetails -I/p/work2/tmondrag/petsc-N5i8ny/config.headers -I/p/work2/tmondrag/petsc-N5i8ny/config.types -fPIC -Wall -Wwrite-strings -Wno-strict-aliasing -Wno-unknown-pragmas -fstack-protector -fvisibility=hidden -g -O /p/work2/tmondrag/petsc-N5i8ny/config.types/conftest.c
Successful compile:
Source:
#include "confdefs.h"
#include "conffix.h"
#include
#include
#include
#include
char assert_sizeof[(sizeof(int)==4)*2-1];
Defined "SIZEOF_INT" to "4"
================================================================================
TEST checkSizeof from config.types(/p/work2/tmondrag/moose/petsc/config/BuildSystem/config/types.py:182)
TESTING: checkSizeof from config.types(/p/work2/tmondrag/moose/petsc/config/BuildSystem/config/types.py:182)
  Determines the size of type "typeName", and defines SIZEOF_"typeName" to be the size
Checking for size of type: enum
Running Executable WITHOUT threads to time it out
Executing: mpicc -c -o /p/work2/tmondrag/petsc-N5i8ny/config.types/conftest.o -I/p/work2/tmondrag/petsc-N5i8ny/config.libraries -I/p/work2/tmondrag/petsc-N5i8ny/config.setCompilers -I/p/work2/tmondrag/petsc-N5i8ny/config.utilities.closure -I/p/work2/tmondrag/petsc-N5i8ny/config.compilers -I/p/work2/tmondrag/petsc-N5i8ny/config.utilities.cacheDetails -I/p/work2/tmondrag/petsc-N5i8ny/config.headers -I/p/work2/tmondrag/petsc-N5i8ny/config.types -fPIC -Wall -Wwrite-strings -Wno-strict-aliasing -Wno-unknown-pragmas -fstack-protector -fvisibility=hidden -g -O /p/work2/tmondrag/petsc-N5i8ny/config.types/conftest.c
Successful compile:
Source:
#include "confdefs.h"
#include "conffix.h"
#include
#include
#include
#include
char assert_sizeof[(sizeof(enum{ENUM_DUMMY})==4)*2-1];
Defined "SIZEOF_ENUM" to "4"
================================================================================
TEST checkSizeof from config.types(/p/work2/tmondrag/moose/petsc/config/BuildSystem/config/types.py:182)
TESTING: checkSizeof from config.types(/p/work2/tmondrag/moose/petsc/config/BuildSystem/config/types.py:182)
  Determines the size of type "typeName", and defines SIZEOF_"typeName" to be the size
Checking for size of type: long
Running Executable WITHOUT threads to time it out
Executing: mpicc -c -o /p/work2/tmondrag/petsc-N5i8ny/config.types/conftest.o -I/p/work2/tmondrag/petsc-N5i8ny/config.libraries -I/p/work2/tmondrag/petsc-N5i8ny/config.setCompilers -I/p/work2/tmondrag/petsc-N5i8ny/config.utilities.closure -I/p/work2/tmondrag/petsc-N5i8ny/config.compilers -I/p/work2/tmondrag/petsc-N5i8ny/config.utilities.cacheDetails -I/p/work2/tmondrag/petsc-N5i8ny/config.headers -I/p/work2/tmondrag/petsc-N5i8ny/config.types -fPIC -Wall -Wwrite-strings -Wno-strict-aliasing -Wno-unknown-pragmas -fstack-protector -fvisibility=hidden -g -O /p/work2/tmondrag/petsc-N5i8ny/config.types/conftest.c
Successful compile:
Source:
#include "confdefs.h"
#include "conffix.h"
#include
#include
#include
#include
char assert_sizeof[(sizeof(long)==8)*2-1];
Defined "SIZEOF_LONG" to "8"
================================================================================
TEST checkSizeof from config.types(/p/work2/tmondrag/moose/petsc/config/BuildSystem/config/types.py:182)
TESTING: checkSizeof from config.types(/p/work2/tmondrag/moose/petsc/config/BuildSystem/config/types.py:182)
  Determines the size of type "typeName", and defines SIZEOF_"typeName" to be the size
Checking for size of type: void *
Running Executable WITHOUT threads to time it out
Executing: mpicc -c -o /p/work2/tmondrag/petsc-N5i8ny/config.types/conftest.o -I/p/work2/tmondrag/petsc-N5i8ny/config.libraries -I/p/work2/tmondrag/petsc-N5i8ny/config.setCompilers -I/p/work2/tmondrag/petsc-N5i8ny/config.utilities.closure -I/p/work2/tmondrag/petsc-N5i8ny/config.compilers -I/p/work2/tmondrag/petsc-N5i8ny/config.utilities.cacheDetails -I/p/work2/tmondrag/petsc-N5i8ny/config.headers -I/p/work2/tmondrag/petsc-N5i8ny/config.types -fPIC -Wall -Wwrite-strings -Wno-strict-aliasing -Wno-unknown-pragmas -fstack-protector -fvisibility=hidden -g -O /p/work2/tmondrag/petsc-N5i8ny/config.types/conftest.c
Successful compile:
Source:
#include "confdefs.h"
#include "conffix.h"
#include
#include
#include
#include
char assert_sizeof[(sizeof(void *)==8)*2-1];
Defined "SIZEOF_VOID_P" to "8"
================================================================================
TEST checkSizeof from config.types(/p/work2/tmondrag/moose/petsc/config/BuildSystem/config/types.py:182)
TESTING: checkSizeof from config.types(/p/work2/tmondrag/moose/petsc/config/BuildSystem/config/types.py:182)
  Determines the size of type "typeName", and defines SIZEOF_"typeName" to be the size
Checking for size of type: long long
Running Executable WITHOUT threads to time it out
Executing: mpicc -c -o /p/work2/tmondrag/petsc-N5i8ny/config.types/conftest.o -I/p/work2/tmondrag/petsc-N5i8ny/config.libraries -I/p/work2/tmondrag/petsc-N5i8ny/config.setCompilers -I/p/work2/tmondrag/petsc-N5i8ny/config.utilities.closure -I/p/work2/tmondrag/petsc-N5i8ny/config.compilers -I/p/work2/tmondrag/petsc-N5i8ny/config.utilities.cacheDetails -I/p/work2/tmondrag/petsc-N5i8ny/config.headers -I/p/work2/tmondrag/petsc-N5i8ny/config.types -fPIC -Wall -Wwrite-strings -Wno-strict-aliasing -Wno-unknown-pragmas -fstack-protector -fvisibility=hidden -g -O /p/work2/tmondrag/petsc-N5i8ny/config.types/conftest.c
Successful compile:
Source:
#include "confdefs.h"
#include "conffix.h"
#include
#include
#include
#include
char assert_sizeof[(sizeof(long long)==8)*2-1];
Defined "SIZEOF_LONG_LONG" to "8"
================================================================================
TEST checkSizeof from config.types(/p/work2/tmondrag/moose/petsc/config/BuildSystem/config/types.py:182)
TESTING: checkSizeof from config.types(/p/work2/tmondrag/moose/petsc/config/BuildSystem/config/types.py:182)
  Determines the size of type "typeName", and defines SIZEOF_"typeName" to be the size
Checking for size of type: size_t
Running Executable WITHOUT threads to time it out
Executing: mpicc -c -o /p/work2/tmondrag/petsc-N5i8ny/config.types/conftest.o -I/p/work2/tmondrag/petsc-N5i8ny/config.libraries -I/p/work2/tmondrag/petsc-N5i8ny/config.setCompilers -I/p/work2/tmondrag/petsc-N5i8ny/config.utilities.closure -I/p/work2/tmondrag/petsc-N5i8ny/config.compilers -I/p/work2/tmondrag/petsc-N5i8ny/config.utilities.cacheDetails -I/p/work2/tmondrag/petsc-N5i8ny/config.headers -I/p/work2/tmondrag/petsc-N5i8ny/config.types -fPIC -Wall -Wwrite-strings -Wno-strict-aliasing -Wno-unknown-pragmas -fstack-protector -fvisibility=hidden -g -O /p/work2/tmondrag/petsc-N5i8ny/config.types/conftest.c
Successful compile:
Source:
#include "confdefs.h"
#include "conffix.h"
#include
#include
#include
#include
char assert_sizeof[(sizeof(size_t)==8)*2-1];
Defined "SIZEOF_SIZE_T" to "8"
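The assert_sizeof arrays above are a compile-time assertion: (sizeof(T)==N)*2-1 evaluates to 1 when the guessed size is right and to -1, an illegal array length, when it is wrong, so a successful compile confirms the size without running anything (important here, since this configure run is in batch mode). For example:

    /* sizeof_probe.c -- compiles only if sizeof(long) == 8 on this system */
    char assert_sizeof[(sizeof(long) == 8) * 2 - 1];
    int main(void) { return 0; }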
================================================================================
TEST checkVisibility from config.types(/p/work2/tmondrag/moose/petsc/config/BuildSystem/config/types.py:216)
TESTING: checkVisibility from config.types(/p/work2/tmondrag/moose/petsc/config/BuildSystem/config/types.py:216)
Running Executable WITHOUT threads to time it out
Executing: mpicc -c -o /p/work2/tmondrag/petsc-N5i8ny/config.types/conftest.o -I/p/work2/tmondrag/petsc-N5i8ny/config.libraries -I/p/work2/tmondrag/petsc-N5i8ny/config.setCompilers -I/p/work2/tmondrag/petsc-N5i8ny/config.utilities.closure -I/p/work2/tmondrag/petsc-N5i8ny/config.compilers -I/p/work2/tmondrag/petsc-N5i8ny/config.utilities.cacheDetails -I/p/work2/tmondrag/petsc-N5i8ny/config.headers -I/p/work2/tmondrag/petsc-N5i8ny/config.types -fPIC -Wall -Wwrite-strings -Wno-strict-aliasing -Wno-unknown-pragmas -fstack-protector -fvisibility=hidden -g -O /p/work2/tmondrag/petsc-N5i8ny/config.types/conftest.c
Successful compile:
Source:
#include "confdefs.h"
#include "conffix.h"
int main() {
__attribute__((visibility ("default"))) int foo(void);;
  return 0;
}
Defined "USE_VISIBILITY_C" to "1"
Running Executable WITHOUT threads to time it out
Executing: mpicxx -c -o /p/work2/tmondrag/petsc-N5i8ny/config.types/conftest.o -I/p/work2/tmondrag/petsc-N5i8ny/config.setCompilers -I/p/work2/tmondrag/petsc-N5i8ny/config.compilers -I/p/work2/tmondrag/petsc-N5i8ny/config.types -Wall -Wwrite-strings -Wno-strict-aliasing -Wno-unknown-pragmas -fstack-protector -fvisibility=hidden -g -O -fPIC /p/work2/tmondrag/petsc-N5i8ny/config.types/conftest.cc
Successful compile:
Source:
#include "confdefs.h"
#include "conffix.h"
int main() {
__attribute__((visibility ("default"))) int foo(void);;
  return 0;
}
Defined "USE_VISIBILITY_CXX" to "1"
child config.types 1.678735
================================================================================
TEST configureMemAlign from PETSc.options.memAlign(/p/work2/tmondrag/moose/petsc/config/PETSc/options/memAlign.py:29)
TESTING: configureMemAlign from PETSc.options.memAlign(/p/work2/tmondrag/moose/petsc/config/PETSc/options/memAlign.py:29)
  Choose alignment
Defined "MEMALIGN" to "16"
Memory alignment is 16
child PETSc.options.memAlign 0.001174
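The visibility test just above matters because the whole build is compiled with -fvisibility=hidden: if the attribute compiles, PETSc can hide every symbol by default and re-export only its public API. A minimal sketch of the same probe (file name hypothetical):

    /* visibility_probe.c -- USE_VISIBILITY_C in miniature */
    __attribute__((visibility("default"))) int foo(void);
    int foo(void) { return 0; }
    int main(void) { return foo(); }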
================================================================================
TEST check from config.libraries(/p/work2/tmondrag/moose/petsc/config/BuildSystem/config/libraries.py:157)
TESTING: check from config.libraries(/p/work2/tmondrag/moose/petsc/config/BuildSystem/config/libraries.py:157)
  Checks that the library "libName" contains "funcs", and if it does defines HAVE_LIB"libName"
  - libDir may be a list of directories
  - libName may be a list of library names
Checking for functions [handle_sigfpes] in library ['fpe'] []
Running Executable WITHOUT threads to time it out
Executing: mpicc -c -o /p/work2/tmondrag/petsc-N5i8ny/config.libraries/conftest.o -I/p/work2/tmondrag/petsc-N5i8ny/config.setCompilers -I/p/work2/tmondrag/petsc-N5i8ny/config.utilities.closure -I/p/work2/tmondrag/petsc-N5i8ny/config.compilers -I/p/work2/tmondrag/petsc-N5i8ny/config.utilities.cacheDetails -I/p/work2/tmondrag/petsc-N5i8ny/config.headers -I/p/work2/tmondrag/petsc-N5i8ny/config.libraries -I/p/work2/tmondrag/petsc-N5i8ny/config.types -fPIC -Wall -Wwrite-strings -Wno-strict-aliasing -Wno-unknown-pragmas -fstack-protector -fvisibility=hidden -g -O /p/work2/tmondrag/petsc-N5i8ny/config.libraries/conftest.c
Successful compile:
Source:
#include "confdefs.h"
#include "conffix.h"
/* Override any gcc2 internal prototype to avoid an error. */
char handle_sigfpes();
static void _check_handle_sigfpes() { handle_sigfpes(); }
int main() {
_check_handle_sigfpes();;
  return 0;
}
Running Executable WITHOUT threads to time it out
Executing: mpicc -o /p/work2/tmondrag/petsc-N5i8ny/config.libraries/conftest -fPIC -Wall -Wwrite-strings -Wno-strict-aliasing -Wno-unknown-pragmas -fstack-protector -fvisibility=hidden -g -O /p/work2/tmondrag/petsc-N5i8ny/config.libraries/conftest.o -lfpe -lstdc++ -ldl
Possible ERROR while running linker: exit code 1
stderr:
/usr/bin/ld: cannot find -lfpe
collect2: error: ld returned 1 exit status
================================================================================
TEST check from config.libraries(/p/work2/tmondrag/moose/petsc/config/BuildSystem/config/libraries.py:157)
TESTING: check from config.libraries(/p/work2/tmondrag/moose/petsc/config/BuildSystem/config/libraries.py:157)
  Checks that the library "libName" contains "funcs", and if it does defines HAVE_LIB"libName"
  - libDir may be a list of directories
  - libName may be a list of library names
Checking for functions [socket] in library ['socket', 'nsl'] []
Running Executable WITHOUT threads to time it out
Executing: mpicc -c -o /p/work2/tmondrag/petsc-N5i8ny/config.libraries/conftest.o -I/p/work2/tmondrag/petsc-N5i8ny/config.setCompilers -I/p/work2/tmondrag/petsc-N5i8ny/config.utilities.closure -I/p/work2/tmondrag/petsc-N5i8ny/config.compilers -I/p/work2/tmondrag/petsc-N5i8ny/config.utilities.cacheDetails -I/p/work2/tmondrag/petsc-N5i8ny/config.headers -I/p/work2/tmondrag/petsc-N5i8ny/config.libraries -I/p/work2/tmondrag/petsc-N5i8ny/config.types -fPIC -Wall -Wwrite-strings -Wno-strict-aliasing -Wno-unknown-pragmas -fstack-protector -fvisibility=hidden -g -O /p/work2/tmondrag/petsc-N5i8ny/config.libraries/conftest.c
Successful compile:
Source:
#include "confdefs.h"
#include "conffix.h"
/* Override any gcc2 internal prototype to avoid an error. */
char socket();
static void _check_socket() { socket(); }
int main() {
_check_socket();;
  return 0;
}
Running Executable WITHOUT threads to time it out
Executing: mpicc -o /p/work2/tmondrag/petsc-N5i8ny/config.libraries/conftest -fPIC -Wall -Wwrite-strings -Wno-strict-aliasing -Wno-unknown-pragmas -fstack-protector -fvisibility=hidden -g -O /p/work2/tmondrag/petsc-N5i8ny/config.libraries/conftest.o -lsocket -lnsl -lstdc++ -ldl
Possible ERROR while running linker: exit code 1
stderr:
/usr/bin/ld: cannot find -lsocket
collect2: error: ld returned 1 exit status
================================================================================
TEST check from config.libraries(/p/work2/tmondrag/moose/petsc/config/BuildSystem/config/libraries.py:157)
TESTING: check from config.libraries(/p/work2/tmondrag/moose/petsc/config/BuildSystem/config/libraries.py:157)
  Checks that the library "libName" contains "funcs", and if it does defines HAVE_LIB"libName"
  - libDir may be a list of directories
  - libName may be a list of library names
Checking for functions [handle_sigfpes] in library ['fpe'] []
Running Executable WITHOUT threads to time it out
Executing: mpicc -c -o /p/work2/tmondrag/petsc-N5i8ny/config.libraries/conftest.o -I/p/work2/tmondrag/petsc-N5i8ny/config.setCompilers -I/p/work2/tmondrag/petsc-N5i8ny/config.utilities.closure -I/p/work2/tmondrag/petsc-N5i8ny/config.compilers -I/p/work2/tmondrag/petsc-N5i8ny/config.utilities.cacheDetails -I/p/work2/tmondrag/petsc-N5i8ny/config.headers -I/p/work2/tmondrag/petsc-N5i8ny/config.libraries -I/p/work2/tmondrag/petsc-N5i8ny/config.types -fPIC -Wall -Wwrite-strings -Wno-strict-aliasing -Wno-unknown-pragmas -fstack-protector -fvisibility=hidden -g -O /p/work2/tmondrag/petsc-N5i8ny/config.libraries/conftest.c
Successful compile:
Source:
#include "confdefs.h"
#include "conffix.h"
/* Override any gcc2 internal prototype to avoid an error. */
char handle_sigfpes();
static void _check_handle_sigfpes() { handle_sigfpes(); }
int main() {
_check_handle_sigfpes();;
  return 0;
}
Running Executable WITHOUT threads to time it out
Executing: mpicc -o /p/work2/tmondrag/petsc-N5i8ny/config.libraries/conftest -fPIC -Wall -Wwrite-strings -Wno-strict-aliasing -Wno-unknown-pragmas -fstack-protector -fvisibility=hidden -g -O /p/work2/tmondrag/petsc-N5i8ny/config.libraries/conftest.o -lfpe -lstdc++ -ldl
Possible ERROR while running linker: exit code 1
stderr:
/usr/bin/ld: cannot find -lfpe
collect2: error: ld returned 1 exit status
================================================================================
TEST check from config.libraries(/p/work2/tmondrag/moose/petsc/config/BuildSystem/config/libraries.py:157)
TESTING: check from config.libraries(/p/work2/tmondrag/moose/petsc/config/BuildSystem/config/libraries.py:157)
  Checks that the library "libName" contains "funcs", and if it does defines HAVE_LIB"libName"
  - libDir may be a list of directories
  - libName may be a list of library names
Checking for functions [socket] in library ['socket', 'nsl'] []
Running Executable WITHOUT threads to time it out
Executing: mpicc -c -o /p/work2/tmondrag/petsc-N5i8ny/config.libraries/conftest.o -I/p/work2/tmondrag/petsc-N5i8ny/config.setCompilers -I/p/work2/tmondrag/petsc-N5i8ny/config.utilities.closure -I/p/work2/tmondrag/petsc-N5i8ny/config.compilers -I/p/work2/tmondrag/petsc-N5i8ny/config.utilities.cacheDetails -I/p/work2/tmondrag/petsc-N5i8ny/config.headers -I/p/work2/tmondrag/petsc-N5i8ny/config.libraries -I/p/work2/tmondrag/petsc-N5i8ny/config.types -fPIC -Wall -Wwrite-strings -Wno-strict-aliasing -Wno-unknown-pragmas -fstack-protector -fvisibility=hidden -g -O /p/work2/tmondrag/petsc-N5i8ny/config.libraries/conftest.c
Successful compile:
Source:
#include "confdefs.h"
#include "conffix.h"
/* Override any gcc2 internal prototype to avoid an error. */
char socket();
static void _check_socket() { socket(); }
int main() {
_check_socket();;
  return 0;
}
Running Executable WITHOUT threads to time it out
Executing: mpicc -o /p/work2/tmondrag/petsc-N5i8ny/config.libraries/conftest -fPIC -Wall -Wwrite-strings -Wno-strict-aliasing -Wno-unknown-pragmas -fstack-protector -fvisibility=hidden -g -O /p/work2/tmondrag/petsc-N5i8ny/config.libraries/conftest.o -lsocket -lnsl -lstdc++ -ldl
Possible ERROR while running linker: exit code 1
stderr:
/usr/bin/ld: cannot find -lsocket
collect2: error: ld returned 1 exit status
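These four linker failures are expected, not a problem: configure probes for SGI's libfpe and Solaris's libsocket/libnsl by declaring the symbol with a dummy prototype and attempting a link, and "cannot find -lfpe" merely records that the library is absent on this Linux machine. The probe in miniature (file name hypothetical):

    /* lib_probe.c -- links only if something on the link line actually
       defines handle_sigfpes(); the dummy 'char' prototype avoids
       dragging in the library's real header */
    char handle_sigfpes(void);
    int main(void) { handle_sigfpes(); return 0; }

Linking with "mpicc lib_probe.c -lfpe" succeeds exactly when libfpe exists and provides the symbol.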
================================================================================
TEST checkMath from config.libraries(/p/work2/tmondrag/moose/petsc/config/BuildSystem/config/libraries.py:264)
TESTING: checkMath from config.libraries(/p/work2/tmondrag/moose/petsc/config/BuildSystem/config/libraries.py:264)
  Check for sin() in libm, the math library
Checking for functions [sin floor log10 pow] in library [''] []
Running Executable WITHOUT threads to time it out
Executing: mpicc -c -o /p/work2/tmondrag/petsc-N5i8ny/config.libraries/conftest.o -I/p/work2/tmondrag/petsc-N5i8ny/config.setCompilers -I/p/work2/tmondrag/petsc-N5i8ny/config.utilities.closure -I/p/work2/tmondrag/petsc-N5i8ny/config.compilers -I/p/work2/tmondrag/petsc-N5i8ny/config.utilities.cacheDetails -I/p/work2/tmondrag/petsc-N5i8ny/config.headers -I/p/work2/tmondrag/petsc-N5i8ny/config.libraries -I/p/work2/tmondrag/petsc-N5i8ny/config.types -fPIC -Wall -Wwrite-strings -Wno-strict-aliasing -Wno-unknown-pragmas -fstack-protector -fvisibility=hidden -g -O /p/work2/tmondrag/petsc-N5i8ny/config.libraries/conftest.c
Successful compile:
Source:
#include "confdefs.h"
#include "conffix.h"
/* Override any gcc2 internal prototype to avoid an error. */
#include <stdio.h>
double sin(double);
static void _check_sin() { double x,y; scanf("%lf",&x); y = sin(x); printf("%f",y); ; }
#include <stdio.h>
double floor(double);
static void _check_floor() { double x,y; scanf("%lf",&x); y = floor(x); printf("%f",y); ; }
#include <stdio.h>
double log10(double);
static void _check_log10() { double x,y; scanf("%lf",&x); y = log10(x); printf("%f",y); ; }
#include <stdio.h>
double pow(double, double);
static void _check_pow() { double x,y; scanf("%lf",&x); y = pow(x,x); printf("%f",y); ; }
int main() {
_check_sin();
_check_floor();
_check_log10();
_check_pow();;
  return 0;
}
Running Executable WITHOUT threads to time it out
Executing: mpicc -o /p/work2/tmondrag/petsc-N5i8ny/config.libraries/conftest -fPIC -Wall -Wwrite-strings -Wno-strict-aliasing -Wno-unknown-pragmas -fstack-protector -fvisibility=hidden -g -O /p/work2/tmondrag/petsc-N5i8ny/config.libraries/conftest.o -lstdc++ -ldl
Possible ERROR while running linker: exit code 1
stderr:
/usr/bin/ld: /p/work2/tmondrag/petsc-N5i8ny/config.libraries/conftest.o: undefined reference to symbol 'log10@@GLIBC_2.2.5'
/usr/bin/ld: /lib64/libm.so.6: error adding symbols: DSO missing from command line
collect2: error: ld returned 1 exit status
Checking for functions [sin floor log10 pow] in library ['m'] []
Running Executable WITHOUT threads to time it out
Executing: mpicc -c -o /p/work2/tmondrag/petsc-N5i8ny/config.libraries/conftest.o -I/p/work2/tmondrag/petsc-N5i8ny/config.setCompilers -I/p/work2/tmondrag/petsc-N5i8ny/config.utilities.closure -I/p/work2/tmondrag/petsc-N5i8ny/config.compilers -I/p/work2/tmondrag/petsc-N5i8ny/config.utilities.cacheDetails -I/p/work2/tmondrag/petsc-N5i8ny/config.headers -I/p/work2/tmondrag/petsc-N5i8ny/config.libraries -I/p/work2/tmondrag/petsc-N5i8ny/config.types -fPIC -Wall -Wwrite-strings -Wno-strict-aliasing -Wno-unknown-pragmas -fstack-protector -fvisibility=hidden -g -O /p/work2/tmondrag/petsc-N5i8ny/config.libraries/conftest.c
Successful compile:
Source:
#include "confdefs.h"
#include "conffix.h"
/* Override any gcc2 internal prototype to avoid an error. */
#include <stdio.h>
double sin(double);
static void _check_sin() { double x,y; scanf("%lf",&x); y = sin(x); printf("%f",y); ; }
#include <stdio.h>
double floor(double);
static void _check_floor() { double x,y; scanf("%lf",&x); y = floor(x); printf("%f",y); ; }
#include <stdio.h>
double log10(double);
static void _check_log10() { double x,y; scanf("%lf",&x); y = log10(x); printf("%f",y); ; }
#include <stdio.h>
double pow(double, double);
static void _check_pow() { double x,y; scanf("%lf",&x); y = pow(x,x); printf("%f",y); ; }
int main() {
_check_sin();
_check_floor();
_check_log10();
_check_pow();;
  return 0;
}
Running Executable WITHOUT threads to time it out
Executing: mpicc -o /p/work2/tmondrag/petsc-N5i8ny/config.libraries/conftest -fPIC -Wall -Wwrite-strings -Wno-strict-aliasing -Wno-unknown-pragmas -fstack-protector -fvisibility=hidden -g -O /p/work2/tmondrag/petsc-N5i8ny/config.libraries/conftest.o -lm -lstdc++ -ldl
Defined "HAVE_LIBM" to "1"
CheckMath: using math library ['libm.a']
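The first failure here is also by design: mpicc does not add -lm on its own, and this linker refuses to resolve log10 through an indirectly loaded DSO ("DSO missing from command line"), so configure retries with -lm and records HAVE_LIBM. A sketch that reproduces the behavior (x is derived from argc so the compiler cannot fold log10 away at compile time):

    /* libm_probe.c -- "mpicc libm_probe.c" fails to link on this system,
       "mpicc libm_probe.c -lm" succeeds */
    #include <math.h>
    #include <stdio.h>
    int main(int argc, char **argv) {
      (void)argv;
      double x = argc + 1.5;          /* runtime value: no constant folding */
      printf("%f\n", log10(x));
      return 0;
    }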
================================================================================
TEST checkMathErf from config.libraries(/p/work2/tmondrag/moose/petsc/config/BuildSystem/config/libraries.py:283)
TESTING: checkMathErf from config.libraries(/p/work2/tmondrag/moose/petsc/config/BuildSystem/config/libraries.py:283)
  Check for erf() in libm, the math library
Checking for functions [erf] in library ['libm.a'] []
Running Executable WITHOUT threads to time it out
Executing: mpicc -c -o /p/work2/tmondrag/petsc-N5i8ny/config.libraries/conftest.o -I/p/work2/tmondrag/petsc-N5i8ny/config.setCompilers -I/p/work2/tmondrag/petsc-N5i8ny/config.utilities.closure -I/p/work2/tmondrag/petsc-N5i8ny/config.compilers -I/p/work2/tmondrag/petsc-N5i8ny/config.utilities.cacheDetails -I/p/work2/tmondrag/petsc-N5i8ny/config.headers -I/p/work2/tmondrag/petsc-N5i8ny/config.libraries -I/p/work2/tmondrag/petsc-N5i8ny/config.types -fPIC -Wall -Wwrite-strings -Wno-strict-aliasing -Wno-unknown-pragmas -fstack-protector -fvisibility=hidden -g -O /p/work2/tmondrag/petsc-N5i8ny/config.libraries/conftest.c
Possible ERROR while running compiler: stderr:
/p/work2/tmondrag/petsc-N5i8ny/config.libraries/conftest.c: In function '_check_erf':
/p/work2/tmondrag/petsc-N5i8ny/config.libraries/conftest.c:5:74: warning: variable 'y' set but not used [-Wunused-but-set-variable]
 static void _check_erf() { double (*checkErf)(double) = erf;double x = 0,y; y = (*checkErf)(x); }
                                                                          ^
Source:
#include "confdefs.h"
#include "conffix.h"
/* Override any gcc2 internal prototype to avoid an error. */
#include <math.h>
static void _check_erf() { double (*checkErf)(double) = erf;double x = 0,y; y = (*checkErf)(x); }
int main() {
_check_erf();;
  return 0;
}
Running Executable WITHOUT threads to time it out
Executing: mpicc -o /p/work2/tmondrag/petsc-N5i8ny/config.libraries/conftest -fPIC -Wall -Wwrite-strings -Wno-strict-aliasing -Wno-unknown-pragmas -fstack-protector -fvisibility=hidden -g -O /p/work2/tmondrag/petsc-N5i8ny/config.libraries/conftest.o -lm -lstdc++ -ldl
Defined "HAVE_LIBM" to "1"
erf() found
Defined "HAVE_ERF" to "1"
================================================================================
TEST checkMathTgamma from config.libraries(/p/work2/tmondrag/moose/petsc/config/BuildSystem/config/libraries.py:292)
TESTING: checkMathTgamma from config.libraries(/p/work2/tmondrag/moose/petsc/config/BuildSystem/config/libraries.py:292)
  Check for tgamma() in libm, the math library
Checking for functions [tgamma] in library ['libm.a'] []
Running Executable WITHOUT threads to time it out
Executing: mpicc -c -o /p/work2/tmondrag/petsc-N5i8ny/config.libraries/conftest.o -I/p/work2/tmondrag/petsc-N5i8ny/config.setCompilers -I/p/work2/tmondrag/petsc-N5i8ny/config.utilities.closure -I/p/work2/tmondrag/petsc-N5i8ny/config.compilers -I/p/work2/tmondrag/petsc-N5i8ny/config.utilities.cacheDetails -I/p/work2/tmondrag/petsc-N5i8ny/config.headers -I/p/work2/tmondrag/petsc-N5i8ny/config.libraries -I/p/work2/tmondrag/petsc-N5i8ny/config.types -fPIC -Wall -Wwrite-strings -Wno-strict-aliasing -Wno-unknown-pragmas -fstack-protector -fvisibility=hidden -g -O /p/work2/tmondrag/petsc-N5i8ny/config.libraries/conftest.c
Possible ERROR while running compiler: stderr:
/p/work2/tmondrag/petsc-N5i8ny/config.libraries/conftest.c: In function '_check_tgamma':
/p/work2/tmondrag/petsc-N5i8ny/config.libraries/conftest.c:5:83: warning: variable 'y' set but not used [-Wunused-but-set-variable]
 static void _check_tgamma() { double (*checkTgamma)(double) = tgamma;double x = 0,y; y = (*checkTgamma)(x); }
                                                                                   ^
Source:
#include "confdefs.h"
#include "conffix.h"
/* Override any gcc2 internal prototype to avoid an error. */
#include <math.h>
static void _check_tgamma() { double (*checkTgamma)(double) = tgamma;double x = 0,y; y = (*checkTgamma)(x); }
int main() {
_check_tgamma();;
  return 0;
}
Running Executable WITHOUT threads to time it out
Executing: mpicc -o /p/work2/tmondrag/petsc-N5i8ny/config.libraries/conftest -fPIC -Wall -Wwrite-strings -Wno-strict-aliasing -Wno-unknown-pragmas -fstack-protector -fvisibility=hidden -g -O /p/work2/tmondrag/petsc-N5i8ny/config.libraries/conftest.o -lm -lstdc++ -ldl
Defined "HAVE_LIBM" to "1"
tgamma() found
Defined "HAVE_TGAMMA" to "1"
================================================================================
TEST checkMathFenv from config.libraries(/p/work2/tmondrag/moose/petsc/config/BuildSystem/config/libraries.py:301)
TESTING: checkMathFenv from config.libraries(/p/work2/tmondrag/moose/petsc/config/BuildSystem/config/libraries.py:301)
  Checks if <fenv.h> can be used with FE_DFL_ENV
Checking for functions [fesetenv] in library ['libm.a'] []
Running Executable WITHOUT threads to time it out
Executing: mpicc -c -o /p/work2/tmondrag/petsc-N5i8ny/config.libraries/conftest.o -I/p/work2/tmondrag/petsc-N5i8ny/config.setCompilers -I/p/work2/tmondrag/petsc-N5i8ny/config.utilities.closure -I/p/work2/tmondrag/petsc-N5i8ny/config.compilers -I/p/work2/tmondrag/petsc-N5i8ny/config.utilities.cacheDetails -I/p/work2/tmondrag/petsc-N5i8ny/config.headers -I/p/work2/tmondrag/petsc-N5i8ny/config.libraries -I/p/work2/tmondrag/petsc-N5i8ny/config.types -fPIC -Wall -Wwrite-strings -Wno-strict-aliasing -Wno-unknown-pragmas -fstack-protector -fvisibility=hidden -g -O /p/work2/tmondrag/petsc-N5i8ny/config.libraries/conftest.c
Successful compile:
Source:
#include "confdefs.h"
#include "conffix.h"
/* Override any gcc2 internal prototype to avoid an error. */
#include <fenv.h>
static void _check_fesetenv() { fesetenv(FE_DFL_ENV);; }
int main() {
_check_fesetenv();;
  return 0;
}
Running Executable WITHOUT threads to time it out
Executing: mpicc -o /p/work2/tmondrag/petsc-N5i8ny/config.libraries/conftest -fPIC -Wall -Wwrite-strings -Wno-strict-aliasing -Wno-unknown-pragmas -fstack-protector -fvisibility=hidden -g -O /p/work2/tmondrag/petsc-N5i8ny/config.libraries/conftest.o -lm -lstdc++ -ldl
Defined "HAVE_LIBM" to "1"
Defined "HAVE_FENV_H" to "1"
================================================================================
TEST checkMathLog2 from config.libraries(/p/work2/tmondrag/moose/petsc/config/BuildSystem/config/libraries.py:309)
TESTING: checkMathLog2 from config.libraries(/p/work2/tmondrag/moose/petsc/config/BuildSystem/config/libraries.py:309)
  Check for log2() in libm, the math library
Checking for functions [log2] in library ['libm.a'] []
Running Executable WITHOUT threads to time it out
Executing: mpicc -c -o /p/work2/tmondrag/petsc-N5i8ny/config.libraries/conftest.o -I/p/work2/tmondrag/petsc-N5i8ny/config.setCompilers -I/p/work2/tmondrag/petsc-N5i8ny/config.utilities.closure -I/p/work2/tmondrag/petsc-N5i8ny/config.compilers -I/p/work2/tmondrag/petsc-N5i8ny/config.utilities.cacheDetails -I/p/work2/tmondrag/petsc-N5i8ny/config.headers -I/p/work2/tmondrag/petsc-N5i8ny/config.libraries -I/p/work2/tmondrag/petsc-N5i8ny/config.types -fPIC -Wall -Wwrite-strings -Wno-strict-aliasing -Wno-unknown-pragmas -fstack-protector -fvisibility=hidden -g -O /p/work2/tmondrag/petsc-N5i8ny/config.libraries/conftest.c
Possible ERROR while running compiler: stderr:
/p/work2/tmondrag/petsc-N5i8ny/config.libraries/conftest.c: In function '_check_log2':
/p/work2/tmondrag/petsc-N5i8ny/config.libraries/conftest.c:5:81: warning: unused variable 'y' [-Wunused-variable]
 static void _check_log2() { double (*checkLog2)(double) = log2; double x = 2.5, y = (*checkLog2)(x); }
                                                                                 ^
Source:
#include "confdefs.h"
#include "conffix.h"
/* Override any gcc2 internal prototype to avoid an error. */
#include <math.h>
static void _check_log2() { double (*checkLog2)(double) = log2; double x = 2.5, y = (*checkLog2)(x); }
int main() {
_check_log2();;
  return 0;
}
Running Executable WITHOUT threads to time it out
Executing: mpicc -o /p/work2/tmondrag/petsc-N5i8ny/config.libraries/conftest -fPIC -Wall -Wwrite-strings -Wno-strict-aliasing -Wno-unknown-pragmas -fstack-protector -fvisibility=hidden -g -O /p/work2/tmondrag/petsc-N5i8ny/config.libraries/conftest.o -lm -lstdc++ -ldl
Defined "HAVE_LIBM" to "1"
log2() found
Defined "HAVE_LOG2" to "1"
================================================================================
TEST checkRealtime from config.libraries(/p/work2/tmondrag/moose/petsc/config/BuildSystem/config/libraries.py:318)
TESTING: checkRealtime from config.libraries(/p/work2/tmondrag/moose/petsc/config/BuildSystem/config/libraries.py:318)
  Check for presence of clock_gettime() in realtime library (POSIX Realtime extensions)
Checking for functions [clock_gettime] in library [''] []
Running Executable WITHOUT threads to time it out
Executing: mpicc -c -o /p/work2/tmondrag/petsc-N5i8ny/config.libraries/conftest.o -I/p/work2/tmondrag/petsc-N5i8ny/config.setCompilers -I/p/work2/tmondrag/petsc-N5i8ny/config.utilities.closure -I/p/work2/tmondrag/petsc-N5i8ny/config.compilers -I/p/work2/tmondrag/petsc-N5i8ny/config.utilities.cacheDetails -I/p/work2/tmondrag/petsc-N5i8ny/config.headers -I/p/work2/tmondrag/petsc-N5i8ny/config.libraries -I/p/work2/tmondrag/petsc-N5i8ny/config.types -fPIC -Wall -Wwrite-strings -Wno-strict-aliasing -Wno-unknown-pragmas -fstack-protector -fvisibility=hidden -g -O /p/work2/tmondrag/petsc-N5i8ny/config.libraries/conftest.c
Successful compile:
Source:
#include "confdefs.h"
#include "conffix.h"
/* Override any gcc2 internal prototype to avoid an error. */
#include <time.h>
static void _check_clock_gettime() { struct timespec tp; clock_gettime(CLOCK_REALTIME,&tp);; }
int main() {
_check_clock_gettime();;
  return 0;
}
Running Executable WITHOUT threads to time it out
Executing: mpicc -o /p/work2/tmondrag/petsc-N5i8ny/config.libraries/conftest -fPIC -Wall -Wwrite-strings -Wno-strict-aliasing -Wno-unknown-pragmas -fstack-protector -fvisibility=hidden -g -O /p/work2/tmondrag/petsc-N5i8ny/config.libraries/conftest.o -lstdc++ -ldl
realtime functions are linked in by default
================================================================================
TEST checkDynamic from config.libraries(/p/work2/tmondrag/moose/petsc/config/BuildSystem/config/libraries.py:334)
TESTING: checkDynamic from config.libraries(/p/work2/tmondrag/moose/petsc/config/BuildSystem/config/libraries.py:334)
  Check for the header and libraries necessary for dynamic library manipulation
Checking for functions [dlopen] in library ['dl'] []
Running Executable WITHOUT threads to time it out
Executing: mpicc -c -o /p/work2/tmondrag/petsc-N5i8ny/config.libraries/conftest.o -I/p/work2/tmondrag/petsc-N5i8ny/config.setCompilers -I/p/work2/tmondrag/petsc-N5i8ny/config.utilities.closure -I/p/work2/tmondrag/petsc-N5i8ny/config.compilers -I/p/work2/tmondrag/petsc-N5i8ny/config.utilities.cacheDetails -I/p/work2/tmondrag/petsc-N5i8ny/config.headers -I/p/work2/tmondrag/petsc-N5i8ny/config.libraries -I/p/work2/tmondrag/petsc-N5i8ny/config.types -fPIC -Wall -Wwrite-strings -Wno-strict-aliasing -Wno-unknown-pragmas -fstack-protector -fvisibility=hidden -g -O /p/work2/tmondrag/petsc-N5i8ny/config.libraries/conftest.c
Successful compile:
Source:
#include "confdefs.h"
#include "conffix.h"
/* Override any gcc2 internal prototype to avoid an error. */
char dlopen();
static void _check_dlopen() { dlopen(); }
int main() {
_check_dlopen();;
  return 0;
}
Running Executable WITHOUT threads to time it out
Executing: mpicc -o /p/work2/tmondrag/petsc-N5i8ny/config.libraries/conftest -fPIC -Wall -Wwrite-strings -Wno-strict-aliasing -Wno-unknown-pragmas -fstack-protector -fvisibility=hidden -g -O /p/work2/tmondrag/petsc-N5i8ny/config.libraries/conftest.o -ldl -lstdc++ -ldl
Defined "HAVE_LIBDL" to "1"
Checking for header: dlfcn.h
Preprocessing source:
#include "confdefs.h"
#include "conffix.h"
#include <dlfcn.h>
Running Executable WITHOUT threads to time it out
Executing: mpicc -E -I/p/work2/tmondrag/petsc-N5i8ny/config.setCompilers -I/p/work2/tmondrag/petsc-N5i8ny/config.types -I/p/work2/tmondrag/petsc-N5i8ny/config.headers /p/work2/tmondrag/petsc-N5i8ny/config.headers/conftest.c
Preprocess stderr before filtering::
Preprocess stderr after filtering::
Defined "HAVE_DLFCN_H" to "1"
child config.libraries 1.576837
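HAVE_LIBDL plus HAVE_DLFCN_H is what later lets PETSc load shared libraries at run time; the probe above only confirms that dlopen() links with -ldl. A small usage sketch (the library name is illustrative):

    /* dl_probe.c -- build with "mpicc dl_probe.c -ldl" */
    #include <dlfcn.h>
    int main(void) {
      void *h = dlopen("libm.so.6", RTLD_NOW);  /* any shared library works */
      if (h) dlclose(h);
      return 0;
    }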
================================================================================
TEST configureLibraryOptions from PETSc.options.libraryOptions(/p/work2/tmondrag/moose/petsc/config/PETSc/options/libraryOptions.py:37)
TESTING: configureLibraryOptions from PETSc.options.libraryOptions(/p/work2/tmondrag/moose/petsc/config/PETSc/options/libraryOptions.py:37)
  Sets PETSC_USE_DEBUG, PETSC_USE_INFO, PETSC_USE_LOG, PETSC_USE_CTABLE, PETSC_USE_FORTRAN_KERNELS, and PETSC_USE_AVX512_KERNELS
Defined "USE_LOG" to "1"
Running Executable WITHOUT threads to time it out
Executing: mpicc -qversion
Defined "USE_MALLOC_COALESCED" to "1"
Defined "USE_INFO" to "1"
Defined "USE_CTABLE" to "1"
Defined "USE_BACKWARD_LOOP" to "1"
**********Checking if running on BGL/IBM detected
Checking for functions [bgl_perfctr_void] in library [''] []
Running Executable WITHOUT threads to time it out
Executing: mpicc -c -o /p/work2/tmondrag/petsc-N5i8ny/config.libraries/conftest.o -I/p/work2/tmondrag/petsc-N5i8ny/config.setCompilers -I/p/work2/tmondrag/petsc-N5i8ny/config.utilities.closure -I/p/work2/tmondrag/petsc-N5i8ny/config.compilers -I/p/work2/tmondrag/petsc-N5i8ny/config.utilities.cacheDetails -I/p/work2/tmondrag/petsc-N5i8ny/config.headers -I/p/work2/tmondrag/petsc-N5i8ny/config.libraries -I/p/work2/tmondrag/petsc-N5i8ny/config.types -fPIC -Wall -Wwrite-strings -Wno-strict-aliasing -Wno-unknown-pragmas -fstack-protector -fvisibility=hidden -g -O /p/work2/tmondrag/petsc-N5i8ny/config.libraries/conftest.c
Successful compile:
Source:
#include "confdefs.h"
#include "conffix.h"
/* Override any gcc2 internal prototype to avoid an error. */
char bgl_perfctr_void();
static void _check_bgl_perfctr_void() { bgl_perfctr_void(); }
int main() { _check_bgl_perfctr_void();; return 0; }
Running Executable WITHOUT threads to time it out
Executing: mpicc -o /p/work2/tmondrag/petsc-N5i8ny/config.libraries/conftest -fPIC -Wall -Wwrite-strings -Wno-strict-aliasing -Wno-unknown-pragmas -fstack-protector -fvisibility=hidden -g -O /p/work2/tmondrag/petsc-N5i8ny/config.libraries/conftest.o -lstdc++ -ldl
Possible ERROR while running linker: exit code 1
stderr:
/usr/bin/ld: /p/work2/tmondrag/petsc-N5i8ny/config.libraries/conftest.o: in function `_check_bgl_perfctr_void':
/p/work2/tmondrag/petsc-N5i8ny/config.libraries/conftest.c:5: undefined reference to `bgl_perfctr_void'
collect2: error: ld returned 1 exit status
Checking for functions [ADIOI_BGL_Open] in library [''] []
Running Executable WITHOUT threads to time it out
Executing: mpicc -c -o /p/work2/tmondrag/petsc-N5i8ny/config.libraries/conftest.o -I/p/work2/tmondrag/petsc-N5i8ny/config.setCompilers -I/p/work2/tmondrag/petsc-N5i8ny/config.utilities.closure -I/p/work2/tmondrag/petsc-N5i8ny/config.compilers -I/p/work2/tmondrag/petsc-N5i8ny/config.utilities.cacheDetails -I/p/work2/tmondrag/petsc-N5i8ny/config.headers -I/p/work2/tmondrag/petsc-N5i8ny/config.libraries -I/p/work2/tmondrag/petsc-N5i8ny/config.types -fPIC -Wall -Wwrite-strings -Wno-strict-aliasing -Wno-unknown-pragmas -fstack-protector -fvisibility=hidden -g -O /p/work2/tmondrag/petsc-N5i8ny/config.libraries/conftest.c
Successful compile:
Source:
#include "confdefs.h"
#include "conffix.h"
/* Override any gcc2 internal prototype to avoid an error. */
char ADIOI_BGL_Open();
static void _check_ADIOI_BGL_Open() { ADIOI_BGL_Open(); }
int main() { _check_ADIOI_BGL_Open();; return 0; }
Running Executable WITHOUT threads to time it out
Executing: mpicc -o /p/work2/tmondrag/petsc-N5i8ny/config.libraries/conftest -fPIC -Wall -Wwrite-strings -Wno-strict-aliasing -Wno-unknown-pragmas -fstack-protector -fvisibility=hidden -g -O /p/work2/tmondrag/petsc-N5i8ny/config.libraries/conftest.o -lstdc++ -ldl
Possible ERROR while running linker: exit code 1
stderr:
/usr/bin/ld: /p/work2/tmondrag/petsc-N5i8ny/config.libraries/conftest.o: in function `_check_ADIOI_BGL_Open':
/p/work2/tmondrag/petsc-N5i8ny/config.libraries/conftest.c:5: undefined reference to `ADIOI_BGL_Open'
collect2: error: ld returned 1 exit status
*********BGL/IBM test failure
Defined "USE_AVX512_KERNELS" to "1"
Defined "Alignx(a,b)" to " "
================================================================================
TEST configureISColorValueType from PETSc.options.libraryOptions(/p/work2/tmondrag/moose/petsc/config/PETSc/options/libraryOptions.py:95)
TESTING: configureISColorValueType from PETSc.options.libraryOptions(/p/work2/tmondrag/moose/petsc/config/PETSc/options/libraryOptions.py:95)
  Sets PETSC_IS_COLOR_VALUE_TYPE, MPIU_COLORING_VALUE, IS_COLORING_MAX required by ISColor
Defined "MPIU_COLORING_VALUE" to "MPI_UNSIGNED_SHORT"
Defined "IS_COLORING_MAX" to "USHRT_MAX"
Defined "IS_COLOR_VALUE_TYPE" to "short"
Defined "IS_COLOR_VALUE_TYPE_F" to "integer2"
child PETSc.options.libraryOptions 0.305376
child config.atomics 0.000011
================================================================================
TEST checkSysinfo from config.functions(/p/work2/tmondrag/moose/petsc/config/BuildSystem/config/functions.py:106)
TESTING: checkSysinfo from config.functions(/p/work2/tmondrag/moose/petsc/config/BuildSystem/config/functions.py:106)
  Check whether sysinfo takes three arguments, and if it does define HAVE_SYSINFO_3ARG
Checking for functions [sysinfo]
All intermediate test results are stored in /p/work2/tmondrag/petsc-N5i8ny/config.functions
Running Executable WITHOUT threads to time it out
Executing: mpicc -c -o /p/work2/tmondrag/petsc-N5i8ny/config.functions/conftest.o -I/p/work2/tmondrag/petsc-N5i8ny/config.setCompilers -I/p/work2/tmondrag/petsc-N5i8ny/config.utilities.closure -I/p/work2/tmondrag/petsc-N5i8ny/config.compilers -I/p/work2/tmondrag/petsc-N5i8ny/config.utilities.cacheDetails -I/p/work2/tmondrag/petsc-N5i8ny/config.headers -I/p/work2/tmondrag/petsc-N5i8ny/config.libraries -I/p/work2/tmondrag/petsc-N5i8ny/config.functions -I/p/work2/tmondrag/petsc-N5i8ny/config.types -fPIC -Wall -Wwrite-strings -Wno-strict-aliasing -Wno-unknown-pragmas -fstack-protector -fvisibility=hidden -g -O /p/work2/tmondrag/petsc-N5i8ny/config.functions/conftest.c
Successful compile:
Source:
#include "confdefs.h"
#include "conffix.h"
/* System header to define __stub macros and hopefully no other prototypes since they would conflict with our 'char funcname()' declaration below. */
#include <assert.h>
/* Override any gcc2 internal prototype to avoid an error. */
#ifdef __cplusplus
extern "C" {
#endif
/* We use char because int might match the return type of a gcc2 builtin and then its argument prototype would still apply. */
char sysinfo();
#ifdef __cplusplus
}
#endif
int main() {
#if defined (__stub_sysinfo) || defined (__stub___sysinfo)
sysinfo_will_always_fail_with_ENOSYS();
#else
sysinfo();
#endif
; return 0; }
Running Executable WITHOUT threads to time it out
Executing: mpicc -o /p/work2/tmondrag/petsc-N5i8ny/config.functions/conftest -fPIC -Wall -Wwrite-strings -Wno-strict-aliasing -Wno-unknown-pragmas -fstack-protector -fvisibility=hidden -g -O /p/work2/tmondrag/petsc-N5i8ny/config.functions/conftest.o -lstdc++ -ldl
Defined "HAVE_SYSINFO" to "1"
Checking for header: sys/sysinfo.h
Preprocessing source:
#include "confdefs.h"
#include "conffix.h"
#include <sys/sysinfo.h>
Running Executable WITHOUT threads to time it out
Executing: mpicc -E -I/p/work2/tmondrag/petsc-N5i8ny/config.setCompilers -I/p/work2/tmondrag/petsc-N5i8ny/config.types -I/p/work2/tmondrag/petsc-N5i8ny/config.headers /p/work2/tmondrag/petsc-N5i8ny/config.headers/conftest.c
Preprocess stderr before filtering::
Preprocess stderr after filtering::
Defined "HAVE_SYS_SYSINFO_H" to "1"
Checking for header: sys/systeminfo.h
Preprocessing source:
#include "confdefs.h"
#include "conffix.h"
#include <sys/systeminfo.h>
Running Executable WITHOUT threads to time it out
Executing: mpicc -E -I/p/work2/tmondrag/petsc-N5i8ny/config.setCompilers -I/p/work2/tmondrag/petsc-N5i8ny/config.types -I/p/work2/tmondrag/petsc-N5i8ny/config.headers /p/work2/tmondrag/petsc-N5i8ny/config.headers/conftest.c
Possible ERROR while running preprocessor: exit code 1
stdout:
# 1 "/p/work2/tmondrag/petsc-N5i8ny/config.headers/conftest.c"
# 1 "<built-in>"
# 1 "<command-line>"
# 31 "<command-line>"
# 1 "/usr/include/stdc-predef.h" 1 3 4
# 32 "<command-line>" 2
# 1 "/p/work2/tmondrag/petsc-N5i8ny/config.headers/conftest.c"
# 1 "/p/work2/tmondrag/petsc-N5i8ny/config.headers/confdefs.h" 1
# 2 "/p/work2/tmondrag/petsc-N5i8ny/config.headers/conftest.c" 2
# 1 "/p/work2/tmondrag/petsc-N5i8ny/config.headers/conffix.h" 1
# 3 "/p/work2/tmondrag/petsc-N5i8ny/config.headers/conftest.c" 2
stderr:
/p/work2/tmondrag/petsc-N5i8ny/config.headers/conftest.c:3:10: fatal error: sys/systeminfo.h: No such file or directory
 #include <sys/systeminfo.h>
          ^~~~~~~~~~~~~~~~~~
compilation terminated.
Source:
#include "confdefs.h"
#include "conffix.h"
#include <sys/systeminfo.h>
Preprocess stderr before filtering: /p/work2/tmondrag/petsc-N5i8ny/config.headers/conftest.c:3:10: fatal error: sys/systeminfo.h: No such file or directory
 #include <sys/systeminfo.h>
          ^~~~~~~~~~~~~~~~~~
compilation terminated. :
Preprocess stderr after filtering: /p/work2/tmondrag/petsc-N5i8ny/config.headers/conftest.c:3:10: fatal error: sys/systeminfo.h: No such file or directory
 #include <sys/systeminfo.h>
          ^~~~~~~~~~~~~~~~~~ compilation terminated. :
Running Executable WITHOUT threads to time it out
Executing: mpicc -c -o /p/work2/tmondrag/petsc-N5i8ny/config.functions/conftest.o -I/p/work2/tmondrag/petsc-N5i8ny/config.setCompilers -I/p/work2/tmondrag/petsc-N5i8ny/config.utilities.closure -I/p/work2/tmondrag/petsc-N5i8ny/config.compilers -I/p/work2/tmondrag/petsc-N5i8ny/config.utilities.cacheDetails -I/p/work2/tmondrag/petsc-N5i8ny/config.headers -I/p/work2/tmondrag/petsc-N5i8ny/config.libraries -I/p/work2/tmondrag/petsc-N5i8ny/config.types -I/p/work2/tmondrag/petsc-N5i8ny/config.functions -fPIC -Wall -Wwrite-strings -Wno-strict-aliasing -Wno-unknown-pragmas -fstack-protector -fvisibility=hidden -g -O /p/work2/tmondrag/petsc-N5i8ny/config.functions/conftest.c
Possible ERROR while running compiler: exit code 1
stderr:
/p/work2/tmondrag/petsc-N5i8ny/config.functions/conftest.c:9:4: error: #error "Cannot check sysinfo without special headers"
 # error "Cannot check sysinfo without special headers"
   ^~~~~
/p/work2/tmondrag/petsc-N5i8ny/config.functions/conftest.c: In function 'main':
/p/work2/tmondrag/petsc-N5i8ny/config.functions/conftest.c:13:30: warning: implicit declaration of function 'sysinfo' [-Wimplicit-function-declaration]
 char buf[10]; long count=10; sysinfo(1, buf, count);
                              ^~~~~~~
Source:
#include "confdefs.h"
#include "conffix.h"
#ifdef HAVE_SYS_SYSINFO_H
# include <sys/sysinfo.h>
#elif defined(HAVE_SYS_SYSTEMINFO_H)
# include <sys/systeminfo.h>
#else
# error "Cannot check sysinfo without special headers"
#endif
int main() { char buf[10]; long count=10; sysinfo(1, buf, count); ; return 0; }
Compile failed inside link
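(The header checks use the same machinery but stop at the preprocessor: a one-line include is run through mpicc -E, and a nonzero exit, as with sys/systeminfo.h above, marks the header missing. An illustrative sketch, again assuming mpicc on PATH:

  import os, subprocess, tempfile

  def have_header(header):
      # Preprocess-only probe: -E never compiles or links, so the only
      # way to fail is an unresolvable #include.
      with tempfile.TemporaryDirectory() as tmp:
          cfile = os.path.join(tmp, 'conftest.c')
          with open(cfile, 'w') as f:
              f.write('#include <%s>\n' % header)
          r = subprocess.run(['mpicc', '-E', cfile], capture_output=True)
          return r.returncode == 0

  for h in ('sys/sysinfo.h', 'sys/systeminfo.h'):
      print(h, 'found' if have_header(h) else 'missing')

On this Linux box sys/sysinfo.h exists and the Solaris-style sys/systeminfo.h does not, which is exactly what the log shows.)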
================================================================================
TEST checkVPrintf from config.functions(/p/work2/tmondrag/moose/petsc/config/BuildSystem/config/functions.py:125)
TESTING: checkVPrintf from config.functions(/p/work2/tmondrag/moose/petsc/config/BuildSystem/config/functions.py:125)
  Checks whether vprintf requires a char * last argument, and if it does defines HAVE_VPRINTF_CHAR
Running Executable WITHOUT threads to time it out
Executing: mpicc -c -o /p/work2/tmondrag/petsc-N5i8ny/config.functions/conftest.o -I/p/work2/tmondrag/petsc-N5i8ny/config.setCompilers -I/p/work2/tmondrag/petsc-N5i8ny/config.utilities.closure -I/p/work2/tmondrag/petsc-N5i8ny/config.compilers -I/p/work2/tmondrag/petsc-N5i8ny/config.utilities.cacheDetails -I/p/work2/tmondrag/petsc-N5i8ny/config.headers -I/p/work2/tmondrag/petsc-N5i8ny/config.libraries -I/p/work2/tmondrag/petsc-N5i8ny/config.types -I/p/work2/tmondrag/petsc-N5i8ny/config.functions -fPIC -Wall -Wwrite-strings -Wno-strict-aliasing -Wno-unknown-pragmas -fstack-protector -fvisibility=hidden -g -O /p/work2/tmondrag/petsc-N5i8ny/config.functions/conftest.c
Successful compile:
Source:
#include "confdefs.h"
#include "conffix.h"
#include <stdio.h>
#include <stdarg.h>
int main() { va_list Argp; vprintf( "%d", Argp ); ; return 0; }
Running Executable WITHOUT threads to time it out
Executing: mpicc -o /p/work2/tmondrag/petsc-N5i8ny/config.functions/conftest -fPIC -Wall -Wwrite-strings -Wno-strict-aliasing -Wno-unknown-pragmas -fstack-protector -fvisibility=hidden -g -O /p/work2/tmondrag/petsc-N5i8ny/config.functions/conftest.o -lstdc++ -ldl
================================================================================
TEST checkVFPrintf from config.functions(/p/work2/tmondrag/moose/petsc/config/BuildSystem/config/functions.py:131)
TESTING: checkVFPrintf from config.functions(/p/work2/tmondrag/moose/petsc/config/BuildSystem/config/functions.py:131)
  Checks whether vfprintf requires a char * last argument, and if it does defines HAVE_VFPRINTF_CHAR
Running Executable WITHOUT threads to time it out
Executing: mpicc -c -o /p/work2/tmondrag/petsc-N5i8ny/config.functions/conftest.o -I/p/work2/tmondrag/petsc-N5i8ny/config.setCompilers -I/p/work2/tmondrag/petsc-N5i8ny/config.utilities.closure -I/p/work2/tmondrag/petsc-N5i8ny/config.compilers -I/p/work2/tmondrag/petsc-N5i8ny/config.utilities.cacheDetails -I/p/work2/tmondrag/petsc-N5i8ny/config.headers -I/p/work2/tmondrag/petsc-N5i8ny/config.libraries -I/p/work2/tmondrag/petsc-N5i8ny/config.types -I/p/work2/tmondrag/petsc-N5i8ny/config.functions -fPIC -Wall -Wwrite-strings -Wno-strict-aliasing -Wno-unknown-pragmas -fstack-protector -fvisibility=hidden -g -O /p/work2/tmondrag/petsc-N5i8ny/config.functions/conftest.c
Successful compile:
Source:
#include "confdefs.h"
#include "conffix.h"
#include <stdio.h>
#include <stdarg.h>
int main() { va_list Argp; vfprintf(stdout, "%d", Argp ); ; return 0; }
Running Executable WITHOUT threads to time it out
Executing: mpicc -o /p/work2/tmondrag/petsc-N5i8ny/config.functions/conftest -fPIC -Wall -Wwrite-strings -Wno-strict-aliasing -Wno-unknown-pragmas -fstack-protector -fvisibility=hidden -g -O /p/work2/tmondrag/petsc-N5i8ny/config.functions/conftest.o -lstdc++ -ldl
================================================================================
TEST checkVSNPrintf from config.functions(/p/work2/tmondrag/moose/petsc/config/BuildSystem/config/functions.py:137)
TESTING: checkVSNPrintf from config.functions(/p/work2/tmondrag/moose/petsc/config/BuildSystem/config/functions.py:137)
  Checks whether vsnprintf requires a char * last argument, and if it does defines HAVE_VSNPRINTF_CHAR
Running Executable WITHOUT threads to time it out
Executing: mpicc -c -o /p/work2/tmondrag/petsc-N5i8ny/config.functions/conftest.o -I/p/work2/tmondrag/petsc-N5i8ny/config.setCompilers -I/p/work2/tmondrag/petsc-N5i8ny/config.utilities.closure -I/p/work2/tmondrag/petsc-N5i8ny/config.compilers -I/p/work2/tmondrag/petsc-N5i8ny/config.utilities.cacheDetails -I/p/work2/tmondrag/petsc-N5i8ny/config.headers -I/p/work2/tmondrag/petsc-N5i8ny/config.libraries -I/p/work2/tmondrag/petsc-N5i8ny/config.types -I/p/work2/tmondrag/petsc-N5i8ny/config.functions -fPIC -Wall -Wwrite-strings -Wno-strict-aliasing -Wno-unknown-pragmas -fstack-protector -fvisibility=hidden -g -O /p/work2/tmondrag/petsc-N5i8ny/config.functions/conftest.c
Successful compile:
Source:
#include "confdefs.h"
#include "conffix.h"
#include <stdio.h>
#include <stdarg.h>
int main() { va_list Argp;char str[6]; vsnprintf(str,5, "%d", Argp ); ; return 0; }
Running Executable WITHOUT threads to time it out
Executing: mpicc -o /p/work2/tmondrag/petsc-N5i8ny/config.functions/conftest -fPIC -Wall -Wwrite-strings -Wno-strict-aliasing -Wno-unknown-pragmas -fstack-protector -fvisibility=hidden -g -O /p/work2/tmondrag/petsc-N5i8ny/config.functions/conftest.o -lstdc++ -ldl
Defined "HAVE_VSNPRINTF" to "1"
================================================================================
TEST checkNanosleep from config.functions(/p/work2/tmondrag/moose/petsc/config/BuildSystem/config/functions.py:167)
TESTING: checkNanosleep from config.functions(/p/work2/tmondrag/moose/petsc/config/BuildSystem/config/functions.py:167)
  Check for functional nanosleep() - as time.h behaves differently for different compiler flags - like -std=c89
Running Executable WITHOUT threads to time it out
Executing: mpicc -c -o /p/work2/tmondrag/petsc-N5i8ny/config.functions/conftest.o -I/p/work2/tmondrag/petsc-N5i8ny/config.setCompilers -I/p/work2/tmondrag/petsc-N5i8ny/config.utilities.closure -I/p/work2/tmondrag/petsc-N5i8ny/config.compilers -I/p/work2/tmondrag/petsc-N5i8ny/config.utilities.cacheDetails -I/p/work2/tmondrag/petsc-N5i8ny/config.headers -I/p/work2/tmondrag/petsc-N5i8ny/config.libraries -I/p/work2/tmondrag/petsc-N5i8ny/config.types -I/p/work2/tmondrag/petsc-N5i8ny/config.functions -fPIC -Wall -Wwrite-strings -Wno-strict-aliasing -Wno-unknown-pragmas -fstack-protector -fvisibility=hidden -g -O /p/work2/tmondrag/petsc-N5i8ny/config.functions/conftest.c
Successful compile:
Source:
#include "confdefs.h"
#include "conffix.h"
#include <time.h>
int main() { struct timespec tp; tp.tv_sec = 0; tp.tv_nsec = (long)(1e9); nanosleep(&tp,0); ; return 0; }
Running Executable WITHOUT threads to time it out
Executing: mpicc -o /p/work2/tmondrag/petsc-N5i8ny/config.functions/conftest -fPIC -Wall -Wwrite-strings -Wno-strict-aliasing -Wno-unknown-pragmas -fstack-protector -fvisibility=hidden -g -O /p/work2/tmondrag/petsc-N5i8ny/config.functions/conftest.o -lstdc++ -ldl
Defined "HAVE_NANOSLEEP" to "1"
================================================================================
TEST checkMemmove from config.functions(/p/work2/tmondrag/moose/petsc/config/BuildSystem/config/functions.py:173)
TESTING: checkMemmove from config.functions(/p/work2/tmondrag/moose/petsc/config/BuildSystem/config/functions.py:173)
  Check for functional memmove() - as MS VC requires the correct includes for this test
Running Executable WITHOUT threads to time it out
Executing: mpicc -c -o /p/work2/tmondrag/petsc-N5i8ny/config.functions/conftest.o -I/p/work2/tmondrag/petsc-N5i8ny/config.setCompilers -I/p/work2/tmondrag/petsc-N5i8ny/config.utilities.closure -I/p/work2/tmondrag/petsc-N5i8ny/config.compilers -I/p/work2/tmondrag/petsc-N5i8ny/config.utilities.cacheDetails -I/p/work2/tmondrag/petsc-N5i8ny/config.headers -I/p/work2/tmondrag/petsc-N5i8ny/config.libraries -I/p/work2/tmondrag/petsc-N5i8ny/config.types -I/p/work2/tmondrag/petsc-N5i8ny/config.functions -fPIC -Wall -Wwrite-strings -Wno-strict-aliasing -Wno-unknown-pragmas -fstack-protector -fvisibility=hidden -g -O /p/work2/tmondrag/petsc-N5i8ny/config.functions/conftest.c
Successful compile:
Source:
#include "confdefs.h"
#include "conffix.h"
#include <string.h>
int main() { char c1[1], c2[1] = "c"; size_t n=1; memmove(c1,c2,n); ; return 0; }
Running Executable WITHOUT threads to time it out
Executing: mpicc -o /p/work2/tmondrag/petsc-N5i8ny/config.functions/conftest -fPIC -Wall -Wwrite-strings -Wno-strict-aliasing -Wno-unknown-pragmas -fstack-protector -fvisibility=hidden -g -O /p/work2/tmondrag/petsc-N5i8ny/config.functions/conftest.o -lstdc++ -ldl
Defined "HAVE_MEMMOVE" to "1"
================================================================================
TEST checkSignalHandlerType from config.functions(/p/work2/tmondrag/moose/petsc/config/BuildSystem/config/functions.py:143)
TESTING: checkSignalHandlerType from config.functions(/p/work2/tmondrag/moose/petsc/config/BuildSystem/config/functions.py:143)
  Checks the type of C++ signal handlers, and defines SIGNAL_CAST to the correct value
Running Executable WITHOUT threads to time it out
Executing: mpicxx -c -o /p/work2/tmondrag/petsc-N5i8ny/config.functions/conftest.o -I/p/work2/tmondrag/petsc-N5i8ny/config.setCompilers -I/p/work2/tmondrag/petsc-N5i8ny/config.compilers -I/p/work2/tmondrag/petsc-N5i8ny/config.functions -I/p/work2/tmondrag/petsc-N5i8ny/config.types -Wall -Wwrite-strings -Wno-strict-aliasing -Wno-unknown-pragmas -fstack-protector -fvisibility=hidden -g -O -fPIC /p/work2/tmondrag/petsc-N5i8ny/config.functions/conftest.cc
Successful compile:
Source:
#include "confdefs.h"
#include "conffix.h"
#include <signal.h>
static void myhandler(int sig) {}
int main() { signal(SIGFPE,myhandler); ; return 0; }
Running Executable WITHOUT threads to time it out
Executing: mpicxx -o /p/work2/tmondrag/petsc-N5i8ny/config.functions/conftest -Wall -Wwrite-strings -Wno-strict-aliasing -Wno-unknown-pragmas -fstack-protector -fvisibility=hidden -g -O /p/work2/tmondrag/petsc-N5i8ny/config.functions/conftest.o -lstdc++ -ldl
Defined "SIGNAL_CAST" to " "
================================================================================
TEST checkFreeReturnType from config.functions(/p/work2/tmondrag/moose/petsc/config/BuildSystem/config/functions.py:153)
TESTING: checkFreeReturnType from config.functions(/p/work2/tmondrag/moose/petsc/config/BuildSystem/config/functions.py:153)
  Checks whether free returns void or int, and defines HAVE_FREE_RETURN_INT
Running Executable WITHOUT threads to time it out
Executing: mpicc -c -o /p/work2/tmondrag/petsc-N5i8ny/config.functions/conftest.o -I/p/work2/tmondrag/petsc-N5i8ny/config.setCompilers -I/p/work2/tmondrag/petsc-N5i8ny/config.utilities.closure -I/p/work2/tmondrag/petsc-N5i8ny/config.compilers -I/p/work2/tmondrag/petsc-N5i8ny/config.utilities.cacheDetails -I/p/work2/tmondrag/petsc-N5i8ny/config.headers -I/p/work2/tmondrag/petsc-N5i8ny/config.libraries -I/p/work2/tmondrag/petsc-N5i8ny/config.types -I/p/work2/tmondrag/petsc-N5i8ny/config.functions -fPIC -Wall -Wwrite-strings -Wno-strict-aliasing -Wno-unknown-pragmas -fstack-protector -fvisibility=hidden -g -O /p/work2/tmondrag/petsc-N5i8ny/config.functions/conftest.c
Possible ERROR while running compiler: exit code 1
stderr:
/p/work2/tmondrag/petsc-N5i8ny/config.functions/conftest.c: In function 'main':
/p/work2/tmondrag/petsc-N5i8ny/config.functions/conftest.c:6:25: error: void value not ignored as it ought to be
 int ierr; void *p; ierr = free(p); return 0;
                         ^
/p/work2/tmondrag/petsc-N5i8ny/config.functions/conftest.c:6:5: warning: variable 'ierr' set but not used [-Wunused-but-set-variable]
 int ierr; void *p; ierr = free(p); return 0;
     ^~~~
Source:
#include "confdefs.h"
#include "conffix.h"
#include <stdlib.h>
int main() { int ierr; void *p; ierr = free(p); return 0; ; return 0; }
Compile failed inside link
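(The compile error here is the expected outcome: the probe deliberately assigns free()'s result to an int, so with a standard void-returning free() the compile must fail and HAVE_FREE_RETURN_INT stays undefined. A sketch of that inverted logic, illustrative only and not BuildSystem's actual code:

  import os, subprocess, tempfile

  SRC = '''#include <stdlib.h>
  int main() { int ierr; void *p = 0; ierr = free(p); return ierr; }
  '''

  def free_returns_int():
      # A successful compile means free() really returns int on this
      # system; the usual compile failure means it returns void.
      with tempfile.TemporaryDirectory() as tmp:
          cfile = os.path.join(tmp, 'conftest.c')
          with open(cfile, 'w') as f:
              f.write(SRC)
          r = subprocess.run(['mpicc', '-c', cfile, '-o',
                              os.path.join(tmp, 'conftest.o')],
                             capture_output=True)
          return r.returncode == 0

  if free_returns_int():
      print('Defined "HAVE_FREE_RETURN_INT" to "1"')
)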
================================================================================
TEST checkVariableArgumentLists from config.functions(/p/work2/tmondrag/moose/petsc/config/BuildSystem/config/functions.py:159)
TESTING: checkVariableArgumentLists from config.functions(/p/work2/tmondrag/moose/petsc/config/BuildSystem/config/functions.py:159)
  Checks whether the variable argument list functionality is working
Running Executable WITHOUT threads to time it out
Executing: mpicc -c -o /p/work2/tmondrag/petsc-N5i8ny/config.functions/conftest.o -I/p/work2/tmondrag/petsc-N5i8ny/config.setCompilers -I/p/work2/tmondrag/petsc-N5i8ny/config.utilities.closure -I/p/work2/tmondrag/petsc-N5i8ny/config.compilers -I/p/work2/tmondrag/petsc-N5i8ny/config.utilities.cacheDetails -I/p/work2/tmondrag/petsc-N5i8ny/config.headers -I/p/work2/tmondrag/petsc-N5i8ny/config.libraries -I/p/work2/tmondrag/petsc-N5i8ny/config.types -I/p/work2/tmondrag/petsc-N5i8ny/config.functions -fPIC -Wall -Wwrite-strings -Wno-strict-aliasing -Wno-unknown-pragmas -fstack-protector -fvisibility=hidden -g -O /p/work2/tmondrag/petsc-N5i8ny/config.functions/conftest.c
Successful compile:
Source:
#include "confdefs.h"
#include "conffix.h"
#include <stdarg.h>
int main() { va_list l1, l2; va_copy(l1, l2); return 0; ; return 0; }
Running Executable WITHOUT threads to time it out
Executing: mpicc -o /p/work2/tmondrag/petsc-N5i8ny/config.functions/conftest -fPIC -Wall -Wwrite-strings -Wno-strict-aliasing -Wno-unknown-pragmas -fstack-protector -fvisibility=hidden -g -O /p/work2/tmondrag/petsc-N5i8ny/config.functions/conftest.o -lstdc++ -ldl
Defined "HAVE_VA_COPY" to "1"
================================================================================
TEST checkClassify from config.functions(/p/work2/tmondrag/moose/petsc/config/BuildSystem/config/functions.py:85)
TESTING: checkClassify from config.functions(/p/work2/tmondrag/moose/petsc/config/BuildSystem/config/functions.py:85)
  Recursive decompose to rapidly classify functions as found or missing.
  To confirm that a function is missing, we require a compile/link failure with only that function in a compilation unit.
  In contrast, we can confirm that many functions are present by compiling them all together in a large compilation unit.
  We optimistically compile everything together, then trim all functions that were named in the error message and bisect the result.
  The trimming is only an optimization to increase the likelihood of a big-batch compile succeeding; we do not rely on the compiler naming missing functions.
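(In outline, that strategy reads like the following sketch, where link_ok() is a batch version of the probe shown earlier and names_in_errors() is a hypothetical helper that parses "undefined reference" messages; both are illustrative, not BuildSystem's actual code:

  def classify(funcs, link_ok, names_in_errors):
      # Returns (found, missing); confirms a miss only via a failing
      # single-function compile, never from the error text alone.
      found, missing = set(), set()
      def recurse(batch):
          if not batch:
              return
          if link_ok(batch):
              found.update(batch)      # one batch compile confirms many at once
              return
          if len(batch) == 1:
              missing.add(batch[0])    # a lone failure is a confirmed miss
              return
          named = set(names_in_errors(batch)) & set(batch)
          rest = [f for f in batch if f not in named]
          for f in named:              # trimming is only an optimization:
              recurse([f])             # every trimmed name is re-checked alone
          mid = len(rest) // 2         # bisect whatever remains ambiguous
          recurse(rest[:mid])
          recurse(rest[mid:])
      recurse(list(funcs))
      return found, missing

The run below shows exactly this: one 38-function batch fails, the names the linker reported are re-checked one at a time, and the remainder is split into batches that link cleanly.)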
Checking for functions [rand getdomainname _access snprintf realpath dlsym bzero _getcwd getwd uname _lseek sleep _sleep lseek usleep dlclose gethostname clock access _snprintf dlerror fork getpagesize sbreak memalign getcwd gethostbyname readlink _set_output_format PXFGETARG strcasecmp dlopen drand48 socket popen getrusage _mkdir time stricmp]
Running Executable WITHOUT threads to time it out
Executing: mpicc -c -o /p/work2/tmondrag/petsc-N5i8ny/config.functions/conftest.o -I/p/work2/tmondrag/petsc-N5i8ny/config.setCompilers -I/p/work2/tmondrag/petsc-N5i8ny/config.utilities.closure -I/p/work2/tmondrag/petsc-N5i8ny/config.compilers -I/p/work2/tmondrag/petsc-N5i8ny/config.utilities.cacheDetails -I/p/work2/tmondrag/petsc-N5i8ny/config.headers -I/p/work2/tmondrag/petsc-N5i8ny/config.libraries -I/p/work2/tmondrag/petsc-N5i8ny/config.types -I/p/work2/tmondrag/petsc-N5i8ny/config.functions -fPIC -Wall -Wwrite-strings -Wno-strict-aliasing -Wno-unknown-pragmas -fstack-protector -fvisibility=hidden -g -O /p/work2/tmondrag/petsc-N5i8ny/config.functions/conftest.c
Possible ERROR while running compiler:
stderr:
/p/work2/tmondrag/petsc-N5i8ny/config.functions/conftest.c:16:6: warning: conflicting types for built-in function 'snprintf' [-Wbuiltin-declaration-mismatch]
 char snprintf();
      ^~~~~~~~
/p/work2/tmondrag/petsc-N5i8ny/config.functions/conftest.c:19:6: warning: conflicting types for built-in function 'bzero' [-Wbuiltin-declaration-mismatch]
 char bzero();
      ^~~~~
/p/work2/tmondrag/petsc-N5i8ny/config.functions/conftest.c:34:6: warning: conflicting types for built-in function 'fork' [-Wbuiltin-declaration-mismatch]
 char fork();
      ^~~~
/p/work2/tmondrag/petsc-N5i8ny/config.functions/conftest.c:43:6: warning: conflicting types for built-in function 'strcasecmp' [-Wbuiltin-declaration-mismatch]
 char strcasecmp();
      ^~~~~~~~~~
Source:
#include "confdefs.h"
#include "conffix.h"
/* System header to define __stub macros and hopefully no other prototypes since they would conflict with our 'char funcname()' declaration below. */
#include <assert.h>
/* Override any gcc2 internal prototype to avoid an error. */
#ifdef __cplusplus
extern "C" {
#endif
/* We use char because int might match the return type of a gcc2 builtin and then its argument prototype would still apply. */
char rand();
char getdomainname();
char _access();
char snprintf();
char realpath();
char dlsym();
char bzero();
char _getcwd();
char getwd();
char uname();
char _lseek();
char sleep();
char _sleep();
char lseek();
char usleep();
char dlclose();
char gethostname();
char clock();
char access();
char _snprintf();
char dlerror();
char fork();
char getpagesize();
char sbreak();
char memalign();
char getcwd();
char gethostbyname();
char readlink();
char _set_output_format();
char PXFGETARG();
char strcasecmp();
char dlopen();
char drand48();
char socket();
char popen();
char getrusage();
char _mkdir();
char time();
char stricmp();
#ifdef __cplusplus
}
#endif
int main() {
#if defined (__stub_rand) || defined (__stub___rand)
rand_will_always_fail_with_ENOSYS();
#else
rand();
#endif
#if defined (__stub_getdomainname) || defined (__stub___getdomainname)
getdomainname_will_always_fail_with_ENOSYS();
#else
getdomainname();
#endif
#if defined (__stub__access) || defined (__stub____access)
_access_will_always_fail_with_ENOSYS();
#else
_access();
#endif
#if defined (__stub_snprintf) || defined (__stub___snprintf)
snprintf_will_always_fail_with_ENOSYS();
#else
snprintf();
#endif
#if defined (__stub_realpath) || defined (__stub___realpath)
realpath_will_always_fail_with_ENOSYS();
#else
realpath();
#endif
#if defined (__stub_dlsym) || defined (__stub___dlsym)
dlsym_will_always_fail_with_ENOSYS();
#else
dlsym();
#endif
#if defined (__stub_bzero) || defined (__stub___bzero)
bzero_will_always_fail_with_ENOSYS();
#else
bzero();
#endif
#if defined (__stub__getcwd) || defined (__stub____getcwd)
_getcwd_will_always_fail_with_ENOSYS();
#else
_getcwd();
#endif
#if defined (__stub_getwd) || defined (__stub___getwd)
getwd_will_always_fail_with_ENOSYS();
#else
getwd();
#endif
#if defined (__stub_uname) || defined (__stub___uname)
uname_will_always_fail_with_ENOSYS();
#else
uname();
#endif
#if defined (__stub__lseek) || defined (__stub____lseek)
_lseek_will_always_fail_with_ENOSYS();
#else
_lseek();
#endif
#if defined (__stub_sleep) || defined (__stub___sleep)
sleep_will_always_fail_with_ENOSYS();
#else
sleep();
#endif
#if defined (__stub__sleep) || defined (__stub____sleep)
_sleep_will_always_fail_with_ENOSYS();
#else
_sleep();
#endif
#if defined (__stub_lseek) || defined (__stub___lseek)
lseek_will_always_fail_with_ENOSYS();
#else
lseek();
#endif
#if defined (__stub_usleep) || defined (__stub___usleep)
usleep_will_always_fail_with_ENOSYS();
#else
usleep();
#endif
#if defined (__stub_dlclose) || defined (__stub___dlclose)
dlclose_will_always_fail_with_ENOSYS();
#else
dlclose();
#endif
#if defined (__stub_gethostname) || defined (__stub___gethostname)
gethostname_will_always_fail_with_ENOSYS();
#else
gethostname();
#endif
#if defined (__stub_clock) || defined (__stub___clock)
clock_will_always_fail_with_ENOSYS();
#else
clock();
#endif
#if defined (__stub_access) || defined (__stub___access)
access_will_always_fail_with_ENOSYS();
#else
access();
#endif
#if defined (__stub__snprintf) || defined (__stub____snprintf)
_snprintf_will_always_fail_with_ENOSYS();
#else
_snprintf();
#endif
#if defined (__stub_dlerror) || defined (__stub___dlerror)
dlerror_will_always_fail_with_ENOSYS();
#else
dlerror();
#endif
#if defined (__stub_fork) || defined (__stub___fork)
fork_will_always_fail_with_ENOSYS();
#else
fork();
#endif
#if defined (__stub_getpagesize) || defined (__stub___getpagesize)
getpagesize_will_always_fail_with_ENOSYS();
#else
getpagesize();
#endif
#if defined (__stub_sbreak) || defined (__stub___sbreak)
sbreak_will_always_fail_with_ENOSYS();
#else
sbreak();
#endif
#if defined (__stub_memalign) || defined (__stub___memalign)
memalign_will_always_fail_with_ENOSYS();
#else
memalign();
#endif
#if defined (__stub_getcwd) || defined (__stub___getcwd)
getcwd_will_always_fail_with_ENOSYS();
#else
getcwd();
#endif
#if defined (__stub_gethostbyname) || defined (__stub___gethostbyname)
gethostbyname_will_always_fail_with_ENOSYS();
#else
gethostbyname();
#endif
#if defined (__stub_readlink) || defined (__stub___readlink)
readlink_will_always_fail_with_ENOSYS();
#else
readlink();
#endif
#if defined (__stub__set_output_format) || defined (__stub____set_output_format)
_set_output_format_will_always_fail_with_ENOSYS();
#else
_set_output_format();
#endif
#if defined (__stub_PXFGETARG) || defined (__stub___PXFGETARG)
PXFGETARG_will_always_fail_with_ENOSYS();
#else
PXFGETARG();
#endif
#if defined (__stub_strcasecmp) || defined (__stub___strcasecmp)
strcasecmp_will_always_fail_with_ENOSYS();
#else
strcasecmp();
#endif
#if defined (__stub_dlopen) || defined (__stub___dlopen)
dlopen_will_always_fail_with_ENOSYS();
#else
dlopen();
#endif
#if defined (__stub_drand48) || defined (__stub___drand48)
drand48_will_always_fail_with_ENOSYS();
#else
drand48();
#endif
#if defined (__stub_socket) || defined (__stub___socket)
socket_will_always_fail_with_ENOSYS();
#else
socket();
#endif
#if defined (__stub_popen) || defined (__stub___popen)
popen_will_always_fail_with_ENOSYS();
#else
popen();
#endif
#if defined (__stub_getrusage) || defined (__stub___getrusage)
getrusage_will_always_fail_with_ENOSYS();
#else
getrusage();
#endif
#if defined (__stub__mkdir) || defined (__stub____mkdir)
_mkdir_will_always_fail_with_ENOSYS();
#else
_mkdir();
#endif
#if defined (__stub_time) || defined (__stub___time)
time_will_always_fail_with_ENOSYS();
#else
time();
#endif
#if defined (__stub_stricmp) || defined (__stub___stricmp)
stricmp_will_always_fail_with_ENOSYS();
#else
stricmp();
#endif
; return 0; }
Running Executable WITHOUT threads to time it out
Executing: mpicc -o /p/work2/tmondrag/petsc-N5i8ny/config.functions/conftest -fPIC -Wall -Wwrite-strings -Wno-strict-aliasing -Wno-unknown-pragmas -fstack-protector -fvisibility=hidden -g -O /p/work2/tmondrag/petsc-N5i8ny/config.functions/conftest.o -lstdc++ -ldl
Possible ERROR while running linker: exit code 1
stderr:
/usr/bin/ld: /p/work2/tmondrag/petsc-N5i8ny/config.functions/conftest.o: in function `main':
/p/work2/tmondrag/petsc-N5i8ny/config.functions/conftest.c:110: warning: the `getwd' function is dangerous and should not be used.
/usr/bin/ld: /p/work2/tmondrag/petsc-N5i8ny/config.functions/conftest.c:74: undefined reference to `_access'
/usr/bin/ld: /p/work2/tmondrag/petsc-N5i8ny/config.functions/conftest.c:104: undefined reference to `_getcwd'
/usr/bin/ld: /p/work2/tmondrag/petsc-N5i8ny/config.functions/conftest.c:122: undefined reference to `_lseek'
/usr/bin/ld: /p/work2/tmondrag/petsc-N5i8ny/config.functions/conftest.c:134: undefined reference to `_sleep'
/usr/bin/ld: /p/work2/tmondrag/petsc-N5i8ny/config.functions/conftest.c:176: undefined reference to `_snprintf'
/usr/bin/ld: /p/work2/tmondrag/petsc-N5i8ny/config.functions/conftest.c:200: undefined reference to `sbreak'
/usr/bin/ld: /p/work2/tmondrag/petsc-N5i8ny/config.functions/conftest.c:230: undefined reference to `_set_output_format'
/usr/bin/ld: /p/work2/tmondrag/petsc-N5i8ny/config.functions/conftest.c:236: undefined reference to `PXFGETARG'
/usr/bin/ld: /p/work2/tmondrag/petsc-N5i8ny/config.functions/conftest.c:278: undefined reference to `_mkdir'
/usr/bin/ld: /p/work2/tmondrag/petsc-N5i8ny/config.functions/conftest.c:290: undefined reference to `stricmp'
collect2: error: ld returned 1 exit status
Checking for functions [rand getdomainname realpath dlsym bzero uname usleep dlclose gethostname clock dlerror]
Running Executable WITHOUT threads to time it out
Executing: mpicc -c -o /p/work2/tmondrag/petsc-N5i8ny/config.functions/conftest.o -I/p/work2/tmondrag/petsc-N5i8ny/config.setCompilers -I/p/work2/tmondrag/petsc-N5i8ny/config.utilities.closure -I/p/work2/tmondrag/petsc-N5i8ny/config.compilers -I/p/work2/tmondrag/petsc-N5i8ny/config.utilities.cacheDetails -I/p/work2/tmondrag/petsc-N5i8ny/config.headers -I/p/work2/tmondrag/petsc-N5i8ny/config.libraries -I/p/work2/tmondrag/petsc-N5i8ny/config.types -I/p/work2/tmondrag/petsc-N5i8ny/config.functions -fPIC -Wall -Wwrite-strings -Wno-strict-aliasing -Wno-unknown-pragmas -fstack-protector -fvisibility=hidden -g -O /p/work2/tmondrag/petsc-N5i8ny/config.functions/conftest.c
Possible ERROR while running compiler:
stderr:
/p/work2/tmondrag/petsc-N5i8ny/config.functions/conftest.c:17:6: warning: conflicting types for built-in function 'bzero' [-Wbuiltin-declaration-mismatch]
 char bzero();
      ^~~~~
Source:
#include "confdefs.h"
#include "conffix.h"
/* System header to define __stub macros and hopefully no other prototypes since they would conflict with our 'char funcname()' declaration below. */
#include <assert.h>
/* Override any gcc2 internal prototype to avoid an error. */
#ifdef __cplusplus
extern "C" {
#endif
/* We use char because int might match the return type of a gcc2 builtin and then its argument prototype would still apply. */
char rand();
char getdomainname();
char realpath();
char dlsym();
char bzero();
char uname();
char usleep();
char dlclose();
char gethostname();
char clock();
char dlerror();
#ifdef __cplusplus
}
#endif
int main() {
#if defined (__stub_rand) || defined (__stub___rand)
rand_will_always_fail_with_ENOSYS();
#else
rand();
#endif
#if defined (__stub_getdomainname) || defined (__stub___getdomainname)
getdomainname_will_always_fail_with_ENOSYS();
#else
getdomainname();
#endif
#if defined (__stub_realpath) || defined (__stub___realpath)
realpath_will_always_fail_with_ENOSYS();
#else
realpath();
#endif
#if defined (__stub_dlsym) || defined (__stub___dlsym)
dlsym_will_always_fail_with_ENOSYS();
#else
dlsym();
#endif
#if defined (__stub_bzero) || defined (__stub___bzero)
bzero_will_always_fail_with_ENOSYS();
#else
bzero();
#endif
#if defined (__stub_uname) || defined (__stub___uname)
uname_will_always_fail_with_ENOSYS();
#else
uname();
#endif
#if defined (__stub_usleep) || defined (__stub___usleep)
usleep_will_always_fail_with_ENOSYS();
#else
usleep();
#endif
#if defined (__stub_dlclose) || defined (__stub___dlclose)
dlclose_will_always_fail_with_ENOSYS();
#else
dlclose();
#endif
#if defined (__stub_gethostname) || defined (__stub___gethostname)
gethostname_will_always_fail_with_ENOSYS();
#else
gethostname();
#endif
#if defined (__stub_clock) || defined (__stub___clock)
clock_will_always_fail_with_ENOSYS();
#else
clock();
#endif
#if defined (__stub_dlerror) || defined (__stub___dlerror)
dlerror_will_always_fail_with_ENOSYS();
#else
dlerror();
#endif
; return 0; }
Running Executable WITHOUT threads to time it out
Executing: mpicc -o /p/work2/tmondrag/petsc-N5i8ny/config.functions/conftest -fPIC -Wall -Wwrite-strings -Wno-strict-aliasing -Wno-unknown-pragmas -fstack-protector -fvisibility=hidden -g -O /p/work2/tmondrag/petsc-N5i8ny/config.functions/conftest.o -lstdc++ -ldl
Defined "HAVE_RAND" to "1"
Defined "HAVE_GETDOMAINNAME" to "1"
Defined "HAVE_REALPATH" to "1"
Defined "HAVE_DLSYM" to "1"
Defined "HAVE_BZERO" to "1"
Defined "HAVE_UNAME" to "1"
Defined "HAVE_USLEEP" to "1"
Defined "HAVE_DLCLOSE" to "1"
Defined "HAVE_GETHOSTNAME" to "1"
Defined "HAVE_CLOCK" to "1"
Defined "HAVE_DLERROR" to "1"
Checking for functions [fork getpagesize memalign gethostbyname readlink strcasecmp dlopen drand48 socket popen getrusage time]
Running Executable WITHOUT threads to time it out
Executing: mpicc -c -o /p/work2/tmondrag/petsc-N5i8ny/config.functions/conftest.o -I/p/work2/tmondrag/petsc-N5i8ny/config.setCompilers -I/p/work2/tmondrag/petsc-N5i8ny/config.utilities.closure -I/p/work2/tmondrag/petsc-N5i8ny/config.compilers -I/p/work2/tmondrag/petsc-N5i8ny/config.utilities.cacheDetails -I/p/work2/tmondrag/petsc-N5i8ny/config.headers -I/p/work2/tmondrag/petsc-N5i8ny/config.libraries -I/p/work2/tmondrag/petsc-N5i8ny/config.types -I/p/work2/tmondrag/petsc-N5i8ny/config.functions -fPIC -Wall -Wwrite-strings -Wno-strict-aliasing -Wno-unknown-pragmas -fstack-protector -fvisibility=hidden -g -O /p/work2/tmondrag/petsc-N5i8ny/config.functions/conftest.c
Possible ERROR while running compiler:
stderr:
/p/work2/tmondrag/petsc-N5i8ny/config.functions/conftest.c:13:6: warning: conflicting types for built-in function 'fork' [-Wbuiltin-declaration-mismatch]
 char fork();
      ^~~~
/p/work2/tmondrag/petsc-N5i8ny/config.functions/conftest.c:18:6: warning: conflicting types for built-in function 'strcasecmp' [-Wbuiltin-declaration-mismatch]
 char strcasecmp();
      ^~~~~~~~~~
Source:
#include "confdefs.h"
#include "conffix.h"
/* System header to define __stub macros and hopefully no other prototypes since they would conflict with our 'char funcname()' declaration below. */
#include <assert.h>
/* Override any gcc2 internal prototype to avoid an error. */
#ifdef __cplusplus
extern "C" {
#endif
/* We use char because int might match the return type of a gcc2 builtin and then its argument prototype would still apply. */
char fork();
char getpagesize();
char memalign();
char gethostbyname();
char readlink();
char strcasecmp();
char dlopen();
char drand48();
char socket();
char popen();
char getrusage();
char time();
#ifdef __cplusplus
}
#endif
int main() {
#if defined (__stub_fork) || defined (__stub___fork)
fork_will_always_fail_with_ENOSYS();
#else
fork();
#endif
#if defined (__stub_getpagesize) || defined (__stub___getpagesize)
getpagesize_will_always_fail_with_ENOSYS();
#else
getpagesize();
#endif
#if defined (__stub_memalign) || defined (__stub___memalign)
memalign_will_always_fail_with_ENOSYS();
#else
memalign();
#endif
#if defined (__stub_gethostbyname) || defined (__stub___gethostbyname)
gethostbyname_will_always_fail_with_ENOSYS();
#else
gethostbyname();
#endif
#if defined (__stub_readlink) || defined (__stub___readlink)
readlink_will_always_fail_with_ENOSYS();
#else
readlink();
#endif
#if defined (__stub_strcasecmp) || defined (__stub___strcasecmp)
strcasecmp_will_always_fail_with_ENOSYS();
#else
strcasecmp();
#endif
#if defined (__stub_dlopen) || defined (__stub___dlopen)
dlopen_will_always_fail_with_ENOSYS();
#else
dlopen();
#endif
#if defined (__stub_drand48) || defined (__stub___drand48)
drand48_will_always_fail_with_ENOSYS();
#else
drand48();
#endif
#if defined (__stub_socket) || defined (__stub___socket)
socket_will_always_fail_with_ENOSYS();
#else
socket();
#endif
#if defined (__stub_popen) || defined (__stub___popen)
popen_will_always_fail_with_ENOSYS();
#else
popen();
#endif
#if defined (__stub_getrusage) || defined (__stub___getrusage)
getrusage_will_always_fail_with_ENOSYS();
#else
getrusage();
#endif
#if defined (__stub_time) || defined (__stub___time)
time_will_always_fail_with_ENOSYS();
#else
time();
#endif
; return 0; }
Running Executable WITHOUT threads to time it out
Executing: mpicc -o /p/work2/tmondrag/petsc-N5i8ny/config.functions/conftest -fPIC -Wall -Wwrite-strings -Wno-strict-aliasing -Wno-unknown-pragmas -fstack-protector -fvisibility=hidden -g -O /p/work2/tmondrag/petsc-N5i8ny/config.functions/conftest.o -lstdc++ -ldl
Defined "HAVE_FORK" to "1"
Defined "HAVE_GETPAGESIZE" to "1"
Defined "HAVE_MEMALIGN" to "1"
Defined "HAVE_GETHOSTBYNAME" to "1"
Defined "HAVE_READLINK" to "1"
Defined "HAVE_STRCASECMP" to "1"
Defined "HAVE_DLOPEN" to "1"
Defined "HAVE_DRAND48" to "1"
Defined "HAVE_SOCKET" to "1"
Defined "HAVE_POPEN" to "1"
Defined "HAVE_GETRUSAGE" to "1"
Defined "HAVE_TIME" to "1"
Checking for functions [_access]
Running Executable WITHOUT threads to time it out
Executing: mpicc -c -o /p/work2/tmondrag/petsc-N5i8ny/config.functions/conftest.o -I/p/work2/tmondrag/petsc-N5i8ny/config.setCompilers -I/p/work2/tmondrag/petsc-N5i8ny/config.utilities.closure -I/p/work2/tmondrag/petsc-N5i8ny/config.compilers -I/p/work2/tmondrag/petsc-N5i8ny/config.utilities.cacheDetails -I/p/work2/tmondrag/petsc-N5i8ny/config.headers -I/p/work2/tmondrag/petsc-N5i8ny/config.libraries -I/p/work2/tmondrag/petsc-N5i8ny/config.types -I/p/work2/tmondrag/petsc-N5i8ny/config.functions -fPIC -Wall -Wwrite-strings -Wno-strict-aliasing -Wno-unknown-pragmas -fstack-protector -fvisibility=hidden -g -O /p/work2/tmondrag/petsc-N5i8ny/config.functions/conftest.c
Successful compile:
Source:
#include "confdefs.h"
#include "conffix.h"
/* System header to define __stub macros and hopefully no other prototypes since they would conflict with our 'char funcname()' declaration below. */
#include <assert.h>
/* Override any gcc2 internal prototype to avoid an error. */
#ifdef __cplusplus
extern "C" {
#endif
/* We use char because int might match the return type of a gcc2 builtin and then its argument prototype would still apply. */
char _access();
#ifdef __cplusplus
}
#endif
int main() {
#if defined (__stub__access) || defined (__stub____access)
_access_will_always_fail_with_ENOSYS();
#else
_access();
#endif
; return 0; }
Running Executable WITHOUT threads to time it out
Executing: mpicc -o /p/work2/tmondrag/petsc-N5i8ny/config.functions/conftest -fPIC -Wall -Wwrite-strings -Wno-strict-aliasing -Wno-unknown-pragmas -fstack-protector -fvisibility=hidden -g -O /p/work2/tmondrag/petsc-N5i8ny/config.functions/conftest.o -lstdc++ -ldl
Possible ERROR while running linker: exit code 1
stderr:
/usr/bin/ld: /p/work2/tmondrag/petsc-N5i8ny/config.functions/conftest.o: in function `main':
/p/work2/tmondrag/petsc-N5i8ny/config.functions/conftest.c:24: undefined reference to `_access'
collect2: error: ld returned 1 exit status
Checking for functions [snprintf]
Running Executable WITHOUT threads to time it out
Executing: mpicc -c -o /p/work2/tmondrag/petsc-N5i8ny/config.functions/conftest.o -I/p/work2/tmondrag/petsc-N5i8ny/config.setCompilers -I/p/work2/tmondrag/petsc-N5i8ny/config.utilities.closure -I/p/work2/tmondrag/petsc-N5i8ny/config.compilers -I/p/work2/tmondrag/petsc-N5i8ny/config.utilities.cacheDetails -I/p/work2/tmondrag/petsc-N5i8ny/config.headers -I/p/work2/tmondrag/petsc-N5i8ny/config.libraries -I/p/work2/tmondrag/petsc-N5i8ny/config.types -I/p/work2/tmondrag/petsc-N5i8ny/config.functions -fPIC -Wall -Wwrite-strings -Wno-strict-aliasing -Wno-unknown-pragmas -fstack-protector -fvisibility=hidden -g -O /p/work2/tmondrag/petsc-N5i8ny/config.functions/conftest.c
Possible ERROR while running compiler:
stderr:
/p/work2/tmondrag/petsc-N5i8ny/config.functions/conftest.c:13:6: warning: conflicting types for built-in function 'snprintf' [-Wbuiltin-declaration-mismatch]
 char snprintf();
      ^~~~~~~~
Source:
#include "confdefs.h"
#include "conffix.h"
/* System header to define __stub macros and hopefully no other prototypes since they would conflict with our 'char funcname()' declaration below. */
#include <assert.h>
/* Override any gcc2 internal prototype to avoid an error. */
#ifdef __cplusplus
extern "C" {
#endif
/* We use char because int might match the return type of a gcc2 builtin and then its argument prototype would still apply. */
char snprintf();
#ifdef __cplusplus
}
#endif
int main() {
#if defined (__stub_snprintf) || defined (__stub___snprintf)
snprintf_will_always_fail_with_ENOSYS();
#else
snprintf();
#endif
; return 0; }
Running Executable WITHOUT threads to time it out
Executing: mpicc -o /p/work2/tmondrag/petsc-N5i8ny/config.functions/conftest -fPIC -Wall -Wwrite-strings -Wno-strict-aliasing -Wno-unknown-pragmas -fstack-protector -fvisibility=hidden -g -O /p/work2/tmondrag/petsc-N5i8ny/config.functions/conftest.o -lstdc++ -ldl
Defined "HAVE_SNPRINTF" to "1"
Checking for functions [_getcwd]
Running Executable WITHOUT threads to time it out
Executing: mpicc -c -o /p/work2/tmondrag/petsc-N5i8ny/config.functions/conftest.o -I/p/work2/tmondrag/petsc-N5i8ny/config.setCompilers -I/p/work2/tmondrag/petsc-N5i8ny/config.utilities.closure -I/p/work2/tmondrag/petsc-N5i8ny/config.compilers -I/p/work2/tmondrag/petsc-N5i8ny/config.utilities.cacheDetails -I/p/work2/tmondrag/petsc-N5i8ny/config.headers -I/p/work2/tmondrag/petsc-N5i8ny/config.libraries -I/p/work2/tmondrag/petsc-N5i8ny/config.types -I/p/work2/tmondrag/petsc-N5i8ny/config.functions -fPIC -Wall -Wwrite-strings -Wno-strict-aliasing -Wno-unknown-pragmas -fstack-protector -fvisibility=hidden -g -O /p/work2/tmondrag/petsc-N5i8ny/config.functions/conftest.c
Successful compile:
Source:
#include "confdefs.h"
#include "conffix.h"
/* System header to define __stub macros and hopefully no other prototypes since they would conflict with our 'char funcname()' declaration below. */
#include <assert.h>
/* Override any gcc2 internal prototype to avoid an error. */
#ifdef __cplusplus
extern "C" {
#endif
/* We use char because int might match the return type of a gcc2 builtin and then its argument prototype would still apply. */
char _getcwd();
#ifdef __cplusplus
}
#endif
int main() {
#if defined (__stub__getcwd) || defined (__stub____getcwd)
_getcwd_will_always_fail_with_ENOSYS();
#else
_getcwd();
#endif
; return 0; }
Running Executable WITHOUT threads to time it out
Executing: mpicc -o /p/work2/tmondrag/petsc-N5i8ny/config.functions/conftest -fPIC -Wall -Wwrite-strings -Wno-strict-aliasing -Wno-unknown-pragmas -fstack-protector -fvisibility=hidden -g -O /p/work2/tmondrag/petsc-N5i8ny/config.functions/conftest.o -lstdc++ -ldl
Possible ERROR while running linker: exit code 1
stderr:
/usr/bin/ld: /p/work2/tmondrag/petsc-N5i8ny/config.functions/conftest.o: in function `main':
/p/work2/tmondrag/petsc-N5i8ny/config.functions/conftest.c:24: undefined reference to `_getcwd'
collect2: error: ld returned 1 exit status
Checking for functions [getwd]
Running Executable WITHOUT threads to time it out
Executing: mpicc -c -o /p/work2/tmondrag/petsc-N5i8ny/config.functions/conftest.o -I/p/work2/tmondrag/petsc-N5i8ny/config.setCompilers -I/p/work2/tmondrag/petsc-N5i8ny/config.utilities.closure -I/p/work2/tmondrag/petsc-N5i8ny/config.compilers -I/p/work2/tmondrag/petsc-N5i8ny/config.utilities.cacheDetails -I/p/work2/tmondrag/petsc-N5i8ny/config.headers -I/p/work2/tmondrag/petsc-N5i8ny/config.libraries -I/p/work2/tmondrag/petsc-N5i8ny/config.types -I/p/work2/tmondrag/petsc-N5i8ny/config.functions -fPIC -Wall -Wwrite-strings -Wno-strict-aliasing -Wno-unknown-pragmas -fstack-protector -fvisibility=hidden -g -O /p/work2/tmondrag/petsc-N5i8ny/config.functions/conftest.c
Successful compile:
Source:
#include "confdefs.h"
#include "conffix.h"
/* System header to define __stub macros and hopefully no other prototypes since they would conflict with our 'char funcname()' declaration below. */
#include <assert.h>
/* Override any gcc2 internal prototype to avoid an error. */
#ifdef __cplusplus
extern "C" {
#endif
/* We use char because int might match the return type of a gcc2 builtin and then its argument prototype would still apply. */
char getwd();
#ifdef __cplusplus
}
#endif
int main() {
#if defined (__stub_getwd) || defined (__stub___getwd)
getwd_will_always_fail_with_ENOSYS();
#else
getwd();
#endif
; return 0; }
Running Executable WITHOUT threads to time it out
Executing: mpicc -o /p/work2/tmondrag/petsc-N5i8ny/config.functions/conftest -fPIC -Wall -Wwrite-strings -Wno-strict-aliasing -Wno-unknown-pragmas -fstack-protector -fvisibility=hidden -g -O /p/work2/tmondrag/petsc-N5i8ny/config.functions/conftest.o -lstdc++ -ldl
Possible ERROR while running linker:
stderr:
/usr/bin/ld: /p/work2/tmondrag/petsc-N5i8ny/config.functions/conftest.o: in function `main':
/p/work2/tmondrag/petsc-N5i8ny/config.functions/conftest.c:24: warning: the `getwd' function is dangerous and should not be used.
Defined "HAVE_GETWD" to "1"
Checking for functions [_lseek]
Running Executable WITHOUT threads to time it out
Executing: mpicc -c -o /p/work2/tmondrag/petsc-N5i8ny/config.functions/conftest.o -I/p/work2/tmondrag/petsc-N5i8ny/config.setCompilers -I/p/work2/tmondrag/petsc-N5i8ny/config.utilities.closure -I/p/work2/tmondrag/petsc-N5i8ny/config.compilers -I/p/work2/tmondrag/petsc-N5i8ny/config.utilities.cacheDetails -I/p/work2/tmondrag/petsc-N5i8ny/config.headers -I/p/work2/tmondrag/petsc-N5i8ny/config.libraries -I/p/work2/tmondrag/petsc-N5i8ny/config.types -I/p/work2/tmondrag/petsc-N5i8ny/config.functions -fPIC -Wall -Wwrite-strings -Wno-strict-aliasing -Wno-unknown-pragmas -fstack-protector -fvisibility=hidden -g -O /p/work2/tmondrag/petsc-N5i8ny/config.functions/conftest.c
Successful compile:
Source:
#include "confdefs.h"
#include "conffix.h"
/* System header to define __stub macros and hopefully no other prototypes since they would conflict with our 'char funcname()' declaration below. */
#include <assert.h>
/* Override any gcc2 internal prototype to avoid an error. */
#ifdef __cplusplus
extern "C" {
#endif
/* We use char because int might match the return type of a gcc2 builtin and then its argument prototype would still apply. */
char _lseek();
#ifdef __cplusplus
}
#endif
int main() {
#if defined (__stub__lseek) || defined (__stub____lseek)
_lseek_will_always_fail_with_ENOSYS();
#else
_lseek();
#endif
; return 0; }
Running Executable WITHOUT threads to time it out
Executing: mpicc -o /p/work2/tmondrag/petsc-N5i8ny/config.functions/conftest -fPIC -Wall -Wwrite-strings -Wno-strict-aliasing -Wno-unknown-pragmas -fstack-protector -fvisibility=hidden -g -O /p/work2/tmondrag/petsc-N5i8ny/config.functions/conftest.o -lstdc++ -ldl
Possible ERROR while running linker: exit code 1
stderr:
/usr/bin/ld: /p/work2/tmondrag/petsc-N5i8ny/config.functions/conftest.o: in function `main':
/p/work2/tmondrag/petsc-N5i8ny/config.functions/conftest.c:24: undefined reference to `_lseek'
collect2: error: ld returned 1 exit status
Checking for functions [sleep]
Running Executable WITHOUT threads to time it out
Executing: mpicc -c -o /p/work2/tmondrag/petsc-N5i8ny/config.functions/conftest.o -I/p/work2/tmondrag/petsc-N5i8ny/config.setCompilers -I/p/work2/tmondrag/petsc-N5i8ny/config.utilities.closure -I/p/work2/tmondrag/petsc-N5i8ny/config.compilers -I/p/work2/tmondrag/petsc-N5i8ny/config.utilities.cacheDetails -I/p/work2/tmondrag/petsc-N5i8ny/config.headers -I/p/work2/tmondrag/petsc-N5i8ny/config.libraries -I/p/work2/tmondrag/petsc-N5i8ny/config.types -I/p/work2/tmondrag/petsc-N5i8ny/config.functions -fPIC -Wall -Wwrite-strings -Wno-strict-aliasing -Wno-unknown-pragmas -fstack-protector -fvisibility=hidden -g -O /p/work2/tmondrag/petsc-N5i8ny/config.functions/conftest.c
Successful compile:
Source:
#include "confdefs.h"
#include "conffix.h"
/* System header to define __stub macros and hopefully no other prototypes since they would conflict with our 'char funcname()' declaration below. */
#include <assert.h>
/* Override any gcc2 internal prototype to avoid an error. */
#ifdef __cplusplus
extern "C" {
#endif
/* We use char because int might match the return type of a gcc2 builtin and then its argument prototype would still apply. */
char sleep();
#ifdef __cplusplus
}
#endif
int main() {
#if defined (__stub_sleep) || defined (__stub___sleep)
sleep_will_always_fail_with_ENOSYS();
#else
sleep();
#endif
; return 0; }
Running Executable WITHOUT threads to time it out
Executing: mpicc -o /p/work2/tmondrag/petsc-N5i8ny/config.functions/conftest -fPIC -Wall -Wwrite-strings -Wno-strict-aliasing -Wno-unknown-pragmas -fstack-protector -fvisibility=hidden -g -O /p/work2/tmondrag/petsc-N5i8ny/config.functions/conftest.o -lstdc++ -ldl
Defined "HAVE_SLEEP" to "1"
Checking for functions [_sleep]
Running Executable WITHOUT threads to time it out
Executing: mpicc -c -o /p/work2/tmondrag/petsc-N5i8ny/config.functions/conftest.o -I/p/work2/tmondrag/petsc-N5i8ny/config.setCompilers -I/p/work2/tmondrag/petsc-N5i8ny/config.utilities.closure -I/p/work2/tmondrag/petsc-N5i8ny/config.compilers -I/p/work2/tmondrag/petsc-N5i8ny/config.utilities.cacheDetails -I/p/work2/tmondrag/petsc-N5i8ny/config.headers -I/p/work2/tmondrag/petsc-N5i8ny/config.libraries -I/p/work2/tmondrag/petsc-N5i8ny/config.types -I/p/work2/tmondrag/petsc-N5i8ny/config.functions -fPIC -Wall -Wwrite-strings -Wno-strict-aliasing -Wno-unknown-pragmas -fstack-protector -fvisibility=hidden -g -O /p/work2/tmondrag/petsc-N5i8ny/config.functions/conftest.c
Successful compile:
Source:
#include "confdefs.h"
#include "conffix.h"
/* System header to define __stub macros and hopefully no other prototypes since they would conflict with our 'char funcname()' declaration below.
*/
#include <assert.h>
/* Override any gcc2 internal prototype to avoid an error. */
#ifdef __cplusplus
extern "C" {
#endif
/* We use char because int might match the return type of a gcc2 builtin and then its argument prototype would still apply. */
char _sleep();
#ifdef __cplusplus
}
#endif
int main() {
#if defined (__stub__sleep) || defined (__stub____sleep)
_sleep_will_always_fail_with_ENOSYS();
#else
_sleep();
#endif
; return 0; }
Running Executable WITHOUT threads to time it out
Executing: mpicc -o /p/work2/tmondrag/petsc-N5i8ny/config.functions/conftest -fPIC -Wall -Wwrite-strings -Wno-strict-aliasing -Wno-unknown-pragmas -fstack-protector -fvisibility=hidden -g -O /p/work2/tmondrag/petsc-N5i8ny/config.functions/conftest.o -lstdc++ -ldl
Possible ERROR while running linker: exit code 1
stderr:
/usr/bin/ld: /p/work2/tmondrag/petsc-N5i8ny/config.functions/conftest.o: in function `main':
/p/work2/tmondrag/petsc-N5i8ny/config.functions/conftest.c:24: undefined reference to `_sleep'
collect2: error: ld returned 1 exit status
Checking for functions [lseek]
Running Executable WITHOUT threads to time it out
Executing: mpicc -c -o /p/work2/tmondrag/petsc-N5i8ny/config.functions/conftest.o -I/p/work2/tmondrag/petsc-N5i8ny/config.setCompilers -I/p/work2/tmondrag/petsc-N5i8ny/config.utilities.closure -I/p/work2/tmondrag/petsc-N5i8ny/config.compilers -I/p/work2/tmondrag/petsc-N5i8ny/config.utilities.cacheDetails -I/p/work2/tmondrag/petsc-N5i8ny/config.headers -I/p/work2/tmondrag/petsc-N5i8ny/config.libraries -I/p/work2/tmondrag/petsc-N5i8ny/config.types -I/p/work2/tmondrag/petsc-N5i8ny/config.functions -fPIC -Wall -Wwrite-strings -Wno-strict-aliasing -Wno-unknown-pragmas -fstack-protector -fvisibility=hidden -g -O /p/work2/tmondrag/petsc-N5i8ny/config.functions/conftest.c
Successful compile:
Source:
#include "confdefs.h"
#include "conffix.h"
/* System header to define __stub macros and hopefully no other prototypes since they would conflict with our 'char funcname()' declaration below. */
#include <assert.h>
/* Override any gcc2 internal prototype to avoid an error. */
#ifdef __cplusplus
extern "C" {
#endif
/* We use char because int might match the return type of a gcc2 builtin and then its argument prototype would still apply. */
char lseek();
#ifdef __cplusplus
}
#endif
int main() {
#if defined (__stub_lseek) || defined (__stub___lseek)
lseek_will_always_fail_with_ENOSYS();
#else
lseek();
#endif
; return 0; }
Running Executable WITHOUT threads to time it out
Executing: mpicc -o /p/work2/tmondrag/petsc-N5i8ny/config.functions/conftest -fPIC -Wall -Wwrite-strings -Wno-strict-aliasing -Wno-unknown-pragmas -fstack-protector -fvisibility=hidden -g -O /p/work2/tmondrag/petsc-N5i8ny/config.functions/conftest.o -lstdc++ -ldl
Defined "HAVE_LSEEK" to "1"
Checking for functions [access]
Running Executable WITHOUT threads to time it out
Executing: mpicc -c -o /p/work2/tmondrag/petsc-N5i8ny/config.functions/conftest.o -I/p/work2/tmondrag/petsc-N5i8ny/config.setCompilers -I/p/work2/tmondrag/petsc-N5i8ny/config.utilities.closure -I/p/work2/tmondrag/petsc-N5i8ny/config.compilers -I/p/work2/tmondrag/petsc-N5i8ny/config.utilities.cacheDetails -I/p/work2/tmondrag/petsc-N5i8ny/config.headers -I/p/work2/tmondrag/petsc-N5i8ny/config.libraries -I/p/work2/tmondrag/petsc-N5i8ny/config.types -I/p/work2/tmondrag/petsc-N5i8ny/config.functions -fPIC -Wall -Wwrite-strings -Wno-strict-aliasing -Wno-unknown-pragmas -fstack-protector -fvisibility=hidden -g -O /p/work2/tmondrag/petsc-N5i8ny/config.functions/conftest.c
Successful compile:
Source:
#include "confdefs.h"
#include "conffix.h"
/* System header to define __stub macros and hopefully no other prototypes since they would conflict with our 'char funcname()' declaration below. */
#include <assert.h>
/* Override any gcc2 internal prototype to avoid an error. */
#ifdef __cplusplus
extern "C" {
#endif
/* We use char because int might match the return type of a gcc2 builtin and then its argument prototype would still apply. */
char access();
#ifdef __cplusplus
}
#endif
int main() {
#if defined (__stub_access) || defined (__stub___access)
access_will_always_fail_with_ENOSYS();
#else
access();
#endif
; return 0; }
Running Executable WITHOUT threads to time it out
Executing: mpicc -o /p/work2/tmondrag/petsc-N5i8ny/config.functions/conftest -fPIC -Wall -Wwrite-strings -Wno-strict-aliasing -Wno-unknown-pragmas -fstack-protector -fvisibility=hidden -g -O /p/work2/tmondrag/petsc-N5i8ny/config.functions/conftest.o -lstdc++ -ldl
Defined "HAVE_ACCESS" to "1"
Checking for functions [_snprintf]
Running Executable WITHOUT threads to time it out
Executing: mpicc -c -o /p/work2/tmondrag/petsc-N5i8ny/config.functions/conftest.o -I/p/work2/tmondrag/petsc-N5i8ny/config.setCompilers -I/p/work2/tmondrag/petsc-N5i8ny/config.utilities.closure -I/p/work2/tmondrag/petsc-N5i8ny/config.compilers -I/p/work2/tmondrag/petsc-N5i8ny/config.utilities.cacheDetails -I/p/work2/tmondrag/petsc-N5i8ny/config.headers -I/p/work2/tmondrag/petsc-N5i8ny/config.libraries -I/p/work2/tmondrag/petsc-N5i8ny/config.types -I/p/work2/tmondrag/petsc-N5i8ny/config.functions -fPIC -Wall -Wwrite-strings -Wno-strict-aliasing -Wno-unknown-pragmas -fstack-protector -fvisibility=hidden -g -O /p/work2/tmondrag/petsc-N5i8ny/config.functions/conftest.c
Successful compile:
Source:
#include "confdefs.h"
#include "conffix.h"
/* System header to define __stub macros and hopefully no other prototypes since they would conflict with our 'char funcname()' declaration below. */
#include <assert.h>
/* Override any gcc2 internal prototype to avoid an error. */
#ifdef __cplusplus
extern "C" {
#endif
/* We use char because int might match the return type of a gcc2 builtin and then its argument prototype would still apply. */
char _snprintf();
#ifdef __cplusplus
}
#endif
int main() {
#if defined (__stub__snprintf) || defined (__stub____snprintf)
_snprintf_will_always_fail_with_ENOSYS();
#else
_snprintf();
#endif
; return 0; }
Running Executable WITHOUT threads to time it out
Executing: mpicc -o /p/work2/tmondrag/petsc-N5i8ny/config.functions/conftest -fPIC -Wall -Wwrite-strings -Wno-strict-aliasing -Wno-unknown-pragmas -fstack-protector -fvisibility=hidden -g -O /p/work2/tmondrag/petsc-N5i8ny/config.functions/conftest.o -lstdc++ -ldl
Possible ERROR while running linker: exit code 1
stderr:
/usr/bin/ld: /p/work2/tmondrag/petsc-N5i8ny/config.functions/conftest.o: in function `main':
/p/work2/tmondrag/petsc-N5i8ny/config.functions/conftest.c:24: undefined reference to `_snprintf'
collect2: error: ld returned 1 exit status
Checking for functions [sbreak]
Running Executable WITHOUT threads to time it out
Executing: mpicc -c -o /p/work2/tmondrag/petsc-N5i8ny/config.functions/conftest.o -I/p/work2/tmondrag/petsc-N5i8ny/config.setCompilers -I/p/work2/tmondrag/petsc-N5i8ny/config.utilities.closure -I/p/work2/tmondrag/petsc-N5i8ny/config.compilers -I/p/work2/tmondrag/petsc-N5i8ny/config.utilities.cacheDetails -I/p/work2/tmondrag/petsc-N5i8ny/config.headers -I/p/work2/tmondrag/petsc-N5i8ny/config.libraries -I/p/work2/tmondrag/petsc-N5i8ny/config.types -I/p/work2/tmondrag/petsc-N5i8ny/config.functions -fPIC -Wall -Wwrite-strings -Wno-strict-aliasing -Wno-unknown-pragmas -fstack-protector -fvisibility=hidden -g -O /p/work2/tmondrag/petsc-N5i8ny/config.functions/conftest.c
Successful compile:
Source:
#include "confdefs.h"
#include "conffix.h"
/* System header to define __stub macros and hopefully no other prototypes since they would conflict with our 'char funcname()' declaration below. */
#include <assert.h>
/* Override any gcc2 internal prototype to avoid an error. */
#ifdef __cplusplus
extern "C" {
#endif
/* We use char because int might match the return type of a gcc2 builtin and then its argument prototype would still apply. */
char sbreak();
#ifdef __cplusplus
}
#endif
int main() {
#if defined (__stub_sbreak) || defined (__stub___sbreak)
sbreak_will_always_fail_with_ENOSYS();
#else
sbreak();
#endif
; return 0; }
Running Executable WITHOUT threads to time it out
Executing: mpicc -o /p/work2/tmondrag/petsc-N5i8ny/config.functions/conftest -fPIC -Wall -Wwrite-strings -Wno-strict-aliasing -Wno-unknown-pragmas -fstack-protector -fvisibility=hidden -g -O /p/work2/tmondrag/petsc-N5i8ny/config.functions/conftest.o -lstdc++ -ldl
Possible ERROR while running linker: exit code 1
stderr:
/usr/bin/ld: /p/work2/tmondrag/petsc-N5i8ny/config.functions/conftest.o: in function `main':
/p/work2/tmondrag/petsc-N5i8ny/config.functions/conftest.c:24: undefined reference to `sbreak'
collect2: error: ld returned 1 exit status
Checking for functions [getcwd]
Running Executable WITHOUT threads to time it out
Executing: mpicc -c -o /p/work2/tmondrag/petsc-N5i8ny/config.functions/conftest.o -I/p/work2/tmondrag/petsc-N5i8ny/config.setCompilers -I/p/work2/tmondrag/petsc-N5i8ny/config.utilities.closure -I/p/work2/tmondrag/petsc-N5i8ny/config.compilers -I/p/work2/tmondrag/petsc-N5i8ny/config.utilities.cacheDetails -I/p/work2/tmondrag/petsc-N5i8ny/config.headers -I/p/work2/tmondrag/petsc-N5i8ny/config.libraries -I/p/work2/tmondrag/petsc-N5i8ny/config.types -I/p/work2/tmondrag/petsc-N5i8ny/config.functions -fPIC -Wall -Wwrite-strings -Wno-strict-aliasing -Wno-unknown-pragmas -fstack-protector -fvisibility=hidden -g -O /p/work2/tmondrag/petsc-N5i8ny/config.functions/conftest.c
Successful compile:
Source:
#include "confdefs.h"
#include "conffix.h"
/* System header to define __stub macros and hopefully no other prototypes since they would conflict with our 'char funcname()' declaration below. */
#include <assert.h>
/* Override any gcc2 internal prototype to avoid an error. */
#ifdef __cplusplus
extern "C" {
#endif
/* We use char because int might match the return type of a gcc2 builtin and then its argument prototype would still apply. */
char getcwd();
#ifdef __cplusplus
}
#endif
int main() {
#if defined (__stub_getcwd) || defined (__stub___getcwd)
getcwd_will_always_fail_with_ENOSYS();
#else
getcwd();
#endif
; return 0; }
Running Executable WITHOUT threads to time it out
Executing: mpicc -o /p/work2/tmondrag/petsc-N5i8ny/config.functions/conftest -fPIC -Wall -Wwrite-strings -Wno-strict-aliasing -Wno-unknown-pragmas -fstack-protector -fvisibility=hidden -g -O /p/work2/tmondrag/petsc-N5i8ny/config.functions/conftest.o -lstdc++ -ldl
Defined "HAVE_GETCWD" to "1"
Checking for functions [_set_output_format]
Running Executable WITHOUT threads to time it out
Executing: mpicc -c -o /p/work2/tmondrag/petsc-N5i8ny/config.functions/conftest.o -I/p/work2/tmondrag/petsc-N5i8ny/config.setCompilers -I/p/work2/tmondrag/petsc-N5i8ny/config.utilities.closure -I/p/work2/tmondrag/petsc-N5i8ny/config.compilers -I/p/work2/tmondrag/petsc-N5i8ny/config.utilities.cacheDetails -I/p/work2/tmondrag/petsc-N5i8ny/config.headers -I/p/work2/tmondrag/petsc-N5i8ny/config.libraries -I/p/work2/tmondrag/petsc-N5i8ny/config.types -I/p/work2/tmondrag/petsc-N5i8ny/config.functions -fPIC -Wall -Wwrite-strings -Wno-strict-aliasing -Wno-unknown-pragmas -fstack-protector -fvisibility=hidden -g -O /p/work2/tmondrag/petsc-N5i8ny/config.functions/conftest.c
Successful compile:
Source:
#include "confdefs.h"
#include "conffix.h"
/* System header to define __stub macros and hopefully no other prototypes since they would conflict with our 'char funcname()' declaration below.
*/ #include /* Override any gcc2 internal prototype to avoid an error. */ #ifdef __cplusplus extern "C" { #endif /* We use char because int might match the return type of a gcc2 builtin and then its argument prototype would still apply. */ char _set_output_format(); #ifdef __cplusplus } #endif int main() { #if defined (__stub__set_output_format) || defined (__stub____set_output_format) _set_output_format_will_always_fail_with_ENOSYS(); #else _set_output_format(); #endif ; return 0; } Running Executable WITHOUT threads to time it out Executing: mpicc -o /p/work2/tmondrag/petsc-N5i8ny/config.functions/conftest -fPIC -Wall -Wwrite-strings -Wno-strict-aliasing -Wno-unknown-pragmas -fstack-protector -fvisibility=hidden -g -O /p/work2/tmondrag/petsc-N5i8ny/config.functions/conftest.o -lstdc++ -ldl Possible ERROR while running linker: exit code 1 stderr: /usr/bin/ld: /p/work2/tmondrag/petsc-N5i8ny/config.functions/conftest.o: in function `main': /p/work2/tmondrag/petsc-N5i8ny/config.functions/conftest.c:24: undefined reference to `_set_output_format' collect2: error: ld returned 1 exit status Checking for functions [PXFGETARG] Running Executable WITHOUT threads to time it out Executing: mpicc -c -o /p/work2/tmondrag/petsc-N5i8ny/config.functions/conftest.o -I/p/work2/tmondrag/petsc-N5i8ny/config.setCompilers -I/p/work2/tmondrag/petsc-N5i8ny/config.utilities.closure -I/p/work2/tmondrag/petsc-N5i8ny/config.compilers -I/p/work2/tmondrag/petsc-N5i8ny/config.utilities.cacheDetails -I/p/work2/tmondrag/petsc-N5i8ny/config.headers -I/p/work2/tmondrag/petsc-N5i8ny/config.libraries -I/p/work2/tmondrag/petsc-N5i8ny/config.types -I/p/work2/tmondrag/petsc-N5i8ny/config.functions -fPIC -Wall -Wwrite-strings -Wno-strict-aliasing -Wno-unknown-pragmas -fstack-protector -fvisibility=hidden -g -O /p/work2/tmondrag/petsc-N5i8ny/config.functions/conftest.c Successful compile: Source: #include "confdefs.h" #include "conffix.h" /* System header to define __stub macros and hopefully no other prototypes since they would conflict with our 'char funcname()' declaration below. */ #include /* Override any gcc2 internal prototype to avoid an error. */ #ifdef __cplusplus extern "C" { #endif /* We use char because int might match the return type of a gcc2 builtin and then its argument prototype would still apply. 
*/ char PXFGETARG(); #ifdef __cplusplus } #endif int main() { #if defined (__stub_PXFGETARG) || defined (__stub___PXFGETARG) PXFGETARG_will_always_fail_with_ENOSYS(); #else PXFGETARG(); #endif ; return 0; } Running Executable WITHOUT threads to time it out Executing: mpicc -o /p/work2/tmondrag/petsc-N5i8ny/config.functions/conftest -fPIC -Wall -Wwrite-strings -Wno-strict-aliasing -Wno-unknown-pragmas -fstack-protector -fvisibility=hidden -g -O /p/work2/tmondrag/petsc-N5i8ny/config.functions/conftest.o -lstdc++ -ldl Possible ERROR while running linker: exit code 1 stderr: /usr/bin/ld: /p/work2/tmondrag/petsc-N5i8ny/config.functions/conftest.o: in function `main': /p/work2/tmondrag/petsc-N5i8ny/config.functions/conftest.c:24: undefined reference to `PXFGETARG' collect2: error: ld returned 1 exit status Checking for functions [_mkdir] Running Executable WITHOUT threads to time it out Executing: mpicc -c -o /p/work2/tmondrag/petsc-N5i8ny/config.functions/conftest.o -I/p/work2/tmondrag/petsc-N5i8ny/config.setCompilers -I/p/work2/tmondrag/petsc-N5i8ny/config.utilities.closure -I/p/work2/tmondrag/petsc-N5i8ny/config.compilers -I/p/work2/tmondrag/petsc-N5i8ny/config.utilities.cacheDetails -I/p/work2/tmondrag/petsc-N5i8ny/config.headers -I/p/work2/tmondrag/petsc-N5i8ny/config.libraries -I/p/work2/tmondrag/petsc-N5i8ny/config.types -I/p/work2/tmondrag/petsc-N5i8ny/config.functions -fPIC -Wall -Wwrite-strings -Wno-strict-aliasing -Wno-unknown-pragmas -fstack-protector -fvisibility=hidden -g -O /p/work2/tmondrag/petsc-N5i8ny/config.functions/conftest.c Successful compile: Source: #include "confdefs.h" #include "conffix.h" /* System header to define __stub macros and hopefully no other prototypes since they would conflict with our 'char funcname()' declaration below. */ #include /* Override any gcc2 internal prototype to avoid an error. */ #ifdef __cplusplus extern "C" { #endif /* We use char because int might match the return type of a gcc2 builtin and then its argument prototype would still apply. 
*/ char _mkdir(); #ifdef __cplusplus } #endif int main() { #if defined (__stub__mkdir) || defined (__stub____mkdir) _mkdir_will_always_fail_with_ENOSYS(); #else _mkdir(); #endif ; return 0; } Running Executable WITHOUT threads to time it out Executing: mpicc -o /p/work2/tmondrag/petsc-N5i8ny/config.functions/conftest -fPIC -Wall -Wwrite-strings -Wno-strict-aliasing -Wno-unknown-pragmas -fstack-protector -fvisibility=hidden -g -O /p/work2/tmondrag/petsc-N5i8ny/config.functions/conftest.o -lstdc++ -ldl Possible ERROR while running linker: exit code 1 stderr: /usr/bin/ld: /p/work2/tmondrag/petsc-N5i8ny/config.functions/conftest.o: in function `main': /p/work2/tmondrag/petsc-N5i8ny/config.functions/conftest.c:24: undefined reference to `_mkdir' collect2: error: ld returned 1 exit status Checking for functions [stricmp] Running Executable WITHOUT threads to time it out Executing: mpicc -c -o /p/work2/tmondrag/petsc-N5i8ny/config.functions/conftest.o -I/p/work2/tmondrag/petsc-N5i8ny/config.setCompilers -I/p/work2/tmondrag/petsc-N5i8ny/config.utilities.closure -I/p/work2/tmondrag/petsc-N5i8ny/config.compilers -I/p/work2/tmondrag/petsc-N5i8ny/config.utilities.cacheDetails -I/p/work2/tmondrag/petsc-N5i8ny/config.headers -I/p/work2/tmondrag/petsc-N5i8ny/config.libraries -I/p/work2/tmondrag/petsc-N5i8ny/config.types -I/p/work2/tmondrag/petsc-N5i8ny/config.functions -fPIC -Wall -Wwrite-strings -Wno-strict-aliasing -Wno-unknown-pragmas -fstack-protector -fvisibility=hidden -g -O /p/work2/tmondrag/petsc-N5i8ny/config.functions/conftest.c Successful compile: Source: #include "confdefs.h" #include "conffix.h" /* System header to define __stub macros and hopefully no other prototypes since they would conflict with our 'char funcname()' declaration below. */ #include /* Override any gcc2 internal prototype to avoid an error. */ #ifdef __cplusplus extern "C" { #endif /* We use char because int might match the return type of a gcc2 builtin and then its argument prototype would still apply. 
*/ char stricmp(); #ifdef __cplusplus } #endif int main() { #if defined (__stub_stricmp) || defined (__stub___stricmp) stricmp_will_always_fail_with_ENOSYS(); #else stricmp(); #endif ; return 0; } Running Executable WITHOUT threads to time it out Executing: mpicc -o /p/work2/tmondrag/petsc-N5i8ny/config.functions/conftest -fPIC -Wall -Wwrite-strings -Wno-strict-aliasing -Wno-unknown-pragmas -fstack-protector -fvisibility=hidden -g -O /p/work2/tmondrag/petsc-N5i8ny/config.functions/conftest.o -lstdc++ -ldl Possible ERROR while running linker: exit code 1 stderr: /usr/bin/ld: /p/work2/tmondrag/petsc-N5i8ny/config.functions/conftest.o: in function `main': /p/work2/tmondrag/petsc-N5i8ny/config.functions/conftest.c:24: undefined reference to `stricmp' collect2: error: ld returned 1 exit status ================================================================================ TEST checkMmap from config.functions(/p/work2/tmondrag/moose/petsc/config/BuildSystem/config/functions.py:179) TESTING: checkMmap from config.functions(/p/work2/tmondrag/moose/petsc/config/BuildSystem/config/functions.py:179) Check for functional mmap() to allocate shared memory and define HAVE_MMAP Running Executable WITHOUT threads to time it out Executing: mpicc -c -o /p/work2/tmondrag/petsc-N5i8ny/config.functions/conftest.o -I/p/work2/tmondrag/petsc-N5i8ny/config.setCompilers -I/p/work2/tmondrag/petsc-N5i8ny/config.utilities.closure -I/p/work2/tmondrag/petsc-N5i8ny/config.compilers -I/p/work2/tmondrag/petsc-N5i8ny/config.utilities.cacheDetails -I/p/work2/tmondrag/petsc-N5i8ny/config.headers -I/p/work2/tmondrag/petsc-N5i8ny/config.libraries -I/p/work2/tmondrag/petsc-N5i8ny/config.types -I/p/work2/tmondrag/petsc-N5i8ny/config.functions -fPIC -Wall -Wwrite-strings -Wno-strict-aliasing -Wno-unknown-pragmas -fstack-protector -fvisibility=hidden -g -O /p/work2/tmondrag/petsc-N5i8ny/config.functions/conftest.c Successful compile: Source: #include "confdefs.h" #include "conffix.h" #include #include #include #include int main() { int fd; fd=open("/tmp/file",O_RDWR); mmap((void*)0,100,PROT_READ|PROT_WRITE,MAP_SHARED,fd,0); ; return 0; } Running Executable WITHOUT threads to time it out Executing: mpicc -o /p/work2/tmondrag/petsc-N5i8ny/config.functions/conftest -fPIC -Wall -Wwrite-strings -Wno-strict-aliasing -Wno-unknown-pragmas -fstack-protector -fvisibility=hidden -g -O /p/work2/tmondrag/petsc-N5i8ny/config.functions/conftest.o -lstdc++ -ldl Defined "HAVE_MMAP" to "1" child config.functions 4.459111 ================================================================================ TEST configureMemorySize from config.utilities.getResidentSetSize(/p/work2/tmondrag/moose/petsc/config/BuildSystem/config/utilities/getResidentSetSize.py:31) TESTING: configureMemorySize from config.utilities.getResidentSetSize(/p/work2/tmondrag/moose/petsc/config/BuildSystem/config/utilities/getResidentSetSize.py:31) Try to determine how to measure the memory usage child config.utilities.getResidentSetSize 0.000504 ================================================================================ TEST configureFortranCommandLine from config.utilities.fortranCommandLine(/p/work2/tmondrag/moose/petsc/config/BuildSystem/config/utilities/fortranCommandLine.py:27) TESTING: configureFortranCommandLine from config.utilities.fortranCommandLine(/p/work2/tmondrag/moose/petsc/config/BuildSystem/config/utilities/fortranCommandLine.py:27) Check for the mechanism to retrieve command line arguments in Fortran Checking for functions [] in library [''] [] Running 
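Every probe above follows the same autoconf-style pattern: declare the symbol with a deliberately bogus `char funcname();` prototype, call it from main(), and let the linker decide. A `Defined "HAVE_..." to "1"` line means the link resolved the symbol; an `undefined reference` means the function simply is not in the libraries searched, which is expected here for Windows CRT names like _mkdir, _snprintf, and stricmp on a Linux system. A minimal standalone version of the probe (a sketch for illustration, not PETSc's actual BuildSystem code) looks like:

  /* conftest-sketch.c: link-time existence probe for getcwd().
     Only the symbol name matters; the bogus prototype avoids
     needing the real header at compile time. */
  char getcwd();
  int main() {
    getcwd();   /* if this links, the symbol exists in libc */
    return 0;
  }

Building it the way configure does (e.g. mpicc -o conftest conftest-sketch.c) is the whole test: exit status 0 from the linker means "present". The resulting binary is never executed, so the bogus call is harmless.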
================================================================================
TEST checkMmap from config.functions(/p/work2/tmondrag/moose/petsc/config/BuildSystem/config/functions.py:179)
TESTING: checkMmap from config.functions(/p/work2/tmondrag/moose/petsc/config/BuildSystem/config/functions.py:179)
Check for functional mmap() to allocate shared memory and define HAVE_MMAP
Running Executable WITHOUT threads to time it out
Executing: mpicc -c -o /p/work2/tmondrag/petsc-N5i8ny/config.functions/conftest.o -I/p/work2/tmondrag/petsc-N5i8ny/config.setCompilers -I/p/work2/tmondrag/petsc-N5i8ny/config.utilities.closure -I/p/work2/tmondrag/petsc-N5i8ny/config.compilers -I/p/work2/tmondrag/petsc-N5i8ny/config.utilities.cacheDetails -I/p/work2/tmondrag/petsc-N5i8ny/config.headers -I/p/work2/tmondrag/petsc-N5i8ny/config.libraries -I/p/work2/tmondrag/petsc-N5i8ny/config.types -I/p/work2/tmondrag/petsc-N5i8ny/config.functions -fPIC -Wall -Wwrite-strings -Wno-strict-aliasing -Wno-unknown-pragmas -fstack-protector -fvisibility=hidden -g -O /p/work2/tmondrag/petsc-N5i8ny/config.functions/conftest.c
Successful compile: Source:
#include "confdefs.h"
#include "conffix.h"
#include <sys/mman.h>
#include <sys/types.h>
#include <sys/stat.h>
#include <fcntl.h>
int main() {
int fd;
fd=open("/tmp/file",O_RDWR);
mmap((void*)0,100,PROT_READ|PROT_WRITE,MAP_SHARED,fd,0);
;
return 0;
}
Running Executable WITHOUT threads to time it out
Executing: mpicc -o /p/work2/tmondrag/petsc-N5i8ny/config.functions/conftest -fPIC -Wall -Wwrite-strings -Wno-strict-aliasing -Wno-unknown-pragmas -fstack-protector -fvisibility=hidden -g -O /p/work2/tmondrag/petsc-N5i8ny/config.functions/conftest.o -lstdc++ -ldl
Defined "HAVE_MMAP" to "1"
child config.functions 4.459111
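This probe, too, only has to compile and link; configure never executes it, so the open() of /tmp/file is allowed to fail at run time. For reference, a self-contained, error-checked version of what the conftest exercises might look like this (a sketch; /tmp/file and the 100-byte length are just the values the probe happens to use):

  #include <sys/mman.h>
  #include <sys/types.h>
  #include <sys/stat.h>
  #include <fcntl.h>
  #include <stdio.h>

  int main(void) {
    int fd = open("/tmp/file", O_RDWR);   /* the probe assumes the file exists */
    if (fd < 0) { perror("open"); return 1; }
    /* Map 100 bytes of the file as shared, writable memory. */
    void *p = mmap(NULL, 100, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);
    if (p == MAP_FAILED) { perror("mmap"); return 1; }
    return 0;
  }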
================================================================================
TEST configureMemorySize from config.utilities.getResidentSetSize(/p/work2/tmondrag/moose/petsc/config/BuildSystem/config/utilities/getResidentSetSize.py:31)
TESTING: configureMemorySize from config.utilities.getResidentSetSize(/p/work2/tmondrag/moose/petsc/config/BuildSystem/config/utilities/getResidentSetSize.py:31)
Try to determine how to measure the memory usage
child config.utilities.getResidentSetSize 0.000504
================================================================================
TEST configureFortranCommandLine from config.utilities.fortranCommandLine(/p/work2/tmondrag/moose/petsc/config/BuildSystem/config/utilities/fortranCommandLine.py:27)
TESTING: configureFortranCommandLine from config.utilities.fortranCommandLine(/p/work2/tmondrag/moose/petsc/config/BuildSystem/config/utilities/fortranCommandLine.py:27)
Check for the mechanism to retrieve command line arguments in Fortran
Checking for functions [] in library [''] []
Running Executable WITHOUT threads to time it out
Executing: mpif90 -c -o /p/work2/tmondrag/petsc-N5i8ny/config.libraries/conftest.o -I/p/work2/tmondrag/petsc-N5i8ny/config.compilers -I/p/work2/tmondrag/petsc-N5i8ny/config.types -I/p/work2/tmondrag/petsc-N5i8ny/config.setCompilers -I/p/work2/tmondrag/petsc-N5i8ny/config.compilersFortran -I/p/work2/tmondrag/petsc-N5i8ny/config.libraries -fPIC -Wall -ffree-line-length-0 -Wno-unused-dummy-argument -g -O /p/work2/tmondrag/petsc-N5i8ny/config.libraries/conftest.F90
Successful compile: Source:
      program main
      integer i
      character*(80) arg
      i = command_argument_count()
      call get_command_argument(i,arg)
      end
Running Executable WITHOUT threads to time it out
Executing: mpif90 -o /p/work2/tmondrag/petsc-N5i8ny/config.libraries/conftest -fPIC -Wall -ffree-line-length-0 -Wno-unused-dummy-argument -g -O /p/work2/tmondrag/petsc-N5i8ny/config.libraries/conftest.o -lstdc++ -ldl
Defined "HAVE_FORTRAN_GET_COMMAND_ARGUMENT" to "1"
child config.utilities.fortranCommandLine 0.185642
================================================================================
TEST configureFeatureTestMacros from config.utilities.featureTestMacros(/p/work2/tmondrag/moose/petsc/config/BuildSystem/config/utilities/featureTestMacros.py:13)
TESTING: configureFeatureTestMacros from config.utilities.featureTestMacros(/p/work2/tmondrag/moose/petsc/config/BuildSystem/config/utilities/featureTestMacros.py:13)
Checks if certain feature test macros are supported
All intermediate test results are stored in /p/work2/tmondrag/petsc-N5i8ny/config.utilities.featureTestMacros
Running Executable WITHOUT threads to time it out
Executing: mpicc -c -o /p/work2/tmondrag/petsc-N5i8ny/config.utilities.featureTestMacros/conftest.o -I/p/work2/tmondrag/petsc-N5i8ny/config.setCompilers -I/p/work2/tmondrag/petsc-N5i8ny/config.utilities.closure -I/p/work2/tmondrag/petsc-N5i8ny/config.compilers -I/p/work2/tmondrag/petsc-N5i8ny/config.utilities.cacheDetails -I/p/work2/tmondrag/petsc-N5i8ny/config.headers -I/p/work2/tmondrag/petsc-N5i8ny/config.libraries -I/p/work2/tmondrag/petsc-N5i8ny/config.types -I/p/work2/tmondrag/petsc-N5i8ny/config.utilities.featureTestMacros -I/p/work2/tmondrag/petsc-N5i8ny/config.functions -fPIC -Wall -Wwrite-strings -Wno-strict-aliasing -Wno-unknown-pragmas -fstack-protector -fvisibility=hidden -g -O /p/work2/tmondrag/petsc-N5i8ny/config.utilities.featureTestMacros/conftest.c
Possible ERROR while running compiler: exit code 1
stderr:
/p/work2/tmondrag/petsc-N5i8ny/config.utilities.featureTestMacros/conftest.c:4:10: fatal error: sysctl.h: No such file or directory
 #include <sysctl.h>
          ^~~~~~~~~~
compilation terminated.
Source:
#include "confdefs.h"
#include "conffix.h"
#define _POSIX_C_SOURCE 200112L
#include <sysctl.h>
int main() { ; return 0; }
Running Executable WITHOUT threads to time it out
Executing: mpicc -c -o /p/work2/tmondrag/petsc-N5i8ny/config.utilities.featureTestMacros/conftest.o -I/p/work2/tmondrag/petsc-N5i8ny/config.setCompilers -I/p/work2/tmondrag/petsc-N5i8ny/config.utilities.closure -I/p/work2/tmondrag/petsc-N5i8ny/config.compilers -I/p/work2/tmondrag/petsc-N5i8ny/config.utilities.cacheDetails -I/p/work2/tmondrag/petsc-N5i8ny/config.headers -I/p/work2/tmondrag/petsc-N5i8ny/config.libraries -I/p/work2/tmondrag/petsc-N5i8ny/config.types -I/p/work2/tmondrag/petsc-N5i8ny/config.functions -I/p/work2/tmondrag/petsc-N5i8ny/config.utilities.featureTestMacros -fPIC -Wall -Wwrite-strings -Wno-strict-aliasing -Wno-unknown-pragmas -fstack-protector -fvisibility=hidden -g -O /p/work2/tmondrag/petsc-N5i8ny/config.utilities.featureTestMacros/conftest.c
Successful compile: Source:
#include "confdefs.h"
#include "conffix.h"
#define _BSD_SOURCE
#include <stdlib.h>
int main() { ; return 0; }
Defined "_BSD_SOURCE" to "1"
The identical compile then succeeds for the remaining two macros:
Source:
#include "confdefs.h"
#include "conffix.h"
#define _DEFAULT_SOURCE
#include <stdlib.h>
int main() { ; return 0; }
Defined "_DEFAULT_SOURCE" to "1"
Source:
#include "confdefs.h"
#include "conffix.h"
#define _GNU_SOURCE
#include <sched.h>
int main() { cpu_set_t mset; CPU_ZERO(&mset);; return 0; }
Defined "_GNU_SOURCE" to "1"
child config.utilities.featureTestMacros 0.210373
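These feature-test-macro checks work because glibc hides certain declarations unless the macro is defined before the first system header is included. The _GNU_SOURCE probe is the clearest example: cpu_set_t and CPU_ZERO() are only exposed under _GNU_SOURCE. A minimal sketch of the same idea (assuming glibc, with <sched.h> as in the probe above):

  #define _GNU_SOURCE      /* must precede every system header */
  #include <sched.h>

  int main(void) {
    cpu_set_t mset;        /* undeclared without _GNU_SOURCE on glibc */
    CPU_ZERO(&mset);
    return 0;
  }

If this compiles, configure records _GNU_SOURCE as usable; delete the #define and the same source fails to compile, which is exactly the signal the test relies on.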
================================================================================
TEST configureMissingUtypeTypedefs from config.utilities.missing(/p/work2/tmondrag/moose/petsc/config/BuildSystem/config/utilities/missing.py:55)
TESTING: configureMissingUtypeTypedefs from config.utilities.missing(/p/work2/tmondrag/moose/petsc/config/BuildSystem/config/utilities/missing.py:55)
Checks if u_short is undefined
All intermediate test results are stored in /p/work2/tmondrag/petsc-N5i8ny/config.utilities.missing
Running Executable WITHOUT threads to time it out
Executing: mpicc -c -o /p/work2/tmondrag/petsc-N5i8ny/config.utilities.missing/conftest.o -I/p/work2/tmondrag/petsc-N5i8ny/config.setCompilers -I/p/work2/tmondrag/petsc-N5i8ny/config.utilities.closure -I/p/work2/tmondrag/petsc-N5i8ny/config.compilers -I/p/work2/tmondrag/petsc-N5i8ny/config.utilities.cacheDetails -I/p/work2/tmondrag/petsc-N5i8ny/config.headers -I/p/work2/tmondrag/petsc-N5i8ny/config.libraries -I/p/work2/tmondrag/petsc-N5i8ny/config.types -I/p/work2/tmondrag/petsc-N5i8ny/config.functions -I/p/work2/tmondrag/petsc-N5i8ny/config.utilities.featureTestMacros -I/p/work2/tmondrag/petsc-N5i8ny/config.utilities.missing -fPIC -Wall -Wwrite-strings -Wno-strict-aliasing -Wno-unknown-pragmas -fstack-protector -fvisibility=hidden -g -O /p/work2/tmondrag/petsc-N5i8ny/config.utilities.missing/conftest.c
Possible ERROR while running compiler:
stderr:
/p/work2/tmondrag/petsc-N5i8ny/config.utilities.missing/conftest.c: In function ‘main’:
/p/work2/tmondrag/petsc-N5i8ny/config.utilities.missing/conftest.c:6:9: warning: unused variable ‘foo’ [-Wunused-variable]
 u_short foo;
         ^~~
Source:
#include "confdefs.h"
#include "conffix.h"
#include <sys/types.h>
int main() {
u_short foo;
;
return 0;
}
================================================================================
TEST configureMissingFunctions from config.utilities.missing(/p/work2/tmondrag/moose/petsc/config/BuildSystem/config/utilities/missing.py:61)
TESTING: configureMissingFunctions from config.utilities.missing(/p/work2/tmondrag/moose/petsc/config/BuildSystem/config/utilities/missing.py:61)
Checks for SOCKETS
================================================================================
TEST configureMissingSignals from config.utilities.missing(/p/work2/tmondrag/moose/petsc/config/BuildSystem/config/utilities/missing.py:79)
TESTING: configureMissingSignals from config.utilities.missing(/p/work2/tmondrag/moose/petsc/config/BuildSystem/config/utilities/missing.py:79)
Check for missing signals, and define MISSING_<signal name> if necessary
Running Executable WITHOUT threads to time it out
Executing: mpicc -c -o /p/work2/tmondrag/petsc-N5i8ny/config.utilities.missing/conftest.o -I/p/work2/tmondrag/petsc-N5i8ny/config.setCompilers -I/p/work2/tmondrag/petsc-N5i8ny/config.utilities.closure -I/p/work2/tmondrag/petsc-N5i8ny/config.compilers -I/p/work2/tmondrag/petsc-N5i8ny/config.utilities.cacheDetails -I/p/work2/tmondrag/petsc-N5i8ny/config.headers -I/p/work2/tmondrag/petsc-N5i8ny/config.libraries -I/p/work2/tmondrag/petsc-N5i8ny/config.types -I/p/work2/tmondrag/petsc-N5i8ny/config.functions -I/p/work2/tmondrag/petsc-N5i8ny/config.utilities.featureTestMacros -I/p/work2/tmondrag/petsc-N5i8ny/config.utilities.missing -fPIC -Wall -Wwrite-strings -Wno-strict-aliasing -Wno-unknown-pragmas -fstack-protector -fvisibility=hidden -g -O /p/work2/tmondrag/petsc-N5i8ny/config.utilities.missing/conftest.c
Successful compile: Source:
#include "confdefs.h"
#include "conffix.h"
#include <signal.h>
int main() {
int i=SIGABRT;
if (i);
;
return 0;
}
Configure then repeats the identical compile for each remaining signal, substituting SIGALRM, SIGBUS, SIGCHLD, SIGCONT, SIGFPE, SIGHUP, SIGILL, SIGINT, SIGKILL, SIGPIPE, SIGQUIT, SIGSEGV, SIGSTOP, SIGSYS, SIGTERM, SIGTRAP, SIGTSTP, SIGURG, SIGUSR1, and SIGUSR2 into the conftest source; every one of them compiles successfully.
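Each signal test is compile-only: the SIGxxx names are macros (or enum constants) in <signal.h>, so a missing signal shows up as a compile error rather than a link error, and configure would then define the corresponding MISSING_ symbol for it. Stripped to its essence, the probe is just:

  #include <signal.h>

  int main(void) {
    int i = SIGBUS;   /* compiles only if <signal.h> defines SIGBUS */
    if (i);           /* touch the variable, mirroring the conftest */
    return 0;
  }

On this Linux box all twenty-one signals compile, so no MISSING_ defines are emitted.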
================================================================================
TEST configureMissingGetdomainnamePrototype from config.utilities.missing(/p/work2/tmondrag/moose/petsc/config/BuildSystem/config/utilities/missing.py:96)
TESTING: configureMissingGetdomainnamePrototype from config.utilities.missing(/p/work2/tmondrag/moose/petsc/config/BuildSystem/config/utilities/missing.py:96)
Running Executable WITHOUT threads to time it out
Executing: mpicc -c -o /p/work2/tmondrag/petsc-N5i8ny/config.utilities.missing/conftest.o -I/p/work2/tmondrag/petsc-N5i8ny/config.setCompilers -I/p/work2/tmondrag/petsc-N5i8ny/config.utilities.closure -I/p/work2/tmondrag/petsc-N5i8ny/config.compilers -I/p/work2/tmondrag/petsc-N5i8ny/config.utilities.cacheDetails -I/p/work2/tmondrag/petsc-N5i8ny/config.headers -I/p/work2/tmondrag/petsc-N5i8ny/config.libraries -I/p/work2/tmondrag/petsc-N5i8ny/config.types -I/p/work2/tmondrag/petsc-N5i8ny/config.functions -I/p/work2/tmondrag/petsc-N5i8ny/config.utilities.featureTestMacros -I/p/work2/tmondrag/petsc-N5i8ny/config.utilities.missing -fPIC -Wall -Wwrite-strings -Wno-strict-aliasing -Wno-unknown-pragmas -fstack-protector -fvisibility=hidden -g -O /p/work2/tmondrag/petsc-N5i8ny/config.utilities.missing/conftest.c
Successful compile: Source:
#include "confdefs.h"
#include "conffix.h"
#if !defined(_BSD_SOURCE)
#define _BSD_SOURCE
#endif
#if !defined(_DEFAULT_SOURCE)
#define _DEFAULT_SOURCE
#endif
#if !defined(_GNU_SOURCE)
#define _GNU_SOURCE
#endif
#ifdef PETSC_HAVE_UNISTD_H
#include <unistd.h>
#endif
#ifdef PETSC_HAVE_NETDB_H
#include <netdb.h>
#endif
int main() {
int (*getdomainname_ptr)(char*,size_t) = getdomainname;
char test[10];
if (getdomainname_ptr(test,10)) return 1;
;
return 0;
}
Running Executable WITHOUT threads to time it out
Executing: mpicxx -c -o /p/work2/tmondrag/petsc-N5i8ny/config.utilities.missing/conftest.o -I/p/work2/tmondrag/petsc-N5i8ny/config.setCompilers -I/p/work2/tmondrag/petsc-N5i8ny/config.compilers -I/p/work2/tmondrag/petsc-N5i8ny/config.functions -I/p/work2/tmondrag/petsc-N5i8ny/config.types -I/p/work2/tmondrag/petsc-N5i8ny/config.utilities.missing -Wall -Wwrite-strings -Wno-strict-aliasing -Wno-unknown-pragmas -fstack-protector -fvisibility=hidden -g -O -fPIC /p/work2/tmondrag/petsc-N5i8ny/config.utilities.missing/conftest.cc
Successful compile: Source: (the same source as the C probe above)
Running Executable WITHOUT threads to time it out
Executing: mpicxx -o /p/work2/tmondrag/petsc-N5i8ny/config.utilities.missing/conftest -Wall -Wwrite-strings -Wno-strict-aliasing -Wno-unknown-pragmas -fstack-protector -fvisibility=hidden -g -O /p/work2/tmondrag/petsc-N5i8ny/config.utilities.missing/conftest.o -lstdc++ -ldl
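Note what this probe checks: not whether getdomainname() exists as a symbol, but whether a prototype with the expected signature is in scope. Initializing a typed function pointer from the name is rejected unless a compatible declaration is visible, so the test catches a missing or mismatched prototype at compile time. The trick in isolation (a sketch assuming glibc, where the declaration lives in <unistd.h> under the usual feature macros):

  #define _GNU_SOURCE
  #include <unistd.h>

  int main(void) {
    /* Fails to compile unless a compatible prototype is visible. */
    int (*getdomainname_ptr)(char *, size_t) = getdomainname;
    char test[10];
    return getdomainname_ptr(test, sizeof test) ? 1 : 0;
  }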
================================================================================
TEST configureMissingSrandPrototype from config.utilities.missing(/p/work2/tmondrag/moose/petsc/config/BuildSystem/config/utilities/missing.py:121)
TESTING: configureMissingSrandPrototype from config.utilities.missing(/p/work2/tmondrag/moose/petsc/config/BuildSystem/config/utilities/missing.py:121)
Running Executable WITHOUT threads to time it out
Executing: mpicc -c -o /p/work2/tmondrag/petsc-N5i8ny/config.utilities.missing/conftest.o -I/p/work2/tmondrag/petsc-N5i8ny/config.setCompilers -I/p/work2/tmondrag/petsc-N5i8ny/config.utilities.closure -I/p/work2/tmondrag/petsc-N5i8ny/config.compilers -I/p/work2/tmondrag/petsc-N5i8ny/config.utilities.cacheDetails -I/p/work2/tmondrag/petsc-N5i8ny/config.headers -I/p/work2/tmondrag/petsc-N5i8ny/config.libraries -I/p/work2/tmondrag/petsc-N5i8ny/config.types -I/p/work2/tmondrag/petsc-N5i8ny/config.functions -I/p/work2/tmondrag/petsc-N5i8ny/config.utilities.featureTestMacros -I/p/work2/tmondrag/petsc-N5i8ny/config.utilities.missing -fPIC -Wall -Wwrite-strings -Wno-strict-aliasing -Wno-unknown-pragmas -fstack-protector -fvisibility=hidden -g -O /p/work2/tmondrag/petsc-N5i8ny/config.utilities.missing/conftest.c
Successful compile: Source:
#include "confdefs.h"
#include "conffix.h"
#if !defined(_BSD_SOURCE)
#define _BSD_SOURCE
#endif
#if !defined(_DEFAULT_SOURCE)
#define _DEFAULT_SOURCE
#endif
#if !defined(_GNU_SOURCE)
#define _GNU_SOURCE
#endif
#include <stdlib.h>
int main() {
double (*drand48_ptr)(void) = drand48;
void (*srand48_ptr)(long int) = srand48;
long int seed=10;
srand48_ptr(seed);
if (drand48_ptr() > 0.5) return 1;
;
return 0;
}
Running Executable WITHOUT threads to time it out
Executing: mpicxx -c -o /p/work2/tmondrag/petsc-N5i8ny/config.utilities.missing/conftest.o -I/p/work2/tmondrag/petsc-N5i8ny/config.setCompilers -I/p/work2/tmondrag/petsc-N5i8ny/config.compilers -I/p/work2/tmondrag/petsc-N5i8ny/config.functions -I/p/work2/tmondrag/petsc-N5i8ny/config.types -I/p/work2/tmondrag/petsc-N5i8ny/config.utilities.missing -Wall -Wwrite-strings -Wno-strict-aliasing -Wno-unknown-pragmas -fstack-protector -fvisibility=hidden -g -O -fPIC /p/work2/tmondrag/petsc-N5i8ny/config.utilities.missing/conftest.cc
Successful compile: Source: (the same source as the C probe above)
Running Executable WITHOUT threads to time it out
Executing: mpicxx -o /p/work2/tmondrag/petsc-N5i8ny/config.utilities.missing/conftest -Wall -Wwrite-strings -Wno-strict-aliasing -Wno-unknown-pragmas -fstack-protector -fvisibility=hidden -g -O /p/work2/tmondrag/petsc-N5i8ny/config.utilities.missing/conftest.o -lstdc++ -ldl
child config.utilities.missing 1.579093
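The same conftest source is deliberately compiled twice, first with mpicc and then with mpicxx, presumably because PETSc can also be built with a C++ compiler, where name mangling and stricter declaration rules mean a prototype that merely triggers a warning under C can become a hard error. The probe itself, extracted as a standalone sketch (the feature-test defines mirror the conftest above):

  #define _GNU_SOURCE
  #include <stdlib.h>

  int main(void) {
    double (*drand48_ptr)(void) = drand48;   /* both need visible prototypes */
    void (*srand48_ptr)(long int) = srand48;
    srand48_ptr(10);
    return drand48_ptr() > 0.5 ? 1 : 0;
  }

This compiles as both C and C++; if <stdlib.h> did not declare drand48() and srand48() under the chosen feature macros, the C++ compile in particular would fail outright.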
================================================================================
TEST configureFPTrap from config.utilities.FPTrap(/p/work2/tmondrag/moose/petsc/config/BuildSystem/config/utilities/FPTrap.py:27)
TESTING: configureFPTrap from config.utilities.FPTrap(/p/work2/tmondrag/moose/petsc/config/BuildSystem/config/utilities/FPTrap.py:27)
Checking the handling of floating point traps
Checking for header: sigfpe.h
Preprocessing source:
#include "confdefs.h"
#include "conffix.h"
#include <sigfpe.h>
Running Executable WITHOUT threads to time it out
Executing: mpicc -E -I/p/work2/tmondrag/petsc-N5i8ny/config.setCompilers -I/p/work2/tmondrag/petsc-N5i8ny/config.types -I/p/work2/tmondrag/petsc-N5i8ny/config.headers /p/work2/tmondrag/petsc-N5i8ny/config.headers/conftest.c
Possible ERROR while running preprocessor: exit code 1
stdout:
# 1 "/p/work2/tmondrag/petsc-N5i8ny/config.headers/conftest.c"
# 1 "<built-in>"
# 1 "<command-line>"
# 31 "<command-line>"
# 1 "/usr/include/stdc-predef.h" 1 3 4
# 32 "<command-line>" 2
# 1 "/p/work2/tmondrag/petsc-N5i8ny/config.headers/conftest.c"
# 1 "/p/work2/tmondrag/petsc-N5i8ny/config.headers/confdefs.h" 1
# 2 "/p/work2/tmondrag/petsc-N5i8ny/config.headers/conftest.c" 2
# 1 "/p/work2/tmondrag/petsc-N5i8ny/config.headers/conffix.h" 1
# 3 "/p/work2/tmondrag/petsc-N5i8ny/config.headers/conftest.c" 2
stderr:
/p/work2/tmondrag/petsc-N5i8ny/config.headers/conftest.c:3:10: fatal error: sigfpe.h: No such file or directory
 #include <sigfpe.h>
          ^~~~~~~~~~
compilation terminated.
The same preprocessor probe then fails the same way for the other two candidate headers:
Checking for header: fpxcp.h
/p/work2/tmondrag/petsc-N5i8ny/config.headers/conftest.c:3:10: fatal error: fpxcp.h: No such file or directory
Checking for header: floatingpoint.h
/p/work2/tmondrag/petsc-N5i8ny/config.headers/conftest.c:3:10: fatal error: floatingpoint.h: No such file or directory
child config.utilities.FPTrap 0.058791
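Header existence is checked with the preprocessor alone: mpicc -E only has to resolve the #include, so a missing header fails fast with the fatal error seen above. All three candidates are platform-specific floating-point-trap interfaces (sigfpe.h is typically an IRIX header, fpxcp.h comes from AIX, floatingpoint.h from older SunOS/Solaris), so their absence on a Linux system is expected, not a problem. The entire probe reduces to (sketch):

  /* conftest.c -- existence test for one header; run the preprocessor only:
   *   mpicc -E conftest.c > /dev/null ; echo $?
   * Exit status 0 means the header was found; a nonzero status with a
   * "No such file or directory" diagnostic means it is absent. */
  #include <sigfpe.h>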
================================================================================
TEST configureScalarType from PETSc.options.scalarTypes(/p/work2/tmondrag/moose/petsc/config/PETSc/options/scalarTypes.py:40)
TESTING: configureScalarType from PETSc.options.scalarTypes(/p/work2/tmondrag/moose/petsc/config/PETSc/options/scalarTypes.py:40)
Choose between real and complex numbers
Scalar type is real
All intermediate test results are stored in /p/work2/tmondrag/petsc-N5i8ny/PETSc.options.scalarTypes
Running Executable WITHOUT threads to time it out
Executing: mpicc -c -o /p/work2/tmondrag/petsc-N5i8ny/PETSc.options.scalarTypes/conftest.o -I/p/work2/tmondrag/petsc-N5i8ny/config.setCompilers -I/p/work2/tmondrag/petsc-N5i8ny/config.utilities.closure -I/p/work2/tmondrag/petsc-N5i8ny/config.compilers -I/p/work2/tmondrag/petsc-N5i8ny/config.utilities.cacheDetails -I/p/work2/tmondrag/petsc-N5i8ny/config.headers -I/p/work2/tmondrag/petsc-N5i8ny/config.libraries -I/p/work2/tmondrag/petsc-N5i8ny/config.types -I/p/work2/tmondrag/petsc-N5i8ny/config.functions -I/p/work2/tmondrag/petsc-N5i8ny/config.utilities.featureTestMacros -I/p/work2/tmondrag/petsc-N5i8ny/PETSc.options.scalarTypes -I/p/work2/tmondrag/petsc-N5i8ny/config.utilities.missing -fPIC -Wall -Wwrite-strings -Wno-strict-aliasing -Wno-unknown-pragmas -fstack-protector -fvisibility=hidden -g -O /p/work2/tmondrag/petsc-N5i8ny/PETSc.options.scalarTypes/conftest.c
Possible ERROR while running compiler: stderr:
/p/work2/tmondrag/petsc-N5i8ny/PETSc.options.scalarTypes/conftest.c: In function 'main':
/p/work2/tmondrag/petsc-N5i8ny/PETSc.options.scalarTypes/conftest.c:6:21: warning: unused variable 'a' [-Wunused-variable]
 double b = 2.0; int a = isnormal(b);
                     ^
Source:
#include "confdefs.h"
#include "conffix.h"
#include <math.h>
int main() {
double b = 2.0; int a = isnormal(b);
; return 0; }
Running Executable WITHOUT threads to time it out
Executing: mpicc -o /p/work2/tmondrag/petsc-N5i8ny/PETSc.options.scalarTypes/conftest -fPIC -Wall -Wwrite-strings -Wno-strict-aliasing -Wno-unknown-pragmas -fstack-protector -fvisibility=hidden -g -O /p/work2/tmondrag/petsc-N5i8ny/PETSc.options.scalarTypes/conftest.o -lstdc++ -ldl
Defined "HAVE_ISNORMAL" to "1"
Running Executable WITHOUT threads to time it out
Executing: mpicc -c -o /p/work2/tmondrag/petsc-N5i8ny/PETSc.options.scalarTypes/conftest.o -I/p/work2/tmondrag/petsc-N5i8ny/config.setCompilers -I/p/work2/tmondrag/petsc-N5i8ny/config.utilities.closure -I/p/work2/tmondrag/petsc-N5i8ny/config.compilers -I/p/work2/tmondrag/petsc-N5i8ny/config.utilities.cacheDetails -I/p/work2/tmondrag/petsc-N5i8ny/config.headers -I/p/work2/tmondrag/petsc-N5i8ny/config.libraries -I/p/work2/tmondrag/petsc-N5i8ny/config.types -I/p/work2/tmondrag/petsc-N5i8ny/config.functions -I/p/work2/tmondrag/petsc-N5i8ny/config.utilities.featureTestMacros -I/p/work2/tmondrag/petsc-N5i8ny/config.utilities.missing -I/p/work2/tmondrag/petsc-N5i8ny/PETSc.options.scalarTypes -fPIC -Wall -Wwrite-strings -Wno-strict-aliasing -Wno-unknown-pragmas -fstack-protector -fvisibility=hidden -g -O /p/work2/tmondrag/petsc-N5i8ny/PETSc.options.scalarTypes/conftest.c
Possible ERROR while running compiler: stderr:
/p/work2/tmondrag/petsc-N5i8ny/PETSc.options.scalarTypes/conftest.c: In function 'main':
/p/work2/tmondrag/petsc-N5i8ny/PETSc.options.scalarTypes/conftest.c:6:21: warning: unused variable 'a' [-Wunused-variable]
 double b = 2.0; int a = isnan(b);
                     ^
Source:
#include "confdefs.h"
#include "conffix.h"
#include <math.h>
int main() {
double b = 2.0; int a = isnan(b);
; return 0; }
Running Executable WITHOUT threads to time it out
Executing: mpicc -o /p/work2/tmondrag/petsc-N5i8ny/PETSc.options.scalarTypes/conftest -fPIC -Wall -Wwrite-strings -Wno-strict-aliasing -Wno-unknown-pragmas -fstack-protector -fvisibility=hidden -g -O /p/work2/tmondrag/petsc-N5i8ny/PETSc.options.scalarTypes/conftest.o -lstdc++ -ldl
Defined "HAVE_ISNAN" to "1"
Running Executable WITHOUT threads to time it out
Executing: mpicc -c -o /p/work2/tmondrag/petsc-N5i8ny/PETSc.options.scalarTypes/conftest.o -I/p/work2/tmondrag/petsc-N5i8ny/config.setCompilers -I/p/work2/tmondrag/petsc-N5i8ny/config.utilities.closure -I/p/work2/tmondrag/petsc-N5i8ny/config.compilers -I/p/work2/tmondrag/petsc-N5i8ny/config.utilities.cacheDetails -I/p/work2/tmondrag/petsc-N5i8ny/config.headers -I/p/work2/tmondrag/petsc-N5i8ny/config.libraries -I/p/work2/tmondrag/petsc-N5i8ny/config.types -I/p/work2/tmondrag/petsc-N5i8ny/config.functions -I/p/work2/tmondrag/petsc-N5i8ny/config.utilities.featureTestMacros -I/p/work2/tmondrag/petsc-N5i8ny/config.utilities.missing -I/p/work2/tmondrag/petsc-N5i8ny/PETSc.options.scalarTypes -fPIC -Wall -Wwrite-strings -Wno-strict-aliasing -Wno-unknown-pragmas -fstack-protector -fvisibility=hidden -g -O /p/work2/tmondrag/petsc-N5i8ny/PETSc.options.scalarTypes/conftest.c
Possible ERROR while running compiler: stderr:
/p/work2/tmondrag/petsc-N5i8ny/PETSc.options.scalarTypes/conftest.c: In function 'main':
/p/work2/tmondrag/petsc-N5i8ny/PETSc.options.scalarTypes/conftest.c:6:21: warning: unused variable 'a' [-Wunused-variable]
 double b = 2.0; int a = isinf(b);
                     ^
Source:
#include "confdefs.h"
#include "conffix.h"
#include <math.h>
int main() {
double b = 2.0; int a = isinf(b);
; return 0; }
Running Executable WITHOUT threads to time it out
Executing: mpicc -o /p/work2/tmondrag/petsc-N5i8ny/PETSc.options.scalarTypes/conftest -fPIC -Wall -Wwrite-strings -Wno-strict-aliasing -Wno-unknown-pragmas -fstack-protector -fvisibility=hidden -g -O /p/work2/tmondrag/petsc-N5i8ny/PETSc.options.scalarTypes/conftest.o -lstdc++ -ldl
Defined "HAVE_ISINF" to "1"
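(Each of the three probes above compiles a two-line body calling one classification macro from math.h; the unused-variable warnings are harmless because configure only cares whether the compile and link succeed. A consolidated, runnable sketch of what was just verified, illustrative rather than configure's own file:

    /* float_class_probe.c -- exercises the three math.h classification
       macros configure just probed (HAVE_ISNORMAL, HAVE_ISNAN, HAVE_ISINF). */
    #include <math.h>
    #include <stdio.h>
    int main(void) {
      double b = 2.0;
      printf("isnormal=%d isnan=%d isinf=%d\n", isnormal(b), isnan(b), isinf(b));
      return 0;   /* expected output: isnormal=1 isnan=0 isinf=0 */
    }
)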
================================================================================
TEST configurePrecision from PETSc.options.scalarTypes(/p/work2/tmondrag/moose/petsc/config/PETSc/options/scalarTypes.py:81)
TESTING: configurePrecision from PETSc.options.scalarTypes(/p/work2/tmondrag/moose/petsc/config/PETSc/options/scalarTypes.py:81)
Set the default real number precision for PETSc objects
Checking C compiler works with __float128
Checking for functions [logq] in library ['quadmath'] []
Running Executable WITHOUT threads to time it out
Executing: mpicc -c -o /p/work2/tmondrag/petsc-N5i8ny/config.libraries/conftest.o -I/p/work2/tmondrag/petsc-N5i8ny/config.setCompilers -I/p/work2/tmondrag/petsc-N5i8ny/config.utilities.closure -I/p/work2/tmondrag/petsc-N5i8ny/config.compilers -I/p/work2/tmondrag/petsc-N5i8ny/config.utilities.cacheDetails -I/p/work2/tmondrag/petsc-N5i8ny/config.headers -I/p/work2/tmondrag/petsc-N5i8ny/config.types -I/p/work2/tmondrag/petsc-N5i8ny/config.functions -I/p/work2/tmondrag/petsc-N5i8ny/config.utilities.featureTestMacros -I/p/work2/tmondrag/petsc-N5i8ny/config.utilities.missing -I/p/work2/tmondrag/petsc-N5i8ny/PETSc.options.scalarTypes -I/p/work2/tmondrag/petsc-N5i8ny/config.libraries -fPIC -Wall -Wwrite-strings -Wno-strict-aliasing -Wno-unknown-pragmas -fstack-protector -fvisibility=hidden -g -O /p/work2/tmondrag/petsc-N5i8ny/config.libraries/conftest.c
Possible ERROR while running compiler: stderr:
/p/work2/tmondrag/petsc-N5i8ny/config.libraries/conftest.c: In function '_check_logq':
/p/work2/tmondrag/petsc-N5i8ny/config.libraries/conftest.c:5:43: warning: 'f' is used uninitialized in this function [-Wuninitialized]
 static void _check_logq() { __float128 f; logq(f);; }
                                           ^~~~~~~
/p/work2/tmondrag/petsc-N5i8ny/config.libraries/conftest.c:5:40: note: 'f' was declared here
 static void _check_logq() { __float128 f; logq(f);; }
                                        ^
Source:
#include "confdefs.h"
#include "conffix.h"
/* Override any gcc2 internal prototype to avoid an error. */
#include <quadmath.h>
static void _check_logq() { __float128 f; logq(f);; }
int main() {
_check_logq();;
 return 0; }
Running Executable WITHOUT threads to time it out
Executing: mpicc -o /p/work2/tmondrag/petsc-N5i8ny/config.libraries/conftest -fPIC -Wall -Wwrite-strings -Wno-strict-aliasing -Wno-unknown-pragmas -fstack-protector -fvisibility=hidden -g -O /p/work2/tmondrag/petsc-N5i8ny/config.libraries/conftest.o -lquadmath -lstdc++ -ldl
Defined "HAVE_LIBQUADMATH" to "1"
C compiler with quadmath library
Checking Fortran works with quadmath library
Checking for functions [ ] in library ['quadmath'] []
Running Executable WITHOUT threads to time it out
Executing: mpif90 -c -o /p/work2/tmondrag/petsc-N5i8ny/config.libraries/conftest.o -I/p/work2/tmondrag/petsc-N5i8ny/config.compilers -I/p/work2/tmondrag/petsc-N5i8ny/config.types -I/p/work2/tmondrag/petsc-N5i8ny/config.setCompilers -I/p/work2/tmondrag/petsc-N5i8ny/config.compilersFortran -I/p/work2/tmondrag/petsc-N5i8ny/config.libraries -fPIC -Wall -ffree-line-length-0 -Wno-unused-dummy-argument -g -O /p/work2/tmondrag/petsc-N5i8ny/config.libraries/conftest.F90
Successful compile:
Source:
      program main
      real*16 s,w; w = 2.0 ;s = cos(w)
      end
Running Executable WITHOUT threads to time it out
Executing: mpif90 -o /p/work2/tmondrag/petsc-N5i8ny/config.libraries/conftest -fPIC -Wall -ffree-line-length-0 -Wno-unused-dummy-argument -g -O /p/work2/tmondrag/petsc-N5i8ny/config.libraries/conftest.o -lquadmath -lstdc++ -ldl
Defined "HAVE_LIBQUADMATH" to "1"
Fortran works with quadmath library
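(This pair of probes establishes that both compilers can use libquadmath, which is why -lquadmath appears on every link line from here on. A standalone C sketch of the same capability, illustrative only; configure's actual conftest merely links a logq call and never runs it:

    /* quadmath_probe.c -- build with:  mpicc quadmath_probe.c -lquadmath */
    #include <quadmath.h>
    #include <stdio.h>
    int main(void) {
      __float128 f = 2.0;
      char buf[64];
      /* quadmath_snprintf is libquadmath's printf helper for __float128 */
      quadmath_snprintf(buf, sizeof buf, "%.20Qg", logq(f));
      printf("logq(2) = %s\n", buf);
      return 0;
    }
)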
Checking for functions [logq] in library ['quadmath'] []
Running Executable WITHOUT threads to time it out
Executing: mpicc -c -o /p/work2/tmondrag/petsc-N5i8ny/config.libraries/conftest.o -I/p/work2/tmondrag/petsc-N5i8ny/config.setCompilers -I/p/work2/tmondrag/petsc-N5i8ny/config.utilities.closure -I/p/work2/tmondrag/petsc-N5i8ny/config.compilers -I/p/work2/tmondrag/petsc-N5i8ny/config.utilities.cacheDetails -I/p/work2/tmondrag/petsc-N5i8ny/config.headers -I/p/work2/tmondrag/petsc-N5i8ny/config.types -I/p/work2/tmondrag/petsc-N5i8ny/config.functions -I/p/work2/tmondrag/petsc-N5i8ny/config.utilities.featureTestMacros -I/p/work2/tmondrag/petsc-N5i8ny/config.utilities.missing -I/p/work2/tmondrag/petsc-N5i8ny/PETSc.options.scalarTypes -I/p/work2/tmondrag/petsc-N5i8ny/config.libraries -fPIC -Wall -Wwrite-strings -Wno-strict-aliasing -Wno-unknown-pragmas -fstack-protector -fvisibility=hidden -g -O /p/work2/tmondrag/petsc-N5i8ny/config.libraries/conftest.c
Possible ERROR while running compiler: stderr:
/p/work2/tmondrag/petsc-N5i8ny/config.libraries/conftest.c: In function '_check_logq':
/p/work2/tmondrag/petsc-N5i8ny/config.libraries/conftest.c:5:43: warning: 'f' is used uninitialized in this function [-Wuninitialized]
 static void _check_logq() { __float128 f; logq(f);; }
                                           ^~~~~~~
/p/work2/tmondrag/petsc-N5i8ny/config.libraries/conftest.c:5:40: note: 'f' was declared here
 static void _check_logq() { __float128 f; logq(f);; }
                                        ^
Source:
#include "confdefs.h"
#include "conffix.h"
/* Override any gcc2 internal prototype to avoid an error. */
#include <quadmath.h>
static void _check_logq() { __float128 f; logq(f);; }
int main() {
_check_logq();;
 return 0; }
Running Executable WITHOUT threads to time it out
Executing: mpicc -o /p/work2/tmondrag/petsc-N5i8ny/config.libraries/conftest -fPIC -Wall -Wwrite-strings -Wno-strict-aliasing -Wno-unknown-pragmas -fstack-protector -fvisibility=hidden -g -O /p/work2/tmondrag/petsc-N5i8ny/config.libraries/conftest.o -lquadmath -lstdc++ -ldl
Defined "HAVE_LIBQUADMATH" to "1"
Adding ['quadmath'] to LIBS
Defined "HAVE_REAL___FLOAT128" to "1"
Defined "USE_REAL_DOUBLE" to "1"
Defined make macro "PETSC_SCALAR_SIZE" to "64"
Precision is double
child PETSc.options.scalarTypes 1.060660
================================================================================
TEST configureMkdir from config.programs(/p/work2/tmondrag/moose/petsc/config/BuildSystem/config/programs.py:22)
TESTING: configureMkdir from config.programs(/p/work2/tmondrag/moose/petsc/config/BuildSystem/config/programs.py:22)
Make sure we can have mkdir automatically make intermediate directories
Checking for program /app/unsupported/COST/git/2.4.4/gnu//bin/mkdir...not found
Checking for program /app/unsupported/COST/tcltk/8.6.4/gnu//bin/mkdir...not found
Checking for program /usr/local/krb5/bin/mkdir...not found
Checking for program /p/home/apps/hpe/mpt-2.19/bin/mkdir...not found
Checking for program /p/home/apps/gnu_compiler/7.2.0/bin/mkdir...not found
Checking for program /opt/clmgr/sbin/mkdir...not found
Checking for program /opt/clmgr/bin/mkdir...not found
Checking for program /opt/sgi/sbin/mkdir...not found
Checking for program /opt/sgi/bin/mkdir...not found
Checking for program /usr/local/bin/mkdir...not found
Checking for program /usr/bin/mkdir...found
Running Executable WITHOUT threads to time it out
Executing: /usr/bin/mkdir -p .conftest/tmp
Adding -p flag to /usr/bin/mkdir -p to automatically create directories
Defined make macro "MKDIR" to "/usr/bin/mkdir -p"
================================================================================
TEST configureAutoreconf from config.programs(/p/work2/tmondrag/moose/petsc/config/BuildSystem/config/programs.py:44)
TESTING: configureAutoreconf from config.programs(/p/work2/tmondrag/moose/petsc/config/BuildSystem/config/programs.py:44)
Check for autoreconf
Checking for program /app/unsupported/COST/git/2.4.4/gnu//bin/autoreconf...not found
Checking for program /app/unsupported/COST/tcltk/8.6.4/gnu//bin/autoreconf...not found
Checking for program /usr/local/krb5/bin/autoreconf...not found
Checking for program /p/home/apps/hpe/mpt-2.19/bin/autoreconf...not found
Checking for program /p/home/apps/gnu_compiler/7.2.0/bin/autoreconf...not found
Checking for program /opt/clmgr/sbin/autoreconf...not found
Checking for program /opt/clmgr/bin/autoreconf...not found
Checking for program /opt/sgi/sbin/autoreconf...not found
Checking for program /opt/sgi/bin/autoreconf...not found
Checking for program /usr/local/bin/autoreconf...not found
Checking for program /usr/bin/autoreconf...found
All intermediate test results are stored in /p/work2/tmondrag/petsc-N5i8ny/config.programs
Running Executable WITHOUT threads to time it out
Executing: ['/usr/bin/autoreconf']
autoreconf test successful!
Checking for program /app/unsupported/COST/git/2.4.4/gnu//bin/libtoolize...not found
Checking for program /app/unsupported/COST/tcltk/8.6.4/gnu//bin/libtoolize...not found
Checking for program /usr/local/krb5/bin/libtoolize...not found
Checking for program /p/home/apps/hpe/mpt-2.19/bin/libtoolize...not found
Checking for program /p/home/apps/gnu_compiler/7.2.0/bin/libtoolize...not found
Checking for program /opt/clmgr/sbin/libtoolize...not found
Checking for program /opt/clmgr/bin/libtoolize...not found
Checking for program /opt/sgi/sbin/libtoolize...not found
Checking for program /opt/sgi/bin/libtoolize...not found
Checking for program /usr/local/bin/libtoolize...not found
Checking for program /usr/bin/libtoolize...found
================================================================================
TEST configurePrograms from config.programs(/p/work2/tmondrag/moose/petsc/config/BuildSystem/config/programs.py:71)
TESTING: configurePrograms from config.programs(/p/work2/tmondrag/moose/petsc/config/BuildSystem/config/programs.py:71)
Check for the programs needed to build and run PETSc
Checking for program /app/unsupported/COST/git/2.4.4/gnu//bin/sh...not found
Checking for program /app/unsupported/COST/tcltk/8.6.4/gnu//bin/sh...not found
Checking for program /usr/local/krb5/bin/sh...not found
Checking for program /p/home/apps/hpe/mpt-2.19/bin/sh...not found
Checking for program /p/home/apps/gnu_compiler/7.2.0/bin/sh...not found
Checking for program /opt/clmgr/sbin/sh...not found
Checking for program /opt/clmgr/bin/sh...not found
Checking for program /opt/sgi/sbin/sh...not found
Checking for program /opt/sgi/bin/sh...not found
Checking for program /usr/local/bin/sh...not found
Checking for program /usr/bin/sh...found
Defined make macro "SHELL" to "/usr/bin/sh"
Checking for program /app/unsupported/COST/git/2.4.4/gnu//bin/sed...not found
Checking for program /app/unsupported/COST/tcltk/8.6.4/gnu//bin/sed...not found
Checking for program /usr/local/krb5/bin/sed...not found
Checking for program /p/home/apps/hpe/mpt-2.19/bin/sed...not found
Checking for program /p/home/apps/gnu_compiler/7.2.0/bin/sed...not found
Checking for program /opt/clmgr/sbin/sed...not found
Checking for program /opt/clmgr/bin/sed...not found
Checking for program /opt/sgi/sbin/sed...not found
Checking for program /opt/sgi/bin/sed...not found
Checking for program /usr/local/bin/sed...not found
Checking for program /usr/bin/sed...found
Defined make macro "SED" to "/usr/bin/sed"
Running Executable WITHOUT threads to time it out
Executing: /usr/bin/sed -i s/sed/sd/g "/p/work2/tmondrag/petsc-N5i8ny/config.programs/sed1"
Adding SEDINPLACE cmd: /usr/bin/sed -i
Defined make macro "SEDINPLACE" to "/usr/bin/sed -i"
Checking for program /app/unsupported/COST/git/2.4.4/gnu//bin/mv...not found
Checking for program /app/unsupported/COST/tcltk/8.6.4/gnu//bin/mv...not found
Checking for program /usr/local/krb5/bin/mv...not found
Checking for program /p/home/apps/hpe/mpt-2.19/bin/mv...not found
Checking for program /p/home/apps/gnu_compiler/7.2.0/bin/mv...not found
Checking for program /opt/clmgr/sbin/mv...not found
Checking for program /opt/clmgr/bin/mv...not found
Checking for program /opt/sgi/sbin/mv...not found
Checking for program /opt/sgi/bin/mv...not found
Checking for program /usr/local/bin/mv...not found
Checking for program /usr/bin/mv...found
Defined make macro "MV" to "/usr/bin/mv"
Checking for program /app/unsupported/COST/git/2.4.4/gnu//bin/cp...not found
Checking for program /app/unsupported/COST/tcltk/8.6.4/gnu//bin/cp...not found
Checking for program /usr/local/krb5/bin/cp...not found
Checking for program /p/home/apps/hpe/mpt-2.19/bin/cp...not found
Checking for program /p/home/apps/gnu_compiler/7.2.0/bin/cp...not found
Checking for program /opt/clmgr/sbin/cp...not found
Checking for program /opt/clmgr/bin/cp...not found
Checking for program /opt/sgi/sbin/cp...not found
Checking for program /opt/sgi/bin/cp...not found
Checking for program /usr/local/bin/cp...not found
Checking for program /usr/bin/cp...found
Defined make macro "CP" to "/usr/bin/cp"
Checking for program /app/unsupported/COST/git/2.4.4/gnu//bin/grep...not found
Checking for program /app/unsupported/COST/tcltk/8.6.4/gnu//bin/grep...not found
Checking for program /usr/local/krb5/bin/grep...not found
Checking for program /p/home/apps/hpe/mpt-2.19/bin/grep...not found
Checking for program /p/home/apps/gnu_compiler/7.2.0/bin/grep...not found
Checking for program /opt/clmgr/sbin/grep...not found
Checking for program /opt/clmgr/bin/grep...not found
Checking for program /opt/sgi/sbin/grep...not found
Checking for program /opt/sgi/bin/grep...not found
Checking for program /usr/local/bin/grep...not found
Checking for program /usr/bin/grep...found
Defined make macro "GREP" to "/usr/bin/grep"
Checking for program /app/unsupported/COST/git/2.4.4/gnu//bin/rm...not found
Checking for program /app/unsupported/COST/tcltk/8.6.4/gnu//bin/rm...not found
Checking for program /usr/local/krb5/bin/rm...not found
Checking for program /p/home/apps/hpe/mpt-2.19/bin/rm...not found
Checking for program /p/home/apps/gnu_compiler/7.2.0/bin/rm...not found
Checking for program /opt/clmgr/sbin/rm...not found
Checking for program /opt/clmgr/bin/rm...not found
Checking for program /opt/sgi/sbin/rm...not found
Checking for program /opt/sgi/bin/rm...not found
Checking for program /usr/local/bin/rm...not found
Checking for program /usr/bin/rm...found
Defined make macro "RM" to "/usr/bin/rm -f"
Checking for program /app/unsupported/COST/git/2.4.4/gnu//bin/diff...not found
Checking for program /app/unsupported/COST/tcltk/8.6.4/gnu//bin/diff...not found
Checking for program /usr/local/krb5/bin/diff...not found
Checking for program /p/home/apps/hpe/mpt-2.19/bin/diff...not found
Checking for program /p/home/apps/gnu_compiler/7.2.0/bin/diff...not found
Checking for program /opt/clmgr/sbin/diff...not found
Checking for program /opt/clmgr/bin/diff...not found
Checking for program /opt/sgi/sbin/diff...not found
Checking for program /opt/sgi/bin/diff...not found
Checking for program /usr/local/bin/diff...not found
Checking for program /usr/bin/diff...found
Running Executable WITHOUT threads to time it out
Executing: "/usr/bin/diff" -w "/p/work2/tmondrag/petsc-N5i8ny/config.programs/diff1" "/p/work2/tmondrag/petsc-N5i8ny/config.programs/diff2"
Defined make macro "DIFF" to "/usr/bin/diff -w"
Checking for program /usr/ucb/ps...not found
Checking for program /usr/usb/ps...not found
Checking for program /p/home/tmondrag/WORK/moose/scripts/../petsc/lib/petsc/bin/win32fe/ps...not found
Unable to find programs ['ps'] providing listing of each search directory to help debug
Path provided in Python program
Warning /usr/ucb is not a directory
Warning /usr/usb is not a directory
Path provided by --with-executables-search-path
['win32fe.exe', 'win32feutils.dll']
Defined make macro "PYTHON" to "/usr/bin/python"
Checking for program /app/unsupported/COST/git/2.4.4/gnu//bin/m4...not found
Checking for program /app/unsupported/COST/tcltk/8.6.4/gnu//bin/m4...not found
Checking for program /usr/local/krb5/bin/m4...not found
Checking for program /p/home/apps/hpe/mpt-2.19/bin/m4...not found
Checking for program /p/home/apps/gnu_compiler/7.2.0/bin/m4...not found
Checking for program /opt/clmgr/sbin/m4...not found
Checking for program /opt/clmgr/bin/m4...not found
Checking for program /opt/sgi/sbin/m4...not found
Checking for program /opt/sgi/bin/m4...not found
Checking for program /usr/local/bin/m4...not found
Checking for program /usr/bin/m4...found
Defined make macro "M4" to "/usr/bin/m4"
child config.programs 1.683024
================================================================================
TEST configureMake from config.packages.make(/p/work2/tmondrag/moose/petsc/config/BuildSystem/config/packages/make.py:87)
TESTING: configureMake from config.packages.make(/p/work2/tmondrag/moose/petsc/config/BuildSystem/config/packages/make.py:87)
Check Guesses for GNU make
Running Executable WITHOUT threads to time it out
Executing: gmake --version
stdout:
GNU Make 4.0
Built for x86_64-unknown-linux-gnu
Copyright (C) 1988-2013 Free Software Foundation, Inc.
License GPLv3+: GNU GPL version 3 or later
This is free software: you are free to change and redistribute it.
There is NO WARRANTY, to the extent permitted by law.
Checking for program /app/unsupported/COST/git/2.4.4/gnu//bin/gmake...not found
Checking for program /app/unsupported/COST/tcltk/8.6.4/gnu//bin/gmake...not found
Checking for program /usr/local/krb5/bin/gmake...not found
Checking for program /p/home/apps/hpe/mpt-2.19/bin/gmake...not found
Checking for program /p/home/apps/gnu_compiler/7.2.0/bin/gmake...not found
Checking for program /opt/clmgr/sbin/gmake...not found
Checking for program /opt/clmgr/bin/gmake...not found
Checking for program /opt/sgi/sbin/gmake...not found
Checking for program /opt/sgi/bin/gmake...not found
Checking for program /usr/local/bin/gmake...not found
Checking for program /usr/bin/gmake...found
Defined make macro "MAKE" to "/usr/bin/gmake"
================================================================================
TEST setupGNUMake from config.packages.make(/p/work2/tmondrag/moose/petsc/config/BuildSystem/config/packages/make.py:121)
TESTING: setupGNUMake from config.packages.make(/p/work2/tmondrag/moose/petsc/config/BuildSystem/config/packages/make.py:121)
Setup other GNU make stuff
Running Executable WITHOUT threads to time it out
Executing: uname -s
stdout: Linux
Running Executable WITHOUT threads to time it out
Executing: uname -s
stdout: Linux
Defined make rule "libc" with dependencies "${LIBNAME}(${OBJSC})" and code []
Defined make rule "libcxx" with dependencies "${LIBNAME}(${OBJSCXX})" and code []
Defined make rule "libcu" with dependencies "${LIBNAME}(${OBJSCU})" and code []
Defined make rule "libf" with dependencies "${OBJSF}" and code -${AR} ${AR_FLAGS} ${LIBNAME} ${OBJSF}
Defined make macro "OMAKE_PRINTDIR " to "/usr/bin/gmake --print-directory"
Defined make macro "OMAKE" to "/usr/bin/gmake --no-print-directory"
Defined make macro "MAKE_PAR_OUT_FLG" to "--output-sync=recurse"
================================================================================
TEST configureMakeNP from config.packages.make(/p/work2/tmondrag/moose/petsc/config/BuildSystem/config/packages/make.py:168)
TESTING: configureMakeNP from config.packages.make(/p/work2/tmondrag/moose/petsc/config/BuildSystem/config/packages/make.py:168)
check no of cores on the build machine [perhaps to do make '-j ncores']
module multiprocessing found 72 cores: using make_np = 42
Defined make macro "MAKE_NP" to "42"
Defined make macro "MAKE_TEST_NP" to "30"
Defined make macro "MAKE_LOAD" to "104.8"
Defined make macro "NPMAX" to "72"
child config.packages.make 0.020709
================================================================================
TEST alternateConfigureLibrary from config.packages.OpenMPI(/p/work2/tmondrag/moose/petsc/config/BuildSystem/config/package.py:924)
TESTING: alternateConfigureLibrary from config.packages.OpenMPI(/p/work2/tmondrag/moose/petsc/config/BuildSystem/config/package.py:924)
Called if --with-packagename=0; does nothing by default
child config.packages.OpenMPI 0.000827
Running Executable WITHOUT threads to time it out
Executing: uname -s
stdout: Linux
================================================================================
TEST alternateConfigureLibrary from config.packages.MPICH(/p/work2/tmondrag/moose/petsc/config/BuildSystem/config/package.py:924)
TESTING: alternateConfigureLibrary from config.packages.MPICH(/p/work2/tmondrag/moose/petsc/config/BuildSystem/config/package.py:924)
Called if --with-packagename=0; does nothing by default
child config.packages.MPICH 0.004006
================================================================================
TEST checkDependencies from config.packages.MPI(/p/work2/tmondrag/moose/petsc/config/BuildSystem/config/package.py:836)
TESTING: checkDependencies from config.packages.MPI(/p/work2/tmondrag/moose/petsc/config/BuildSystem/config/package.py:836)
================================================================================
TEST configureLibrary from config.packages.MPI(/p/work2/tmondrag/moose/petsc/config/BuildSystem/config/packages/MPI.py:617)
TESTING: configureLibrary from config.packages.MPI(/p/work2/tmondrag/moose/petsc/config/BuildSystem/config/packages/MPI.py:617)
==================================================================================
Checking for a functional MPI
Checking for library in Compiler specific search MPI: []
================================================================================
TEST check from config.libraries(/p/work2/tmondrag/moose/petsc/config/BuildSystem/config/libraries.py:157)
TESTING: check from config.libraries(/p/work2/tmondrag/moose/petsc/config/BuildSystem/config/libraries.py:157)
Checks that the library "libName" contains "funcs", and if it does defines HAVE_LIB"libName"
- libDir may be a list of directories
- libName may be a list of library names
Checking for functions [MPI_Init MPI_Comm_create] in library [] []
Running Executable WITHOUT threads to time it out
Executing: mpicc -c -o /p/work2/tmondrag/petsc-N5i8ny/config.libraries/conftest.o -I/p/work2/tmondrag/petsc-N5i8ny/config.setCompilers -I/p/work2/tmondrag/petsc-N5i8ny/config.utilities.closure -I/p/work2/tmondrag/petsc-N5i8ny/config.compilers -I/p/work2/tmondrag/petsc-N5i8ny/config.utilities.cacheDetails -I/p/work2/tmondrag/petsc-N5i8ny/config.headers -I/p/work2/tmondrag/petsc-N5i8ny/config.types -I/p/work2/tmondrag/petsc-N5i8ny/config.functions -I/p/work2/tmondrag/petsc-N5i8ny/config.utilities.featureTestMacros -I/p/work2/tmondrag/petsc-N5i8ny/config.utilities.missing -I/p/work2/tmondrag/petsc-N5i8ny/PETSc.options.scalarTypes -I/p/work2/tmondrag/petsc-N5i8ny/config.libraries -fPIC -Wall -Wwrite-strings -Wno-strict-aliasing -Wno-unknown-pragmas -fstack-protector -fvisibility=hidden -g -O /p/work2/tmondrag/petsc-N5i8ny/config.libraries/conftest.c
Successful compile:
Source:
#include "confdefs.h"
#include "conffix.h"
/* Override any
gcc2 internal prototype to avoid an error. */
char MPI_Init();
static void _check_MPI_Init() { MPI_Init(); }
char MPI_Comm_create();
static void _check_MPI_Comm_create() { MPI_Comm_create(); }
int main() {
_check_MPI_Init();
_check_MPI_Comm_create();;
 return 0; }
Running Executable WITHOUT threads to time it out
Executing: mpicc -o /p/work2/tmondrag/petsc-N5i8ny/config.libraries/conftest -fPIC -Wall -Wwrite-strings -Wno-strict-aliasing -Wno-unknown-pragmas -fstack-protector -fvisibility=hidden -g -O /p/work2/tmondrag/petsc-N5i8ny/config.libraries/conftest.o -lquadmath -lstdc++ -ldl
Checking for optional headers [] in Compiler specific search MPI: ['/p/home/apps/hpe/mpt-2.19/include', '/p/home/apps/gnu_compiler/7.2.0/include']
================================================================================
TEST checkInclude from config.headers(/p/work2/tmondrag/moose/petsc/config/BuildSystem/config/headers.py:86)
TESTING: checkInclude from config.headers(/p/work2/tmondrag/moose/petsc/config/BuildSystem/config/headers.py:86)
Checks if a particular include file can be found along particular include paths
Checking for header files [] in ['/p/home/apps/hpe/mpt-2.19/include', '/p/home/apps/gnu_compiler/7.2.0/include']
Found header files [] in ['/p/home/apps/hpe/mpt-2.19/include', '/p/home/apps/gnu_compiler/7.2.0/include']
Checking for headers ['mpi.h'] in Compiler specific search MPI: ['/p/home/apps/hpe/mpt-2.19/include', '/p/home/apps/gnu_compiler/7.2.0/include']
================================================================================
TEST checkInclude from config.headers(/p/work2/tmondrag/moose/petsc/config/BuildSystem/config/headers.py:86)
TESTING: checkInclude from config.headers(/p/work2/tmondrag/moose/petsc/config/BuildSystem/config/headers.py:86)
Checks if a particular include file can be found along particular include paths
Checking for header files ['mpi.h'] in ['/p/home/apps/hpe/mpt-2.19/include', '/p/home/apps/gnu_compiler/7.2.0/include']
Checking include with compiler flags var CPPFLAGS ['/p/home/apps/hpe/mpt-2.19/include', '/p/home/apps/gnu_compiler/7.2.0/include']
Preprocessing source:
#include "confdefs.h"
#include "conffix.h"
#include <mpi.h>
Running Executable WITHOUT threads to time it out
Executing: mpicc -E -I/p/work2/tmondrag/petsc-N5i8ny/config.setCompilers -I/p/work2/tmondrag/petsc-N5i8ny/config.types -I/p/work2/tmondrag/petsc-N5i8ny/config.headers -I/p/home/apps/hpe/mpt-2.19/include -I/p/home/apps/gnu_compiler/7.2.0/include /p/work2/tmondrag/petsc-N5i8ny/config.headers/conftest.c
Preprocess stderr before filtering::
Preprocess stderr after filtering::
Found header files ['mpi.h'] in ['/p/home/apps/hpe/mpt-2.19/include', '/p/home/apps/gnu_compiler/7.2.0/include']
================================================================================
TEST checkMPIDistro from config.packages.MPI(/p/work2/tmondrag/moose/petsc/config/BuildSystem/config/packages/MPI.py:478)
TESTING: checkMPIDistro from config.packages.MPI(/p/work2/tmondrag/moose/petsc/config/BuildSystem/config/packages/MPI.py:478)
Determine if MPICH_NUMVERSION, OMPI_MAJOR_VERSION or MSMPI_VER exist in mpi.h
Used for consistency checking of MPI installation at compile time
All intermediate test results are stored in /p/work2/tmondrag/petsc-N5i8ny/config.packages.MPI
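(Backing up to the "Checking for functions [MPI_Init MPI_Comm_create]" step above: note that its conftest never includes mpi.h. It declares each symbol with a dummy 'char' prototype, so the probe succeeds or fails purely on whether the linker can resolve the symbols through the mpicc wrapper. A minimal standalone version, illustrative and link-only by design:

    /* link_probe.c -- build with:  mpicc link_probe.c
       The bogus prototypes deliberately bypass mpi.h; configure only checks
       that the link succeeds and never runs the binary, so the wrong
       signatures are harmless here. */
    char MPI_Init();
    char MPI_Comm_create();
    int main(void) { MPI_Init(); MPI_Comm_create(); return 0; }
)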
Running Executable WITHOUT threads to time it out
Executing: mpicc -c -o /p/work2/tmondrag/petsc-N5i8ny/config.packages.MPI/conftest.o -I/p/work2/tmondrag/petsc-N5i8ny/config.setCompilers -I/p/work2/tmondrag/petsc-N5i8ny/config.utilities.closure -I/p/work2/tmondrag/petsc-N5i8ny/config.compilers -I/p/work2/tmondrag/petsc-N5i8ny/config.utilities.cacheDetails -I/p/work2/tmondrag/petsc-N5i8ny/config.headers -I/p/work2/tmondrag/petsc-N5i8ny/config.types -I/p/work2/tmondrag/petsc-N5i8ny/config.functions -I/p/work2/tmondrag/petsc-N5i8ny/config.utilities.featureTestMacros -I/p/work2/tmondrag/petsc-N5i8ny/config.utilities.missing -I/p/work2/tmondrag/petsc-N5i8ny/PETSc.options.scalarTypes -I/p/work2/tmondrag/petsc-N5i8ny/config.libraries -I/p/work2/tmondrag/petsc-N5i8ny/config.packages.MPI -fPIC -Wall -Wwrite-strings -Wno-strict-aliasing -Wno-unknown-pragmas -fstack-protector -fvisibility=hidden -g -O /p/work2/tmondrag/petsc-N5i8ny/config.packages.MPI/conftest.c
Possible ERROR while running compiler: exit code 1
stderr:
/p/work2/tmondrag/petsc-N5i8ny/config.packages.MPI/conftest.c:4:25: error: 'I_MPI_VERSION' undeclared here (not in a function); did you mean 'MPI_VERSION'?
 const char *mpich_ver = I_MPI_VERSION;
                         ^~~~~~~~~~~~~
                         MPI_VERSION
Source:
#include "confdefs.h"
#include "conffix.h"
#include <mpi.h>
const char *mpich_ver = I_MPI_VERSION;
int main() { ; return 0; }
Running Executable WITHOUT threads to time it out
Executing: mpicc -c -o /p/work2/tmondrag/petsc-N5i8ny/config.packages.MPI/conftest.o -I/p/work2/tmondrag/petsc-N5i8ny/config.setCompilers -I/p/work2/tmondrag/petsc-N5i8ny/config.utilities.closure -I/p/work2/tmondrag/petsc-N5i8ny/config.compilers -I/p/work2/tmondrag/petsc-N5i8ny/config.utilities.cacheDetails -I/p/work2/tmondrag/petsc-N5i8ny/config.headers -I/p/work2/tmondrag/petsc-N5i8ny/config.types -I/p/work2/tmondrag/petsc-N5i8ny/config.functions -I/p/work2/tmondrag/petsc-N5i8ny/config.utilities.featureTestMacros -I/p/work2/tmondrag/petsc-N5i8ny/config.utilities.missing -I/p/work2/tmondrag/petsc-N5i8ny/PETSc.options.scalarTypes -I/p/work2/tmondrag/petsc-N5i8ny/config.libraries -I/p/work2/tmondrag/petsc-N5i8ny/config.packages.MPI -fPIC -Wall -Wwrite-strings -Wno-strict-aliasing -Wno-unknown-pragmas -fstack-protector -fvisibility=hidden -g -O /p/work2/tmondrag/petsc-N5i8ny/config.packages.MPI/conftest.c
Possible ERROR while running compiler: exit code 1
stderr:
/p/work2/tmondrag/petsc-N5i8ny/config.packages.MPI/conftest.c:4:17: error: 'I_MPI_NUMVERSION' undeclared here (not in a function); did you mean 'MPI_SUBVERSION'?
 int mpich_ver = I_MPI_NUMVERSION;
                 ^~~~~~~~~~~~~~~~
                 MPI_SUBVERSION
Source:
#include "confdefs.h"
#include "conffix.h"
#include <mpi.h>
int mpich_ver = I_MPI_NUMVERSION;
int main() { ; return 0; }
Running Executable WITHOUT threads to time it out
Executing: mpicc -c -o /p/work2/tmondrag/petsc-N5i8ny/config.packages.MPI/conftest.o -I/p/work2/tmondrag/petsc-N5i8ny/config.setCompilers -I/p/work2/tmondrag/petsc-N5i8ny/config.utilities.closure -I/p/work2/tmondrag/petsc-N5i8ny/config.compilers -I/p/work2/tmondrag/petsc-N5i8ny/config.utilities.cacheDetails -I/p/work2/tmondrag/petsc-N5i8ny/config.headers -I/p/work2/tmondrag/petsc-N5i8ny/config.types -I/p/work2/tmondrag/petsc-N5i8ny/config.functions -I/p/work2/tmondrag/petsc-N5i8ny/config.utilities.featureTestMacros -I/p/work2/tmondrag/petsc-N5i8ny/config.utilities.missing -I/p/work2/tmondrag/petsc-N5i8ny/PETSc.options.scalarTypes -I/p/work2/tmondrag/petsc-N5i8ny/config.libraries -I/p/work2/tmondrag/petsc-N5i8ny/config.packages.MPI -fPIC -Wall -Wwrite-strings -Wno-strict-aliasing -Wno-unknown-pragmas -fstack-protector -fvisibility=hidden -g -O /p/work2/tmondrag/petsc-N5i8ny/config.packages.MPI/conftest.c
Possible ERROR while running compiler: exit code 1
stderr:
/p/work2/tmondrag/petsc-N5i8ny/config.packages.MPI/conftest.c:4:17: error: 'MVAPICH2_NUMVERSION' undeclared here (not in a function); did you mean 'MPI_SUBVERSION'?
 int mpich_ver = MVAPICH2_NUMVERSION;
                 ^~~~~~~~~~~~~~~~~~~
                 MPI_SUBVERSION
Source:
#include "confdefs.h"
#include "conffix.h"
#include <mpi.h>
int mpich_ver = MVAPICH2_NUMVERSION;
int main() { ; return 0; }
Running Executable WITHOUT threads to time it out
Executing: mpicc -c -o /p/work2/tmondrag/petsc-N5i8ny/config.packages.MPI/conftest.o -I/p/work2/tmondrag/petsc-N5i8ny/config.setCompilers -I/p/work2/tmondrag/petsc-N5i8ny/config.utilities.closure -I/p/work2/tmondrag/petsc-N5i8ny/config.compilers -I/p/work2/tmondrag/petsc-N5i8ny/config.utilities.cacheDetails -I/p/work2/tmondrag/petsc-N5i8ny/config.headers -I/p/work2/tmondrag/petsc-N5i8ny/config.types -I/p/work2/tmondrag/petsc-N5i8ny/config.functions -I/p/work2/tmondrag/petsc-N5i8ny/config.utilities.featureTestMacros -I/p/work2/tmondrag/petsc-N5i8ny/config.utilities.missing -I/p/work2/tmondrag/petsc-N5i8ny/PETSc.options.scalarTypes -I/p/work2/tmondrag/petsc-N5i8ny/config.libraries -I/p/work2/tmondrag/petsc-N5i8ny/config.packages.MPI -fPIC -Wall -Wwrite-strings -Wno-strict-aliasing -Wno-unknown-pragmas -fstack-protector -fvisibility=hidden -g -O /p/work2/tmondrag/petsc-N5i8ny/config.packages.MPI/conftest.c
Possible ERROR while running compiler: exit code 1
stderr:
/p/work2/tmondrag/petsc-N5i8ny/config.packages.MPI/conftest.c:4:17: error: 'MPICH_NUMVERSION' undeclared here (not in a function); did you mean 'MPI_SUBVERSION'?
 int mpich_ver = MPICH_NUMVERSION;
                 ^~~~~~~~~~~~~~~~
                 MPI_SUBVERSION
Source:
#include "confdefs.h"
#include "conffix.h"
#include <mpi.h>
int mpich_ver = MPICH_NUMVERSION;
int main() { ; return 0; }
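(These one-macro conftests, together with the Open MPI and MS-MPI probes just below, are how configure identifies the MPI vendor: an 'undeclared' error simply means "not that distribution". All of them fail here because the mpt-2.19 paths show this machine uses SGI/HPE MPT, which defines none of these macros. The sequence collapses naturally into a single cascade; an illustrative sketch, not configure's own code:

    /* mpi_distro.c -- compile and run with:  mpicc mpi_distro.c && ./a.out */
    #include <mpi.h>
    #include <stdio.h>
    int main(void) {
    #if defined(I_MPI_VERSION)
        puts("Intel MPI");
    #elif defined(MVAPICH2_NUMVERSION)   /* test before MPICH: MVAPICH2 is MPICH-derived */
        puts("MVAPICH2");
    #elif defined(MPICH_NUMVERSION)
        puts("MPICH");
    #elif defined(OMPI_MAJOR_VERSION)
        puts("Open MPI");
    #elif defined(MSMPI_VER)
        puts("MS-MPI");
    #else
        puts("none of the above (e.g. SGI/HPE MPT, as on this machine)");
    #endif
        return 0;
    }
)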
Running Executable WITHOUT threads to time it out
Executing: mpicc -c -o /p/work2/tmondrag/petsc-N5i8ny/config.packages.MPI/conftest.o -I/p/work2/tmondrag/petsc-N5i8ny/config.setCompilers -I/p/work2/tmondrag/petsc-N5i8ny/config.utilities.closure -I/p/work2/tmondrag/petsc-N5i8ny/config.compilers -I/p/work2/tmondrag/petsc-N5i8ny/config.utilities.cacheDetails -I/p/work2/tmondrag/petsc-N5i8ny/config.headers -I/p/work2/tmondrag/petsc-N5i8ny/config.types -I/p/work2/tmondrag/petsc-N5i8ny/config.functions -I/p/work2/tmondrag/petsc-N5i8ny/config.utilities.featureTestMacros -I/p/work2/tmondrag/petsc-N5i8ny/config.utilities.missing -I/p/work2/tmondrag/petsc-N5i8ny/PETSc.options.scalarTypes -I/p/work2/tmondrag/petsc-N5i8ny/config.libraries -I/p/work2/tmondrag/petsc-N5i8ny/config.packages.MPI -fPIC -Wall -Wwrite-strings -Wno-strict-aliasing -Wno-unknown-pragmas -fstack-protector -fvisibility=hidden -g -O /p/work2/tmondrag/petsc-N5i8ny/config.packages.MPI/conftest.c
Possible ERROR while running compiler: exit code 1
stderr:
/p/work2/tmondrag/petsc-N5i8ny/config.packages.MPI/conftest.c:4:18: error: 'OMPI_MAJOR_VERSION' undeclared here (not in a function); did you mean 'MPI_SUBVERSION'?
 int ompi_major = OMPI_MAJOR_VERSION;
                  ^~~~~~~~~~~~~~~~~~
                  MPI_SUBVERSION
/p/work2/tmondrag/petsc-N5i8ny/config.packages.MPI/conftest.c:5:18: error: 'OMPI_MINOR_VERSION' undeclared here (not in a function); did you mean 'OMPI_MAJOR_VERSION'?
 int ompi_minor = OMPI_MINOR_VERSION;
                  ^~~~~~~~~~~~~~~~~~
                  OMPI_MAJOR_VERSION
/p/work2/tmondrag/petsc-N5i8ny/config.packages.MPI/conftest.c:6:20: error: 'OMPI_RELEASE_VERSION' undeclared here (not in a function); did you mean 'OMPI_MINOR_VERSION'?
 int ompi_release = OMPI_RELEASE_VERSION;
                    ^~~~~~~~~~~~~~~~~~~~
                    OMPI_MINOR_VERSION
Source:
#include "confdefs.h"
#include "conffix.h"
#include <mpi.h>
int ompi_major = OMPI_MAJOR_VERSION;
int ompi_minor = OMPI_MINOR_VERSION;
int ompi_release = OMPI_RELEASE_VERSION;
int main() { ; return 0; }
Running Executable WITHOUT threads to time it out
Executing: mpicc -c -o /p/work2/tmondrag/petsc-N5i8ny/config.packages.MPI/conftest.o -I/p/work2/tmondrag/petsc-N5i8ny/config.setCompilers -I/p/work2/tmondrag/petsc-N5i8ny/config.utilities.closure -I/p/work2/tmondrag/petsc-N5i8ny/config.compilers -I/p/work2/tmondrag/petsc-N5i8ny/config.utilities.cacheDetails -I/p/work2/tmondrag/petsc-N5i8ny/config.headers -I/p/work2/tmondrag/petsc-N5i8ny/config.types -I/p/work2/tmondrag/petsc-N5i8ny/config.functions -I/p/work2/tmondrag/petsc-N5i8ny/config.utilities.featureTestMacros -I/p/work2/tmondrag/petsc-N5i8ny/config.utilities.missing -I/p/work2/tmondrag/petsc-N5i8ny/PETSc.options.scalarTypes -I/p/work2/tmondrag/petsc-N5i8ny/config.libraries -I/p/work2/tmondrag/petsc-N5i8ny/config.packages.MPI -fPIC -Wall -Wwrite-strings -Wno-strict-aliasing -Wno-unknown-pragmas -fstack-protector -fvisibility=hidden -g -O /p/work2/tmondrag/petsc-N5i8ny/config.packages.MPI/conftest.c
Possible ERROR while running compiler: exit code 1
stderr:
/p/work2/tmondrag/petsc-N5i8ny/config.packages.MPI/conftest.c:9:2: error: #error not MSMPI
 #error not MSMPI
  ^~~~~
Source:
#include "confdefs.h"
#include "conffix.h"
#include <mpi.h>
#define xstr(s) str(s)
#define str(s) #s
#if defined(MSMPI_VER)
char msmpi_hex[] = xstr(MSMPI_VER);
#else
#error not MSMPI
#endif
int main() { ; return 0; }
Running Executable WITHOUT threads to time it out
Executing: uname -s
stdout: Linux
================================================================================
TEST configureMPI2 from config.packages.MPI(/p/work2/tmondrag/moose/petsc/config/BuildSystem/config/packages/MPI.py:253)
TESTING: configureMPI2 from config.packages.MPI(/p/work2/tmondrag/moose/petsc/config/BuildSystem/config/packages/MPI.py:253)
Check for functions added to the interface in MPI-2
Running Executable WITHOUT threads to time it out
Executing: mpicc -c -o /p/work2/tmondrag/petsc-N5i8ny/config.packages.MPI/conftest.o -I/p/work2/tmondrag/petsc-N5i8ny/config.setCompilers -I/p/work2/tmondrag/petsc-N5i8ny/config.utilities.closure -I/p/work2/tmondrag/petsc-N5i8ny/config.compilers -I/p/work2/tmondrag/petsc-N5i8ny/config.utilities.cacheDetails -I/p/work2/tmondrag/petsc-N5i8ny/config.headers -I/p/work2/tmondrag/petsc-N5i8ny/config.types -I/p/work2/tmondrag/petsc-N5i8ny/config.functions -I/p/work2/tmondrag/petsc-N5i8ny/config.utilities.featureTestMacros -I/p/work2/tmondrag/petsc-N5i8ny/config.utilities.missing -I/p/work2/tmondrag/petsc-N5i8ny/PETSc.options.scalarTypes -I/p/work2/tmondrag/petsc-N5i8ny/config.libraries -I/p/work2/tmondrag/petsc-N5i8ny/config.packages.MPI -fPIC -Wall -Wwrite-strings -Wno-strict-aliasing -Wno-unknown-pragmas -fstack-protector -fvisibility=hidden -g -O /p/work2/tmondrag/petsc-N5i8ny/config.packages.MPI/conftest.c
Successful compile:
Source:
#include "confdefs.h"
#include "conffix.h"
#include <mpi.h>
int main() {
int flag;if (MPI_Finalized(&flag));
; return 0; }
Running Executable WITHOUT threads to time it out
Executing: mpicc -o /p/work2/tmondrag/petsc-N5i8ny/config.packages.MPI/conftest -fPIC -Wall -Wwrite-strings -Wno-strict-aliasing -Wno-unknown-pragmas -fstack-protector -fvisibility=hidden -g -O /p/work2/tmondrag/petsc-N5i8ny/config.packages.MPI/conftest.o -lquadmath -lstdc++ -ldl
Defined "HAVE_MPI_FINALIZED" to "1"
Running Executable WITHOUT threads to time it out
Executing: mpicc -c -o /p/work2/tmondrag/petsc-N5i8ny/config.packages.MPI/conftest.o -I/p/work2/tmondrag/petsc-N5i8ny/config.setCompilers -I/p/work2/tmondrag/petsc-N5i8ny/config.utilities.closure -I/p/work2/tmondrag/petsc-N5i8ny/config.compilers -I/p/work2/tmondrag/petsc-N5i8ny/config.utilities.cacheDetails -I/p/work2/tmondrag/petsc-N5i8ny/config.headers -I/p/work2/tmondrag/petsc-N5i8ny/config.types -I/p/work2/tmondrag/petsc-N5i8ny/config.functions -I/p/work2/tmondrag/petsc-N5i8ny/config.utilities.featureTestMacros -I/p/work2/tmondrag/petsc-N5i8ny/config.utilities.missing -I/p/work2/tmondrag/petsc-N5i8ny/PETSc.options.scalarTypes -I/p/work2/tmondrag/petsc-N5i8ny/config.libraries -I/p/work2/tmondrag/petsc-N5i8ny/config.packages.MPI -fPIC -Wall -Wwrite-strings -Wno-strict-aliasing -Wno-unknown-pragmas -fstack-protector -fvisibility=hidden -g -O /p/work2/tmondrag/petsc-N5i8ny/config.packages.MPI/conftest.c
Successful compile:
Source:
#include "confdefs.h"
#include "conffix.h"
#include <mpi.h>
int main() {
if (MPI_Allreduce(MPI_IN_PLACE,0, 1, MPI_INT, MPI_SUM, MPI_COMM_SELF));
; return 0; }
Running Executable WITHOUT threads to time it out
Executing: mpicc -o /p/work2/tmondrag/petsc-N5i8ny/config.packages.MPI/conftest -fPIC -Wall -Wwrite-strings -Wno-strict-aliasing -Wno-unknown-pragmas -fstack-protector -fvisibility=hidden -g -O /p/work2/tmondrag/petsc-N5i8ny/config.packages.MPI/conftest.o -lquadmath -lstdc++ -ldl
Defined "HAVE_MPI_IN_PLACE" to "1"
Running Executable WITHOUT threads to time it out
Executing: mpicc -c -o /p/work2/tmondrag/petsc-N5i8ny/config.packages.MPI/conftest.o -I/p/work2/tmondrag/petsc-N5i8ny/config.setCompilers -I/p/work2/tmondrag/petsc-N5i8ny/config.utilities.closure -I/p/work2/tmondrag/petsc-N5i8ny/config.compilers -I/p/work2/tmondrag/petsc-N5i8ny/config.utilities.cacheDetails -I/p/work2/tmondrag/petsc-N5i8ny/config.headers -I/p/work2/tmondrag/petsc-N5i8ny/config.types -I/p/work2/tmondrag/petsc-N5i8ny/config.functions -I/p/work2/tmondrag/petsc-N5i8ny/config.utilities.featureTestMacros -I/p/work2/tmondrag/petsc-N5i8ny/config.utilities.missing -I/p/work2/tmondrag/petsc-N5i8ny/PETSc.options.scalarTypes -I/p/work2/tmondrag/petsc-N5i8ny/config.libraries -I/p/work2/tmondrag/petsc-N5i8ny/config.packages.MPI -fPIC -Wall -Wwrite-strings -Wno-strict-aliasing -Wno-unknown-pragmas -fstack-protector -fvisibility=hidden -g -O /p/work2/tmondrag/petsc-N5i8ny/config.packages.MPI/conftest.c
Successful compile:
Source:
#include "confdefs.h"
#include "conffix.h"
#include <mpi.h>
int main() {
int count=2; int blocklens[2]={0,1}; MPI_Aint indices[2]={0,1}; MPI_Datatype old_types[2]={0,1}; MPI_Datatype *newtype = 0;
if (MPI_Type_create_struct(count, blocklens, indices, old_types, newtype));
; return 0; }
Running Executable WITHOUT threads to time it out
Executing: mpicc -o /p/work2/tmondrag/petsc-N5i8ny/config.packages.MPI/conftest -fPIC -Wall -Wwrite-strings -Wno-strict-aliasing -Wno-unknown-pragmas -fstack-protector -fvisibility=hidden -g -O /p/work2/tmondrag/petsc-N5i8ny/config.packages.MPI/conftest.o -lquadmath -lstdc++ -ldl
Running Executable WITHOUT threads to time it out
Executing: mpicc -c -o /p/work2/tmondrag/petsc-N5i8ny/config.packages.MPI/conftest.o -I/p/work2/tmondrag/petsc-N5i8ny/config.setCompilers -I/p/work2/tmondrag/petsc-N5i8ny/config.utilities.closure -I/p/work2/tmondrag/petsc-N5i8ny/config.compilers -I/p/work2/tmondrag/petsc-N5i8ny/config.utilities.cacheDetails -I/p/work2/tmondrag/petsc-N5i8ny/config.headers -I/p/work2/tmondrag/petsc-N5i8ny/config.types -I/p/work2/tmondrag/petsc-N5i8ny/config.functions -I/p/work2/tmondrag/petsc-N5i8ny/config.utilities.featureTestMacros -I/p/work2/tmondrag/petsc-N5i8ny/config.utilities.missing -I/p/work2/tmondrag/petsc-N5i8ny/PETSc.options.scalarTypes -I/p/work2/tmondrag/petsc-N5i8ny/config.libraries -I/p/work2/tmondrag/petsc-N5i8ny/config.packages.MPI -fPIC -Wall -Wwrite-strings -Wno-strict-aliasing -Wno-unknown-pragmas -fstack-protector -fvisibility=hidden -g -O /p/work2/tmondrag/petsc-N5i8ny/config.packages.MPI/conftest.c
Successful compile:
Source:
#include "confdefs.h"
#include "conffix.h"
#include <mpi.h>
int main() {
MPI_Comm_errhandler_fn * p_err_fun = 0; MPI_Errhandler * p_errhandler = 0;
if (MPI_Comm_create_errhandler(p_err_fun,p_errhandler));
; return 0; }
Running Executable WITHOUT threads to time it out
Executing: mpicc -o /p/work2/tmondrag/petsc-N5i8ny/config.packages.MPI/conftest -fPIC -Wall -Wwrite-strings -Wno-strict-aliasing -Wno-unknown-pragmas -fstack-protector -fvisibility=hidden -g -O /p/work2/tmondrag/petsc-N5i8ny/config.packages.MPI/conftest.o -lquadmath -lstdc++ -ldl
Running Executable WITHOUT threads to time it out
Executing: mpicc -c -o /p/work2/tmondrag/petsc-N5i8ny/config.packages.MPI/conftest.o -I/p/work2/tmondrag/petsc-N5i8ny/config.setCompilers -I/p/work2/tmondrag/petsc-N5i8ny/config.utilities.closure -I/p/work2/tmondrag/petsc-N5i8ny/config.compilers -I/p/work2/tmondrag/petsc-N5i8ny/config.utilities.cacheDetails -I/p/work2/tmondrag/petsc-N5i8ny/config.headers -I/p/work2/tmondrag/petsc-N5i8ny/config.types -I/p/work2/tmondrag/petsc-N5i8ny/config.functions -I/p/work2/tmondrag/petsc-N5i8ny/config.utilities.featureTestMacros -I/p/work2/tmondrag/petsc-N5i8ny/config.utilities.missing -I/p/work2/tmondrag/petsc-N5i8ny/PETSc.options.scalarTypes -I/p/work2/tmondrag/petsc-N5i8ny/config.libraries -I/p/work2/tmondrag/petsc-N5i8ny/config.packages.MPI -fPIC -Wall -Wwrite-strings -Wno-strict-aliasing -Wno-unknown-pragmas -fstack-protector -fvisibility=hidden -g -O /p/work2/tmondrag/petsc-N5i8ny/config.packages.MPI/conftest.c
Successful compile:
Source:
#include "confdefs.h"
#include "conffix.h"
#include <mpi.h>
int main() {
if (MPI_Comm_set_errhandler(MPI_COMM_WORLD,MPI_ERRORS_RETURN));
; return 0; }
Running Executable WITHOUT threads to time it out
Executing: mpicc -o /p/work2/tmondrag/petsc-N5i8ny/config.packages.MPI/conftest -fPIC -Wall -Wwrite-strings -Wno-strict-aliasing -Wno-unknown-pragmas -fstack-protector -fvisibility=hidden -g -O /p/work2/tmondrag/petsc-N5i8ny/config.packages.MPI/conftest.o -lquadmath -lstdc++ -ldl
Running Executable WITHOUT threads to time it out
Executing: mpicc -c -o /p/work2/tmondrag/petsc-N5i8ny/config.packages.MPI/conftest.o -I/p/work2/tmondrag/petsc-N5i8ny/config.setCompilers -I/p/work2/tmondrag/petsc-N5i8ny/config.utilities.closure -I/p/work2/tmondrag/petsc-N5i8ny/config.compilers -I/p/work2/tmondrag/petsc-N5i8ny/config.utilities.cacheDetails -I/p/work2/tmondrag/petsc-N5i8ny/config.headers -I/p/work2/tmondrag/petsc-N5i8ny/config.types -I/p/work2/tmondrag/petsc-N5i8ny/config.functions -I/p/work2/tmondrag/petsc-N5i8ny/config.utilities.featureTestMacros -I/p/work2/tmondrag/petsc-N5i8ny/config.utilities.missing -I/p/work2/tmondrag/petsc-N5i8ny/PETSc.options.scalarTypes -I/p/work2/tmondrag/petsc-N5i8ny/config.libraries -I/p/work2/tmondrag/petsc-N5i8ny/config.packages.MPI -fPIC -Wall -Wwrite-strings -Wno-strict-aliasing -Wno-unknown-pragmas -fstack-protector -fvisibility=hidden -g -O /p/work2/tmondrag/petsc-N5i8ny/config.packages.MPI/conftest.c
Successful compile:
Source:
#include "confdefs.h"
#include "conffix.h"
#include <mpi.h>
int main() {
if (MPI_Reduce_local(0, 0, 0, MPI_INT, MPI_SUM));;
 return 0; }
Running Executable WITHOUT threads to time it out
Executing: mpicc -o /p/work2/tmondrag/petsc-N5i8ny/config.packages.MPI/conftest -fPIC -Wall -Wwrite-strings -Wno-strict-aliasing -Wno-unknown-pragmas -fstack-protector -fvisibility=hidden -g -O /p/work2/tmondrag/petsc-N5i8ny/config.packages.MPI/conftest.o -lquadmath -lstdc++ -ldl
Defined "HAVE_MPI_REDUCE_LOCAL" to "1"
Running Executable WITHOUT threads to time it out
Executing: mpicc -c -o /p/work2/tmondrag/petsc-N5i8ny/config.packages.MPI/conftest.o -I/p/work2/tmondrag/petsc-N5i8ny/config.setCompilers -I/p/work2/tmondrag/petsc-N5i8ny/config.utilities.closure -I/p/work2/tmondrag/petsc-N5i8ny/config.compilers -I/p/work2/tmondrag/petsc-N5i8ny/config.utilities.cacheDetails -I/p/work2/tmondrag/petsc-N5i8ny/config.headers -I/p/work2/tmondrag/petsc-N5i8ny/config.types -I/p/work2/tmondrag/petsc-N5i8ny/config.functions -I/p/work2/tmondrag/petsc-N5i8ny/config.utilities.featureTestMacros -I/p/work2/tmondrag/petsc-N5i8ny/config.utilities.missing -I/p/work2/tmondrag/petsc-N5i8ny/PETSc.options.scalarTypes -I/p/work2/tmondrag/petsc-N5i8ny/config.libraries -I/p/work2/tmondrag/petsc-N5i8ny/config.packages.MPI -fPIC -Wall -Wwrite-strings -Wno-strict-aliasing -Wno-unknown-pragmas -fstack-protector -fvisibility=hidden -g -O /p/work2/tmondrag/petsc-N5i8ny/config.packages.MPI/conftest.c
Successful compile:
Source:
#include "confdefs.h"
#include "conffix.h"
#include <mpi.h>
int main() {
char version[MPI_MAX_LIBRARY_VERSION_STRING];int verlen;if (MPI_Get_library_version(version,&verlen));
; return 0; }
Running Executable WITHOUT threads to time it out
Executing: mpicc -o /p/work2/tmondrag/petsc-N5i8ny/config.packages.MPI/conftest -fPIC -Wall -Wwrite-strings -Wno-strict-aliasing -Wno-unknown-pragmas -fstack-protector -fvisibility=hidden -g -O /p/work2/tmondrag/petsc-N5i8ny/config.packages.MPI/conftest.o -lquadmath -lstdc++ -ldl
Defined "HAVE_MPI_GET_LIBRARY_VERSION" to "1"
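(Every "Defined HAVE_MPI_..." above comes from one compile-and-link of a single call; nothing is executed at this stage. For instance, a runnable variant of the MPI_IN_PLACE probe, illustrative since configure's conftest stops at the link:

    /* inplace_probe.c -- build with mpicc, run via your batch launcher */
    #include <mpi.h>
    #include <stdio.h>
    int main(int argc, char **argv) {
      int v = 1;
      MPI_Init(&argc, &argv);
      /* reduce into the same buffer: the MPI-2 feature being probed */
      MPI_Allreduce(MPI_IN_PLACE, &v, 1, MPI_INT, MPI_SUM, MPI_COMM_SELF);
      printf("v = %d\n", v);   /* still 1 on a single rank */
      MPI_Finalize();
      return 0;
    }
)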
Running Executable WITHOUT threads to time it out
Executing: mpicc -c -o /p/work2/tmondrag/petsc-N5i8ny/config.packages.MPI/conftest.o -I/p/work2/tmondrag/petsc-N5i8ny/config.setCompilers -I/p/work2/tmondrag/petsc-N5i8ny/config.utilities.closure -I/p/work2/tmondrag/petsc-N5i8ny/config.compilers -I/p/work2/tmondrag/petsc-N5i8ny/config.utilities.cacheDetails -I/p/work2/tmondrag/petsc-N5i8ny/config.headers -I/p/work2/tmondrag/petsc-N5i8ny/config.types -I/p/work2/tmondrag/petsc-N5i8ny/config.functions -I/p/work2/tmondrag/petsc-N5i8ny/config.utilities.featureTestMacros -I/p/work2/tmondrag/petsc-N5i8ny/config.utilities.missing -I/p/work2/tmondrag/petsc-N5i8ny/PETSc.options.scalarTypes -I/p/work2/tmondrag/petsc-N5i8ny/config.libraries -I/p/work2/tmondrag/petsc-N5i8ny/config.packages.MPI -fPIC -Wall -Wwrite-strings -Wno-strict-aliasing -Wno-unknown-pragmas -fstack-protector -fvisibility=hidden -g -O /p/work2/tmondrag/petsc-N5i8ny/config.packages.MPI/conftest.c
Successful compile:
Source:
#include "confdefs.h"
#include "conffix.h"
#include <mpi.h>
int main() {
int base[100]; MPI_Win win; if (MPI_Win_create(base,100,4,MPI_INFO_NULL,MPI_COMM_WORLD,&win));
; return 0; }
Running Executable WITHOUT threads to time it out
Executing: mpicc -o /p/work2/tmondrag/petsc-N5i8ny/config.packages.MPI/conftest -fPIC -Wall -Wwrite-strings -Wno-strict-aliasing -Wno-unknown-pragmas -fstack-protector -fvisibility=hidden -g -O /p/work2/tmondrag/petsc-N5i8ny/config.packages.MPI/conftest.o -lquadmath -lstdc++ -ldl
Defined "HAVE_MPI_WIN_CREATE" to "1"
Defined "HAVE_MPI_ONE_SIDED" to "1"
================================================================================
TEST configureMPI3 from config.packages.MPI(/p/work2/tmondrag/moose/petsc/config/BuildSystem/config/packages/MPI.py:298)
TESTING: configureMPI3 from config.packages.MPI(/p/work2/tmondrag/moose/petsc/config/BuildSystem/config/packages/MPI.py:298)
Check for functions added to the interface in MPI-3
Running Executable WITHOUT threads to time it out
Executing: mpicc -c -o /p/work2/tmondrag/petsc-N5i8ny/config.packages.MPI/conftest.o -I/p/work2/tmondrag/petsc-N5i8ny/config.setCompilers -I/p/work2/tmondrag/petsc-N5i8ny/config.utilities.closure -I/p/work2/tmondrag/petsc-N5i8ny/config.compilers -I/p/work2/tmondrag/petsc-N5i8ny/config.utilities.cacheDetails -I/p/work2/tmondrag/petsc-N5i8ny/config.headers -I/p/work2/tmondrag/petsc-N5i8ny/config.types -I/p/work2/tmondrag/petsc-N5i8ny/config.functions -I/p/work2/tmondrag/petsc-N5i8ny/config.utilities.featureTestMacros -I/p/work2/tmondrag/petsc-N5i8ny/config.utilities.missing -I/p/work2/tmondrag/petsc-N5i8ny/PETSc.options.scalarTypes -I/p/work2/tmondrag/petsc-N5i8ny/config.libraries -I/p/work2/tmondrag/petsc-N5i8ny/config.packages.MPI -fPIC -Wall -Wwrite-strings -Wno-strict-aliasing -Wno-unknown-pragmas -fstack-protector -fvisibility=hidden -g -O /p/work2/tmondrag/petsc-N5i8ny/config.packages.MPI/conftest.c
Successful compile:
Source:
#include "confdefs.h"
#include "conffix.h"
#include <mpi.h>
int main() {
MPI_Comm scomm; MPI_Aint size=128; int disp_unit=8,*baseptr; MPI_Win win;
if (MPI_Comm_split_type(MPI_COMM_WORLD, MPI_COMM_TYPE_SHARED, 0, MPI_INFO_NULL, &scomm));
if (MPI_Win_allocate_shared(size,disp_unit,MPI_INFO_NULL,MPI_COMM_WORLD,&baseptr,&win));
if (MPI_Win_shared_query(win,0,&size,&disp_unit,&baseptr));
; return 0; }
Running Executable WITHOUT threads to time it out
Executing: mpicc -o /p/work2/tmondrag/petsc-N5i8ny/config.packages.MPI/conftest -fPIC -Wall -Wwrite-strings -Wno-strict-aliasing -Wno-unknown-pragmas -fstack-protector -fvisibility=hidden -g -O /p/work2/tmondrag/petsc-N5i8ny/config.packages.MPI/conftest.o -lquadmath -lstdc++ -ldl
Defined "HAVE_MPI_PROCESS_SHARED_MEMORY" to "1"
Running Executable WITHOUT threads to time it out
Executing: mpicc -c -o /p/work2/tmondrag/petsc-N5i8ny/config.packages.MPI/conftest.o -I/p/work2/tmondrag/petsc-N5i8ny/config.setCompilers -I/p/work2/tmondrag/petsc-N5i8ny/config.utilities.closure -I/p/work2/tmondrag/petsc-N5i8ny/config.compilers -I/p/work2/tmondrag/petsc-N5i8ny/config.utilities.cacheDetails -I/p/work2/tmondrag/petsc-N5i8ny/config.headers -I/p/work2/tmondrag/petsc-N5i8ny/config.types -I/p/work2/tmondrag/petsc-N5i8ny/config.functions -I/p/work2/tmondrag/petsc-N5i8ny/config.utilities.featureTestMacros -I/p/work2/tmondrag/petsc-N5i8ny/config.utilities.missing -I/p/work2/tmondrag/petsc-N5i8ny/PETSc.options.scalarTypes -I/p/work2/tmondrag/petsc-N5i8ny/config.libraries -I/p/work2/tmondrag/petsc-N5i8ny/config.packages.MPI -fPIC -Wall -Wwrite-strings -Wno-strict-aliasing -Wno-unknown-pragmas -fstack-protector -fvisibility=hidden -g -O /p/work2/tmondrag/petsc-N5i8ny/config.packages.MPI/conftest.c
Possible ERROR while running compiler: stderr:
/p/work2/tmondrag/petsc-N5i8ny/config.packages.MPI/conftest.c: In function 'main':
/p/work2/tmondrag/petsc-N5i8ny/config.packages.MPI/conftest.c:7:24: warning: this 'if' clause does not guard... [-Wmisleading-indentation]
 if (MPI_Iscatter(&send,1,MPI_INT,&recv,1,MPI_INT,0,MPI_COMM_WORLD,&req));
 ^~
/p/work2/tmondrag/petsc-N5i8ny/config.packages.MPI/conftest.c:8:25: note: ...this statement, but the latter is misleadingly indented as if it were guarded by the 'if'
 if (MPI_Iscatterv(&send,counts,displs,MPI_INT,&recv,1,MPI_INT,0,MPI_COMM_WORLD,&req));
 ^~
Source:
#include "confdefs.h"
#include "conffix.h"
#include <mpi.h>
int main() {
int send=0,recv,counts[2]={1,1},displs[2]={1,2}; MPI_Request req;
if (MPI_Iscatter(&send,1,MPI_INT,&recv,1,MPI_INT,0,MPI_COMM_WORLD,&req));
if (MPI_Iscatterv(&send,counts,displs,MPI_INT,&recv,1,MPI_INT,0,MPI_COMM_WORLD,&req));
if (MPI_Igather(&send,1,MPI_INT,&recv,1,MPI_INT,0,MPI_COMM_WORLD,&req));
if (MPI_Igatherv(&send,1,MPI_INT,&recv,counts,displs,MPI_INT,0,MPI_COMM_WORLD,&req));
if (MPI_Iallgather(&send,1,MPI_INT,&recv,1,MPI_INT,MPI_COMM_WORLD,&req));
if (MPI_Iallgatherv(&send,1,MPI_INT,&recv,counts,displs,MPI_INT,MPI_COMM_WORLD,&req));
if (MPI_Ialltoall(&send,1,MPI_INT,&recv,1,MPI_INT,MPI_COMM_WORLD,&req));
; return 0; }
Running Executable WITHOUT threads to time it out
Executing: mpicc -o /p/work2/tmondrag/petsc-N5i8ny/config.packages.MPI/conftest -fPIC -Wall -Wwrite-strings -Wno-strict-aliasing -Wno-unknown-pragmas -fstack-protector -fvisibility=hidden -g -O /p/work2/tmondrag/petsc-N5i8ny/config.packages.MPI/conftest.o -lquadmath -lstdc++ -ldl
Defined "HAVE_MPI_NONBLOCKING_COLLECTIVES" to "1"
Running Executable WITHOUT threads to time it out
Executing: mpicc -c -o /p/work2/tmondrag/petsc-N5i8ny/config.packages.MPI/conftest.o -I/p/work2/tmondrag/petsc-N5i8ny/config.setCompilers -I/p/work2/tmondrag/petsc-N5i8ny/config.utilities.closure -I/p/work2/tmondrag/petsc-N5i8ny/config.compilers -I/p/work2/tmondrag/petsc-N5i8ny/config.utilities.cacheDetails -I/p/work2/tmondrag/petsc-N5i8ny/config.headers -I/p/work2/tmondrag/petsc-N5i8ny/config.types -I/p/work2/tmondrag/petsc-N5i8ny/config.functions -I/p/work2/tmondrag/petsc-N5i8ny/config.utilities.featureTestMacros -I/p/work2/tmondrag/petsc-N5i8ny/config.utilities.missing -I/p/work2/tmondrag/petsc-N5i8ny/PETSc.options.scalarTypes -I/p/work2/tmondrag/petsc-N5i8ny/config.libraries -I/p/work2/tmondrag/petsc-N5i8ny/config.packages.MPI -fPIC -Wall -Wwrite-strings -Wno-strict-aliasing -Wno-unknown-pragmas -fstack-protector -fvisibility=hidden -g -O /p/work2/tmondrag/petsc-N5i8ny/config.packages.MPI/conftest.c
Successful compile:
Source:
#include "confdefs.h"
#include "conffix.h"
#include <mpi.h>
int main() {
MPI_Comm distcomm; MPI_Request req;
if (MPI_Dist_graph_create_adjacent(MPI_COMM_WORLD,0,0,MPI_WEIGHTS_EMPTY,0,0,MPI_WEIGHTS_EMPTY,MPI_INFO_NULL,0,&distcomm));
if (MPI_Neighbor_alltoallv(0,0,0,MPI_INT,0,0,0,MPI_INT,distcomm));
if (MPI_Ineighbor_alltoallv(0,0,0,MPI_INT,0,0,0,MPI_INT,distcomm,&req));
; return 0; }
Running Executable WITHOUT threads to time it out
Executing: mpicc -o /p/work2/tmondrag/petsc-N5i8ny/config.packages.MPI/conftest -fPIC -Wall -Wwrite-strings -Wno-strict-aliasing -Wno-unknown-pragmas -fstack-protector -fvisibility=hidden -g -O /p/work2/tmondrag/petsc-N5i8ny/config.packages.MPI/conftest.o -lquadmath -lstdc++ -ldl
Defined "HAVE_MPI_NEIGHBORHOOD_COLLECTIVES" to "1"
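(The shared-memory probe above covers the MPI-3 trio PETSc later uses for on-node communication. A runnable sketch of the same three calls; illustrative, since configure only compiles and links it and, under this batch system, could not run it anyway:

    /* shm_probe.c -- MPI-3 shared-memory window probe, fleshed out to run */
    #include <mpi.h>
    int main(int argc, char **argv) {
      MPI_Comm scomm;
      MPI_Aint size = 128;
      int disp_unit = 8, *baseptr;
      MPI_Win win;
      MPI_Init(&argc, &argv);
      /* split off the ranks sharing this node, then allocate a shared window
         (unlike the conftest, we allocate on scomm so it also runs correctly) */
      MPI_Comm_split_type(MPI_COMM_WORLD, MPI_COMM_TYPE_SHARED, 0, MPI_INFO_NULL, &scomm);
      MPI_Win_allocate_shared(size, disp_unit, MPI_INFO_NULL, scomm, &baseptr, &win);
      MPI_Win_shared_query(win, 0, &size, &disp_unit, &baseptr);
      MPI_Win_free(&win);
      MPI_Comm_free(&scomm);
      MPI_Finalize();
      return 0;
    }
)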
Running Executable WITHOUT threads to time it out
Executing: mpicc -c -o /p/work2/tmondrag/petsc-N5i8ny/config.packages.MPI/conftest.o -I/p/work2/tmondrag/petsc-N5i8ny/config.setCompilers -I/p/work2/tmondrag/petsc-N5i8ny/config.utilities.closure -I/p/work2/tmondrag/petsc-N5i8ny/config.compilers -I/p/work2/tmondrag/petsc-N5i8ny/config.utilities.cacheDetails -I/p/work2/tmondrag/petsc-N5i8ny/config.headers -I/p/work2/tmondrag/petsc-N5i8ny/config.types -I/p/work2/tmondrag/petsc-N5i8ny/config.functions -I/p/work2/tmondrag/petsc-N5i8ny/config.utilities.featureTestMacros -I/p/work2/tmondrag/petsc-N5i8ny/config.utilities.missing -I/p/work2/tmondrag/petsc-N5i8ny/PETSc.options.scalarTypes -I/p/work2/tmondrag/petsc-N5i8ny/config.libraries -I/p/work2/tmondrag/petsc-N5i8ny/config.packages.MPI -fPIC -Wall -Wwrite-strings -Wno-strict-aliasing -Wno-unknown-pragmas -fstack-protector -fvisibility=hidden -g -O /p/work2/tmondrag/petsc-N5i8ny/config.packages.MPI/conftest.c
Possible ERROR while running compiler: stderr:
/p/work2/tmondrag/petsc-N5i8ny/config.packages.MPI/conftest.c: In function 'main':
/p/work2/tmondrag/petsc-N5i8ny/config.packages.MPI/conftest.c:6:30: warning: 'win' is used uninitialized in this function [-Wuninitialized]
 int ptr[1]; MPI_Win win; if (MPI_Get_accumulate(ptr,1,MPI_INT,ptr,1,MPI_INT,0,0,1,MPI_INT,MPI_SUM,win));
                              ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Source:
#include "confdefs.h"
#include "conffix.h"
#include <mpi.h>

int main() {
int ptr[1]; MPI_Win win;
if (MPI_Get_accumulate(ptr,1,MPI_INT,ptr,1,MPI_INT,0,0,1,MPI_INT,MPI_SUM,win));
;
  return 0;
}
Running Executable WITHOUT threads to time it out
Executing: mpicc -o /p/work2/tmondrag/petsc-N5i8ny/config.packages.MPI/conftest -fPIC -Wall -Wwrite-strings -Wno-strict-aliasing -Wno-unknown-pragmas -fstack-protector -fvisibility=hidden -g -O /p/work2/tmondrag/petsc-N5i8ny/config.packages.MPI/conftest.o -lquadmath -lstdc++ -ldl
Defined "HAVE_MPI_GET_ACCUMULATE" to "1"
Running Executable WITHOUT threads to time it out
Executing: mpicc -c -o /p/work2/tmondrag/petsc-N5i8ny/config.packages.MPI/conftest.o -I/p/work2/tmondrag/petsc-N5i8ny/config.setCompilers -I/p/work2/tmondrag/petsc-N5i8ny/config.utilities.closure -I/p/work2/tmondrag/petsc-N5i8ny/config.compilers -I/p/work2/tmondrag/petsc-N5i8ny/config.utilities.cacheDetails -I/p/work2/tmondrag/petsc-N5i8ny/config.headers -I/p/work2/tmondrag/petsc-N5i8ny/config.types -I/p/work2/tmondrag/petsc-N5i8ny/config.functions -I/p/work2/tmondrag/petsc-N5i8ny/config.utilities.featureTestMacros -I/p/work2/tmondrag/petsc-N5i8ny/config.utilities.missing -I/p/work2/tmondrag/petsc-N5i8ny/PETSc.options.scalarTypes -I/p/work2/tmondrag/petsc-N5i8ny/config.libraries -I/p/work2/tmondrag/petsc-N5i8ny/config.packages.MPI -fPIC -Wall -Wwrite-strings -Wno-strict-aliasing -Wno-unknown-pragmas -fstack-protector -fvisibility=hidden -g -O /p/work2/tmondrag/petsc-N5i8ny/config.packages.MPI/conftest.c
Possible ERROR while running compiler: stderr:
/p/work2/tmondrag/petsc-N5i8ny/config.packages.MPI/conftest.c: In function 'main':
/p/work2/tmondrag/petsc-N5i8ny/config.packages.MPI/conftest.c:6:47: warning: 'win' is used uninitialized in this function [-Wuninitialized]
 int ptr[1]; MPI_Win win; MPI_Request req; if (MPI_Rget(ptr,1,MPI_INT,0,1,1,MPI_INT,win,&req));
                                               ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Source:
#include "confdefs.h"
#include "conffix.h"
#include <mpi.h>

int main() {
int ptr[1]; MPI_Win win; MPI_Request req;
if (MPI_Rget(ptr,1,MPI_INT,0,1,1,MPI_INT,win,&req));
;
  return 0;
}
Running Executable WITHOUT threads to time it out
Executing: mpicc -o /p/work2/tmondrag/petsc-N5i8ny/config.packages.MPI/conftest -fPIC -Wall -Wwrite-strings -Wno-strict-aliasing -Wno-unknown-pragmas -fstack-protector -fvisibility=hidden -g -O /p/work2/tmondrag/petsc-N5i8ny/config.packages.MPI/conftest.o -lquadmath -lstdc++ -ldl
Defined "HAVE_MPI_RGET" to "1"
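(Again, the uninitialized 'win' warnings are expected: the probe only has to link. A correct standalone use of the request-based one-sided MPI_Rget being detected -- a sketch assuming the unified memory model typical of MPI_Win_allocate windows, error handling omitted -- is roughly:

  #include <stdio.h>
  #include <mpi.h>
  int main(int argc, char **argv) {
    MPI_Win win;
    MPI_Request req;
    int *base, val;
    MPI_Init(&argc, &argv);
    MPI_Win_allocate(sizeof(int), sizeof(int), MPI_INFO_NULL, MPI_COMM_WORLD, &base, &win);
    *base = 42;                              /* every rank publishes a value */
    MPI_Barrier(MPI_COMM_WORLD);             /* sketch-level synchronization */
    MPI_Win_lock(MPI_LOCK_SHARED, 0, 0, win);
    MPI_Rget(&val, 1, MPI_INT, 0, 0, 1, MPI_INT, win, &req);
    MPI_Wait(&req, MPI_STATUS_IGNORE);       /* complete the get before use */
    MPI_Win_unlock(0, win);
    printf("read %d from rank 0\n", val);
    MPI_Win_free(&win);
    MPI_Finalize();
    return 0;
  }
)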
================================================================================
TEST configureMPIEXEC from config.packages.MPI(/p/work2/tmondrag/moose/petsc/config/BuildSystem/config/packages/MPI.py:175)
TESTING: configureMPIEXEC from config.packages.MPI(/p/work2/tmondrag/moose/petsc/config/BuildSystem/config/packages/MPI.py:175)
  Checking for location of mpiexec
Defined make macro "MPIEXEC" to "Not_appropriate_for_batch_systems_You_must_use_your_batch_system_to_submit_MPI_jobs_speak_with_your_local_sys_admin"
================================================================================
TEST configureMPITypes from config.packages.MPI(/p/work2/tmondrag/moose/petsc/config/BuildSystem/config/packages/MPI.py:345)
TESTING: configureMPITypes from config.packages.MPI(/p/work2/tmondrag/moose/petsc/config/BuildSystem/config/packages/MPI.py:345)
  Checking for MPI Datatype handles
Running Executable WITHOUT threads to time it out
Executing: mpicc -c -o /p/work2/tmondrag/petsc-N5i8ny/config.packages.MPI/conftest.o -I/p/work2/tmondrag/petsc-N5i8ny/config.setCompilers -I/p/work2/tmondrag/petsc-N5i8ny/config.utilities.closure -I/p/work2/tmondrag/petsc-N5i8ny/config.compilers -I/p/work2/tmondrag/petsc-N5i8ny/config.utilities.cacheDetails -I/p/work2/tmondrag/petsc-N5i8ny/config.headers -I/p/work2/tmondrag/petsc-N5i8ny/config.types -I/p/work2/tmondrag/petsc-N5i8ny/config.functions -I/p/work2/tmondrag/petsc-N5i8ny/config.utilities.featureTestMacros -I/p/work2/tmondrag/petsc-N5i8ny/config.utilities.missing -I/p/work2/tmondrag/petsc-N5i8ny/PETSc.options.scalarTypes -I/p/work2/tmondrag/petsc-N5i8ny/config.libraries -I/p/work2/tmondrag/petsc-N5i8ny/config.packages.MPI -fPIC -Wall -Wwrite-strings -Wno-strict-aliasing -Wno-unknown-pragmas -fstack-protector -fvisibility=hidden -g -O /p/work2/tmondrag/petsc-N5i8ny/config.packages.MPI/conftest.c
Successful compile:
Source:
#include "confdefs.h"
#include "conffix.h"
#include <stdlib.h>
#include <mpi.h>

int main() {
int size;
int ierr;
MPI_Init(0,0);
ierr = MPI_Type_size(MPI_LONG_DOUBLE, &size);
if(ierr || (size == 0)) exit(1);
MPI_Finalize();
;
  return 0;
}
===============================================================================
***** WARNING: Cannot determine if MPI_LONG_DOUBLE works on your system in batch-mode! Assuming it does work.
Run with --known-mpi-long-double=0 if you know it does not work (very unlikely).
Run with --known-mpi-long-double=1 to remove this warning message *****
===============================================================================
Defined "HAVE_MPI_LONG_DOUBLE" to "1"
Running Executable WITHOUT threads to time it out
Executing: mpicc -c -o /p/work2/tmondrag/petsc-N5i8ny/config.packages.MPI/conftest.o -I/p/work2/tmondrag/petsc-N5i8ny/config.setCompilers -I/p/work2/tmondrag/petsc-N5i8ny/config.utilities.closure -I/p/work2/tmondrag/petsc-N5i8ny/config.compilers -I/p/work2/tmondrag/petsc-N5i8ny/config.utilities.cacheDetails -I/p/work2/tmondrag/petsc-N5i8ny/config.headers -I/p/work2/tmondrag/petsc-N5i8ny/config.types -I/p/work2/tmondrag/petsc-N5i8ny/config.functions -I/p/work2/tmondrag/petsc-N5i8ny/config.utilities.featureTestMacros -I/p/work2/tmondrag/petsc-N5i8ny/config.utilities.missing -I/p/work2/tmondrag/petsc-N5i8ny/PETSc.options.scalarTypes -I/p/work2/tmondrag/petsc-N5i8ny/config.libraries -I/p/work2/tmondrag/petsc-N5i8ny/config.packages.MPI -fPIC -Wall -Wwrite-strings -Wno-strict-aliasing -Wno-unknown-pragmas -fstack-protector -fvisibility=hidden -g -O /p/work2/tmondrag/petsc-N5i8ny/config.packages.MPI/conftest.c
Successful compile:
Source:
#include "confdefs.h"
#include "conffix.h"
#include <stdlib.h>
#include <mpi.h>

int main() {
int size;
int ierr;
MPI_Init(0,0);
ierr = MPI_Type_size(MPI_INT64_T, &size);
if(ierr || (size == 0)) exit(1);
MPI_Finalize();
;
  return 0;
}
===============================================================================
***** WARNING: Cannot determine if MPI_INT64_T works on your system in batch-mode! Assuming it does work.
Run with --known-mpi-int64_t=0 if you know it does not work (very unlikely).
Run with --known-mpi-int64_t=1 to remove this warning message *****
===============================================================================
Defined "HAVE_MPI_INT64_T" to "1"
Running Executable WITHOUT threads to time it out
Executing: mpicc -c -o /p/work2/tmondrag/petsc-N5i8ny/config.packages.MPI/conftest.o -I/p/work2/tmondrag/petsc-N5i8ny/config.setCompilers -I/p/work2/tmondrag/petsc-N5i8ny/config.utilities.closure -I/p/work2/tmondrag/petsc-N5i8ny/config.compilers -I/p/work2/tmondrag/petsc-N5i8ny/config.utilities.cacheDetails -I/p/work2/tmondrag/petsc-N5i8ny/config.headers -I/p/work2/tmondrag/petsc-N5i8ny/config.types -I/p/work2/tmondrag/petsc-N5i8ny/config.functions -I/p/work2/tmondrag/petsc-N5i8ny/config.utilities.featureTestMacros -I/p/work2/tmondrag/petsc-N5i8ny/config.utilities.missing -I/p/work2/tmondrag/petsc-N5i8ny/PETSc.options.scalarTypes -I/p/work2/tmondrag/petsc-N5i8ny/config.libraries -I/p/work2/tmondrag/petsc-N5i8ny/config.packages.MPI -fPIC -Wall -Wwrite-strings -Wno-strict-aliasing -Wno-unknown-pragmas -fstack-protector -fvisibility=hidden -g -O /p/work2/tmondrag/petsc-N5i8ny/config.packages.MPI/conftest.c
Successful compile:
Source:
#include "confdefs.h"
#include "conffix.h"
#include <stdlib.h>
#include <mpi.h>

int main() {
int size;
int ierr;
MPI_Init(0,0);
ierr = MPI_Type_size(MPI_C_DOUBLE_COMPLEX, &size);
if(ierr || (size == 0)) exit(1);
MPI_Finalize();
;
  return 0;
}
===============================================================================
***** WARNING: Cannot determine if MPI_C_DOUBLE_COMPLEX works on your system in batch-mode! Assuming it does work.
Run with --known-mpi-c-double-complex=0 if you know it does not work (very unlikely).
Run with --known-mpi-c-double-complex=1 to remove this warning message *****
===============================================================================
Defined "HAVE_MPI_C_DOUBLE_COMPLEX" to "1"
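(These three warnings appear because, on a batch system, configure cannot launch executables to test the datatypes at runtime. If you want to verify the assumption rather than trust it, a standalone check you can submit through the batch system yourself -- a sketch, not PETSc's own conftest -- is:

  #include <stdio.h>
  #include <mpi.h>
  int main(int argc, char **argv) {
    int size;
    MPI_Init(&argc, &argv);
    /* A nonzero size means the datatype is functional */
    MPI_Type_size(MPI_LONG_DOUBLE, &size);
    printf("MPI_LONG_DOUBLE size: %d\n", size);
    MPI_Finalize();
    return 0;
  }

If it prints a nonzero size, rerunning configure with --known-mpi-long-double=1, and analogously --known-mpi-int64_t=1 and --known-mpi-c-double-complex=1, silences the warnings.)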
================================================================================
TEST SGIMPICheck from config.packages.MPI(/p/work2/tmondrag/moose/petsc/config/BuildSystem/config/packages/MPI.py:406)
TESTING: SGIMPICheck from config.packages.MPI(/p/work2/tmondrag/moose/petsc/config/BuildSystem/config/packages/MPI.py:406)
  Returns true if SGI MPI is used
Checking for functions [MPI_SGI_barrier] in library [] []
Running Executable WITHOUT threads to time it out
Executing: mpicc -c -o /p/work2/tmondrag/petsc-N5i8ny/config.libraries/conftest.o -I/p/work2/tmondrag/petsc-N5i8ny/config.setCompilers -I/p/work2/tmondrag/petsc-N5i8ny/config.utilities.closure -I/p/work2/tmondrag/petsc-N5i8ny/config.compilers -I/p/work2/tmondrag/petsc-N5i8ny/config.utilities.cacheDetails -I/p/work2/tmondrag/petsc-N5i8ny/config.headers -I/p/work2/tmondrag/petsc-N5i8ny/config.types -I/p/work2/tmondrag/petsc-N5i8ny/config.functions -I/p/work2/tmondrag/petsc-N5i8ny/config.utilities.featureTestMacros -I/p/work2/tmondrag/petsc-N5i8ny/config.utilities.missing -I/p/work2/tmondrag/petsc-N5i8ny/PETSc.options.scalarTypes -I/p/work2/tmondrag/petsc-N5i8ny/config.libraries -I/p/work2/tmondrag/petsc-N5i8ny/config.packages.MPI -fPIC -Wall -Wwrite-strings -Wno-strict-aliasing -Wno-unknown-pragmas -fstack-protector -fvisibility=hidden -g -O /p/work2/tmondrag/petsc-N5i8ny/config.libraries/conftest.c
Successful compile:
Source:
#include "confdefs.h"
#include "conffix.h"
/* Override any gcc2 internal prototype to avoid an error. */
char MPI_SGI_barrier();
static void _check_MPI_SGI_barrier() { MPI_SGI_barrier(); }

int main() {
_check_MPI_SGI_barrier();;
  return 0;
}
Running Executable WITHOUT threads to time it out
Executing: mpicc -o /p/work2/tmondrag/petsc-N5i8ny/config.libraries/conftest -fPIC -Wall -Wwrite-strings -Wno-strict-aliasing -Wno-unknown-pragmas -fstack-protector -fvisibility=hidden -g -O /p/work2/tmondrag/petsc-N5i8ny/config.libraries/conftest.o -lquadmath -lstdc++ -ldl
SGI MPI detected - defining MISSING_SIGTERM
Defined "MISSING_SIGTERM" to "1"
================================================================================
TEST CxxMPICheck from config.packages.MPI(/p/work2/tmondrag/moose/petsc/config/BuildSystem/config/packages/MPI.py:416)
TESTING: CxxMPICheck from config.packages.MPI(/p/work2/tmondrag/moose/petsc/config/BuildSystem/config/packages/MPI.py:416)
  Make sure C++ can compile and link
Checking for header mpi.h
Checking for C++ MPI_Finalize()
Checking for functions [MPI_Finalize] in library [] []
Running Executable WITHOUT threads to time it out
Executing: mpicxx -c -o /p/work2/tmondrag/petsc-N5i8ny/config.libraries/conftest.o -I/p/work2/tmondrag/petsc-N5i8ny/config.setCompilers -I/p/work2/tmondrag/petsc-N5i8ny/config.compilers -I/p/work2/tmondrag/petsc-N5i8ny/config.functions -I/p/work2/tmondrag/petsc-N5i8ny/config.types -I/p/work2/tmondrag/petsc-N5i8ny/config.libraries -I/p/work2/tmondrag/petsc-N5i8ny/config.utilities.missing -Wall -Wwrite-strings -Wno-strict-aliasing -Wno-unknown-pragmas -fstack-protector -fvisibility=hidden -g -O -fPIC /p/work2/tmondrag/petsc-N5i8ny/config.libraries/conftest.cc
Possible ERROR while running compiler: stderr:
/p/work2/tmondrag/petsc-N5i8ny/config.libraries/conftest.cc: In function 'void _check_MPI_Finalize()':
/p/work2/tmondrag/petsc-N5i8ny/config.libraries/conftest.cc:5:41: warning: variable 'ierr' set but not used [-Wunused-but-set-variable]
 static void _check_MPI_Finalize() { int ierr;
                                         ^~~~
Source:
#include "confdefs.h"
#include "conffix.h"
/* Override any gcc2 internal prototype to avoid an error. */
#include <mpi.h>
static void _check_MPI_Finalize() { int ierr; ierr = MPI_Finalize();; }

int main() {
_check_MPI_Finalize();;
  return 0;
}
Running Executable WITHOUT threads to time it out
Executing: mpicxx -o /p/work2/tmondrag/petsc-N5i8ny/config.libraries/conftest -Wall -Wwrite-strings -Wno-strict-aliasing -Wno-unknown-pragmas -fstack-protector -fvisibility=hidden -g -O /p/work2/tmondrag/petsc-N5i8ny/config.libraries/conftest.o -lquadmath -lstdc++ -ldl
================================================================================
TEST FortranMPICheck from config.packages.MPI(/p/work2/tmondrag/moose/petsc/config/BuildSystem/config/packages/MPI.py:432)
TESTING: FortranMPICheck from config.packages.MPI(/p/work2/tmondrag/moose/petsc/config/BuildSystem/config/packages/MPI.py:432)
  Make sure fortran include [mpif.h] and library symbols are found
Checking for fortran mpi_init()
Checking for functions [] in library [] []
Running Executable WITHOUT threads to time it out
Executing: mpif90 -c -o /p/work2/tmondrag/petsc-N5i8ny/config.libraries/conftest.o -I/p/work2/tmondrag/petsc-N5i8ny/config.compilers -I/p/work2/tmondrag/petsc-N5i8ny/config.types -I/p/work2/tmondrag/petsc-N5i8ny/config.setCompilers -I/p/work2/tmondrag/petsc-N5i8ny/config.compilersFortran -I/p/work2/tmondrag/petsc-N5i8ny/config.libraries -fPIC -Wall -ffree-line-length-0 -Wno-unused-dummy-argument -g -O /p/work2/tmondrag/petsc-N5i8ny/config.libraries/conftest.F90
Possible ERROR while running compiler: stderr:
/p/home/apps/hpe/mpt-2.19/include/mpif.h:561:54:

        integer MPI_STATUSES_IGNORE(MPI_STATUS_SIZE,1)
                                                      1
Warning: Unused variable 'mpi_statuses_ignore' declared at (1) [-Wunused-variable]
Source:
      program main
#include "mpif.h"
       integer ierr
       call mpi_init(ierr)
       end
Running Executable WITHOUT threads to time it out
Executing: mpif90 -o /p/work2/tmondrag/petsc-N5i8ny/config.libraries/conftest -fPIC -Wall -ffree-line-length-0 -Wno-unused-dummy-argument -g -O /p/work2/tmondrag/petsc-N5i8ny/config.libraries/conftest.o -lquadmath -lstdc++ -ldl
Checking for mpi.mod
Checking for functions [] in library [] []
Running Executable WITHOUT threads to time it out
Executing: mpif90 -c -o /p/work2/tmondrag/petsc-N5i8ny/config.libraries/conftest.o -I/p/work2/tmondrag/petsc-N5i8ny/config.compilers -I/p/work2/tmondrag/petsc-N5i8ny/config.types -I/p/work2/tmondrag/petsc-N5i8ny/config.setCompilers -I/p/work2/tmondrag/petsc-N5i8ny/config.compilersFortran -I/p/work2/tmondrag/petsc-N5i8ny/config.libraries -fPIC -Wall -ffree-line-length-0 -Wno-unused-dummy-argument -g -O /p/work2/tmondrag/petsc-N5i8ny/config.libraries/conftest.F90
Possible ERROR while running compiler: exit code 1
stderr:
f951: Fatal Error: Reading module 'mpi' at line 1 column 2: Unexpected EOF
compilation terminated.
Source:
      program main
       use mpi
       integer ierr,rank
       call mpi_init(ierr)
       call mpi_comm_rank(MPI_COMM_WORLD,rank,ierr)
       end
Compile failed inside link
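(The configureIO probe that follows checks for the MPI_File_* functions. Its conftest deliberately leaves variables uninitialized since it is never run; a minimal well-defined use of the same API -- a sketch with a hypothetical output file "conftest.dat", error handling omitted -- looks like:

  #include <mpi.h>
  int main(int argc, char **argv) {
    MPI_File fh;
    int rank, buf;
    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    buf = rank;
    /* collectively create the file, each rank writing its own slot */
    MPI_File_open(MPI_COMM_WORLD, "conftest.dat",
                  MPI_MODE_CREATE | MPI_MODE_WRONLY, MPI_INFO_NULL, &fh);
    MPI_File_write_at_all(fh, (MPI_Offset)rank * (MPI_Offset)sizeof(int),
                          &buf, 1, MPI_INT, MPI_STATUS_IGNORE);
    MPI_File_close(&fh);
    MPI_Finalize();
    return 0;
  }
)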
================================================================================
TEST configureIO from config.packages.MPI(/p/work2/tmondrag/moose/petsc/config/BuildSystem/config/packages/MPI.py:454)
TESTING: configureIO from config.packages.MPI(/p/work2/tmondrag/moose/petsc/config/BuildSystem/config/packages/MPI.py:454)
  Check for the functions in MPI/IO
    - Define HAVE_MPIIO if they are present
    - Some older MPI 1 implementations are missing these
Running Executable WITHOUT threads to time it out
Executing: mpicc -c -o /p/work2/tmondrag/petsc-N5i8ny/config.packages.MPI/conftest.o -I/p/work2/tmondrag/petsc-N5i8ny/config.setCompilers -I/p/work2/tmondrag/petsc-N5i8ny/config.utilities.closure -I/p/work2/tmondrag/petsc-N5i8ny/config.compilers -I/p/work2/tmondrag/petsc-N5i8ny/config.utilities.cacheDetails -I/p/work2/tmondrag/petsc-N5i8ny/config.headers -I/p/work2/tmondrag/petsc-N5i8ny/config.types -I/p/work2/tmondrag/petsc-N5i8ny/config.functions -I/p/work2/tmondrag/petsc-N5i8ny/config.utilities.featureTestMacros -I/p/work2/tmondrag/petsc-N5i8ny/config.utilities.missing -I/p/work2/tmondrag/petsc-N5i8ny/PETSc.options.scalarTypes -I/p/work2/tmondrag/petsc-N5i8ny/config.libraries -I/p/work2/tmondrag/petsc-N5i8ny/config.packages.MPI -fPIC -Wall -Wwrite-strings -Wno-strict-aliasing -Wno-unknown-pragmas -fstack-protector -fvisibility=hidden -g -O /p/work2/tmondrag/petsc-N5i8ny/config.packages.MPI/conftest.c
Possible ERROR while running compiler: stderr:
/p/work2/tmondrag/petsc-N5i8ny/config.packages.MPI/conftest.c: In function 'main':
/p/work2/tmondrag/petsc-N5i8ny/config.packages.MPI/conftest.c:7:1: warning: this 'if' clause does not guard... [-Wmisleading-indentation]
 if (MPI_Type_get_extent(MPI_INT, &lb, &extent));
 ^~
/p/work2/tmondrag/petsc-N5i8ny/config.packages.MPI/conftest.c:9:50: note: ...this statement, but the latter is misleadingly indented as if it were guarded by the 'if'
 MPI_File fh;
 ^~~~~~~~
/p/work2/tmondrag/petsc-N5i8ny/config.packages.MPI/conftest.c:12:1: warning: this 'if' clause does not guard... [-Wmisleading-indentation]
 if (MPI_File_write_all(fh, buf, 1, MPI_INT, &status));
 ^~
/p/work2/tmondrag/petsc-N5i8ny/config.packages.MPI/conftest.c:14:50: note: ...this statement, but the latter is misleadingly indented as if it were guarded by the 'if'
 if (MPI_File_read_all(fh, buf, 1, MPI_INT, &status));
 ^~
/p/work2/tmondrag/petsc-N5i8ny/config.packages.MPI/conftest.c:18:1: warning: this 'if' clause does not guard... [-Wmisleading-indentation]
 if (MPI_File_set_view(fh, disp, MPI_INT, MPI_INT, "", info));
 ^~
/p/work2/tmondrag/petsc-N5i8ny/config.packages.MPI/conftest.c:20:50: note: ...this statement, but the latter is misleadingly indented as if it were guarded by the 'if'
 if (MPI_File_open(MPI_COMM_SELF, "", 0, info, &fh));
 ^~
/p/work2/tmondrag/petsc-N5i8ny/config.packages.MPI/conftest.c:12:5: warning: 'buf' is used uninitialized in this function [-Wuninitialized]
 if (MPI_File_write_all(fh, buf, 1, MPI_INT, &status));
     ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
/p/work2/tmondrag/petsc-N5i8ny/config.packages.MPI/conftest.c:18:5: warning: 'disp' is used uninitialized in this function [-Wuninitialized]
 if (MPI_File_set_view(fh, disp, MPI_INT, MPI_INT, "", info));
     ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
/p/work2/tmondrag/petsc-N5i8ny/config.packages.MPI/conftest.c:18:5: warning: 'info' is used uninitialized in this function [-Wuninitialized]
Source:
#include "confdefs.h"
#include "conffix.h"
#include <mpi.h>

int main() {
MPI_Aint lb, extent;
if (MPI_Type_get_extent(MPI_INT, &lb, &extent));
MPI_File fh;
void *buf;
MPI_Status status;
if (MPI_File_write_all(fh, buf, 1, MPI_INT, &status));
if (MPI_File_read_all(fh, buf, 1, MPI_INT, &status));
MPI_Offset disp;
MPI_Info info;
if (MPI_File_set_view(fh, disp, MPI_INT, MPI_INT, "", info));
if (MPI_File_open(MPI_COMM_SELF, "", 0, info, &fh));
if (MPI_File_close(&fh));
;
  return 0;
}
Running Executable WITHOUT threads to time it out
Executing: mpicc -o /p/work2/tmondrag/petsc-N5i8ny/config.packages.MPI/conftest -fPIC -Wall -Wwrite-strings -Wno-strict-aliasing -Wno-unknown-pragmas -fstack-protector -fvisibility=hidden -g -O /p/work2/tmondrag/petsc-N5i8ny/config.packages.MPI/conftest.o -lquadmath -lstdc++ -ldl
Defined "HAVE_MPIIO" to "1"
================================================================================
TEST findMPIInc from config.packages.MPI(/p/work2/tmondrag/moose/petsc/config/BuildSystem/config/packages/MPI.py:558)
TESTING: findMPIInc from config.packages.MPI(/p/work2/tmondrag/moose/petsc/config/BuildSystem/config/packages/MPI.py:558)
  Find MPI include paths from "mpicc -show" and use with CUDAC_FLAGS
================================================================================
TEST PetscArchMPICheck from config.packages.MPI(/p/work2/tmondrag/moose/petsc/config/BuildSystem/config/packages/MPI.py:592)
TESTING: PetscArchMPICheck from config.packages.MPI(/p/work2/tmondrag/moose/petsc/config/BuildSystem/config/packages/MPI.py:592)
Checking for functions [MPI_Type_get_envelope MPI_Type_dup MPI_Init_thread MPI_Iallreduce MPI_Ibarrier MPI_Finalized MPI_Exscan MPI_Reduce_scatter MPI_Reduce_scatter_block] in library [] []
Running Executable WITHOUT threads to time it out
Executing: mpicc -c -o /p/work2/tmondrag/petsc-N5i8ny/config.libraries/conftest.o -I/p/work2/tmondrag/petsc-N5i8ny/config.setCompilers -I/p/work2/tmondrag/petsc-N5i8ny/config.utilities.closure -I/p/work2/tmondrag/petsc-N5i8ny/config.compilers -I/p/work2/tmondrag/petsc-N5i8ny/config.utilities.cacheDetails -I/p/work2/tmondrag/petsc-N5i8ny/config.headers -I/p/work2/tmondrag/petsc-N5i8ny/config.types -I/p/work2/tmondrag/petsc-N5i8ny/config.functions -I/p/work2/tmondrag/petsc-N5i8ny/config.utilities.featureTestMacros -I/p/work2/tmondrag/petsc-N5i8ny/config.utilities.missing -I/p/work2/tmondrag/petsc-N5i8ny/PETSc.options.scalarTypes -I/p/work2/tmondrag/petsc-N5i8ny/config.libraries -I/p/work2/tmondrag/petsc-N5i8ny/config.packages.MPI -fPIC -Wall -Wwrite-strings -Wno-strict-aliasing -Wno-unknown-pragmas -fstack-protector -fvisibility=hidden -g -O /p/work2/tmondrag/petsc-N5i8ny/config.libraries/conftest.c
Successful compile:
Source:
#include "confdefs.h"
#include "conffix.h"
/* Override any gcc2 internal prototype to avoid an error.
*/ char MPI_Type_get_envelope(); static void _check_MPI_Type_get_envelope() { MPI_Type_get_envelope(); } char MPI_Type_dup(); static void _check_MPI_Type_dup() { MPI_Type_dup(); } char MPI_Init_thread(); static void _check_MPI_Init_thread() { MPI_Init_thread(); } char MPI_Iallreduce(); static void _check_MPI_Iallreduce() { MPI_Iallreduce(); } char MPI_Ibarrier(); static void _check_MPI_Ibarrier() { MPI_Ibarrier(); } char MPI_Finalized(); static void _check_MPI_Finalized() { MPI_Finalized(); } char MPI_Exscan(); static void _check_MPI_Exscan() { MPI_Exscan(); } char MPI_Reduce_scatter(); static void _check_MPI_Reduce_scatter() { MPI_Reduce_scatter(); } char MPI_Reduce_scatter_block(); static void _check_MPI_Reduce_scatter_block() { MPI_Reduce_scatter_block(); } int main() { _check_MPI_Type_get_envelope(); _check_MPI_Type_dup(); _check_MPI_Init_thread(); _check_MPI_Iallreduce(); _check_MPI_Ibarrier(); _check_MPI_Finalized(); _check_MPI_Exscan(); _check_MPI_Reduce_scatter(); _check_MPI_Reduce_scatter_block();; return 0; } Running Executable WITHOUT threads to time it out Executing: mpicc -o /p/work2/tmondrag/petsc-N5i8ny/config.libraries/conftest -fPIC -Wall -Wwrite-strings -Wno-strict-aliasing -Wno-unknown-pragmas -fstack-protector -fvisibility=hidden -g -O /p/work2/tmondrag/petsc-N5i8ny/config.libraries/conftest.o -lquadmath -lstdc++ -ldl Defined "HAVE_MPI_TYPE_GET_ENVELOPE" to "1" Defined "HAVE_MPI_TYPE_DUP" to "1" Defined "HAVE_MPI_INIT_THREAD" to "1" Defined "HAVE_MPI_IALLREDUCE" to "1" Defined "HAVE_MPI_IBARRIER" to "1" Defined "HAVE_MPI_FINALIZED" to "1" Defined "HAVE_MPI_EXSCAN" to "1" Defined "HAVE_MPI_REDUCE_SCATTER" to "1" Defined "HAVE_MPI_REDUCE_SCATTER_BLOCK" to "1" Checking for functions [MPIX_Iallreduce] in library [] [] Running Executable WITHOUT threads to time it out Executing: mpicc -c -o /p/work2/tmondrag/petsc-N5i8ny/config.libraries/conftest.o -I/p/work2/tmondrag/petsc-N5i8ny/config.setCompilers -I/p/work2/tmondrag/petsc-N5i8ny/config.utilities.closure -I/p/work2/tmondrag/petsc-N5i8ny/config.compilers -I/p/work2/tmondrag/petsc-N5i8ny/config.utilities.cacheDetails -I/p/work2/tmondrag/petsc-N5i8ny/config.headers -I/p/work2/tmondrag/petsc-N5i8ny/config.types -I/p/work2/tmondrag/petsc-N5i8ny/config.functions -I/p/work2/tmondrag/petsc-N5i8ny/config.utilities.featureTestMacros -I/p/work2/tmondrag/petsc-N5i8ny/config.utilities.missing -I/p/work2/tmondrag/petsc-N5i8ny/PETSc.options.scalarTypes -I/p/work2/tmondrag/petsc-N5i8ny/config.libraries -I/p/work2/tmondrag/petsc-N5i8ny/config.packages.MPI -fPIC -Wall -Wwrite-strings -Wno-strict-aliasing -Wno-unknown-pragmas -fstack-protector -fvisibility=hidden -g -O /p/work2/tmondrag/petsc-N5i8ny/config.libraries/conftest.c Successful compile: Source: #include "confdefs.h" #include "conffix.h" /* Override any gcc2 internal prototype to avoid an error. 
*/ char MPIX_Iallreduce(); static void _check_MPIX_Iallreduce() { MPIX_Iallreduce(); } int main() { _check_MPIX_Iallreduce();; return 0; } Running Executable WITHOUT threads to time it out Executing: mpicc -o /p/work2/tmondrag/petsc-N5i8ny/config.libraries/conftest -fPIC -Wall -Wwrite-strings -Wno-strict-aliasing -Wno-unknown-pragmas -fstack-protector -fvisibility=hidden -g -O /p/work2/tmondrag/petsc-N5i8ny/config.libraries/conftest.o -lquadmath -lstdc++ -ldl Possible ERROR while running linker: exit code 1 stderr: /usr/bin/ld: /p/work2/tmondrag/petsc-N5i8ny/config.libraries/conftest.o: in function `_check_MPIX_Iallreduce': /p/work2/tmondrag/petsc-N5i8ny/config.libraries/conftest.c:5: undefined reference to `MPIX_Iallreduce' collect2: error: ld returned 1 exit status Checking for functions [MPIX_Ibarrier] in library [] [] Running Executable WITHOUT threads to time it out Executing: mpicc -c -o /p/work2/tmondrag/petsc-N5i8ny/config.libraries/conftest.o -I/p/work2/tmondrag/petsc-N5i8ny/config.setCompilers -I/p/work2/tmondrag/petsc-N5i8ny/config.utilities.closure -I/p/work2/tmondrag/petsc-N5i8ny/config.compilers -I/p/work2/tmondrag/petsc-N5i8ny/config.utilities.cacheDetails -I/p/work2/tmondrag/petsc-N5i8ny/config.headers -I/p/work2/tmondrag/petsc-N5i8ny/config.types -I/p/work2/tmondrag/petsc-N5i8ny/config.functions -I/p/work2/tmondrag/petsc-N5i8ny/config.utilities.featureTestMacros -I/p/work2/tmondrag/petsc-N5i8ny/config.utilities.missing -I/p/work2/tmondrag/petsc-N5i8ny/PETSc.options.scalarTypes -I/p/work2/tmondrag/petsc-N5i8ny/config.libraries -I/p/work2/tmondrag/petsc-N5i8ny/config.packages.MPI -fPIC -Wall -Wwrite-strings -Wno-strict-aliasing -Wno-unknown-pragmas -fstack-protector -fvisibility=hidden -g -O /p/work2/tmondrag/petsc-N5i8ny/config.libraries/conftest.c Successful compile: Source: #include "confdefs.h" #include "conffix.h" /* Override any gcc2 internal prototype to avoid an error. 
*/
char MPIX_Ibarrier();
static void _check_MPIX_Ibarrier() { MPIX_Ibarrier(); }

int main() {
_check_MPIX_Ibarrier();;
  return 0;
}
Running Executable WITHOUT threads to time it out
Executing: mpicc -o /p/work2/tmondrag/petsc-N5i8ny/config.libraries/conftest -fPIC -Wall -Wwrite-strings -Wno-strict-aliasing -Wno-unknown-pragmas -fstack-protector -fvisibility=hidden -g -O /p/work2/tmondrag/petsc-N5i8ny/config.libraries/conftest.o -lquadmath -lstdc++ -ldl
Possible ERROR while running linker: exit code 1
stderr:
/usr/bin/ld: /p/work2/tmondrag/petsc-N5i8ny/config.libraries/conftest.o: in function `_check_MPIX_Ibarrier':
/p/work2/tmondrag/petsc-N5i8ny/config.libraries/conftest.c:5: undefined reference to `MPIX_Ibarrier'
collect2: error: ld returned 1 exit status
Running Executable WITHOUT threads to time it out
Executing: mpicc -c -o /p/work2/tmondrag/petsc-N5i8ny/config.packages.MPI/conftest.o -I/p/work2/tmondrag/petsc-N5i8ny/config.setCompilers -I/p/work2/tmondrag/petsc-N5i8ny/config.utilities.closure -I/p/work2/tmondrag/petsc-N5i8ny/config.compilers -I/p/work2/tmondrag/petsc-N5i8ny/config.utilities.cacheDetails -I/p/work2/tmondrag/petsc-N5i8ny/config.headers -I/p/work2/tmondrag/petsc-N5i8ny/config.types -I/p/work2/tmondrag/petsc-N5i8ny/config.functions -I/p/work2/tmondrag/petsc-N5i8ny/config.utilities.featureTestMacros -I/p/work2/tmondrag/petsc-N5i8ny/config.utilities.missing -I/p/work2/tmondrag/petsc-N5i8ny/PETSc.options.scalarTypes -I/p/work2/tmondrag/petsc-N5i8ny/config.libraries -I/p/work2/tmondrag/petsc-N5i8ny/config.packages.MPI -fPIC -Wall -Wwrite-strings -Wno-strict-aliasing -Wno-unknown-pragmas -fstack-protector -fvisibility=hidden -g -O /p/work2/tmondrag/petsc-N5i8ny/config.packages.MPI/conftest.c
Possible ERROR while running compiler: stderr:
/p/work2/tmondrag/petsc-N5i8ny/config.packages.MPI/conftest.c: In function 'main':
/p/work2/tmondrag/petsc-N5i8ny/config.packages.MPI/conftest.c:6:5: warning: unused variable 'combiner' [-Wunused-variable]
 int combiner = MPI_COMBINER_DUP;;
     ^~~~~~~~
Source:
#include "confdefs.h"
#include "conffix.h"
#include <mpi.h>

int main() {
int combiner = MPI_COMBINER_DUP;;
  return 0;
}
Defined "HAVE_MPI_COMBINER_DUP" to "1"
Running Executable WITHOUT threads to time it out
Executing: mpicc -c -o /p/work2/tmondrag/petsc-N5i8ny/config.packages.MPI/conftest.o -I/p/work2/tmondrag/petsc-N5i8ny/config.setCompilers -I/p/work2/tmondrag/petsc-N5i8ny/config.utilities.closure -I/p/work2/tmondrag/petsc-N5i8ny/config.compilers -I/p/work2/tmondrag/petsc-N5i8ny/config.utilities.cacheDetails -I/p/work2/tmondrag/petsc-N5i8ny/config.headers -I/p/work2/tmondrag/petsc-N5i8ny/config.types -I/p/work2/tmondrag/petsc-N5i8ny/config.functions -I/p/work2/tmondrag/petsc-N5i8ny/config.utilities.featureTestMacros -I/p/work2/tmondrag/petsc-N5i8ny/config.utilities.missing -I/p/work2/tmondrag/petsc-N5i8ny/PETSc.options.scalarTypes -I/p/work2/tmondrag/petsc-N5i8ny/config.libraries -I/p/work2/tmondrag/petsc-N5i8ny/config.packages.MPI -fPIC -Wall -Wwrite-strings -Wno-strict-aliasing -Wno-unknown-pragmas -fstack-protector -fvisibility=hidden -g -O /p/work2/tmondrag/petsc-N5i8ny/config.packages.MPI/conftest.c
Possible ERROR while running compiler: stderr:
/p/work2/tmondrag/petsc-N5i8ny/config.packages.MPI/conftest.c: In function 'main':
/p/work2/tmondrag/petsc-N5i8ny/config.packages.MPI/conftest.c:6:5: warning: unused variable 'combiner' [-Wunused-variable]
 int combiner = MPI_COMBINER_CONTIGUOUS;;
     ^~~~~~~~
Source:
#include "confdefs.h"
#include "conffix.h"
#include <mpi.h>

int main() {
int combiner = MPI_COMBINER_CONTIGUOUS;;
  return 0;
}
Defined "HAVE_MPI_COMBINER_CONTIGUOUS" to "1"
Running Executable WITHOUT threads to time it out
Executing: mpicc -c -o /p/work2/tmondrag/petsc-N5i8ny/config.packages.MPI/conftest.o -I/p/work2/tmondrag/petsc-N5i8ny/config.setCompilers -I/p/work2/tmondrag/petsc-N5i8ny/config.utilities.closure -I/p/work2/tmondrag/petsc-N5i8ny/config.compilers -I/p/work2/tmondrag/petsc-N5i8ny/config.utilities.cacheDetails -I/p/work2/tmondrag/petsc-N5i8ny/config.headers -I/p/work2/tmondrag/petsc-N5i8ny/config.types -I/p/work2/tmondrag/petsc-N5i8ny/config.functions -I/p/work2/tmondrag/petsc-N5i8ny/config.utilities.featureTestMacros -I/p/work2/tmondrag/petsc-N5i8ny/config.utilities.missing -I/p/work2/tmondrag/petsc-N5i8ny/PETSc.options.scalarTypes -I/p/work2/tmondrag/petsc-N5i8ny/config.libraries -I/p/work2/tmondrag/petsc-N5i8ny/config.packages.MPI -fPIC -Wall -Wwrite-strings -Wno-strict-aliasing -Wno-unknown-pragmas -fstack-protector -fvisibility=hidden -g -O /p/work2/tmondrag/petsc-N5i8ny/config.packages.MPI/conftest.c
Possible ERROR while running compiler: stderr:
/p/work2/tmondrag/petsc-N5i8ny/config.packages.MPI/conftest.c: In function 'main':
/p/work2/tmondrag/petsc-N5i8ny/config.packages.MPI/conftest.c:6:5: warning: unused variable 'combiner' [-Wunused-variable]
 int combiner = MPI_COMBINER_NAMED;;
     ^~~~~~~~
Source:
#include "confdefs.h"
#include "conffix.h"
#include <mpi.h>

int main() {
int combiner = MPI_COMBINER_NAMED;;
  return 0;
}
Defined "HAVE_MPI_COMBINER_NAMED" to "1"
================================================================================
TEST checkVersion from config.packages.MPI(/p/work2/tmondrag/moose/petsc/config/BuildSystem/config/package.py:986)
TESTING: checkVersion from config.packages.MPI(/p/work2/tmondrag/moose/petsc/config/BuildSystem/config/package.py:986)
  Uses self.version, self.minversion, self.maxversion, self.versionname, and self.versioninclude to determine if package has required version
Preprocessing source:
#include "confdefs.h"
#include "conffix.h"
#include "mpi.h"
;petscpkgver(MPI_VERSION);
Running Executable WITHOUT threads to time it out
Executing: mpicc -E -I/p/work2/tmondrag/petsc-N5i8ny/config.setCompilers -I/p/work2/tmondrag/petsc-N5i8ny/config.types -I/p/work2/tmondrag/petsc-N5i8ny/config.headers -I/p/work2/tmondrag/petsc-N5i8ny/config.packages.MPI /p/work2/tmondrag/petsc-N5i8ny/config.packages.MPI/conftest.c
For mpi need 2 <= 3 <=
================================================================================
TEST checkSharedLibrary from config.packages.MPI(/p/work2/tmondrag/moose/petsc/config/BuildSystem/config/package.py:920)
TESTING: checkSharedLibrary from config.packages.MPI(/p/work2/tmondrag/moose/petsc/config/BuildSystem/config/package.py:920)
  By default we don't care about checking if the library is shared
      child config.packages.MPI 4.614358
================================================================================
TEST alternateConfigureLibrary from config.packages.zstd(/p/work2/tmondrag/moose/petsc/config/BuildSystem/config/package.py:924)
TESTING: alternateConfigureLibrary from config.packages.zstd(/p/work2/tmondrag/moose/petsc/config/BuildSystem/config/package.py:924)
  Called if --with-packagename=0; does nothing by default
      child config.packages.zstd 0.001155
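(The MPI_COMBINER_* constants probed above are what MPI_Type_get_envelope reports about how a datatype was constructed. A small self-contained illustration -- a sketch, error handling omitted -- of the envelope/combiner machinery PETSc relies on:

  #include <stdio.h>
  #include <mpi.h>
  int main(int argc, char **argv) {
    MPI_Datatype dup;
    int ni, na, nd, combiner;
    MPI_Init(&argc, &argv);
    MPI_Type_dup(MPI_INT, &dup);
    /* The envelope tells us the duplicated type was built with MPI_Type_dup */
    MPI_Type_get_envelope(dup, &ni, &na, &nd, &combiner);
    printf("combiner == MPI_COMBINER_DUP? %d\n", combiner == MPI_COMBINER_DUP);
    MPI_Type_free(&dup);
    MPI_Finalize();
    return 0;
  }
)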
================================================================================
TEST alternateConfigureLibrary from config.packages.yaml(/p/work2/tmondrag/moose/petsc/config/BuildSystem/config/package.py:924)
TESTING: alternateConfigureLibrary from config.packages.yaml(/p/work2/tmondrag/moose/petsc/config/BuildSystem/config/package.py:924)
  Called if --with-packagename=0; does nothing by default
      child config.packages.yaml 0.000965
================================================================================
TEST configureLibrary from config.packages.valgrind(/p/work2/tmondrag/moose/petsc/config/BuildSystem/config/package.py:861)
TESTING: configureLibrary from config.packages.valgrind(/p/work2/tmondrag/moose/petsc/config/BuildSystem/config/package.py:861)
  Find an installation and check if it can work with PETSc
==================================================================================
Checking for a functional valgrind
Not checking for library in Compiler specific search VALGRIND: [] because no functions given to check for
================================================================================
TEST check from config.libraries(/p/work2/tmondrag/moose/petsc/config/BuildSystem/config/libraries.py:157)
TESTING: check from config.libraries(/p/work2/tmondrag/moose/petsc/config/BuildSystem/config/libraries.py:157)
  Checks that the library "libName" contains "funcs", and if it does defines HAVE_LIB"libName"
    - libDir may be a list of directories
    - libName may be a list of library names
No functions to check for in library [] []
Checking for optional headers [] in Compiler specific search VALGRIND: ['/p/home/apps/hpe/mpt-2.19/include', '/p/home/apps/gnu_compiler/7.2.0/include']
================================================================================
TEST checkInclude from config.headers(/p/work2/tmondrag/moose/petsc/config/BuildSystem/config/headers.py:86)
TESTING: checkInclude from config.headers(/p/work2/tmondrag/moose/petsc/config/BuildSystem/config/headers.py:86)
  Checks if a particular include file can be found along particular include paths
Checking for header files [] in ['/p/home/apps/hpe/mpt-2.19/include', '/p/home/apps/gnu_compiler/7.2.0/include']
Found header files [] in ['/p/home/apps/hpe/mpt-2.19/include', '/p/home/apps/gnu_compiler/7.2.0/include']
Checking for headers ['valgrind/valgrind.h'] in Compiler specific search VALGRIND: ['/p/home/apps/hpe/mpt-2.19/include', '/p/home/apps/gnu_compiler/7.2.0/include']
================================================================================
TEST checkInclude from config.headers(/p/work2/tmondrag/moose/petsc/config/BuildSystem/config/headers.py:86)
TESTING: checkInclude from config.headers(/p/work2/tmondrag/moose/petsc/config/BuildSystem/config/headers.py:86)
  Checks if a particular include file can be found along particular include paths
Checking for header files ['valgrind/valgrind.h'] in ['/p/home/apps/hpe/mpt-2.19/include', '/p/home/apps/gnu_compiler/7.2.0/include']
Checking include with compiler flags var CPPFLAGS ['/p/home/apps/hpe/mpt-2.19/include', '/p/home/apps/gnu_compiler/7.2.0/include']
Preprocessing source:
#include "confdefs.h"
#include "conffix.h"
#include <valgrind/valgrind.h>
Running Executable WITHOUT threads to time it out
Executing: mpicc -E -I/p/work2/tmondrag/petsc-N5i8ny/config.setCompilers -I/p/work2/tmondrag/petsc-N5i8ny/config.types -I/p/work2/tmondrag/petsc-N5i8ny/config.packages.MPI -I/p/work2/tmondrag/petsc-N5i8ny/config.headers -I/p/home/apps/hpe/mpt-2.19/include -I/p/home/apps/gnu_compiler/7.2.0/include /p/work2/tmondrag/petsc-N5i8ny/config.headers/conftest.c
Possible ERROR while running preprocessor: exit code 1
stdout:
# 1 "/p/work2/tmondrag/petsc-N5i8ny/config.headers/conftest.c"
# 1 "<built-in>"
# 1 "<command-line>"
# 31 "<command-line>"
# 1 "/usr/include/stdc-predef.h" 1 3 4
# 32 "<command-line>" 2
# 1 "/p/work2/tmondrag/petsc-N5i8ny/config.headers/conftest.c"
# 1 "/p/work2/tmondrag/petsc-N5i8ny/config.headers/confdefs.h" 1
# 2 "/p/work2/tmondrag/petsc-N5i8ny/config.headers/conftest.c" 2
# 1 "/p/work2/tmondrag/petsc-N5i8ny/config.headers/conffix.h" 1
# 3 "/p/work2/tmondrag/petsc-N5i8ny/config.headers/conftest.c" 2
stderr:
/p/work2/tmondrag/petsc-N5i8ny/config.headers/conftest.c:3:10: fatal error: valgrind/valgrind.h: No such file or directory
 #include <valgrind/valgrind.h>
          ^~~~~~~~~~~~~~~~~~~~~
compilation terminated.
Source:
#include "confdefs.h"
#include "conffix.h"
#include <valgrind/valgrind.h>
Preprocess stderr before filtering:/p/work2/tmondrag/petsc-N5i8ny/config.headers/conftest.c:3:10: fatal error: valgrind/valgrind.h: No such file or directory
 #include <valgrind/valgrind.h>
          ^~~~~~~~~~~~~~~~~~~~~
compilation terminated.
:
Preprocess stderr after filtering:/p/work2/tmondrag/petsc-N5i8ny/config.headers/conftest.c:3:10: fatal error: valgrind/valgrind.h: No such file or directory
 #include <valgrind/valgrind.h>
          ^~~~~~~~~~~~~~~~~~~~~
compilation terminated.:
Not checking for library in Package specific search directory VALGRIND: [] because no functions given to check for
================================================================================
TEST check from config.libraries(/p/work2/tmondrag/moose/petsc/config/BuildSystem/config/libraries.py:157)
TESTING: check from config.libraries(/p/work2/tmondrag/moose/petsc/config/BuildSystem/config/libraries.py:157)
  Checks that the library "libName" contains "funcs", and if it does defines HAVE_LIB"libName"
    - libDir may be a list of directories
    - libName may be a list of library names
No functions to check for in library [] []
Checking for optional headers [] in Package specific search directory VALGRIND: ['/usr/local/include', '/p/home/apps/hpe/mpt-2.19/include', '/p/home/apps/gnu_compiler/7.2.0/include']
================================================================================
TEST checkInclude from config.headers(/p/work2/tmondrag/moose/petsc/config/BuildSystem/config/headers.py:86)
TESTING: checkInclude from config.headers(/p/work2/tmondrag/moose/petsc/config/BuildSystem/config/headers.py:86)
  Checks if a particular include file can be found along particular include paths
Checking for header files [] in ['/usr/local/include', '/p/home/apps/hpe/mpt-2.19/include', '/p/home/apps/gnu_compiler/7.2.0/include']
Found header files [] in ['/usr/local/include', '/p/home/apps/hpe/mpt-2.19/include', '/p/home/apps/gnu_compiler/7.2.0/include']
Checking for headers ['valgrind/valgrind.h'] in Package specific search directory VALGRIND: ['/usr/local/include', '/p/home/apps/hpe/mpt-2.19/include', '/p/home/apps/gnu_compiler/7.2.0/include']
================================================================================
TEST checkInclude from config.headers(/p/work2/tmondrag/moose/petsc/config/BuildSystem/config/headers.py:86)
TESTING: checkInclude from config.headers(/p/work2/tmondrag/moose/petsc/config/BuildSystem/config/headers.py:86)
  Checks if a particular include file can be found along particular include paths
Checking for header files ['valgrind/valgrind.h'] in ['/usr/local/include', '/p/home/apps/hpe/mpt-2.19/include', '/p/home/apps/gnu_compiler/7.2.0/include']
Checking include with compiler flags var CPPFLAGS ['/usr/local/include', '/p/home/apps/hpe/mpt-2.19/include', '/p/home/apps/gnu_compiler/7.2.0/include']
Preprocessing source:
#include "confdefs.h"
#include "conffix.h"
#include <valgrind/valgrind.h>
Running Executable WITHOUT threads to time it out
Executing: mpicc -E -I/p/work2/tmondrag/petsc-N5i8ny/config.setCompilers -I/p/work2/tmondrag/petsc-N5i8ny/config.types -I/p/work2/tmondrag/petsc-N5i8ny/config.packages.MPI -I/p/work2/tmondrag/petsc-N5i8ny/config.headers -I/usr/local/include -I/p/home/apps/hpe/mpt-2.19/include -I/p/home/apps/gnu_compiler/7.2.0/include /p/work2/tmondrag/petsc-N5i8ny/config.headers/conftest.c
Possible ERROR while running preprocessor: exit code 1
stdout:
# 1 "/p/work2/tmondrag/petsc-N5i8ny/config.headers/conftest.c"
# 1 "<built-in>"
# 1 "<command-line>"
# 31 "<command-line>"
# 1 "/usr/include/stdc-predef.h" 1 3 4
# 32 "<command-line>" 2
# 1 "/p/work2/tmondrag/petsc-N5i8ny/config.headers/conftest.c"
# 1 "/p/work2/tmondrag/petsc-N5i8ny/config.headers/confdefs.h" 1
# 2 "/p/work2/tmondrag/petsc-N5i8ny/config.headers/conftest.c" 2
# 1 "/p/work2/tmondrag/petsc-N5i8ny/config.headers/conffix.h" 1
# 3 "/p/work2/tmondrag/petsc-N5i8ny/config.headers/conftest.c" 2
stderr:
/p/work2/tmondrag/petsc-N5i8ny/config.headers/conftest.c:3:10: fatal error: valgrind/valgrind.h: No such file or directory
 #include <valgrind/valgrind.h>
          ^~~~~~~~~~~~~~~~~~~~~
compilation terminated.
Source:
#include "confdefs.h"
#include "conffix.h"
#include <valgrind/valgrind.h>
Preprocess stderr before filtering:/p/work2/tmondrag/petsc-N5i8ny/config.headers/conftest.c:3:10: fatal error: valgrind/valgrind.h: No such file or directory
 #include <valgrind/valgrind.h>
          ^~~~~~~~~~~~~~~~~~~~~
compilation terminated.
:
Preprocess stderr after filtering:/p/work2/tmondrag/petsc-N5i8ny/config.headers/conftest.c:3:10: fatal error: valgrind/valgrind.h: No such file or directory
 #include <valgrind/valgrind.h>
          ^~~~~~~~~~~~~~~~~~~~~
compilation terminated.:
Not checking for library in Package specific search directory VALGRIND: [] because no functions given to check for
================================================================================
TEST check from config.libraries(/p/work2/tmondrag/moose/petsc/config/BuildSystem/config/libraries.py:157)
TESTING: check from config.libraries(/p/work2/tmondrag/moose/petsc/config/BuildSystem/config/libraries.py:157)
  Checks that the library "libName" contains "funcs", and if it does defines HAVE_LIB"libName"
    - libDir may be a list of directories
    - libName may be a list of library names
No functions to check for in library [] []
Checking for optional headers [] in Package specific search directory VALGRIND: ['/usr/local/include', '/p/home/apps/hpe/mpt-2.19/include', '/p/home/apps/gnu_compiler/7.2.0/include']
================================================================================
TEST checkInclude from config.headers(/p/work2/tmondrag/moose/petsc/config/BuildSystem/config/headers.py:86)
TESTING: checkInclude from config.headers(/p/work2/tmondrag/moose/petsc/config/BuildSystem/config/headers.py:86)
  Checks if a particular include file can be found along particular include paths
Checking for header files [] in ['/usr/local/include', '/p/home/apps/hpe/mpt-2.19/include', '/p/home/apps/gnu_compiler/7.2.0/include']
Found header files [] in ['/usr/local/include', '/p/home/apps/hpe/mpt-2.19/include', '/p/home/apps/gnu_compiler/7.2.0/include']
Checking for headers ['valgrind/valgrind.h'] in Package specific search directory VALGRIND: ['/usr/local/include', '/p/home/apps/hpe/mpt-2.19/include', '/p/home/apps/gnu_compiler/7.2.0/include']
================================================================================
TEST checkInclude from config.headers(/p/work2/tmondrag/moose/petsc/config/BuildSystem/config/headers.py:86)
TESTING: checkInclude from config.headers(/p/work2/tmondrag/moose/petsc/config/BuildSystem/config/headers.py:86)
  Checks if a particular include file can be found along particular include paths
Checking for header files ['valgrind/valgrind.h'] in ['/usr/local/include', '/p/home/apps/hpe/mpt-2.19/include', '/p/home/apps/gnu_compiler/7.2.0/include']
Checking include with compiler flags var CPPFLAGS ['/usr/local/include', '/p/home/apps/hpe/mpt-2.19/include', '/p/home/apps/gnu_compiler/7.2.0/include']
Preprocessing source:
#include "confdefs.h"
#include "conffix.h"
#include <valgrind/valgrind.h>
Running Executable WITHOUT threads to time it out
Executing: mpicc -E -I/p/work2/tmondrag/petsc-N5i8ny/config.setCompilers -I/p/work2/tmondrag/petsc-N5i8ny/config.types -I/p/work2/tmondrag/petsc-N5i8ny/config.packages.MPI -I/p/work2/tmondrag/petsc-N5i8ny/config.headers -I/usr/local/include -I/p/home/apps/hpe/mpt-2.19/include -I/p/home/apps/gnu_compiler/7.2.0/include /p/work2/tmondrag/petsc-N5i8ny/config.headers/conftest.c
Possible ERROR while running preprocessor: exit code 1
stdout:
# 1 "/p/work2/tmondrag/petsc-N5i8ny/config.headers/conftest.c"
# 1 "<built-in>"
# 1 "<command-line>"
# 31 "<command-line>"
# 1 "/usr/include/stdc-predef.h" 1 3 4
# 32 "<command-line>" 2
# 1 "/p/work2/tmondrag/petsc-N5i8ny/config.headers/conftest.c"
# 1 "/p/work2/tmondrag/petsc-N5i8ny/config.headers/confdefs.h" 1
# 2 "/p/work2/tmondrag/petsc-N5i8ny/config.headers/conftest.c" 2
# 1 "/p/work2/tmondrag/petsc-N5i8ny/config.headers/conffix.h" 1
# 3 "/p/work2/tmondrag/petsc-N5i8ny/config.headers/conftest.c" 2
stderr:
/p/work2/tmondrag/petsc-N5i8ny/config.headers/conftest.c:3:10: fatal error: valgrind/valgrind.h: No such file or directory
 #include <valgrind/valgrind.h>
          ^~~~~~~~~~~~~~~~~~~~~
compilation terminated.
Source:
#include "confdefs.h"
#include "conffix.h"
#include <valgrind/valgrind.h>
Preprocess stderr before filtering:/p/work2/tmondrag/petsc-N5i8ny/config.headers/conftest.c:3:10: fatal error: valgrind/valgrind.h: No such file or directory
 #include <valgrind/valgrind.h>
          ^~~~~~~~~~~~~~~~~~~~~
compilation terminated.
:
Preprocess stderr after filtering:/p/work2/tmondrag/petsc-N5i8ny/config.headers/conftest.c:3:10: fatal error: valgrind/valgrind.h: No such file or directory
 #include <valgrind/valgrind.h>
          ^~~~~~~~~~~~~~~~~~~~~
compilation terminated.:
VALGRIND: SearchDir DirPath not found.. skipping: /opt/local
Running Executable WITHOUT threads to time it out
Executing: uname -s
stdout: Linux
===============================================================================
It appears you do not have valgrind installed on your system.
We HIGHLY recommend you install it from www.valgrind.org
Or install valgrind-devel or equivalent using your package manager.
Then rerun ./configure =============================================================================== child config.packages.valgrind 0.069498 ================================================================================ TEST alternateConfigureLibrary from config.packages.ssl(/p/work2/tmondrag/moose/petsc/config/BuildSystem/config/package.py:924) TESTING: alternateConfigureLibrary from config.packages.ssl(/p/work2/tmondrag/moose/petsc/config/BuildSystem/config/package.py:924) Called if --with-packagename=0; does nothing by default child config.packages.ssl 0.001138 ================================================================================ TEST alternateConfigureLibrary from config.packages.sprng(/p/work2/tmondrag/moose/petsc/config/BuildSystem/config/package.py:924) TESTING: alternateConfigureLibrary from config.packages.sprng(/p/work2/tmondrag/moose/petsc/config/BuildSystem/config/package.py:924) Called if --with-packagename=0; does nothing by default child config.packages.sprng 0.000956 Not checking sowing on user request of --with-sowing=0 child config.packages.sowing 0.000852 ================================================================================ TEST alternateConfigureLibrary from config.packages.revolve(/p/work2/tmondrag/moose/petsc/config/BuildSystem/config/package.py:924) TESTING: alternateConfigureLibrary from config.packages.revolve(/p/work2/tmondrag/moose/petsc/config/BuildSystem/config/package.py:924) Called if --with-packagename=0; does nothing by default child config.packages.revolve 0.000961 ================================================================================ TEST alternateConfigureLibrary from config.packages.radau5(/p/work2/tmondrag/moose/petsc/config/BuildSystem/config/package.py:924) TESTING: alternateConfigureLibrary from config.packages.radau5(/p/work2/tmondrag/moose/petsc/config/BuildSystem/config/package.py:924) Called if --with-packagename=0; does nothing by default child config.packages.radau5 0.000963 ================================================================================ TEST alternateConfigureLibrary from config.packages.pami(/p/work2/tmondrag/moose/petsc/config/BuildSystem/config/package.py:924) TESTING: alternateConfigureLibrary from config.packages.pami(/p/work2/tmondrag/moose/petsc/config/BuildSystem/config/package.py:924) Called if --with-packagename=0; does nothing by default child config.packages.pami 0.000771 ================================================================================ TEST alternateConfigureLibrary from config.packages.opengles(/p/work2/tmondrag/moose/petsc/config/BuildSystem/config/package.py:924) TESTING: alternateConfigureLibrary from config.packages.opengles(/p/work2/tmondrag/moose/petsc/config/BuildSystem/config/package.py:924) Called if --with-packagename=0; does nothing by default child config.packages.opengles 0.000786 ================================================================================ TEST alternateConfigureLibrary from config.packages.opencl(/p/work2/tmondrag/moose/petsc/config/BuildSystem/config/package.py:924) TESTING: alternateConfigureLibrary from config.packages.opencl(/p/work2/tmondrag/moose/petsc/config/BuildSystem/config/package.py:924) Called if --with-packagename=0; does nothing by default child config.packages.opencl 0.000786 ================================================================================ TEST alternateConfigureLibrary from config.packages.muparser(/p/work2/tmondrag/moose/petsc/config/BuildSystem/config/package.py:924) TESTING: 
================================================================================
TEST alternateConfigureLibrary from config.packages.muparser(/p/work2/tmondrag/moose/petsc/config/BuildSystem/config/package.py:924)
  Called if --with-packagename=0; does nothing by default
Defined "PYTHON_EXE" to ""/usr/bin/python""
Executing: /usr/bin/python -c "import Cython"
Executing: /usr/bin/python -c "import numpy"
[identical alternateConfigureLibrary tests elided for config.packages.python,
 petsc4py, mpi4py, mpe, memkind, libmesh, moose, libjpeg and libceed -- each
 either defined empty "-build"/"-install" make rules or did nothing because
 the package is disabled]
Checking for program /app/unsupported/COST/git/2.4.4/gnu//bin/lgrind...not found
Checking for program /app/unsupported/COST/tcltk/8.6.4/gnu//bin/lgrind...not found
[the same "not found" check elided for each remaining PATH entry, including
 /pbs/SLB/lgrind and
 /p/home/tmondrag/WORK/moose/scripts/../petsc/lib/petsc/bin/win32fe/lgrind]
Unable to find programs ['lgrind'] providing listing of each search directory to help debug
  Path provided in Python program
  Path provided by default path
    Warning /usr/local/krb5/bin is not a directory
    Warning /opt/clmgr/sbin is not a directory
[directory listings elided: several thousand executable names dumped from the
 git, tcl/tk, SGI MPT, GNU 7.2.0, cluster-management, C3, PBS, /usr/bin, /bin
 and /sbin entries on PATH; the dump breaks off where configure tries to list
 the unreadable /pbs/SLB entry -- see the OSError below]
**** Configure header /p/work2/tmondrag/petsc-N5i8ny/confdefs.h ****
[roughly two hundred PETSC_* feature macros elided; notable entries:
 #define PETSC_VERSION_GIT "v3.12.3-632-gaf591a4"
 #define PETSC_VERSION_DATE_GIT "2020-01-24 13:29:59 -0600"
 #define PETSC_PYTHON_EXE "/usr/bin/python"]
**** C specific Configure header /p/work2/tmondrag/petsc-N5i8ny/conffix.h ****
#if !defined(INCLUDED_UNKNOWN)
#define INCLUDED_UNKNOWN
#if defined(__cplusplus)
extern "C" {
}
#else
#endif
#endif
*******************************************************************************
           OSError while running ./configure
-------------------------------------------------------------------------------
[Errno 13] Permission denied: '/pbs/SLB'
*******************************************************************************
  File "/p/work2/tmondrag/moose/petsc/config/configure.py", line 447, in petsc_configure
    framework.configure(out = sys.stdout)
  File "/p/work2/tmondrag/moose/petsc/config/BuildSystem/config/framework.py", line 1219, in configure
    self.processChildren()
  File "/p/work2/tmondrag/moose/petsc/config/BuildSystem/config/framework.py", line 1208, in processChildren
    self.serialEvaluation(self.childGraph)
  File "/p/work2/tmondrag/moose/petsc/config/BuildSystem/config/framework.py", line 1183, in serialEvaluation
    child.configure()
  File "/p/work2/tmondrag/moose/petsc/config/BuildSystem/config/packages/lgrind.py", line 41, in configure
    self.getExecutable('lgrind', getFullPath = 1)
  File "/p/work2/tmondrag/moose/petsc/config/BuildSystem/config/base.py", line 307, in getExecutable
    self.logWrite('    '+str(os.listdir(d))+'\n')
================================================================================
Finishing configure run at Fri, 24 Jan 2020 17:01:49 -0600
================================================================================

From jeremy at seamplex.com  Thu Jan 30 15:02:58 2020
From: jeremy at seamplex.com (Jeremy Theler)
Date: Thu, 30 Jan 2020 18:02:58 -0300
Subject: [petsc-users] Product of matrix row times a vector
Message-ID: 

Sorry if this is basic, but I cannot figure out how to do it in
parallel, and I'd rather not say how I do it in single-processor mode
because I would be ashamed.

Say I have a matrix and I want to multiply a row times a vector to
obtain a scalar. Actually I would like to choose some rows, multiply
each of them by the vector and then add the scalars up. Or, conversely,
sum all the rows columnwise and then multiply the sum by a vector.

How can I do this?

Regards
--
jeremy theler
www.seamplex.com


From bsmith at mcs.anl.gov  Thu Jan 30 15:07:55 2020
From: bsmith at mcs.anl.gov (Smith, Barry F.)
Date: Thu, 30 Jan 2020 21:07:55 +0000
Subject: [petsc-users] Fwd: Running moose/scripts/update_and_rebuild_petsc.sh on HPC
In-Reply-To: 
References: <277eb13a-0590-4b1a-a089-09a7c35efd83@googlegroups.com> <641b2c64-0e88-47e7-b33a-e2528287095a@googlegroups.com> <2bf174ba-a994-45f8-a661-454458a6ffa3@googlegroups.com> <0a8315b7-185e-44c9-b1d3-d3b8f52939d4@googlegroups.com> <2c9e5abd-bd4f-4b95-b2ea-8aa6a993d5fb@googlegroups.com> <0b4c29ac-2261-404a-84f6-5e8e28e1c51f@googlegroups.com> <095881e4-592d-427a-ad84-6cbe5fb8fe2e@googlegroups.com> <1a976f38-4944-425f-af72-f5ce7ce3ac85@googlegroups.com>
Message-ID: <6A2A9878-5B56-4B07-99F0-9B9625531AEC@anl.gov>

  As Jed would say: --with-lgrind=0

> On Jan 30, 2020, at 2:49 PM, Fande Kong wrote:
>
> Hi All,
>
> It looks like a bug to me.
>
> PETSc was still trying to detect lgrind even though we set "--with-lgrind=0".
> The configuration log is attached. Any way to disable lgrind detection?
>
> Thanks,
>
> Fande
>
> ---------- Forwarded message ---------
> From: Tomas Mondragon
> Date: Thu, Jan 30, 2020 at 9:54 AM
> Subject: Re: Running moose/scripts/update_and_rebuild_petsc.sh on HPC
> To: moose-users
>
> Configuration log is attached
From bsmith at mcs.anl.gov  Thu Jan 30 15:10:21 2020
From: bsmith at mcs.anl.gov (Smith, Barry F.)
Date: Thu, 30 Jan 2020 21:10:21 +0000
Subject: [petsc-users] Product of matrix row times a vector
In-Reply-To: 
References: 
Message-ID: <1EF5ED14-4D20-495A-AF76-5BA19CDD5A97@anl.gov>

  MatGetSubMatrix() and then do the product on the submatrix, then
VecSum()

  Barry

> On Jan 30, 2020, at 3:02 PM, Jeremy Theler wrote:
>
> Sorry if this is basic, but I cannot figure out how to do it in
> parallel, and I'd rather not say how I do it in single-processor mode
> because I would be ashamed.
>
> Say I have a matrix and I want to multiply a row times a vector to
> obtain a scalar. Actually I would like to choose some rows, multiply
> each of them by the vector and then add the scalars up. Or, conversely,
> sum all the rows columnwise and then multiply the sum by a vector.
>
> How can I do this?
>
> Regards
> --
> jeremy theler
> www.seamplex.com


From balay at mcs.anl.gov  Thu Jan 30 15:13:55 2020
From: balay at mcs.anl.gov (Satish Balay)
Date: Thu, 30 Jan 2020 15:13:55 -0600
Subject: [petsc-users] Fwd: Running moose/scripts/update_and_rebuild_petsc.sh on HPC
In-Reply-To: 
References: <277eb13a-0590-4b1a-a089-09a7c35efd83@googlegroups.com> <641b2c64-0e88-47e7-b33a-e2528287095a@googlegroups.com> <2bf174ba-a994-45f8-a661-454458a6ffa3@googlegroups.com> <0a8315b7-185e-44c9-b1d3-d3b8f52939d4@googlegroups.com> <2c9e5abd-bd4f-4b95-b2ea-8aa6a993d5fb@googlegroups.com> <0b4c29ac-2261-404a-84f6-5e8e28e1c51f@googlegroups.com> <095881e4-592d-427a-ad84-6cbe5fb8fe2e@googlegroups.com> <1a976f38-4944-425f-af72-f5ce7ce3ac85@googlegroups.com>
Message-ID: 

The issue is:

>>>
[Errno 13] Permission denied: '/pbs/SLB'
<<<

Try removing this entry from PATH - and rerun configure.

This part of the configure code should be fixed [or protected with 'try'].

Satish

On Thu, 30 Jan 2020, Fande Kong wrote:

> Hi All,
>
> It looks like a bug to me.
>
> PETSc was still trying to detect lgrind even though we set "--with-lgrind=0".
> The configuration log is attached. Any way to disable lgrind detection?
>
> Thanks,
>
> Fande
>
> ---------- Forwarded message ---------
> From: Tomas Mondragon
> Date: Thu, Jan 30, 2020 at 9:54 AM
> Subject: Re: Running moose/scripts/update_and_rebuild_petsc.sh on HPC
> To: moose-users
>
> Configuration log is attached
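A minimal bash sketch of Satish's workaround above, for anyone hitting the
same wall: strip the unreadable /pbs/SLB entry out of PATH, then rerun
configure. Only the directory name comes from the error message; the
pipeline itself is generic shell and may need adapting to your site.

# drop /pbs/SLB from PATH, keeping every other entry in order (bash)
export PATH=$(echo "$PATH" | tr ':' '\n' | grep -vx '/pbs/SLB' | paste -sd: -)
# then rerun the MOOSE/PETSc configure step exactly as before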
From balay at mcs.anl.gov  Thu Jan 30 15:47:08 2020
From: balay at mcs.anl.gov (Satish Balay)
Date: Thu, 30 Jan 2020 15:47:08 -0600
Subject: [petsc-users] Fwd: Running moose/scripts/update_and_rebuild_petsc.sh on HPC
In-Reply-To: 
References: <277eb13a-0590-4b1a-a089-09a7c35efd83@googlegroups.com> <641b2c64-0e88-47e7-b33a-e2528287095a@googlegroups.com> <2bf174ba-a994-45f8-a661-454458a6ffa3@googlegroups.com> <0a8315b7-185e-44c9-b1d3-d3b8f52939d4@googlegroups.com> <2c9e5abd-bd4f-4b95-b2ea-8aa6a993d5fb@googlegroups.com> <0b4c29ac-2261-404a-84f6-5e8e28e1c51f@googlegroups.com> <095881e4-592d-427a-ad84-6cbe5fb8fe2e@googlegroups.com> <1a976f38-4944-425f-af72-f5ce7ce3ac85@googlegroups.com>
Message-ID: 

I pushed a fix to branch balay/fix-check-files-in-path - please give it a try.

https://gitlab.com/petsc/petsc/-/merge_requests/2490

Satish

On Thu, 30 Jan 2020, Satish Balay via petsc-users wrote:

> The issue is:
>
> >>>
> [Errno 13] Permission denied: '/pbs/SLB'
> <<<
>
> Try removing this entry from PATH - and rerun configure.
>
> This part of the configure code should be fixed [or protected with 'try'].
>
> Satish
>
> On Thu, 30 Jan 2020, Fande Kong wrote:
>
> > [Fande's message -- quoted in full above -- trimmed here]


From alexlindsay239 at gmail.com  Thu Jan 30 15:57:58 2020
From: alexlindsay239 at gmail.com (Alexander Lindsay)
Date: Thu, 30 Jan 2020 13:57:58 -0800
Subject: [petsc-users] Running moose/scripts/update_and_rebuild_petsc.sh on HPC
In-Reply-To: <4d3bfd39-d68b-40d1-af4a-536f24860035@googlegroups.com>
References: <277eb13a-0590-4b1a-a089-09a7c35efd83@googlegroups.com> <641b2c64-0e88-47e7-b33a-e2528287095a@googlegroups.com> <2bf174ba-a994-45f8-a661-454458a6ffa3@googlegroups.com> <0a8315b7-185e-44c9-b1d3-d3b8f52939d4@googlegroups.com> <2c9e5abd-bd4f-4b95-b2ea-8aa6a993d5fb@googlegroups.com> <0b4c29ac-2261-404a-84f6-5e8e28e1c51f@googlegroups.com> <095881e4-592d-427a-ad84-6cbe5fb8fe2e@googlegroups.com> <1a976f38-4944-425f-af72-f5ce7ce3ac85@googlegroups.com> <4d3bfd39-d68b-40d1-af4a-536f24860035@googlegroups.com>
Message-ID: 

Tomas, make sure you're always using "reply all" when the PETSc users list
is involved...

On Thu, Jan 30, 2020 at 1:52 PM Tomas Mondragon <tom.alex.mondragon at gmail.com> wrote:

> I altered the problematic part of the getExecutable method in
> petsc/config/BuildSystem/config/base.py just to get over this obstacle.
>
> def getExecutable(self, names, path = [], getFullPath = 0, useDefaultPath = 0, resultName = '', setMakeMacro = 1):
>   '''Search for an executable in the list names
>      - Each name in the list is tried for each entry in the path until a name is located, then it stops
>      - If found, the path is stored in the variable "name", or "resultName" if given
>      - By default, a make macro "resultName" will hold the path'''
>   found = 0
>   if isinstance(names, str) and names.startswith('/'):
>     path  = os.path.dirname(names)
>     names = os.path.basename(names)
>
>   if isinstance(names, str):
>     names = [names]
>   if isinstance(path, str):
>     path = path.split(os.path.pathsep)
>   if not len(path):
>     useDefaultPath = 1
>
>   def getNames(name, resultName):
>     import re
>     # split trailing command-line options from the program name at the
>     # first unescaped space
>     prog = re.match(r'(.*?)(?<!\\)(\s.*)', name)
>     if prog:
>       name    = prog.group(1)
>       options = prog.group(2)
>     else:
>       options = ''
>     if not resultName:
>       varName = name
>     else:
>       varName = resultName
>     return name, options, varName
>
>   varName = names[0]
>   varPath = ''
>   for d in path:
>     for name in names:
>       name, options, varName = getNames(name, resultName)
>       if self.checkExecutable(d, name):
>         found = 1
>         getFullPath = 1
>         varPath = d
>         break
>     if found: break
>   if useDefaultPath and not found:
>     for d in os.environ['PATH'].split(os.path.pathsep):
>       for name in names:
>         name, options, varName = getNames(name, resultName)
>         if self.checkExecutable(d, name):
>           found = 1
>           varPath = d
>           break
>       if found: break
>   if not found:
>     dirs = self.argDB['with-executables-search-path']
>     if not isinstance(dirs, list): dirs = [dirs]
>     for d in dirs:
>       for name in names:
>         name, options, varName = getNames(name, resultName)
>         if self.checkExecutable(d, name):
>           found = 1
>           getFullPath = 1
>           varPath = d
>           break
>       if found: break
>
>   if found:
>     if getFullPath:
>       setattr(self, varName, os.path.abspath(os.path.join(varPath, name))+options)
>     else:
>       setattr(self, varName, name+options)
>     if setMakeMacro:
>       self.addMakeMacro(varName.upper(), getattr(self, varName))
>   else:
>     self.logWrite('  Unable to find programs '+str(names)+' providing listing of each search directory to help debug\n')
>     self.logWrite('  Path provided in Python program\n')
>     for d in path:
>       if os.path.isdir(d):
>         try:
>           self.logWrite('    '+str(os.listdir(d))+'\n')
>         except OSError as e:
>           # the added guard: log the error instead of crashing configure
>           self.logWrite('    '+e.strerror+'\n')
>       else:
>         self.logWrite('    Warning '+d+' is not a directory\n')
>     if useDefaultPath:
>       if os.environ['PATH'].split(os.path.pathsep):
>         self.logWrite('  Path provided by default path\n')
>         for d in os.environ['PATH'].split(os.path.pathsep):
>           if os.path.isdir(d):
>             try:
>               self.logWrite('    '+str(os.listdir(d))+'\n')
>             except OSError as e:
>               # same guard for unreadable PATH entries such as /pbs/SLB
>               self.logWrite('    '+e.strerror+'\n')
>           else:
>             self.logWrite('    Warning '+d+' is not a directory\n')
>     dirs = self.argDB['with-executables-search-path']
>     if not isinstance(dirs, list): dirs = [dirs]
>     if dirs:
>       self.logWrite('  Path provided by --with-executables-search-path\n')
>       for d in dirs:
>         if os.path.isdir(d):
>           try:
>             self.logWrite('    '+str(os.listdir(d))+'\n')
>           except OSError as e:
>             self.logWrite('    '+e.strerror+'\n')
>         else:
>           self.logWrite('    Warning '+d+' is not a directory\n')
>   return found
>
> I wasn't able to figure out why lgrind was being searched for, but when
> running configure it wasn't just lgrind that set off this bit of code;
> c2html does as well.
>
> Anyhow, with this fix I was able to get configure to continue on until it
> failed to compile parmetis. Looking at configure.log, it looks like that is
> because of a bad path to libmetis.
>
> On Thursday, January 30, 2020 at 10:54:21 AM UTC-6, Tomas Mondragon wrote:
>>
>> Configuration log is attached
>>


From tom.alex.mondragon at gmail.com  Thu Jan 30 16:03:00 2020
From: tom.alex.mondragon at gmail.com (Tomas Mondragon)
Date: Thu, 30 Jan 2020 16:03:00 -0600
Subject: [petsc-users] Fwd: Running moose/scripts/update_and_rebuild_petsc.sh on HPC
In-Reply-To: 
References: <277eb13a-0590-4b1a-a089-09a7c35efd83@googlegroups.com> <641b2c64-0e88-47e7-b33a-e2528287095a@googlegroups.com> <2bf174ba-a994-45f8-a661-454458a6ffa3@googlegroups.com> <0a8315b7-185e-44c9-b1d3-d3b8f52939d4@googlegroups.com> <2c9e5abd-bd4f-4b95-b2ea-8aa6a993d5fb@googlegroups.com> <0b4c29ac-2261-404a-84f6-5e8e28e1c51f@googlegroups.com> <095881e4-592d-427a-ad84-6cbe5fb8fe2e@googlegroups.com> <1a976f38-4944-425f-af72-f5ce7ce3ac85@googlegroups.com>
Message-ID: 

Just to be extra safe, that fix should also be applied to the
'with-executables-search-path' section as well, but your fix did help me
get past the checks for lgrind and c2html.

On Thu, Jan 30, 2020, 3:47 PM Satish Balay wrote:

> I pushed a fix to branch balay/fix-check-files-in-path - please give it a
> try.
>
> https://gitlab.com/petsc/petsc/-/merge_requests/2490
>
> Satish
>
> On Thu, 30 Jan 2020, Satish Balay via petsc-users wrote:
>
> > [earlier messages in the thread -- quoted in full above -- trimmed here]
From balay at mcs.anl.gov  Thu Jan 30 16:09:21 2020
From: balay at mcs.anl.gov (Satish Balay)
Date: Thu, 30 Jan 2020 16:09:21 -0600
Subject: [petsc-users] Fwd: Running moose/scripts/update_and_rebuild_petsc.sh on HPC
In-Reply-To: 
References: <277eb13a-0590-4b1a-a089-09a7c35efd83@googlegroups.com> <641b2c64-0e88-47e7-b33a-e2528287095a@googlegroups.com> <2bf174ba-a994-45f8-a661-454458a6ffa3@googlegroups.com> <0a8315b7-185e-44c9-b1d3-d3b8f52939d4@googlegroups.com> <2c9e5abd-bd4f-4b95-b2ea-8aa6a993d5fb@googlegroups.com> <0b4c29ac-2261-404a-84f6-5e8e28e1c51f@googlegroups.com> <095881e4-592d-427a-ad84-6cbe5fb8fe2e@googlegroups.com> <1a976f38-4944-425f-af72-f5ce7ce3ac85@googlegroups.com>
Message-ID: 

Ah - missed that part. I've updated the branch/MR.

Thanks!
Satish

On Thu, 30 Jan 2020, Tomas Mondragon wrote:

> Just to be extra safe, that fix should also be applied to the
> 'with-executables-search-path' section as well, but your fix did help me
> get past the checks for lgrind and c2html.
>
> [earlier messages in the thread -- quoted in full above -- trimmed here]


From jed at jedbrown.org  Thu Jan 30 16:26:01 2020
From: jed at jedbrown.org (Jed Brown)
Date: Thu, 30 Jan 2020 15:26:01 -0700
Subject: [petsc-users] Product of matrix row times a vector
In-Reply-To: <1EF5ED14-4D20-495A-AF76-5BA19CDD5A97@anl.gov>
References: <1EF5ED14-4D20-495A-AF76-5BA19CDD5A97@anl.gov>
Message-ID: <87wo98k2di.fsf@jedbrown.org>

"Smith, Barry F. via petsc-users" writes:

> MatGetSubMatrix() and then do the product on the submatrix, then
> VecSum()

If you're only doing it once, or the relevant rows are changing, it may be
cheaper to multiply the whole matrix by the vector instead of creating a
submatrix. If you have many of these to do at the same time, I'd suggest
creating a sparse matrix and doing the sparse matrix product.
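To make Barry's and Jed's recipes concrete, here is a minimal C sketch. It
assumes PETSc 3.8 or later, where MatGetSubMatrix() was renamed
MatCreateSubMatrix(); an assembled parallel Mat A; a Vec x laid out to
match A's columns; and a caller-built IS 'rows' in which each rank lists
its disjoint share of the wanted global row indices. Error checking
(CHKERRQ) is omitted for brevity -- this is a sketch, not an official
PETSc example.

#include <petscmat.h>

/* s = sum over the selected rows i of ( row_i(A) . x ) */
PetscErrorCode RowDotSum(Mat A, Vec x, IS rows, PetscScalar *s)
{
  Mat      Asub;
  Vec      y;
  IS       allcols;
  PetscInt cstart, cend;

  /* stride IS over this rank's columns so Asub inherits x's parallel layout */
  MatGetOwnershipRangeColumn(A, &cstart, &cend);
  ISCreateStride(PetscObjectComm((PetscObject)A), cend - cstart, cstart, 1, &allcols);

  MatCreateSubMatrix(A, rows, allcols, MAT_INITIAL_MATRIX, &Asub);
  MatCreateVecs(Asub, NULL, &y); /* y has one entry per selected row */
  MatMult(Asub, x, y);           /* y_k = row_k(A) . x */
  VecSum(y, s);                  /* add the scalars up; same result on every rank */

  ISDestroy(&allcols);
  VecDestroy(&y);
  MatDestroy(&Asub);
  return 0;
}

Jed's one-shot alternative avoids building a submatrix at all: compute
y = A x once with MatMult() and then sum just the selected entries of y,
for example with VecGetSubVector() on the same IS followed by VecSum().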
From balay at mcs.anl.gov Thu Jan 30 16:26:26 2020 From: balay at mcs.anl.gov (Satish Balay) Date: Thu, 30 Jan 2020 16:26:26 -0600 Subject: [petsc-users] Fwd: Running moose/scripts/update_and_rebuild_petsc.sh on HPC In-Reply-To: References: <277eb13a-0590-4b1a-a089-09a7c35efd83@googlegroups.com> <641b2c64-0e88-47e7-b33a-e2528287095a@googlegroups.com> <2bf174ba-a994-45f8-a661-454458a6ffa3@googlegroups.com> <0a8315b7-185e-44c9-b1d3-d3b8f52939d4@googlegroups.com> <2c9e5abd-bd4f-4b95-b2ea-8aa6a993d5fb@googlegroups.com> <0b4c29ac-2261-404a-84f6-5e8e28e1c51f@googlegroups.com> <095881e4-592d-427a-ad84-6cbe5fb8fe2e@googlegroups.com> <1a976f38-4944-425f-af72-f5ce7ce3ac85@googlegroups.com> Message-ID: Pushed one more change - move duplicate/similar code into a function. Satish On Thu, 30 Jan 2020, Satish Balay via petsc-users wrote: > Ah - missed that part. I've updated the branch/MR. > > Thanks! > Satish > > On Thu, 30 Jan 2020, Tomas Mondragon wrote: > > > Just to be extra safe, that fix should also be applied to the > > 'with-executables-search-path' section as well, but your fix did help me > > get past the checks for lgrind and c2html. > > > > On Thu, Jan 30, 2020, 3:47 PM Satish Balay wrote: > > > > > I pushed a fix to branch balay/fix-check-files-in-path - please give it a > > > try. > > > > > > https://gitlab.com/petsc/petsc/-/merge_requests/2490 > > > > > > Satish > > > > > > On Thu, 30 Jan 2020, Satish Balay via petsc-users wrote: > > > > > > > The issue is: > > > > > > > > >>> > > > > [Errno 13] Permission denied: '/pbs/SLB' > > > > <<< > > > > > > > > Try removing this from PATH - and rerun configure. > > > > > > > > This part of configure code should be fixed.. [or protected with 'try'] > > > > > > > > Satish > > > > > > > > On Thu, 30 Jan 2020, Fande Kong wrote: > > > > > > > > > Hi All, > > > > > > > > > > It looks like a bug for me. > > > > > > > > > > PETSc was still trying to detect lgrind even we set "--with-lgrind=0". > > > The > > > > > configuration log is attached. Any way to disable lgrind detection. > > > > > > > > > > Thanks, > > > > > > > > > > Fande > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > ---------- Forwarded message --------- > > > > > From: Tomas Mondragon > > > > > Date: Thu, Jan 30, 2020 at 9:54 AM > > > > > Subject: Re: Running moose/scripts/update_and_rebuild_petsc.sh on HPC > > > > > To: moose-users > > > > > > > > > > > > > > > Configuration log is attached > > > > > > > > > > > > > > > > > > > > > > > From fdkong.jd at gmail.com Thu Jan 30 16:28:44 2020 From: fdkong.jd at gmail.com (Fande Kong) Date: Thu, 30 Jan 2020 15:28:44 -0700 Subject: [petsc-users] Fwd: Running moose/scripts/update_and_rebuild_petsc.sh on HPC In-Reply-To: References: <277eb13a-0590-4b1a-a089-09a7c35efd83@googlegroups.com> <641b2c64-0e88-47e7-b33a-e2528287095a@googlegroups.com> <2bf174ba-a994-45f8-a661-454458a6ffa3@googlegroups.com> <0a8315b7-185e-44c9-b1d3-d3b8f52939d4@googlegroups.com> <2c9e5abd-bd4f-4b95-b2ea-8aa6a993d5fb@googlegroups.com> <0b4c29ac-2261-404a-84f6-5e8e28e1c51f@googlegroups.com> <095881e4-592d-427a-ad84-6cbe5fb8fe2e@googlegroups.com> <1a976f38-4944-425f-af72-f5ce7ce3ac85@googlegroups.com> Message-ID: Bring conversation to the MOOSE list as well. Fande, On Thu, Jan 30, 2020 at 3:26 PM Satish Balay via petsc-users < petsc-users at mcs.anl.gov> wrote: > Pushed one more change - move duplicate/similar code into a function. > > Satish > > On Thu, 30 Jan 2020, Satish Balay via petsc-users wrote: > > > Ah - missed that part. I've updated the branch/MR. 
> > > > Thanks! > > Satish > > > > On Thu, 30 Jan 2020, Tomas Mondragon wrote: > > > > > Just to be extra safe, that fix should also be applied to the > > > 'with-executables-search-path' section as well, but your fix did help > me > > > get past the checks for lgrind and c2html. > > > > > > On Thu, Jan 30, 2020, 3:47 PM Satish Balay wrote: > > > > > > > I pushed a fix to branch balay/fix-check-files-in-path - please give > it a > > > > try. > > > > > > > > https://gitlab.com/petsc/petsc/-/merge_requests/2490 > > > > > > > > Satish > > > > > > > > On Thu, 30 Jan 2020, Satish Balay via petsc-users wrote: > > > > > > > > > The issue is: > > > > > > > > > > >>> > > > > > [Errno 13] Permission denied: '/pbs/SLB' > > > > > <<< > > > > > > > > > > Try removing this from PATH - and rerun configure. > > > > > > > > > > This part of configure code should be fixed.. [or protected with > 'try'] > > > > > > > > > > Satish > > > > > > > > > > On Thu, 30 Jan 2020, Fande Kong wrote: > > > > > > > > > > > Hi All, > > > > > > > > > > > > It looks like a bug for me. > > > > > > > > > > > > PETSc was still trying to detect lgrind even we set > "--with-lgrind=0". > > > > The > > > > > > configuration log is attached. Any way to disable lgrind > detection. > > > > > > > > > > > > Thanks, > > > > > > > > > > > > Fande > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > ---------- Forwarded message --------- > > > > > > From: Tomas Mondragon > > > > > > Date: Thu, Jan 30, 2020 at 9:54 AM > > > > > > Subject: Re: Running moose/scripts/update_and_rebuild_petsc.sh > on HPC > > > > > > To: moose-users > > > > > > > > > > > > > > > > > > Configuration log is attached > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From jeremy at seamplex.com Thu Jan 30 17:02:57 2020 From: jeremy at seamplex.com (Jeremy Theler) Date: Thu, 30 Jan 2020 20:02:57 -0300 Subject: [petsc-users] Product of matrix row times a vector References: <50c1bb34dadeb87e015ba84f1c1e9198eafd3c0d.camel@seamplex.com> Message-ID: On Thu, 2020-01-30 at 21:10 +0000, Smith, Barry F. wrote: > MatGetSubMatrix() and then do the product on the sub matrix then > VecSum > Ok, I have it working in a single-processor and throws the expected value. Yet I have a segfault in parallel when I ask for the IS corresponding to the rows. When I call this instruction in parallel I get a segfault (I can post the full debug output if needed): ISCreateStride(PETSC_COMM_WORLD, size_local, first_row, 1, &set_cols); I tried also ISCreateStride(PETSC_COMM_WORLD, size_global, 0, 1, &set_cols)); but it also fails with the same segfault. What am I getting wrong? -- jeremy From knepley at gmail.com Thu Jan 30 17:05:57 2020 From: knepley at gmail.com (Matthew Knepley) Date: Thu, 30 Jan 2020 18:05:57 -0500 Subject: [petsc-users] Product of matrix row times a vector In-Reply-To: References: <50c1bb34dadeb87e015ba84f1c1e9198eafd3c0d.camel@seamplex.com> Message-ID: On Thu, Jan 30, 2020 at 6:04 PM Jeremy Theler wrote: > > On Thu, 2020-01-30 at 21:10 +0000, Smith, Barry F. wrote: > > > MatGetSubMatrix() and then do the product on the sub matrix then > > VecSum > > > > Ok, I have it working in a single-processor and throws the expected > value. Yet I have a segfault in parallel when I ask for the IS > corresponding to the rows. 
When I call this instruction in parallel I > get a segfault (I can post the full debug output if needed): > > ISCreateStride(PETSC_COMM_WORLD, size_local, first_row, 1, &set_cols); > > I tried also > > ISCreateStride(PETSC_COMM_WORLD, size_global, 0, 1, &set_cols)); > > but it also fails with the same segfault. > > What am I getting wrong? > Show the entire error, including stack. Thanks, Matt > -- > jeremy > > > > > -- What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. -- Norbert Wiener https://www.cse.buffalo.edu/~knepley/ -------------- next part -------------- An HTML attachment was scrubbed... URL: From jeremy at seamplex.com Fri Jan 31 03:18:09 2020 From: jeremy at seamplex.com (Jeremy Theler) Date: Fri, 31 Jan 2020 06:18:09 -0300 Subject: [petsc-users] Product of matrix row times a vector In-Reply-To: References: <50c1bb34dadeb87e015ba84f1c1e9198eafd3c0d.camel@seamplex.com> Message-ID: <8c8d90c3f16475fd147fab9ede81936877743b04.camel@seamplex.com> On Thu, 2020-01-30 at 18:05 -0500, Matthew Knepley wrote: > On Thu, Jan 30, 2020 at 6:04 PM Jeremy Theler > wrote: > > On Thu, 2020-01-30 at 21:10 +0000, Smith, Barry F. wrote: > > > > > MatGetSubMatrix() and then do the product on the sub matrix > > then > > > VecSum > > > > > > > Ok, I have it working in a single-processor and throws the expected > > value. Yet I have a segfault in parallel when I ask for the IS > > corresponding to the rows. When I call this instruction in parallel > > I > > get a segfault (I can post the full debug output if needed): > > > > ISCreateStride(PETSC_COMM_WORLD, size_local, first_row, 1, > > &set_cols); > > > > I tried also > > > > ISCreateStride(PETSC_COMM_WORLD, size_global, 0, 1, &set_cols)); > > > > but it also fails with the same segfault. > > > > What am I getting wrong? > > Show the entire error, including stack. > I run in two processes with start_on_debugger. Main terminal says malloc(): corrupted top size and the gdb output with the stack trace is Attaching to program: /home/gtheler/codigos/wasora-suite/fino/examples/fino, process 31192 [New LWP 31196] [New LWP 31198] [Thread debugging using libthread_db enabled] Using host libthread_db library "/lib/x86_64-linux-gnu/libthread_db.so.1". 0x00007fd236019720 in __GI___nanosleep ( requested_time=requested_time at entry=0x7ffdf9838190, remaining=remaining at entry=0x7ffdf9838190) at ../sysdeps/unix/sysv/linux/nanosleep.c:28 28 ../sysdeps/unix/sysv/linux/nanosleep.c: No such file or directory. (gdb) c Continuing. [New Thread 0x7fd233034700 (LWP 31212)] Thread 1 "fino" received signal SIGABRT, Aborted. __GI_raise (sig=sig at entry=6) at ../sysdeps/unix/sysv/linux/raise.c:50 50 ../sysdeps/unix/sysv/linux/raise.c: No such file or directory. 
(gdb) where #0 __GI_raise (sig=sig at entry=6) at ../sysdeps/unix/sysv/linux/raise.c:50 #1 0x00007fd235f75535 in __GI_abort () at abort.c:79 #2 0x00007fd235fcc508 in __libc_message (action=action at entry=do_abort, fmt=fmt at entry=0x7fd2360d728d "%s\n") at ../sysdeps/posix/libc_fatal.c:181 #3 0x00007fd235fd2c1a in malloc_printerr ( str=str at entry=0x7fd2360d5518 "malloc(): corrupted top size") at malloc.c:5341 #4 0x00007fd235fd620d in _int_malloc ( av=av at entry=0x7fd23610ec40 , bytes=bytes at entry=2372) at malloc.c:4099 #5 0x00007fd235fd756a in __GI___libc_malloc (bytes=2372) at malloc.c:3057 #6 0x00007fd2377dd42d in PetscMallocAlign (mem=2372, clear=PETSC_TRUE, line=37, func=0x7fd239450408 <__func__.14861> "ISCreate", file=0x7fd239450298 "/home/gtheler/libs/petsc-3.12.3/src/vec/is/is/interface/isreg.c", result=0x7ffdf983c618) at /home/gtheler/libs/petsc-3.12.3/src/sys/memory/mal.c:49 #7 0x00007fd2377e0875 in PetscTrMallocDefault (a=768, clear=PETSC_TRUE, lineno=37, function=0x7fd239450408 <__func__.14861> "ISCreate", filename=0x7fd239450298 "/home/gtheler/libs/petsc-3.12.3/src/vec/is/is/interface/isreg.c", result=0x7ffdf983c8b0) at /home/gtheler/libs/petsc-3.12.3/src/sys/memory/mtr.c:164 #8 0x00007fd2377dedf0 in PetscMallocA (n=1, clear=PETSC_TRUE, lineno=37, function=0x7fd239450408 <__func__.14861> "ISCreate", --Type for more, q to quit, c to continue without paging-- filename=0x7fd239450298 "/home/gtheler/libs/petsc-3.12.3/src/vec/is/is/interface/isreg.c", bytes0=768, ptr0=0x7ffdf983c8b0) at /home/gtheler/libs/petsc-3.12.3/src/sys/memory/mal.c:422 #9 0x00007fd237c67760 in ISCreate (comm=0x556dbe8c9ee0 , is=0x7ffdf983c8b0) at /home/gtheler/libs/petsc-3.12.3/src/vec/is/is/interface/isreg.c:37 #10 0x00007fd237c4589a in ISCreateStride ( comm=0x556dbe8c9ee0 , n=70188, first=70191, step=1, is=0x7ffdf983c8b0) #11 0x0000556dbe82aab4 in fino_instruction_reaction (arg=0x556dbf8b4460) at ./reactions.c:72 #12 0x0000556dbe8a8aa2 in wasora_step (whence=0) at ../wasora/src/wasora.c:406 #13 0x0000556dbe8a81b3 in wasora_standard_run () at ../wasora/src/wasora.c:206 #14 0x0000556dbe8a80a5 in main (argc=3, argv=0x7ffdf983cbd8) at ../wasora/src/wasora.c:166 (gdb) From knepley at gmail.com Fri Jan 31 07:59:37 2020 From: knepley at gmail.com (Matthew Knepley) Date: Fri, 31 Jan 2020 08:59:37 -0500 Subject: [petsc-users] Product of matrix row times a vector In-Reply-To: <8c8d90c3f16475fd147fab9ede81936877743b04.camel@seamplex.com> References: <50c1bb34dadeb87e015ba84f1c1e9198eafd3c0d.camel@seamplex.com> <8c8d90c3f16475fd147fab9ede81936877743b04.camel@seamplex.com> Message-ID: On Fri, Jan 31, 2020 at 4:19 AM Jeremy Theler wrote: > > On Thu, 2020-01-30 at 18:05 -0500, Matthew Knepley wrote: > > On Thu, Jan 30, 2020 at 6:04 PM Jeremy Theler > > wrote: > > > On Thu, 2020-01-30 at 21:10 +0000, Smith, Barry F. wrote: > > > > > > > MatGetSubMatrix() and then do the product on the sub matrix > > > then > > > > VecSum > > > > > > > > > > Ok, I have it working in a single-processor and throws the expected > > > value. Yet I have a segfault in parallel when I ask for the IS > > > corresponding to the rows. When I call this instruction in parallel > > > I > > > get a segfault (I can post the full debug output if needed): > > > > > > ISCreateStride(PETSC_COMM_WORLD, size_local, first_row, 1, > > > &set_cols); > > > > > > I tried also > > > > > > ISCreateStride(PETSC_COMM_WORLD, size_global, 0, 1, &set_cols)); > > > > > > but it also fails with the same segfault. > > > > > > What am I getting wrong? 
> > > > Show the entire error, including stack. > > > > I run in two processes with start_on_debugger. Main terminal says > > malloc(): corrupted top size > The arguments look fine. I would run in valgrind, since it seems like you have memory corruption somewhere else in the code. Thanks, Matt > and the gdb output with the stack trace is > > > Attaching to program: > /home/gtheler/codigos/wasora-suite/fino/examples/fino, process 31192 > [New LWP 31196] > [New LWP 31198] > [Thread debugging using libthread_db enabled] > Using host libthread_db library "/lib/x86_64-linux-gnu/libthread_db.so.1". > 0x00007fd236019720 in __GI___nanosleep ( > requested_time=requested_time at entry=0x7ffdf9838190, > remaining=remaining at entry=0x7ffdf9838190) > at ../sysdeps/unix/sysv/linux/nanosleep.c:28 > 28 ../sysdeps/unix/sysv/linux/nanosleep.c: No such file or directory. > (gdb) c > Continuing. > [New Thread 0x7fd233034700 (LWP 31212)] > Thread 1 "fino" received signal SIGABRT, Aborted. > __GI_raise (sig=sig at entry=6) at ../sysdeps/unix/sysv/linux/raise.c:50 > 50 ../sysdeps/unix/sysv/linux/raise.c: No such file or directory. > (gdb) where > #0 __GI_raise (sig=sig at entry=6) at ../sysdeps/unix/sysv/linux/raise.c:50 > #1 0x00007fd235f75535 in __GI_abort () at abort.c:79 > #2 0x00007fd235fcc508 in __libc_message (action=action at entry=do_abort, > fmt=fmt at entry=0x7fd2360d728d "%s\n") at > ../sysdeps/posix/libc_fatal.c:181 > #3 0x00007fd235fd2c1a in malloc_printerr ( > str=str at entry=0x7fd2360d5518 "malloc(): corrupted top size") > at malloc.c:5341 > #4 0x00007fd235fd620d in _int_malloc ( > av=av at entry=0x7fd23610ec40 , bytes=bytes at entry=2372) > at malloc.c:4099 > #5 0x00007fd235fd756a in __GI___libc_malloc (bytes=2372) at malloc.c:3057 > #6 0x00007fd2377dd42d in PetscMallocAlign (mem=2372, clear=PETSC_TRUE, > line=37, func=0x7fd239450408 <__func__.14861> "ISCreate", > file=0x7fd239450298 > "/home/gtheler/libs/petsc-3.12.3/src/vec/is/is/interface/isreg.c", > result=0x7ffdf983c618) > at /home/gtheler/libs/petsc-3.12.3/src/sys/memory/mal.c:49 > #7 0x00007fd2377e0875 in PetscTrMallocDefault (a=768, clear=PETSC_TRUE, > lineno=37, function=0x7fd239450408 <__func__.14861> "ISCreate", > filename=0x7fd239450298 > "/home/gtheler/libs/petsc-3.12.3/src/vec/is/is/interface/isreg.c", > result=0x7ffdf983c8b0) > at /home/gtheler/libs/petsc-3.12.3/src/sys/memory/mtr.c:164 > #8 0x00007fd2377dedf0 in PetscMallocA (n=1, clear=PETSC_TRUE, lineno=37, > function=0x7fd239450408 <__func__.14861> "ISCreate", > --Type for more, q to quit, c to continue without paging-- > filename=0x7fd239450298 > "/home/gtheler/libs/petsc-3.12.3/src/vec/is/is/interface/isreg.c", > bytes0=768, ptr0=0x7ffdf983c8b0) > at /home/gtheler/libs/petsc-3.12.3/src/sys/memory/mal.c:422 > #9 0x00007fd237c67760 in ISCreate (comm=0x556dbe8c9ee0 > , > is=0x7ffdf983c8b0) > at /home/gtheler/libs/petsc-3.12.3/src/vec/is/is/interface/isreg.c:37 > #10 0x00007fd237c4589a in ISCreateStride ( > comm=0x556dbe8c9ee0 , n=70188, first=70191, > step=1, > is=0x7ffdf983c8b0) > #11 0x0000556dbe82aab4 in fino_instruction_reaction (arg=0x556dbf8b4460) > at ./reactions.c:72 > #12 0x0000556dbe8a8aa2 in wasora_step (whence=0) at > ../wasora/src/wasora.c:406 > #13 0x0000556dbe8a81b3 in wasora_standard_run () at > ../wasora/src/wasora.c:206 > #14 0x0000556dbe8a80a5 in main (argc=3, argv=0x7ffdf983cbd8) > at ../wasora/src/wasora.c:166 > (gdb) > > -- What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to 
which their experiments lead. -- Norbert Wiener https://www.cse.buffalo.edu/~knepley/ -------------- next part -------------- An HTML attachment was scrubbed... URL:

From tom.alex.mondragon at gmail.com Fri Jan 31 11:58:34 2020
From: tom.alex.mondragon at gmail.com (Tomas Mondragon)
Date: Fri, 31 Jan 2020 11:58:34 -0600
Subject: [petsc-users] Fwd: Running moose/scripts/update_and_rebuild_petsc.sh on HPC
In-Reply-To: References: <277eb13a-0590-4b1a-a089-09a7c35efd83@googlegroups.com> <641b2c64-0e88-47e7-b33a-e2528287095a@googlegroups.com> <2bf174ba-a994-45f8-a661-454458a6ffa3@googlegroups.com> <0a8315b7-185e-44c9-b1d3-d3b8f52939d4@googlegroups.com> <2c9e5abd-bd4f-4b95-b2ea-8aa6a993d5fb@googlegroups.com> <0b4c29ac-2261-404a-84f6-5e8e28e1c51f@googlegroups.com> <095881e4-592d-427a-ad84-6cbe5fb8fe2e@googlegroups.com> <1a976f38-4944-425f-af72-f5ce7ce3ac85@googlegroups.com>
Message-ID:

Hypre problem resolved. PETSc commit 05f86fb, made on August 5, 2019, added the line 'self.installwithbatch = 0' to the __init__ method of the Configure class in the file petsc/config/BuildSystem/config/packages/hypre.py to fix a bug with hypre installation on Cray KNL systems. Since the machine I was installing on was an SGI system, I decided to try switching to 'self.installwithbatch = 1' and it worked! The configure script was finally able to run to completion.

Perhaps there can be a Cray flag for configure that controls this, since it is only Crays that have this problem with hypre?

For my benefit when I have to do this again - to get moose/scripts/update_and_rebuild_petsc.sh to run on an SGI system as a batch job, I had to:

- Make sure the git (gnu version) module was loaded
- git clone moose
- cd to the petsc directory and git clone the petsc submodule, but make sure to pull the latest commit. The commit that the moose repo refers to is outdated.
- cd back to the moose directory, then git add petsc and git commit so that the newest petsc commit gets used by the update script; otherwise the old commit will be used.
- Download the tarballs for fblaslapack, hypre, metis, mumps, parmetis, scalapack, (PT)Scotch, slepc, and SuperLU_DIST. The URLs are in the __init__ methods of the relevant files in moose/petsc/config/BuildSystem/config/packages/
- Alter the moose/scripts/update_and_rebuild_petsc.sh script so that it is a working PBS batch job. Be sure to module swap to the gcc compiler and module load git (gnu version), and alter the ./configure command arguments, adding --with-cudac=0 and --with-batch=1 and changing --download-<package>=/path/to/thirdparty/package/tarball
- If the supercomputer is *not* a Cray KNL system, change line 26 of moose/petsc/config/BuildSystem/config/packages/hypre.py from 'self.installwithbatch = 0' to 'self.installwithbatch = 1'; otherwise, install hypre on its own and use --with-hypre-dir=/path/to/hypre in the ./configure command

On Fri, Jan 31, 2020 at 10:06 AM Tomas Mondragon < tom.alex.mondragon at gmail.com> wrote:
> Thanks for the change to base.py. Pulling the commit, confirm was able to
> skip over lgrind and c2html. I did have a problem with Parmetis, but that
> was because I was using an old ParMetis commit accidentally. Fixed by
> downloading the right commit of ParMetis.
>
> My current problem is with Hypre. Apparently --download-hypre cannot be
> used with --with-batch=1 even if the download URL is on the local machine.
> The configuration.log that resulted is attached for anyone who may be
> interested.
> > -- > You received this message because you are subscribed to a topic in the > Google Groups "moose-users" group. > To unsubscribe from this topic, visit > https://groups.google.com/d/topic/moose-users/2xZsBpG-DtY/unsubscribe. > To unsubscribe from this group and all its topics, send an email to > moose-users+unsubscribe at googlegroups.com. > To view this discussion on the web visit > https://groups.google.com/d/msgid/moose-users/a34fa09e-a4f5-4225-8933-34eb36759260%40googlegroups.com > > . > -------------- next part -------------- An HTML attachment was scrubbed... URL:

From bsmith at mcs.anl.gov Fri Jan 31 14:13:17 2020
From: bsmith at mcs.anl.gov (Smith, Barry F.)
Date: Fri, 31 Jan 2020 20:13:17 +0000
Subject: [petsc-users] Running moose/scripts/update_and_rebuild_petsc.sh on HPC
In-Reply-To: References: <277eb13a-0590-4b1a-a089-09a7c35efd83@googlegroups.com> <641b2c64-0e88-47e7-b33a-e2528287095a@googlegroups.com> <2bf174ba-a994-45f8-a661-454458a6ffa3@googlegroups.com> <0a8315b7-185e-44c9-b1d3-d3b8f52939d4@googlegroups.com> <2c9e5abd-bd4f-4b95-b2ea-8aa6a993d5fb@googlegroups.com> <0b4c29ac-2261-404a-84f6-5e8e28e1c51f@googlegroups.com> <095881e4-592d-427a-ad84-6cbe5fb8fe2e@googlegroups.com> <1a976f38-4944-425f-af72-f5ce7ce3ac85@googlegroups.com>
Message-ID:

https://gitlab.com/petsc/petsc/-/merge_requests/2494

Will only turn off the hypre batch build if it is a KNL system. Will be added to the maint branch.

Barry

> On Jan 31, 2020, at 11:58 AM, Tomas Mondragon wrote:
>
> Hypre problem resolved. PETSc commit 05f86fb made in August 05, 2019 added the line 'self.installwithbatch = 0' to the __init__ method of the Configure class in the file petsc/config/BuildSystem/config/packages/hypre.py to fix a bug with hypre installation on Cray KNL systems. Since the machine I was installing os was an SGI system, I decided to try switching to 'self.installwithbatch = 1' and it worked! The configure script was finally able to run to completion.
>
> Perhaps there can be a Cray flag for configure that can control this, since it is only Cray's that have this problem with Hypre?
>
> For my benefit when I have to do this again -
> To get moose/petsc/scripts/update_and_rebuild_petsc.sh to run on an SGI system as a batch job, I had to:
>
> Make sure the git (gnu version) module was loaded
> git clone moose
> cd to the petsc directory and git clone the petsc submodule, but make sure to pull the latest commit. The commit that the moose repo refers to is outdated.
> cd back to the moose directory, git add petsc and git commit so that the newest petsc commit gets used by the update script. otherwise the old commit will be used.
> download the tarballs for fblaspack, hypre, metis, mumps, parmetis, scalapack, (PT)scotch, slepc, and superLU_dist. The URLS are in the __init__ methods of the relevant files inmost/petsc/config/BuildSystem/config/packages/
> alter moose/scripts/update_and_rebuild_petsc.sh script so that it is a working PBS batch job.
Be sure to module swap to the gcc compiler and module load git (gnu version) and alter the ./configure command arguments > adding > --with-cudac=0 > --with-batch=1 > changing > --download-=/path/to/thirdparty/package/tarball > If the supercomputer is not a Cray KNL system, change line 26 of moose/petsc/config/BuildSystem/config/packages/hypre.py from 'self.installwithbath = 0' to 'self.installwithbatch = 1', otherwise, install hypre on its own and use --with-hypre-dir=/path/to/hypre in the ./configure command > > On Fri, Jan 31, 2020 at 10:06 AM Tomas Mondragon wrote: > Thanks for the change to base.py. Pulling the commit, confirm was able to skip over lgrind and c2html. I did have a problem with Parmetis, but that was because I was using an old ParMetis commit accidentally. Fixed by downloading the right commit of ParMetis. > > My current problem is with Hypre. Apparently --download-hypre cannot be used with --with-batch=1 even if the download URL is on the local machine. The configuration.log that resulted is attached for anyone who may be interested. > > -- > You received this message because you are subscribed to a topic in the Google Groups "moose-users" group. > To unsubscribe from this topic, visit https://groups.google.com/d/topic/moose-users/2xZsBpG-DtY/unsubscribe. > To unsubscribe from this group and all its topics, send an email to moose-users+unsubscribe at googlegroups.com. > To view this discussion on the web visit https://groups.google.com/d/msgid/moose-users/a34fa09e-a4f5-4225-8933-34eb36759260%40googlegroups.com. From bsmith at mcs.anl.gov Fri Jan 31 18:23:36 2020 From: bsmith at mcs.anl.gov (Smith, Barry F.) Date: Sat, 1 Feb 2020 00:23:36 +0000 Subject: [petsc-users] Running moose/scripts/update_and_rebuild_petsc.sh on HPC In-Reply-To: References: <277eb13a-0590-4b1a-a089-09a7c35efd83@googlegroups.com> <641b2c64-0e88-47e7-b33a-e2528287095a@googlegroups.com> <2bf174ba-a994-45f8-a661-454458a6ffa3@googlegroups.com> <0a8315b7-185e-44c9-b1d3-d3b8f52939d4@googlegroups.com> <2c9e5abd-bd4f-4b95-b2ea-8aa6a993d5fb@googlegroups.com> <0b4c29ac-2261-404a-84f6-5e8e28e1c51f@googlegroups.com> <095881e4-592d-427a-ad84-6cbe5fb8fe2e@googlegroups.com> <1a976f38-4944-425f-af72-f5ce7ce3ac85@googlegroups.com> Message-ID: You might find this option useful. --with-packages-download-dir= Skip network download of package tarballs and locate them in specified dir. If not found in dir, print package URL - so it can be obtained manually. This generates a list of URLs to download so you don't need to look through the xxx.py files for that information. Conceivably a script could gather this information from the run of configure and get the tarballs for you. Barry > On Jan 31, 2020, at 11:58 AM, Tomas Mondragon wrote: > > Hypre problem resolved. PETSc commit 05f86fb made in August 05, 2019 added the line 'self.installwithbatch = 0' to the __init__ method of the Configure class in the file petsc/config/BuildSystem/config/packages/hypre.py to fix a bug with hypre installation on Cray KNL systems. Since the machine I was installing os was an SGI system, I decided to try switching to 'self.installwithbatch = 1' and it worked! The configure script was finally able to run to completion. > > Perhaps there can be a Cray flag for configure that can control this, since it is only Cray's that have this problem with Hypre? 
> > For my benefit when I have to do this again - > To get moose/petsc/scripts/update_and_rebuild_petsc.sh to run on an SGI system as a batch job, I had to: > > Make sure the git (gnu version) module was loaded > git clone moose > cd to the petsc directory and git clone the petsc submodule, but make sure to pull the latest commit. The commit that the moose repo refers to is outdated. > cd back to the moose directory, git add petsc and git commit so that the newest petsc commit gets used by the update script. otherwise the old commit will be used. > download the tarballs for fblaspack, hypre, metis, mumps, parmetis, scalapack, (PT)scotch, slepc, and superLU_dist. The URLS are in the __init__ methods of the relevant files inmost/petsc/config/BuildSystem/config/packages/ > alter moose/scripts/update_and_rebuild_petsc.sh script so that it is a working PBS batch job. Be sure to module swap to the gcc compiler and module load git (gnu version) and alter the ./configure command arguments > adding > --with-cudac=0 > --with-batch=1 > changing > --download-=/path/to/thirdparty/package/tarball > If the supercomputer is not a Cray KNL system, change line 26 of moose/petsc/config/BuildSystem/config/packages/hypre.py from 'self.installwithbath = 0' to 'self.installwithbatch = 1', otherwise, install hypre on its own and use --with-hypre-dir=/path/to/hypre in the ./configure command > > On Fri, Jan 31, 2020 at 10:06 AM Tomas Mondragon wrote: > Thanks for the change to base.py. Pulling the commit, confirm was able to skip over lgrind and c2html. I did have a problem with Parmetis, but that was because I was using an old ParMetis commit accidentally. Fixed by downloading the right commit of ParMetis. > > My current problem is with Hypre. Apparently --download-hypre cannot be used with --with-batch=1 even if the download URL is on the local machine. The configuration.log that resulted is attached for anyone who may be interested. > > -- > You received this message because you are subscribed to a topic in the Google Groups "moose-users" group. > To unsubscribe from this topic, visit https://groups.google.com/d/topic/moose-users/2xZsBpG-DtY/unsubscribe. > To unsubscribe from this group and all its topics, send an email to moose-users+unsubscribe at googlegroups.com. > To view this discussion on the web visit https://groups.google.com/d/msgid/moose-users/a34fa09e-a4f5-4225-8933-34eb36759260%40googlegroups.com.
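As a concrete illustration of the workflow Barry outlines above (the directory path and the package list here are placeholders, not the exact MOOSE configure line): point configure at a local tarball directory instead of editing each --download-<package>= argument individually:

    mkdir -p $HOME/petsc-tarballs
    ./configure --with-packages-download-dir=$HOME/petsc-tarballs \
        --download-fblaslapack --download-hypre --download-metis \
        --download-parmetis --download-mumps --download-scalapack \
        --with-batch=1

On a machine with no outbound network access, the first run prints the URL of every tarball missing from that directory; fetch those files on a machine that does have access, copy them into $HOME/petsc-tarballs, and rerun the same configure command.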