From gpau at lbl.gov Mon Jul 1 19:05:52 2013
From: gpau at lbl.gov (George Pau)
Date: Mon, 1 Jul 2013 17:05:52 -0700
Subject: [petsc-users] configuration on hopper at nersc
Message-ID: 

I was having trouble configuring petsc 3.4.1 on hopper. The same
configuration that I used for 3.3.6 failed due to "Mismatched single quotes
in C library string". Attached (configure_without_additional_flag.log) is
the result of the configuration script.

I read in a previous thread
(http://lists.mcs.anl.gov/pipermail/petsc-users/2011-July/009310.html)
that I need to add --with-clib-autodetect=0 --with-cxxlib-autodetect=0
--with-fortranlib-autodetect=0 for that error. But it failed later with
the following error: "C++ error! mpi.h could not be located at: []".
configure_with_additional_flag.log is the result of the configuration
script.

I am using the gnu compiler, version 4.1.40, if that is relevant. Please
let me know if you need other configuration information.

Thanks,
George

-- 
George Pau
Earth Sciences Division
Lawrence Berkeley National Laboratory
One Cyclotron, MS 74-120
Berkeley, CA 94720

(510) 486-7196
gpau at lbl.gov
http://esd.lbl.gov/about/staff/georgepau/

-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From gpau at lbl.gov Mon Jul 1 19:11:36 2013
From: gpau at lbl.gov (George Pau)
Date: Mon, 1 Jul 2013 17:11:36 -0700
Subject: [petsc-users] configuration on hopper at nersc
In-Reply-To: 
References: 
Message-ID: 

As usual, I forgot the attachments. George

On Mon, Jul 1, 2013 at 5:05 PM, George Pau wrote:

> I was having trouble configuring petsc 3.4.1 on hopper. The same
> configuration that use for 3.3.6 failed due to "Mismatched single quotes in
> C library string". Attached (configure_without_additional_flag.log) is the
> result of the configuration script.
>
> I read in a previous thread (
> http://lists.mcs.anl.gov/pipermail/petsc-users/2011-July/009310.html)
> that I need to add --with-clib-autodetect=0 --with-cxxlib-autodetect=0
> --with-fortranlib-autodetect=0 for that error. But it failed later with
> the following error: "C++ error! mpi.h could not be located at: []".
> configure_with_additional_flag.log is the result of the configuration
> script.
>
> I am using the gnu compiler and it is of version 4.1.40, if that is
> relevant. Please let me know if you need other configuration information.
>
> Thanks,
> George
>
> --
> George Pau
> Earth Sciences Division
> Lawrence Berkeley National Laboratory
> One Cyclotron, MS 74-120
> Berkeley, CA 94720
>
> (510) 486-7196
> gpau at lbl.gov
> http://esd.lbl.gov/about/staff/georgepau/
>

-------------- next part --------------
An HTML attachment was scrubbed...
URL: 
-------------- next part --------------
A non-text attachment was scrubbed...
Name: configure_without_additional_flag.log
Type: application/octet-stream
Size: 430571 bytes
Desc: not available
URL: 
-------------- next part --------------
A non-text attachment was scrubbed...
Name: configure_with_additional_flag.log
Type: application/octet-stream
Size: 1470687 bytes
Desc: not available
URL: 

From Franck.Houssen at cea.fr Tue Jul 2 08:38:33 2013
From: Franck.Houssen at cea.fr (HOUSSEN Franck)
Date: Tue, 2 Jul 2013 13:38:33 +0000
Subject: [petsc-users] PETSc : how to get back d_nnz and o_nnz ?
Message-ID: 

Hello,

From a Mat object (already preallocated with MatMPIAIJSetPreallocation),
can I get back the d_nnz and o_nnz arrays? The MatGetInfo method seems to
allow getting back only global information like nz_allocated. Is there a
way to get d_nnz and o_nnz?

Regards,

FH

-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From jedbrown at mcs.anl.gov Tue Jul 2 08:53:24 2013
From: jedbrown at mcs.anl.gov (Jed Brown)
Date: Tue, 02 Jul 2013 08:53:24 -0500
Subject: [petsc-users] PETSc : how to get back d_nnz and o_nnz ?
In-Reply-To: 
References: 
Message-ID: <877gh93w6j.fsf@mcs.anl.gov>

HOUSSEN Franck writes:

> Hello,
>
> From a Mat object (already preallocated with
> MatMPIAIJSetPreallocation), can I get back the d_nnz and o_nnz arrays?

MatMPIAIJGetSeqAIJ(), then MatGetRowIJ(). This returns the offsets
array, so nnz[i] = ai[i+1] - ai[i];

Why do you want this low-level information?

-------------- next part --------------
A non-text attachment was scrubbed...
Name: not available
Type: application/pgp-signature
Size: 835 bytes
Desc: not available
URL: 

From balay at mcs.anl.gov Tue Jul 2 10:11:59 2013
From: balay at mcs.anl.gov (Satish Balay)
Date: Tue, 2 Jul 2013 10:11:59 -0500 (CDT)
Subject: [petsc-users] petsc-3.4.2.tar.gz now available
Message-ID: 

Dear PETSc users,

The patch release petsc-3.4.2 is now available for download.
http://www.mcs.anl.gov/petsc/download/index.html

Some of the changes include:

* plog: Fixed Fortran binding for PetscLogPrintDetailed()
* fixed src/dm/examples/tutorials/ex7.c to use replacement for
  PetscViewerBinaryMatlabOpen() also needed to fix bin/matlab/PetscBagRead.m
  for new location of bagimpl.h and put bagimpl.h into the appropriate
  public place
* badly formatted /*I "include" I*/, unused variables fixed that compile
  with MATLAB
* sundials: disable f77 properly
* exodus: cleanup build process [make.inc is unused & fix postInstall() to
  use the correct file]
* exodus: do 'make clean' between repeat builds
* mpicc: disable 'wrapper override check' when user provides
  --with-mpi-compilers=0
* Fortran: PCMGResidual_Default needs to handle double-underscore
* version: fix PETSC_VERSION_LT() to require PETSC_VERSION_RELEASE=1
* Escape quotes in paths in petscmachineinfo.h
* configure: handle paths with spaces in getExecutable(). Now the following
  works: '--with-mpiexec=/cygdrive/c/Program\ Files/MPICH2/bin/mpiexec
  -localonly',
* tgamma: icc on windows requires mathimf.h for tgamma. [but configure
  doesn't use any includes in tgamma test]

Satish

From mpovolot at purdue.edu Tue Jul 2 10:51:47 2013
From: mpovolot at purdue.edu (Michael Povolotskyi)
Date: Tue, 02 Jul 2013 11:51:47 -0400
Subject: [petsc-users] question about MatMatMult
Message-ID: <51D2F713.4060008@purdue.edu>

Dear Petsc developers,
I'm gradually moving from the version 3.2 to the version 3.4.
I had to skip the version 3.3 because of the bug in MatMatMult that has
been reported to you.

What I see is that the MatMatMult works well but there is a difference:
in the version 3.2 if a sparse matrix was multiplied by a dense matrix
the result was dense.
In the version 3.4 the result is sparse.
Is my observation correct?
Thank you,
Michael.
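P.S. A minimal sketch of the check I am doing (A and B are placeholders for
my assembled matrices; creation, includes, and cleanup are omitted):

    Mat     C;
    MatType type;
    ierr = MatMatMult(A, B, MAT_INITIAL_MATRIX, PETSC_DEFAULT, &C);CHKERRQ(ierr);
    ierr = MatGetType(C, &type);CHKERRQ(ierr);
    if (strcmp(type, MATSEQDENSE) && strcmp(type, MATMPIDENSE)) {
        /* type is neither "seqdense" nor "mpidense": the case I am asking about */
    }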
-- Michael Povolotskyi, PhD Research Assistant Professor Network for Computational Nanotechnology 207 S Martin Jischke Drive Purdue University, DLR, room 441-10 West Lafayette, Indiana 47907 phone: +1-765-494-9396 fax: +1-765-496-6026 From balay at mcs.anl.gov Tue Jul 2 11:18:12 2013 From: balay at mcs.anl.gov (Satish Balay) Date: Tue, 2 Jul 2013 11:18:12 -0500 (CDT) Subject: [petsc-users] configuration on hopper at nersc In-Reply-To: References: Message-ID: Configure Options: --configModules=PETSc.Configure --optionsModule=PETSc.compilerOptions --with-debugging=0 --with-cc=/opt/cray/xt-asyncpe/5.19/bin/cc --with-fc=/opt/cray/xt-asyncpe/5.19/bin/ftn --with-mpiexec=aprun --download-metis=1 --download-parmetis=1 --with-clib-autodetect=0 --with-cxxlib-autodetect=0 --with-fortranlib-autodetect=0 --prefix=/global/homes/g/gpau/tough_codes/esd-tough3/build/Linux-x86_64-MPI-eco2n-Release/toughlib Can you use the additional option --with-cxx=0 ? [or perhaps --with-cxx=/opt/cray/xt-asyncpe/5.19/bin/CC] Satish On Mon, 1 Jul 2013, George Pau wrote: > As usual, I forgot the attachments. George > > > On Mon, Jul 1, 2013 at 5:05 PM, George Pau wrote: > > > I was having trouble configuring petsc 3.4.1 on hopper. The same > > configuration that use for 3.3.6 failed due to "Mismatched single quotes in > > C library string". Attached (configure_without_additional_flag.log) is the > > result of the configuration script. > > > > I read in a previous thread ( > > http://lists.mcs.anl.gov/pipermail/petsc-users/2011-July/009310.html) > > that I need to add --with-clib-autodetect=0 --with-cxxlib-autodetect=0 > > --with-fortranlib-autodetect=0 for that error. But it failed later with > > the following error: "C++ error! mpi.h could not be located at: []". > > configure_with_additional_flag.log is the result of the configuration > > script. > > > > I am using the gnu compiler and it is of version 4.1.40, if that is > > relevant. Please let me know if you need other configuration information. > > > > Thanks, > > George > > > > > > -- > > George Pau > > Earth Sciences Division > > Lawrence Berkeley National Laboratory > > One Cyclotron, MS 74-120 > > Berkeley, CA 94720 > > > > (510) 486-7196 > > gpau at lbl.gov > > http://esd.lbl.gov/about/staff/georgepau/ > > > > > > From hzhang at mcs.anl.gov Tue Jul 2 11:46:44 2013 From: hzhang at mcs.anl.gov (Hong Zhang) Date: Tue, 2 Jul 2013 11:46:44 -0500 Subject: [petsc-users] question about MatMatMult In-Reply-To: <51D2F713.4060008@purdue.edu> References: <51D2F713.4060008@purdue.edu> Message-ID: Michael : > Dear Petsc developers, > I'm gradually moving from the version 3.2 to the version 3.4. > I had to skip the version 3.3 because of the bug in MatMatMult that has > been reported to you. > > What I see is that the MatMatMult works well but there is a difference: > in the version 3.2 if a sparse matrix was multiplied by a dense matrix the > result was dense. > In the version 3.4 the result is sparse. > A sparse matrix multiplied by a dense matrix results in a dense matrix for all petsc versions. What makes you conclude "In the version 3.4 the result is sparse"? Hong > > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From gpau at lbl.gov Tue Jul 2 11:54:31 2013 From: gpau at lbl.gov (George Pau) Date: Tue, 2 Jul 2013 09:54:31 -0700 Subject: [petsc-users] configuration on hopper at nersc In-Reply-To: References: Message-ID: Satish, Thanks, -with-cxx=0 works. 
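For the record, the working invocation is just the earlier set of options
with that one addition (a sketch; the --prefix path is abbreviated here):

    ./configure --with-debugging=0 \
        --with-cc=/opt/cray/xt-asyncpe/5.19/bin/cc \
        --with-fc=/opt/cray/xt-asyncpe/5.19/bin/ftn \
        --with-mpiexec=aprun --download-metis=1 --download-parmetis=1 \
        --with-clib-autodetect=0 --with-cxxlib-autodetect=0 \
        --with-fortranlib-autodetect=0 --prefix=... \
        --with-cxx=0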
George On Tue, Jul 2, 2013 at 9:18 AM, Satish Balay wrote: > Configure Options: --configModules=PETSc.Configure > --optionsModule=PETSc.compilerOptions --with-debugging=0 > --with-cc=/opt/cray/xt-asyncpe/5.19/bin/cc > --with-fc=/opt/cray/xt-asyncpe/5.19/bin/ftn --with-mpiexec=aprun > --download-metis=1 --download-parmetis=1 --with-clib-autodetect=0 > --with-cxxlib-autodetect=0 --with-fortranlib-autodetect=0 > --prefix=/global/homes/g/gpau/tough_codes/esd-tough3/build/Linux-x86_64-MPI-eco2n-Release/toughlib > > > Can you use the additional option --with-cxx=0 ? > [or perhaps --with-cxx=/opt/cray/xt-asyncpe/5.19/bin/CC] > > Satish > > On Mon, 1 Jul 2013, George Pau wrote: > > > As usual, I forgot the attachments. George > > > > > > On Mon, Jul 1, 2013 at 5:05 PM, George Pau wrote: > > > > > I was having trouble configuring petsc 3.4.1 on hopper. The same > > > configuration that use for 3.3.6 failed due to "Mismatched single > quotes in > > > C library string". Attached (configure_without_additional_flag.log) > is the > > > result of the configuration script. > > > > > > I read in a previous thread ( > > > http://lists.mcs.anl.gov/pipermail/petsc-users/2011-July/009310.html) > > > that I need to add --with-clib-autodetect=0 --with-cxxlib-autodetect=0 > > > --with-fortranlib-autodetect=0 for that error. But it failed later > with > > > the following error: "C++ error! mpi.h could not be located at: []". > > > configure_with_additional_flag.log is the result of the configuration > > > script. > > > > > > I am using the gnu compiler and it is of version 4.1.40, if that is > > > relevant. Please let me know if you need other configuration > information. > > > > > > Thanks, > > > George > > > > > > > > > -- > > > George Pau > > > Earth Sciences Division > > > Lawrence Berkeley National Laboratory > > > One Cyclotron, MS 74-120 > > > Berkeley, CA 94720 > > > > > > (510) 486-7196 > > > gpau at lbl.gov > > > http://esd.lbl.gov/about/staff/georgepau/ > > > > > > > > > > > > > -- George Pau Earth Sciences Division Lawrence Berkeley National Laboratory One Cyclotron, MS 74-120 Berkeley, CA 94720 (510) 486-7196 gpau at lbl.gov http://esd.lbl.gov/about/staff/georgepau/ -------------- next part -------------- An HTML attachment was scrubbed... URL: From mpovolot at purdue.edu Tue Jul 2 12:19:08 2013 From: mpovolot at purdue.edu (Michael Povolotskyi) Date: Tue, 02 Jul 2013 13:19:08 -0400 Subject: [petsc-users] question about MatMatMult In-Reply-To: References: <51D2F713.4060008@purdue.edu> Message-ID: <51D30B8C.2050002@purdue.edu> On 07/02/2013 12:46 PM, Hong Zhang wrote: > Michael : > > Dear Petsc developers, > I'm gradually moving from the version 3.2 to the version 3.4. > I had to skip the version 3.3 because of the bug in MatMatMult > that has been reported to you. > > What I see is that the MatMatMult works well but there is a > difference: > in the version 3.2 if a sparse matrix was multiplied by a dense > matrix the result was dense. > In the version 3.4 the result is sparse. > > > A sparse matrix multiplied by a dense matrix results in a dense > matrix for all petsc versions. > What makes you conclude "In the version 3.4 the result is sparse"? > > Hong > > > > I use MatMatMult with MAT_INITIAL_MATRIX Then I call MatGetType(A, &type) for the product matrix Then I have an if statement: if (string(type) == "seqdense" || string(type) == "mpidense") Michael. -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From knepley at gmail.com Tue Jul 2 12:41:35 2013 From: knepley at gmail.com (Matthew Knepley) Date: Tue, 2 Jul 2013 12:41:35 -0500 Subject: [petsc-users] question about MatMatMult In-Reply-To: <51D30B8C.2050002@purdue.edu> References: <51D2F713.4060008@purdue.edu> <51D30B8C.2050002@purdue.edu> Message-ID: On Tue, Jul 2, 2013 at 12:19 PM, Michael Povolotskyi wrote: > On 07/02/2013 12:46 PM, Hong Zhang wrote: > > Michael : > >> Dear Petsc developers, >> I'm gradually moving from the version 3.2 to the version 3.4. >> I had to skip the version 3.3 because of the bug in MatMatMult that has >> been reported to you. >> >> What I see is that the MatMatMult works well but there is a difference: >> in the version 3.2 if a sparse matrix was multiplied by a dense matrix >> the result was dense. >> In the version 3.4 the result is sparse. >> > > A sparse matrix multiplied by a dense matrix results in a dense matrix > for all petsc versions. > What makes you conclude "In the version 3.4 the result is sparse"? > > Hong > >> >> >> > I use MatMatMult with MAT_INITIAL_MATRIX > Then I call MatGetType(A, &type) for the product matrix > Then I have an if statement: > if (string(type) == "seqdense" || string(type) == "mpidense") > What is the type? M->hdr.type_name Matt > > Michael. > -- What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. -- Norbert Wiener -------------- next part -------------- An HTML attachment was scrubbed... URL: From jedbrown at mcs.anl.gov Tue Jul 2 13:19:09 2013 From: jedbrown at mcs.anl.gov (Jed Brown) Date: Tue, 02 Jul 2013 13:19:09 -0500 Subject: [petsc-users] configuration on hopper at nersc In-Reply-To: References: Message-ID: <871u7g4yg2.fsf@mcs.anl.gov> George Pau writes: > I am using the gnu compiler and it is of version 4.1.40, if that is > relevant. No, you are not. Executing: /opt/cray/xt-asyncpe/5.19/bin/cc --version sh: gcc (GCC) 4.7.2 20120920 (Cray Inc.) Copyright (C) 2012 Free Software Foundation, Inc. This is free software; see the source for copying conditions. There is NO warranty; not even for MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. In the past, --with-batch has been needed on Hopper. Here's a configure script that you could use: #!/usr/common/usg/python/2.7.1/bin/python if __name__ == '__main__': import sys import os sys.path.insert(0, os.path.abspath('config')) import configure configure_options = [ '--download-metis', '--download-ml', '--download-mumps', '--download-parmetis', '--download-scalapack', '--download-superlu_dist', '--known-bits-per-byte=8', '--known-level1-dcache-assoc=2', '--known-level1-dcache-linesize=64', '--known-level1-dcache-size=65536', '--known-memcmp-ok=1', '--known-mpi-c-double-complex=1', '--known-mpi-long-double=1', '--known-mpi-shared-libraries', '--known-sizeof-MPI_Comm=4', '--known-sizeof-MPI_Fint=4', '--known-sizeof-char=1', '--known-sizeof-double=8', '--known-sizeof-float=4', '--known-sizeof-int=4', '--known-sizeof-long-long=8', '--known-sizeof-long=8', '--known-sizeof-short=2', '--known-sizeof-size_t=8', '--known-sizeof-void-p=8', '--with-batch', '--with-debugging=0', 'CC=/opt/cray/xt-asyncpe/5.19/bin/cc', 'COPTFLAGS=-O -g', 'CXX=/opt/cray/xt-asyncpe/5.19/bin/CC', 'FC=/opt/cray/xt-asyncpe/5.19/bin/ftn', 'PETSC_ARCH=gnu-optg', ] configure.petsc_configure(configure_options) -------------- next part -------------- A non-text attachment was scrubbed... 
Name: not available
Type: application/pgp-signature
Size: 835 bytes
Desc: not available
URL: 

From Franck.Houssen at cea.fr Wed Jul 3 01:18:47 2013
From: Franck.Houssen at cea.fr (HOUSSEN Franck)
Date: Wed, 3 Jul 2013 06:18:47 +0000
Subject: [petsc-users] RE : PETSc : how to get back d_nnz and o_nnz ?
In-Reply-To: 
References: , <877gh93w6j.fsf@mcs.anl.gov>,
Message-ID: 

Is this working also for MPI matrices? (The code creates a Seq matrix with
1 proc, and an MPI matrix with n procs.) It seems that MatGetRow doesn't
work either. So I still have to try to use MatGetValues.

FH

________________________________________
From: Jed Brown [five9a2 at gmail.com] on behalf of Jed Brown [jedbrown at mcs.anl.gov]
Sent: Tuesday, July 2, 2013 15:53
To: HOUSSEN Franck; petsc-users at mcs.anl.gov
Subject: Re: [petsc-users] PETSc : how to get back d_nnz and o_nnz ?

HOUSSEN Franck writes:

> Hello,
>
> From a Mat object (already preallocated with
> MatMPIAIJSetPreallocation), can I get back the d_nnz and o_nnz arrays?

MatMPIAIJGetSeqAIJ(), then MatGetRowIJ(). This returns the offsets
array, so nnz[i] = ai[i+1] - ai[i];

Why do you want this low-level information?

From jedbrown at mcs.anl.gov Wed Jul 3 06:13:16 2013
From: jedbrown at mcs.anl.gov (Jed Brown)
Date: Wed, 03 Jul 2013 06:13:16 -0500
Subject: [petsc-users] RE : PETSc : how to get back d_nnz and o_nnz ?
In-Reply-To: 
References: <877gh93w6j.fsf@mcs.anl.gov>
Message-ID: <87hagb28xf.fsf@mcs.anl.gov>

HOUSSEN Franck writes:

> Is this working also for MPI matrices ?

The functions I suggested are for MPI matrices, as should be clear since
they start with MatMPI...

> (the code create Seq matrice with 1 proc, and, MPI matrice with n
> procs) Seems that MatGetRow don't work neither.

"doesn't work" is not helpful. I asked you to resend your _other_
message because it is the one that states your actual problem and I
think it indicates a misunderstanding about what will overflow an
existing preallocation.

-------------- next part --------------
A non-text attachment was scrubbed...
Name: not available
Type: application/pgp-signature
Size: 835 bytes
Desc: not available
URL: 

From stali at geology.wisc.edu Wed Jul 3 09:57:48 2013
From: stali at geology.wisc.edu (Tabrez Ali)
Date: Wed, 03 Jul 2013 09:57:48 -0500
Subject: [petsc-users] Example for GAMG
Message-ID: <51D43BEC.8040604@geology.wisc.edu>

Hello

Is there a PETSc example that really demonstrates the advantage of using a
multigrid solver (in that the number of iterations is relatively constant
as the problem size increases)?

The command line options used to run it would also be helpful.

Thanks in advance.

Tabrez

From knepley at gmail.com Wed Jul 3 10:03:44 2013
From: knepley at gmail.com (Matthew Knepley)
Date: Wed, 3 Jul 2013 10:03:44 -0500
Subject: [petsc-users] Example for GAMG
In-Reply-To: <51D43BEC.8040604@geology.wisc.edu>
References: <51D43BEC.8040604@geology.wisc.edu>
Message-ID: 

On Wed, Jul 3, 2013 at 9:57 AM, Tabrez Ali wrote:

> Hello
>
> Is there a PETSc example that really demonstrates the advantage of using a
> multigrid solver (in that that the number of iterations are relatively
> constant as the problem size increases)?
>
> The command line options used to run it would also be helpful.
>

You can run both GMG and GAMG on any structured example to compare.
For example, ./ex5 -da_grid_x 21 -da_grid_y 21 -da_refine 6 -ksp_rtol 1.0e-9 -pc_type mg -pc_mg_levels 4 -snes_monitor -snes_view or ./ex5 -da_grid_x 21 -da_grid_y 21 -da_refine 6 -ksp_rtol 1.0e-9 -pc_type gamg -snes_monitor -snes_view and just adjust the da_refine to see the constant number of iterates. Same thing in 3D ./ex48 -M 5 -N 5 -da_refine 5 -ksp_rtol 1.0e-9 -thi_mat_type baij -pc_type mg -pc_mg_levels 4 -snes_monitor -snes_view -log_summary I use these examples in my latest tutorial, which I will put up today. Matt > Thanks in advance. > > Tabrez > -- What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. -- Norbert Wiener -------------- next part -------------- An HTML attachment was scrubbed... URL: From jedbrown at mcs.anl.gov Wed Jul 3 10:11:03 2013 From: jedbrown at mcs.anl.gov (Jed Brown) Date: Wed, 03 Jul 2013 10:11:03 -0500 Subject: [petsc-users] Example for GAMG In-Reply-To: <51D43BEC.8040604@geology.wisc.edu> References: <51D43BEC.8040604@geology.wisc.edu> Message-ID: <87y59nznjs.fsf@mcs.anl.gov> Tabrez Ali writes: > Hello > > Is there a PETSc example that really demonstrates the advantage of using > a multigrid solver (in that that the number of iterations are relatively > constant as the problem size increases)? src/ksp/ksp/examples/tutorials/ex56.c is a 3D elasticity example that we have used to test performance, but you should be able to see the behavior on pretty much any elliptic problem. You can see command-line options with 'make -n runex56'. There are a number of other GAMG tests. This is src/snes/examples/tutorials/ex5.c: $ ./ex5 -da_refine 2 -pc_type gamg -ksp_converged_reason Linear solve converged due to CONVERGED_RTOL iterations 5 Linear solve converged due to CONVERGED_RTOL iterations 5 Linear solve converged due to CONVERGED_RTOL iterations 5 $ ./ex5 -da_refine 3 -pc_type gamg -ksp_converged_reason Linear solve converged due to CONVERGED_RTOL iterations 5 Linear solve converged due to CONVERGED_RTOL iterations 5 Linear solve converged due to CONVERGED_RTOL iterations 5 $ ./ex5 -da_refine 4 -pc_type gamg -ksp_converged_reason Linear solve converged due to CONVERGED_RTOL iterations 6 Linear solve converged due to CONVERGED_RTOL iterations 5 Linear solve converged due to CONVERGED_RTOL iterations 5 Linear solve converged due to CONVERGED_RTOL iterations 5 $ ./ex5 -da_refine 5 -pc_type gamg -ksp_converged_reason Linear solve converged due to CONVERGED_RTOL iterations 7 Linear solve converged due to CONVERGED_RTOL iterations 6 Linear solve converged due to CONVERGED_RTOL iterations 6 Linear solve converged due to CONVERGED_RTOL iterations 7 $ ./ex5 -da_refine 6 -pc_type gamg -ksp_converged_reason Linear solve converged due to CONVERGED_RTOL iterations 8 Linear solve converged due to CONVERGED_RTOL iterations 7 Linear solve converged due to CONVERGED_RTOL iterations 7 Linear solve converged due to CONVERGED_RTOL iterations 7 Running with -pc_type hypre or -pc_type ml should produce something similar (likely better because the defaults are better for 2D scalar), but -pc_type ilu and similar will approximately double the number of iterations with each refinement. Note that smoothed aggregation is known to be weakly sub-optimal, so degrading from 5 to 7 iterations across this range of problem sizes is not surprising. -------------- next part -------------- A non-text attachment was scrubbed... 
Name: not available Type: application/pgp-signature Size: 835 bytes Desc: not available URL: From jedbrown at mcs.anl.gov Wed Jul 3 10:15:07 2013 From: jedbrown at mcs.anl.gov (Jed Brown) Date: Wed, 03 Jul 2013 10:15:07 -0500 Subject: [petsc-users] Example for GAMG In-Reply-To: References: <51D43BEC.8040604@geology.wisc.edu> Message-ID: <87vc4rznd0.fsf@mcs.anl.gov> Matthew Knepley writes: > and just adjust the da_refine to see the constant number of iterates. Same > thing in 3D > > ./ex48 -M 5 -N 5 -da_refine 5 -ksp_rtol 1.0e-9 -thi_mat_type baij > -pc_type mg -pc_mg_levels 4 -snes_monitor -snes_view -log_summary This needs -thi_mat_type aij to be used with GAMG. Also, this problem is difficult to globalize without grid sequencing (use -snes_grid_sequence instead of -da_refine). And it's not a great test for GAMG because it (currently) does not set the near-null space so rotations are not used by smoothed aggregation. -------------- next part -------------- A non-text attachment was scrubbed... Name: not available Type: application/pgp-signature Size: 835 bytes Desc: not available URL: From mmnasr at gmail.com Wed Jul 3 12:16:47 2013 From: mmnasr at gmail.com (Mohamad M. Nasr-Azadani) Date: Wed, 3 Jul 2013 10:16:47 -0700 Subject: [petsc-users] Performance RH5 vs RH6 Message-ID: Hi guys, Recently, a supercomputer I had been using for the past year, upgraded their OS from RH5 to RH6. After recompiling PETSc along with Hypre with various compilers (gcc and intel) and mpi packages (openmpi and mvapich2), the performance I observe on RH6 is significantly worse, e.g. my code is close to 30-40% slower. My code is a finite-difference Navier-Stokes solver and uses BoomerAMG to precondition the pressure Poisson equation. I have not done a thorough profiling yet, but I was wondering if you have encountered a similar experience before. Thanks, Mohamad -------------- next part -------------- An HTML attachment was scrubbed... URL: From knepley at gmail.com Wed Jul 3 12:34:42 2013 From: knepley at gmail.com (Matthew Knepley) Date: Wed, 3 Jul 2013 12:34:42 -0500 Subject: [petsc-users] Performance RH5 vs RH6 In-Reply-To: References: Message-ID: On Wed, Jul 3, 2013 at 12:16 PM, Mohamad M. Nasr-Azadani wrote: > Hi guys, > > Recently, a supercomputer I had been using for the past year, upgraded > their OS from RH5 to RH6. After recompiling PETSc along with Hypre with > various compilers (gcc and intel) and mpi packages (openmpi and mvapich2), > the performance I observe on RH6 is significantly worse, e.g. my code is > close to 30-40% slower. > > My code is a finite-difference Navier-Stokes solver and uses BoomerAMG to > precondition the pressure Poisson equation. > I have not done a thorough profiling yet, but I was wondering if you have > encountered a similar experience before. > Nope. The right thing to do is look at -log_summary output. Matt > Thanks, > Mohamad > > > > > -- What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. -- Norbert Wiener -------------- next part -------------- An HTML attachment was scrubbed... URL: From stali at geology.wisc.edu Wed Jul 3 13:17:20 2013 From: stali at geology.wisc.edu (Tabrez Ali) Date: Wed, 03 Jul 2013 13:17:20 -0500 Subject: [petsc-users] Performance RH5 vs RH6 In-Reply-To: References: Message-ID: <51D46AB0.1030201@geology.wisc.edu> Did you by chance turn debugging on while configuring PETSc? Tabrez On 07/03/2013 12:16 PM, Mohamad M. 
Nasr-Azadani wrote:

> Hi guys,
>
> Recently, a supercomputer I had been using for the past year, upgraded
> their OS from RH5 to RH6. After recompiling PETSc along with Hypre
> with various compilers (gcc and intel) and mpi packages (openmpi and
> mvapich2), the performance I observe on RH6 is significantly worse,
> e.g. my code is close to 30-40% slower.
>
> My code is a finite-difference Navier-Stokes solver and uses BoomerAMG
> to precondition the pressure Poisson equation.
> I have not done a thorough profiling yet, but I was wondering if you
> have encountered a similar experience before.
>
> Thanks,
> Mohamad
>

From mmnasr at gmail.com Wed Jul 3 13:24:37 2013
From: mmnasr at gmail.com (Mohamad M. Nasr-Azadani)
Date: Wed, 3 Jul 2013 11:24:37 -0700
Subject: [petsc-users] Fwd: Performance RH5 vs RH6
In-Reply-To: 
References: <51D46AB0.1030201@geology.wisc.edu>
Message-ID: 

---------- Forwarded message ----------
From: Mohamad M. Nasr-Azadani
Date: Wed, Jul 3, 2013 at 11:24 AM
Subject: Re: [petsc-users] Performance RH5 vs RH6
To: Tabrez Ali

@Matt: "Nope. The right thing to do is look at -log_summary output."
Thanks. I will try this later.

@Boyce: "FWIW, I recently upgraded an OpenMPI installation from 1.4.x to
1.6.x and observed very significant slow-downs. You might want to try
reverting your MPI library."
That sounds like a potential reason, as I used to use openmpi 1.4 and now
had to recompile PETSc with 1.6. I will definitely give it a try to see if
I see any difference. Thank you!

@Tabrez: "Did you by chance turn debugging on while configuring PETSc?"
No.

Thanks,
Mohamad

On Wed, Jul 3, 2013 at 11:17 AM, Tabrez Ali wrote:

> Did you by chance turn debugging on while configuring PETSc?
>
> Tabrez
>
> On 07/03/2013 12:16 PM, Mohamad M. Nasr-Azadani wrote:
>
>> Hi guys,
>>
>> Recently, a supercomputer I had been using for the past year, upgraded
>> their OS from RH5 to RH6. After recompiling PETSc along with Hypre with
>> various compilers (gcc and intel) and mpi packages (openmpi and mvapich2),
>> the performance I observe on RH6 is significantly worse, e.g. my code is
>> close to 30-40% slower.
>>
>> My code is a finite-difference Navier-Stokes solver and uses BoomerAMG to
>> precondition the pressure Poisson equation.
>> I have not done a thorough profiling yet, but I was wondering if you have
>> encountered a similar experience before.
>>
>> Thanks,
>> Mohamad
>>
>

-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From Franck.Houssen at cea.fr Wed Jul 3 15:38:33 2013
From: Franck.Houssen at cea.fr (HOUSSEN Franck)
Date: Wed, 3 Jul 2013 20:38:33 +0000
Subject: [petsc-users] RE : RE : PETSc : how to get back d_nnz and o_nnz ?
In-Reply-To: <87hagb28xf.fsf@mcs.anl.gov>
References: <877gh93w6j.fsf@mcs.anl.gov>, <87hagb28xf.fsf@mcs.anl.gov>
Message-ID: 

I iterate on solving an AX=B system. I cannot know in advance the maximum
number of nonzero values in A (at least, I am not sure how to do this). So
I wanted to check whether the existing A matrix is OK for the next solve:
if so, I keep it as it is; if not, I re-preallocate (increasing the number
of nonzero values in a suitable way).

"doesn't work" means: even using the CHKERRQ macro, I do not get any stack
in the error traces, just a message saying something like "a SIGSEGV error
occurred" (occurring somewhere, but with no info about where; my install of
petsc is a debug one).
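For reference, this is the shape of the MatMPIxxx route as I understood it
from the earlier answer (a sketch only, I did not get it working; the exact
signatures should be checked against the man pages, and error handling is
abbreviated):

    Mat            Ad, Ao;      /* local diagonal and off-diagonal blocks */
    const PetscInt *garray;     /* global column indices of Ao */
    const PetscInt *ai, *aj;
    PetscInt       nloc, r, *d_nnz;
    PetscBool      done;

    ierr = MatMPIAIJGetSeqAIJ(A, &Ad, &Ao, &garray);CHKERRQ(ierr);
    ierr = MatGetRowIJ(Ad, 0, PETSC_FALSE, PETSC_FALSE, &nloc, &ai, &aj, &done);CHKERRQ(ierr);
    ierr = PetscMalloc(nloc*sizeof(PetscInt), &d_nnz);CHKERRQ(ierr);
    for (r = 0; r < nloc; ++r) d_nnz[r] = ai[r+1] - ai[r];  /* nnz[i] = ai[i+1] - ai[i] */
    ierr = MatRestoreRowIJ(Ad, 0, PETSC_FALSE, PETSC_FALSE, &nloc, &ai, &aj, &done);CHKERRQ(ierr);
    /* the same with Ao would give o_nnz */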
When I commented out MatGetRow (which I tried to use instead of MatMPIxxx,
to avoid handling MPI matrices on one hand and Seq matrices on the other;
the code handles both MPI and Seq matrices depending on the number of MPI
procs), the code does something "wrong" (or say, not what I want) but it
does not crash: so I know the crash is due to the MatGetRow call.

FH

________________________________________
From: Jed Brown [five9a2 at gmail.com] on behalf of Jed Brown [jedbrown at mcs.anl.gov]
Sent: Wednesday, July 3, 2013 13:13
To: HOUSSEN Franck; petsc-users at mcs.anl.gov
Subject: Re: RE : [petsc-users] PETSc : how to get back d_nnz and o_nnz ?

HOUSSEN Franck writes:

> Is this working also for MPI matrices ?

The functions I suggested are for MPI matrices, as should be clear since
they start with MatMPI...

> (the code create Seq matrice with 1 proc, and, MPI matrice with n
> procs) Seems that MatGetRow don't work neither.

"doesn't work" is not helpful. I asked you to resend your _other_
message because it is the one that states your actual problem and I
think it indicates a misunderstanding about what will overflow an
existing preallocation.

From mpovolot at purdue.edu Wed Jul 3 15:43:01 2013
From: mpovolot at purdue.edu (Michael Povolotskyi)
Date: Wed, 03 Jul 2013 16:43:01 -0400
Subject: [petsc-users] question about MatMatMult
In-Reply-To: 
References: <51D2F713.4060008@purdue.edu> <51D30B8C.2050002@purdue.edu>
Message-ID: <51D48CD5.9010307@purdue.edu>

On 07/02/2013 01:41 PM, Matthew Knepley wrote:
> On Tue, Jul 2, 2013 at 12:19 PM, Michael Povolotskyi wrote:
>
>> On 07/02/2013 12:46 PM, Hong Zhang wrote:
>>> Michael :
>>>
>>>> Dear Petsc developers,
>>>> I'm gradually moving from the version 3.2 to the version 3.4.
>>>> I had to skip the version 3.3 because of the bug in
>>>> MatMatMult that has been reported to you.
>>>>
>>>> What I see is that the MatMatMult works well but there is a
>>>> difference:
>>>> in the version 3.2 if a sparse matrix was multiplied by a
>>>> dense matrix the result was dense.
>>>> In the version 3.4 the result is sparse.
>>>
>>> A sparse matrix multiplied by a dense matrix results in a dense
>>> matrix for all petsc versions.
>>> What makes you conclude "In the version 3.4 the result is sparse"?
>>>
>>> Hong
>>
>> I use MatMatMult with MAT_INITIAL_MATRIX
>> Then I call MatGetType(A, &type) for the product matrix
>> Then I have an if statement:
>> if (string(type) == "seqdense" || string(type) == "mpidense")
>
> What is the type? M->hdr.type_name
>
> Matt

Looks like everything is fine, I could not reproduce the error any more.
Michael.

-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From jedbrown at mcs.anl.gov Wed Jul 3 15:47:14 2013
From: jedbrown at mcs.anl.gov (Jed Brown)
Date: Wed, 03 Jul 2013 15:47:14 -0500
Subject: [petsc-users] RE : RE : PETSc : how to get back d_nnz and o_nnz ?
In-Reply-To: 
References: <877gh93w6j.fsf@mcs.anl.gov> <87hagb28xf.fsf@mcs.anl.gov>
Message-ID: <87r4ffz7zh.fsf@mcs.anl.gov>

HOUSSEN Franck writes:

> I iterate on solving a AX=B system. I can not know in advance the
> maximum non zero values in A (at least, I am not sure to know how to
> do this). So I wanted to try to check if the existing A matrix is OK
> for the new solve to come : if so I keep it as it is, if not so I
> wanted to re-preallocate (increasing the number of non zero value in a
> suitable way).

The problem is that you need to know whether the entries changed, not
just the total number of entries.
That is, once you insert values in one place, those column indices are permanent until your re-allocate. If your nonzero pattern is changing, then you can call MatXAIJSetPreallocation (or MatSeqAIJSetPreallocation and MatMPIAIJSetPreallocation) every time you assemble. Re-allocation will not be a performance issue, but it requires you to call the function you have that figures out how many nonzeros per row (it sounds like you are doing this part anyway). > "doesn't work" means, even using the CHCKERR marco, I do not get any > stack in the error traces but just a message saying something like "a > sig sev error occured" (occuring somewhere but no info about where - > my install of petsc is a debug one). Does your code call PetscPushSignalHandler? > When I commented MatGetRow (that I tried to use instead of MatMPIxxx > to avoid handling on one hand MPI matrices and on the other hand Seq > matrices - the code handles both MPI or Seq matrices depending on the > number of MPI procs), the code does something "wrong" (or say no what > I want) but it does not crash : so I know the crash is du to the > MatGetRow call. You can run in a debugger to see where the crash is. If you called MatMPIAIJGetSeqAIJ on a SeqAIJ matrix, you would have gotten nonsense back. But your approach of comparing the number of nonzeros is not correct unless the nonzero locations are guaranteed to be nested. -------------- next part -------------- A non-text attachment was scrubbed... Name: not available Type: application/pgp-signature Size: 835 bytes Desc: not available URL: From bisheshkh at gmail.com Thu Jul 4 07:32:15 2013 From: bisheshkh at gmail.com (Bishesh Khanal) Date: Thu, 4 Jul 2013 14:32:15 +0200 Subject: [petsc-users] ERROR: Argument out of range!; Local index XX too large XX (max) at 0! with MatSetValuesStencil Message-ID: Hi all, I'm trying to use DMCreateMatrix and MatStencil to fill up a matrix that results from a finite difference discretization of a PDE (objective is to solve the resulting linear system in the form of Ax=b). However there are some errors and I'm not sure about the issue! A short demo: A 2D mXn grid, with two variables at each node (dof=2), so the resulting A would be 2mn X 2mn. Let's say the two variables are vx and vy, and the two associated equations discretized are x-eq and y-eq. Here is the relevant part of the code: (mat1 variable for the A matrix): PetscInt m = 10, n=10; ierr = DMDACreate2d(PETSC_COMM_WORLD, DMDA_BOUNDARY_NONE, DMDA_BOUNDARY_NONE,DMDA_STENCIL_BOX,m,n, PETSC_DECIDE,PETSC_DECIDE,2,1,NULL,NULL,&da);CHKERRQ(ierr); Mat mat1; ierr = DMCreateMatrix(da,MATMPIAIJ,&mat1); CHKERRQ(ierr); MatStencil row, col[4]; //let's test 4 non-zeros in one row. PetscScalar val[4]; PetscScalar coeff = 2.; //just a constant coefficient for testing. 
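    /* DMDAGetLocalInfo below fills, for this process's patch, the corner
       and width in each direction (info.xs/info.xm in x, info.ys/info.ym
       in y) and the global grid sizes (info.mx, info.my). */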
PetscInt i,j; DMDALocalInfo info; ierr = DMDAGetLocalInfo(da,&info); CHKERRQ(ierr); //Now fill up the matrix: for(i = info.ys; i < info.ys+info.ym; ++i){ for(j = info.xs; j < info.xs+info.xm; ++j){ row.i = i; row.j = j; //one node at a time if (j == 0 || j == info.mx-1){ //left and right borders //vx(i,j) = 0; row.c = 0; col[0].c = 0; val[0] = 1; col[0].i = i; col[0].j = j; ierr = MatSetValuesStencil(mat1,1,&row,1,col,val,INSERT_VALUES);CHKERRQ(ierr); //vy: //vy(i,j) - c*vy(i,j+-1) = 0; row.c = 1; col[0].c = 1; val[1] = coeff; col[1].c = 1; col[1].i = i; if(j == 0) //vy(i,j) - c*vy(i,j+1) = 0; col[1].j = j+1; else //vy(i,j) - c*vy(i,j-1) = 0; col[1].j = j-1; ierr = MatSetValuesStencil(mat1,1,&row,2,col,val,INSERT_VALUES);CHKERRQ(ierr); } else if (i == 0 || i == info.my-1){ //top and bottom borders //vx: vx(i,j) - c* vx(i+-1,j) = 0; row.c = 0; col[0].c = 0; val[0] = 1; col[0].i = i; col[0].j = j; col[1].c = 0; val[1] = coeff; if (i == 0) //vx(i,j) - c*vx(i+1,j) = 0; col[1].i = i+1; else //vx(i,j) - c*vx(i-1,j) = 0; col[1].i = i-1; col[1].j = j; ierr = MatSetValuesStencil(mat1,1,&row,2,col,val,INSERT_VALUES);CHKERRQ(ierr); //vy(i,j) = 0; row.c = 1; col[0].c = 1; ierr = MatSetValuesStencil(mat1,1,&row,1,col,val,INSERT_VALUES);CHKERRQ(ierr); } else { //Interior points: row.c = 0;//x-eq col[0].c = 0; val[0] = 2*coeff; col[0].i = i; col[0].j = j+1; col[1].c = 0; val[1] = -val[0] - coeff; col[1].i = i; col[1].j = j; col[2].c = 1; val[2] = 4*coeff; col[2].i = i+1; col[2].j = j; col[3].c = 1; val[3] = 4*coeff; col[3].i = i; col[3].j = j; ierr = MatSetValuesStencil(mat1,1,&row,4,col,val,INSERT_VALUES);CHKERRQ(ierr); row.c = 1; //y-eq col[0].c = 1; val[0] = 2*coeff; col[0].i = i; col[0].j = j+1; col[1].c = 1; val[1] = -val[0] - coeff; col[1].i = i; col[1].j = j; col[2].c = 0; val[2] = 4*coeff; col[2].i = i+1; col[2].j = j; col[3].c = 0; val[3] = 4*coeff; col[3].i = i; col[3].j = j; ierr = MatSetValuesStencil(mat1,1,&row,4,col,val,INSERT_VALUES);CHKERRQ(ierr); } } } MatAssemblyBegin(mat1,MAT_FINAL_ASSEMBLY); MatAssemblyEnd(mat1,MAT_FINAL_ASSEMBLY); However I get the following error when run with 2 processors. [0]PETSC ERROR: --------------------- Error Message ------------------------------------ [0]PETSC ERROR: Argument out of range! [0]PETSC ERROR: Local index 120 too large 120 (max) at 0! [0]PETSC ERROR: ------------------------------------------------------------------------ [0]PETSC ERROR: Petsc Release Version 3.4.1, Jun, 10, 2013 [0]PETSC ERROR: See docs/changes/index.html for recent updates. [0]PETSC ERROR: See docs/faq.html for hints about trouble shooting. [0]PETSC ERROR: See docs/index.html for manual pages. 
[0]PETSC ERROR: ------------------------------------------------------------------------ [0]PETSC ERROR: examples/FDdmda_test on a arch-linux2-cxx-debug named edwards by bkhanal Thu Jul 4 14:20:11 2013 [0]PETSC ERROR: Libraries linked from /home/bkhanal/Documents/softwares/petsc-3.4.1/arch-linux2-cxx-debug/lib [0]PETSC ERROR: Configure run at Wed Jun 19 11:04:51 2013 [0]PETSC ERROR: Configure options --with-cc=gcc --with-fc=g77 --with-cxx=g++ --download-f-blas-lapack=1 --download-mpich=1 -with-clanguage=cxx --download-hypre=1 [0]PETSC ERROR: ------------------------------------------------------------------------ [0]PETSC ERROR: ISLocalToGlobalMappingApply() line 444 in /home/bkhanal/Documents/softwares/petsc-3.4.1/src/vec/is/utils/isltog.c [0]PETSC ERROR: MatSetValuesLocal() line 1967 in /home/bkhanal/Documents/softwares/petsc-3.4.1/src/mat/interface/matrix.c [0]PETSC ERROR: MatSetValuesStencil() line 1339 in /home/bkhanal/Documents/softwares/petsc-3.4.1/src/mat/interface/matrix.c [0]PETSC ERROR: main() line 75 in "unknowndirectory/"/user/bkhanal/home/works/cmake_tuts/petsc_test/examples/FDdmda_test.cxx application called MPI_Abort(MPI_COMM_WORLD, 63) - process 0 [cli_0]: aborting job: application called MPI_Abort(MPI_COMM_WORLD, 63) - process 0 =================================================================================== = BAD TERMINATION OF ONE OF YOUR APPLICATION PROCESSES = EXIT CODE: 63 = CLEANING UP REMAINING PROCESSES = YOU CAN IGNORE THE BELOW CLEANUP MESSAGES =================================================================================== I do not understand how the index (i,j) are out of range when set to corresponding fields in row and col variables. What could be the possible problem ? And any suggestions on the way to debug this sort of issues ? Thanks, Bishesh -------------- next part -------------- An HTML attachment was scrubbed... URL: From knepley at gmail.com Thu Jul 4 07:59:23 2013 From: knepley at gmail.com (Matthew Knepley) Date: Thu, 4 Jul 2013 07:59:23 -0500 Subject: [petsc-users] ERROR: Argument out of range!; Local index XX too large XX (max) at 0! with MatSetValuesStencil In-Reply-To: References: Message-ID: On Thu, Jul 4, 2013 at 7:32 AM, Bishesh Khanal wrote: > Hi all, > I'm trying to use DMCreateMatrix and MatStencil to fill up a matrix that > results from a finite difference discretization of a PDE (objective is to > solve the resulting linear system in the form of Ax=b). > > However there are some errors and I'm not sure about the issue! > A short demo: > A 2D mXn grid, with two variables at each node (dof=2), > so the resulting A would be 2mn X 2mn. > Let's say the two variables are vx and vy, and the two associated > equations discretized are x-eq and y-eq. > > Here is the relevant part of the code: (mat1 variable for the A matrix): > Just sending the whole code is better. I suspect the logic is wrong for selecting the boundary. Matt > PetscInt m = 10, n=10; > ierr = DMDACreate2d(PETSC_COMM_WORLD, DMDA_BOUNDARY_NONE, > DMDA_BOUNDARY_NONE,DMDA_STENCIL_BOX,m,n, > > PETSC_DECIDE,PETSC_DECIDE,2,1,NULL,NULL,&da);CHKERRQ(ierr); > > Mat mat1; > ierr = DMCreateMatrix(da,MATMPIAIJ,&mat1); CHKERRQ(ierr); > MatStencil row, col[4]; //let's test 4 non-zeros in one row. > PetscScalar val[4]; > PetscScalar coeff = 2.; //just a constant coefficient for testing. 
> PetscInt i,j; > DMDALocalInfo info; > ierr = DMDAGetLocalInfo(da,&info); CHKERRQ(ierr); > //Now fill up the matrix: > for(i = info.ys; i < info.ys+info.ym; ++i){ > for(j = info.xs; j < info.xs+info.xm; ++j){ > row.i = i; row.j = j; //one node at a time > if (j == 0 || j == info.mx-1){ //left and right borders > //vx(i,j) = 0; > row.c = 0; > col[0].c = 0; val[0] = 1; > col[0].i = i; col[0].j = j; > ierr = > MatSetValuesStencil(mat1,1,&row,1,col,val,INSERT_VALUES);CHKERRQ(ierr); > > //vy: //vy(i,j) - c*vy(i,j+-1) = 0; > row.c = 1; > col[0].c = 1; val[1] = coeff; > col[1].c = 1; > col[1].i = i; > if(j == 0) //vy(i,j) - c*vy(i,j+1) = 0; > col[1].j = j+1; > else //vy(i,j) - c*vy(i,j-1) = 0; > col[1].j = j-1; > > ierr = > MatSetValuesStencil(mat1,1,&row,2,col,val,INSERT_VALUES);CHKERRQ(ierr); > } > else if (i == 0 || i == info.my-1){ //top and bottom borders > //vx: vx(i,j) - c* vx(i+-1,j) = 0; > row.c = 0; > col[0].c = 0; val[0] = 1; > col[0].i = i; col[0].j = j; > col[1].c = 0; val[1] = coeff; > if (i == 0) //vx(i,j) - c*vx(i+1,j) = 0; > col[1].i = i+1; > else //vx(i,j) - c*vx(i-1,j) = 0; > col[1].i = i-1; > col[1].j = j; > ierr = > MatSetValuesStencil(mat1,1,&row,2,col,val,INSERT_VALUES);CHKERRQ(ierr); > > //vy(i,j) = 0; > row.c = 1; > col[0].c = 1; > ierr = > MatSetValuesStencil(mat1,1,&row,1,col,val,INSERT_VALUES);CHKERRQ(ierr); > } > else { //Interior points: > row.c = 0;//x-eq > col[0].c = 0; val[0] = 2*coeff; > col[0].i = i; col[0].j = j+1; > col[1].c = 0; val[1] = -val[0] - coeff; > col[1].i = i; col[1].j = j; > col[2].c = 1; val[2] = 4*coeff; > col[2].i = i+1; col[2].j = j; > col[3].c = 1; val[3] = 4*coeff; > col[3].i = i; col[3].j = j; > ierr = > MatSetValuesStencil(mat1,1,&row,4,col,val,INSERT_VALUES);CHKERRQ(ierr); > > row.c = 1; //y-eq > col[0].c = 1; val[0] = 2*coeff; > col[0].i = i; col[0].j = j+1; > col[1].c = 1; val[1] = -val[0] - coeff; > col[1].i = i; col[1].j = j; > col[2].c = 0; val[2] = 4*coeff; > col[2].i = i+1; col[2].j = j; > col[3].c = 0; val[3] = 4*coeff; > col[3].i = i; col[3].j = j; > ierr = > MatSetValuesStencil(mat1,1,&row,4,col,val,INSERT_VALUES);CHKERRQ(ierr); > } > } > } > > MatAssemblyBegin(mat1,MAT_FINAL_ASSEMBLY); > MatAssemblyEnd(mat1,MAT_FINAL_ASSEMBLY); > > However I get the following error when run with 2 processors. > [0]PETSC ERROR: --------------------- Error Message > ------------------------------------ > [0]PETSC ERROR: Argument out of range! > [0]PETSC ERROR: Local index 120 too large 120 (max) at 0! > [0]PETSC ERROR: > ------------------------------------------------------------------------ > [0]PETSC ERROR: Petsc Release Version 3.4.1, Jun, 10, 2013 > [0]PETSC ERROR: See docs/changes/index.html for recent updates. > [0]PETSC ERROR: See docs/faq.html for hints about trouble shooting. > [0]PETSC ERROR: See docs/index.html for manual pages. 
> [0]PETSC ERROR: > ------------------------------------------------------------------------ > [0]PETSC ERROR: examples/FDdmda_test on a arch-linux2-cxx-debug named > edwards by bkhanal Thu Jul 4 14:20:11 2013 > [0]PETSC ERROR: Libraries linked from > /home/bkhanal/Documents/softwares/petsc-3.4.1/arch-linux2-cxx-debug/lib > [0]PETSC ERROR: Configure run at Wed Jun 19 11:04:51 2013 > [0]PETSC ERROR: Configure options --with-cc=gcc --with-fc=g77 > --with-cxx=g++ --download-f-blas-lapack=1 --download-mpich=1 > -with-clanguage=cxx --download-hypre=1 > [0]PETSC ERROR: > ------------------------------------------------------------------------ > [0]PETSC ERROR: ISLocalToGlobalMappingApply() line 444 in > /home/bkhanal/Documents/softwares/petsc-3.4.1/src/vec/is/utils/isltog.c > [0]PETSC ERROR: MatSetValuesLocal() line 1967 in > /home/bkhanal/Documents/softwares/petsc-3.4.1/src/mat/interface/matrix.c > [0]PETSC ERROR: MatSetValuesStencil() line 1339 in > /home/bkhanal/Documents/softwares/petsc-3.4.1/src/mat/interface/matrix.c > [0]PETSC ERROR: main() line 75 in > "unknowndirectory/"/user/bkhanal/home/works/cmake_tuts/petsc_test/examples/FDdmda_test.cxx > application called MPI_Abort(MPI_COMM_WORLD, 63) - process 0 > [cli_0]: aborting job: > application called MPI_Abort(MPI_COMM_WORLD, 63) - process 0 > > > =================================================================================== > = BAD TERMINATION OF ONE OF YOUR APPLICATION PROCESSES > = EXIT CODE: 63 > = CLEANING UP REMAINING PROCESSES > = YOU CAN IGNORE THE BELOW CLEANUP MESSAGES > > =================================================================================== > > I do not understand how the index (i,j) are out of range when set to > corresponding fields in row and col variables. What could be the possible > problem ? And any suggestions on the way to debug this sort of issues ? > > Thanks, > Bishesh > > > > > -- What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. -- Norbert Wiener -------------- next part -------------- An HTML attachment was scrubbed... URL: From bisheshkh at gmail.com Thu Jul 4 08:11:39 2013 From: bisheshkh at gmail.com (Bishesh Khanal) Date: Thu, 4 Jul 2013 15:11:39 +0200 Subject: [petsc-users] ERROR: Argument out of range!; Local index XX too large XX (max) at 0! with MatSetValuesStencil In-Reply-To: References: Message-ID: On Thu, Jul 4, 2013 at 2:59 PM, Matthew Knepley wrote: > On Thu, Jul 4, 2013 at 7:32 AM, Bishesh Khanal wrote: > >> Hi all, >> I'm trying to use DMCreateMatrix and MatStencil to fill up a matrix that >> results from a finite difference discretization of a PDE (objective is to >> solve the resulting linear system in the form of Ax=b). >> >> However there are some errors and I'm not sure about the issue! >> A short demo: >> A 2D mXn grid, with two variables at each node (dof=2), >> so the resulting A would be 2mn X 2mn. >> Let's say the two variables are vx and vy, and the two associated >> equations discretized are x-eq and y-eq. >> >> Here is the relevant part of the code: (mat1 variable for the A matrix): >> > > Just sending the whole code is better. I suspect the logic is wrong for > selecting the boundary. 
> > Matt > Thanks, here is the complete code: #include #include #undef __FUNCT__ #define __FUNCT__ "main" int main(int argc,char **argv) { PetscErrorCode ierr; DM da; PetscInt m=10,n=10; ierr = PetscInitialize(&argc,&argv,(char*)0,help);CHKERRQ(ierr); /* Create a DMDA and an associated vector */ ierr = DMDACreate2d(PETSC_COMM_WORLD, DMDA_BOUNDARY_NONE, DMDA_BOUNDARY_NONE,DMDA_STENCIL_BOX,m,n, PETSC_DECIDE,PETSC_DECIDE,2,1,NULL,NULL,&da);CHKERRQ(ierr); Mat mat1; ierr = DMCreateMatrix(da,MATMPIAIJ,&mat1); CHKERRQ(ierr); MatStencil row, col[4]; //let's test 4 non-zeros in one row. PetscScalar val[4]; PetscScalar coeff = 2.; //just a constant coefficient for testing. PetscInt i,j; DMDALocalInfo info; ierr = DMDAGetLocalInfo(da,&info); CHKERRQ(ierr); for(i = info.ys; i < info.ys+info.ym; ++i){ for(j = info.xs; j < info.xs+info.xm; ++j){ row.i = i; row.j = j; //one node at a time if (j == 0 || j == info.mx-1){ //left and right borders //vx(i,j) = 0; row.c = 0; col[0].c = 0; val[0] = 1; col[0].i = i; col[0].j = j; ierr = MatSetValuesStencil(mat1,1,&row,1,col,val,INSERT_VALUES);CHKERRQ(ierr); //vy: //vy(i,j) - c*vy(i,j+-1) = 0; row.c = 1; col[0].c = 1; val[1] = coeff; col[1].c = 1; col[1].i = i; if(j == 0) //vy(i,j) - c*vy(i,j+1) = 0; col[1].j = j+1; else //vy(i,j) - c*vy(i,j-1) = 0; col[1].j = j-1; ierr = MatSetValuesStencil(mat1,1,&row,2,col,val,INSERT_VALUES);CHKERRQ(ierr); } else if (i == 0 || i == info.my-1){ //top and bottom borders //vx: vx(i,j) - c* vx(i+-1,j) = 0; row.c = 0; col[0].c = 0; val[0] = 1; col[0].i = i; col[0].j = j; col[1].c = 0; val[1] = coeff; if (i == 0) //vx(i,j) - c*vx(i+1,j) = 0; col[1].i = i+1; else //vx(i,j) - c*vx(i-1,j) = 0; col[1].i = i-1; col[1].j = j; ierr = MatSetValuesStencil(mat1,1,&row,2,col,val,INSERT_VALUES);CHKERRQ(ierr); //vy(i,j) = 0; row.c = 1; col[0].c = 1; ierr = MatSetValuesStencil(mat1,1,&row,1,col,val,INSERT_VALUES);CHKERRQ(ierr); } else { //Interior points: row.c = 0;//x-eq col[0].c = 0; val[0] = 2*coeff; col[0].i = i; col[0].j = j+1; col[1].c = 0; val[1] = -val[0] - coeff; col[1].i = i; col[1].j = j; col[2].c = 1; val[2] = 4*coeff; col[2].i = i+1; col[2].j = j; col[3].c = 1; val[3] = 4*coeff; col[3].i = i; col[3].j = j; ierr = MatSetValuesStencil(mat1,1,&row,4,col,val,INSERT_VALUES);CHKERRQ(ierr); row.c = 1; //y-eq col[0].c = 1; val[0] = 2*coeff; col[0].i = i; col[0].j = j+1; col[1].c = 1; val[1] = -val[0] - coeff; col[1].i = i; col[1].j = j; col[2].c = 0; val[2] = 4*coeff; col[2].i = i+1; col[2].j = j; col[3].c = 0; val[3] = 4*coeff; col[3].i = i; col[3].j = j; ierr = MatSetValuesStencil(mat1,1,&row,4,col,val,INSERT_VALUES);CHKERRQ(ierr); } } } MatAssemblyBegin(mat1,MAT_FINAL_ASSEMBLY); MatAssemblyEnd(mat1,MAT_FINAL_ASSEMBLY); ierr = MatSetOption(mat1,MAT_NEW_NONZERO_LOCATION_ERR,PETSC_TRUE);CHKERRQ(ierr); ierr = PetscObjectSetName((PetscObject)mat1,"mat1");CHKERRQ(ierr); /* clean up and exit */ ierr = DMDestroy(&da);CHKERRQ(ierr); ierr = MatDestroy(&mat1); ierr = PetscFinalize(); return 0; } > > >> PetscInt m = 10, n=10; >> ierr = DMDACreate2d(PETSC_COMM_WORLD, DMDA_BOUNDARY_NONE, >> DMDA_BOUNDARY_NONE,DMDA_STENCIL_BOX,m,n, >> >> PETSC_DECIDE,PETSC_DECIDE,2,1,NULL,NULL,&da);CHKERRQ(ierr); >> >> Mat mat1; >> ierr = DMCreateMatrix(da,MATMPIAIJ,&mat1); CHKERRQ(ierr); >> MatStencil row, col[4]; //let's test 4 non-zeros in one row. >> PetscScalar val[4]; >> PetscScalar coeff = 2.; //just a constant coefficient for testing. 
>> PetscInt i,j; >> DMDALocalInfo info; >> ierr = DMDAGetLocalInfo(da,&info); CHKERRQ(ierr); >> //Now fill up the matrix: >> for(i = info.ys; i < info.ys+info.ym; ++i){ >> for(j = info.xs; j < info.xs+info.xm; ++j){ >> row.i = i; row.j = j; //one node at a time >> if (j == 0 || j == info.mx-1){ //left and right borders >> //vx(i,j) = 0; >> row.c = 0; >> col[0].c = 0; val[0] = 1; >> col[0].i = i; col[0].j = j; >> ierr = >> MatSetValuesStencil(mat1,1,&row,1,col,val,INSERT_VALUES);CHKERRQ(ierr); >> >> //vy: //vy(i,j) - c*vy(i,j+-1) = 0; >> row.c = 1; >> col[0].c = 1; val[1] = coeff; >> col[1].c = 1; >> col[1].i = i; >> if(j == 0) //vy(i,j) - c*vy(i,j+1) = 0; >> col[1].j = j+1; >> else //vy(i,j) - c*vy(i,j-1) = 0; >> col[1].j = j-1; >> >> ierr = >> MatSetValuesStencil(mat1,1,&row,2,col,val,INSERT_VALUES);CHKERRQ(ierr); >> } >> else if (i == 0 || i == info.my-1){ //top and bottom borders >> //vx: vx(i,j) - c* vx(i+-1,j) = 0; >> row.c = 0; >> col[0].c = 0; val[0] = 1; >> col[0].i = i; col[0].j = j; >> col[1].c = 0; val[1] = coeff; >> if (i == 0) //vx(i,j) - c*vx(i+1,j) = 0; >> col[1].i = i+1; >> else //vx(i,j) - c*vx(i-1,j) = 0; >> col[1].i = i-1; >> col[1].j = j; >> ierr = >> MatSetValuesStencil(mat1,1,&row,2,col,val,INSERT_VALUES);CHKERRQ(ierr); >> >> //vy(i,j) = 0; >> row.c = 1; >> col[0].c = 1; >> ierr = >> MatSetValuesStencil(mat1,1,&row,1,col,val,INSERT_VALUES);CHKERRQ(ierr); >> } >> else { //Interior points: >> row.c = 0;//x-eq >> col[0].c = 0; val[0] = 2*coeff; >> col[0].i = i; col[0].j = j+1; >> col[1].c = 0; val[1] = -val[0] - coeff; >> col[1].i = i; col[1].j = j; >> col[2].c = 1; val[2] = 4*coeff; >> col[2].i = i+1; col[2].j = j; >> col[3].c = 1; val[3] = 4*coeff; >> col[3].i = i; col[3].j = j; >> ierr = >> MatSetValuesStencil(mat1,1,&row,4,col,val,INSERT_VALUES);CHKERRQ(ierr); >> >> row.c = 1; //y-eq >> col[0].c = 1; val[0] = 2*coeff; >> col[0].i = i; col[0].j = j+1; >> col[1].c = 1; val[1] = -val[0] - coeff; >> col[1].i = i; col[1].j = j; >> col[2].c = 0; val[2] = 4*coeff; >> col[2].i = i+1; col[2].j = j; >> col[3].c = 0; val[3] = 4*coeff; >> col[3].i = i; col[3].j = j; >> ierr = >> MatSetValuesStencil(mat1,1,&row,4,col,val,INSERT_VALUES);CHKERRQ(ierr); >> } >> } >> } >> >> MatAssemblyBegin(mat1,MAT_FINAL_ASSEMBLY); >> MatAssemblyEnd(mat1,MAT_FINAL_ASSEMBLY); >> >> However I get the following error when run with 2 processors. >> [0]PETSC ERROR: --------------------- Error Message >> ------------------------------------ >> [0]PETSC ERROR: Argument out of range! >> [0]PETSC ERROR: Local index 120 too large 120 (max) at 0! >> [0]PETSC ERROR: >> ------------------------------------------------------------------------ >> [0]PETSC ERROR: Petsc Release Version 3.4.1, Jun, 10, 2013 >> [0]PETSC ERROR: See docs/changes/index.html for recent updates. >> [0]PETSC ERROR: See docs/faq.html for hints about trouble shooting. >> [0]PETSC ERROR: See docs/index.html for manual pages. 
>> [0]PETSC ERROR: >> ------------------------------------------------------------------------ >> [0]PETSC ERROR: examples/FDdmda_test on a arch-linux2-cxx-debug named >> edwards by bkhanal Thu Jul 4 14:20:11 2013 >> [0]PETSC ERROR: Libraries linked from >> /home/bkhanal/Documents/softwares/petsc-3.4.1/arch-linux2-cxx-debug/lib >> [0]PETSC ERROR: Configure run at Wed Jun 19 11:04:51 2013 >> [0]PETSC ERROR: Configure options --with-cc=gcc --with-fc=g77 >> --with-cxx=g++ --download-f-blas-lapack=1 --download-mpich=1 >> -with-clanguage=cxx --download-hypre=1 >> [0]PETSC ERROR: >> ------------------------------------------------------------------------ >> [0]PETSC ERROR: ISLocalToGlobalMappingApply() line 444 in >> /home/bkhanal/Documents/softwares/petsc-3.4.1/src/vec/is/utils/isltog.c >> [0]PETSC ERROR: MatSetValuesLocal() line 1967 in >> /home/bkhanal/Documents/softwares/petsc-3.4.1/src/mat/interface/matrix.c >> [0]PETSC ERROR: MatSetValuesStencil() line 1339 in >> /home/bkhanal/Documents/softwares/petsc-3.4.1/src/mat/interface/matrix.c >> [0]PETSC ERROR: main() line 75 in >> "unknowndirectory/"/user/bkhanal/home/works/cmake_tuts/petsc_test/examples/FDdmda_test.cxx >> application called MPI_Abort(MPI_COMM_WORLD, 63) - process 0 >> [cli_0]: aborting job: >> application called MPI_Abort(MPI_COMM_WORLD, 63) - process 0 >> >> >> =================================================================================== >> = BAD TERMINATION OF ONE OF YOUR APPLICATION PROCESSES >> = EXIT CODE: 63 >> = CLEANING UP REMAINING PROCESSES >> = YOU CAN IGNORE THE BELOW CLEANUP MESSAGES >> >> =================================================================================== >> >> I do not understand how the index (i,j) are out of range when set to >> corresponding fields in row and col variables. What could be the possible >> problem ? And any suggestions on the way to debug this sort of issues ? >> >> Thanks, >> Bishesh >> >> >> >> >> > > > -- > What most experimenters take for granted before they begin their > experiments is infinitely more interesting than any results to which their > experiments lead. > -- Norbert Wiener > -------------- next part -------------- An HTML attachment was scrubbed... URL: From bisheshkh at gmail.com Thu Jul 4 10:39:14 2013 From: bisheshkh at gmail.com (Bishesh Khanal) Date: Thu, 4 Jul 2013 17:39:14 +0200 Subject: [petsc-users] ERROR: Argument out of range!; Local index XX too large XX (max) at 0! with MatSetValuesStencil In-Reply-To: References: Message-ID: On Thu, Jul 4, 2013 at 3:11 PM, Bishesh Khanal wrote: > > On Thu, Jul 4, 2013 at 2:59 PM, Matthew Knepley wrote: > >> On Thu, Jul 4, 2013 at 7:32 AM, Bishesh Khanal wrote: >> >>> Hi all, >>> I'm trying to use DMCreateMatrix and MatStencil to fill up a matrix that >>> results from a finite difference discretization of a PDE (objective is to >>> solve the resulting linear system in the form of Ax=b). >>> >>> However there are some errors and I'm not sure about the issue! >>> A short demo: >>> A 2D mXn grid, with two variables at each node (dof=2), >>> so the resulting A would be 2mn X 2mn. >>> Let's say the two variables are vx and vy, and the two associated >>> equations discretized are x-eq and y-eq. >>> >>> Here is the relevant part of the code: (mat1 variable for the A matrix): >>> >> >> Just sending the whole code is better. I suspect the logic is wrong for >> selecting the boundary. 
>> >> Matt >> > > Thanks, here is the complete code: > > #include > #include > > #undef __FUNCT__ > #define __FUNCT__ "main" > int main(int argc,char **argv) > { > PetscErrorCode ierr; > DM da; > PetscInt m=10,n=10; > > ierr = PetscInitialize(&argc,&argv,(char*)0,help);CHKERRQ(ierr); > > /* Create a DMDA and an associated vector */ > > ierr = DMDACreate2d(PETSC_COMM_WORLD, DMDA_BOUNDARY_NONE, > DMDA_BOUNDARY_NONE,DMDA_STENCIL_BOX,m,n, > > PETSC_DECIDE,PETSC_DECIDE,2,1,NULL,NULL,&da);CHKERRQ(ierr); > > Mat mat1; > ierr = DMCreateMatrix(da,MATMPIAIJ,&mat1); CHKERRQ(ierr); > MatStencil row, col[4]; //let's test 4 non-zeros in one row. > PetscScalar val[4]; > PetscScalar coeff = 2.; //just a constant coefficient for testing. > PetscInt i,j; > DMDALocalInfo info; > > ierr = DMDAGetLocalInfo(da,&info); CHKERRQ(ierr); > > for(i = info.ys; i < info.ys+info.ym; ++i){ > for(j = info.xs; j < info.xs+info.xm; ++j){ > row.i = i; row.j = j; //one node at a time > if (j == 0 || j == info.mx-1){ //left and right borders > //vx(i,j) = 0; > row.c = 0; > col[0].c = 0; val[0] = 1; > col[0].i = i; col[0].j = j; > ierr = > MatSetValuesStencil(mat1,1,&row,1,col,val,INSERT_VALUES);CHKERRQ(ierr); > > //vy: //vy(i,j) - c*vy(i,j+-1) = 0; > row.c = 1; > col[0].c = 1; val[1] = coeff; > col[1].c = 1; > col[1].i = i; > if(j == 0) //vy(i,j) - c*vy(i,j+1) = 0; > col[1].j = j+1; > else //vy(i,j) - c*vy(i,j-1) = 0; > col[1].j = j-1; > > ierr = > MatSetValuesStencil(mat1,1,&row,2,col,val,INSERT_VALUES);CHKERRQ(ierr); > } > else if (i == 0 || i == info.my-1){ //top and bottom borders > //vx: vx(i,j) - c* vx(i+-1,j) = 0; > row.c = 0; > col[0].c = 0; val[0] = 1; > col[0].i = i; col[0].j = j; > col[1].c = 0; val[1] = coeff; > if (i == 0) //vx(i,j) - c*vx(i+1,j) = 0; > col[1].i = i+1; > else //vx(i,j) - c*vx(i-1,j) = 0; > col[1].i = i-1; > col[1].j = j; > ierr = > MatSetValuesStencil(mat1,1,&row,2,col,val,INSERT_VALUES);CHKERRQ(ierr); > > //vy(i,j) = 0; > row.c = 1; > col[0].c = 1; > ierr = > MatSetValuesStencil(mat1,1,&row,1,col,val,INSERT_VALUES);CHKERRQ(ierr); > } > else { //Interior points: > row.c = 0;//x-eq > col[0].c = 0; val[0] = 2*coeff; > col[0].i = i; col[0].j = j+1; > It seems to me that the problem is caused because of this kind of assignment col[1].j = j+1 when j takes the value "info.xs+info.xm-1" and for similar places below with "i+1" when "i = info.ys+info.ym". My guess is that the each processor contains only the local portion of the matrix "mat1" corresponding to the grid area it is taking care of. It probably does not store the ghost values. But when for e.g. j = info.xs+info.xm-1, we have col[0].j = j+1 and MatSetValuesStencil needs accessing the region of another processor. Am I thinking correctly ? Iis this the case where I need each processor allocating a space for ghost values too? If so, how do I do it? 
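[For reference: the stencil entries passed to MatSetValuesStencil() are allowed to refer to ghost points -- the documentation requires only that the rows and columns lie within the ghost region set up by DMDACreate2d(), and PETSc communicates any off-process values during assembly, so no extra user-side allocation is needed. The convention in the PETSc tutorials (e.g. src/ksp/ksp/examples/tutorials/ex29.c) is that info.xs/info.xm bound the x direction and info.ys/info.ym the y direction, with MatStencil's .i holding the x index and .j the y index. A minimal sketch of that traversal, reusing the names from the listing above:

    /* reference sketch only; da, mat1, row, col, val, info as in the listing */
    for (j = info.ys; j < info.ys + info.ym; ++j) {   /* j tracks y */
      for (i = info.xs; i < info.xs + info.xm; ++i) { /* i tracks x */
        row.i = i;  /* x index */
        row.j = j;  /* y index */
        /* fill col[]/val[] relative to (i,j), then call MatSetValuesStencil() */
      }
    }
]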
> col[1].c = 0; val[1] = -val[0] - coeff; > col[1].i = i; col[1].j = j; > col[2].c = 1; val[2] = 4*coeff; > col[2].i = i+1; col[2].j = j; > col[3].c = 1; val[3] = 4*coeff; > col[3].i = i; col[3].j = j; > ierr = > MatSetValuesStencil(mat1,1,&row,4,col,val,INSERT_VALUES);CHKERRQ(ierr); > > row.c = 1; //y-eq > col[0].c = 1; val[0] = 2*coeff; > col[0].i = i; col[0].j = j+1; > col[1].c = 1; val[1] = -val[0] - coeff; > col[1].i = i; col[1].j = j; > col[2].c = 0; val[2] = 4*coeff; > col[2].i = i+1; col[2].j = j; > col[3].c = 0; val[3] = 4*coeff; > col[3].i = i; col[3].j = j; > ierr = > MatSetValuesStencil(mat1,1,&row,4,col,val,INSERT_VALUES);CHKERRQ(ierr); > } > } > } > > MatAssemblyBegin(mat1,MAT_FINAL_ASSEMBLY); > MatAssemblyEnd(mat1,MAT_FINAL_ASSEMBLY); > ierr = > MatSetOption(mat1,MAT_NEW_NONZERO_LOCATION_ERR,PETSC_TRUE);CHKERRQ(ierr); > ierr = PetscObjectSetName((PetscObject)mat1,"mat1");CHKERRQ(ierr); > > /* clean up and exit */ > ierr = DMDestroy(&da);CHKERRQ(ierr); > ierr = MatDestroy(&mat1); > ierr = PetscFinalize(); > return 0; > } > > > > >> >> >>> PetscInt m = 10, n=10; >>> ierr = DMDACreate2d(PETSC_COMM_WORLD, DMDA_BOUNDARY_NONE, >>> DMDA_BOUNDARY_NONE,DMDA_STENCIL_BOX,m,n, >>> >>> PETSC_DECIDE,PETSC_DECIDE,2,1,NULL,NULL,&da);CHKERRQ(ierr); >>> >>> Mat mat1; >>> ierr = DMCreateMatrix(da,MATMPIAIJ,&mat1); CHKERRQ(ierr); >>> MatStencil row, col[4]; //let's test 4 non-zeros in one row. >>> PetscScalar val[4]; >>> PetscScalar coeff = 2.; //just a constant coefficient for testing. >>> PetscInt i,j; >>> DMDALocalInfo info; >>> ierr = DMDAGetLocalInfo(da,&info); CHKERRQ(ierr); >>> //Now fill up the matrix: >>> for(i = info.ys; i < info.ys+info.ym; ++i){ >>> for(j = info.xs; j < info.xs+info.xm; ++j){ >>> row.i = i; row.j = j; //one node at a time >>> if (j == 0 || j == info.mx-1){ //left and right borders >>> //vx(i,j) = 0; >>> row.c = 0; >>> col[0].c = 0; val[0] = 1; >>> col[0].i = i; col[0].j = j; >>> ierr = >>> MatSetValuesStencil(mat1,1,&row,1,col,val,INSERT_VALUES);CHKERRQ(ierr); >>> >>> //vy: //vy(i,j) - c*vy(i,j+-1) = 0; >>> row.c = 1; >>> col[0].c = 1; val[1] = coeff; >>> col[1].c = 1; >>> col[1].i = i; >>> if(j == 0) //vy(i,j) - c*vy(i,j+1) = 0; >>> col[1].j = j+1; >>> else //vy(i,j) - c*vy(i,j-1) = 0; >>> col[1].j = j-1; >>> >>> ierr = >>> MatSetValuesStencil(mat1,1,&row,2,col,val,INSERT_VALUES);CHKERRQ(ierr); >>> } >>> else if (i == 0 || i == info.my-1){ //top and bottom borders >>> //vx: vx(i,j) - c* vx(i+-1,j) = 0; >>> row.c = 0; >>> col[0].c = 0; val[0] = 1; >>> col[0].i = i; col[0].j = j; >>> col[1].c = 0; val[1] = coeff; >>> if (i == 0) //vx(i,j) - c*vx(i+1,j) = 0; >>> col[1].i = i+1; >>> else //vx(i,j) - c*vx(i-1,j) = 0; >>> col[1].i = i-1; >>> col[1].j = j; >>> ierr = >>> MatSetValuesStencil(mat1,1,&row,2,col,val,INSERT_VALUES);CHKERRQ(ierr); >>> >>> //vy(i,j) = 0; >>> row.c = 1; >>> col[0].c = 1; >>> ierr = >>> MatSetValuesStencil(mat1,1,&row,1,col,val,INSERT_VALUES);CHKERRQ(ierr); >>> } >>> else { //Interior points: >>> row.c = 0;//x-eq >>> col[0].c = 0; val[0] = 2*coeff; >>> col[0].i = i; col[0].j = j+1; >>> col[1].c = 0; val[1] = -val[0] - coeff; >>> col[1].i = i; col[1].j = j; >>> col[2].c = 1; val[2] = 4*coeff; >>> col[2].i = i+1; col[2].j = j; >>> col[3].c = 1; val[3] = 4*coeff; >>> col[3].i = i; col[3].j = j; >>> ierr = >>> MatSetValuesStencil(mat1,1,&row,4,col,val,INSERT_VALUES);CHKERRQ(ierr); >>> >>> row.c = 1; //y-eq >>> col[0].c = 1; val[0] = 2*coeff; >>> col[0].i = i; col[0].j = j+1; >>> col[1].c = 1; val[1] = -val[0] - coeff; >>> col[1].i = i; col[1].j = j; 
>>> col[2].c = 0; val[2] = 4*coeff; >>> col[2].i = i+1; col[2].j = j; >>> col[3].c = 0; val[3] = 4*coeff; >>> col[3].i = i; col[3].j = j; >>> ierr = >>> MatSetValuesStencil(mat1,1,&row,4,col,val,INSERT_VALUES);CHKERRQ(ierr); >>> } >>> } >>> } >>> >>> MatAssemblyBegin(mat1,MAT_FINAL_ASSEMBLY); >>> MatAssemblyEnd(mat1,MAT_FINAL_ASSEMBLY); >>> >>> However I get the following error when run with 2 processors. >>> [0]PETSC ERROR: --------------------- Error Message >>> ------------------------------------ >>> [0]PETSC ERROR: Argument out of range! >>> [0]PETSC ERROR: Local index 120 too large 120 (max) at 0! >>> [0]PETSC ERROR: >>> ------------------------------------------------------------------------ >>> [0]PETSC ERROR: Petsc Release Version 3.4.1, Jun, 10, 2013 >>> [0]PETSC ERROR: See docs/changes/index.html for recent updates. >>> [0]PETSC ERROR: See docs/faq.html for hints about trouble shooting. >>> [0]PETSC ERROR: See docs/index.html for manual pages. >>> [0]PETSC ERROR: >>> ------------------------------------------------------------------------ >>> [0]PETSC ERROR: examples/FDdmda_test on a arch-linux2-cxx-debug named >>> edwards by bkhanal Thu Jul 4 14:20:11 2013 >>> [0]PETSC ERROR: Libraries linked from >>> /home/bkhanal/Documents/softwares/petsc-3.4.1/arch-linux2-cxx-debug/lib >>> [0]PETSC ERROR: Configure run at Wed Jun 19 11:04:51 2013 >>> [0]PETSC ERROR: Configure options --with-cc=gcc --with-fc=g77 >>> --with-cxx=g++ --download-f-blas-lapack=1 --download-mpich=1 >>> -with-clanguage=cxx --download-hypre=1 >>> [0]PETSC ERROR: >>> ------------------------------------------------------------------------ >>> [0]PETSC ERROR: ISLocalToGlobalMappingApply() line 444 in >>> /home/bkhanal/Documents/softwares/petsc-3.4.1/src/vec/is/utils/isltog.c >>> [0]PETSC ERROR: MatSetValuesLocal() line 1967 in >>> /home/bkhanal/Documents/softwares/petsc-3.4.1/src/mat/interface/matrix.c >>> [0]PETSC ERROR: MatSetValuesStencil() line 1339 in >>> /home/bkhanal/Documents/softwares/petsc-3.4.1/src/mat/interface/matrix.c >>> [0]PETSC ERROR: main() line 75 in >>> "unknowndirectory/"/user/bkhanal/home/works/cmake_tuts/petsc_test/examples/FDdmda_test.cxx >>> application called MPI_Abort(MPI_COMM_WORLD, 63) - process 0 >>> [cli_0]: aborting job: >>> application called MPI_Abort(MPI_COMM_WORLD, 63) - process 0 >>> >>> >>> =================================================================================== >>> = BAD TERMINATION OF ONE OF YOUR APPLICATION PROCESSES >>> = EXIT CODE: 63 >>> = CLEANING UP REMAINING PROCESSES >>> = YOU CAN IGNORE THE BELOW CLEANUP MESSAGES >>> >>> =================================================================================== >>> >>> I do not understand how the index (i,j) are out of range when set to >>> corresponding fields in row and col variables. What could be the possible >>> problem ? And any suggestions on the way to debug this sort of issues ? >>> >>> Thanks, >>> Bishesh >>> >>> >>> >>> >>> >> >> >> -- >> What most experimenters take for granted before they begin their >> experiments is infinitely more interesting than any results to which their >> experiments lead. >> -- Norbert Wiener >> > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From knepley at gmail.com Thu Jul 4 11:43:49 2013 From: knepley at gmail.com (Matthew Knepley) Date: Thu, 4 Jul 2013 11:43:49 -0500 Subject: [petsc-users] ERROR: Argument out of range!; Local index XX too large XX (max) at 0! 
with MatSetValuesStencil In-Reply-To: References: Message-ID: On Thu, Jul 4, 2013 at 10:39 AM, Bishesh Khanal wrote: > > > > On Thu, Jul 4, 2013 at 3:11 PM, Bishesh Khanal wrote: > >> >> On Thu, Jul 4, 2013 at 2:59 PM, Matthew Knepley wrote: >> >>> On Thu, Jul 4, 2013 at 7:32 AM, Bishesh Khanal wrote: >>> >>>> Hi all, >>>> I'm trying to use DMCreateMatrix and MatStencil to fill up a matrix >>>> that results from a finite difference discretization of a PDE (objective is >>>> to solve the resulting linear system in the form of Ax=b). >>>> >>>> However there are some errors and I'm not sure about the issue! >>>> A short demo: >>>> A 2D mXn grid, with two variables at each node (dof=2), >>>> so the resulting A would be 2mn X 2mn. >>>> Let's say the two variables are vx and vy, and the two associated >>>> equations discretized are x-eq and y-eq. >>>> >>>> Here is the relevant part of the code: (mat1 variable for the A matrix): >>>> >>> >>> Just sending the whole code is better. I suspect the logic is wrong for >>> selecting the boundary. >>> >>> Matt >>> >> >> Thanks, here is the complete code: >> >> #include >> #include >> >> #undef __FUNCT__ >> #define __FUNCT__ "main" >> int main(int argc,char **argv) >> { >> PetscErrorCode ierr; >> DM da; >> PetscInt m=10,n=10; >> >> ierr = PetscInitialize(&argc,&argv,(char*)0,help);CHKERRQ(ierr); >> >> /* Create a DMDA and an associated vector */ >> >> ierr = DMDACreate2d(PETSC_COMM_WORLD, DMDA_BOUNDARY_NONE, >> DMDA_BOUNDARY_NONE,DMDA_STENCIL_BOX,m,n, >> >> PETSC_DECIDE,PETSC_DECIDE,2,1,NULL,NULL,&da);CHKERRQ(ierr); >> >> Mat mat1; >> ierr = DMCreateMatrix(da,MATMPIAIJ,&mat1); CHKERRQ(ierr); >> MatStencil row, col[4]; //let's test 4 non-zeros in one row. >> PetscScalar val[4]; >> PetscScalar coeff = 2.; //just a constant coefficient for testing. >> PetscInt i,j; >> DMDALocalInfo info; >> >> ierr = DMDAGetLocalInfo(da,&info); CHKERRQ(ierr); >> >> for(i = info.ys; i < info.ys+info.ym; ++i){ >> for(j = info.xs; j < info.xs+info.xm; ++j){ >> > row.i = i; row.j = j; //one node at a time >> if (j == 0 || j == info.mx-1){ //left and right borders >> //vx(i,j) = 0; >> row.c = 0; >> col[0].c = 0; val[0] = 1; >> col[0].i = i; col[0].j = j; >> ierr = >> MatSetValuesStencil(mat1,1,&row,1,col,val,INSERT_VALUES);CHKERRQ(ierr); >> >> //vy: //vy(i,j) - c*vy(i,j+-1) = 0; >> row.c = 1; >> col[0].c = 1; val[1] = coeff; >> col[1].c = 1; >> col[1].i = i; >> if(j == 0) //vy(i,j) - c*vy(i,j+1) = 0; >> col[1].j = j+1; >> > else //vy(i,j) - c*vy(i,j-1) = 0; >> col[1].j = j-1; >> >> ierr = >> MatSetValuesStencil(mat1,1,&row,2,col,val,INSERT_VALUES);CHKERRQ(ierr); >> } >> else if (i == 0 || i == info.my-1){ //top and bottom borders >> //vx: vx(i,j) - c* vx(i+-1,j) = 0; >> row.c = 0; >> col[0].c = 0; val[0] = 1; >> col[0].i = i; col[0].j = j; >> col[1].c = 0; val[1] = coeff; >> if (i == 0) //vx(i,j) - c*vx(i+1,j) = 0; >> col[1].i = i+1; >> else //vx(i,j) - c*vx(i-1,j) = 0; >> col[1].i = i-1; >> col[1].j = j; >> ierr = >> MatSetValuesStencil(mat1,1,&row,2,col,val,INSERT_VALUES);CHKERRQ(ierr); >> >> //vy(i,j) = 0; >> row.c = 1; >> col[0].c = 1; >> ierr = >> MatSetValuesStencil(mat1,1,&row,1,col,val,INSERT_VALUES);CHKERRQ(ierr); >> } >> else { //Interior points: >> row.c = 0;//x-eq >> col[0].c = 0; val[0] = 2*coeff; >> col[0].i = i; col[0].j = j+1; >> > > It seems to me that the problem is caused because of this kind of > assignment col[1].j = j+1 when j takes the value "info.xs+info.xm-1" and > for similar places below with "i+1" when "i = info.ys+info.ym". 
> My guess is that the each processor contains only the local portion of the > matrix "mat1" corresponding to the grid area it is taking care of. It > probably does not store the ghost values. But when for e.g. > j = info.xs+info.xm-1, we have col[0].j = j+1 and MatSetValuesStencil > needs accessing the region of another processor. Am I thinking correctly ? > Iis this the case where I need each processor allocating a space for ghost > values too? > If so, how do I do it? > If you want to step outside the mesh, you either need DMDA_BOUNDARY_PERIODIC or DMDA_BOUNDARY_GHOSTED. Matt > col[1].c = 0; val[1] = -val[0] - coeff; >> col[1].i = i; col[1].j = j; >> col[2].c = 1; val[2] = 4*coeff; >> col[2].i = i+1; col[2].j = j; >> col[3].c = 1; val[3] = 4*coeff; >> col[3].i = i; col[3].j = j; >> ierr = >> MatSetValuesStencil(mat1,1,&row,4,col,val,INSERT_VALUES);CHKERRQ(ierr); >> >> row.c = 1; //y-eq >> col[0].c = 1; val[0] = 2*coeff; >> col[0].i = i; col[0].j = j+1; >> col[1].c = 1; val[1] = -val[0] - coeff; >> col[1].i = i; col[1].j = j; >> col[2].c = 0; val[2] = 4*coeff; >> col[2].i = i+1; col[2].j = j; >> col[3].c = 0; val[3] = 4*coeff; >> col[3].i = i; col[3].j = j; >> ierr = >> MatSetValuesStencil(mat1,1,&row,4,col,val,INSERT_VALUES);CHKERRQ(ierr); >> } >> } >> } >> >> MatAssemblyBegin(mat1,MAT_FINAL_ASSEMBLY); >> MatAssemblyEnd(mat1,MAT_FINAL_ASSEMBLY); >> ierr = >> MatSetOption(mat1,MAT_NEW_NONZERO_LOCATION_ERR,PETSC_TRUE);CHKERRQ(ierr); >> ierr = PetscObjectSetName((PetscObject)mat1,"mat1");CHKERRQ(ierr); >> >> /* clean up and exit */ >> ierr = DMDestroy(&da);CHKERRQ(ierr); >> ierr = MatDestroy(&mat1); >> ierr = PetscFinalize(); >> return 0; >> } >> >> >> >> >>> >>> >>>> PetscInt m = 10, n=10; >>>> ierr = DMDACreate2d(PETSC_COMM_WORLD, DMDA_BOUNDARY_NONE, >>>> DMDA_BOUNDARY_NONE,DMDA_STENCIL_BOX,m,n, >>>> >>>> PETSC_DECIDE,PETSC_DECIDE,2,1,NULL,NULL,&da);CHKERRQ(ierr); >>>> >>>> Mat mat1; >>>> ierr = DMCreateMatrix(da,MATMPIAIJ,&mat1); CHKERRQ(ierr); >>>> MatStencil row, col[4]; //let's test 4 non-zeros in one row. >>>> PetscScalar val[4]; >>>> PetscScalar coeff = 2.; //just a constant coefficient for testing. 
>>>> PetscInt i,j; >>>> DMDALocalInfo info; >>>> ierr = DMDAGetLocalInfo(da,&info); CHKERRQ(ierr); >>>> //Now fill up the matrix: >>>> for(i = info.ys; i < info.ys+info.ym; ++i){ >>>> for(j = info.xs; j < info.xs+info.xm; ++j){ >>>> row.i = i; row.j = j; //one node at a time >>>> if (j == 0 || j == info.mx-1){ //left and right borders >>>> //vx(i,j) = 0; >>>> row.c = 0; >>>> col[0].c = 0; val[0] = 1; >>>> col[0].i = i; col[0].j = j; >>>> ierr = >>>> MatSetValuesStencil(mat1,1,&row,1,col,val,INSERT_VALUES);CHKERRQ(ierr); >>>> >>>> //vy: //vy(i,j) - c*vy(i,j+-1) = 0; >>>> row.c = 1; >>>> col[0].c = 1; val[1] = coeff; >>>> col[1].c = 1; >>>> col[1].i = i; >>>> if(j == 0) //vy(i,j) - c*vy(i,j+1) = 0; >>>> col[1].j = j+1; >>>> else //vy(i,j) - c*vy(i,j-1) = 0; >>>> col[1].j = j-1; >>>> >>>> ierr = >>>> MatSetValuesStencil(mat1,1,&row,2,col,val,INSERT_VALUES);CHKERRQ(ierr); >>>> } >>>> else if (i == 0 || i == info.my-1){ //top and bottom >>>> borders >>>> //vx: vx(i,j) - c* vx(i+-1,j) = 0; >>>> row.c = 0; >>>> col[0].c = 0; val[0] = 1; >>>> col[0].i = i; col[0].j = j; >>>> col[1].c = 0; val[1] = coeff; >>>> if (i == 0) //vx(i,j) - c*vx(i+1,j) = 0; >>>> col[1].i = i+1; >>>> else //vx(i,j) - c*vx(i-1,j) = 0; >>>> col[1].i = i-1; >>>> col[1].j = j; >>>> ierr = >>>> MatSetValuesStencil(mat1,1,&row,2,col,val,INSERT_VALUES);CHKERRQ(ierr); >>>> >>>> //vy(i,j) = 0; >>>> row.c = 1; >>>> col[0].c = 1; >>>> ierr = >>>> MatSetValuesStencil(mat1,1,&row,1,col,val,INSERT_VALUES);CHKERRQ(ierr); >>>> } >>>> else { //Interior points: >>>> row.c = 0;//x-eq >>>> col[0].c = 0; val[0] = 2*coeff; >>>> col[0].i = i; col[0].j = j+1; >>>> col[1].c = 0; val[1] = -val[0] - coeff; >>>> col[1].i = i; col[1].j = j; >>>> col[2].c = 1; val[2] = 4*coeff; >>>> col[2].i = i+1; col[2].j = j; >>>> col[3].c = 1; val[3] = 4*coeff; >>>> col[3].i = i; col[3].j = j; >>>> ierr = >>>> MatSetValuesStencil(mat1,1,&row,4,col,val,INSERT_VALUES);CHKERRQ(ierr); >>>> >>>> row.c = 1; //y-eq >>>> col[0].c = 1; val[0] = 2*coeff; >>>> col[0].i = i; col[0].j = j+1; >>>> col[1].c = 1; val[1] = -val[0] - coeff; >>>> col[1].i = i; col[1].j = j; >>>> col[2].c = 0; val[2] = 4*coeff; >>>> col[2].i = i+1; col[2].j = j; >>>> col[3].c = 0; val[3] = 4*coeff; >>>> col[3].i = i; col[3].j = j; >>>> ierr = >>>> MatSetValuesStencil(mat1,1,&row,4,col,val,INSERT_VALUES);CHKERRQ(ierr); >>>> } >>>> } >>>> } >>>> >>>> MatAssemblyBegin(mat1,MAT_FINAL_ASSEMBLY); >>>> MatAssemblyEnd(mat1,MAT_FINAL_ASSEMBLY); >>>> >>>> However I get the following error when run with 2 processors. >>>> [0]PETSC ERROR: --------------------- Error Message >>>> ------------------------------------ >>>> [0]PETSC ERROR: Argument out of range! >>>> [0]PETSC ERROR: Local index 120 too large 120 (max) at 0! >>>> [0]PETSC ERROR: >>>> ------------------------------------------------------------------------ >>>> [0]PETSC ERROR: Petsc Release Version 3.4.1, Jun, 10, 2013 >>>> [0]PETSC ERROR: See docs/changes/index.html for recent updates. >>>> [0]PETSC ERROR: See docs/faq.html for hints about trouble shooting. >>>> [0]PETSC ERROR: See docs/index.html for manual pages. 
>>>> [0]PETSC ERROR:
>>>> ------------------------------------------------------------------------
>>>> [0]PETSC ERROR: examples/FDdmda_test on a arch-linux2-cxx-debug named
>>>> edwards by bkhanal Thu Jul 4 14:20:11 2013
>>>> [0]PETSC ERROR: Libraries linked from
>>>> /home/bkhanal/Documents/softwares/petsc-3.4.1/arch-linux2-cxx-debug/lib
>>>> [0]PETSC ERROR: Configure run at Wed Jun 19 11:04:51 2013
>>>> [0]PETSC ERROR: Configure options --with-cc=gcc --with-fc=g77
>>>> --with-cxx=g++ --download-f-blas-lapack=1 --download-mpich=1
>>>> -with-clanguage=cxx --download-hypre=1
>>>> [0]PETSC ERROR:
>>>> ------------------------------------------------------------------------
>>>> [0]PETSC ERROR: ISLocalToGlobalMappingApply() line 444 in
>>>> /home/bkhanal/Documents/softwares/petsc-3.4.1/src/vec/is/utils/isltog.c
>>>> [0]PETSC ERROR: MatSetValuesLocal() line 1967 in
>>>> /home/bkhanal/Documents/softwares/petsc-3.4.1/src/mat/interface/matrix.c
>>>> [0]PETSC ERROR: MatSetValuesStencil() line 1339 in
>>>> /home/bkhanal/Documents/softwares/petsc-3.4.1/src/mat/interface/matrix.c
>>>> [0]PETSC ERROR: main() line 75 in
>>>> "unknowndirectory/"/user/bkhanal/home/works/cmake_tuts/petsc_test/examples/FDdmda_test.cxx
>>>> application called MPI_Abort(MPI_COMM_WORLD, 63) - process 0
>>>> [cli_0]: aborting job:
>>>> application called MPI_Abort(MPI_COMM_WORLD, 63) - process 0
>>>>
>>>> ===================================================================================
>>>> = BAD TERMINATION OF ONE OF YOUR APPLICATION PROCESSES
>>>> = EXIT CODE: 63
>>>> = CLEANING UP REMAINING PROCESSES
>>>> = YOU CAN IGNORE THE BELOW CLEANUP MESSAGES
>>>> ===================================================================================
>>>>
>>>> I do not understand how the index (i,j) are out of range when set to
>>>> corresponding fields in row and col variables. What could be the possible
>>>> problem ? And any suggestions on the way to debug this sort of issues ?
>>>>
>>>> Thanks,
>>>> Bishesh
>>>
>>> --
>>> What most experimenters take for granted before they begin their
>>> experiments is infinitely more interesting than any results to which their
>>> experiments lead.
>>> -- Norbert Wiener
>>
> --
> What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead.
> -- Norbert Wiener
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From bsmith at mcs.anl.gov Thu Jul 4 15:34:23 2013
From: bsmith at mcs.anl.gov (Barry Smith)
Date: Thu, 4 Jul 2013 15:34:23 -0500
Subject: [petsc-users] ERROR: Argument out of range!; Local index XX too large XX (max) at 0! with MatSetValuesStencil
In-Reply-To: References: Message-ID:

   The j should be tracking the y variable and the i the x variable. See for example src/ksp/ksp/examples/tutorials/ex29.c

   for (j=ys; j<ys+ym; j++) {
     for (i=xs; i<xs+xm; i++) {
       row.i = i; row.j = j;

   After you fix this run with a 3 by 4 mesh and check the line number printed indicating the problem. Then look at the row and column values for that problem and see in the code how it could get out of bounds.

   Barry

On Jul 4, 2013, at 8:11 AM, Bishesh Khanal wrote:
>
> On Thu, Jul 4, 2013 at 2:59 PM, Matthew Knepley wrote:
> On Thu, Jul 4, 2013 at 7:32 AM, Bishesh Khanal wrote:
> Hi all,
> I'm trying to use DMCreateMatrix and MatStencil to fill up a matrix that results from a finite difference discretization of a PDE (objective is to solve the resulting linear system in the form of Ax=b).
>
> However there are some errors and I'm not sure about the issue!
> A short demo:
> A 2D mXn grid, with two variables at each node (dof=2),
> so the resulting A would be 2mn X 2mn.
> Let's say the two variables are vx and vy, and the two associated > equations discretized are x-eq and y-eq. > > Here is the relevant part of the code: (mat1 variable for the A matrix): > > Just sending the whole code is better. I suspect the logic is wrong for selecting the boundary. > > Matt > > Thanks, here is the complete code: > > #include > #include > > #undef __FUNCT__ > #define __FUNCT__ "main" > int main(int argc,char **argv) > { > PetscErrorCode ierr; > DM da; > PetscInt m=10,n=10; > > ierr = PetscInitialize(&argc,&argv,(char*)0,help);CHKERRQ(ierr); > > /* Create a DMDA and an associated vector */ > ierr = DMDACreate2d(PETSC_COMM_WORLD, DMDA_BOUNDARY_NONE, DMDA_BOUNDARY_NONE,DMDA_STENCIL_BOX,m,n, > PETSC_DECIDE,PETSC_DECIDE,2,1,NULL,NULL,&da);CHKERRQ(ierr); > > Mat mat1; > ierr = DMCreateMatrix(da,MATMPIAIJ,&mat1); CHKERRQ(ierr); > MatStencil row, col[4]; //let's test 4 non-zeros in one row. > PetscScalar val[4]; > PetscScalar coeff = 2.; //just a constant coefficient for testing. > PetscInt i,j; > DMDALocalInfo info; > > ierr = DMDAGetLocalInfo(da,&info); CHKERRQ(ierr); > > for(i = info.ys; i < info.ys+info.ym; ++i){ > for(j = info.xs; j < info.xs+info.xm; ++j){ > row.i = i; row.j = j; //one node at a time > if (j == 0 || j == info.mx-1){ //left and right borders > //vx(i,j) = 0; > row.c = 0; > col[0].c = 0; val[0] = 1; > col[0].i = i; col[0].j = j; > ierr = MatSetValuesStencil(mat1,1,&row,1,col,val,INSERT_VALUES);CHKERRQ(ierr); > > //vy: //vy(i,j) - c*vy(i,j+-1) = 0; > row.c = 1; > col[0].c = 1; val[1] = coeff; > col[1].c = 1; > col[1].i = i; > if(j == 0) //vy(i,j) - c*vy(i,j+1) = 0; > col[1].j = j+1; > else //vy(i,j) - c*vy(i,j-1) = 0; > col[1].j = j-1; > > ierr = MatSetValuesStencil(mat1,1,&row,2,col,val,INSERT_VALUES);CHKERRQ(ierr); > } > else if (i == 0 || i == info.my-1){ //top and bottom borders > //vx: vx(i,j) - c* vx(i+-1,j) = 0; > row.c = 0; > col[0].c = 0; val[0] = 1; > col[0].i = i; col[0].j = j; > col[1].c = 0; val[1] = coeff; > if (i == 0) //vx(i,j) - c*vx(i+1,j) = 0; > col[1].i = i+1; > else //vx(i,j) - c*vx(i-1,j) = 0; > col[1].i = i-1; > col[1].j = j; > ierr = MatSetValuesStencil(mat1,1,&row,2,col,val,INSERT_VALUES);CHKERRQ(ierr); > > //vy(i,j) = 0; > row.c = 1; > col[0].c = 1; > ierr = MatSetValuesStencil(mat1,1,&row,1,col,val,INSERT_VALUES);CHKERRQ(ierr); > } > else { //Interior points: > row.c = 0;//x-eq > col[0].c = 0; val[0] = 2*coeff; > col[0].i = i; col[0].j = j+1; > col[1].c = 0; val[1] = -val[0] - coeff; > col[1].i = i; col[1].j = j; > col[2].c = 1; val[2] = 4*coeff; > col[2].i = i+1; col[2].j = j; > col[3].c = 1; val[3] = 4*coeff; > col[3].i = i; col[3].j = j; > ierr = MatSetValuesStencil(mat1,1,&row,4,col,val,INSERT_VALUES);CHKERRQ(ierr); > > row.c = 1; //y-eq > col[0].c = 1; val[0] = 2*coeff; > col[0].i = i; col[0].j = j+1; > col[1].c = 1; val[1] = -val[0] - coeff; > col[1].i = i; col[1].j = j; > col[2].c = 0; val[2] = 4*coeff; > col[2].i = i+1; col[2].j = j; > col[3].c = 0; val[3] = 4*coeff; > col[3].i = i; col[3].j = j; > ierr = MatSetValuesStencil(mat1,1,&row,4,col,val,INSERT_VALUES);CHKERRQ(ierr); > } > } > } > > MatAssemblyBegin(mat1,MAT_FINAL_ASSEMBLY); > MatAssemblyEnd(mat1,MAT_FINAL_ASSEMBLY); > ierr = MatSetOption(mat1,MAT_NEW_NONZERO_LOCATION_ERR,PETSC_TRUE);CHKERRQ(ierr); > ierr = PetscObjectSetName((PetscObject)mat1,"mat1");CHKERRQ(ierr); > > /* clean up and exit */ > ierr = DMDestroy(&da);CHKERRQ(ierr); > ierr = MatDestroy(&mat1); > ierr = PetscFinalize(); > return 0; > } > > > > > PetscInt m = 10, n=10; > ierr = 
DMDACreate2d(PETSC_COMM_WORLD, DMDA_BOUNDARY_NONE, DMDA_BOUNDARY_NONE,DMDA_STENCIL_BOX,m,n, > PETSC_DECIDE,PETSC_DECIDE,2,1,NULL,NULL,&da);CHKERRQ(ierr); > > Mat mat1; > ierr = DMCreateMatrix(da,MATMPIAIJ,&mat1); CHKERRQ(ierr); > MatStencil row, col[4]; //let's test 4 non-zeros in one row. > PetscScalar val[4]; > PetscScalar coeff = 2.; //just a constant coefficient for testing. > PetscInt i,j; > DMDALocalInfo info; > ierr = DMDAGetLocalInfo(da,&info); CHKERRQ(ierr); > //Now fill up the matrix: > for(i = info.ys; i < info.ys+info.ym; ++i){ > for(j = info.xs; j < info.xs+info.xm; ++j){ > row.i = i; row.j = j; //one node at a time > if (j == 0 || j == info.mx-1){ //left and right borders > //vx(i,j) = 0; > row.c = 0; > col[0].c = 0; val[0] = 1; > col[0].i = i; col[0].j = j; > ierr = MatSetValuesStencil(mat1,1,&row,1,col,val,INSERT_VALUES);CHKERRQ(ierr); > > //vy: //vy(i,j) - c*vy(i,j+-1) = 0; > row.c = 1; > col[0].c = 1; val[1] = coeff; > col[1].c = 1; > col[1].i = i; > if(j == 0) //vy(i,j) - c*vy(i,j+1) = 0; > col[1].j = j+1; > else //vy(i,j) - c*vy(i,j-1) = 0; > col[1].j = j-1; > > ierr = MatSetValuesStencil(mat1,1,&row,2,col,val,INSERT_VALUES);CHKERRQ(ierr); > } > else if (i == 0 || i == info.my-1){ //top and bottom borders > //vx: vx(i,j) - c* vx(i+-1,j) = 0; > row.c = 0; > col[0].c = 0; val[0] = 1; > col[0].i = i; col[0].j = j; > col[1].c = 0; val[1] = coeff; > if (i == 0) //vx(i,j) - c*vx(i+1,j) = 0; > col[1].i = i+1; > else //vx(i,j) - c*vx(i-1,j) = 0; > col[1].i = i-1; > col[1].j = j; > ierr = MatSetValuesStencil(mat1,1,&row,2,col,val,INSERT_VALUES);CHKERRQ(ierr); > > //vy(i,j) = 0; > row.c = 1; > col[0].c = 1; > ierr = MatSetValuesStencil(mat1,1,&row,1,col,val,INSERT_VALUES);CHKERRQ(ierr); > } > else { //Interior points: > row.c = 0;//x-eq > col[0].c = 0; val[0] = 2*coeff; > col[0].i = i; col[0].j = j+1; > col[1].c = 0; val[1] = -val[0] - coeff; > col[1].i = i; col[1].j = j; > col[2].c = 1; val[2] = 4*coeff; > col[2].i = i+1; col[2].j = j; > col[3].c = 1; val[3] = 4*coeff; > col[3].i = i; col[3].j = j; > ierr = MatSetValuesStencil(mat1,1,&row,4,col,val,INSERT_VALUES);CHKERRQ(ierr); > > row.c = 1; //y-eq > col[0].c = 1; val[0] = 2*coeff; > col[0].i = i; col[0].j = j+1; > col[1].c = 1; val[1] = -val[0] - coeff; > col[1].i = i; col[1].j = j; > col[2].c = 0; val[2] = 4*coeff; > col[2].i = i+1; col[2].j = j; > col[3].c = 0; val[3] = 4*coeff; > col[3].i = i; col[3].j = j; > ierr = MatSetValuesStencil(mat1,1,&row,4,col,val,INSERT_VALUES);CHKERRQ(ierr); > } > } > } > > MatAssemblyBegin(mat1,MAT_FINAL_ASSEMBLY); > MatAssemblyEnd(mat1,MAT_FINAL_ASSEMBLY); > > However I get the following error when run with 2 processors. > [0]PETSC ERROR: --------------------- Error Message ------------------------------------ > [0]PETSC ERROR: Argument out of range! > [0]PETSC ERROR: Local index 120 too large 120 (max) at 0! > [0]PETSC ERROR: ------------------------------------------------------------------------ > [0]PETSC ERROR: Petsc Release Version 3.4.1, Jun, 10, 2013 > [0]PETSC ERROR: See docs/changes/index.html for recent updates. > [0]PETSC ERROR: See docs/faq.html for hints about trouble shooting. > [0]PETSC ERROR: See docs/index.html for manual pages. 
> [0]PETSC ERROR:
> ------------------------------------------------------------------------
> [0]PETSC ERROR: examples/FDdmda_test on a arch-linux2-cxx-debug named
> edwards by bkhanal Thu Jul 4 14:20:11 2013
> [0]PETSC ERROR: Libraries linked from
> /home/bkhanal/Documents/softwares/petsc-3.4.1/arch-linux2-cxx-debug/lib
> [0]PETSC ERROR: Configure run at Wed Jun 19 11:04:51 2013
> [0]PETSC ERROR: Configure options --with-cc=gcc --with-fc=g77
> --with-cxx=g++ --download-f-blas-lapack=1 --download-mpich=1
> -with-clanguage=cxx --download-hypre=1
> [0]PETSC ERROR:
> ------------------------------------------------------------------------
> [0]PETSC ERROR: ISLocalToGlobalMappingApply() line 444 in
> /home/bkhanal/Documents/softwares/petsc-3.4.1/src/vec/is/utils/isltog.c
> [0]PETSC ERROR: MatSetValuesLocal() line 1967 in
> /home/bkhanal/Documents/softwares/petsc-3.4.1/src/mat/interface/matrix.c
> [0]PETSC ERROR: MatSetValuesStencil() line 1339 in
> /home/bkhanal/Documents/softwares/petsc-3.4.1/src/mat/interface/matrix.c
> [0]PETSC ERROR: main() line 75 in
> "unknowndirectory/"/user/bkhanal/home/works/cmake_tuts/petsc_test/examples/FDdmda_test.cxx
> application called MPI_Abort(MPI_COMM_WORLD, 63) - process 0
> [cli_0]: aborting job:
> application called MPI_Abort(MPI_COMM_WORLD, 63) - process 0
>
> ===================================================================================
> = BAD TERMINATION OF ONE OF YOUR APPLICATION PROCESSES
> = EXIT CODE: 63
> = CLEANING UP REMAINING PROCESSES
> = YOU CAN IGNORE THE BELOW CLEANUP MESSAGES
> ===================================================================================
>
> I do not understand how the index (i,j) are out of range when set to
> corresponding fields in row and col variables. What could be the possible
> problem ? And any suggestions on the way to debug this sort of issues ?
>
> Thanks,
> Bishesh
>
> --
> What most experimenters take for granted before they begin their
> experiments is infinitely more interesting than any results to which their
> experiments lead.
> -- Norbert Wiener
>

From bisheshkh at gmail.com Fri Jul 5 04:37:06 2013
From: bisheshkh at gmail.com (Bishesh Khanal)
Date: Fri, 5 Jul 2013 11:37:06 +0200
Subject: [petsc-users] ERROR: Argument out of range!; Local index XX too large XX (max) at 0! with MatSetValuesStencil
In-Reply-To: References: Message-ID:

On Thu, Jul 4, 2013 at 10:34 PM, Barry Smith wrote:
>
> The j should be tracking the y variable and the i the x variable. See
> for example src/ksp/ksp/examples/tutorials/ex29.c
>
> for (j=ys; j<ys+ym; j++) {
>   for (i=xs; i<xs+xm; i++) {
>     row.i = i; row.j = j;

Thanks Barry, tracking y variable with i, and x with j was the problem! Interchanging them worked.

> After you fix this run with a 3 by 4 mesh and check the line number
> printed indicating the problem. Then look at the row and column values for
> that problem and see in the code how it could get out of bounds.

I'm afraid I did not understand exactly which line number that gets printed are you talking about ? Is it this one in the error message ?
[0]PETSC ERROR: main() line 75 in "unknowndirectory/"/user/bkhanal/home/works/cmake_tuts/petsc_test/examples/FDdmda_test.cxx
where there is a call to the function MatSetValuesStencil ?
Could you please clarify a bit more ?
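[One way to see exactly which entry goes out of bounds is to compare every stencil index against the ghosted ranges recorded in DMDALocalInfo before inserting. A hypothetical debugging aid, not from the thread -- row, col, info, and ierr are the variables from the listing earlier in the thread, and nc is an assumed name for the number of columns about to be passed:

    PetscInt k;
    for (k = 0; k < nc; ++k) {
      /* valid stencil columns must lie inside this process's ghosted patch */
      if (col[k].i < info.gxs || col[k].i >= info.gxs + info.gxm ||
          col[k].j < info.gys || col[k].j >= info.gys + info.gym) {
        ierr = PetscPrintf(PETSC_COMM_SELF,
                 "row (%D,%D) c=%D: col[%D]=(%D,%D) is outside the ghosted range\n",
                 row.i, row.j, row.c, k, col[k].i, col[k].j);CHKERRQ(ierr);
      }
    }
]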
> Barry > > On Jul 4, 2013, at 8:11 AM, Bishesh Khanal wrote: > > > > > On Thu, Jul 4, 2013 at 2:59 PM, Matthew Knepley > wrote: > > On Thu, Jul 4, 2013 at 7:32 AM, Bishesh Khanal > wrote: > > Hi all, > > I'm trying to use DMCreateMatrix and MatStencil to fill up a matrix that > results from a finite difference discretization of a PDE (objective is to > solve the resulting linear system in the form of Ax=b). > > > > However there are some errors and I'm not sure about the issue! > > A short demo: > > A 2D mXn grid, with two variables at each node (dof=2), > > so the resulting A would be 2mn X 2mn. > > Let's say the two variables are vx and vy, and the two associated > > equations discretized are x-eq and y-eq. > > > > Here is the relevant part of the code: (mat1 variable for the A matrix): > > > > Just sending the whole code is better. I suspect the logic is wrong for > selecting the boundary. > > > > Matt > > > > Thanks, here is the complete code: > > > > #include > > #include > > > > #undef __FUNCT__ > > #define __FUNCT__ "main" > > int main(int argc,char **argv) > > { > > PetscErrorCode ierr; > > DM da; > > PetscInt m=10,n=10; > > > > ierr = PetscInitialize(&argc,&argv,(char*)0,help);CHKERRQ(ierr); > > > > /* Create a DMDA and an associated vector */ > > ierr = DMDACreate2d(PETSC_COMM_WORLD, DMDA_BOUNDARY_NONE, > DMDA_BOUNDARY_NONE,DMDA_STENCIL_BOX,m,n, > > > PETSC_DECIDE,PETSC_DECIDE,2,1,NULL,NULL,&da);CHKERRQ(ierr); > > > > Mat mat1; > > ierr = DMCreateMatrix(da,MATMPIAIJ,&mat1); CHKERRQ(ierr); > > MatStencil row, col[4]; //let's test 4 non-zeros in one row. > > PetscScalar val[4]; > > PetscScalar coeff = 2.; //just a constant coefficient for testing. > > PetscInt i,j; > > DMDALocalInfo info; > > > > ierr = DMDAGetLocalInfo(da,&info); CHKERRQ(ierr); > > > > for(i = info.ys; i < info.ys+info.ym; ++i){ > > for(j = info.xs; j < info.xs+info.xm; ++j){ > > row.i = i; row.j = j; //one node at a time > > if (j == 0 || j == info.mx-1){ //left and right borders > > //vx(i,j) = 0; > > row.c = 0; > > col[0].c = 0; val[0] = 1; > > col[0].i = i; col[0].j = j; > > ierr = > MatSetValuesStencil(mat1,1,&row,1,col,val,INSERT_VALUES);CHKERRQ(ierr); > > > > //vy: //vy(i,j) - c*vy(i,j+-1) = 0; > > row.c = 1; > > col[0].c = 1; val[1] = coeff; > > col[1].c = 1; > > col[1].i = i; > > if(j == 0) //vy(i,j) - c*vy(i,j+1) = 0; > > col[1].j = j+1; > > else //vy(i,j) - c*vy(i,j-1) = 0; > > col[1].j = j-1; > > > > ierr = > MatSetValuesStencil(mat1,1,&row,2,col,val,INSERT_VALUES);CHKERRQ(ierr); > > } > > else if (i == 0 || i == info.my-1){ //top and bottom borders > > //vx: vx(i,j) - c* vx(i+-1,j) = 0; > > row.c = 0; > > col[0].c = 0; val[0] = 1; > > col[0].i = i; col[0].j = j; > > col[1].c = 0; val[1] = coeff; > > if (i == 0) //vx(i,j) - c*vx(i+1,j) = 0; > > col[1].i = i+1; > > else //vx(i,j) - c*vx(i-1,j) = 0; > > col[1].i = i-1; > > col[1].j = j; > > ierr = > MatSetValuesStencil(mat1,1,&row,2,col,val,INSERT_VALUES);CHKERRQ(ierr); > > > > //vy(i,j) = 0; > > row.c = 1; > > col[0].c = 1; > > ierr = > MatSetValuesStencil(mat1,1,&row,1,col,val,INSERT_VALUES);CHKERRQ(ierr); > > } > > else { //Interior points: > > row.c = 0;//x-eq > > col[0].c = 0; val[0] = 2*coeff; > > col[0].i = i; col[0].j = j+1; > > col[1].c = 0; val[1] = -val[0] - coeff; > > col[1].i = i; col[1].j = j; > > col[2].c = 1; val[2] = 4*coeff; > > col[2].i = i+1; col[2].j = j; > > col[3].c = 1; val[3] = 4*coeff; > > col[3].i = i; col[3].j = j; > > ierr = > MatSetValuesStencil(mat1,1,&row,4,col,val,INSERT_VALUES);CHKERRQ(ierr); > > > > row.c = 1; //y-eq > 
> col[0].c = 1; val[0] = 2*coeff; > > col[0].i = i; col[0].j = j+1; > > col[1].c = 1; val[1] = -val[0] - coeff; > > col[1].i = i; col[1].j = j; > > col[2].c = 0; val[2] = 4*coeff; > > col[2].i = i+1; col[2].j = j; > > col[3].c = 0; val[3] = 4*coeff; > > col[3].i = i; col[3].j = j; > > ierr = > MatSetValuesStencil(mat1,1,&row,4,col,val,INSERT_VALUES);CHKERRQ(ierr); > > } > > } > > } > > > > MatAssemblyBegin(mat1,MAT_FINAL_ASSEMBLY); > > MatAssemblyEnd(mat1,MAT_FINAL_ASSEMBLY); > > ierr = > MatSetOption(mat1,MAT_NEW_NONZERO_LOCATION_ERR,PETSC_TRUE);CHKERRQ(ierr); > > ierr = PetscObjectSetName((PetscObject)mat1,"mat1");CHKERRQ(ierr); > > > > /* clean up and exit */ > > ierr = DMDestroy(&da);CHKERRQ(ierr); > > ierr = MatDestroy(&mat1); > > ierr = PetscFinalize(); > > return 0; > > } > > > > > > > > > > PetscInt m = 10, n=10; > > ierr = DMDACreate2d(PETSC_COMM_WORLD, DMDA_BOUNDARY_NONE, > DMDA_BOUNDARY_NONE,DMDA_STENCIL_BOX,m,n, > > > PETSC_DECIDE,PETSC_DECIDE,2,1,NULL,NULL,&da);CHKERRQ(ierr); > > > > Mat mat1; > > ierr = DMCreateMatrix(da,MATMPIAIJ,&mat1); CHKERRQ(ierr); > > MatStencil row, col[4]; //let's test 4 non-zeros in one row. > > PetscScalar val[4]; > > PetscScalar coeff = 2.; //just a constant coefficient for testing. > > PetscInt i,j; > > DMDALocalInfo info; > > ierr = DMDAGetLocalInfo(da,&info); CHKERRQ(ierr); > > //Now fill up the matrix: > > for(i = info.ys; i < info.ys+info.ym; ++i){ > > for(j = info.xs; j < info.xs+info.xm; ++j){ > > row.i = i; row.j = j; //one node at a time > > if (j == 0 || j == info.mx-1){ //left and right borders > > //vx(i,j) = 0; > > row.c = 0; > > col[0].c = 0; val[0] = 1; > > col[0].i = i; col[0].j = j; > > ierr = > MatSetValuesStencil(mat1,1,&row,1,col,val,INSERT_VALUES);CHKERRQ(ierr); > > > > //vy: //vy(i,j) - c*vy(i,j+-1) = 0; > > row.c = 1; > > col[0].c = 1; val[1] = coeff; > > col[1].c = 1; > > col[1].i = i; > > if(j == 0) //vy(i,j) - c*vy(i,j+1) = 0; > > col[1].j = j+1; > > else //vy(i,j) - c*vy(i,j-1) = 0; > > col[1].j = j-1; > > > > ierr = > MatSetValuesStencil(mat1,1,&row,2,col,val,INSERT_VALUES);CHKERRQ(ierr); > > } > > else if (i == 0 || i == info.my-1){ //top and bottom borders > > //vx: vx(i,j) - c* vx(i+-1,j) = 0; > > row.c = 0; > > col[0].c = 0; val[0] = 1; > > col[0].i = i; col[0].j = j; > > col[1].c = 0; val[1] = coeff; > > if (i == 0) //vx(i,j) - c*vx(i+1,j) = 0; > > col[1].i = i+1; > > else //vx(i,j) - c*vx(i-1,j) = 0; > > col[1].i = i-1; > > col[1].j = j; > > ierr = > MatSetValuesStencil(mat1,1,&row,2,col,val,INSERT_VALUES);CHKERRQ(ierr); > > > > //vy(i,j) = 0; > > row.c = 1; > > col[0].c = 1; > > ierr = > MatSetValuesStencil(mat1,1,&row,1,col,val,INSERT_VALUES);CHKERRQ(ierr); > > } > > else { //Interior points: > > row.c = 0;//x-eq > > col[0].c = 0; val[0] = 2*coeff; > > col[0].i = i; col[0].j = j+1; > > col[1].c = 0; val[1] = -val[0] - coeff; > > col[1].i = i; col[1].j = j; > > col[2].c = 1; val[2] = 4*coeff; > > col[2].i = i+1; col[2].j = j; > > col[3].c = 1; val[3] = 4*coeff; > > col[3].i = i; col[3].j = j; > > ierr = > MatSetValuesStencil(mat1,1,&row,4,col,val,INSERT_VALUES);CHKERRQ(ierr); > > > > row.c = 1; //y-eq > > col[0].c = 1; val[0] = 2*coeff; > > col[0].i = i; col[0].j = j+1; > > col[1].c = 1; val[1] = -val[0] - coeff; > > col[1].i = i; col[1].j = j; > > col[2].c = 0; val[2] = 4*coeff; > > col[2].i = i+1; col[2].j = j; > > col[3].c = 0; val[3] = 4*coeff; > > col[3].i = i; col[3].j = j; > > ierr = > MatSetValuesStencil(mat1,1,&row,4,col,val,INSERT_VALUES);CHKERRQ(ierr); > > } > > } > > } > > > > 
MatAssemblyBegin(mat1,MAT_FINAL_ASSEMBLY); > > MatAssemblyEnd(mat1,MAT_FINAL_ASSEMBLY); > > > > However I get the following error when run with 2 processors. > > [0]PETSC ERROR: --------------------- Error Message > ------------------------------------ > > [0]PETSC ERROR: Argument out of range! > > [0]PETSC ERROR: Local index 120 too large 120 (max) at 0! > > [0]PETSC ERROR: > ------------------------------------------------------------------------ > > [0]PETSC ERROR: Petsc Release Version 3.4.1, Jun, 10, 2013 > > [0]PETSC ERROR: See docs/changes/index.html for recent updates. > > [0]PETSC ERROR: See docs/faq.html for hints about trouble shooting. > > [0]PETSC ERROR: See docs/index.html for manual pages. > > [0]PETSC ERROR: > ------------------------------------------------------------------------ > > [0]PETSC ERROR: examples/FDdmda_test on a arch-linux2-cxx-debug named > edwards by bkhanal Thu Jul 4 14:20:11 2013 > > [0]PETSC ERROR: Libraries linked from > /home/bkhanal/Documents/softwares/petsc-3.4.1/arch-linux2-cxx-debug/lib > > [0]PETSC ERROR: Configure run at Wed Jun 19 11:04:51 2013 > > [0]PETSC ERROR: Configure options --with-cc=gcc --with-fc=g77 > --with-cxx=g++ --download-f-blas-lapack=1 --download-mpich=1 > -with-clanguage=cxx --download-hypre=1 > > [0]PETSC ERROR: > ------------------------------------------------------------------------ > > [0]PETSC ERROR: ISLocalToGlobalMappingApply() line 444 in > /home/bkhanal/Documents/softwares/petsc-3.4.1/src/vec/is/utils/isltog.c > > [0]PETSC ERROR: MatSetValuesLocal() line 1967 in > /home/bkhanal/Documents/softwares/petsc-3.4.1/src/mat/interface/matrix.c > > [0]PETSC ERROR: MatSetValuesStencil() line 1339 in > /home/bkhanal/Documents/softwares/petsc-3.4.1/src/mat/interface/matrix.c > > [0]PETSC ERROR: main() line 75 in > "unknowndirectory/"/user/bkhanal/home/works/cmake_tuts/petsc_test/examples/FDdmda_test.cxx > > application called MPI_Abort(MPI_COMM_WORLD, 63) - process 0 > > [cli_0]: aborting job: > > application called MPI_Abort(MPI_COMM_WORLD, 63) - process 0 > > > > > =================================================================================== > > = BAD TERMINATION OF ONE OF YOUR APPLICATION PROCESSES > > = EXIT CODE: 63 > > = CLEANING UP REMAINING PROCESSES > > = YOU CAN IGNORE THE BELOW CLEANUP MESSAGES > > > =================================================================================== > > > > I do not understand how the index (i,j) are out of range when set to > corresponding fields in row and col variables. What could be the possible > problem ? And any suggestions on the way to debug this sort of issues ? > > > > Thanks, > > Bishesh > > > > > > > > > > > > > > > > -- > > What most experimenters take for granted before they begin their > experiments is infinitely more interesting than any results to which their > experiments lead. > > -- Norbert Wiener > > > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From bsmith at mcs.anl.gov Fri Jul 5 10:32:03 2013 From: bsmith at mcs.anl.gov (Barry Smith) Date: Fri, 5 Jul 2013 10:32:03 -0500 Subject: [petsc-users] ERROR: Argument out of range!; Local index XX too large XX (max) at 0! with MatSetValuesStencil In-Reply-To: References: Message-ID: <1EAC9F04-A869-4BA2-BC6C-60A57EAE963D@mcs.anl.gov> > [0]PETSC ERROR: main() line 75 in "unknowndirectory/"/user/bkhanal/home/works/cmake_tuts/petsc_test/examples/FDdmda_test.cxx This is the line 75 that calls MatSetValuesStencil() with a bad value. 
   Barry

On Jul 5, 2013, at 4:37 AM, Bishesh Khanal wrote:

> On Thu, Jul 4, 2013 at 10:34 PM, Barry Smith wrote:
> > The j should be tracking the y variable and the i the x variable. See for example src/ksp/ksp/examples/tutorials/ex29.c
> >
> > for (j=ys; j<ys+ym; j++) {
> >   for (i=xs; i<xs+xm; i++) {
> >     row.i = i; row.j = j;
> >
> Thanks Barry, tracking y variable with i, and x with j was the problem! Interchanging them worked.
> > After you fix this run with a 3 by 4 mesh and check the line number printed indicating the problem. Then look at the row and column values for that problem and see in the code how it could get out of bounds.
> I'm afraid I did not understand exactly which line number that gets printed are you talking about ? Is it this one in the error message ?
> [0]PETSC ERROR: main() line 75 in "unknowndirectory/"/user/bkhanal/home/works/cmake_tuts/petsc_test/examples/FDdmda_test.cxx
> where there is a call to the function MatSetValuesStencil ?
> Could you please clarify a bit more ?
>
> Barry
>
> On Jul 4, 2013, at 8:11 AM, Bishesh Khanal wrote:
> > On Thu, Jul 4, 2013 at 2:59 PM, Matthew Knepley wrote:
> > On Thu, Jul 4, 2013 at 7:32 AM, Bishesh Khanal wrote:
> > Hi all,
> > I'm trying to use DMCreateMatrix and MatStencil to fill up a matrix that results from a finite difference discretization of a PDE (objective is to solve the resulting linear system in the form of Ax=b).
> >
> > However there are some errors and I'm not sure about the issue!
> > A short demo:
> > A 2D mXn grid, with two variables at each node (dof=2),
> > so the resulting A would be 2mn X 2mn.
> > Let's say the two variables are vx and vy, and the two associated
> > equations discretized are x-eq and y-eq.
> >
> > Here is the relevant part of the code: (mat1 variable for the A matrix):
> >
> > Just sending the whole code is better. I suspect the logic is wrong for selecting the boundary.
> >
> > Matt
> >
> > Thanks, here is the complete code:
> >
> > #include
> > #include
> >
> > #undef __FUNCT__
> > #define __FUNCT__ "main"
> > int main(int argc,char **argv)
> > {
> > PetscErrorCode ierr;
> > DM da;
> > PetscInt m=10,n=10;
> >
> > ierr = PetscInitialize(&argc,&argv,(char*)0,help);CHKERRQ(ierr);
> >
> > /* Create a DMDA and an associated vector */
> > ierr = DMDACreate2d(PETSC_COMM_WORLD, DMDA_BOUNDARY_NONE, DMDA_BOUNDARY_NONE,DMDA_STENCIL_BOX,m,n,
> > PETSC_DECIDE,PETSC_DECIDE,2,1,NULL,NULL,&da);CHKERRQ(ierr);
> >
> > Mat mat1;
> > ierr = DMCreateMatrix(da,MATMPIAIJ,&mat1); CHKERRQ(ierr);
> > MatStencil row, col[4]; //let's test 4 non-zeros in one row.
> > PetscScalar val[4];
> > PetscScalar coeff = 2.; //just a constant coefficient for testing.
> > PetscInt i,j; > > DMDALocalInfo info; > > > > ierr = DMDAGetLocalInfo(da,&info); CHKERRQ(ierr); > > > > for(i = info.ys; i < info.ys+info.ym; ++i){ > > for(j = info.xs; j < info.xs+info.xm; ++j){ > > row.i = i; row.j = j; //one node at a time > > if (j == 0 || j == info.mx-1){ //left and right borders > > //vx(i,j) = 0; > > row.c = 0; > > col[0].c = 0; val[0] = 1; > > col[0].i = i; col[0].j = j; > > ierr = MatSetValuesStencil(mat1,1,&row,1,col,val,INSERT_VALUES);CHKERRQ(ierr); > > > > //vy: //vy(i,j) - c*vy(i,j+-1) = 0; > > row.c = 1; > > col[0].c = 1; val[1] = coeff; > > col[1].c = 1; > > col[1].i = i; > > if(j == 0) //vy(i,j) - c*vy(i,j+1) = 0; > > col[1].j = j+1; > > else //vy(i,j) - c*vy(i,j-1) = 0; > > col[1].j = j-1; > > > > ierr = MatSetValuesStencil(mat1,1,&row,2,col,val,INSERT_VALUES);CHKERRQ(ierr); > > } > > else if (i == 0 || i == info.my-1){ //top and bottom borders > > //vx: vx(i,j) - c* vx(i+-1,j) = 0; > > row.c = 0; > > col[0].c = 0; val[0] = 1; > > col[0].i = i; col[0].j = j; > > col[1].c = 0; val[1] = coeff; > > if (i == 0) //vx(i,j) - c*vx(i+1,j) = 0; > > col[1].i = i+1; > > else //vx(i,j) - c*vx(i-1,j) = 0; > > col[1].i = i-1; > > col[1].j = j; > > ierr = MatSetValuesStencil(mat1,1,&row,2,col,val,INSERT_VALUES);CHKERRQ(ierr); > > > > //vy(i,j) = 0; > > row.c = 1; > > col[0].c = 1; > > ierr = MatSetValuesStencil(mat1,1,&row,1,col,val,INSERT_VALUES);CHKERRQ(ierr); > > } > > else { //Interior points: > > row.c = 0;//x-eq > > col[0].c = 0; val[0] = 2*coeff; > > col[0].i = i; col[0].j = j+1; > > col[1].c = 0; val[1] = -val[0] - coeff; > > col[1].i = i; col[1].j = j; > > col[2].c = 1; val[2] = 4*coeff; > > col[2].i = i+1; col[2].j = j; > > col[3].c = 1; val[3] = 4*coeff; > > col[3].i = i; col[3].j = j; > > ierr = MatSetValuesStencil(mat1,1,&row,4,col,val,INSERT_VALUES);CHKERRQ(ierr); > > > > row.c = 1; //y-eq > > col[0].c = 1; val[0] = 2*coeff; > > col[0].i = i; col[0].j = j+1; > > col[1].c = 1; val[1] = -val[0] - coeff; > > col[1].i = i; col[1].j = j; > > col[2].c = 0; val[2] = 4*coeff; > > col[2].i = i+1; col[2].j = j; > > col[3].c = 0; val[3] = 4*coeff; > > col[3].i = i; col[3].j = j; > > ierr = MatSetValuesStencil(mat1,1,&row,4,col,val,INSERT_VALUES);CHKERRQ(ierr); > > } > > } > > } > > > > MatAssemblyBegin(mat1,MAT_FINAL_ASSEMBLY); > > MatAssemblyEnd(mat1,MAT_FINAL_ASSEMBLY); > > ierr = MatSetOption(mat1,MAT_NEW_NONZERO_LOCATION_ERR,PETSC_TRUE);CHKERRQ(ierr); > > ierr = PetscObjectSetName((PetscObject)mat1,"mat1");CHKERRQ(ierr); > > > > /* clean up and exit */ > > ierr = DMDestroy(&da);CHKERRQ(ierr); > > ierr = MatDestroy(&mat1); > > ierr = PetscFinalize(); > > return 0; > > } > > > > > > > > > > PetscInt m = 10, n=10; > > ierr = DMDACreate2d(PETSC_COMM_WORLD, DMDA_BOUNDARY_NONE, DMDA_BOUNDARY_NONE,DMDA_STENCIL_BOX,m,n, > > PETSC_DECIDE,PETSC_DECIDE,2,1,NULL,NULL,&da);CHKERRQ(ierr); > > > > Mat mat1; > > ierr = DMCreateMatrix(da,MATMPIAIJ,&mat1); CHKERRQ(ierr); > > MatStencil row, col[4]; //let's test 4 non-zeros in one row. > > PetscScalar val[4]; > > PetscScalar coeff = 2.; //just a constant coefficient for testing. 
> > PetscInt i,j; > > DMDALocalInfo info; > > ierr = DMDAGetLocalInfo(da,&info); CHKERRQ(ierr); > > //Now fill up the matrix: > > for(i = info.ys; i < info.ys+info.ym; ++i){ > > for(j = info.xs; j < info.xs+info.xm; ++j){ > > row.i = i; row.j = j; //one node at a time > > if (j == 0 || j == info.mx-1){ //left and right borders > > //vx(i,j) = 0; > > row.c = 0; > > col[0].c = 0; val[0] = 1; > > col[0].i = i; col[0].j = j; > > ierr = MatSetValuesStencil(mat1,1,&row,1,col,val,INSERT_VALUES);CHKERRQ(ierr); > > > > //vy: //vy(i,j) - c*vy(i,j+-1) = 0; > > row.c = 1; > > col[0].c = 1; val[1] = coeff; > > col[1].c = 1; > > col[1].i = i; > > if(j == 0) //vy(i,j) - c*vy(i,j+1) = 0; > > col[1].j = j+1; > > else //vy(i,j) - c*vy(i,j-1) = 0; > > col[1].j = j-1; > > > > ierr = MatSetValuesStencil(mat1,1,&row,2,col,val,INSERT_VALUES);CHKERRQ(ierr); > > } > > else if (i == 0 || i == info.my-1){ //top and bottom borders > > //vx: vx(i,j) - c* vx(i+-1,j) = 0; > > row.c = 0; > > col[0].c = 0; val[0] = 1; > > col[0].i = i; col[0].j = j; > > col[1].c = 0; val[1] = coeff; > > if (i == 0) //vx(i,j) - c*vx(i+1,j) = 0; > > col[1].i = i+1; > > else //vx(i,j) - c*vx(i-1,j) = 0; > > col[1].i = i-1; > > col[1].j = j; > > ierr = MatSetValuesStencil(mat1,1,&row,2,col,val,INSERT_VALUES);CHKERRQ(ierr); > > > > //vy(i,j) = 0; > > row.c = 1; > > col[0].c = 1; > > ierr = MatSetValuesStencil(mat1,1,&row,1,col,val,INSERT_VALUES);CHKERRQ(ierr); > > } > > else { //Interior points: > > row.c = 0;//x-eq > > col[0].c = 0; val[0] = 2*coeff; > > col[0].i = i; col[0].j = j+1; > > col[1].c = 0; val[1] = -val[0] - coeff; > > col[1].i = i; col[1].j = j; > > col[2].c = 1; val[2] = 4*coeff; > > col[2].i = i+1; col[2].j = j; > > col[3].c = 1; val[3] = 4*coeff; > > col[3].i = i; col[3].j = j; > > ierr = MatSetValuesStencil(mat1,1,&row,4,col,val,INSERT_VALUES);CHKERRQ(ierr); > > > > row.c = 1; //y-eq > > col[0].c = 1; val[0] = 2*coeff; > > col[0].i = i; col[0].j = j+1; > > col[1].c = 1; val[1] = -val[0] - coeff; > > col[1].i = i; col[1].j = j; > > col[2].c = 0; val[2] = 4*coeff; > > col[2].i = i+1; col[2].j = j; > > col[3].c = 0; val[3] = 4*coeff; > > col[3].i = i; col[3].j = j; > > ierr = MatSetValuesStencil(mat1,1,&row,4,col,val,INSERT_VALUES);CHKERRQ(ierr); > > } > > } > > } > > > > MatAssemblyBegin(mat1,MAT_FINAL_ASSEMBLY); > > MatAssemblyEnd(mat1,MAT_FINAL_ASSEMBLY); > > > > However I get the following error when run with 2 processors. > > [0]PETSC ERROR: --------------------- Error Message ------------------------------------ > > [0]PETSC ERROR: Argument out of range! > > [0]PETSC ERROR: Local index 120 too large 120 (max) at 0! > > [0]PETSC ERROR: ------------------------------------------------------------------------ > > [0]PETSC ERROR: Petsc Release Version 3.4.1, Jun, 10, 2013 > > [0]PETSC ERROR: See docs/changes/index.html for recent updates. > > [0]PETSC ERROR: See docs/faq.html for hints about trouble shooting. > > [0]PETSC ERROR: See docs/index.html for manual pages. 
> > [0]PETSC ERROR:
> > ------------------------------------------------------------------------
> > [0]PETSC ERROR: examples/FDdmda_test on a arch-linux2-cxx-debug named edwards by bkhanal Thu Jul 4 14:20:11 2013
> > [0]PETSC ERROR: Libraries linked from /home/bkhanal/Documents/softwares/petsc-3.4.1/arch-linux2-cxx-debug/lib
> > [0]PETSC ERROR: Configure run at Wed Jun 19 11:04:51 2013
> > [0]PETSC ERROR: Configure options --with-cc=gcc --with-fc=g77 --with-cxx=g++ --download-f-blas-lapack=1 --download-mpich=1 -with-clanguage=cxx --download-hypre=1
> > [0]PETSC ERROR: ------------------------------------------------------------------------
> > [0]PETSC ERROR: ISLocalToGlobalMappingApply() line 444 in /home/bkhanal/Documents/softwares/petsc-3.4.1/src/vec/is/utils/isltog.c
> > [0]PETSC ERROR: MatSetValuesLocal() line 1967 in /home/bkhanal/Documents/softwares/petsc-3.4.1/src/mat/interface/matrix.c
> > [0]PETSC ERROR: MatSetValuesStencil() line 1339 in /home/bkhanal/Documents/softwares/petsc-3.4.1/src/mat/interface/matrix.c
> > [0]PETSC ERROR: main() line 75 in "unknowndirectory/"/user/bkhanal/home/works/cmake_tuts/petsc_test/examples/FDdmda_test.cxx
> > application called MPI_Abort(MPI_COMM_WORLD, 63) - process 0
> > [cli_0]: aborting job:
> > application called MPI_Abort(MPI_COMM_WORLD, 63) - process 0
> >
> > ===================================================================================
> > = BAD TERMINATION OF ONE OF YOUR APPLICATION PROCESSES
> > = EXIT CODE: 63
> > = CLEANING UP REMAINING PROCESSES
> > = YOU CAN IGNORE THE BELOW CLEANUP MESSAGES
> > ===================================================================================
> >
> > I do not understand how the index (i,j) are out of range when set to corresponding fields in row and col variables. What could be the possible problem ? And any suggestions on the way to debug this sort of issues ?
> >
> > Thanks,
> > Bishesh
> >
> > --
> > What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead.
> > -- Norbert Wiener
> >

From frtr at fysik.dtu.dk Mon Jul 8 06:59:53 2013
From: frtr at fysik.dtu.dk (Frederik Treue)
Date: Mon, 8 Jul 2013 13:59:53 +0200
Subject: [petsc-users] creating a DM with fewer processors than available?
Message-ID: <1373284793.8619.6.camel@frtr-laptop>

Hi,

I am solving 2D problems using petsc, but for some diagnostics I need 1D objects. This works fine, except if I use many processors for a relatively small grid: say gridsize 512x512, with 256 processors. In the 2D DM, this is not a problem, I get 32x32 blocks on each processor. But in the 1D object petsc allocates 2 points to each processor, and since I have a stencil width of 2, it croaks! Is there any way around this, i.e. use fewer processors for a designated DM?

/Frederik Treue

From jedbrown at mcs.anl.gov Mon Jul 8 07:15:39 2013
From: jedbrown at mcs.anl.gov (Jed Brown)
Date: Mon, 08 Jul 2013 07:15:39 -0500
Subject: [petsc-users] creating a DM with fewer processors than available?
In-Reply-To: <1373284793.8619.6.camel@frtr-laptop>
References: <1373284793.8619.6.camel@frtr-laptop>
Message-ID: <87vc4l9ris.fsf@mcs.anl.gov>

Frederik Treue writes:

> Hi,
>
> I am solving 2D problems using petsc, but for some diagnostics I need 1D
> objects. This works fine, except if I use many processors for a
> relatively small grid: say gridsize 512x512, with 256 processors. In the
> 2D DM, this is not a problem, I get 32x32 blocks on each processor.
> But in the 1D object petsc allocates 2 points to each processor, and since I
> have a stencil width of 2, it croaks! Is there any way around this, i.e.
> use fewer processors for a designated DM?

No, you can create the DMDA on a subcommunicator, but then it can't be
composed into the outer solver. How is it currently being used? Can it
be eliminated from the solve and just used as an auxiliary component to
solve the 2D problem?

Unfortunately, there is currently no way to do a reduced distribution on
the global communicator. If the number of points is less than 1000, you
might just use DMRedundant.
-------------- next part --------------
A non-text attachment was scrubbed...
Name: not available
Type: application/pgp-signature
Size: 835 bytes
Desc: not available
URL:

From frtr at fysik.dtu.dk Mon Jul 8 09:17:23 2013
From: frtr at fysik.dtu.dk (Frederik Treue)
Date: Mon, 8 Jul 2013 16:17:23 +0200
Subject: [petsc-users] creating a DM with fewer processors than available?
In-Reply-To: <87vc4l9ris.fsf@mcs.anl.gov>
References: <1373284793.8619.6.camel@frtr-laptop> <87vc4l9ris.fsf@mcs.anl.gov>
Message-ID: <1373293043.8619.9.camel@frtr-laptop>

On Mon, 2013-07-08 at 07:15 -0500, Jed Brown wrote:
> Frederik Treue writes:
>
> > Hi,
> >
> > I am solving 2D problems using petsc, but for some diagnostics I need 1D
> > objects. This works fine, except if I use many processors for a
> > relatively small grid: say gridsize 512x512, with 256 processors. In the
> > 2D DM, this is not a problem, I get 32x32 blocks on each processor. But
> > in the 1D object petsc allocates 2 points to each processor, and since I
> > have a stencil width of 2, it croaks! Is there any way around this, i.e.
> > use fewer processors for a designated DM?
>
> No, you can create the DMDA on a subcommunicator, but then it can't be
> composed into the outer solver. How is it currently being used? Can it
> be eliminated from the solve and just used as an auxiliary component to
> solve the 2D problem?
>
> Unfortunately, there is currently no way to do a reduced distribution on
> the global communicator. If the number of points is less than 1000, you
> might just use DMRedundant.

Nevermind, I found a workaround: I realized that I never actually used
the stencil (it was a relic from a previous version of the code), so I
just set the stencil size to 0, and no more problems. However, you might
consider adding this functionality, it would be quite usable, in
particular for diagnostics.

/Frederik Treue
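[A sketch of the workaround Frederik describes: create the diagnostic 1D DMDA with stencil width 0, so that even two points per process is legal because no ghost exchange is needed. The call follows the petsc-3.4 DMDACreate1d() signature; the global size (512) and dof (1) are illustrative, not taken from his code:

    DM             da1;
    PetscErrorCode ierr;
    /* stencil width 0: no ghost region, so very small per-process slices are fine */
    ierr = DMDACreate1d(PETSC_COMM_WORLD, DMDA_BOUNDARY_NONE,
                        512,   /* global number of grid points (illustrative) */
                        1,     /* degrees of freedom per node (illustrative) */
                        0,     /* stencil width */
                        NULL,  /* let PETSc pick the per-process sizes */
                        &da1);CHKERRQ(ierr);
]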
From frtr at fysik.dtu.dk  Mon Jul  8 09:17:23 2013
From: frtr at fysik.dtu.dk (Frederik Treue)
Date: Mon, 8 Jul 2013 16:17:23 +0200
Subject: [petsc-users] creating a DM with fewer processors than available?
In-Reply-To: <87vc4l9ris.fsf@mcs.anl.gov>
References: <1373284793.8619.6.camel@frtr-laptop> <87vc4l9ris.fsf@mcs.anl.gov>
Message-ID: <1373293043.8619.9.camel@frtr-laptop>

On Mon, 2013-07-08 at 07:15 -0500, Jed Brown wrote:
> Unfortunately, there is currently no way to do a reduced distribution on
> the global communicator. If the number of points is less than 1000, you
> might just use DMRedundant.

Nevermind, I found a workaround: I realized that I never actually used
the stencil (it was a relic from a previous version of the code), so I
just set the stencil size to 0, and no more problems. However, you might
consider adding this functionality; it would be quite useful, in
particular for diagnostics.

/Frederik Treue

From s.prabhakaran at grs-sim.de  Mon Jul  8 11:43:50 2013
From: s.prabhakaran at grs-sim.de (Suraj Prabhakaran)
Date: Mon, 08 Jul 2013 18:43:50 +0200
Subject: [petsc-users] Finalizing and Initializing during the program run
Message-ID: 

Dear all,

I am having a strange problem after re-initializing petsc in my program.
Basically, I finalize petsc (PetscFinalize()) and then initialize it again
(PetscInitialize() and PetscInitializeFortran()) after some of the
iterations of my problem. Soon after the re-initialization, I get an error
in one of the AO functions (like AOCreateBasic or AOCreateMapping).
Here is a sample output of the error

[0]PETSC ERROR: [0] PetscStrcmp line 414 /root/petsc/petsc-3.3-p7/src/sys/utils/str.c
[0]PETSC ERROR: [0] PetscFListFind line 353 /root/petsc/petsc-3.3-p7/src/sys/dll/reg.c
[0]PETSC ERROR: [0] AOSetType line 35 /root/petsc/petsc-3.3-p7/src/dm/ao/interface/aoreg.c
[0]PETSC ERROR: [0] AOCreateBasicIS line 381 /root/petsc/petsc-3.3-p7/src/dm/ao/impls/basic/aobasic.c
[0]PETSC ERROR: [0] AOCreateBasic line 336 /root/petsc/petsc-3.3-p7/src/dm/ao/impls/basic/aobasic.c

Can someone point out what could probably be going wrong here? I do the
finalize and the initialize only after doing the corresponding AODestroys.
However, variables malloced with PetscMalloc are reused. Could that be a
problem? Any hints on this would help!

Best regards,
Suraj

From jedbrown at mcs.anl.gov  Mon Jul  8 11:50:02 2013
From: jedbrown at mcs.anl.gov (Jed Brown)
Date: Mon, 08 Jul 2013 11:50:02 -0500
Subject: [petsc-users] Finalizing and Initializing during the program run
In-Reply-To: 
References: 
Message-ID: <87pput8091.fsf@mcs.anl.gov>

Suraj Prabhakaran writes:

> I am having a strange problem after re-initializing petsc in my
> program. Basically, I finalize petsc (PetscFinalize()) and then
> initialize it again (PetscInitialize() and PetscInitializeFortran())
> after some of the iterations of my problem.

Run with -malloc_test to confirm that all objects were destroyed before
PetscFinalize. Objects cannot outlive PETSc.

> Soon after the re-initialization, I get an error in one of the AO
> functions (like AOCreateBasic or AOCreateMapping). Here is a sample
> output of the error

Always send complete error messages.

> Can someone point out what could probably be going wrong here? I do
> the finalize and the initialize only after doing the corresponding
> AODestroys. However, variables malloced with PetscMalloc are
> reused. Could that be a problem?

Yes, that is not allowed. Why are you calling PetscFinalize in the
first place? It should be better to just call once when you're really
done with PETSc, including freeing everything allocated using PETSc.
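A minimal sketch of the lifetime rule Jed is describing (error checking
omitted; the serial AO stands in for any PETSc object, and whether an
application really needs a second init/finalize cycle at all is a separate
question):

  #include <petscao.h>

  int main(int argc, char **argv)
  {
    AO       ao;
    PetscInt app[2] = {1, 0}, petsc[2] = {0, 1};

    PetscInitialize(&argc, &argv, NULL, NULL);
    AOCreateBasic(PETSC_COMM_SELF, 2, app, petsc, &ao);
    /* ... use ao ... */
    AODestroy(&ao);      /* every object destroyed BEFORE PetscFinalize */
    PetscFinalize();

    /* A later cycle must not touch anything created or PetscMalloc'd
       in the previous cycle. */
    PetscInitialize(&argc, &argv, NULL, NULL);
    PetscFinalize();
    return 0;
  }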
From shchen at www.phys.lsu.edu  Mon Jul  8 12:38:24 2013
From: shchen at www.phys.lsu.edu (Shaohao Chen)
Date: Mon, 8 Jul 2013 12:38:24 -0500
Subject: [petsc-users] petsc viewer
Message-ID: <20130708171639.M21571@physics.lsu.edu>

Dear all,

I can use VecView(vec, viewer) and correctly output a distributed vector. But when I use it in an
iteration, it doesn't work. My codes are like:

PetscViewerCreate(PETSC_COMM_WORLD, &viewer1);
......  // modify values of vec
VecView(vec, viewer1) ;  // correct

PetscViewerCreate(PETSC_COMM_WORLD, &viewer2);
for(i=0;i<n;i++){
......  // modify values of vec
VecView(vec, viewer2) ;  // incorrect
}

In viewer1, it outputs the whole correct vector. In viewer2, it just outputs part of the vector and
then stops at the first step of the iteration. And it becomes much slower when I add the VecView
line in the iteration. The same happens to MatView. What is the problem here?

Thank you for your attention!

--
Shaohao Chen
Department of Physics & Astronomy,
Louisiana State University,
Baton Rouge, LA

From bsmith at mcs.anl.gov  Mon Jul  8 2013
From: bsmith at mcs.anl.gov (Barry Smith)
Subject: [petsc-users] petsc viewer
In-Reply-To: <20130708171639.M21571@physics.lsu.edu>
References: <20130708171639.M21571@physics.lsu.edu>
Message-ID: 

On Jul 8, 2013, at 12:38 PM, "Shaohao Chen" wrote:

> PetscViewerCreate(PETSC_COMM_WORLD, &viewer2);

   You need to set the type of viewer here.

> for(i=0;i<n;i++){
> ......  // modify values of vec
> VecView(vec, viewer2) ;  // incorrect
> }
>
> In viewer1, it outputs the whole correct vector. In viewer2, it just outputs part of the vector and
> then stops at the first step of the iteration.

   What do you mean "stop"? Does it print an error message?

> And it becomes much slower when I add the VecView line
> in the iteration. The same happens to MatView. What is the problem here?

   Calls to VecView and MatView (especially) will slow down a code since
they involve IO. In particular, if the file you are saving to is on a slow
file server it could slow things down a great deal. Best to save to a
local disk.

   Barry
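To make Barry's first point concrete, here is a sketch of a fully set-up
file viewer; the ASCII type and the filename are assumptions for
illustration, and vec, n, and i are as in Shaohao's fragment:

  PetscViewer viewer2;

  PetscViewerCreate(PETSC_COMM_WORLD, &viewer2);
  PetscViewerSetType(viewer2, PETSCVIEWERASCII);        /* the missing step */
  PetscViewerFileSetMode(viewer2, FILE_MODE_WRITE);
  PetscViewerFileSetName(viewer2, "vec_iterates.txt");  /* assumed filename */

  for (i = 0; i < n; i++) {
    /* ... modify values of vec ... */
    VecView(vec, viewer2);
  }
  PetscViewerDestroy(&viewer2);

For an ASCII file, PetscViewerASCIIOpen() bundles the create, set-type, and
set-filename calls into one.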
From mpovolot at purdue.edu  Mon Jul  8 15:36:48 2013
From: mpovolot at purdue.edu (Michael Povolotskyi)
Date: Mon, 08 Jul 2013 16:36:48 -0400
Subject: [petsc-users] PetscMemoryGetMaximumUsage
Message-ID: <51DB22E0.6010203@purdue.edu>

Dear Petsc developers and users,
is there any example that shows how to use the
PetscMemoryGetMaximumUsage function?
thank you,
Michael.

--
Michael Povolotskyi, PhD
Research Assistant Professor
Network for Computational Nanotechnology
207 S Martin Jischke Drive
Purdue University, DLR, room 441-10
West Lafayette, Indiana 47907

phone: +1-765-494-9396
fax:   +1-765-496-6026

From jedbrown at mcs.anl.gov  Mon Jul  8 15:47:07 2013
From: jedbrown at mcs.anl.gov (Jed Brown)
Date: Mon, 08 Jul 2013 15:47:07 -0500
Subject: [petsc-users] PetscMemoryGetMaximumUsage
In-Reply-To: <51DB22E0.6010203@purdue.edu>
References: <51DB22E0.6010203@purdue.edu>
Message-ID: <874nc493uc.fsf@mcs.anl.gov>

Michael Povolotskyi writes:

> is there any example that shows how to use the
> PetscMemoryGetMaximumUsage function?

It works like this:

  PetscMemorySetGetMaximumUsage();  // somewhere early in your program, or run with -malloc_log

  // create and destroy some objects

  PetscLogDouble mem;
  PetscMemoryGetMaximumUsage(&mem);

http://www.mcs.anl.gov/petsc/petsc-current/docs/manualpages/Sys/PetscMemoryGetMaximumUsage.html

From mpovolot at purdue.edu  Mon Jul  8 15:49:42 2013
From: mpovolot at purdue.edu (Michael Povolotskyi)
Date: Mon, 08 Jul 2013 16:49:42 -0400
Subject: [petsc-users] PetscMemoryGetMaximumUsage
In-Reply-To: <874nc493uc.fsf@mcs.anl.gov>
References: <51DB22E0.6010203@purdue.edu> <874nc493uc.fsf@mcs.anl.gov>
Message-ID: <51DB25E6.3080608@purdue.edu>

On 07/08/2013 04:47 PM, Jed Brown wrote:
> It works like this:
>
>   PetscMemorySetGetMaximumUsage();  // somewhere early in your program, or run with -malloc_log

Thank you.
Just to clarify:

PetscMemorySetGetMaximumUsage();

is called only once. Correct?
Michael.

From jedbrown at mcs.anl.gov  Mon Jul  8 16:02:51 2013
From: jedbrown at mcs.anl.gov (Jed Brown)
Date: Mon, 08 Jul 2013 16:02:51 -0500
Subject: [petsc-users] PetscMemoryGetMaximumUsage
In-Reply-To: <51DB25E6.3080608@purdue.edu>
References: <51DB22E0.6010203@purdue.edu> <874nc493uc.fsf@mcs.anl.gov> <51DB25E6.3080608@purdue.edu>
Message-ID: <87sizo7ojo.fsf@mcs.anl.gov>

Michael Povolotskyi writes:

> Just to clarify:
>
> PetscMemorySetGetMaximumUsage();
>
> is called only once. Correct?

Yes, but it is idempotent.

PetscErrorCode PetscMemorySetGetMaximumUsage(void)
{
  PetscFunctionBegin;
  PetscMemoryCollectMaximumUsage = PETSC_TRUE;
  PetscFunctionReturn(0);
}

From thomas.de-soza at edf.fr  Tue Jul  9 10:24:02 2013
From: thomas.de-soza at edf.fr (Thomas DE-SOZA)
Date: Tue, 9 Jul 2013 17:24:02 +0200
Subject: [petsc-users] Non conforming object sizes
Message-ID: 

Dear PETSc users,

We're having difficulties troubleshooting some developments in our code
that relies on PETSc. We're assembling a matrix of size 90 with the
blocksize set to 6; the code is run on two processors, so that processor #0
has 48 entries and #1 has 42 entries.
During MatAssemblyEnd, an error about non-conforming object sizes is
thrown ("Local scatter sizes don't match"). This is raised by
VecScatterCreate inside MatAssemblyEnd.

We're likely doing something wrong, but what would be the way to debug this
since the call to VecScatterCreate is not controlled directly by us?
Attached is the output of '-info' during the MatAssemblyEnd.

Note: we're still using PETSc 3.2p7 and have not moved to 3.4.

Thanks,

Thomas
From mpovolot at purdue.edu  Tue Jul  9 12:30:47 2013
From: mpovolot at purdue.edu (Michael Povolotskyi)
Date: Tue, 09 Jul 2013 13:30:47 -0400
Subject: [petsc-users] configuration question
Message-ID: <51DC48C7.70103@purdue.edu>

Dear Petsc developers and users,
I have a problem with petsc configuration:

I want to execute the following

./config/configure.py --with-x=0 --with-hdf5 --download-hdf5=1
--with-scalar-type=complex --with-single-library=0 --with-pic=1
--with-shared-libraries=0 --with-clanguage=C++ --with-fortran
--with-debugging=1 --with-cc="/opt/intel/impi/4.1.0/intel64/bin/mpicc"
--with-fc="/opt/intel/impi/4.1.0/intel64/bin/mpif90"
--with-cxx="/opt/intel/impi/4.1.0/intel64/bin/mpicxx " COPTFLAGS="-O3"
CXXOPTFLAGS="-O3" FOPTFLAGS="-O3"
--LDFLAGS=-Wl,-rpath,/opt/intel/mkl//lib/intel64
-L/opt/intel/mkl//lib/intel64 -Wl,--start-group -lmkl_intel_lp64
-lmkl_sequential -lmkl_core -Wl,--end-group --download-metis=1
--download-parmetis=1 --download-mumps=1 --download-scalapack=1
--with-blas-lapack-dir=/opt/intel/mkl/ --download-blacs=1;

I get the following error:
===============================================================================
             Configuring PETSc to compile on your system
===============================================================================
*******************************************************************************
                ERROR in COMMAND LINE ARGUMENT to ./configure
-------------------------------------------------------------------------------
The option -lmkl_intel_lp64 should probably be -lmkl-intel-lp64
*******************************************************************************

What is the meaning of that message?
I checked in my file system and the file names contain "_" instead of "-"

ls /opt/intel/mkl//lib/intel64/libmkl_intel_lp64.*
/opt/intel/mkl//lib/intel64/libmkl_intel_lp64.a
/opt/intel/mkl//lib/intel64/libmkl_intel_lp64.so

Thank you,
Michael.
--
Michael Povolotskyi, PhD
Research Assistant Professor
Network for Computational Nanotechnology
207 S Martin Jischke Drive
Purdue University, DLR, room 441-10
West Lafayette, Indiana 47907

phone: +1-765-494-9396
fax:   +1-765-496-6026

From knepley at gmail.com  Tue Jul  9 12:35:53 2013
From: knepley at gmail.com (Matthew Knepley)
Date: Tue, 9 Jul 2013 12:35:53 -0500
Subject: [petsc-users] configuration question
In-Reply-To: <51DC48C7.70103@purdue.edu>
References: <51DC48C7.70103@purdue.edu>
Message-ID: 

On Tue, Jul 9, 2013 at 12:30 PM, Michael Povolotskyi wrote:

> I want to execute the following
>
> ./config/configure.py --with-x=0 --with-hdf5 --download-hdf5=1
> [...]
> --LDFLAGS=-Wl,-rpath,/opt/intel/mkl//lib/intel64
> -L/opt/intel/mkl//lib/intel64 -Wl,--start-group -lmkl_intel_lp64
> -lmkl_sequential -lmkl_core -Wl,--end-group --download-metis=1
> [...]

You need "" around your LDFLAGS argument

   Matt

--
What most experimenters take for granted before they begin their
experiments is infinitely more interesting than any results to which
their experiments lead.
-- Norbert Wiener
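Applied to the command above, Matt's fix is to quote the whole linker
string so the shell hands it to configure as a single argument (a sketch of
the corrected option, not a tested command):

  --LDFLAGS="-Wl,-rpath,/opt/intel/mkl/lib/intel64 -L/opt/intel/mkl/lib/intel64 -Wl,--start-group -lmkl_intel_lp64 -lmkl_sequential -lmkl_core -Wl,--end-group"

Without the quotes, configure parses -lmkl_intel_lp64 as a stand-alone
command-line option, which is what produced the odd "should probably be
-lmkl-intel-lp64" suggestion.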
From mpovolot at purdue.edu  Tue Jul  9 12:53:58 2013
From: mpovolot at purdue.edu (Michael Povolotskyi)
Date: Tue, 09 Jul 2013 13:53:58 -0400
Subject: [petsc-users] configuration question
In-Reply-To: 
References: <51DC48C7.70103@purdue.edu>
Message-ID: <51DC4E36.6020101@purdue.edu>

Thank you, this worked.
I have another question:
what does this option mean: --with-fortran-kernels?
Michael.

On 07/09/2013 01:35 PM, Matthew Knepley wrote:
> You need "" around your LDFLAGS argument
>
>    Matt

From jedbrown at mcs.anl.gov  Tue Jul  9 13:09:54 2013
From: jedbrown at mcs.anl.gov (Jed Brown)
Date: Tue, 09 Jul 2013 13:09:54 -0500
Subject: [petsc-users] configuration question
In-Reply-To: <51DC4E36.6020101@purdue.edu>
References: <51DC48C7.70103@purdue.edu> <51DC4E36.6020101@purdue.edu>
Message-ID: <87k3kz4nbh.fsf@mcs.anl.gov>

Michael Povolotskyi writes:

> I have another question:
> what does this option mean: --with-fortran-kernels?

  --with-fortran-kernels=<bool>  Use Fortran for linear algebra kernels   current: 0

It means to compile basic linear algebra kernels using the Fortran
compiler instead of using the C compiler.
From mpovolot at purdue.edu  Tue Jul  9 15:11:04 2013
From: mpovolot at purdue.edu (Michael Povolotskyi)
Date: Tue, 09 Jul 2013 16:11:04 -0400
Subject: [petsc-users] compile petsc with intel compiler
Message-ID: <51DC6E58.2060307@purdue.edu>

Dear Petsc users and developers,
I'm trying to build petsc with Intel compiler.
The configuration process runs okay (I attach the log here),
but I get an error when I build it:

-- Configuring done
-- Generating done
-- Build files have been written to: /home/mpovolot/Code_intel/libs/petsc/build-real/linux
Scanning dependencies of target petscsys
[  0%] [  0%] [  0%] Building CXX object CMakeFiles/petscsys.dir/src/sys/info/verboseinfo.c.o
Building Fortran object CMakeFiles/petscsys.dir/src/sys/f90-mod/petscsysmod.F.o
Building CXX object CMakeFiles/petscsys.dir/src/sys/totalview/tv_data_display.c.o
Building CXX object CMakeFiles/petscsys.dir/src/sys/python/pythonsys.c.o
Building CXX object CMakeFiles/petscsys.dir/src/sys/logging/plog.c.o
/home/mpovolot/Code_intel/libs/petsc/build-real/src/sys/f90-mod/petscsysmod.F:6.11:

      use mpi
           1
Fatal Error: File 'mpi.mod' opened at (1) is not a GFORTRAN module file
make[6]: *** [CMakeFiles/petscsys.dir/src/sys/f90-mod/petscsysmod.F.o] Error 1
make[6]: *** Waiting for unfinished jobs....
make[5]: *** [CMakeFiles/petscsys.dir/all] Error 2
make[4]: *** [all] Error 2
make[3]: *** [ccmake] Error 2
make[2]: *** [cmake] Error 2
**************************ERROR************************************
  Error during compile, check linux/conf/make.log
  Send it and linux/conf/configure.log to petsc-maint at mcs.anl.gov
********************************************************************

I attach here the make.log
What is strange to me is that it has something to do with Gfortran, while I
want to build everything with Intel.
Thank you for help,
Michael.

--
Michael Povolotskyi, PhD
Research Assistant Professor
Network for Computational Nanotechnology
207 S Martin Jischke Drive
Purdue University, DLR, room 441-10
West Lafayette, Indiana 47907

phone: +1-765-494-9396
fax:   +1-765-496-6026

From knepley at gmail.com  Tue Jul  9 15:18:00 2013
From: knepley at gmail.com (Matthew Knepley)
Date: Tue, 9 Jul 2013 15:18:00 -0500
Subject: [petsc-users] compile petsc with intel compiler
In-Reply-To: <51DC6E58.2060307@purdue.edu>
References: <51DC6E58.2060307@purdue.edu>
Message-ID: 

On Tue, Jul 9, 2013 at 3:11 PM, Michael Povolotskyi wrote:

> Dear Petsc users and developers,
> I'm trying to build petsc with Intel compiler.

1) First, ask yourself whether you really want to build with the Intel
compiler. Then ask again.

2) Do you need Fortran? If not, turn it off --with-fc=0.
3) If you want Fortran and Intel (and have a hatred of free time), try the
legacy build

  make all-legacy

4) If this is still broken, send the new make.log

   Thanks,

      Matt

--
What most experimenters take for granted before they begin their
experiments is infinitely more interesting than any results to which
their experiments lead.
-- Norbert Wiener

From balay at mcs.anl.gov  Tue Jul  9 15:25:51 2013
From: balay at mcs.anl.gov (Satish Balay)
Date: Tue, 9 Jul 2013 15:25:51 -0500 (CDT)
Subject: [petsc-users] compile petsc with intel compiler
In-Reply-To: <51DC6E58.2060307@purdue.edu>
References: <51DC6E58.2060307@purdue.edu>
Message-ID: 

On Tue, 9 Jul 2013, Michael Povolotskyi wrote:

> I'm trying to build petsc with Intel compiler.
> The configuration process runs okay (I attach the log here),
> but I get an error when I build it:
> [...]
> Fatal Error: File 'mpi.mod' opened at (1) is not a GFORTRAN module file
> make[6]: *** [CMakeFiles/petscsys.dir/src/sys/f90-mod/petscsysmod.F.o] Error 1

This is strange. The configure check passed - but the actual build failed :(

You can work around it by removing the 'HAVE_MPI_F90MODULE' stuff from
PETSC_DIR/linux/include/petscconf.h

Perhaps the test code needs more mpi calls to catch this error?

      program main
      use mpi
      integer ierr,rank
      call mpi_init(ierr)
      call mpi_comm_rank(MPI_COMM_WORLD,rank,ierr)
      end

Satish
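For reference, the petscconf.h entry Satish suggests deleting typically
looks like the snippet below; this is a hedged sketch of the generated
file, so check the exact spelling in your own build directory. Removing it
should make PETSc's Fortran module sources fall back to including mpif.h
instead of 'use mpi':

  #ifndef PETSC_HAVE_MPI_F90MODULE
  #define PETSC_HAVE_MPI_F90MODULE 1
  #endif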
From balay at mcs.anl.gov  Tue Jul  9 15:31:00 2013
From: balay at mcs.anl.gov (Satish Balay)
Date: Tue, 9 Jul 2013 15:31:00 -0500 (CDT)
Subject: [petsc-users] compile petsc with intel compiler
In-Reply-To: 
References: <51DC6E58.2060307@purdue.edu>
Message-ID: 

For some reason this issue comes up with mpi.mod provided by intel
mpi.

We have a configure test for it - but looks like it's not sufficient to
catch this issue.

satish

From mpovolot at purdue.edu  Tue Jul  9 15:32:48 2013
From: mpovolot at purdue.edu (Michael Povolotskyi)
Date: Tue, 09 Jul 2013 16:32:48 -0400
Subject: [petsc-users] compile petsc with intel compiler
In-Reply-To: 
References: <51DC6E58.2060307@purdue.edu>
Message-ID: <51DC7370.9070603@purdue.edu>

If I do not need to use petsc in fortran programs, can I build petsc
without fortran and thus avoid this situation?
Michael.

On 07/09/2013 04:31 PM, Satish Balay wrote:
> For some reason this issue comes up with mpi.mod provided by intel
> mpi.
From knepley at gmail.com  Tue Jul  9 15:34:01 2013
From: knepley at gmail.com (Matthew Knepley)
Date: Tue, 9 Jul 2013 15:34:01 -0500
Subject: [petsc-users] compile petsc with intel compiler
In-Reply-To: <51DC7370.9070603@purdue.edu>
References: <51DC6E58.2060307@purdue.edu> <51DC7370.9070603@purdue.edu>
Message-ID: 

On Tue, Jul 9, 2013 at 3:32 PM, Michael Povolotskyi wrote:

> If I do not need to use petsc in fortran programs, can I build petsc
> without fortran and thus avoid this situation?
>
>> 2) Do you need Fortran? If not, turn it off --with-fc=0.
>>    ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

   Matt

--
What most experimenters take for granted before they begin their
experiments is infinitely more interesting than any results to which
their experiments lead.
-- Norbert Wiener

From balay at mcs.anl.gov  Tue Jul  9 15:37:18 2013
From: balay at mcs.anl.gov (Satish Balay)
Date: Tue, 9 Jul 2013 15:37:18 -0500 (CDT)
Subject: [petsc-users] compile petsc with intel compiler
In-Reply-To: <51DC6E58.2060307@purdue.edu>
References: <51DC6E58.2060307@purdue.edu>
Message-ID: 

On Tue, 9 Jul 2013, Michael Povolotskyi wrote:

> I'm trying to build petsc with Intel compiler.
> What is strange to me is that it has something to do with Gfortran, while I
> want to build everything with Intel.

Looks like you are not using intel compilers - just intel mpi

Satish.
>>>>>>>>
Executing: /opt/intel/impi/4.1.0/intel64/bin/mpicc -show
sh: gcc -ldl -I/opt/intel/impi/4.1.0.024/intel64/include -L/opt/intel/impi/4.1.0.024/intel64/lib -Xlinker --enable-new-dtags -Xlinker -rpath -Xlinker /opt/intel/impi/4.1.0.024/intel64/lib -Xlinker -rpath -Xlinker /opt/intel/mpi-rt/4.1 -lmpi -lmpigf -lmpigi -lrt -lpthread

Executing: /opt/intel/impi/4.1.0/intel64/bin/mpif90 -show
sh: gfortran -ldl -I/opt/intel/impi/4.1.0.024/intel64/include/gfortran/4.7.0 -I/opt/intel/impi/4.1.0.024/intel64/include -L/opt/intel/impi/4.1.0.024/intel64/lib -Xlinker --enable-new-dtags -Xlinker -rpath -Xlinker /opt/intel/impi/4.1.0.024/intel64/lib -Xlinker -rpath -Xlinker /opt/intel/mpi-rt/4.1 -lmpi -lmpigf -lmpigi -lrt -lpthread
<<<<<<<<

From mpovolot at purdue.edu  Tue Jul  9 15:38:38 2013
From: mpovolot at purdue.edu (Michael Povolotskyi)
Date: Tue, 09 Jul 2013 16:38:38 -0400
Subject: [petsc-users] compile petsc with intel compiler
In-Reply-To: 
References: <51DC6E58.2060307@purdue.edu>
Message-ID: <51DC74CE.20708@purdue.edu>

On 07/09/2013 04:37 PM, Satish Balay wrote:
> Looks like you are not using intel compilers - just intel mpi

Yes, you are right.
I used mpicc, instead of mpiicc.
My colleague Veronica just pointed this out.
Michael.

From knepley at gmail.com  Tue Jul  9 15:40:43 2013
From: knepley at gmail.com (Matthew Knepley)
Date: Tue, 9 Jul 2013 15:40:43 -0500
Subject: [petsc-users] Non conforming object sizes
In-Reply-To: 
References: 
Message-ID: 

On Tue, Jul 9, 2013 at 10:24 AM, Thomas DE-SOZA wrote:

> During MatAssemblyEnd, an error about non-conforming object sizes is
> thrown ("Local scatter sizes don't match"). This is raised by
> VecScatterCreate inside MatAssemblyEnd.
>
> We're likely doing something wrong, but what would be the way to debug this
> since the call to VecScatterCreate is not controlled directly by us?

I think you are overwriting memory. This condition should be impossible
with that code path, so some memory is likely getting garbaged. Please run
under valgrind.

> Note: we're still using PETSc 3.2p7 and have not moved to 3.4.

Change that.

   Matt

--
What most experimenters take for granted before they begin their
experiments is infinitely more interesting than any results to which
their experiments lead.
-- Norbert Wiener
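For reference, the usual way to run a two-process PETSc job under valgrind
(per the PETSc FAQ; the executable name is a placeholder):

  mpiexec -n 2 valgrind --tool=memcheck -q --num-callers=20 --log-file=valgrind.log.%p ./yourprog -malloc off

The -malloc off option turns off PETSc's own malloc wrapper so that
valgrind sees the raw allocations.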
From mpovolot at purdue.edu  Tue Jul  9 16:17:24 2013
From: mpovolot at purdue.edu (Michael Povolotskyi)
Date: Tue, 09 Jul 2013 17:17:24 -0400
Subject: [petsc-users] compile petsc with intel compiler
In-Reply-To: 
References: <51DC6E58.2060307@purdue.edu>
Message-ID: <51DC7DE4.6070504@purdue.edu>

On 07/09/2013 04:37 PM, Satish Balay wrote:
> Looks like you are not using intel compilers - just intel mpi

Hello Satish,
I tried to specify exactly that I want to use Intel compilers:

./config/configure.py --with-x=0 --with-hdf5 --download-hdf5=1
--with-scalar-type=real --with-single-library=0 --with-pic=1
--with-shared-libraries=0 --with-mpi-dir=/opt/intel/impi/4.1.0/ \
--with-clanguage=C++ --with-fortran --with-debugging=1
--with-cc="/opt/intel/impi/4.1.0/intel64/bin/mpiicc -cc=/opt/intel/bin/icc"
--with-fc="/opt/intel/impi/4.1.0/intel64/bin/mpiifort -fc=/opt/intel/bin/ifort"
--with-cxx="/opt/intel/impi/4.1.0/intel64/bin/mpiicpc -cxx=/opt/intel/bin/icpc "
COPTFLAGS="-O3" CXXOPTFLAGS="-O3" FOPTFLAGS="-O3" --download-metis=1
--download-parmetis=1 --download-mumps=1
--download-scalapack=/home/mpovolot/Code_intel/libs/petsc/scalapack-2.0.2.tgz
--with-fortran-kernels=1 --with-blas-lapack-dir=/opt/intel/mkl/
--download-blacs=1;

But I get the following error message:

****************************************************************************
  UNABLE to CONFIGURE with GIVEN OPTIONS (see configure.log for details):
-------------------------------------------------------------------------------
--with-cc=/opt/intel/impi/4.1.0/intel64/bin/mpiicc -cc=/opt/intel/bin/icc is
specified with --with-mpi-dir=/opt/intel/impi/4.1.0/. However
/opt/intel/impi/4.1.0/bin/mpicc exists and should be the prefered compiler!
Suggest not specifying --with-cc option so that configure can use
/opt/intel/impi/4.1.0/bin/mpicc instead.
*******************************************************************************
  File "./config/configure.py", line 293, in petsc_configure
    framework.configure(out = sys.stdout)
  File "/home/mpovolot/Code_intel/libs/petsc/build-real/config/BuildSystem/config/framework.py", line 933, in configure
    child.configure()
  File "/home/mpovolot/Code_intel/libs/petsc/build-real/config/BuildSystem/config/setCompilers.py", line 1524, in configure
    self.executeTest(self.checkMPICompilerOverride)
  File "/home/mpovolot/Code_intel/libs/petsc/build-real/config/BuildSystem/config/base.py", line 115, in executeTest
    ret = apply(test, args,kargs)
  File "/home/mpovolot/Code_intel/libs/petsc/build-real/config/BuildSystem/config/setCompilers.py", line 1492, in checkMPICompilerOverride
    raise RuntimeError(msg)

Thank you,
Michael.

From balay at mcs.anl.gov  Tue Jul  9 16:19:31 2013
From: balay at mcs.anl.gov (Satish Balay)
Date: Tue, 9 Jul 2013 16:19:31 -0500 (CDT)
Subject: [petsc-users] compile petsc with intel compiler
In-Reply-To: <51DC7DE4.6070504@purdue.edu>
References: <51DC6E58.2060307@purdue.edu> <51DC7DE4.6070504@purdue.edu>
Message-ID: 

since you want to specify --with-cc etc options to mpi wrappers - remove
the option --with-mpi-dir=/opt/intel/impi/4.1.0/

Satish
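Concretely, Satish's suggestion amounts to dropping --with-mpi-dir and
keeping the explicit wrapper options (a sketch, not a tested command line;
all other options stay as before):

  ./config/configure.py \
    --with-cc="/opt/intel/impi/4.1.0/intel64/bin/mpiicc -cc=/opt/intel/bin/icc" \
    --with-cxx="/opt/intel/impi/4.1.0/intel64/bin/mpiicpc -cxx=/opt/intel/bin/icpc" \
    --with-fc="/opt/intel/impi/4.1.0/intel64/bin/mpiifort -fc=/opt/intel/bin/ifort" \
    ...   # remaining options unchanged, with no --with-mpi-dir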
From garnet.vaz at gmail.com  Tue Jul  9 17:09:45 2013
From: garnet.vaz at gmail.com (Garnet Vaz)
Date: Tue, 9 Jul 2013 15:09:45 -0700
Subject: [petsc-users] Error message Exit Code 9
Message-ID: 

Dear all,

My PETSc code crashes with the output

"
Number of lines in file is 5349000      #<----- Number of points
Number of lines in file is 10695950     #<----- Number of triangles
reading cell list successful
reading vertex list successful
Mesh distribution successful

===================================================================================
=   BAD TERMINATION OF ONE OF YOUR APPLICATION PROCESSES
=   EXIT CODE: 9
=   CLEANING UP REMAINING PROCESSES
=   YOU CAN IGNORE THE BELOW CLEANUP MESSAGES
===================================================================================
YOUR APPLICATION TERMINATED WITH THE EXIT STRING: Killed (signal 9)
This typically refers to a problem with your application.
Please see the FAQ page for debugging suggestions
"

I am using a debug version of PETSc. The same code has been working
for smaller meshes (up to 3M triangles). I have run the code through
valgrind and it reports no memory leaks for a smaller mesh.

All my functions use PetscFunctionBegin()/End(), which usually reports
the function causing the problem. In this case, the output does not help
except for the exit code = 9. What does this mean?

--
Regards,
Garnet Vaz

From jedbrown at mcs.anl.gov  Tue Jul  9 17:12:18 2013
From: jedbrown at mcs.anl.gov (Jed Brown)
Date: Tue, 09 Jul 2013 17:12:18 -0500
Subject: [petsc-users] Error message Exit Code 9
In-Reply-To: 
References: 
Message-ID: <8761wj2xj1.fsf@mcs.anl.gov>

Garnet Vaz writes:

> YOUR APPLICATION TERMINATED WITH THE EXIT STRING: Killed (signal 9)

Signal 9 is SIGKILL, which cannot be caught. This almost always means
that some other process killed your job. Are there quotas on this
machine? Maybe the machine ran out of memory?
From garnet.vaz at gmail.com  Tue Jul  9 17:47:26 2013
From: garnet.vaz at gmail.com (Garnet Vaz)
Date: Tue, 9 Jul 2013 15:47:26 -0700
Subject: [petsc-users] Error message Exit Code 9
In-Reply-To: <8761wj2xj1.fsf@mcs.anl.gov>
References: <8761wj2xj1.fsf@mcs.anl.gov>
Message-ID: 

Hi Jed,

Thanks. The output of quota reads "unlimited".
The system memory is 16GB.

Doing "ulimit -m" gives 13938212
in kilobytes, which corresponds to about 13GB. I think this means
that I should be able to use most of it.
I am the only person running jobs on this machine right now.

I can run with -memory_info for the smaller problem and try
to extrapolate. Is this a good idea?

- Garnet Vaz

From jedbrown at mcs.anl.gov  Tue Jul  9 17:57:49 2013
From: jedbrown at mcs.anl.gov (Jed Brown)
Date: Tue, 09 Jul 2013 17:57:49 -0500
Subject: [petsc-users] Error message Exit Code 9
In-Reply-To: 
References: <8761wj2xj1.fsf@mcs.anl.gov>
Message-ID: <87zjtv1guq.fsf@mcs.anl.gov>

Garnet Vaz writes:

> I can run with -memory_info for the smaller problem and try
> to extrapolate. Is this a good idea?

You can. Does it run correctly for smaller problem sizes? Look at
-log_summary for memory usage information. The OOM killer seems the
most likely culprit.

From garnet.vaz at gmail.com  Tue Jul  9 18:48:36 2013
From: garnet.vaz at gmail.com (Garnet Vaz)
Date: Tue, 9 Jul 2013 16:48:36 -0700
Subject: [petsc-users] Error message Exit Code 9
In-Reply-To: <87zjtv1guq.fsf@mcs.anl.gov>
References: <8761wj2xj1.fsf@mcs.anl.gov> <87zjtv1guq.fsf@mcs.anl.gov>
Message-ID: 

Hi Jed,

Yes. It has been running fine for problems up to 3M triangles.
/var/log/messages does say that the process is killed.

Changed the oom options to allow over-committing. I think it
should work now. Thanks.

--
Regards,
Garnet

From jedbrown at mcs.anl.gov  Tue Jul  9 19:25:00 2013
From: jedbrown at mcs.anl.gov (Jed Brown)
Date: Tue, 09 Jul 2013 19:25:00 -0500
Subject: [petsc-users] Error message Exit Code 9
In-Reply-To: 
References: <8761wj2xj1.fsf@mcs.anl.gov> <87zjtv1guq.fsf@mcs.anl.gov>
Message-ID: <87obab1ctf.fsf@mcs.anl.gov>

Garnet Vaz writes:

> Changed the oom options to allow over-committing. I think it
> should work now.

Hmm, normally when over-commit is turned off, malloc will return NULL,
but when turned on (default on most systems), _some_ process will be
killed when you run out of memory. The process receiving SIGKILL is
somehow evaluated to be a memory offender by the operating system, but
may not be the process "responsible".
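For reference, the knob being adjusted here is the Linux overcommit policy
(0 = heuristic, 1 = always allow overcommit, 2 = strict accounting). A
sketch of checking and changing it; which value was actually chosen in this
case is not stated in the thread:

  sysctl vm.overcommit_memory         # show the current policy
  sudo sysctl vm.overcommit_memory=1  # allow over-committing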
-------------- next part --------------
A non-text attachment was scrubbed...
Name: not available
Type: application/pgp-signature
Size: 835 bytes
Desc: not available
URL:

From garnet.vaz at gmail.com  Tue Jul  9 18:48:36 2013
From: garnet.vaz at gmail.com (Garnet Vaz)
Date: Tue, 9 Jul 2013 16:48:36 -0700
Subject: [petsc-users] Error message Exit Code 9
In-Reply-To: <87zjtv1guq.fsf@mcs.anl.gov>
References: <8761wj2xj1.fsf@mcs.anl.gov> <87zjtv1guq.fsf@mcs.anl.gov>
Message-ID:

Hi Jed,

Yes. It has been running fine for problems up to 3M triangles.
/var/log/messages does say that the process is killed.

Changed the oom options to allow over-committing. I think it
should work now. Thanks.

- Garnet

On Tue, Jul 9, 2013 at 3:57 PM, Jed Brown wrote:

> Garnet Vaz writes:
>
> > Hi Jed,
> >
> > Thanks. The output of quota reads "unlimited".
> > The system memory is 16GB.
> >
> > Doing "ulimit -m" gives 13938212
> > in kilobytes, which corresponds to 13GB. I think this means
> > that I should be able to use most of it. I am the only person
> > running jobs on this machine right now.
> >
> > I can run with -memory_info for the smaller problem and try
> > to extrapolate. Is this a good idea?
>
> You can.  Does it run correctly for smaller problem sizes?  Look at
> -log_summary for memory usage information.  The OOM killer seems the
> most likely culprit.

--
Regards,
Garnet
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From jedbrown at mcs.anl.gov  Tue Jul  9 19:25:00 2013
From: jedbrown at mcs.anl.gov (Jed Brown)
Date: Tue, 09 Jul 2013 19:25:00 -0500
Subject: [petsc-users] Error message Exit Code 9
In-Reply-To:
References: <8761wj2xj1.fsf@mcs.anl.gov> <87zjtv1guq.fsf@mcs.anl.gov>
Message-ID: <87obab1ctf.fsf@mcs.anl.gov>

Garnet Vaz writes:

> Hi Jed,
>
> Yes. It has been running fine for problems up to 3M triangles.
> /var/log/messages does say that the process is killed.
>
> Changed the oom options to allow over-committing. I think it
> should work now.

Hmm, normally when over-commit is turned off, malloc will return NULL,
but when turned on (default on most systems), _some_ process will be
killed when you run out of memory.  The process receiving SIGKILL is
somehow evaluated to be a memory offender by the operating system, but
may not be the process "responsible".
-------------- next part --------------
A non-text attachment was scrubbed...
Name: not available
Type: application/pgp-signature
Size: 835 bytes
Desc: not available
URL:

From subramanya.g at gmail.com  Tue Jul  9 20:41:08 2013
From: subramanya.g at gmail.com (subramanya sadasiva)
Date: Tue, 9 Jul 2013 21:41:08 -0400
Subject: [petsc-users] Variable ordering in PC FieldSplit
Message-ID:

I have a small question about the use of PC field split: does it expect
variables to be arranged in a particular fashion? Should it be variable
major (all degrees of freedom corresponding to the first variable followed
by all degrees of freedom corresponding to the next) or node major (all
degrees of freedom corresponding to each node, with a sub-block structure)?
Is there some way of switching between the two?
Thanks,
Subramanya Sadasiva
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From luqiyue at gmail.com  Tue Jul  9 20:49:32 2013
From: luqiyue at gmail.com (Lu Qiyue)
Date: Tue, 9 Jul 2013 20:49:32 -0500
Subject: [petsc-users] MatCreateSeqAIJ() Question
Message-ID:

Dear All:
I am using a modified version of ex72.c in
/src/mat/examples/tests
directory to create a matrix from COO format.

In the line:
ierr = MatCreateSeqAIJ(PETSC_COMM_WORLD,m,n, (m*n/nnz),PETSC_NULL,&A);CHKERRQ(ierr);

The 'nz' is set to (m*n/nnz).

And from the documents, nz is:
nz - number of nonzeros per row (same for all rows)

http://www.mcs.anl.gov/petsc/petsc-current/docs/manualpages/Mat/MatCreateSeqAIJ.html

I am wondering why this value is set to m*n/nnz. This is obviously not
the number of nonzeros per row. What's the rule for choosing nz?

If only one value is given here, should it be the largest number of
non-zeroes among all rows?

Thanks

Qiyue Lu
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From bsmith at mcs.anl.gov  Tue Jul  9 20:49:55 2013
From: bsmith at mcs.anl.gov (Barry Smith)
Date: Tue, 9 Jul 2013 20:49:55 -0500
Subject: [petsc-users] Variable ordering in PC FieldSplit
In-Reply-To:
References:
Message-ID:

   PCFieldsplit is officially neutral as to the arrangement. One can
define IS's to select any arbitrary subset of variables for each field.

   Unofficially we generally recommend node major order; it generally is
more computationally efficient and the code is a bit easier to use with
this default.

   Barry

On Jul 9, 2013, at 8:41 PM, subramanya sadasiva wrote:

> I have a small question about the use of PC field split: does it expect
> variables to be arranged in a particular fashion? Should it be variable
> major (all degrees of freedom corresponding to the first variable followed
> by all degrees of freedom corresponding to the next) or node major (all
> degrees of freedom corresponding to each node, with a sub-block structure)?
> Is there some way of switching between the two?
> Thanks,
> Subramanya Sadasiva

From bsmith at mcs.anl.gov  Tue Jul  9 20:56:15 2013
From: bsmith at mcs.anl.gov (Barry Smith)
Date: Tue, 9 Jul 2013 20:56:15 -0500
Subject: [petsc-users] MatCreateSeqAIJ() Question
In-Reply-To:
References:
Message-ID:

On Jul 9, 2013, at 8:49 PM, Lu Qiyue wrote:

> Dear All:
> I am using a modified version of ex72.c in
> /src/mat/examples/tests
> directory to create a matrix from COO format.
>
> In the line:
> ierr = MatCreateSeqAIJ(PETSC_COMM_WORLD,m,n, (m*n/nnz),PETSC_NULL,&A);CHKERRQ(ierr);
>
> The 'nz' is set to (m*n/nnz).

   Hmm, perhaps you are looking at an older version of the code. The current version
http://www.mcs.anl.gov/petsc/petsc-dev/src/mat/examples/tests/ex72.c.html
has MatCreateSeqAIJ(PETSC_COMM_WORLD,m,n,nnz*2/m,0,&A);

> And from the documents, nz is:
> nz - number of nonzeros per row (same for all rows)
>
> http://www.mcs.anl.gov/petsc/petsc-current/docs/manualpages/Mat/MatCreateSeqAIJ.html
>
> I am wondering why this value is set to m*n/nnz. This is obviously not
> the number of nonzeros per row. What's the rule for choosing nz?
>
> If only one value is given here, should it be the largest number of
> non-zeroes among all rows?

   Yes, it should be the largest; this will lead to the fastest matrix
assembly (at the expense of using extra memory). We recommend preallocating
the correct value for each row except in the most trivial codes.
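   For the COO case a minimal sketch of that counting pass (assuming the
COO row indices are already in an array row[] of length nz_total; the
names are just illustrative):

   PetscInt i,*rowcounts;
   ierr = PetscMalloc(m*sizeof(PetscInt),&rowcounts);CHKERRQ(ierr);
   ierr = PetscMemzero(rowcounts,m*sizeof(PetscInt));CHKERRQ(ierr);
   for (i=0; i<nz_total; i++) rowcounts[row[i]]++;   /* count the entries destined for each row */
   ierr = MatCreateSeqAIJ(PETSC_COMM_WORLD,m,n,0,rowcounts,&A);CHKERRQ(ierr);  /* the nnz array overrides nz */
   ierr = PetscFree(rowcounts);CHKERRQ(ierr);

With exact per-row counts every MatSetValues() call lands in preallocated
space and assembly performs no mallocs.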
   Barry

> Thanks
> Qiyue Lu

From subramanya.g at gmail.com  Tue Jul  9 20:57:32 2013
From: subramanya.g at gmail.com (Subramanya G)
Date: Tue, 9 Jul 2013 21:57:32 -0400
Subject: [petsc-users] Variable ordering in PC FieldSplit
In-Reply-To:
References:
Message-ID:

Hi Barry,
Thanks. I've been looking at KSP/ex43.C and from what I understand the
following are the main steps.

PCFieldSplitSetBlockSize(pc,3);                  // total number of fields
PCFieldSplitSetFields(pc,"u",2,ufields,ufields); // split into groups (u and v)
PCFieldSplitSetFields(pc,"p",1,pfields,pfields); // p

I am unable to see where petsc expects to know whether the values are in
node major or variable major order.
Thanks,
Subramanya

Subramanya G Sadasiva,

Graduate Research Assistant,
Hierarchical Design and Characterization Laboratory,
School of Mechanical Engineering,
Purdue University.

"The art of structure is where to put the holes"
Robert Le Ricolais, 1894-1977

On Tue, Jul 9, 2013 at 9:49 PM, Barry Smith wrote:

>    PCFieldsplit is officially neutral as to the arrangement. One can
> define IS's to select any arbitrary subset of variables for each field.
>
>    Unofficially we generally recommend node major order; it generally is
> more computationally efficient and the code is a bit easier to use with
> this default.
>
>    Barry
>
> On Jul 9, 2013, at 8:41 PM, subramanya sadasiva wrote:
>
> > I have a small question about the use of PC field split: does it expect
> > variables to be arranged in a particular fashion? Should it be variable
> > major (all degrees of freedom corresponding to the first variable followed
> > by all degrees of freedom corresponding to the next) or node major (all
> > degrees of freedom corresponding to each node, with a sub-block structure)?
> > Is there some way of switching between the two?
> > Thanks,
> > Subramanya Sadasiva
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From bsmith at mcs.anl.gov  Tue Jul  9 21:05:17 2013
From: bsmith at mcs.anl.gov (Barry Smith)
Date: Tue, 9 Jul 2013 21:05:17 -0500
Subject: [petsc-users] Variable ordering in PC FieldSplit
In-Reply-To:
References:
Message-ID: <3181BA38-065A-4D9A-B214-9EEEFB80E540@mcs.anl.gov>

On Jul 9, 2013, at 8:57 PM, Subramanya G wrote:

> Hi Barry,
> Thanks. I've been looking at KSP/ex43.C and from what I understand the
> following are the main steps.
> PCFieldSplitSetBlockSize(pc,3);                  // total number of fields
> PCFieldSplitSetFields(pc,"u",2,ufields,ufields); // split into groups (u and v)
> PCFieldSplitSetFields(pc,"p",1,pfields,pfields); // p
>
> I am unable to see where petsc expects to know whether the values are in
> node major or variable major order.

   Yes, this is why I wrote "the code is easier to use with node major" :-).
The SetFields() interface is for simple node major order. To use any other
ordering you need to use the PCFieldSplitSetIS() interface to set the
fields. Most of the examples use node major. Sorry for the confusion.
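   A minimal sketch of the PCFieldSplitSetIS() route for your variable
major layout (nu and np, the numbers of u and p unknowns, are illustrative;
the u block is assumed to come first):

   IS isu,isp;
   ierr = ISCreateStride(PETSC_COMM_WORLD,nu,0,1,&isu);CHKERRQ(ierr);   /* entries 0 .. nu-1 belong to "u" */
   ierr = ISCreateStride(PETSC_COMM_WORLD,np,nu,1,&isp);CHKERRQ(ierr);  /* entries nu .. nu+np-1 belong to "p" */
   ierr = PCFieldSplitSetIS(pc,"u",isu);CHKERRQ(ierr);
   ierr = PCFieldSplitSetIS(pc,"p",isp);CHKERRQ(ierr);
   ierr = ISDestroy(&isu);CHKERRQ(ierr);
   ierr = ISDestroy(&isp);CHKERRQ(ierr);

Any index sets work here (this is what makes the ordering officially
neutral); the strided ones above just happen to describe a variable major
layout.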
   Barry

> Thanks,
> Subramanya
>
> Subramanya G Sadasiva,
>
> Graduate Research Assistant,
> Hierarchical Design and Characterization Laboratory,
> School of Mechanical Engineering,
> Purdue University.
>
> "The art of structure is where to put the holes"
> Robert Le Ricolais, 1894-1977
>
> On Tue, Jul 9, 2013 at 9:49 PM, Barry Smith wrote:
>
> >    PCFieldsplit is officially neutral as to the arrangement. One can
> > define IS's to select any arbitrary subset of variables for each field.
> > > Thanks,
> > > Subramanya Sadasiva
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From ztdepyahoo at 163.com  Tue Jul  9 21:54:24 2013
From: ztdepyahoo at 163.com (=?GBK?B?tqHAz8qm?=)
Date: Wed, 10 Jul 2013 10:54:24 +0800 (CST)
Subject: [petsc-users] which function returns the L2 norm of the residual
 vector in the ksp solver
Message-ID: <67ee80a.18605.13fc6804bf5.Coremail.ztdepyahoo@163.com>

An HTML attachment was scrubbed...
URL:

From garnet.vaz at gmail.com  Tue Jul  9 22:46:04 2013
From: garnet.vaz at gmail.com (Garnet Vaz)
Date: Tue, 9 Jul 2013 20:46:04 -0700
Subject: [petsc-users] Error message Exit Code 9
In-Reply-To: <87obab1ctf.fsf@mcs.anl.gov>
References: <8761wj2xj1.fsf@mcs.anl.gov> <87zjtv1guq.fsf@mcs.anl.gov>
 <87obab1ctf.fsf@mcs.anl.gov>
Message-ID:

Hi Jed,

The problem is being caused by an Out of Memory error. So I am going
to stick to the smaller problems. Thanks a lot.

- Garnet Vaz

On Tue, Jul 9, 2013 at 5:25 PM, Jed Brown wrote:

> Garnet Vaz writes:
>
> > Hi Jed,
> >
> > Yes. It has been running fine for problems up to 3M triangles.
> > /var/log/messages does say that the process is killed.
> >
> > Changed the oom options to allow over-committing. I think it
> > should work now.
>
> Hmm, normally when over-commit is turned off, malloc will return NULL,
> but when turned on (default on most systems), _some_ process will be
> killed when you run out of memory.  The process receiving SIGKILL is
> somehow evaluated to be a memory offender by the operating system, but
> may not be the process "responsible".

--
Regards,
Garnet
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From bsmith at mcs.anl.gov  Tue Jul  9 22:49:11 2013
From: bsmith at mcs.anl.gov (Barry Smith)
Date: Tue, 9 Jul 2013 22:49:11 -0500
Subject: [petsc-users] which function returns the L2 norm of the residual
 vector in the ksp solver
In-Reply-To: <67ee80a.18605.13fc6804bf5.Coremail.ztdepyahoo@163.com>
References: <67ee80a.18605.13fc6804bf5.Coremail.ztdepyahoo@163.com>
Message-ID: <9D5CFCC3-1B21-4897-877E-203DDA385C8A@mcs.anl.gov>

   This depends on when you want the l2 norm and which solver. With right
preconditioning, generally the l2 norm of the residual (or an estimate of
it) is computed as part of the computation and is available. With left
preconditioning, the l2 norm of the preconditioned residual (which is B*r)
is computed and available. It is also possible to explicitly compute the
residual and its l2 norm (at additional computational cost). For example
the routine
http://www.mcs.anl.gov/petsc/petsc-current/src/ksp/ksp/interface/iterativ.c.html#KSPMonitorTrueResidualNorm
explicitly computes the norm.

   You can provide a routine that is called at each iteration with
KSPMonitorSet(); it takes as one of its inputs the (preconditioned)
residual norm. See also KSPSetNormType() for determining which residual
norm is used and computed by the solver.

   After the solve is complete you can call KSPGetResidualNorm() to get
the l2 norm of the (preconditioned) residual.

   If you tell us when and where you plan to use this value we might be
able to provide additional ways to obtain it.
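   A minimal sketch of such a monitor (MyMonitor is an illustrative name;
the context pointer is unused here):

   PetscErrorCode MyMonitor(KSP ksp,PetscInt it,PetscReal rnorm,void *ctx)
   {
     PetscPrintf(PETSC_COMM_WORLD,"iteration %D: residual norm %g\n",it,(double)rnorm);
     return 0;
   }

   /* before the solve */
   ierr = KSPMonitorSet(ksp,MyMonitor,NULL,NULL);CHKERRQ(ierr);
   ierr = KSPSolve(ksp,b,x);CHKERRQ(ierr);
   /* after the solve; rnorm is a PetscReal declared at the call site */
   ierr = KSPGetResidualNorm(ksp,&rnorm);CHKERRQ(ierr);

   Barry

On Jul 9, 2013, at 9:54 PM, ???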
wrote:

>

From mpovolot at purdue.edu  Wed Jul 10 00:00:17 2013
From: mpovolot at purdue.edu (Michael Povolotskyi)
Date: Wed, 10 Jul 2013 01:00:17 -0400
Subject: [petsc-users] compile petsc with intel compiler
In-Reply-To:
References: <51DC6E58.2060307@purdue.edu> <51DC7370.9070603@purdue.edu>
Message-ID: <51DCEA61.7050507@purdue.edu>

Hello everybody,
unfortunately building petsc without fortran cannot work for me because I
need MUMPS, which requires ScaLAPACK, which in turn needs Fortran. I played
with the options. As a result the configuration runs okay, but the build
gives an error that does not seem to be related to fortran:

[ 0%] Building CXX object CMakeFiles/petscsys.dir/src/sys/totalview/tv_data_display.c.o
Building Fortran object CMakeFiles/petscsys.dir/src/sys/f90-mod/petscsysmod.F.o
Building CXX object CMakeFiles/petscsys.dir/src/sys/python/pythonsys.c.o
[ 0%] /usr/include/c++/4.7/bits/stl_algobase.h(60): catastrophic error: cannot open source file "bits/c++config.h"
#include
^

/usr/include/c++/4.7/bits/stl_algobase.h(60): catastrophic error: cannot open source file "bits/c++config.h"
#include
^

compilation aborted for /home/mpovolot/Code_intel/libs/petsc/build-real/src/sys/info/verboseinfo.c (code 4)
compilation aborted for /home/mpovolot/Code_intel/libs/petsc/build-real/src/sys/logging/plog.c (code 4)

I attach here the log files.
Any advice is highly appreciated.
Michael.

On 7/9/2013 4:34 PM, Matthew Knepley wrote:
> On Tue, Jul 9, 2013 at 3:32 PM, Michael Povolotskyi
> wrote:
>
> If I do not need to use petsc in fortran programs, can I build
> petsc without fortran and thus avoid this situation?
> Michael.
>
> On 07/09/2013 04:31 PM, Satish Balay wrote:
>
> For some reason this issue comes up with mpi.mod provided by intel
> mpi.
>
> We have a configure test for it - but looks like its not
> sufficient to catch this issue.
>
> satish
>
> On Tue, 9 Jul 2013, Matthew Knepley wrote:
>
> On Tue, Jul 9, 2013 at 3:11 PM, Michael Povolotskyi
> wrote:
>
> Dear Petsc users and developers,
> I'm trying to build petsc with Intel compiler.
>
> 1) First, ask yourself whether you really want to build
> with the Intel compiler. Then ask again.
>
> 2) Do you need Fortran? If not, turn it off --with-fc=0.
>
> ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
>
> Matt
>
> 3) If you want Fortran and Intel (and have a hatred of
> free time), try the legacy build
>
> make all-legacy
>
> 4) If this is still broken, send the new make.log
>
> Thanks,
>
> Matt
>
> The configuration process runs okay (I attach the log here),
> but I get an error when I build it:
> -- Configuring done
> -- Generating done
> -- Build files have been written to:
> /home/mpovolot/Code_intel/libs/petsc/build-real/linux
> Scanning dependencies of target petscsys
> [ 0%] [ 0%] [ 0%] Building CXX object
> CMakeFiles/petscsys.dir/src/sys/info/verboseinfo.c.o
> Building Fortran object
> CMakeFiles/petscsys.dir/src/sys/f90-mod/petscsysmod.F.o
> Building CXX object
> CMakeFiles/petscsys.dir/src/sys/totalview/tv_data_display.c.o
> Building CXX object
> CMakeFiles/petscsys.dir/src/sys/python/pythonsys.c.o
> Building CXX object
> CMakeFiles/petscsys.dir/src/sys/logging/plog.c.o
> /home/mpovolot/Code_intel/libs/petsc/build-real/src/sys/f90-mod/petscsysmod.F:6.11:
>
> use mpi
> 1
> Fatal Error: File 'mpi.mod' opened at (1) is not a
> GFORTRAN module file
> make[6]: ***
> [CMakeFiles/petscsys.dir/src/sys/f90-mod/petscsysmod.F.o]
> Error 1
> make[6]: *** Waiting for unfinished jobs....
> make[5]: *** [CMakeFiles/petscsys.dir/all] Error 2 > make[4]: *** [all] Error 2 > make[3]: *** [ccmake] Error 2 > make[2]: *** [cmake] Error 2 > ****************************ERROR************************************** > Error during compile, check linux/conf/make.log > Send it and linux/conf/configure.log to > petsc-maint at mcs.anl.gov > ************************************************************************ > > I attach here the make.log > What is strange to me that it has something to do with > Gfortran, while I > want to build everything with Intel. > Thank you for help, > Michael. > > -- > Michael Povolotskyi, PhD > Research Assistant Professor > Network for Computational Nanotechnology > 207 S Martin Jischke Drive > Purdue University, DLR, room 441-10 > West Lafayette, Indiana 47907 > > phone: +1-765-494-9396 > fax: +1-765-496-6026 > > > > > > > > > -- > What most experimenters take for granted before they begin their > experiments is infinitely more interesting than any results to which > their experiments lead. > -- Norbert Wiener -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: configure.log.gz Type: application/gzip Size: 477432 bytes Desc: not available URL: -------------- next part -------------- ========================================== See documentation/faq.html and documentation/bugreporting.html for help with installation problems. Please send EVERYTHING printed out below when reporting problems To subscribe to the PETSc announcement list, send mail to majordomo at mcs.anl.gov with the message: subscribe petsc-announce To subscribe to the PETSc users mailing list, send mail to majordomo at mcs.anl.gov with the message: subscribe petsc-users ========================================== On Wed Jul 10 00:47:31 EDT 2013 on ncnlnx15 Machine characteristics: Linux ncnlnx15 3.8.0-19-generic #29-Ubuntu SMP Wed Apr 17 18:16:28 UTC 2013 x86_64 x86_64 x86_64 GNU/Linux ----------------------------------------- Using PETSc directory: /home/mpovolot/Code_intel/libs/petsc/build-real Using PETSc arch: linux ----------------------------------------- PETSC_VERSION_RELEASE 1 PETSC_VERSION_MAJOR 3 PETSC_VERSION_MINOR 4 PETSC_VERSION_SUBMINOR 0 PETSC_VERSION_PATCH 0 PETSC_VERSION_DATE "May, 13, 2013" PETSC_VERSION_GIT "0f0f11e432ef0f042adf57ab89328b3ebb184576" PETSC_VERSION_DATE "May, 13, 2013" PETSC_VERSION_(MAJOR,MINOR,SUBMINOR) \ PETSC_VERSION_LT(MAJOR,MINOR,SUBMINOR) \ PETSC_VERSION_LE(MAJOR,MINOR,SUBMINOR) \ PETSC_VERSION_GT(MAJOR,MINOR,SUBMINOR) \ PETSC_VERSION_GE(MAJOR,MINOR,SUBMINOR) \ ----------------------------------------- Using configure Options: --with-x=0 --with-hdf5 --download-hdf5=1 --with-scalar-type=real --with-single-library=0 --with-pic=1 --with-shared-libraries=0 --with-clanguage=C++ --with-fortran=1 --with-debugging=1 --with-cc=/opt/intel/impi/4.1.0/intel64/bin/mpiicc --with-fc=/opt/intel/impi/4.1.0/intel64/bin/mpiifort --with-cxx="/opt/intel/impi/4.1.0/intel64/bin/mpiicpc " COPTFLAGS=-O3 CXXOPTFLAGS=-O3 FOPTFLAGS=-O3 --LDFLAGS="-Wl,-rpath,/opt/intel/mkl//lib/intel64 -L/opt/intel/mkl//lib/intel64 -Wl,--start-group -lmkl_intel_lp64 -lmkl_sequential -lmkl_core -Wl,--end-group -L/opt/intel/lib/intel64 -Wl,-rpath=/opt/intel/lib/intel64 -lintlc" --download-metis=1 --download-parmetis=1 --download-scalapack=/home/mpovolot/Code_intel/libs/petsc/scalapack-2.0.2.tgz --download-mumps=1 --with-fortran-kernels=0 --with-blas-lapack-dir=/opt/intel/mkl/ --download-blacs=1 
Using configuration flags: #define INCLUDED_PETSCCONF_H #define IS_COLORING_MAX 65535 #define STDC_HEADERS 1 #define MPIU_COLORING_VALUE MPI_UNSIGNED_SHORT #define PETSC_UINTPTR_T uintptr_t #define PETSC_HAVE_PTHREAD 1 #define PETSC_DEPRECATED(why) __attribute((deprecated(why))) #define PETSC_STATIC_INLINE static inline #define PETSC_REPLACE_DIR_SEPARATOR '\\' #define PETSC_HAVE_HDF5 1 #define PETSC_RESTRICT __restrict__ #define PETSC_HAVE_SO_REUSEADDR 1 #define PETSC_HAVE_MPI 1 #define PETSC_PREFETCH_HINT_T2 _MM_HINT_T2 #define PETSC_PREFETCH_HINT_T0 _MM_HINT_T0 #define PETSC_PREFETCH_HINT_T1 _MM_HINT_T1 #define PETSC__GNU_SOURCE 1 #define PETSC_HAVE_FORTRAN 1 #define PETSC_LIB_DIR "/home/mpovolot/Code_intel/libs/petsc/build-real/linux/lib" #define PETSC_HAVE_PARMETIS 1 #define PETSC_USE_SOCKET_VIEWER 1 #define PETSC_SLSUFFIX "so" #define PETSC_FUNCTION_NAME_CXX __func__ #define PETSC_HAVE_FLUSH 1 #define PETSC_HAVE_MUMPS 1 #define PETSC_HAVE_ATOLL 1 #define PETSC_HAVE_DOUBLE_ALIGN_MALLOC 1 #define PETSC_UNUSED __attribute((unused)) #define PETSC_FUNCTION_NAME_C __func__ #define PETSC_HAVE_VALGRIND 1 #define PETSC_HAVE_BUILTIN_EXPECT 1 #define PETSC_HAVE_METIS 1 #define PETSC_DIR_SEPARATOR '/' #define PETSC_PATH_SEPARATOR ':' #define PETSC__BSD_SOURCE 1 #define PETSC_HAVE_XMMINTRIN_H 1 #define PETSC_PREFETCH_HINT_NTA _MM_HINT_NTA #define PETSC_Prefetch(a,b,c) _mm_prefetch((const char*)(a),(c)) #define PETSC_HAVE_BLASLAPACK 1 #define PETSC_HAVE_FLOAT_H 1 #define PETSC_HAVE_STRING_H 1 #define PETSC_HAVE_SYS_TIMES_H 1 #define PETSC_HAVE_SYS_TYPES_H 1 #define PETSC_HAVE_ENDIAN_H 1 #define PETSC_HAVE_SYS_PROCFS_H 1 #define PETSC_HAVE_DLFCN_H 1 #define PETSC_HAVE_SCHED_H 1 #define PETSC_HAVE_STDINT_H 1 #define PETSC_HAVE_LINUX_KERNEL_H 1 #define PETSC_HAVE_TIME_H 1 #define PETSC_HAVE_MATH_H 1 #define PETSC_HAVE_STDLIB_H 1 #define PETSC_HAVE_SYS_PARAM_H 1 #define PETSC_HAVE_PTHREAD_H 1 #define PETSC_HAVE_UNISTD_H 1 #define PETSC_HAVE_SYS_WAIT_H 1 #define PETSC_HAVE_SETJMP_H 1 #define PETSC_HAVE_LIMITS_H 1 #define PETSC_HAVE_SYS_UTSNAME_H 1 #define PETSC_HAVE_NETINET_IN_H 1 #define PETSC_HAVE_FENV_H 1 #define PETSC_HAVE_SYS_SOCKET_H 1 #define PETSC_HAVE_MEMORY_H 1 #define PETSC_HAVE_SEARCH_H 1 #define PETSC_HAVE_SYS_RESOURCE_H 1 #define PETSC_TIME_WITH_SYS_TIME 1 #define PETSC_HAVE_NETDB_H 1 #define PETSC_HAVE_MALLOC_H 1 #define PETSC_HAVE_PWD_H 1 #define PETSC_HAVE_FCNTL_H 1 #define PETSC_HAVE_STRINGS_H 1 #define PETSC_HAVE_SYS_SYSINFO_H 1 #define PETSC_HAVE_SYS_TIME_H 1 #define PETSC_USING_F90 1 #define PETSC_USING_F2003 1 #define PETSC_HAVE_RTLD_NOW 1 #define PETSC_HAVE_RTLD_LOCAL 1 #define PETSC_HAVE_RTLD_LAZY 1 #define PETSC_C_STATIC_INLINE static inline #define PETSC_HAVE_FORTRAN_UNDERSCORE 1 #define PETSC_HAVE_CXX_NAMESPACE 1 #define PETSC_HAVE_RTLD_GLOBAL 1 #define PETSC_C_RESTRICT __restrict__ #define PETSC_CXX_RESTRICT __restrict__ #define PETSC_CXX_STATIC_INLINE static inline #define PETSC_HAVE_LIBZ 1 #define PETSC_HAVE_LIBDL 1 #define PETSC_HAVE_LIBSCALAPACK 1 #define PETSC_HAVE_LIBMETIS 1 #define PETSC_HAVE_LIBLAPACK 1 #define PETSC_HAVE_LIBM 1 #define PETSC_HAVE_LIBIFCORE 1 #define PETSC_HAVE_LIBMKL_INTEL_LP64 1 #define PETSC_HAVE_LIBIFPORT 1 #define PETSC_HAVE_LIBDMUMPS 1 #define PETSC_HAVE_LIBMUMPS_COMMON 1 #define PETSC_HAVE_LIBPTHREAD 1 #define PETSC_HAVE_LIBHDF5 1 #define PETSC_HAVE_LIBPARMETIS 1 #define PETSC_HAVE_LIBMKL_SEQUENTIAL 1 #define PETSC_HAVE_LIBZMUMPS 1 #define PETSC_HAVE_LIBHDF5_HL 1 #define PETSC_HAVE_LIBMKL_CORE 1 #define PETSC_HAVE_LIBSMUMPS 1 #define 
PETSC_HAVE_LIBCMUMPS 1 #define PETSC_HAVE_LIBPORD 1 #define PETSC_HAVE_ERF 1 #define PETSC_HAVE_LIBHDF5_FORTRAN 1 #define PETSC_HAVE_TGAMMA 1 #define PETSC_ARCH "linux" #define PETSC_DIR "/home/mpovolot/Code_intel/libs/petsc/build-real" #define HAVE_GZIP 1 #define PETSC_CLANGUAGE_CXX 1 #define PETSC_USE_ERRORCHECKING 1 #define PETSC_MISSING_DREAL 1 #define PETSC_SIZEOF_MPI_COMM 4 #define PETSC_BITS_PER_BYTE 8 #define PETSC_SIZEOF_MPI_FINT 4 #define PETSC_SIZEOF_VOID_P 8 #define PETSC_RETSIGTYPE void #define PETSC_HAVE___INT64 1 #define PETSC_SIZEOF_LONG 8 #define PETSC_USE_FORTRANKIND 1 #define PETSC_SIZEOF_SIZE_T 8 #define PETSC_HAVE_SIGINFO_T 1 #define PETSC_SIZEOF_CHAR 1 #define PETSC_SIZEOF_DOUBLE 8 #define PETSC_SIZEOF_FLOAT 4 #define PETSC_HAVE_C99_COMPLEX 1 #define PETSC_SIZEOF_INT 4 #define PETSC_SIZEOF_LONG_LONG 8 #define PETSC_SIZEOF_SHORT 2 #define PETSC_HAVE_STRCASECMP 1 #define PETSC_HAVE_GET_NPROCS 1 #define PETSC_HAVE_POPEN 1 #define PETSC_HAVE_SIGSET 1 #define PETSC_HAVE_GETWD 1 #define PETSC_HAVE_VSNPRINTF 1 #define PETSC_HAVE_TIMES 1 #define PETSC_HAVE_DLSYM 1 #define PETSC_HAVE_SNPRINTF 1 #define PETSC_HAVE_GETPWUID 1 #define PETSC_HAVE_IPXFARGC_ 1 #define PETSC_HAVE_GETHOSTBYNAME 1 #define PETSC_HAVE_GETCWD 1 #define PETSC_HAVE_DLERROR 1 #define PETSC_HAVE_FORK 1 #define PETSC_HAVE_RAND 1 #define PETSC_HAVE_GETTIMEOFDAY 1 #define PETSC_HAVE_DLCLOSE 1 #define PETSC_HAVE_UNAME 1 #define PETSC_HAVE_GETHOSTNAME 1 #define PETSC_HAVE_MKSTEMP 1 #define PETSC_HAVE_SIGACTION 1 #define PETSC_HAVE_DRAND48 1 #define PETSC_HAVE_MEMALIGN 1 #define PETSC_HAVE_VA_COPY 1 #define PETSC_HAVE_CLOCK 1 #define PETSC_HAVE_ACCESS 1 #define PETSC_HAVE_SIGNAL 1 #define PETSC_HAVE_USLEEP 1 #define PETSC_HAVE_GETRUSAGE 1 #define PETSC_HAVE_VFPRINTF 1 #define PETSC_HAVE_NANOSLEEP 1 #define PETSC_HAVE_GETDOMAINNAME 1 #define PETSC_HAVE__INTEL_FAST_MEMSET 1 #define PETSC_HAVE_TIME 1 #define PETSC_HAVE__INTEL_FAST_MEMCPY 1 #define PETSC_HAVE_LSEEK 1 #define PETSC_HAVE_SOCKET 1 #define PETSC_HAVE_SYSINFO 1 #define PETSC_HAVE_READLINK 1 #define PETSC_HAVE_REALPATH 1 #define PETSC_HAVE_DLOPEN 1 #define PETSC_HAVE_MEMMOVE 1 #define PETSC_HAVE_GETPAGESIZE 1 #define PETSC_HAVE_SLEEP 1 #define PETSC_HAVE_VPRINTF 1 #define PETSC_HAVE_BZERO 1 #define PETSC_SIGNAL_CAST #define PETSC_WRITE_MEMORY_BARRIER() asm volatile("sfence":::"memory") #define PETSC_MEMORY_BARRIER() asm volatile("mfence":::"memory") #define PETSC_READ_MEMORY_BARRIER() asm volatile("lfence":::"memory") #define PETSC_CPU_RELAX() asm volatile("rep; nop" ::: "memory") #define PETSC_BLASLAPACK_UNDERSCORE 1 #define PETSC_HAVE_MPI_COMM_C2F 1 #define PETSC_HAVE_MPI_EXSCAN 1 #define PETSC_HAVE_MPI_LONG_DOUBLE 1 #define PETSC_HAVE_MPI_COMM_F2C 1 #define PETSC_HAVE_MPI_FINT 1 #define PETSC_HAVE_MPI_F90MODULE 1 #define PETSC_HAVE_MPI_TYPE_GET_ENVELOPE 1 #define PETSC_HAVE_MPI_FINALIZED 1 #define PETSC_HAVE_MPI_COMM_SPAWN 1 #define PETSC_HAVE_MPI_TYPE_GET_EXTENT 1 #define PETSC_HAVE_MPI_COMBINER_DUP 1 #define PETSC_HAVE_MPI_WIN_CREATE 1 #define PETSC_HAVE_MPI_REPLACE 1 #define PETSC_HAVE_MPI_TYPE_DUP 1 #define PETSC_HAVE_MPIIO 1 #define PETSC_HAVE_MPI_INIT_THREAD 1 #define PETSC_HAVE_MPI_ALLTOALLW 1 #define PETSC_HAVE_MPI_IN_PLACE 1 #define PETSC_LEVEL1_DCACHE_LINESIZE 64 #define PETSC_LEVEL1_DCACHE_SIZE 32768 #define PETSC_LEVEL1_DCACHE_ASSOC 8 #define PETSC_HAVE_DYNAMIC_LIBRARIES 1 #define PETSC_HAVE_SHARED_LIBRARIES 1 #define PETSC_USE_GDB_DEBUGGER 1 #define PETSC_MEMALIGN 16 #define PETSC_USE_INFO 1 #define PETSC_Alignx(a,b) #define 
PETSC_USE_BACKWARD_LOOP 1 #define PETSC_USE_DEBUG 1 #define PETSC_USE_LOG 1 #define PETSC_IS_COLOR_VALUE_TYPE short #define PETSC_USE_CTABLE 1 #define PETSC_USE_SCALAR_REAL 1 #define PETSC_HAVE_ISINF 1 #define PETSC_HAVE_ISNAN 1 #define PETSC_USE_REAL_DOUBLE 1 #define PETSC_HAVE_PXFGETARG_NEW 1 #define PETSC_HAVE_FORTRAN_GET_COMMAND_ARGUMENT 1 #define PETSC_HAVE_GETARG 1 #define PETSC_USE_PROC_FOR_SIZE 1 #define PETSC_HAVE_SCHED_CPU_SET_T 1 #define PETSC_HAVE_PTHREAD_BARRIER_T 1 #define PETSC_HAVE_SYS_SYSCTL_H 1 #define PETSC_HAVE_H5PSET_FAPL_MPIO 1 ----------------------------------------- Using C/C++ compile: /opt/intel/impi/4.1.0/intel64/bin/mpiicpc -c -wd1572 -O3 -fPIC -I/home/mpovolot/Code_intel/libs/petsc/build-real/include -I/home/mpovolot/Code_intel/libs/petsc/build-real/linux/include -I/opt/intel/impi/4.1.0.024/intel64/include -D__INSDIR__=./ Using Fortran compile: /opt/intel/impi/4.1.0/intel64/bin/mpiifort -c -fPIC -O3 -I/home/mpovolot/Code_intel/libs/petsc/build-real/include -I/home/mpovolot/Code_intel/libs/petsc/build-real/linux/include -I/opt/intel/impi/4.1.0.024/intel64/include ----------------------------------------- Using C/C++ linker: /opt/intel/impi/4.1.0/intel64/bin/mpiicpc Using C/C++ flags: -Wl,-rpath,/opt/intel/mkl//lib/intel64 -L/opt/intel/mkl//lib/intel64 -Wl,--start-group -lmkl_intel_lp64 -lmkl_sequential -lmkl_core -Wl,--end-group -L/opt/intel/lib/intel64 -Wl,-rpath=/opt/intel/lib/intel64 -lintlc -wd1572 -O3 Using Fortran linker: /opt/intel/impi/4.1.0/intel64/bin/mpiifort Using Fortran flags: -Wl,-rpath,/opt/intel/mkl//lib/intel64 -L/opt/intel/mkl//lib/intel64 -Wl,--start-group -lmkl_intel_lp64 -lmkl_sequential -lmkl_core -Wl,--end-group -L/opt/intel/lib/intel64 -Wl,-rpath=/opt/intel/lib/intel64 -lintlc -fPIC -O3 ----------------------------------------- Using libraries: -Wl,-rpath,/home/mpovolot/Code_intel/libs/petsc/build-real/linux/lib -L/home/mpovolot/Code_intel/libs/petsc/build-real/linux/lib -lpetscts -lpetscsnes -lpetscksp -lpetscdm -lpetscmat -lpetscvec -lpetscsys -Wl,-rpath,/home/mpovolot/Code_intel/libs/petsc/build-real/linux/lib -L/home/mpovolot/Code_intel/libs/petsc/build-real/linux/lib -lcmumps -ldmumps -lsmumps -lzmumps -lmumps_common -lpord -lscalapack -Wl,-rpath,/opt/intel/mkl/lib/intel64 -L/opt/intel/mkl/lib/intel64 -lmkl_intel_lp64 -lmkl_sequential -lmkl_core -lpthread -lm -lpthread -lparmetis -lmetis -lhdf5_fortran -lhdf5_hl -lhdf5 -lz -Wl,-rpath,/opt/intel/lib/intel64 -L/opt/intel/lib/intel64 -Wl,-rpath,/opt/intel/impi/4.1.0.024/intel64/lib -L/opt/intel/impi/4.1.0.024/intel64/lib -Wl,-rpath,/opt/intel/composer_xe_2013.1.117/compiler/lib/intel64 -L/opt/intel/composer_xe_2013.1.117/compiler/lib/intel64 -Wl,-rpath,/usr/lib/gcc/x86_64-linux-gnu/4.7 -L/usr/lib/gcc/x86_64-linux-gnu/4.7 -Wl,-rpath,/usr/lib/x86_64-linux-gnu -L/usr/lib/x86_64-linux-gnu -Wl,-rpath,/lib/x86_64-linux-gnu -L/lib/x86_64-linux-gnu -Wl,-rpath,/usr/lib/i386-linux-gnu -L/usr/lib/i386-linux-gnu -Wl,-rpath,/home/mpovolot/Code_intel/libs/petsc/build-real/-Xlinker -Wl,-rpath,/opt/intel/mpi-rt/4.1 -lifport -lifcore -lm -lm -lmpigc4 -ldl -lmkl_intel_lp64 -lmkl_sequential -lmkl_core -lintlc -lmpi -lmpigf -lmpigi -lrt -lpthread -limf -lsvml -lirng -lipgo -ldecimal -lcilkrts -lstdc++ -lgcc_s -lirc -lirc_s -ldl ------------------------------------------ Using mpiexec: /opt/intel/impi/4.1.0.024/intel64/bin/mpiexec ========================================== Building PETSc using CMake with 5 build threads ========================================== /usr/bin/cmake 
-H/home/mpovolot/Code_intel/libs/petsc/build-real -B/home/mpovolot/Code_intel/libs/petsc/build-real/linux --check-build-system CMakeFiles/Makefile.cmake 0 Re-run cmake file: Makefile older than: ../CMakeLists.txt -- Configuring done -- Generating done -- Build files have been written to: /home/mpovolot/Code_intel/libs/petsc/build-real/linux /usr/bin/cmake -E cmake_progress_start /home/mpovolot/Code_intel/libs/petsc/build-real/linux/CMakeFiles /home/mpovolot/Code_intel/libs/petsc/build-real/linux/CMakeFiles/progress.marks /usr/bin/make -f CMakeFiles/Makefile2 all /usr/bin/make -f CMakeFiles/petscsys.dir/build.make CMakeFiles/petscsys.dir/depend cd /home/mpovolot/Code_intel/libs/petsc/build-real/linux && /usr/bin/cmake -E cmake_depends "Unix Makefiles" /home/mpovolot/Code_intel/libs/petsc/build-real /home/mpovolot/Code_intel/libs/petsc/build-real /home/mpovolot/Code_intel/libs/petsc/build-real/linux /home/mpovolot/Code_intel/libs/petsc/build-real/linux /home/mpovolot/Code_intel/libs/petsc/build-real/linux/CMakeFiles/petscsys.dir/DependInfo.cmake --color= Dependee "/home/mpovolot/Code_intel/libs/petsc/build-real/linux/CMakeFiles/petscsys.dir/DependInfo.cmake" is newer than depender "/home/mpovolot/Code_intel/libs/petsc/build-real/linux/CMakeFiles/petscsys.dir/depend.internal". Dependee "/home/mpovolot/Code_intel/libs/petsc/build-real/linux/CMakeFiles/CMakeDirectoryInformation.cmake" is newer than depender "/home/mpovolot/Code_intel/libs/petsc/build-real/linux/CMakeFiles/petscsys.dir/depend.internal". Scanning dependencies of target petscsys /usr/bin/make -f CMakeFiles/petscsys.dir/build.make CMakeFiles/petscsys.dir/requires make[6]: Nothing to be done for `CMakeFiles/petscsys.dir/requires'. /usr/bin/make -f CMakeFiles/petscsys.dir/build.make CMakeFiles/petscsys.dir/build /usr/bin/cmake -E cmake_progress_report /home/mpovolot/Code_intel/libs/petsc/build-real/linux/CMakeFiles /usr/bin/cmake -E cmake_progress_report /home/mpovolot/Code_intel/libs/petsc/build-real/linux/CMakeFiles /usr/bin/cmake -E cmake_progress_report /home/mpovolot/Code_intel/libs/petsc/build-real/linux/CMakeFiles /usr/bin/cmake -E cmake_progress_report /home/mpovolot/Code_intel/libs/petsc/build-real/linux/CMakeFiles /usr/bin/cmake -E cmake_progress_report /home/mpovolot/Code_intel/libs/petsc/build-real/linux/CMakeFiles [ 0%] [ 0%] [ 0%] [ 0%] Building CXX object CMakeFiles/petscsys.dir/src/sys/info/verboseinfo.c.o Building CXX object CMakeFiles/petscsys.dir/src/sys/logging/plog.c.o [ 0%] Building CXX object CMakeFiles/petscsys.dir/src/sys/totalview/tv_data_display.c.o /opt/intel/impi/4.1.0/intel64/bin/mpiicpc -D__INSDIR__="" -wd1572 -O3 -fPIC -I/home/mpovolot/Code_intel/libs/petsc/build-real/include -I/home/mpovolot/Code_intel/libs/petsc/build-real/linux/include -I/opt/intel/impi/4.1.0.024/intel64/include -o CMakeFiles/petscsys.dir/src/sys/info/verboseinfo.c.o -c /home/mpovolot/Code_intel/libs/petsc/build-real/src/sys/info/verboseinfo.c /opt/intel/impi/4.1.0/intel64/bin/mpiicpc -D__INSDIR__="" -wd1572 -O3 -fPIC -I/home/mpovolot/Code_intel/libs/petsc/build-real/include -I/home/mpovolot/Code_intel/libs/petsc/build-real/linux/include -I/opt/intel/impi/4.1.0.024/intel64/include -o CMakeFiles/petscsys.dir/src/sys/logging/plog.c.o -c /home/mpovolot/Code_intel/libs/petsc/build-real/src/sys/logging/plog.c Building Fortran object CMakeFiles/petscsys.dir/src/sys/f90-mod/petscsysmod.F.o /opt/intel/impi/4.1.0/intel64/bin/mpiicpc -D__INSDIR__="" -wd1572 -O3 -fPIC -I/home/mpovolot/Code_intel/libs/petsc/build-real/include 
-I/home/mpovolot/Code_intel/libs/petsc/build-real/linux/include -I/opt/intel/impi/4.1.0.024/intel64/include -o CMakeFiles/petscsys.dir/src/sys/totalview/tv_data_display.c.o -c /home/mpovolot/Code_intel/libs/petsc/build-real/src/sys/totalview/tv_data_display.c /opt/intel/impi/4.1.0/intel64/bin/mpiifort -D__INSDIR__="" -fPIC -O3 -module include -I/home/mpovolot/Code_intel/libs/petsc/build-real/include -I/home/mpovolot/Code_intel/libs/petsc/build-real/linux/include -I/opt/intel/impi/4.1.0.024/intel64/include -c /home/mpovolot/Code_intel/libs/petsc/build-real/src/sys/f90-mod/petscsysmod.F -o CMakeFiles/petscsys.dir/src/sys/f90-mod/petscsysmod.F.o Building CXX object CMakeFiles/petscsys.dir/src/sys/python/pythonsys.c.o /opt/intel/impi/4.1.0/intel64/bin/mpiicpc -D__INSDIR__="" -wd1572 -O3 -fPIC -I/home/mpovolot/Code_intel/libs/petsc/build-real/include -I/home/mpovolot/Code_intel/libs/petsc/build-real/linux/include -I/opt/intel/impi/4.1.0.024/intel64/include -o CMakeFiles/petscsys.dir/src/sys/python/pythonsys.c.o -c /home/mpovolot/Code_intel/libs/petsc/build-real/src/sys/python/pythonsys.c /usr/bin/cmake -E cmake_progress_report /home/mpovolot/Code_intel/libs/petsc/build-real/linux/CMakeFiles [ 0%] /usr/include/c++/4.7/bits/stl_algobase.h(60): catastrophic error: cannot open source file "bits/c++config.h" #include ^ /usr/include/c++/4.7/bits/stl_algobase.h(60): catastrophic error: cannot open source file "bits/c++config.h" #include ^ compilation aborted for /home/mpovolot/Code_intel/libs/petsc/build-real/src/sys/info/verboseinfo.c (code 4) compilation aborted for /home/mpovolot/Code_intel/libs/petsc/build-real/src/sys/logging/plog.c (code 4) make[6]: *** [CMakeFiles/petscsys.dir/src/sys/info/verboseinfo.c.o] Error 4 make[6]: *** Waiting for unfinished jobs.... 
make[6]: *** [CMakeFiles/petscsys.dir/src/sys/logging/plog.c.o] Error 4 Building CXX object CMakeFiles/petscsys.dir/src/sys/utils/arch.c.o /opt/intel/impi/4.1.0/intel64/bin/mpiicpc -D__INSDIR__="" -wd1572 -O3 -fPIC -I/home/mpovolot/Code_intel/libs/petsc/build-real/include -I/home/mpovolot/Code_intel/libs/petsc/build-real/linux/include -I/opt/intel/impi/4.1.0.024/intel64/include -o CMakeFiles/petscsys.dir/src/sys/utils/arch.c.o -c /home/mpovolot/Code_intel/libs/petsc/build-real/src/sys/utils/arch.c /usr/include/c++/4.7/bits/stl_algobase.h(60): catastrophic error: cannot open source file "bits/c++config.h" #include ^ compilation aborted for /home/mpovolot/Code_intel/libs/petsc/build-real/src/sys/python/pythonsys.c (code 4) make[6]: *** [CMakeFiles/petscsys.dir/src/sys/python/pythonsys.c.o] Error 4 /usr/include/c++/4.7/bits/stl_algobase.h(60): catastrophic error: cannot open source file "bits/c++config.h" #include ^ compilation aborted for /home/mpovolot/Code_intel/libs/petsc/build-real/src/sys/utils/arch.c (code 4) make[6]: *** [CMakeFiles/petscsys.dir/src/sys/utils/arch.c.o] Error 4 make[5]: *** [CMakeFiles/petscsys.dir/all] Error 2 make[4]: *** [all] Error 2 make[3]: *** [ccmake] Error 2 make[2]: *** [cmake] Error 2 **************************ERROR************************************ Error during compile, check linux/conf/make.log Send it and linux/conf/configure.log to petsc-maint at mcs.anl.gov ******************************************************************** From knepley at gmail.com Wed Jul 10 07:05:08 2013 From: knepley at gmail.com (Matthew Knepley) Date: Wed, 10 Jul 2013 07:05:08 -0500 Subject: [petsc-users] compile petsc with intel compiler In-Reply-To: <51DCEA61.7050507@purdue.edu> References: <51DC6E58.2060307@purdue.edu> <51DC7370.9070603@purdue.edu> <51DCEA61.7050507@purdue.edu> Message-ID: On Wed, Jul 10, 2013 at 12:00 AM, Michael Povolotskyi wrote: > Hello everybody, > unfortunately building petsc without fortran cannot work for me because I > need MUMPs that requires Scalapack that needs fortran. I played with the > options. As result the configuration runs okay, the build gives an error > that does not seem to be related to fortran: > Quit building with CMake. It complicates everything. Use the legacy build. Matt > [ 0%] Building CXX object > CMakeFiles/petscsys.dir/src/sys/totalview/tv_data_display.c.o > Building Fortran object > CMakeFiles/petscsys.dir/src/sys/f90-mod/petscsysmod.F.o > Building CXX object CMakeFiles/petscsys.dir/src/sys/python/pythonsys.c.o > [ 0%] /usr/include/c++/4.7/bits/stl_algobase.h(60): catastrophic error: > cannot open source file "bits/c++config.h" > #include > ^ > > /usr/include/c++/4.7/bits/stl_algobase.h(60): catastrophic error: cannot > open source file "bits/c++config.h" > #include > ^ > > compilation aborted for > /home/mpovolot/Code_intel/libs/petsc/build-real/src/sys/info/verboseinfo.c > (code 4) > compilation aborted for > /home/mpovolot/Code_intel/libs/petsc/build-real/src/sys/logging/plog.c > (code 4) > > I attach here the log files. > Any advise is highly appreciated. > Michael. > > On 7/9/2013 4:34 PM, Matthew Knepley wrote: > > On Tue, Jul 9, 2013 at 3:32 PM, Michael Povolotskyi wrote: > >> If I do not need to use petsc in fortran programs, can I build petsc >> without fortran and thus avoid this situation? >> Michael. >> >> >> On 07/09/2013 04:31 PM, Satish Balay wrote: >> >>> For some reason this issue comes up with mpi.mod provided by intel >>> mpi. 
>>> >>> We have a configure test for it - but looks like its not sufficient to >>> catch this issue. >>> >>> satish >>> >>> >>> On Tue, 9 Jul 2013, Matthew Knepley wrote: >>> >>> On Tue, Jul 9, 2013 at 3:11 PM, Michael Povolotskyi < >>>> mpovolot at purdue.edu>wrote: >>>> >>>> Dear Petsc users and developers, >>>>> I'm trying to build petsc with Intel compiler. >>>>> >>>>> 1) First, ask yourself whether you really want to build with the Intel >>>> compiler. Then ask again. >>>> >>>> 2) Do you need Fortran? If not, turn it off --with-fc=0. >>>> >>> ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ > > Matt > > >> 3) If you want Fortran and Intel (and have a hatred of free time), try >>>> the >>>> legacy build >>>> >>>> make all-legacy >>>> >>>> 4) If this is still broken, send the new make.log >>>> >>>> Thanks, >>>> >>>> Matt >>>> >>>> >>>> The configuration process runs okay (I attach the log here), >>>>> but I get an error when I build it: >>>>> -- Configuring done >>>>> -- Generating done >>>>> -- Build files have been written to: /home/mpovolot/Code_intel/** >>>>> libs/petsc/build-real/linux >>>>> Scanning dependencies of target petscsys >>>>> [ 0%] [ 0%] [ 0%] Building CXX object CMakeFiles/petscsys.dir/src/** >>>>> sys/info/verboseinfo.c.o >>>>> Building Fortran object CMakeFiles/petscsys.dir/src/** >>>>> sys/f90-mod/petscsysmod.F.o >>>>> Building CXX object CMakeFiles/petscsys.dir/src/** >>>>> sys/totalview/tv_data_display.**c.o >>>>> Building CXX object >>>>> CMakeFiles/petscsys.dir/src/**sys/python/pythonsys.c.o >>>>> Building CXX object CMakeFiles/petscsys.dir/src/**sys/logging/plog.c.o >>>>> /home/mpovolot/Code_intel/**libs/petsc/build-real/src/sys/** >>>>> f90-mod/petscsysmod.F:6.11: >>>>> >>>>> use mpi >>>>> 1 >>>>> Fatal Error: File 'mpi.mod' opened at (1) is not a GFORTRAN module file >>>>> make[6]: *** >>>>> [CMakeFiles/petscsys.dir/src/**sys/f90-mod/petscsysmod.F.o] >>>>> Error 1 >>>>> make[6]: *** Waiting for unfinished jobs.... >>>>> make[5]: *** [CMakeFiles/petscsys.dir/all] Error 2 >>>>> make[4]: *** [all] Error 2 >>>>> make[3]: *** [ccmake] Error 2 >>>>> make[2]: *** [cmake] Error 2 >>>>> ****************************ERROR************************************** >>>>> Error during compile, check linux/conf/make.log >>>>> Send it and linux/conf/configure.log to petsc-maint at mcs.anl.gov >>>>> >>>>> ************************************************************************ >>>>> >>>>> I attach here the make.log >>>>> What is strange to me that it has something to do with Gfortran, while >>>>> I >>>>> want to build everything with Intel. >>>>> Thank you for help, >>>>> Michael. >>>>> >>>>> -- >>>>> Michael Povolotskyi, PhD >>>>> Research Assistant Professor >>>>> Network for Computational Nanotechnology >>>>> 207 S Martin Jischke Drive >>>>> Purdue University, DLR, room 441-10 >>>>> West Lafayette, Indiana 47907 >>>>> >>>>> phone: +1-765-494-9396 >>>>> fax: +1-765-496-6026 >>>>> >>>>> >>>>> >>>> >>>> >> > > > -- > What most experimenters take for granted before they begin their > experiments is infinitely more interesting than any results to which their > experiments lead. > -- Norbert Wiener > > > -- What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. -- Norbert Wiener -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From mpovolot at purdue.edu Wed Jul 10 09:26:44 2013 From: mpovolot at purdue.edu (Michael Povolotskyi) Date: Wed, 10 Jul 2013 10:26:44 -0400 Subject: [petsc-users] compile petsc with intel compiler In-Reply-To: References: <51DC6E58.2060307@purdue.edu> <51DC7370.9070603@purdue.edu> <51DCEA61.7050507@purdue.edu> Message-ID: <51DD6F24.2030801@purdue.edu> Thank you Matt. Unfortunately 'make all-legacy' gives the same error: ========================================= libfast in: /home/mpovolot/Code_intel/libs/petsc/build-real/src libfast in: /home/mpovolot/Code_intel/libs/petsc/build-real/src/sys libfast in: /home/mpovolot/Code_intel/libs/petsc/build-real/src/sys/classes libfast in: /home/mpovolot/Code_intel/libs/petsc/build-real/src/sys/classes/viewer libfast in: /home/mpovolot/Code_intel/libs/petsc/build-real/src/sys/classes/viewer/impls libfast in: /home/mpovolot/Code_intel/libs/petsc/build-real/src/sys/classes/viewer/impls/socket /usr/include/c++/4.7/bits/stl_algobase.h(60): catastrophic error: cannot open source file "bits/c++config.h" #include ^ compilation aborted for send.c (code 4) Michael. On 07/10/2013 08:05 AM, Matthew Knepley wrote: > On Wed, Jul 10, 2013 at 12:00 AM, Michael Povolotskyi > > wrote: > > Hello everybody, > unfortunately building petsc without fortran cannot work for me > because I need MUMPs that requires Scalapack that needs fortran. I > played with the options. As result the configuration runs okay, > the build gives an error that does not seem to be related to fortran: > > > Quit building with CMake. It complicates everything. Use the legacy build. > > Matt > > [ 0%] Building CXX object > CMakeFiles/petscsys.dir/src/sys/totalview/tv_data_display.c.o > Building Fortran object > CMakeFiles/petscsys.dir/src/sys/f90-mod/petscsysmod.F.o > Building CXX object > CMakeFiles/petscsys.dir/src/sys/python/pythonsys.c.o > [ 0%] /usr/include/c++/4.7/bits/stl_algobase.h(60): catastrophic > error: cannot open source file "bits/c++config.h" > #include > ^ > > /usr/include/c++/4.7/bits/stl_algobase.h(60): catastrophic error: > cannot open source file "bits/c++config.h" > #include > ^ > > compilation aborted for > /home/mpovolot/Code_intel/libs/petsc/build-real/src/sys/info/verboseinfo.c > (code 4) > compilation aborted for > /home/mpovolot/Code_intel/libs/petsc/build-real/src/sys/logging/plog.c > (code 4) > > I attach here the log files. > Any advise is highly appreciated. > Michael. > > On 7/9/2013 4:34 PM, Matthew Knepley wrote: >> On Tue, Jul 9, 2013 at 3:32 PM, Michael Povolotskyi >> > wrote: >> >> If I do not need to use petsc in fortran programs, can I >> build petsc without fortran and thus avoid this situation? >> Michael. >> >> >> On 07/09/2013 04:31 PM, Satish Balay wrote: >> >> For some reason this issue comes up with mpi.mod provided >> by intel >> mpi. >> >> We have a configure test for it - but looks like its not >> sufficient to >> catch this issue. >> >> satish >> >> >> On Tue, 9 Jul 2013, Matthew Knepley wrote: >> >> On Tue, Jul 9, 2013 at 3:11 PM, Michael Povolotskyi >> >wrote: >> >> Dear Petsc users and developers, >> I'm trying to build petsc with Intel compiler. >> >> 1) First, ask yourself whether you really want to >> build with the Intel >> compiler. Then ask again. >> >> 2) Do you need Fortran? If not, turn it off --with-fc=0. 
>> >> ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ >> >> Matt >> >> 3) If you want Fortran and Intel (and have a hatred >> of free time), try the >> legacy build >> >> make all-legacy >> >> 4) If this is still broken, send the new make.log >> >> Thanks, >> >> Matt >> >> >> The configuration process runs okay (I attach the >> log here), >> but I get an error when I build it: >> -- Configuring done >> -- Generating done >> -- Build files have been written to: >> /home/mpovolot/Code_intel/** >> libs/petsc/build-real/linux >> Scanning dependencies of target petscsys >> [ 0%] [ 0%] [ 0%] Building CXX object >> CMakeFiles/petscsys.dir/src/** >> sys/info/verboseinfo.c.o >> Building Fortran object >> CMakeFiles/petscsys.dir/src/** >> sys/f90-mod/petscsysmod.F.o >> Building CXX object CMakeFiles/petscsys.dir/src/** >> sys/totalview/tv_data_display.**c.o >> Building CXX object >> CMakeFiles/petscsys.dir/src/**sys/python/pythonsys.c.o >> Building CXX object >> CMakeFiles/petscsys.dir/src/**sys/logging/plog.c.o >> /home/mpovolot/Code_intel/**libs/petsc/build-real/src/sys/** >> f90-mod/petscsysmod.F:6.11: >> >> use mpi >> 1 >> Fatal Error: File 'mpi.mod' opened at (1) is not >> a GFORTRAN module file >> make[6]: *** >> [CMakeFiles/petscsys.dir/src/**sys/f90-mod/petscsysmod.F.o] >> Error 1 >> make[6]: *** Waiting for unfinished jobs.... >> make[5]: *** [CMakeFiles/petscsys.dir/all] Error 2 >> make[4]: *** [all] Error 2 >> make[3]: *** [ccmake] Error 2 >> make[2]: *** [cmake] Error 2 >> ****************************ERROR************************************** >> Error during compile, check linux/conf/make.log >> Send it and linux/conf/configure.log to >> petsc-maint at mcs.anl.gov >> >> ************************************************************************ >> >> I attach here the make.log >> What is strange to me that it has something to do >> with Gfortran, while I >> want to build everything with Intel. >> Thank you for help, >> Michael. >> >> -- >> Michael Povolotskyi, PhD >> Research Assistant Professor >> Network for Computational Nanotechnology >> 207 S Martin Jischke Drive >> Purdue University, DLR, room 441-10 >> West Lafayette, Indiana 47907 >> >> phone: +1-765-494-9396 >> fax: +1-765-496-6026 >> >> >> >> >> >> >> >> >> -- >> What most experimenters take for granted before they begin their >> experiments is infinitely more interesting than any results to >> which their experiments lead. >> -- Norbert Wiener > > > > > -- > What most experimenters take for granted before they begin their > experiments is infinitely more interesting than any results to which > their experiments lead. > -- Norbert Wiener -------------- next part -------------- An HTML attachment was scrubbed... URL: From balay at mcs.anl.gov Wed Jul 10 09:41:01 2013 From: balay at mcs.anl.gov (Satish Balay) Date: Wed, 10 Jul 2013 09:41:01 -0500 (CDT) Subject: [petsc-users] compile petsc with intel compiler In-Reply-To: <51DD6F24.2030801@purdue.edu> References: <51DC6E58.2060307@purdue.edu> <51DC7370.9070603@purdue.edu> <51DCEA61.7050507@purdue.edu> <51DD6F24.2030801@purdue.edu> Message-ID: Perhaps stuff is not installed correctly on this machine? [petsc:~] petsc> dpkg -S /usr/include/c++/4.6/bits/stl_algobase.h libstdc++6-4.6-dev: /usr/include/c++/4.6/bits/stl_algobase.h [petsc:~] petsc> dpkg -S /usr/include/c++/4.6/x86_64-linux-gnu/bits/c++config.h libstdc++6-4.6-dev: /usr/include/c++/4.6/x86_64-linux-gnu/bits/c++config.h [petsc:~] petsc> So both bits/stl_algobase.h and bits/c++config.h are installed by libstdc++ package. 
What do you have for: ls /usr/include/c++/4.7/bits/stl_algobase.h ls /usr/include/c++/4.7/x86_64-linux-gnu/bits/c++config.h dpkg -S /usr/include/c++/4.7/bits/stl_algobase.h dpkg -S /usr/include/c++/4.7/x86_64-linux-gnu/bits/c++config.h [perhaps the files exist - but the intel compiler is not finding it?] Satish On Wed, 10 Jul 2013, Michael Povolotskyi wrote: > Thank you Matt. > Unfortunately 'make all-legacy' gives the same error: > ========================================= > libfast in: /home/mpovolot/Code_intel/libs/petsc/build-real/src > libfast in: /home/mpovolot/Code_intel/libs/petsc/build-real/src/sys > libfast in: /home/mpovolot/Code_intel/libs/petsc/build-real/src/sys/classes > libfast in: > /home/mpovolot/Code_intel/libs/petsc/build-real/src/sys/classes/viewer > libfast in: > /home/mpovolot/Code_intel/libs/petsc/build-real/src/sys/classes/viewer/impls > libfast in: > /home/mpovolot/Code_intel/libs/petsc/build-real/src/sys/classes/viewer/impls/socket > /usr/include/c++/4.7/bits/stl_algobase.h(60): catastrophic error: cannot open > source file "bits/c++config.h" > #include > ^ > > compilation aborted for send.c (code 4) > > > > Michael. > > On 07/10/2013 08:05 AM, Matthew Knepley wrote: > > On Wed, Jul 10, 2013 at 12:00 AM, Michael Povolotskyi > > wrote: > > > > Hello everybody, > > unfortunately building petsc without fortran cannot work for me > > because I need MUMPs that requires Scalapack that needs fortran. I > > played with the options. As result the configuration runs okay, > > the build gives an error that does not seem to be related to fortran: > > > > > > Quit building with CMake. It complicates everything. Use the legacy build. > > > > Matt > > > > [ 0%] Building CXX object > > CMakeFiles/petscsys.dir/src/sys/totalview/tv_data_display.c.o > > Building Fortran object > > CMakeFiles/petscsys.dir/src/sys/f90-mod/petscsysmod.F.o > > Building CXX object > > CMakeFiles/petscsys.dir/src/sys/python/pythonsys.c.o > > [ 0%] /usr/include/c++/4.7/bits/stl_algobase.h(60): catastrophic > > error: cannot open source file "bits/c++config.h" > > #include > > ^ > > > > /usr/include/c++/4.7/bits/stl_algobase.h(60): catastrophic error: > > cannot open source file "bits/c++config.h" > > #include > > ^ > > > > compilation aborted for > > /home/mpovolot/Code_intel/libs/petsc/build-real/src/sys/info/verboseinfo.c > > (code 4) > > compilation aborted for > > /home/mpovolot/Code_intel/libs/petsc/build-real/src/sys/logging/plog.c > > (code 4) > > > > I attach here the log files. > > Any advise is highly appreciated. > > Michael. > > > > On 7/9/2013 4:34 PM, Matthew Knepley wrote: > > > On Tue, Jul 9, 2013 at 3:32 PM, Michael Povolotskyi > > > > wrote: > > > > > > If I do not need to use petsc in fortran programs, can I > > > build petsc without fortran and thus avoid this situation? > > > Michael. > > > > > > > > > On 07/09/2013 04:31 PM, Satish Balay wrote: > > > > > > For some reason this issue comes up with mpi.mod provided > > > by intel > > > mpi. > > > > > > We have a configure test for it - but looks like its not > > > sufficient to > > > catch this issue. > > > > > > satish > > > > > > > > > On Tue, 9 Jul 2013, Matthew Knepley wrote: > > > > > > On Tue, Jul 9, 2013 at 3:11 PM, Michael Povolotskyi > > > >wrote: > > > > > > Dear Petsc users and developers, > > > I'm trying to build petsc with Intel compiler. > > > > > > 1) First, ask yourself whether you really want to > > > build with the Intel > > > compiler. Then ask again. > > > > > > 2) Do you need Fortran? 
If not, turn it off --with-fc=0. > > > > > > ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ > > > > > > Matt > > > > > > 3) If you want Fortran and Intel (and have a hatred > > > of free time), try the > > > legacy build > > > > > > make all-legacy > > > > > > 4) If this is still broken, send the new make.log > > > > > > Thanks, > > > > > > Matt > > > > > > > > > The configuration process runs okay (I attach the > > > log here), > > > but I get an error when I build it: > > > -- Configuring done > > > -- Generating done > > > -- Build files have been written to: > > > /home/mpovolot/Code_intel/** > > > libs/petsc/build-real/linux > > > Scanning dependencies of target petscsys > > > [ 0%] [ 0%] [ 0%] Building CXX object > > > CMakeFiles/petscsys.dir/src/** > > > sys/info/verboseinfo.c.o > > > Building Fortran object > > > CMakeFiles/petscsys.dir/src/** > > > sys/f90-mod/petscsysmod.F.o > > > Building CXX object CMakeFiles/petscsys.dir/src/** > > > sys/totalview/tv_data_display.**c.o > > > Building CXX object > > > CMakeFiles/petscsys.dir/src/**sys/python/pythonsys.c.o > > > Building CXX object > > > CMakeFiles/petscsys.dir/src/**sys/logging/plog.c.o > > > /home/mpovolot/Code_intel/**libs/petsc/build-real/src/sys/** > > > f90-mod/petscsysmod.F:6.11: > > > > > > use mpi > > > 1 > > > Fatal Error: File 'mpi.mod' opened at (1) is not > > > a GFORTRAN module file > > > make[6]: *** > > > [CMakeFiles/petscsys.dir/src/**sys/f90-mod/petscsysmod.F.o] > > > Error 1 > > > make[6]: *** Waiting for unfinished jobs.... > > > make[5]: *** [CMakeFiles/petscsys.dir/all] Error 2 > > > make[4]: *** [all] Error 2 > > > make[3]: *** [ccmake] Error 2 > > > make[2]: *** [cmake] Error 2 > > > ****************************ERROR************************************** > > > Error during compile, check linux/conf/make.log > > > Send it and linux/conf/configure.log to > > > petsc-maint at mcs.anl.gov > > > > > > ************************************************************************ > > > > > > I attach here the make.log > > > What is strange to me that it has something to do > > > with Gfortran, while I > > > want to build everything with Intel. > > > Thank you for help, > > > Michael. > > > > > > -- > > > Michael Povolotskyi, PhD > > > Research Assistant Professor > > > Network for Computational Nanotechnology > > > 207 S Martin Jischke Drive > > > Purdue University, DLR, room 441-10 > > > West Lafayette, Indiana 47907 > > > > > > phone: +1-765-494-9396 > > > fax: +1-765-496-6026 > > > > > > > > > > > > > > > > > > > > > > > > > > > -- What most experimenters take for granted before they begin > > > their > > > experiments is infinitely more interesting than any results to > > > which their experiments lead. > > > -- Norbert Wiener > > > > > > > > > > -- > > What most experimenters take for granted before they begin their experiments > > is infinitely more interesting than any results to which their experiments > > lead. 
> > -- Norbert Wiener > > From mpovolot at purdue.edu Wed Jul 10 10:05:11 2013 From: mpovolot at purdue.edu (Michael Povolotskyi) Date: Wed, 10 Jul 2013 11:05:11 -0400 Subject: [petsc-users] compile petsc with intel compiler In-Reply-To: References: <51DC6E58.2060307@purdue.edu> <51DC7370.9070603@purdue.edu> <51DCEA61.7050507@purdue.edu> <51DD6F24.2030801@purdue.edu> Message-ID: <51DD7827.90301@purdue.edu> Hi Satish, it looks like I have this file but not in "standard" location: locate bits/c++config.h /usr/include/x86_64-linux-gnu/c++/4.7/bits/c++config.h /usr/share/gccxml-0.9/GCC/4.4/bits/c++config.h /usr/share/gccxml-0.9/GCC/4.6/bits/c++config.h /usr/share/gccxml-0.9/GCC/4.7/bits/c++config.h Should I try to add -I usr/include/x86_64-linux-gnu/c++/4.7/ to the CPPLAGS? Michael. On 07/10/2013 10:41 AM, Satish Balay wrote: > Perhaps stuff is not installed correctly on this machine? > > [petsc:~] petsc> dpkg -S /usr/include/c++/4.6/bits/stl_algobase.h > libstdc++6-4.6-dev: /usr/include/c++/4.6/bits/stl_algobase.h > [petsc:~] petsc> dpkg -S /usr/include/c++/4.6/x86_64-linux-gnu/bits/c++config.h > libstdc++6-4.6-dev: /usr/include/c++/4.6/x86_64-linux-gnu/bits/c++config.h > [petsc:~] petsc> > > > So both bits/stl_algobase.h and bits/c++config.h are installed by libstdc++ package. > > What do you have for: > > ls /usr/include/c++/4.7/bits/stl_algobase.h > ls /usr/include/c++/4.7/x86_64-linux-gnu/bits/c++config.h > dpkg -S /usr/include/c++/4.7/bits/stl_algobase.h > dpkg -S /usr/include/c++/4.7/x86_64-linux-gnu/bits/c++config.h > > [perhaps the files exist - but the intel compiler is not finding it?] > > Satish > > > On Wed, 10 Jul 2013, Michael Povolotskyi wrote: > >> Thank you Matt. >> Unfortunately 'make all-legacy' gives the same error: >> ========================================= >> libfast in: /home/mpovolot/Code_intel/libs/petsc/build-real/src >> libfast in: /home/mpovolot/Code_intel/libs/petsc/build-real/src/sys >> libfast in: /home/mpovolot/Code_intel/libs/petsc/build-real/src/sys/classes >> libfast in: >> /home/mpovolot/Code_intel/libs/petsc/build-real/src/sys/classes/viewer >> libfast in: >> /home/mpovolot/Code_intel/libs/petsc/build-real/src/sys/classes/viewer/impls >> libfast in: >> /home/mpovolot/Code_intel/libs/petsc/build-real/src/sys/classes/viewer/impls/socket >> /usr/include/c++/4.7/bits/stl_algobase.h(60): catastrophic error: cannot open >> source file "bits/c++config.h" >> #include >> ^ >> >> compilation aborted for send.c (code 4) >> >> >> >> Michael. >> >> On 07/10/2013 08:05 AM, Matthew Knepley wrote: >>> On Wed, Jul 10, 2013 at 12:00 AM, Michael Povolotskyi >> > wrote: >>> >>> Hello everybody, >>> unfortunately building petsc without fortran cannot work for me >>> because I need MUMPs that requires Scalapack that needs fortran. I >>> played with the options. As result the configuration runs okay, >>> the build gives an error that does not seem to be related to fortran: >>> >>> >>> Quit building with CMake. It complicates everything. Use the legacy build. 
From balay at mcs.anl.gov  Wed Jul 10 10:12:05 2013
From: balay at mcs.anl.gov (Satish Balay)
Date: Wed, 10 Jul 2013 10:12:05 -0500 (CDT)
Subject: [petsc-users] compile petsc with intel compiler
In-Reply-To: <51DD7827.90301@purdue.edu>
References: <51DC6E58.2060307@purdue.edu> <51DC7370.9070603@purdue.edu> <51DCEA61.7050507@purdue.edu> <51DD6F24.2030801@purdue.edu> <51DD7827.90301@purdue.edu>
Message-ID: 

Hm - that's weird. Was it installed by dpkg or something else?
On my ubuntu 12.04 box I get:

  balay at petsc^~ $ locate bits/c++config.h
  /usr/include/c++/4.4/x86_64-linux-gnu/32/bits/c++config.h
  /usr/include/c++/4.4/x86_64-linux-gnu/bits/c++config.h
  /usr/include/c++/4.6/x86_64-linux-gnu/32/bits/c++config.h
  /usr/include/c++/4.6/x86_64-linux-gnu/bits/c++config.h

You can try specifying the include path with CXXCPPFLAGS [for the c++ compiler].

Satish

On Wed, 10 Jul 2013, Michael Povolotskyi wrote:

> it looks like I have this file, but not in the "standard" location:
> /usr/include/x86_64-linux-gnu/c++/4.7/bits/c++config.h
>
> Should I try to add -I /usr/include/x86_64-linux-gnu/c++/4.7/ to the CPPFLAGS?
From mpovolot at purdue.edu  Wed Jul 10 10:14:49 2013
From: mpovolot at purdue.edu (Michael Povolotskyi)
Date: Wed, 10 Jul 2013 11:14:49 -0400
Subject: [petsc-users] compile petsc with intel compiler
In-Reply-To: 
References: <51DC6E58.2060307@purdue.edu> <51DC7370.9070603@purdue.edu> <51DCEA61.7050507@purdue.edu> <51DD6F24.2030801@purdue.edu> <51DD7827.90301@purdue.edu>
Message-ID: <51DD7A69.2020501@purdue.edu>

Everything was installed by synaptic, which calls dpkg.
I'm reconfiguring and will let you know.
Michael.

On 07/10/2013 11:12 AM, Satish Balay wrote:
> Hm - that's weird. Was it installed by dpkg or something else?
>
> You can try specifying the include path with CXXCPPFLAGS [for the c++ compiler].
From klaus.zimmermann at physik.uni-freiburg.de  Wed Jul 10 10:50:38 2013
From: klaus.zimmermann at physik.uni-freiburg.de (Klaus Zimmermann)
Date: Wed, 10 Jul 2013 17:50:38 +0200
Subject: [petsc-users] compile petsc with intel compiler
In-Reply-To: <51DD7A69.2020501@purdue.edu>
References: <51DC6E58.2060307@purdue.edu> <51DC7370.9070603@purdue.edu> <51DCEA61.7050507@purdue.edu> <51DD6F24.2030801@purdue.edu> <51DD7827.90301@purdue.edu> <51DD7A69.2020501@purdue.edu>
Message-ID: <51DD82CE.2080807@physik.uni-freiburg.de>

Hi Michael,

for what it's worth, I have built petsc 3.1 and 3.2 with the intel
compiler. It went smoothly once I had the mpi environment set up. Your
output looks to me like your mpicc and/or mpiicc is still calling gcc.
Could you confirm that your mpicc actually calls icc by looking at
mpicc --version?

Best
Klaus

On 10.07.2013 17:14, Michael Povolotskyi wrote:
> Everything was installed by synaptic, which calls dpkg.
> I'm reconfiguring and will let you know.
From mpovolot at purdue.edu  Wed Jul 10 12:25:57 2013
From: mpovolot at purdue.edu (Michael Povolotskyi)
Date: Wed, 10 Jul 2013 13:25:57 -0400
Subject: [petsc-users] compile petsc with intel compiler
In-Reply-To: 
References: <51DC6E58.2060307@purdue.edu> <51DC7370.9070603@purdue.edu> <51DCEA61.7050507@purdue.edu> <51DD6F24.2030801@purdue.edu> <51DD7827.90301@purdue.edu>
Message-ID: <51DD9925.50102@purdue.edu>

Hello everybody,
with two tricks I managed to build petsc (the real, double-precision version).
The tricks are as follows:

1) add the following to the linker flags:
     -L/opt/intel/lib/intel64 -Wl,-rpath=/opt/intel/lib/intel64 -lintlc
   This makes the configuration run.

2) add the following to the compiler options:
     -I /usr/include/x86_64-linux-gnu/c++/4.7
   This allows the source to compile.

Thanks to everybody for help.
Michael.

On 07/10/2013 11:12 AM, Satish Balay wrote:
> You can try specifying the include path with CXXCPPFLAGS [for the c++ compiler].
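For anyone who hits the same combination later, Michael's two fixes
translate into configure options along the lines of the sketch below.
This is only a sketch: the MPI wrapper names, the PETSC_ARCH value, and
the exact gcc 4.7 multiarch include path are assumptions that depend on
the local installation.

  ./configure PETSC_ARCH=arch-linux-intel \
      --with-cc=mpicc --with-cxx=mpicxx --with-fc=mpif90 \
      CXXCPPFLAGS=-I/usr/include/x86_64-linux-gnu/c++/4.7 \
      LDFLAGS='-L/opt/intel/lib/intel64 -Wl,-rpath,/opt/intel/lib/intel64 -lintlc'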
From mpovolot at purdue.edu  Wed Jul 10 13:09:27 2013
From: mpovolot at purdue.edu (Michael Povolotskyi)
Date: Wed, 10 Jul 2013 14:09:27 -0400
Subject: [petsc-users] question about --with-fortran-kernels option
Message-ID: <51DDA357.4080107@purdue.edu>

Hello everybody,
does anybody know if it makes sense to build petsc with --with-fortran-kernels?
Thank you,
Michael.
--
Michael Povolotskyi, PhD
Research Assistant Professor
Network for Computational Nanotechnology
207 S Martin Jischke Drive
Purdue University, DLR, room 441-10
West Lafayette, Indiana 47907

phone: +1-765-494-9396
fax:   +1-765-496-6026

From bsmith at mcs.anl.gov  Wed Jul 10 13:13:48 2013
From: bsmith at mcs.anl.gov (Barry Smith)
Date: Wed, 10 Jul 2013 13:13:48 -0500
Subject: [petsc-users] question about --with-fortran-kernels option
In-Reply-To: <51DDA357.4080107@purdue.edu>
References: <51DDA357.4080107@purdue.edu>
Message-ID: 

On Jul 10, 2013, at 1:09 PM, Michael Povolotskyi wrote:

> Hello everybody,
> does anybody know if it makes sense to build petsc with --with-fortran-kernels?

   No. It is just there so that people who think Fortran compiles to faster
code than C can try it and see that it doesn't make any meaningful difference.

   Barry

From mpovolot at purdue.edu  Wed Jul 10 13:15:55 2013
From: mpovolot at purdue.edu (Michael Povolotskyi)
Date: Wed, 10 Jul 2013 14:15:55 -0400
Subject: [petsc-users] question about --with-fortran-kernels option
In-Reply-To: 
References: <51DDA357.4080107@purdue.edu>
Message-ID: <51DDA4DB.2000304@purdue.edu>

On 07/10/2013 02:13 PM, Barry Smith wrote:
> No. It is just there so that people who think Fortran compiles to faster
> code than C can try it and see that it doesn't make any meaningful difference.

I see. I had heard rumors that Fortran compiles to faster code than C when
dealing with complex numbers. But you do not agree with that. Am I right?
Michael.

From bsmith at mcs.anl.gov  Wed Jul 10 13:19:18 2013
From: bsmith at mcs.anl.gov (Barry Smith)
Date: Wed, 10 Jul 2013 13:19:18 -0500
Subject: [petsc-users] question about --with-fortran-kernels option
In-Reply-To: <51DDA4DB.2000304@purdue.edu>
References: <51DDA357.4080107@purdue.edu> <51DDA4DB.2000304@purdue.edu>
Message-ID: <2C5C27CF-F3ED-4845-9242-80F59C2D1D6F@mcs.anl.gov>

On Jul 10, 2013, at 1:15 PM, Michael Povolotskyi wrote:
> I see. I had heard rumors that Fortran compiles to faster code than C when
> dealing with complex numbers. But you do not agree with that. Am I right?

   Pick two PETSC_ARCH names, compile one with the fortran kernels, one
without. Link your application against each, run with -log_summary, and see
if it matters. Should take an hour tops on a reasonable machine, and then the
debate is over for your configuration.

   Barry
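Barry's experiment amounts to two builds that differ only in that one option.
A sketch of the procedure, where the PETSC_ARCH names are made up,
"[other options]" stands for whatever configure options are already in use,
and myapp stands in for the real application target:

  ./configure PETSC_ARCH=arch-c-kernels --with-fortran-kernels=0 [other options]
  make PETSC_ARCH=arch-c-kernels all
  ./configure PETSC_ARCH=arch-f-kernels --with-fortran-kernels=1 [other options]
  make PETSC_ARCH=arch-f-kernels all

  make PETSC_ARCH=arch-c-kernels myapp && ./myapp -log_summary
  make PETSC_ARCH=arch-f-kernels myapp && ./myapp -log_summary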
From john.mousel at gmail.com  Thu Jul 11 11:18:43 2013
From: john.mousel at gmail.com (John Mousel)
Date: Thu, 11 Jul 2013 11:18:43 -0500
Subject: [petsc-users] GAMG PC dependence on right-hand side
Message-ID: 

I'm trying to reuse the PC constructed by GAMG during a sub-iterative
procedure where KSPSolve is called with the same matrix repeatedly. The
right-hand side changes during the sub-iteration. I've been attempting to
use SAME_PRECONDITIONER for nit = 2..., but this seems to lead to very
different results than using DIFFERENT_PRECONDITIONER on each sub-iteration.
I'm using the following options:

  -pres_ksp_type preonly
  -pres_pc_type redistribute
  -pres_redistribute_ksp_type bcgsl
  -pres_redistribute_pc_type gamg
  -pres_redistribute_pc_gamg_threshold 0.01
  -pres_redistribute_mg_levels_ksp_type richardson
  -pres_redistribute_mg_levels_pc_type sor
  -pres_redistribute_mg_coarse_ksp_type richardson
  -pres_redistribute_mg_coarse_pc_type sor
  -pres_redistribute_mg_coarse_pc_sor_its 5
  -pres_redistribute_pc_gamg_type agg
  -pres_redistribute_pc_gamg_agg_nsmooths 1
  -pres_redistribute_pc_gamg_sym_graph true
  -pres_redistribute_ksp_initial_guess_nonzero 0

Is there a dependence on the right-hand side in GAMG somewhere?

John

From jedbrown at mcs.anl.gov  Thu Jul 11 13:00:51 2013
From: jedbrown at mcs.anl.gov (Jed Brown)
Date: Thu, 11 Jul 2013 13:00:51 -0500
Subject: [petsc-users] GAMG PC dependence on right-hand side
In-Reply-To: 
References: 
Message-ID: <87vc4hrn70.fsf@mcs.anl.gov>

John Mousel writes:

> I'm trying to reuse the PC constructed by GAMG during a sub-iterative
> procedure where KSPSolve is called with the same matrix repeatedly. The
> right-hand side changes during the sub-iteration. I've been attempting to
> use SAME_PRECONDITIONER for nit = 2...,

What do you mean by SAME_PRECONDITIONER?  If you are solving with the
same matrix, you can just call KSPSolve without KSPSetOperators.

> Is there a dependence on the right-hand side in GAMG somewhere?
It uses a pseudo-random number generator for eigenvalue estimation.  Try
adding

  -random_seed 1

to your run and see if it cuts the variation you see.

From john.mousel at gmail.com  Thu Jul 11 14:07:31 2013
From: john.mousel at gmail.com (John Mousel)
Date: Thu, 11 Jul 2013 14:07:31 -0500
Subject: [petsc-users] GAMG PC dependence on right-hand side
In-Reply-To: <87vc4hrn70.fsf@mcs.anl.gov>
References: <87vc4hrn70.fsf@mcs.anl.gov>
Message-ID: 

Jed,

I alternate between solving Helmholtz and Poisson equations in an outer
loop. The matrix is fixed for both operators during the process, and only
the right-hand side is updated for each system. I just tried -random_seed 1,
and it seems to have made all the difference in the world!

John

On Thu, Jul 11, 2013 at 1:00 PM, Jed Brown wrote:
> It uses a pseudo-random number generator for eigenvalue estimation.  Try
> adding -random_seed 1 to your run and see if it cuts the variation you see.

From jedbrown at mcs.anl.gov  Thu Jul 11 15:41:50 2013
From: jedbrown at mcs.anl.gov (Jed Brown)
Date: Thu, 11 Jul 2013 15:41:50 -0500
Subject: [petsc-users] GAMG PC dependence on right-hand side
In-Reply-To: 
References: <87vc4hrn70.fsf@mcs.anl.gov>
Message-ID: <87sizksub5.fsf@mcs.anl.gov>

John Mousel writes:

> I alternate between solving Helmholtz and Poisson equations in an outer
> loop. The matrix is fixed for both operators during the process, and only
> the right-hand side is updated for each system.

Do you have one KSP or two?  If one, I assume you're trying to preserve
memory, but there is likely more memory in the preconditioner than in the
Krylov space, and if you want to reuse the preconditioner, you have to
set it up anew each time.

> I just tried -random_seed 1, and it seems to have made all the difference
> in the world!

That's disconcerting, because we would like the algorithm to be
insensitive to such things.  How big was the performance difference?
Can you give us some information to reproduce?  (Maybe a smallish
example matrix that demonstrates this problem.)

From john.mousel at gmail.com  Thu Jul 11 15:47:45 2013
From: john.mousel at gmail.com (John Mousel)
Date: Thu, 11 Jul 2013 15:47:45 -0500
Subject: [petsc-users] GAMG PC dependence on right-hand side
In-Reply-To: <87sizksub5.fsf@mcs.anl.gov>
References: <87vc4hrn70.fsf@mcs.anl.gov> <87sizksub5.fsf@mcs.anl.gov>
Message-ID: 

I have two KSP contexts, helm%ksp and proj%ksp. I switch between the two by
calling KSPSetOperators. There was no performance difference, just a
different answer.

From jedbrown at mcs.anl.gov  Thu Jul 11 16:00:35 2013
From: jedbrown at mcs.anl.gov (Jed Brown)
Date: Thu, 11 Jul 2013 16:00:35 -0500
Subject: [petsc-users] GAMG PC dependence on right-hand side
In-Reply-To: 
References: <87vc4hrn70.fsf@mcs.anl.gov> <87sizksub5.fsf@mcs.anl.gov>
Message-ID: <87li5cstfw.fsf@mcs.anl.gov>

John Mousel writes:

> I have two KSP contexts, helm%ksp and proj%ksp. I switch between the two by
> calling KSPSetOperators.

Why call KSPSetOperators?  You should be able to call KSPSetOperators
once for helm%ksp and once for proj%ksp, then just call

  for each step:
    KSPSolve(helm%ksp, new_rhs, x)
    update new_rhs
    KSPSolve(proj%ksp, new_rhs, y)

> There was no performance difference, just a different answer.

Okay, but both equally accurate up to your convergence tolerance?
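In C, the pattern Jed describes comes out roughly as the sketch below. The
function, matrix, and vector names are placeholders rather than anything
from John's code, and it assumes the petsc-3.4 KSPSetOperators() signature
that takes a MatStructure flag:

  #include <petscksp.h>

  /* A_helm and A_proj are assembled elsewhere; the names here are
     placeholders, not the poster's actual variables. */
  PetscErrorCode SubIterate(Mat A_helm, Mat A_proj, Vec helm_rhs, Vec proj_rhs,
                            Vec x, Vec y, PetscInt nits)
  {
    KSP            helm_ksp, proj_ksp;
    PetscInt       it;
    PetscErrorCode ierr;

    PetscFunctionBegin;
    /* set up once: each KSP keeps its own operator and preconditioner */
    ierr = KSPCreate(PETSC_COMM_WORLD, &helm_ksp);CHKERRQ(ierr);
    ierr = KSPSetOperators(helm_ksp, A_helm, A_helm, SAME_NONZERO_PATTERN);CHKERRQ(ierr);
    ierr = KSPSetFromOptions(helm_ksp);CHKERRQ(ierr);
    ierr = KSPCreate(PETSC_COMM_WORLD, &proj_ksp);CHKERRQ(ierr);
    ierr = KSPSetOperators(proj_ksp, A_proj, A_proj, SAME_NONZERO_PATTERN);CHKERRQ(ierr);
    ierr = KSPSetFromOptions(proj_ksp);CHKERRQ(ierr);

    /* sub-iterate: only the right-hand sides change, so KSPSetOperators()
       is never called again and each preconditioner is built exactly once */
    for (it = 0; it < nits; it++) {
      ierr = KSPSolve(helm_ksp, helm_rhs, x);CHKERRQ(ierr);
      /* ... update proj_rhs from x here ... */
      ierr = KSPSolve(proj_ksp, proj_rhs, y);CHKERRQ(ierr);
    }

    ierr = KSPDestroy(&helm_ksp);CHKERRQ(ierr);
    ierr = KSPDestroy(&proj_ksp);CHKERRQ(ierr);
    PetscFunctionReturn(0);
  }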
From john.mousel at gmail.com  Thu Jul 11 16:27:49 2013
From: john.mousel at gmail.com (John Mousel)
Date: Thu, 11 Jul 2013 16:27:49 -0500
Subject: [petsc-users] PETSc option
Message-ID: 

If you are running the code, please set the option "-random_seed 1". This
seems to strongly affect the robustness of the PETSc solver.

John

From john.mousel at gmail.com  Thu Jul 11 16:30:53 2013
From: john.mousel at gmail.com (John Mousel)
Date: Thu, 11 Jul 2013 16:30:53 -0500
Subject: [petsc-users] GAMG PC dependence on right-hand side
In-Reply-To: <87li5cstfw.fsf@mcs.anl.gov>
References: <87vc4hrn70.fsf@mcs.anl.gov> <87sizksub5.fsf@mcs.anl.gov> <87li5cstfw.fsf@mcs.anl.gov>
Message-ID: 

I call it because the KSPs get passed to an AxbSolver routine that sets up
the linear solve, and that routine calls KSPSetOperators. Does it cause a
problem in doing that if I want to reuse the PC?

Yes, both solutions were equally accurate.

From jedbrown at mcs.anl.gov  Thu Jul 11 17:10:08 2013
From: jedbrown at mcs.anl.gov (Jed Brown)
Date: Thu, 11 Jul 2013 17:10:08 -0500
Subject: [petsc-users] GAMG PC dependence on right-hand side
In-Reply-To: 
References: <87vc4hrn70.fsf@mcs.anl.gov> <87sizksub5.fsf@mcs.anl.gov> <87li5cstfw.fsf@mcs.anl.gov>
Message-ID: <87a9lssq7z.fsf@mcs.anl.gov>

John Mousel writes:

> I call it because the KSPs get passed to an AxbSolver routine that sets up
> the linear solve, and that routine calls KSPSetOperators. Does it cause a
> problem in doing that if I want to reuse the PC?

If you use SAME_PRECONDITIONER, then the same thing will be used, but if
you know when to use SAME_PRECONDITIONER, you could instead just not
call KSPSetOperators.  At some point, recycling methods might reuse some
information when the matrix has not changed, though our usual mode of
operation is to absorb that information into the preconditioner.

From mark.adams at columbia.edu  Thu Jul 11 17:28:30 2013
From: mark.adams at columbia.edu (Mark F. Adams)
Date: Thu, 11 Jul 2013 18:28:30 -0400
Subject: [petsc-users] GAMG PC dependence on right-hand side
In-Reply-To: 
References: <87vc4hrn70.fsf@mcs.anl.gov> <87sizksub5.fsf@mcs.anl.gov>
Message-ID: <2E2F21D3-F98A-4B86-B52D-87A68E052278@columbia.edu>
As Jed mentioned, you should not see a significant difference with
'-random_seed 1'. This fix may just be papering over a problem that will
bite you eventually.

If you run with '-XXX_gamg_verbose 2' the eigenvalues used will be printed
out. Try running with and without '-random_seed 1' and send the output. We
are looking to see whether the computed eigenvalues on these second (bad)
solves are different in the two runs.

And I'm a bit puzzled, because I don't know how '-random_seed 1' could be
used in GAMG to compute eigen estimates...

On Jul 11, 2013, at 4:47 PM, John Mousel wrote:

> I have two KSP contexts, helm%ksp and proj%ksp. I switch between the two by
> calling KSPSetOperators. There was no performance difference, just a
> different answer.

From jedbrown at mcs.anl.gov  Fri Jul 12 00:14:13 2013
From: jedbrown at mcs.anl.gov (Jed Brown)
Date: Fri, 12 Jul 2013 00:14:13 -0500
Subject: [petsc-users] GAMG PC dependence on right-hand side
In-Reply-To: <2E2F21D3-F98A-4B86-B52D-87A68E052278@columbia.edu>
References: <87vc4hrn70.fsf@mcs.anl.gov> <87sizksub5.fsf@mcs.anl.gov> <2E2F21D3-F98A-4B86-B52D-87A68E052278@columbia.edu>
Message-ID: <8738rks6l6.fsf@mcs.anl.gov>

"Mark F. Adams" writes:

> And I'm a bit puzzled, because I don't know how '-random_seed 1' could be
> used in GAMG to compute eigen estimates...

Look at PetscRandomSetFromOptions.

  ierr = PetscOptionsInt("-random_seed","Seed to use to generate random numbers","PetscRandomSetSeed",0,&seed,&set);CHKERRQ(ierr);
  if (set) {
    ierr = PetscRandomSetSeed(rnd,(unsigned long int)seed);CHKERRQ(ierr);
    ierr = PetscRandomSeed(rnd);CHKERRQ(ierr);
  }

Otherwise the default seed is used, but that changes for repeat solves.
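The user-visible side of this is small: -random_seed just forces a fixed
seed on the PetscRandom, whereas the default seed differs between repeat
solves. A minimal sketch of seeding a PetscRandom explicitly; the variable
names are illustrative, not taken from the GAMG source:

  #include <petscsys.h>

  PetscRandom    rnd;
  PetscScalar    val;
  PetscErrorCode ierr;

  ierr = PetscRandomCreate(PETSC_COMM_WORLD, &rnd);CHKERRQ(ierr);
  ierr = PetscRandomSetFromOptions(rnd);CHKERRQ(ierr); /* honors -random_seed */
  ierr = PetscRandomSetSeed(rnd, 1);CHKERRQ(ierr);     /* or force a seed in code */
  ierr = PetscRandomSeed(rnd);CHKERRQ(ierr);           /* reseed the generator */
  ierr = PetscRandomGetValue(rnd, &val);CHKERRQ(ierr); /* values now reproducible */
  ierr = PetscRandomDestroy(&rnd);CHKERRQ(ierr);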
I installed it on a BlueGene/P with the example for that machine and now I am going to install it for BlueGene/Q and I don't see an example file for that machine. I had used the same arch-bgp-... file with modified known-sizeof things for the 64-bit architecture and the installs and examples usually were ok. Now one user had problems and in the errors there was something about is-color, so do I have to omit the "--with-is-color-value-type=short" for bgq? Thanks for your answers Inge Gutheil -- -- Inge Gutheil Juelich Supercomputing Centre Institute for Advanced Simulation Forschungszentrum Juelich GmbH 52425 Juelich, Germany Phone: +49-2461-61-3135 Fax: +49-2461-61-6656 E-mail: i.gutheil at fz-juelich.de ------------------------------------------------------------------------------------------------ ------------------------------------------------------------------------------------------------ Forschungszentrum Juelich GmbH 52425 Juelich Sitz der Gesellschaft: Juelich Eingetragen im Handelsregister des Amtsgerichts Dueren Nr. HR B 3498 Vorsitzender des Aufsichtsrats: MinDir Dr. Karl Eugen Huthmacher Geschaeftsfuehrung: Prof. Dr. Achim Bachem (Vorsitzender), Karsten Beneke (stellv. Vorsitzender), Prof. Dr.-Ing. Harald Bolt, Prof. Dr. Sebastian M. Schmidt ------------------------------------------------------------------------------------------------ ------------------------------------------------------------------------------------------------ Das Forschungszentrum oeffnet seine Tueren am Sonntag, 29. September, von 10:00 bis 17:00 Uhr: http://www.tagderneugier.de From salkork at ornl.gov Fri Jul 12 08:46:39 2013 From: salkork at ornl.gov (Salko, Robert K.) Date: Fri, 12 Jul 2013 09:46:39 -0400 Subject: [petsc-users] (no subject) Message-ID: I'm attempting to use PETSc for the parallel solution of a large sparse matrix in a computer program I work with. The program is written in Fortran. I have been able to get the program working using PETSc and giving correct results, but the issue is its performance. It is much slower than solving the matrix in serial. I seem to have narrowed the major PETSc time sink down to actually constructing the matrix (getting the matrix terms into the PETSc matrix). Specifically, it spends a very long time in this step:

do values_counter = 1, total_values
   call MatSetValues(A,1,coeff_row(values_counter),1,coeff_col(values_counter),coeff(values_counter),INSERT_VALUES,petsc_err)
end do

Building the right-hand-side vector for this matrix is much faster:

call VecSetValues(y,nrows,index_of_values,rhs_values,INSERT_VALUES,petsc_err)

The difference, as I see it, is that I'm sending the matrix coefficients to the PETSc matrix one element at a time whereas with the RHS vector, I'm sending all the values in one shot. There are 7 times more elements in the matrix than the vector, so I get that it will take longer. So I did some timing studies using MPI_Wtime, and putting the matrix values into the PETSc matrix is taking 4,800 times longer than putting the values into the RHS vector. Then there is the actual assembly of the matrix that you have to do using MatAssemblyBegin and MatAssemblyEnd. That takes even longer than assigning values to the matrix. More than twice as long. Actually solving the matrix and then distributing the solution back to the domains in the program takes about as much time as the RHS vector assignment step, so that is nearly 5,000 times faster than putting data into PETSc.
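One thing I am not sure about is preallocation. The manual pages suggest declaring the nonzero pattern before any values are inserted, with something like the following (shown in C notation since that is what the manual pages use; d_nnz and o_nnz are per-row nonzero counts that I would have to compute from my stored AIJ data, so this is only a sketch of what I think is meant, not something I have tried):

/* Declare the nonzero pattern up front so that MatSetValues never has to
   allocate and copy while values are being inserted. d_nnz[i] and o_nnz[i]
   are the nonzero counts of local row i that fall in the diagonal and
   off-diagonal blocks of the parallel matrix, respectively. */
MatMPIAIJSetPreallocation(A, 0, d_nnz, 0, o_nnz);

Is missing preallocation the kind of thing that could cause a slowdown on this scale?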
Surely there must be something wrong that I'm doing with the assignment of data into PETSc. So my question is, is there a more efficient way to build the matrix? I already have my coefficients stored in AIJ form in the program. I understand that PETSc uses AIJ. Can't I somehow just send the three AIJ vectors to PETSc in one shot? Thanks, Bob From abhyshr at mcs.anl.gov Fri Jul 12 09:00:11 2013 From: abhyshr at mcs.anl.gov (Shri) Date: Fri, 12 Jul 2013 09:00:11 -0500 Subject: [petsc-users] (no subject) In-Reply-To: References: Message-ID: <7541049C-E376-4685-9A71-CBFB2A1D0F78@mcs.anl.gov> http://www.mcs.anl.gov/petsc/documentation/faq.html#efficient-assembly On Jul 12, 2013, at 8:46 AM, "Salko, Robert K." wrote: > I'm attempting to implement PETSc for the parallel solution of a large sparse matrix in a computer program I work with. The program is written in Fortran. I currently have been able to get the program working using PETSc and giving correct results, but the issue is its performance. It is much slower than solving the matrix in serial. I seem to have narrowed the major PETSc time sink down to actually constructing the matrix (getting the matrix terms into the PETSc matrix). Specifically, it is hanging for a very long time to do this step: > > do values_counter = 1, total_values > call MatSetValues(A,1,coeff_row(values_counter),1,coeff_col(values_counter),coeff(values_counter),INSERT_VALUES,petsc_err) > end do > > Building the right-hand-side vector for this matrix is much faster: > > call VecSetValues(y,nrows,index_of_values,rhs_values,INSERT_VALUES,petsc_err) > > The difference, as I see it, is that I'm sending the matrix coefficients to the PETSc matrix one element at a time whereas with the RHS vector, I'm sending all the values in one shot. There are 7 times more elements in the matrix than the vector, so I get that it will take longer. So I did some timing studies using MPI_Wtime and putting the matrix values into the PETSc matrix is taking 4,800 times longer than putting the values into the RHS vector. > > Then there is the actual assembly of the matrix that you have to do using MatAssemblyBegin and MatAssemblyEnd. That takes even longer than assigning values to the matrix. More than twice as long. Actually solving the matrix and then distributing the solution back to the domains in the program takes about as much time as the RHS vector assignment step, so that is nearly 5,000 times faster than putting data into PETSc. Surely there must be something wrong that I'm doing with the assignment of data into PETSc. > > So my question is, is there a more efficient way to build the matrix? I already have my coefficients stored in AIJ form in the program. I understand that PETSc uses AIJ. Can't I somehow just send the three AIJ vectors to PETSc in one shot? > > Thanks, > Bob -------------- next part -------------- An HTML attachment was scrubbed... URL: From mark.adams at columbia.edu Fri Jul 12 12:31:58 2013 From: mark.adams at columbia.edu (Mark F. 
Adams) Date: Fri, 12 Jul 2013 13:31:58 -0400 Subject: Re: [petsc-users] GAMG PC dependence on right-hand side In-Reply-To: <8738rks6l6.fsf@mcs.anl.gov> References: <87vc4hrn70.fsf@mcs.anl.gov> <87sizksub5.fsf@mcs.anl.gov> <2E2F21D3-F98A-4B86-B52D-87A68E052278@columbia.edu> <8738rks6l6.fsf@mcs.anl.gov> Message-ID: <7B54E63F-BBFE-479A-92B7-FFD181D69595@columbia.edu>

>
> if (set) {
> ierr = PetscRandomSetSeed(rnd,(unsigned long int)seed);CHKERRQ(ierr);
> ierr = PetscRandomSeed(rnd);CHKERRQ(ierr);
> }
>
> Otherwise the default seed is used, but that changes for repeat solves.

OK, right I see. Anyway it is bad that the code is sensitive to the random RHS in the eigen estimator. These operators are apparently not symmetric, and I'm not sure about the nature of this non-symmetry and how it can affect eigen estimators. This is a fundamental disadvantage of smoothed aggregation. So at this point it is not clear whether 1) there is a simple code/parameter change that can fix this, 2) there is a code bug, or 3) there is a fundamental problem with cheby on these (asymmetric) operators. It would be nice to verify this and get some numbers on the differences in eigen estimates in these two (good and bad) solves. Also, hypre is a good robust alternative if you need to get stuff done now. From bsmith at mcs.anl.gov Fri Jul 12 12:59:03 2013 From: bsmith at mcs.anl.gov (Barry Smith) Date: Fri, 12 Jul 2013 12:59:03 -0500 Subject: Re: [petsc-users] What does "--with-is-color-value-type=short" mean? In-Reply-To: <51DFC9A1.8070808@fz-juelich.de> References: <51DFC9A1.8070808@fz-juelich.de> Message-ID: <48C9D75E-5D8E-4990-93CC-B89658331EDC@mcs.anl.gov> Inge, We will need the entire error message to know what the issue is. The --with-is-color-value-type=short should be fine and is not likely the cause of the problem. Barry On Jul 12, 2013, at 4:17 AM, Inge Gutheil wrote: > Hello, > sorry, I do not use PETSc, I only install it for those who want to use > it. I installed it on a BlueGene/P with the example for that machine > and now I am going to install it for BlueGene/Q and I don't see an > example file for that machine. > I had used the same arch-bgp-... file with modified known-sizeof things > for the 64-bit architecture and the installs and examples usually were > ok. Now one user had problems and in the errors there was something > about is-color, so do I have to omit the "--with-is-color-value-type=short" > for bgq? > Thanks for your answers > Inge Gutheil > > -- > -- > > Inge Gutheil > Juelich Supercomputing Centre > Institute for Advanced Simulation > Forschungszentrum Juelich GmbH > 52425 Juelich, Germany > > Phone: +49-2461-61-3135 > Fax: +49-2461-61-6656 > E-mail: i.gutheil at fz-juelich.de > > > > ------------------------------------------------------------------------------------------------ > ------------------------------------------------------------------------------------------------ > Forschungszentrum Juelich GmbH > 52425 Juelich > Sitz der Gesellschaft: Juelich > Eingetragen im Handelsregister des Amtsgerichts Dueren Nr. HR B 3498 > Vorsitzender des Aufsichtsrats: MinDir Dr. Karl Eugen Huthmacher > Geschaeftsfuehrung: Prof. Dr. Achim Bachem (Vorsitzender), > Karsten Beneke (stellv. Vorsitzender), Prof. Dr.-Ing. Harald Bolt, > Prof. Dr. Sebastian M. Schmidt > ------------------------------------------------------------------------------------------------ > ------------------------------------------------------------------------------------------------ > > Das Forschungszentrum oeffnet seine Tueren am Sonntag, 29.
September, von 10:00 bis 17:00 Uhr: http://www.tagderneugier.de From john.mousel at gmail.com Fri Jul 12 13:00:21 2013 From: john.mousel at gmail.com (John Mousel) Date: Fri, 12 Jul 2013 13:00:21 -0500 Subject: [petsc-users] GAMG PC dependence on right-hand side In-Reply-To: <7B54E63F-BBFE-479A-92B7-FFD181D69595@columbia.edu> References: <87vc4hrn70.fsf@mcs.anl.gov> <87sizksub5.fsf@mcs.anl.gov> <2E2F21D3-F98A-4B86-B52D-87A68E052278@columbia.edu> <8738rks6l6.fsf@mcs.anl.gov> <7B54E63F-BBFE-479A-92B7-FFD181D69595@columbia.edu> Message-ID: Mark, I compared the eigen estimates and they were identical. I found an issue that was somehow miraculously covered up by -random_seed 1 for 10,000 matrix solves. I was initially sent on a wild chase by noting a strong behavior difference between DIFFERENT_NONZERO_PATTERN and SAME_PRECONDITIONER, where DIFFERENT_NONZERO_PATTERN gave the correct solution, but it was totally a fluke. Sorry about that. John On Fri, Jul 12, 2013 at 12:31 PM, Mark F. Adams wrote: > > > > > > if (set) { > > ierr = PetscRandomSetSeed(rnd,(unsigned long int)seed);CHKERRQ(ierr); > > ierr = PetscRandomSeed(rnd);CHKERRQ(ierr); > > } > > > > Otherwise the default seed is used, but that changes for repeat solves. > > OK, right I see. > > Anyway it is bad that the code is sensitive to the random RHS in the eigen > estimator. These operators are not symmetric apparently and I'm not sure > about the nature of this non-symetry and how that can effect eigen > estimators. This is a fundamental disadvantage of smoothed aggregation. > > So at this point it is not clear if 1) there is an simple code/parameter > change in the code that can fix this, 2) a code bug, or 3) a fundamental > problem with cheby on these (asymmetric operators). > > It would be nice to verify and get some number on the differences in eigen > estimates in these two (good and bad) solves. > > Also, hypre is a good robust alternative if you need to get stuff done now. -------------- next part -------------- An HTML attachment was scrubbed... URL: From hsahasra at purdue.edu Sat Jul 13 10:52:34 2013 From: hsahasra at purdue.edu (=?utf-8?B?aHNhaGFzcmFAcHVyZHVlLmVkdQ==?=) Date: Sat, 13 Jul 2013 11:52:34 -0400 Subject: [petsc-users] =?utf-8?q?Extracting_data_from_a_Petsc_matrix?= Message-ID: <201307131552.r6DFqZIF006475@mailhub245.itcs.purdue.edu> Hi, I am working on solving a system of linear equations with square matrix. I'm first factoring the matrix using LU decomposition. I want to do the LU decomposition step using MAGMA on GPUs. MAGMA library implements LAPACK functions on a CPU+GPU based system. So my question is, how do I extract the data from a Petsc Mat so that it can be sent to the dgetrf routine in MAGMA. Is there any need for duplicating the data for this step? Thanks! Harshad -------------- next part -------------- An HTML attachment was scrubbed... URL: From knepley at gmail.com Sat Jul 13 11:17:32 2013 From: knepley at gmail.com (Matthew Knepley) Date: Sat, 13 Jul 2013 11:17:32 -0500 Subject: [petsc-users] Extracting data from a Petsc matrix In-Reply-To: <201307131552.r6DFqZIF006475@mailhub245.itcs.purdue.edu> References: <201307131552.r6DFqZIF006475@mailhub245.itcs.purdue.edu> Message-ID: On Sat, Jul 13, 2013 at 10:52 AM, hsahasra at purdue.edu wrote: > Hi, > > I am working on solving a system of linear equations with square matrix. > I'm first factoring the matrix using LU decomposition. > > I want to do the LU decomposition step using MAGMA on GPUs. 
MAGMA library > implements LAPACK functions on a CPU+GPU based system. > > So my question is, how do I extract the data from a Petsc Mat so that it > can be sent to the dgetrf routine in MAGMA. > http://www.mcs.anl.gov/petsc/petsc-current/docs/manualpages/Mat/MatDenseGetArray.html Matt > Is there any need for duplicating the data for this step? > > Thanks! > Harshad > > > -- What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. -- Norbert Wiener -------------- next part -------------- An HTML attachment was scrubbed... URL: From jedbrown at mcs.anl.gov Sat Jul 13 11:43:08 2013 From: jedbrown at mcs.anl.gov (Jed Brown) Date: Sat, 13 Jul 2013 08:43:08 -0800 Subject: [petsc-users] Extracting data from a Petsc matrix In-Reply-To: <201307131552.r6DFqZIF006475@mailhub245.itcs.purdue.edu> References: <201307131552.r6DFqZIF006475@mailhub245.itcs.purdue.edu> Message-ID: <87ehb2fm1v.fsf@mcs.anl.gov> "hsahasra at purdue.edu" writes: > Hi, > > I am working on solving a system of linear equations with square > matrix. I'm first factoring the matrix using LU decomposition. I assume you're solving a dense problem because that is all MAGMA does. > I want to do the LU decomposition step using MAGMA on GPUs. MAGMA > library implements LAPACK functions on a CPU+GPU based system. > > So my question is, how do I extract the data from a Petsc Mat so that > it can be sent to the dgetrf routine in MAGMA. MatDenseGetArray > Is there any need for duplicating the data for this step? You're on your own for storage of factors. Alternatively, you could add library support so that you could use PCLU and '-pc_factor_mat_solver_package magma' (or PCFactorSetMatSolverPackage). Doing this is not a priority for us, but we can provide guidance if you want to tackle it. -------------- next part -------------- A non-text attachment was scrubbed... 
Name: not available Type: application/pgp-signature Size: 835 bytes Desc: not available URL: From bsmith at mcs.anl.gov Sat Jul 13 11:54:00 2013 From: bsmith at mcs.anl.gov (Barry Smith) Date: Sat, 13 Jul 2013 11:54:00 -0500 Subject: [petsc-users] Extracting data from a Petsc matrix In-Reply-To: <87ehb2fm1v.fsf@mcs.anl.gov> References: <201307131552.r6DFqZIF006475@mailhub245.itcs.purdue.edu> <87ehb2fm1v.fsf@mcs.anl.gov> Message-ID: <055A4404-35F6-41AF-BB1B-D2E21B3C90F5@mcs.anl.gov> If MAGMA "implements LAPACK functions on a CPU+GPU based system" using the standard BLAS/LAPACK interface then you should be able to simply ./configure PETSc to use the MAGMA BLAS/LAPACK library instead of the standard BLAS/LAPACK library see http://www.mcs.anl.gov/petsc/documentation/installation.html#blas-lapack because PETSc for SeqDense matrices simply calls the BLAS/LAPACK routines, see for example MatLUFactor_SeqDense() which simply calls ierr = PetscBLASIntCast(A->cmap->n,&n);CHKERRQ(ierr); ierr = PetscBLASIntCast(A->rmap->n,&m);CHKERRQ(ierr); if (!mat->pivots) { ierr = PetscMalloc((A->rmap->n+1)*sizeof(PetscBLASInt),&mat->pivots);CHKERRQ(ierr); ierr = PetscLogObjectMemory(A,A->rmap->n*sizeof(PetscBLASInt));CHKERRQ(ierr); } if (!A->rmap->n || !A->cmap->n) PetscFunctionReturn(0); ierr = PetscFPTrapPush(PETSC_FP_TRAP_OFF);CHKERRQ(ierr); PetscStackCallBLAS("LAPACKgetrf",LAPACKgetrf_(&m,&n,mat->v,&mat->lda,mat->pivots,&info)); ierr = PetscFPTrapPop();CHKERRQ(ierr); if (info<0) SETERRQ(PETSC_COMM_SELF,PETSC_ERR_LIB,"Bad argument to LU factorization"); if (info>0) SETERRQ(PETSC_COMM_SELF,PETSC_ERR_MAT_LU_ZRPVT,"Bad LU factorization"); If MAGMA actually changes the calling sequences of the BLAS/LAPACK routines then it won't work automatically, though it may be possible to modify the routines in PETSc's dense.c source code to call the MAGMA versions instead of the "standard" BLAS/LAPACK routines. Barry On Jul 13, 2013, at 11:43 AM, Jed Brown wrote: > "hsahasra at purdue.edu" writes: > >> Hi, >> >> I am working on solving a system of linear equations with square >> matrix. I'm first factoring the matrix using LU decomposition. > > I assume you're solving a dense problem because that is all MAGMA does. > >> I want to do the LU decomposition step using MAGMA on GPUs. MAGMA >> library implements LAPACK functions on a CPU+GPU based system. >> >> So my question is, how do I extract the data from a Petsc Mat so that >> it can be sent to the dgetrf routine in MAGMA. > > MatDenseGetArray > >> Is there any need for duplicating the data for this step? > > You're on your own for storage of factors. Alternatively, you could add > library support so that you could use PCLU and > '-pc_factor_mat_solver_package magma' (or PCFactorSetMatSolverPackage). > Doing this is not a priority for us, but we can provide guidance if you > want to tackle it. From potaman at outlook.com Sun Jul 14 00:17:05 2013 From: potaman at outlook.com (subramanya sadasiva) Date: Sun, 14 Jul 2013 01:17:05 -0400 Subject: [petsc-users] Really Really Really Weird SNES behaviour Message-ID: Hi, I am observing some really really really weird SNES behavior in my SNESVI code called through Libmesh. This behaviour appeared after I changed some initial conditions and only happens in an optimized build. I am running this code on a Macbook pro running os x 10.8.. When the debug code is run, the residuals computed for the initial conditions provided give norms which are of the expected magnitude.. 
so the SNES_Monitor output is,

solving the cahn hilliard time step
 0 SNES Function norm 8.223262421671e-01
 1 SNES Function norm 3.793806858333e-03
Nonlinear solve did not converge due to DIVERGED_MAX_IT iterations 1

The output from SNES_Monitor with the optimized code on the other hand is,

solving the cahn hilliard time step
 0 SNES Function norm 5.153882032022e+19
 1 SNES Function norm 1.446612980133e+19
Nonlinear solve did not converge due to DIVERGED_MAX_IT iterations 1

Absolutely nothing else has changed in the code except that one code is built with a debugging and one with an optimized version of the code.

Any ideas?
Subramanya
-------------- next part -------------- An HTML attachment was scrubbed... URL: From knepley at gmail.com Sun Jul 14 05:51:12 2013 From: knepley at gmail.com (Matthew Knepley) Date: Sun, 14 Jul 2013 05:51:12 -0500 Subject: Re: [petsc-users] Really Really Really Weird SNES behaviour In-Reply-To: References: Message-ID: On Sun, Jul 14, 2013 at 12:17 AM, subramanya sadasiva wrote: > Hi, > I am observing some really really really weird SNES behavior in my SNESVI > code called through Libmesh. This behaviour appeared after I changed some > initial conditions and only happens in an optimized build. I am running > this code on a Macbook pro running os x 10.8.. > > When the debug code is run, the residuals computed for the initial > conditions provided give norms which are of the expected magnitude.. so the > SNES_Monitor output is, > > solving the cahn hilliard time step > 0 SNES Function norm 8.223262421671e-01 > 1 SNES Function norm 3.793806858333e-03 > Nonlinear solve did not converge due to DIVERGED_MAX_IT iterations 1 > > The output from SNES_Monitor with the optimized code on the other hand is, > > solving the cahn hilliard time step > 0 SNES Function norm 5.153882032022e+19 > 1 SNES Function norm 1.446612980133e+19 > Nonlinear solve did not converge due to DIVERGED_MAX_IT iterations 1 > > > Absolutely nothing else has changed in the code except that one code is > built with a debugging and one with an optimized version of the code. > > Any ideas? > This sounds like an uninitialized variable in your residual evaluation routine. Very often, debugging code will initialize variables to 0, whereas optimized leaves them alone. Matt > Subramanya > -- What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. -- Norbert Wiener -------------- next part -------------- An HTML attachment was scrubbed... URL: From codypermann at gmail.com Sun Jul 14 07:26:27 2013 From: codypermann at gmail.com (Cody Permann) Date: Sun, 14 Jul 2013 06:26:27 -0600 Subject: Re: [petsc-users] [Libmesh-users] Really Really Really Weird SNES behaviour In-Reply-To: References: Message-ID: <-8697055785161439862@unknownmsgid> Sounds like a memory corruption problem. I know several users that are experts at writing code like that. Unfortunately, it can be difficult to track down on OS X due to a lack of working memory analysis tools.

1. Since this problem popped up after you changed your initial condition, you may just want to start by carefully looking at your code and thinking about cases where your variables might not be initialized, or where calculations could potentially produce underflow or overflow conditions.
2. If you have access to a Linux box, then you will have bounds checked STL containers at your disposal in debug mode. You'll also be able to run your code through valgrind.
Both of those operations are relatively easy to perform. Let us know if you have any questions. Cody Sent from my iPhone On Jul 13, 2013, at 11:17 PM, subramanya sadasiva wrote: > Hi, I am observing some really really really weird SNES behavior in my SNESVI code called through Libmesh. This behaviour appeared after I changed some initial conditions and only happens in an optimized build. I am running this code on a Macbook pro running os x 10.8.. > When the debug code is run, the residuals computed for the initial conditions provided give norms which are of the expected magnitude.. so the SNES_Monitor output is, > solving the cahn hilliard time step 0 SNES Function norm 8.223262421671e-01 1 SNES Function norm 3.793806858333e-03Nonlinear solve did not converge due to DIVERGED_MAX_IT iterations 1 > The output from SNES_Monitor with the optimized code on the other hand is, > solving the cahn hilliard time step 0 SNES Function norm 5.153882032022e+19 1 SNES Function norm 1.446612980133e+19Nonlinear solve did not converge due to DIVERGED_MAX_IT iterations 1 > > Absolutely nothing else has changed in the code except that one code is built with a debugging and one with an optimized version of the code. > Any ideas?Subramanya > ------------------------------------------------------------------------------ > See everything from the browser to the database with AppDynamics > Get end-to-end visibility with application monitoring from AppDynamics > Isolate bottlenecks and diagnose root cause in seconds. > Start your free trial of AppDynamics Pro today! > http://pubads.g.doubleclick.net/gampad/clk?id=48808831&iu=/4140/ostg.clktrk > _______________________________________________ > Libmesh-users mailing list > Libmesh-users at lists.sourceforge.net > https://lists.sourceforge.net/lists/listinfo/libmesh-users From benjamin.kirk-1 at nasa.gov Sun Jul 14 09:19:01 2013 From: benjamin.kirk-1 at nasa.gov (Kirk, Benjamin (JSC-EG311)) Date: Sun, 14 Jul 2013 09:19:01 -0500 Subject: [petsc-users] [Libmesh-users] Really Really Really Weird SNES behaviour In-Reply-To: <-8697055785161439862@unknownmsgid> References: <-8697055785161439862@unknownmsgid> Message-ID: <3E2867BA-50D8-43A9-B714-5BF5BA71E69D@nasa.gov> Have you tried devel mode? It uses compiler flags similar to optimized mode but leaves asserts on too. Otherwise, I agree with Cody that there may be an uninitialized value that is silently being set to 0 in debug mode but contains garbage in optimized mode. -Ben On Jul 14, 2013, at 7:26 AM, "Cody Permann" wrote: > Sounds like a memory corruption problem. I know several users that are > experts at writing code like that. > > Unfortunately, it can be difficult to track down on OS X due to a lack > of working memory analysis tools. > 1. Since this problem popped up after you changed your initial > condition, you may just want to start by carefully looking at your > code and thinking about cases where your variables might not be > initialized, or where calculations could potentially produce underflow > or overflow conditions. > 2. If you have access to a Linux box, then you will have bounds > checked STL containers at your disposal in debug mode. You'll also be > able to run your code through valgrind. Both of those operations are > relatively easy to perform. Let us know if you have any questions. > > Cody > > Sent from my iPhone > > On Jul 13, 2013, at 11:17 PM, subramanya sadasiva wrote: > >> Hi, I am observing some really really really weird SNES behavior in my SNESVI code called through Libmesh. 
This behaviour appeared after I changed some initial conditions and only happens in an optimized build. I am running this code on a Macbook pro running os x 10.8.. >> When the debug code is run, the residuals computed for the initial conditions provided give norms which are of the expected magnitude.. so the SNES_Monitor output is, >> solving the cahn hilliard time step 0 SNES Function norm 8.223262421671e-01 1 SNES Function norm 3.793806858333e-03Nonlinear solve did not converge due to DIVERGED_MAX_IT iterations 1 >> The output from SNES_Monitor with the optimized code on the other hand is, >> solving the cahn hilliard time step 0 SNES Function norm 5.153882032022e+19 1 SNES Function norm 1.446612980133e+19Nonlinear solve did not converge due to DIVERGED_MAX_IT iterations 1 >> >> Absolutely nothing else has changed in the code except that one code is built with a debugging and one with an optimized version of the code. >> Any ideas?Subramanya >> ------------------------------------------------------------------------------ >> See everything from the browser to the database with AppDynamics >> Get end-to-end visibility with application monitoring from AppDynamics >> Isolate bottlenecks and diagnose root cause in seconds. >> Start your free trial of AppDynamics Pro today! >> http://pubads.g.doubleclick.net/gampad/clk?id=48808831&iu=/4140/ostg.clktrk >> _______________________________________________ >> Libmesh-users mailing list >> Libmesh-users at lists.sourceforge.net >> https://lists.sourceforge.net/lists/listinfo/libmesh-users > > ------------------------------------------------------------------------------ > See everything from the browser to the database with AppDynamics > Get end-to-end visibility with application monitoring from AppDynamics > Isolate bottlenecks and diagnose root cause in seconds. > Start your free trial of AppDynamics Pro today! > http://pubads.g.doubleclick.net/gampad/clk?id=48808831&iu=/4140/ostg.clktrk > _______________________________________________ > Libmesh-users mailing list > Libmesh-users at lists.sourceforge.net > https://lists.sourceforge.net/lists/listinfo/libmesh-users From potaman at outlook.com Sun Jul 14 10:58:49 2013 From: potaman at outlook.com (subramanya sadasiva) Date: Sun, 14 Jul 2013 11:58:49 -0400 Subject: [petsc-users] Really Really Really Weird SNES behaviour In-Reply-To: References: , Message-ID: Hi Matt, Thanks for the reply, Closer inspection showed the presence of a few large (10^300) values in the initial condition vector while running in debug mode. Subramanya Date: Sun, 14 Jul 2013 05:51:12 -0500 Subject: Re: [petsc-users] Really Really Really Weird SNES behaviour From: knepley at gmail.com To: potaman at outlook.com CC: libmesh-users at lists.sourceforge.net; petsc-users at mcs.anl.gov On Sun, Jul 14, 2013 at 12:17 AM, subramanya sadasiva wrote: Hi, I am observing some really really really weird SNES behavior in my SNESVI code called through Libmesh. This behaviour appeared after I changed some initial conditions and only happens in an optimized build. I am running this code on a Macbook pro running os x 10.8.. When the debug code is run, the residuals computed for the initial conditions provided give norms which are of the expected magnitude.. 
so the SNES_Monitor output is, solving the cahn hilliard time step 0 SNES Function norm 8.223262421671e-01 1 SNES Function norm 3.793806858333e-03Nonlinear solve did not converge due to DIVERGED_MAX_IT iterations 1 The output from SNES_Monitor with the optimized code on the other hand is, solving the cahn hilliard time step 0 SNES Function norm 5.153882032022e+19 1 SNES Function norm 1.446612980133e+19Nonlinear solve did not converge due to DIVERGED_MAX_IT iterations 1 Absolutely nothing else has changed in the code except that one code is built with a debugging and one with an optimized version of the code. Any ideas? This sound like an uninitialized variable in your residual evaluation routine. Very often, debugging codewill initialize variables to 0, whereas optimized leaves them alone. Matt Subramanya -- What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. -- Norbert Wiener -------------- next part -------------- An HTML attachment was scrubbed... URL: From karpeev at mcs.anl.gov Sun Jul 14 10:58:25 2013 From: karpeev at mcs.anl.gov (Dmitry Karpeyev) Date: Sun, 14 Jul 2013 10:58:25 -0500 Subject: [petsc-users] [Libmesh-users] Really Really Really Weird SNES behaviour In-Reply-To: <-8697055785161439862@unknownmsgid> References: <-8697055785161439862@unknownmsgid> Message-ID: Are there known problems with valgrind for macos? I have it installed from macports and it seems to work fine. Dmitry. On Sun, Jul 14, 2013 at 7:26 AM, Cody Permann wrote: > Sounds like a memory corruption problem. I know several users that are > experts at writing code like that. > > Unfortunately, it can be difficult to track down on OS X due to a lack > of working memory analysis tools. > 1. Since this problem popped up after you changed your initial > condition, you may just want to start by carefully looking at your > code and thinking about cases where your variables might not be > initialized, or where calculations could potentially produce underflow > or overflow conditions. > 2. If you have access to a Linux box, then you will have bounds > checked STL containers at your disposal in debug mode. You'll also be > able to run your code through valgrind. Both of those operations are > relatively easy to perform. Let us know if you have any questions. > > Cody > > Sent from my iPhone > > On Jul 13, 2013, at 11:17 PM, subramanya sadasiva > wrote: > > > Hi, I am observing some really really really weird SNES behavior in my > SNESVI code called through Libmesh. This behaviour appeared after I changed > some initial conditions and only happens in an optimized build. I am > running this code on a Macbook pro running os x 10.8.. > > When the debug code is run, the residuals computed for the initial > conditions provided give norms which are of the expected magnitude.. so the > SNES_Monitor output is, > > solving the cahn hilliard time step 0 SNES Function norm > 8.223262421671e-01 1 SNES Function norm 3.793806858333e-03Nonlinear solve > did not converge due to DIVERGED_MAX_IT iterations 1 > > The output from SNES_Monitor with the optimized code on the other hand > is, > > solving the cahn hilliard time step 0 SNES Function norm > 5.153882032022e+19 1 SNES Function norm 1.446612980133e+19Nonlinear solve > did not converge due to DIVERGED_MAX_IT iterations 1 > > > > Absolutely nothing else has changed in the code except that one code is > built with a debugging and one with an optimized version of the code. 
> > Any ideas?Subramanya > > > ------------------------------------------------------------------------------ > > See everything from the browser to the database with AppDynamics > > Get end-to-end visibility with application monitoring from AppDynamics > > Isolate bottlenecks and diagnose root cause in seconds. > > Start your free trial of AppDynamics Pro today! > > > http://pubads.g.doubleclick.net/gampad/clk?id=48808831&iu=/4140/ostg.clktrk > > _______________________________________________ > > Libmesh-users mailing list > > Libmesh-users at lists.sourceforge.net > > https://lists.sourceforge.net/lists/listinfo/libmesh-users > -------------- next part -------------- An HTML attachment was scrubbed... URL: From potaman at outlook.com Sun Jul 14 11:00:28 2013 From: potaman at outlook.com (subramanya sadasiva) Date: Sun, 14 Jul 2013 12:00:28 -0400 Subject: [petsc-users] [Libmesh-users] Really Really Really Weird SNES behaviour In-Reply-To: References: <-8697055785161439862@unknownmsgid>, Message-ID: I haven't been able to install valgrind on 10.8 . Subramanya From: karpeev at mcs.anl.gov Date: Sun, 14 Jul 2013 10:58:25 -0500 Subject: Re: [petsc-users] [Libmesh-users] Really Really Really Weird SNES behaviour To: codypermann at gmail.com CC: potaman at outlook.com; libmesh-users at lists.sourceforge.net; petsc-users at mcs.anl.gov Are there known problems with valgrind for macos?I have it installed from macports and it seems to work fine.Dmitry. On Sun, Jul 14, 2013 at 7:26 AM, Cody Permann wrote: Sounds like a memory corruption problem. I know several users that are experts at writing code like that. Unfortunately, it can be difficult to track down on OS X due to a lack of working memory analysis tools. 1. Since this problem popped up after you changed your initial condition, you may just want to start by carefully looking at your code and thinking about cases where your variables might not be initialized, or where calculations could potentially produce underflow or overflow conditions. 2. If you have access to a Linux box, then you will have bounds checked STL containers at your disposal in debug mode. You'll also be able to run your code through valgrind. Both of those operations are relatively easy to perform. Let us know if you have any questions. Cody Sent from my iPhone On Jul 13, 2013, at 11:17 PM, subramanya sadasiva wrote: > Hi, I am observing some really really really weird SNES behavior in my SNESVI code called through Libmesh. This behaviour appeared after I changed some initial conditions and only happens in an optimized build. I am running this code on a Macbook pro running os x 10.8.. > When the debug code is run, the residuals computed for the initial conditions provided give norms which are of the expected magnitude.. so the SNES_Monitor output is, > solving the cahn hilliard time step 0 SNES Function norm 8.223262421671e-01 1 SNES Function norm 3.793806858333e-03Nonlinear solve did not converge due to DIVERGED_MAX_IT iterations 1 > The output from SNES_Monitor with the optimized code on the other hand is, > solving the cahn hilliard time step 0 SNES Function norm 5.153882032022e+19 1 SNES Function norm 1.446612980133e+19Nonlinear solve did not converge due to DIVERGED_MAX_IT iterations 1 > > Absolutely nothing else has changed in the code except that one code is built with a debugging and one with an optimized version of the code. 
> Any ideas?Subramanya > ------------------------------------------------------------------------------ > See everything from the browser to the database with AppDynamics > Get end-to-end visibility with application monitoring from AppDynamics > Isolate bottlenecks and diagnose root cause in seconds. > Start your free trial of AppDynamics Pro today! > http://pubads.g.doubleclick.net/gampad/clk?id=48808831&iu=/4140/ostg.clktrk > _______________________________________________ > Libmesh-users mailing list > Libmesh-users at lists.sourceforge.net > https://lists.sourceforge.net/lists/listinfo/libmesh-users -------------- next part -------------- An HTML attachment was scrubbed... URL: From codypermann at gmail.com Sun Jul 14 11:00:30 2013 From: codypermann at gmail.com (Cody Permann) Date: Sun, 14 Jul 2013 10:00:30 -0600 Subject: [petsc-users] [Libmesh-users] Really Really Really Weird SNES behaviour In-Reply-To: References: <-8697055785161439862@unknownmsgid> Message-ID: <-3663603499374441662@unknownmsgid> It's not guaranteed to work on 10.8 and above. If you have an older OS, it's fine. Sent from my iPhone On Jul 14, 2013, at 9:59 AM, Dmitry Karpeyev wrote: Are there known problems with valgrind for macos? I have it installed from macports and it seems to work fine. Dmitry. On Sun, Jul 14, 2013 at 7:26 AM, Cody Permann wrote: > Sounds like a memory corruption problem. I know several users that are > experts at writing code like that. > > Unfortunately, it can be difficult to track down on OS X due to a lack > of working memory analysis tools. > 1. Since this problem popped up after you changed your initial > condition, you may just want to start by carefully looking at your > code and thinking about cases where your variables might not be > initialized, or where calculations could potentially produce underflow > or overflow conditions. > 2. If you have access to a Linux box, then you will have bounds > checked STL containers at your disposal in debug mode. You'll also be > able to run your code through valgrind. Both of those operations are > relatively easy to perform. Let us know if you have any questions. > > Cody > > Sent from my iPhone > > On Jul 13, 2013, at 11:17 PM, subramanya sadasiva > wrote: > > > Hi, I am observing some really really really weird SNES behavior in my > SNESVI code called through Libmesh. This behaviour appeared after I changed > some initial conditions and only happens in an optimized build. I am > running this code on a Macbook pro running os x 10.8.. > > When the debug code is run, the residuals computed for the initial > conditions provided give norms which are of the expected magnitude.. so the > SNES_Monitor output is, > > solving the cahn hilliard time step 0 SNES Function norm > 8.223262421671e-01 1 SNES Function norm 3.793806858333e-03Nonlinear solve > did not converge due to DIVERGED_MAX_IT iterations 1 > > The output from SNES_Monitor with the optimized code on the other hand > is, > > solving the cahn hilliard time step 0 SNES Function norm > 5.153882032022e+19 1 SNES Function norm 1.446612980133e+19Nonlinear solve > did not converge due to DIVERGED_MAX_IT iterations 1 > > > > Absolutely nothing else has changed in the code except that one code is > built with a debugging and one with an optimized version of the code. 
> > Any ideas?Subramanya > > > ------------------------------------------------------------------------------ > > See everything from the browser to the database with AppDynamics > > Get end-to-end visibility with application monitoring from AppDynamics > > Isolate bottlenecks and diagnose root cause in seconds. > > Start your free trial of AppDynamics Pro today! > > > http://pubads.g.doubleclick.net/gampad/clk?id=48808831&iu=/4140/ostg.clktrk > > _______________________________________________ > > Libmesh-users mailing list > > Libmesh-users at lists.sourceforge.net > > https://lists.sourceforge.net/lists/listinfo/libmesh-users > -------------- next part -------------- An HTML attachment was scrubbed... URL: From bsmith at mcs.anl.gov Sun Jul 14 11:13:35 2013 From: bsmith at mcs.anl.gov (Barry Smith) Date: Sun, 14 Jul 2013 11:13:35 -0500 Subject: [petsc-users] [Libmesh-users] Really Really Really Weird SNES behaviour In-Reply-To: <-3663603499374441662@unknownmsgid> References: <-8697055785161439862@unknownmsgid> <-3663603499374441662@unknownmsgid> Message-ID: <06E78DEC-6426-4E18-9B3D-87A3EB6E70A0@mcs.anl.gov> On Jul 14, 2013, at 11:00 AM, Cody Permann wrote: > It's not guaranteed to work on 10.8 and above. It prints a message saying it is not guaranteed to work, but I use it and it seems to work. It certainly finds "some" problems and doesn't "seem" to find problems that are not there. So, maybe not perfect but still helpful, much better than not using it. Barry > If you have an older OS, it's fine. > > Sent from my iPhone > > On Jul 14, 2013, at 9:59 AM, Dmitry Karpeyev wrote: > >> Are there known problems with valgrind for macos? >> I have it installed from macports and it seems to work fine. >> Dmitry. >> >> >> On Sun, Jul 14, 2013 at 7:26 AM, Cody Permann wrote: >> Sounds like a memory corruption problem. I know several users that are >> experts at writing code like that. >> >> Unfortunately, it can be difficult to track down on OS X due to a lack >> of working memory analysis tools. >> 1. Since this problem popped up after you changed your initial >> condition, you may just want to start by carefully looking at your >> code and thinking about cases where your variables might not be >> initialized, or where calculations could potentially produce underflow >> or overflow conditions. >> 2. If you have access to a Linux box, then you will have bounds >> checked STL containers at your disposal in debug mode. You'll also be >> able to run your code through valgrind. Both of those operations are >> relatively easy to perform. Let us know if you have any questions. >> >> Cody >> >> Sent from my iPhone >> >> On Jul 13, 2013, at 11:17 PM, subramanya sadasiva wrote: >> >> > Hi, I am observing some really really really weird SNES behavior in my SNESVI code called through Libmesh. This behaviour appeared after I changed some initial conditions and only happens in an optimized build. I am running this code on a Macbook pro running os x 10.8.. >> > When the debug code is run, the residuals computed for the initial conditions provided give norms which are of the expected magnitude.. 
so the SNES_Monitor output is, >> > solving the cahn hilliard time step 0 SNES Function norm 8.223262421671e-01 1 SNES Function norm 3.793806858333e-03Nonlinear solve did not converge due to DIVERGED_MAX_IT iterations 1 >> > The output from SNES_Monitor with the optimized code on the other hand is, >> > solving the cahn hilliard time step 0 SNES Function norm 5.153882032022e+19 1 SNES Function norm 1.446612980133e+19Nonlinear solve did not converge due to DIVERGED_MAX_IT iterations 1 >> > >> > Absolutely nothing else has changed in the code except that one code is built with a debugging and one with an optimized version of the code. >> > Any ideas?Subramanya >> > ------------------------------------------------------------------------------ >> > See everything from the browser to the database with AppDynamics >> > Get end-to-end visibility with application monitoring from AppDynamics >> > Isolate bottlenecks and diagnose root cause in seconds. >> > Start your free trial of AppDynamics Pro today! >> > http://pubads.g.doubleclick.net/gampad/clk?id=48808831&iu=/4140/ostg.clktrk >> > _______________________________________________ >> > Libmesh-users mailing list >> > Libmesh-users at lists.sourceforge.net >> > https://lists.sourceforge.net/lists/listinfo/libmesh-users >> >> From codypermann at gmail.com Sun Jul 14 11:41:08 2013 From: codypermann at gmail.com (Cody Permann) Date: Sun, 14 Jul 2013 10:41:08 -0600 Subject: [petsc-users] [Libmesh-users] Really Really Really Weird SNES behaviour In-Reply-To: <06E78DEC-6426-4E18-9B3D-87A3EB6E70A0@mcs.anl.gov> References: <-8697055785161439862@unknownmsgid> <-3663603499374441662@unknownmsgid> <06E78DEC-6426-4E18-9B3D-87A3EB6E70A0@mcs.anl.gov> Message-ID: <-785423555309965217@unknownmsgid> Sent from my evil iPad On Jul 14, 2013, at 10:13 AM, Barry Smith wrote: > > On Jul 14, 2013, at 11:00 AM, Cody Permann wrote: > >> It's not guaranteed to work on 10.8 and above. > > It prints a message saying it is not guaranteed to work, but I use it and it seems to work. It certainly finds "some" problems and doesn't "seem" to find problems that are not there. So, maybe not perfect but still helpful, much better than not using it. > > I think it breaks down more on the C++ side. I always get a ton of false positives. Another point for C? :) Cody > Barry > >> If you have an older OS, it's fine. >> >> Sent from my iPhone >> >> On Jul 14, 2013, at 9:59 AM, Dmitry Karpeyev wrote: >> >>> Are there known problems with valgrind for macos? >>> I have it installed from macports and it seems to work fine. >>> Dmitry. >>> >>> >>> On Sun, Jul 14, 2013 at 7:26 AM, Cody Permann wrote: >>> Sounds like a memory corruption problem. I know several users that are >>> experts at writing code like that. >>> >>> Unfortunately, it can be difficult to track down on OS X due to a lack >>> of working memory analysis tools. >>> 1. Since this problem popped up after you changed your initial >>> condition, you may just want to start by carefully looking at your >>> code and thinking about cases where your variables might not be >>> initialized, or where calculations could potentially produce underflow >>> or overflow conditions. >>> 2. If you have access to a Linux box, then you will have bounds >>> checked STL containers at your disposal in debug mode. You'll also be >>> able to run your code through valgrind. Both of those operations are >>> relatively easy to perform. Let us know if you have any questions. 
>>> >>> Cody >>> >>> Sent from my iPhone >>> >>> On Jul 13, 2013, at 11:17 PM, subramanya sadasiva wrote: >>> >>>> Hi, I am observing some really really really weird SNES behavior in my SNESVI code called through Libmesh. This behaviour appeared after I changed some initial conditions and only happens in an optimized build. I am running this code on a Macbook pro running os x 10.8.. >>>> When the debug code is run, the residuals computed for the initial conditions provided give norms which are of the expected magnitude.. so the SNES_Monitor output is, >>>> solving the cahn hilliard time step 0 SNES Function norm 8.223262421671e-01 1 SNES Function norm 3.793806858333e-03Nonlinear solve did not converge due to DIVERGED_MAX_IT iterations 1 >>>> The output from SNES_Monitor with the optimized code on the other hand is, >>>> solving the cahn hilliard time step 0 SNES Function norm 5.153882032022e+19 1 SNES Function norm 1.446612980133e+19Nonlinear solve did not converge due to DIVERGED_MAX_IT iterations 1 >>>> >>>> Absolutely nothing else has changed in the code except that one code is built with a debugging and one with an optimized version of the code. >>>> Any ideas?Subramanya >>>> ------------------------------------------------------------------------------ >>>> See everything from the browser to the database with AppDynamics >>>> Get end-to-end visibility with application monitoring from AppDynamics >>>> Isolate bottlenecks and diagnose root cause in seconds. >>>> Start your free trial of AppDynamics Pro today! >>>> http://pubads.g.doubleclick.net/gampad/clk?id=48808831&iu=/4140/ostg.clktrk >>>> _______________________________________________ >>>> Libmesh-users mailing list >>>> Libmesh-users at lists.sourceforge.net >>>> https://lists.sourceforge.net/lists/listinfo/libmesh-users > From bsmith at mcs.anl.gov Sun Jul 14 12:29:51 2013 From: bsmith at mcs.anl.gov (Barry Smith) Date: Sun, 14 Jul 2013 12:29:51 -0500 Subject: [petsc-users] [Libmesh-users] Really Really Really Weird SNES behaviour In-Reply-To: <-785423555309965217@unknownmsgid> References: <-8697055785161439862@unknownmsgid> <-3663603499374441662@unknownmsgid> <06E78DEC-6426-4E18-9B3D-87A3EB6E70A0@mcs.anl.gov> <-785423555309965217@unknownmsgid> Message-ID: <446AB342-DD64-43D0-8A2A-7706D9E56254@mcs.anl.gov> >> >> > I think it breaks down more on the C++ side. I always get a ton of > false positives. Another point for C? :) +1 > > Cody > > >> Barry >> >>> If you have an older OS, it's fine. >>> >>> Sent from my iPhone >>> >>> On Jul 14, 2013, at 9:59 AM, Dmitry Karpeyev wrote: >>> >>>> Are there known problems with valgrind for macos? >>>> I have it installed from macports and it seems to work fine. >>>> Dmitry. >>>> >>>> >>>> On Sun, Jul 14, 2013 at 7:26 AM, Cody Permann wrote: >>>> Sounds like a memory corruption problem. I know several users that are >>>> experts at writing code like that. >>>> >>>> Unfortunately, it can be difficult to track down on OS X due to a lack >>>> of working memory analysis tools. >>>> 1. Since this problem popped up after you changed your initial >>>> condition, you may just want to start by carefully looking at your >>>> code and thinking about cases where your variables might not be >>>> initialized, or where calculations could potentially produce underflow >>>> or overflow conditions. >>>> 2. If you have access to a Linux box, then you will have bounds >>>> checked STL containers at your disposal in debug mode. You'll also be >>>> able to run your code through valgrind. 
Both of those operations are >>>> relatively easy to perform. Let us know if you have any questions. >>>> >>>> Cody >>>> >>>> Sent from my iPhone >>>> >>>> On Jul 13, 2013, at 11:17 PM, subramanya sadasiva wrote: >>>> >>>>> Hi, I am observing some really really really weird SNES behavior in my SNESVI code called through Libmesh. This behaviour appeared after I changed some initial conditions and only happens in an optimized build. I am running this code on a Macbook pro running os x 10.8.. >>>>> When the debug code is run, the residuals computed for the initial conditions provided give norms which are of the expected magnitude.. so the SNES_Monitor output is, >>>>> solving the cahn hilliard time step 0 SNES Function norm 8.223262421671e-01 1 SNES Function norm 3.793806858333e-03Nonlinear solve did not converge due to DIVERGED_MAX_IT iterations 1 >>>>> The output from SNES_Monitor with the optimized code on the other hand is, >>>>> solving the cahn hilliard time step 0 SNES Function norm 5.153882032022e+19 1 SNES Function norm 1.446612980133e+19Nonlinear solve did not converge due to DIVERGED_MAX_IT iterations 1 >>>>> >>>>> Absolutely nothing else has changed in the code except that one code is built with a debugging and one with an optimized version of the code. >>>>> Any ideas?Subramanya >>>>> ------------------------------------------------------------------------------ >>>>> See everything from the browser to the database with AppDynamics >>>>> Get end-to-end visibility with application monitoring from AppDynamics >>>>> Isolate bottlenecks and diagnose root cause in seconds. >>>>> Start your free trial of AppDynamics Pro today! >>>>> http://pubads.g.doubleclick.net/gampad/clk?id=48808831&iu=/4140/ostg.clktrk >>>>> _______________________________________________ >>>>> Libmesh-users mailing list >>>>> Libmesh-users at lists.sourceforge.net >>>>> https://lists.sourceforge.net/lists/listinfo/libmesh-users >> From knepley at gmail.com Sun Jul 14 12:39:07 2013 From: knepley at gmail.com (Matthew Knepley) Date: Sun, 14 Jul 2013 12:39:07 -0500 Subject: [petsc-users] [Libmesh-users] Really Really Really Weird SNES behaviour In-Reply-To: <446AB342-DD64-43D0-8A2A-7706D9E56254@mcs.anl.gov> References: <-8697055785161439862@unknownmsgid> <-3663603499374441662@unknownmsgid> <06E78DEC-6426-4E18-9B3D-87A3EB6E70A0@mcs.anl.gov> <-785423555309965217@unknownmsgid> <446AB342-DD64-43D0-8A2A-7706D9E56254@mcs.anl.gov> Message-ID: On Sun, Jul 14, 2013 at 12:29 PM, Barry Smith wrote: > > >> > >> > > I think it breaks down more on the C++ side. I always get a ton of > > false positives. Another point for C? :) > > +1 It vomits and kernel panics most times when I run MPI code in 10.6. Matt > > > > Cody > > > > > >> Barry > >> > >>> If you have an older OS, it's fine. > >>> > >>> Sent from my iPhone > >>> > >>> On Jul 14, 2013, at 9:59 AM, Dmitry Karpeyev > wrote: > >>> > >>>> Are there known problems with valgrind for macos? > >>>> I have it installed from macports and it seems to work fine. > >>>> Dmitry. > >>>> > >>>> > >>>> On Sun, Jul 14, 2013 at 7:26 AM, Cody Permann > wrote: > >>>> Sounds like a memory corruption problem. I know several users that are > >>>> experts at writing code like that. > >>>> > >>>> Unfortunately, it can be difficult to track down on OS X due to a lack > >>>> of working memory analysis tools. > >>>> 1. 
Since this problem popped up after you changed your initial > >>>> condition, you may just want to start by carefully looking at your > >>>> code and thinking about cases where your variables might not be > >>>> initialized, or where calculations could potentially produce underflow > >>>> or overflow conditions. > >>>> 2. If you have access to a Linux box, then you will have bounds > >>>> checked STL containers at your disposal in debug mode. You'll also be > >>>> able to run your code through valgrind. Both of those operations are > >>>> relatively easy to perform. Let us know if you have any questions. > >>>> > >>>> Cody > >>>> > >>>> Sent from my iPhone > >>>> > >>>> On Jul 13, 2013, at 11:17 PM, subramanya sadasiva < > potaman at outlook.com> wrote: > >>>> > >>>>> Hi, I am observing some really really really weird SNES behavior in > my SNESVI code called through Libmesh. This behaviour appeared after I > changed some initial conditions and only happens in an optimized build. I > am running this code on a Macbook pro running os x 10.8.. > >>>>> When the debug code is run, the residuals computed for the initial > conditions provided give norms which are of the expected magnitude.. so the > SNES_Monitor output is, > >>>>> solving the cahn hilliard time step 0 SNES Function norm > 8.223262421671e-01 1 SNES Function norm 3.793806858333e-03Nonlinear solve > did not converge due to DIVERGED_MAX_IT iterations 1 > >>>>> The output from SNES_Monitor with the optimized code on the other > hand is, > >>>>> solving the cahn hilliard time step 0 SNES Function norm > 5.153882032022e+19 1 SNES Function norm 1.446612980133e+19Nonlinear solve > did not converge due to DIVERGED_MAX_IT iterations 1 > >>>>> > >>>>> Absolutely nothing else has changed in the code except that one code > is built with a debugging and one with an optimized version of the code. > >>>>> Any ideas?Subramanya > >>>>> > ------------------------------------------------------------------------------ > >>>>> See everything from the browser to the database with AppDynamics > >>>>> Get end-to-end visibility with application monitoring from > AppDynamics > >>>>> Isolate bottlenecks and diagnose root cause in seconds. > >>>>> Start your free trial of AppDynamics Pro today! > >>>>> > http://pubads.g.doubleclick.net/gampad/clk?id=48808831&iu=/4140/ostg.clktrk > >>>>> _______________________________________________ > >>>>> Libmesh-users mailing list > >>>>> Libmesh-users at lists.sourceforge.net > >>>>> https://lists.sourceforge.net/lists/listinfo/libmesh-users > >> > > -- What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. -- Norbert Wiener -------------- next part -------------- An HTML attachment was scrubbed... URL: From u.tabak at tudelft.nl Mon Jul 15 13:51:36 2013 From: u.tabak at tudelft.nl (Umut Tabak) Date: Mon, 15 Jul 2013 20:51:36 +0200 Subject: [petsc-users] default orthogonalization in gmres Message-ID: <51E444B8.2010602@tudelft.nl> Hi list, I was wondering the reason why classical gram schmidt is used in the orthogonalizations in the gmres implementation as default? As far as I remember, this was unstable numerically. Best, Umut From suyan0 at gmail.com Mon Jul 15 15:08:22 2013 From: suyan0 at gmail.com (Su Yan) Date: Mon, 15 Jul 2013 15:08:22 -0500 Subject: [petsc-users] SuperLU options database Message-ID: Hi, I am trying to use the external package SuperLU to solve a linear equation with ILUTP preconditioner. 
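My setup looks roughly like this (a trimmed sketch of my code; error checking and the surrounding matrix and solver setup are omitted):

KSPGetPC(ksp, &pc);
PCSetType(pc, PCILU);
PCFactorSetMatSolverPackage(pc, MATSOLVERSUPERLU); /* use SuperLU's ILUTP */
KSPSetUp(ksp);
PCFactorGetMatrix(pc, &F);         /* F is the factor matrix handled by SuperLU */
MatSuperluSetILUDropTol(F, 1.e-4); /* the one option with a function interface */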
But the current PETSc only provides one option interface which is MatSuperluSetILUDropTol. There are actually many other options as suggested on the manual page of MATSOLVERSUPERLU, such as:

-mat_superlu_equil
-mat_superlu_ilu_filltol
-mat_superlu_ilu_fillfactor

etc. However, they all need to be invoked via the command line. Is there any way to set up other options for SuperLU in the code instead of using the command line? Because it's not so convenient to type in all those runtime commands every time you run a code. Thanks a lot. Su -------------- next part -------------- An HTML attachment was scrubbed... URL: From hzhang at mcs.anl.gov Mon Jul 15 15:15:12 2013 From: hzhang at mcs.anl.gov (Hong Zhang) Date: Mon, 15 Jul 2013 15:15:12 -0500 Subject: Re: [petsc-users] SuperLU options database In-Reply-To: References: Message-ID: Su: We provide these via runtime options. With the latest petsc release (v3.4), you can run, e.g., petsc/src/ksp/ksp/examples/tutorials/ex2.c:

./ex2 -pc_type ilu -pc_factor_mat_solver_package superlu -help | grep superlu
...
-mat_superlu_ilu_droptol <0.0001>: ILU_DropTol (None)
-mat_superlu_ilu_filltol <0.01>: ILU_FillTol (None)
-mat_superlu_ilu_fillfactor <10>: ILU_FillFactor (None)
-mat_superlu_ilu_droprull <9>: ILU_DropRule (None)
-mat_superlu_ilu_norm <2>: ILU_Norm (None)
-mat_superlu_ilu_milu <0>: ILU_MILU (None)

i.e., you can set/change your own options at runtime. Hong

Hi, I am trying to use the external package SuperLU to solve a linear > equation with ILUTP preconditioner. But the current PETSc only provides one > option interface which is MatSuperluSetILUDropTol. There are actually many > other options as suggested on the manual page of MATSOLVERSUPERLU, such as: > > -mat_superlu_equil > -mat_superlu_ilu_filltol > -mat_superlu_ilu_fillfactor > > etc. However, they all need to be invoked via the command line. > > Is there any way to set up other options for SuperLU in the code instead > of using the command line? Because it's not so convenient to type in all > those runtime commands every time you run a code. > > Thanks a lot. > > Su > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From psanan at cms.caltech.edu Mon Jul 15 15:36:58 2013 From: psanan at cms.caltech.edu (Patrick Sanan) Date: Mon, 15 Jul 2013 15:36:58 -0500 Subject: Re: [petsc-users] SuperLU options database In-Reply-To: References: Message-ID: More things you can do to avoid typing the arguments over and over are:

- Use a file to specify options (PetscInitialize takes an argument for this file, which by default is ~/.petscrc)
- Set runtime options from within your code using PetscOptionsSetValue
- Write your own shell script or makefile with a special target to run the executable with the options you'd like appended

On Jul 15, 2013, at 3:15 PM, Hong Zhang wrote: > Su: > We provide these via runtime options. With the latest petsc release (v3.4), > you can run, e.g., > petsc/src/ksp/ksp/examples/tutorials/ex2.c: > ./ex2 -pc_type ilu -pc_factor_mat_solver_package superlu -help | grep superlu > ... > -mat_superlu_ilu_droptol <0.0001>: ILU_DropTol (None) > -mat_superlu_ilu_filltol <0.01>: ILU_FillTol (None) > -mat_superlu_ilu_fillfactor <10>: ILU_FillFactor (None) > -mat_superlu_ilu_droprull <9>: ILU_DropRule (None) > -mat_superlu_ilu_norm <2>: ILU_Norm (None) > -mat_superlu_ilu_milu <0>: ILU_MILU (None) > > i.e., you can set/change your own options at runtime. > > Hong > > Hi, I am trying to use the external package SuperLU to solve a linear equation with ILUTP preconditioner.
From u.tabak at tudelft.nl  Mon Jul 15 15:42:03 2013
From: u.tabak at tudelft.nl (Umut Tabak)
Date: Mon, 15 Jul 2013 22:42:03 +0200
Subject: [petsc-users] default orthogonalization in gmres
In-Reply-To: References: <51E444B8.2010602@tudelft.nl> Message-ID: <51E45E9B.7090303@tudelft.nl>

On 07/15/2013 08:54 PM, Matthew Knepley wrote:
>
> On Jul 15, 2013 1:51 PM, "Umut Tabak" wrote:
> >
> > Hi list,
> >
> > I was wondering why classical Gram-Schmidt is used as the default for the
> orthogonalizations in the gmres implementation? As far as I
> remember, this was numerically unstable.
>
Hi Matt,

Excuse my naive question, but mathematically they are equivalent, so what
is the source of the speed boost? And can you please direct me to one or
two of these references?

Best,
Umut
>
> It's much faster. Try the modified option and compare. There are many
> papers claiming that classical+selective reorthogonalization is just
> as stable.
>
> Matt
>
> >
> > Best,
> > Umut
>
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From rupp at mcs.anl.gov  Mon Jul 15 16:04:54 2013
From: rupp at mcs.anl.gov (Karl Rupp)
Date: Mon, 15 Jul 2013 16:04:54 -0500
Subject: [petsc-users] default orthogonalization in gmres
In-Reply-To: <51E45E9B.7090303@tudelft.nl> References: <51E444B8.2010602@tudelft.nl> <51E45E9B.7090303@tudelft.nl> Message-ID: <51E463F6.80604@mcs.anl.gov>

Hi Umut,

one of the reasons is the parallelism: When doing Householder reflections, you can only process one reflection at a time without any good data reuse. However, for Gram-Schmidt you can just compute all the necessary scalar products at the same time (VecMDot) and reuse the common data vector. This gives you a speed-up of a factor of almost two.

Best regards,
Karli

On 07/15/2013 03:42 PM, Umut Tabak wrote:
> On 07/15/2013 08:54 PM, Matthew Knepley wrote:
>>
>>
>> On Jul 15, 2013 1:51 PM, "Umut Tabak" wrote:
>> >
>> > Hi list,
>> >
>> > I was wondering why classical Gram-Schmidt is used as the default for the
>> orthogonalizations in the gmres implementation? As far as I
>> remember, this was numerically unstable.
>> >
> Hi Matt,
>
> Excuse my naive question, but mathematically they are equivalent, so what
> is the source of the speed boost? And can you please direct me to one or two
> of these references?
>
> Best,
> Umut
>>
>> It's much faster. Try the modified option and compare. There are many
>> papers claiming that classical+selective reorthogonalization is just
>> as stable.
>>
>> Matt
>>
>> >
>> > Best,
>> > Umut
>>

From jedbrown at mcs.anl.gov  Mon Jul 15 16:57:13 2013
From: jedbrown at mcs.anl.gov (Jed Brown)
Date: Mon, 15 Jul 2013 13:57:13 -0800
Subject: [petsc-users] default orthogonalization in gmres
In-Reply-To: <51E463F6.80604@mcs.anl.gov> References: <51E444B8.2010602@tudelft.nl> <51E45E9B.7090303@tudelft.nl> <51E463F6.80604@mcs.anl.gov> Message-ID: <87y597a3ly.fsf@mcs.anl.gov>

Karl Rupp writes:

> However, for Gram-Schmidt you can just compute all the
> necessary scalar products at the same time (VecMDot) and reuse the
> common data vector. This gives you a speed-up of a factor of almost two.

It's not a factor of 2, it's a factor of k where k is the size of the subspace. Classical Gram-Schmidt needs one reduction per iteration (normalization can be hidden), but modified needs k reductions.
-------------- next part --------------
A non-text attachment was scrubbed...
Name: not available
Type: application/pgp-signature
Size: 835 bytes
Desc: not available
URL: 

From u.tabak at tudelft.nl  Mon Jul 15 17:38:00 2013
From: u.tabak at tudelft.nl (Umut Tabak)
Date: Tue, 16 Jul 2013 00:38:00 +0200
Subject: [petsc-users] default orthogonalization in gmres
In-Reply-To: <87y597a3ly.fsf@mcs.anl.gov> References: <51E444B8.2010602@tudelft.nl> <51E45E9B.7090303@tudelft.nl> <51E463F6.80604@mcs.anl.gov> <87y597a3ly.fsf@mcs.anl.gov> Message-ID: <51E479C8.2020504@tudelft.nl>

On 07/15/2013 11:57 PM, Jed Brown wrote:
>
> It's not a factor of 2, it's a factor of k where k is the size of the
> subspace. Classical Gram-Schmidt needs one reduction per iteration
> (normalization can be hidden), but modified needs k reductions.
Dear Jed,

Could you please explain a bit more what you mean by

+ reduction
+ normalization can be hidden

On a problem that I am working on, cgs and mgs have a subtle difference. I would like to learn more about these details.

More specifically, I would like to A-orthonormalize a block of vectors, say for a block size of 4; however, I cannot form A explicitly because then it becomes large and dense. But it can be applied through a matrix-vector operation. For this reason, cgs and mgs are a little different for me; this is the source of the discussion.

Best,
Umut

From rupp at mcs.anl.gov  Mon Jul 15 17:49:36 2013
From: rupp at mcs.anl.gov (Karl Rupp)
Date: Mon, 15 Jul 2013 17:49:36 -0500
Subject: [petsc-users] default orthogonalization in gmres
In-Reply-To: <87y597a3ly.fsf@mcs.anl.gov> References: <51E444B8.2010602@tudelft.nl> <51E45E9B.7090303@tudelft.nl> <51E463F6.80604@mcs.anl.gov> <87y597a3ly.fsf@mcs.anl.gov> Message-ID: <51E47C80.1070805@mcs.anl.gov>

Hey,

>> However, for Gram-Schmidt you can just compute all the
>> necessary scalar products at the same time (VecMDot) and reuse the
>> common data vector. This gives you a speed-up of a factor of almost two.
>
> It's not a factor of 2, it's a factor of k where k is the size of the
> subspace. Classical Gram-Schmidt needs one reduction per iteration
> (normalization can be hidden), but modified needs k reductions.

Well, you see the factor of k only if the communication for the reduction is the bottleneck. The factor of almost 2 is what you get if memory bandwidth is the bottleneck.
Best regards, Karli From jedbrown at mcs.anl.gov Mon Jul 15 18:05:17 2013 From: jedbrown at mcs.anl.gov (Jed Brown) Date: Mon, 15 Jul 2013 15:05:17 -0800 Subject: [petsc-users] default orthogonalization in gmres In-Reply-To: <51E47C80.1070805@mcs.anl.gov> References: <51E444B8.2010602@tudelft.nl> <51E45E9B.7090303@tudelft.nl> <51E463F6.80604@mcs.anl.gov> <87y597a3ly.fsf@mcs.anl.gov> <51E47C80.1070805@mcs.anl.gov> Message-ID: <87k3krh1aq.fsf@mcs.anl.gov> Karl Rupp writes: > well, you see the factor of k only if the communication for the > reduction is the bottleneck. The factor of almost 2 is what you get if > memory bandwidth is the bottleneck. Sure, but reductions are the bottleneck if you try to strong scale, especially if you don't have a Blue Gene. -------------- next part -------------- A non-text attachment was scrubbed... Name: not available Type: application/pgp-signature Size: 835 bytes Desc: not available URL: From jedbrown at mcs.anl.gov Mon Jul 15 18:07:41 2013 From: jedbrown at mcs.anl.gov (Jed Brown) Date: Mon, 15 Jul 2013 15:07:41 -0800 Subject: [petsc-users] default orthogonalization in gmres In-Reply-To: <51E479C8.2020504@tudelft.nl> References: <51E444B8.2010602@tudelft.nl> <51E45E9B.7090303@tudelft.nl> <51E463F6.80604@mcs.anl.gov> <87y597a3ly.fsf@mcs.anl.gov> <51E479C8.2020504@tudelft.nl> Message-ID: <87hafvh16q.fsf@mcs.anl.gov> Umut Tabak writes: > On 07/15/2013 11:57 PM, Jed Brown wrote: >> >> It's not a factor of 2, it's a factor of k where k is the size of the >> subspace. Classical Gram-Schmidt needs one reduction per iteration >> (normalization can be hidden), but modified needs k reductions. > Dear Jed, > > Could you please explain a bit more on what you mean by > > + reduction MPI_Allreduce, which is needed in parallel as part of computing a norm or dot product. > + normalization can be hidden Gram-Schmidt has a bunch of projections to make the vector orthogonal, then normalization. The reduction needed for normalization is easy to thread into the next iteration, so I'm ignoring it in this performance model. > On a problem that I am working on, cgs and mgs have a subtle difference. That's common. If it's a big difference, it usually means the system is ill-conditioned and you should probably work on your preconditioner. > I would like to learn more about these details. > > More specifically, I would like to A orthonormalize a block of vectors, > say for a block size of 4, however I can not form A explicitly because > then it becomes large and dense. But it can be formed by a matrix vector > operation. Due this reason, cgs and mgs is a little different for me, > this is the source of the discussion. > > Best, > Umut -------------- next part -------------- A non-text attachment was scrubbed... 
Name: not available
Type: application/pgp-signature
Size: 835 bytes
Desc: not available
URL: 

From bsmith at mcs.anl.gov  Mon Jul 15 19:45:05 2013
From: bsmith at mcs.anl.gov (Barry Smith)
Date: Mon, 15 Jul 2013 19:45:05 -0500
Subject: [petsc-users] default orthogonalization in gmres
In-Reply-To: <51E479C8.2020504@tudelft.nl> References: <51E444B8.2010602@tudelft.nl> <51E45E9B.7090303@tudelft.nl> <51E463F6.80604@mcs.anl.gov> <87y597a3ly.fsf@mcs.anl.gov> <51E479C8.2020504@tudelft.nl> Message-ID: 

Umut,

Compare
http://www.mcs.anl.gov/petsc/petsc-current/src/ksp/ksp/impls/gmres/borthog2.c.html#KSPGMRESClassicalGramSchmidtOrthogonalization
to
http://www.mcs.anl.gov/petsc/petsc-current/src/ksp/ksp/impls/gmres/borthog.c.html#KSPGMRESModifiedGramSchmidtOrthogonalization

Note that the first performs all the inner products and then all the axpy updates, while the second performs a single inner product and update in a loop. The memory access pattern (and the amount of communication needed for the inner products) is very different in the two cases.

Barry

On Jul 15, 2013, at 5:38 PM, Umut Tabak wrote:

> On 07/15/2013 11:57 PM, Jed Brown wrote:
>>
>> It's not a factor of 2, it's a factor of k where k is the size of the
>> subspace. Classical Gram-Schmidt needs one reduction per iteration
>> (normalization can be hidden), but modified needs k reductions.
> Dear Jed,
>
> Could you please explain a bit more on what you mean by
>
> + reduction
> + normalization can be hidden
>
> On a problem that I am working on, cgs and mgs have a subtle difference. I would like to learn more about these details.
>
> More specifically, I would like to A orthonormalize a block of vectors, say for a block size of 4, however I can not form A explicitly because then it becomes large and dense. But it can be formed by a matrix vector operation. Due this reason, cgs and mgs is a little different for me, this is the source of the discussion.
>
> Best,
> Umut
>
>
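The structural difference Barry points to can be sketched as follows (a paraphrase of the two loops, not the actual PETSc source; error checking is omitted, and VV is assumed to hold the first j basis vectors):

#include <petscvec.h>

/* Classical Gram-Schmidt (borthog2.c): all j dot products are batched
   into a single reduction, then all updates are applied in one pass. */
static void cgs(Vec vv, PetscInt j, Vec *VV, PetscScalar *hh)
{
  PetscInt i;
  VecMDot(vv, j, VV, hh);             /* one MPI_Allreduce for all j products */
  for (i = 0; i < j; i++) hh[i] = -hh[i];
  VecMAXPY(vv, j, hh, VV);            /* vv <- vv - sum_i hh_i VV_i */
}

/* Modified Gram-Schmidt (borthog.c): one dot product and one axpy per
   basis vector, hence j separate reductions and j passes over memory. */
static void mgs(Vec vv, PetscInt j, Vec *VV)
{
  PetscInt    i;
  PetscScalar h;
  for (i = 0; i < j; i++) {
    VecDot(vv, VV[i], &h);            /* one MPI_Allreduce per iteration */
    VecAXPY(vv, -h, VV[i]);
  }
}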
From mpovolot at purdue.edu  Tue Jul 16 10:17:18 2013
From: mpovolot at purdue.edu (Michael Povolotskyi)
Date: Tue, 16 Jul 2013 11:17:18 -0400
Subject: [petsc-users] installing petsc with scalapack from mkl
Message-ID: <51E563FE.6040803@purdue.edu>

Dear Petsc developers and users,
I'm trying to configure petsc with scalapack from the MKL library.

From the configure.log (see attached) it seems that when PETSc checks for the scalapack functionality it does not link blacs.
Please advise.
Thank you,
Michael.

-- 
Michael Povolotskyi, PhD
Research Assistant Professor
Network for Computational Nanotechnology
207 S Martin Jischke Drive
Purdue University, DLR, room 441-10
West Lafayette, Indiana 47907

phone: +1-765-494-9396
fax:   +1-765-496-6026
-------------- next part --------------
A non-text attachment was scrubbed...
Name: configure_scalapck.log.gz
Type: application/gzip
Size: 303357 bytes
Desc: not available
URL: 

From balay at mcs.anl.gov  Tue Jul 16 10:27:37 2013
From: balay at mcs.anl.gov (Satish Balay)
Date: Tue, 16 Jul 2013 10:27:37 -0500 (CDT)
Subject: [petsc-users] installing petsc with scalapack from mkl
In-Reply-To: <51E563FE.6040803@purdue.edu> References: <51E563FE.6040803@purdue.edu> Message-ID: 

Try:

--with-scalapack-lib="-L/opt/intel/mkl//lib/intel64 -lmkl_scalapack_lp64 -lmkl_blacs_intelmpi_lp64"

BLACS is now part of scalapack-2 [which petsc-3.4 uses] - but MKL has a blacs/scalapack split.
So you would have to specify both libs with the --with-scalapack-lib option.

Satish

On Tue, 16 Jul 2013, Michael Povolotskyi wrote:

> Dear Petsc developers and users,
> I'm trying to configure petsc with scalapack from mkl library.
>
> From the configure.log (see attached) it seems that when PETSc checks for the
> scalapack functionality it does not link blacs.
> Please advise.
> Thank you,
> Michael.
>
>

From mpovolot at purdue.edu  Tue Jul 16 10:41:18 2013
From: mpovolot at purdue.edu (Michael Povolotskyi)
Date: Tue, 16 Jul 2013 11:41:18 -0400
Subject: [petsc-users] installing petsc with scalapack from mkl
In-Reply-To: References: <51E563FE.6040803@purdue.edu> Message-ID: <51E5699E.5030101@purdue.edu>

Thank you.
Can I use dynamic libraries for scalapack and blas?

On 07/16/2013 11:27 AM, Satish Balay wrote:
> Try:
>
> --with-scalapack-lib="-L/opt/intel/mkl//lib/intel64 -lmkl_scalapack_lp64 -lmkl_blacs_intelmpi_lp64"
>
>
> BLACS is now part of scalapack-2 [which petsc-3.4 uses] - but mkl has blas/scalapack split.
> So you would have to specify both libs with the --with-scalapack-lib option.
>
> Satish
>
> On Tue, 16 Jul 2013, Michael Povolotskyi wrote:
>
>> Dear Petsc developers and users,
>> I'm trying to configure petsc with scalapack from mkl library.
>>
>> From the configure.log (see attached) it seems that when PETSc checks for the
>> scalapack functionality it does not link blacs.
>> Please advise.
>> Thank you,
>> Michael.
>>
>>

From balay at mcs.anl.gov  Tue Jul 16 10:43:39 2013
From: balay at mcs.anl.gov (Satish Balay)
Date: Tue, 16 Jul 2013 10:43:39 -0500 (CDT)
Subject: [petsc-users] installing petsc with scalapack from mkl
In-Reply-To: <51E5699E.5030101@purdue.edu> References: <51E563FE.6040803@purdue.edu> <51E5699E.5030101@purdue.edu> Message-ID: 

On Tue, 16 Jul 2013, Michael Povolotskyi wrote:

> Thank you.
> Can I use dynamic libraries for scalapack and blas?

Yes - when the compiler sees -lfoo, it looks for libfoo.so first. You can also use:

--with-scalapack-lib="/opt/intel/mkl//lib/intel64/libmkl_scalapack_lp64.so /opt/intel/mkl//lib/intel64/libmkl_blacs_intelmpi_lp64.so"

Satish

> On 07/16/2013 11:27 AM, Satish Balay wrote:
> > Try:
> >
> > --with-scalapack-lib="-L/opt/intel/mkl//lib/intel64 -lmkl_scalapack_lp64
> > -lmkl_blacs_intelmpi_lp64"
> >
> >
> > BLACS is now part of scalapack-2 [which petsc-3.4 uses] - but mkl has
> > blas/scalapack split.
> > So you would have to specify both libs with the --with-scalapack-lib option.
> >
> > Satish
> >
> > On Tue, 16 Jul 2013, Michael Povolotskyi wrote:
> >
> > > Dear Petsc developers and users,
> > > I'm trying to configure petsc with scalapack from mkl library.
> > >
> > > From the configure.log (see attached) it seems that when PETSc checks for
> > > the
> > > scalapack functionality it does not link blacs.
> > > Please advise.
> > > Thank you,
> > > Michael.
> > >
> > >

From jitendra.ornl at gmail.com  Tue Jul 16 10:48:54 2013
From: jitendra.ornl at gmail.com (Jitendra Kumar)
Date: Tue, 16 Jul 2013 11:48:54 -0400
Subject: [petsc-users] Fwd: PETSc installation on Intrepid
In-Reply-To: References: Message-ID: 

I ran into the following errors while trying to build PETSc-dev on Intrepid @ALCF. (configure.log attached)

*******************************************************************************
         UNABLE to CONFIGURE with GIVEN OPTIONS    (see configure.log for details):
-------------------------------------------------------------------------------
Cannot run executable to determine size of char.
If this machine uses a batch system to submit jobs you will need to configure using ./configure with the additional option --with-batch.
Otherwise there is problem with the compilers. Can you compile and run code with your C/C++ (and maybe Fortran) compilers?
*******************************************************************************
  File "/gpfs/home/jkumar/lib/petsc/config/configure.py", line 293, in petsc_configure
    framework.configure(out = sys.stdout)
  File "/gpfs/home/jkumar/lib/petsc/config/BuildSystem/config/framework.py", line 933, in configure
    child.configure()
  File "/gpfs/home/jkumar/lib/petsc/config/BuildSystem/config/types.py", line 386, in configure
    map(lambda type: self.executeTest(self.checkSizeof, type), ['char','void *', 'short', 'int', 'long', 'long long', 'float', 'double', 'size_t'])
  File "/gpfs/home/jkumar/lib/petsc/config/BuildSystem/config/types.py", line 386, in <lambda>
    map(lambda type: self.executeTest(self.checkSizeof, type), ['char','void *', 'short', 'int', 'long', 'long long', 'float', 'double', 'size_t'])
  File "/gpfs/home/jkumar/lib/petsc/config/BuildSystem/config/base.py", line 115, in executeTest
    ret = apply(test, args,kargs)
  File "/gpfs/home/jkumar/lib/petsc/config/BuildSystem/config/types.py", line 296, in checkSizeof
    raise RuntimeError(msg)

This is what my configuration looks like (adapted from config/examples/arch-bgp-ibm-opt.py):
configure_options = [
    '--with-cc=mpixlc',
    '--with-fc=mpixlf90',
    '--with-cxx=mpixlcxx',
    'COPTFLAGS=-O3',
    'FOPTFLAGS=-O3',
    '--with-debugging=0',
    '--with-cmake=/soft/apps/fen/cmake-2.8.3/bin/cmake',
    # '--with-hdf5=/soft/apps/hdf5-1.8.0',
    '--download-parmetis=1',
    '--download-metis=1',
    '--download-plapack=1',
    '--download-hdf5=1'
]

I would appreciate any help building the library there.

Thanks,
Jitu
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 
-------------- next part --------------
A non-text attachment was scrubbed...
Name: configure.log
Type: application/octet-stream
Size: 1662609 bytes
Desc: not available
URL: 

From balay at mcs.anl.gov  Tue Jul 16 10:59:40 2013
From: balay at mcs.anl.gov (Satish Balay)
Date: Tue, 16 Jul 2013 10:59:40 -0500 (CDT)
Subject: [petsc-users] Fwd: PETSc installation on Intrepid
In-Reply-To: References: Message-ID: 

As the message indicates, you need the '--with-batch' option on this machine.

Check one of the default builds on intrepid for configure options to use.
[perhaps /soft/apps/libraries/petsc/3.3-p6/xl-opt/conf/reconfigure-arch-bgp-ibm-opt.py]

Satish

On Tue, 16 Jul 2013, Jitendra Kumar wrote:

> I ran into following errors while trying to build PETSc-dev on Intrepid
> @ALCF. (configure.log attached)
>
> *******************************************************************************
>          UNABLE to CONFIGURE with GIVEN OPTIONS    (see configure.log for
> details):
> -------------------------------------------------------------------------------
> Cannot run executable to determine size of char. If this machine uses a
> batch system
> to submit jobs you will need to configure using ./configure with the
> additional option --with-batch.
> Otherwise there is problem with the compilers. Can you compile and run
> code with your C/C++ (and maybe Fortran) compilers?
> *******************************************************************************
>   File "/gpfs/home/jkumar/lib/petsc/config/configure.py", line 293, in
> petsc_configure
>     framework.configure(out = sys.stdout)
>   File
> "/gpfs/home/jkumar/lib/petsc/config/BuildSystem/config/framework.py", line
> 933, in configure
>     child.configure()
>   File "/gpfs/home/jkumar/lib/petsc/config/BuildSystem/config/types.py",
> line 386, in configure
>     map(lambda type: self.executeTest(self.checkSizeof, type),
> ['char','void *', 'short', 'int', 'long', 'long long', 'float', 'double',
> 'size_t'])
>   File "/gpfs/home/jkumar/lib/petsc/config/BuildSystem/config/types.py",
> line 386, in <lambda>
>     map(lambda type: self.executeTest(self.checkSizeof, type),
> ['char','void *', 'short', 'int', 'long', 'long long', 'float', 'double',
> 'size_t'])
>   File "/gpfs/home/jkumar/lib/petsc/config/BuildSystem/config/base.py",
> line 115, in executeTest
>     ret = apply(test, args,kargs)
>   File "/gpfs/home/jkumar/lib/petsc/config/BuildSystem/config/types.py",
> line 296, in checkSizeof
>     raise RuntimeError(msg)
>
> This is what my configuration looks like (adapted from
> config/examples/arch-bgp-ibm-opt.py)
> configure_options = [
>     '--with-cc=mpixlc',
>     '--with-fc=mpixlf90',
>     '--with-cxx=mpixlcxx',
>     'COPTFLAGS=-O3',
>     'FOPTFLAGS=-O3',
>     '--with-debugging=0',
>     '--with-cmake=/soft/apps/fen/cmake-2.8.3/bin/cmake',
>     # '--with-hdf5=/soft/apps/hdf5-1.8.0',
>     '--download-parmetis=1',
>     '--download-metis=1',
>     '--download-plapack=1',
>     '--download-hdf5=1'
> ]
>
> I would appreciate any help building the llbrary there.
>
> Thanks,
> Jitu
>

From jitendra.ornl at gmail.com  Tue Jul 16 11:54:48 2013
From: jitendra.ornl at gmail.com (Jitendra Kumar)
Date: Tue, 16 Jul 2013 12:54:48 -0400
Subject: [petsc-users] Fwd: PETSc installation on Intrepid
In-Reply-To: References: Message-ID: 

Thanks Satish. I tried using the configuration you pointed me to with the addition of --download-hdf5=1 and got the error "Compression library [libz.a or equivalent] not found".

Do I need to load some package to get this?

Jitu

On Tue, Jul 16, 2013 at 11:59 AM, Satish Balay wrote:

> As the message indicates you need '--with-batch' option on this machine
>
> Check one of the default builds on intrepid for configure options to use..
>
> [perhaps
> /soft/apps/libraries/petsc/3.3-p6/xl-opt/conf/reconfigure-arch-bgp-ibm-opt.py]
>
> Satish
>
> On Tue, 16 Jul 2013, Jitendra Kumar wrote:
>
> > I ran into following errors while trying to build PETSc-dev on Intrepid
> > @ALCF. (configure.log attached)
> >
> > *******************************************************************************
> > UNABLE to CONFIGURE with GIVEN OPTIONS (see configure.log for
> > details):
> > -------------------------------------------------------------------------------
> > Cannot run executable to determine size of char.
> > > ******************************************************************************* > > File "/gpfs/home/jkumar/lib/petsc/config/configure.py", line 293, in > > petsc_configure > > framework.configure(out = sys.stdout) > > File > > "/gpfs/home/jkumar/lib/petsc/config/BuildSystem/config/framework.py", > line > > 933, in configure > > child.configure() > > File "/gpfs/home/jkumar/lib/petsc/config/BuildSystem/config/types.py", > > line 386, in configure > > map(lambda type: self.executeTest(self.checkSizeof, type), > > ['char','void *', 'short', 'int', 'long', 'long long', 'float', 'double', > > 'size_t']) > > File "/gpfs/home/jkumar/lib/petsc/config/BuildSystem/config/types.py", > > line 386, in > > map(lambda type: self.executeTest(self.checkSizeof, type), > > ['char','void *', 'short', 'int', 'long', 'long long', 'float', 'double', > > 'size_t']) > > File "/gpfs/home/jkumar/lib/petsc/config/BuildSystem/config/base.py", > > line 115, in executeTest > > ret = apply(test, args,kargs) > > File "/gpfs/home/jkumar/lib/petsc/config/BuildSystem/config/types.py", > > line 296, in checkSizeof > > raise RuntimeError(msg) > > > > This is what my configuration looks like (adapted from > > config/examples/arch-bgp-ibm-opt.py) > > configure_options = [ > > '--with-cc=mpixlc', > > '--with-fc=mpixlf90', > > '--with-cxx=mpixlcxx', > > 'COPTFLAGS=-O3', > > 'FOPTFLAGS=-O3', > > '--with-debugging=0', > > '--with-cmake=/soft/apps/fen/cmake-2.8.3/bin/cmake', > > # '--with-hdf5=/soft/apps/hdf5-1.8.0', > > '--download-parmetis=1', > > '--download-metis=1', > > '--download-plapack=1', > > '--download-hdf5=1' > > ] > > > > I would appreciate any help building the llbrary there. > > > > Thanks, > > Jitu > > > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From balay at mcs.anl.gov Tue Jul 16 11:59:55 2013 From: balay at mcs.anl.gov (Satish Balay) Date: Tue, 16 Jul 2013 11:59:55 -0500 (CDT) Subject: [petsc-users] Fwd: PETSc installation on Intrepid In-Reply-To: References: Message-ID: --download-package might not work on all machines. --download-hdf5=1 does not work on bg/p However there is hdf5 installed on it. You can try using --with-hdf5-include/--with-hdf5-lib options. There could still be an issue with "Compression library [libz.a or equivalent] not found" but I think the workarround is already in petsc-dev. Satish On Tue, 16 Jul 2013, Jitendra Kumar wrote: > Thanks Satish. I tried using the configuration you pointed me to with the > addition of --download-hdf5=1 and got error "Compression library [libz.a or > equivalent] not found > " > > Do I need to load some package to get this? > > Jitu > > > On Tue, Jul 16, 2013 at 11:59 AM, Satish Balay wrote: > > > As the message indicates you need '--with-batch' option on this machine > > > > Check one of the default builds on intrepid for configure options to use.. > > > > [perhaps > > /soft/apps/libraries/petsc/3.3-p6/xl-opt/conf/reconfigure-arch-bgp-ibm-opt.py] > > > > Satish > > > > On Tue, 16 Jul 2013, Jitendra Kumar wrote: > > > > > I ran into following errors while trying to build PETSc-dev on Intrepid > > > @ALCF. (configure.log attached) > > > > > > > > ******************************************************************************* > > > UNABLE to CONFIGURE with GIVEN OPTIONS (see configure.log for > > > details): > > > > > ------------------------------------------------------------------------------- > > > Cannot run executable to determine size of char. 
If this machine uses a > > > batch system > > > to submit jobs you will need to configure using ./configure with the > > > additional option --with-batch. > > > Otherwise there is problem with the compilers. Can you compile and run > > > code with your C/C++ (and maybe Fortran) compilers? > > > > > ******************************************************************************* > > > File "/gpfs/home/jkumar/lib/petsc/config/configure.py", line 293, in > > > petsc_configure > > > framework.configure(out = sys.stdout) > > > File > > > "/gpfs/home/jkumar/lib/petsc/config/BuildSystem/config/framework.py", > > line > > > 933, in configure > > > child.configure() > > > File "/gpfs/home/jkumar/lib/petsc/config/BuildSystem/config/types.py", > > > line 386, in configure > > > map(lambda type: self.executeTest(self.checkSizeof, type), > > > ['char','void *', 'short', 'int', 'long', 'long long', 'float', 'double', > > > 'size_t']) > > > File "/gpfs/home/jkumar/lib/petsc/config/BuildSystem/config/types.py", > > > line 386, in > > > map(lambda type: self.executeTest(self.checkSizeof, type), > > > ['char','void *', 'short', 'int', 'long', 'long long', 'float', 'double', > > > 'size_t']) > > > File "/gpfs/home/jkumar/lib/petsc/config/BuildSystem/config/base.py", > > > line 115, in executeTest > > > ret = apply(test, args,kargs) > > > File "/gpfs/home/jkumar/lib/petsc/config/BuildSystem/config/types.py", > > > line 296, in checkSizeof > > > raise RuntimeError(msg) > > > > > > This is what my configuration looks like (adapted from > > > config/examples/arch-bgp-ibm-opt.py) > > > configure_options = [ > > > '--with-cc=mpixlc', > > > '--with-fc=mpixlf90', > > > '--with-cxx=mpixlcxx', > > > 'COPTFLAGS=-O3', > > > 'FOPTFLAGS=-O3', > > > '--with-debugging=0', > > > '--with-cmake=/soft/apps/fen/cmake-2.8.3/bin/cmake', > > > # '--with-hdf5=/soft/apps/hdf5-1.8.0', > > > '--download-parmetis=1', > > > '--download-metis=1', > > > '--download-plapack=1', > > > '--download-hdf5=1' > > > ] > > > > > > I would appreciate any help building the llbrary there. > > > > > > Thanks, > > > Jitu > > > > > > > > From hsahasra at purdue.edu Tue Jul 16 13:13:08 2013 From: hsahasra at purdue.edu (Harshad Sahasrabudhe) Date: Tue, 16 Jul 2013 14:13:08 -0400 Subject: [petsc-users] Extracting data from a Petsc matrix In-Reply-To: <87ehb2fm1v.fsf@mcs.anl.gov> References: <201307131552.r6DFqZIF006475@mailhub245.itcs.purdue.edu> <87ehb2fm1v.fsf@mcs.anl.gov> Message-ID: <51E58D34.9050805@purdue.edu> Hi Jed, Thanks for your reply. > You're on your own for storage of factors. Alternatively, you could add > library support so that you could use PCLU and > '-pc_factor_mat_solver_package magma' (or PCFactorSetMatSolverPackage). > Doing this is not a priority for us, but we can provide guidance if you > want to tackle it. I would definitely like to start working on adding library support. I think this is the most efficient way to go about it. Can you give me certain details such as: 1) How should I start going about it? 2) How will I check-in the changes to Petsc? 3) What version of Petsc will the changes be reflected in if I started working on it right now? 4) How many hours does it generally take to get this done? 5) How is the peer review done? Thanks, Harshad On 07/13/2013 12:43 PM, Jed Brown wrote: > "hsahasra at purdue.edu" writes: > >> Hi, >> >> I am working on solving a system of linear equations with square >> matrix. I'm first factoring the matrix using LU decomposition. 
> I assume you're solving a dense problem because that is all MAGMA does. > >> I want to do the LU decomposition step using MAGMA on GPUs. MAGMA >> library implements LAPACK functions on a CPU+GPU based system. >> >> So my question is, how do I extract the data from a Petsc Mat so that >> it can be sent to the dgetrf routine in MAGMA. > MatDenseGetArray > >> Is there any need for duplicating the data for this step? > You're on your own for storage of factors. Alternatively, you could add > library support so that you could use PCLU and > '-pc_factor_mat_solver_package magma' (or PCFactorSetMatSolverPackage). > Doing this is not a priority for us, but we can provide guidance if you > want to tackle it. From knepley at gmail.com Tue Jul 16 13:37:12 2013 From: knepley at gmail.com (Matthew Knepley) Date: Tue, 16 Jul 2013 13:37:12 -0500 Subject: [petsc-users] Extracting data from a Petsc matrix In-Reply-To: <51E58D34.9050805@purdue.edu> References: <201307131552.r6DFqZIF006475@mailhub245.itcs.purdue.edu> <87ehb2fm1v.fsf@mcs.anl.gov> <51E58D34.9050805@purdue.edu> Message-ID: On Tue, Jul 16, 2013 at 1:13 PM, Harshad Sahasrabudhe wrote: > Hi Jed, > > Thanks for your reply. > > You're on your own for storage of factors. Alternatively, you could add >> library support so that you could use PCLU and >> '-pc_factor_mat_solver_package magma' (or PCFactorSetMatSolverPackage). >> Doing this is not a priority for us, but we can provide guidance if you >> want to tackle it. >> > > I would definitely like to start working on adding library support. I > think this is the most efficient way to go about it. Can you give me > certain details such as: > > 1) How should I start going about it? > Read the UMFPACK implementation > 2) How will I check-in the changes to Petsc? > Using Git > 3) What version of Petsc will the changes be reflected in if I started > working on it right now? > A branch of 'master' > 4) How many hours does it generally take to get this done? > How many licks does it take to get to the center of a Tootsie Roll Pop? > 5) How is the peer review done? > Through a pull request on BitBucket. Thanks, Matt > Thanks, > Harshad > > On 07/13/2013 12:43 PM, Jed Brown wrote: > >> "hsahasra at purdue.edu" writes: >> >> Hi, >>> >>> I am working on solving a system of linear equations with square >>> matrix. I'm first factoring the matrix using LU decomposition. >>> >> I assume you're solving a dense problem because that is all MAGMA does. >> >> I want to do the LU decomposition step using MAGMA on GPUs. MAGMA >>> library implements LAPACK functions on a CPU+GPU based system. >>> >>> So my question is, how do I extract the data from a Petsc Mat so that >>> it can be sent to the dgetrf routine in MAGMA. >>> >> MatDenseGetArray >> >> Is there any need for duplicating the data for this step? >>> >> You're on your own for storage of factors. Alternatively, you could add >> library support so that you could use PCLU and >> '-pc_factor_mat_solver_package magma' (or PCFactorSetMatSolverPackage). >> Doing this is not a priority for us, but we can provide guidance if you >> want to tackle it. >> > > -- What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. -- Norbert Wiener -------------- next part -------------- An HTML attachment was scrubbed... 
URL: 

From bsmith at mcs.anl.gov  Tue Jul 16 13:48:55 2013
From: bsmith at mcs.anl.gov (Barry Smith)
Date: Tue, 16 Jul 2013 13:48:55 -0500
Subject: [petsc-users] Extracting data from a Petsc matrix
In-Reply-To: References: <201307131552.r6DFqZIF006475@mailhub245.itcs.purdue.edu> <87ehb2fm1v.fsf@mcs.anl.gov> <51E58D34.9050805@purdue.edu> Message-ID: <0E4E6597-2525-4DAF-A093-568A623058FF@mcs.anl.gov>

Read all of http://www.mcs.anl.gov/petsc/developers/index.html

Note that if Magma has a calling sequence like lapack you could possibly steal chunks of code from the routines I pointed you to yesterday and modify them as needed so you don't need to reinvent the wheel.

Barry

On Jul 16, 2013, at 1:37 PM, Matthew Knepley wrote:

> On Tue, Jul 16, 2013 at 1:13 PM, Harshad Sahasrabudhe wrote:
> Hi Jed,
>
> Thanks for your reply.
>
> You're on your own for storage of factors. Alternatively, you could add
> library support so that you could use PCLU and
> '-pc_factor_mat_solver_package magma' (or PCFactorSetMatSolverPackage).
> Doing this is not a priority for us, but we can provide guidance if you
> want to tackle it.
>
> I would definitely like to start working on adding library support. I think this is the most efficient way to go about it. Can you give me certain details such as:
>
> 1) How should I start going about it?
>
> Read the UMFPACK implementation
>
> 2) How will I check-in the changes to Petsc?
>
> Using Git
>
> 3) What version of Petsc will the changes be reflected in if I started working on it right now?
>
> A branch of 'master'
>
> 4) How many hours does it generally take to get this done?
>
> How many licks does it take to get to the center of a Tootsie Roll Pop?
>
> 5) How is the peer review done?
>
> Through a pull request on BitBucket.
>
> Thanks,
>
> Matt
>
> Thanks,
> Harshad
>
> On 07/13/2013 12:43 PM, Jed Brown wrote:
> "hsahasra at purdue.edu" writes:
>
> Hi,
>
> I am working on solving a system of linear equations with square
> matrix. I'm first factoring the matrix using LU decomposition.
> I assume you're solving a dense problem because that is all MAGMA does.
>
> I want to do the LU decomposition step using MAGMA on GPUs. MAGMA
> library implements LAPACK functions on a CPU+GPU based system.
>
> So my question is, how do I extract the data from a Petsc Mat so that
> it can be sent to the dgetrf routine in MAGMA.
> MatDenseGetArray
>
> Is there any need for duplicating the data for this step?
> You're on your own for storage of factors. Alternatively, you could add
> library support so that you could use PCLU and
> '-pc_factor_mat_solver_package magma' (or PCFactorSetMatSolverPackage).
> Doing this is not a priority for us, but we can provide guidance if you
> want to tackle it.
>
>
>
>
> --
> What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead.
> -- Norbert Wiener

From hsahasra at purdue.edu  Tue Jul 16 14:18:08 2013
From: hsahasra at purdue.edu (Harshad Sahasrabudhe)
Date: Tue, 16 Jul 2013 15:18:08 -0400
Subject: [petsc-users] Extracting data from a Petsc matrix
In-Reply-To: <0E4E6597-2525-4DAF-A093-568A623058FF@mcs.anl.gov> References: <201307131552.r6DFqZIF006475@mailhub245.itcs.purdue.edu> <87ehb2fm1v.fsf@mcs.anl.gov> <51E58D34.9050805@purdue.edu> <0E4E6597-2525-4DAF-A093-568A623058FF@mcs.anl.gov> Message-ID: <51E59C70.6060300@purdue.edu>

Hi Barry,

I'm confused, can you please explain what you meant by 'MAGMA has a calling sequence like LAPACK'?
Thanks, Harshad On 07/16/2013 02:48 PM, Barry Smith wrote: > Read all of http://www.mcs.anl.gov/petsc/developers/index.html > > Note that if Magma has a calling sequence like lapack you could possible steal chunks of code from the routines I pointed you to yesterday and modify them as needed so you don't need to reinvent the wheel. > > > Barry > > On Jul 16, 2013, at 1:37 PM, Matthew Knepley wrote: > >> On Tue, Jul 16, 2013 at 1:13 PM, Harshad Sahasrabudhe wrote: >> Hi Jed, >> >> Thanks for your reply. >> >> You're on your own for storage of factors. Alternatively, you could add >> library support so that you could use PCLU and >> '-pc_factor_mat_solver_package magma' (or PCFactorSetMatSolverPackage). >> Doing this is not a priority for us, but we can provide guidance if you >> want to tackle it. >> >> I would definitely like to start working on adding library support. I think this is the most efficient way to go about it. Can you give me certain details such as: >> >> 1) How should I start going about it? >> >> Read the UMFPACK implementation >> >> 2) How will I check-in the changes to Petsc? >> >> Using Git >> >> 3) What version of Petsc will the changes be reflected in if I started working on it right now? >> >> A branch of 'master' >> >> 4) How many hours does it generally take to get this done? >> >> How many licks does it take to get to the center of a Tootsie Roll Pop? >> >> 5) How is the peer review done? >> >> Through a pull request on BitBucket. >> >> Thanks, >> >> Matt >> >> Thanks, >> Harshad >> >> On 07/13/2013 12:43 PM, Jed Brown wrote: >> "hsahasra at purdue.edu" writes: >> >> Hi, >> >> I am working on solving a system of linear equations with square >> matrix. I'm first factoring the matrix using LU decomposition. >> I assume you're solving a dense problem because that is all MAGMA does. >> >> I want to do the LU decomposition step using MAGMA on GPUs. MAGMA >> library implements LAPACK functions on a CPU+GPU based system. >> >> So my question is, how do I extract the data from a Petsc Mat so that >> it can be sent to the dgetrf routine in MAGMA. >> MatDenseGetArray >> >> Is there any need for duplicating the data for this step? >> You're on your own for storage of factors. Alternatively, you could add >> library support so that you could use PCLU and >> '-pc_factor_mat_solver_package magma' (or PCFactorSetMatSolverPackage). >> Doing this is not a priority for us, but we can provide guidance if you >> want to tackle it. >> >> >> >> >> -- >> What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. >> -- Norbert Wiener From bsmith at mcs.anl.gov Tue Jul 16 14:22:16 2013 From: bsmith at mcs.anl.gov (Barry Smith) Date: Tue, 16 Jul 2013 14:22:16 -0500 Subject: [petsc-users] Extracting data from a Petsc matrix In-Reply-To: <51E59C70.6060300@purdue.edu> References: <201307131552.r6DFqZIF006475@mailhub245.itcs.purdue.edu> <87ehb2fm1v.fsf@mcs.anl.gov> <51E58D34.9050805@purdue.edu> <0E4E6597-2525-4DAF-A093-568A623058FF@mcs.anl.gov> <51E59C70.6060300@purdue.edu> Message-ID: <702139F9-79D9-4FBB-ABB9-8350CC6A54A9@mcs.anl.gov> On Jul 16, 2013, at 2:18 PM, Harshad Sahasrabudhe wrote: > Hi Barry, > > I'm confused, can you please explain what you meant by 'MAGMA has a calling sequence like LAPACK'? 
Here is how we call lapack Cholesky factorization from PETSc: #undef __FUNCT__ #define __FUNCT__ "MatCholeskyFactor_SeqDense" PetscErrorCode MatCholeskyFactor_SeqDense(Mat A,IS perm,const MatFactorInfo *factinfo) { #if defined(PETSC_MISSING_LAPACK_POTRF) PetscFunctionBegin; SETERRQ(PETSC_COMM_SELF,PETSC_ERR_SUP,"POTRF - Lapack routine is unavailable."); #else Mat_SeqDense *mat = (Mat_SeqDense*)A->data; PetscErrorCode ierr; PetscBLASInt info,n; PetscFunctionBegin; ierr = PetscBLASIntCast(A->cmap->n,&n);CHKERRQ(ierr); ierr = PetscFree(mat->pivots);CHKERRQ(ierr); if (!A->rmap->n || !A->cmap->n) PetscFunctionReturn(0); PetscStackCallBLAS("LAPACKpotrf",LAPACKpotrf_("L",&n,mat->v,&mat->lda,&info)); if (info) SETERRQ1(PETSC_COMM_SELF,PETSC_ERR_MAT_CH_ZRPVT,"Bad factorization: zero pivot in row %D",(PetscInt)info-1); A->ops->solve = MatSolve_SeqDense; A->ops->solvetranspose = MatSolveTranspose_SeqDense; A->ops->solveadd = MatSolveAdd_SeqDense; A->ops->solvetransposeadd = MatSolveTransposeAdd_SeqDense; A->factortype = MAT_FACTOR_CHOLESKY; ierr = PetscLogFlops((A->cmap->n*A->cmap->n*A->cmap->n)/3.0);CHKERRQ(ierr); #endif PetscFunctionReturn(0); } If Magma has a calling sequence similar to LAPACKpotrf_("L",&n,mat->v,&mat->lda,&info)); then you could model your code on the routine above instead of typing in all new code that does almost the exact same thing. Barry > > Thanks, > Harshad > > On 07/16/2013 02:48 PM, Barry Smith wrote: >> Read all of http://www.mcs.anl.gov/petsc/developers/index.html >> >> Note that if Magma has a calling sequence like lapack you could possible steal chunks of code from the routines I pointed you to yesterday and modify them as needed so you don't need to reinvent the wheel. >> >> >> Barry >> >> On Jul 16, 2013, at 1:37 PM, Matthew Knepley wrote: >> >>> On Tue, Jul 16, 2013 at 1:13 PM, Harshad Sahasrabudhe wrote: >>> Hi Jed, >>> >>> Thanks for your reply. >>> >>> You're on your own for storage of factors. Alternatively, you could add >>> library support so that you could use PCLU and >>> '-pc_factor_mat_solver_package magma' (or PCFactorSetMatSolverPackage). >>> Doing this is not a priority for us, but we can provide guidance if you >>> want to tackle it. >>> >>> I would definitely like to start working on adding library support. I think this is the most efficient way to go about it. Can you give me certain details such as: >>> >>> 1) How should I start going about it? >>> >>> Read the UMFPACK implementation >>> 2) How will I check-in the changes to Petsc? >>> >>> Using Git >>> 3) What version of Petsc will the changes be reflected in if I started working on it right now? >>> >>> A branch of 'master' >>> 4) How many hours does it generally take to get this done? >>> >>> How many licks does it take to get to the center of a Tootsie Roll Pop? >>> 5) How is the peer review done? >>> >>> Through a pull request on BitBucket. >>> >>> Thanks, >>> >>> Matt >>> Thanks, >>> Harshad >>> >>> On 07/13/2013 12:43 PM, Jed Brown wrote: >>> "hsahasra at purdue.edu" writes: >>> >>> Hi, >>> >>> I am working on solving a system of linear equations with square >>> matrix. I'm first factoring the matrix using LU decomposition. >>> I assume you're solving a dense problem because that is all MAGMA does. >>> >>> I want to do the LU decomposition step using MAGMA on GPUs. MAGMA >>> library implements LAPACK functions on a CPU+GPU based system. >>> >>> So my question is, how do I extract the data from a Petsc Mat so that >>> it can be sent to the dgetrf routine in MAGMA. 
>>> MatDenseGetArray
>>>
>>> Is there any need for duplicating the data for this step?
>>> You're on your own for storage of factors. Alternatively, you could add
>>> library support so that you could use PCLU and
>>> '-pc_factor_mat_solver_package magma' (or PCFactorSetMatSolverPackage).
>>> Doing this is not a priority for us, but we can provide guidance if you
>>> want to tackle it.
>>>
>>>
>>>
>>>
>>> --
>>> What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead.
>>> -- Norbert Wiener

From hsahasra at purdue.edu  Tue Jul 16 14:26:22 2013
From: hsahasra at purdue.edu (Harshad Sahasrabudhe)
Date: Tue, 16 Jul 2013 15:26:22 -0400
Subject: [petsc-users] Extracting data from a Petsc matrix
In-Reply-To: <702139F9-79D9-4FBB-ABB9-8350CC6A54A9@mcs.anl.gov> References: <201307131552.r6DFqZIF006475@mailhub245.itcs.purdue.edu> <87ehb2fm1v.fsf@mcs.anl.gov> <51E58D34.9050805@purdue.edu> <0E4E6597-2525-4DAF-A093-568A623058FF@mcs.anl.gov> <51E59C70.6060300@purdue.edu> <702139F9-79D9-4FBB-ABB9-8350CC6A54A9@mcs.anl.gov> Message-ID: <51E59E5E.6000507@purdue.edu>

Yes, they have the same calling sequence for LU factorization. Thanks a lot! Will look into this.

Harshad

On 07/16/2013 03:22 PM, Barry Smith wrote:
> On Jul 16, 2013, at 2:18 PM, Harshad Sahasrabudhe wrote:
>
>> Hi Barry,
>>
>> I'm confused, can you please explain what you meant by 'MAGMA has a calling sequence like LAPACK'?
> Here is how we call lapack Cholesky factorization from PETSc:
>
> #undef __FUNCT__
> #define __FUNCT__ "MatCholeskyFactor_SeqDense"
> PetscErrorCode MatCholeskyFactor_SeqDense(Mat A,IS perm,const MatFactorInfo *factinfo)
> {
> #if defined(PETSC_MISSING_LAPACK_POTRF)
>   PetscFunctionBegin;
>   SETERRQ(PETSC_COMM_SELF,PETSC_ERR_SUP,"POTRF - Lapack routine is unavailable.");
> #else
>   Mat_SeqDense   *mat = (Mat_SeqDense*)A->data;
>   PetscErrorCode ierr;
>   PetscBLASInt   info,n;
>
>   PetscFunctionBegin;
>   ierr = PetscBLASIntCast(A->cmap->n,&n);CHKERRQ(ierr);
>   ierr = PetscFree(mat->pivots);CHKERRQ(ierr);
>
>   if (!A->rmap->n || !A->cmap->n) PetscFunctionReturn(0);
>   PetscStackCallBLAS("LAPACKpotrf",LAPACKpotrf_("L",&n,mat->v,&mat->lda,&info));
>   if (info) SETERRQ1(PETSC_COMM_SELF,PETSC_ERR_MAT_CH_ZRPVT,"Bad factorization: zero pivot in row %D",(PetscInt)info-1);
>   A->ops->solve             = MatSolve_SeqDense;
>   A->ops->solvetranspose    = MatSolveTranspose_SeqDense;
>   A->ops->solveadd          = MatSolveAdd_SeqDense;
>   A->ops->solvetransposeadd = MatSolveTransposeAdd_SeqDense;
>   A->factortype             = MAT_FACTOR_CHOLESKY;
>
>   ierr = PetscLogFlops((A->cmap->n*A->cmap->n*A->cmap->n)/3.0);CHKERRQ(ierr);
> #endif
>   PetscFunctionReturn(0);
> }
>
>
> If Magma has a calling sequence similar to LAPACKpotrf_("L",&n,mat->v,&mat->lda,&info) then you could model your code on the routine above instead of typing in all new code that does almost the exact same thing.
>
> Barry
>
>> Thanks,
>> Harshad
>>
>> On 07/16/2013 02:48 PM, Barry Smith wrote:
>>> Read all of http://www.mcs.anl.gov/petsc/developers/index.html
>>>
>>> Note that if Magma has a calling sequence like lapack you could possible steal chunks of code from the routines I pointed you to yesterday and modify them as needed so you don't need to reinvent the wheel.
>>>
>>>
>>> Barry
>>>
>>> On Jul 16, 2013, at 1:37 PM, Matthew Knepley wrote:
>>>
>>>> On Tue, Jul 16, 2013 at 1:13 PM, Harshad Sahasrabudhe wrote:
>>>> Hi Jed,
>>>>
>>>> Thanks for your reply.
>>>>
>>>> You're on your own for storage of factors.
Alternatively, you could add
>>>> library support so that you could use PCLU and
>>>> '-pc_factor_mat_solver_package magma' (or PCFactorSetMatSolverPackage).
>>>> Doing this is not a priority for us, but we can provide guidance if you
>>>> want to tackle it.
>>>>
>>>>
>>>>
>>>>
>>>> --
>>>> What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead.
>>>> -- Norbert Wiener
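Putting Jed's MatDenseGetArray() pointer and Barry's template together, a rough, untested sketch of the factorization step; the magma_dgetrf() calling sequence is assumed from MAGMA's LAPACK-style interface, and PetscScalar == double with leading dimension equal to the row count are also assumptions:

#include <petscmat.h>
#include <magma.h>   /* assumed MAGMA header name */

/* Sketch: factor a sequential dense PETSc Mat in place with MAGMA. */
PetscErrorCode MatLUFactorWithMagma(Mat A, magma_int_t *ipiv)
{
  PetscScalar *a;
  PetscInt    m, n;
  magma_int_t info;

  MatGetSize(A, &m, &n);
  MatDenseGetArray(A, &a);                        /* no copy is made */
  magma_dgetrf((magma_int_t)m, (magma_int_t)n,
               (double*)a, (magma_int_t)m, ipiv, &info);
  MatDenseRestoreArray(A, &a);
  if (info) SETERRQ1(PETSC_COMM_SELF, PETSC_ERR_LIB,
                     "MAGMA dgetrf failed: info = %d", (int)info);
  return 0;
}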
From potaman at outlook.com  Tue Jul 16 19:52:59 2013
From: potaman at outlook.com (subramanya sadasiva)
Date: Tue, 16 Jul 2013 20:52:59 -0400
Subject: [petsc-users] Using SNES Shell to create a new solution scheme
Message-ID: 

Hi,
In addition to simple a > 0 and b > 0 constraints at every point in the domain, I need to apply an additional a + b < 1 constraint over the domain. The first two, I understand, are pretty easy to do using SNES_VI. To apply the third using the reduced space approach, I'd need to modify the bounds function to have the linesearch direction available.
The strategies that seem feasible to me are as follows:
1. Program a solver using SNES Shell that does it.
2. Program a linesearch and use SNES Linesearch Shell to solve the problem with a standard SNES VI solver.
3. Modify SNES VI in a local installation.

To me, it seems that no. 2 is the simplest approach. Are any other approaches feasible?
Thanks,
Subramanya
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From bsmith at mcs.anl.gov  Tue Jul 16 20:24:14 2013
From: bsmith at mcs.anl.gov (Barry Smith)
Date: Tue, 16 Jul 2013 20:24:14 -0500
Subject: [petsc-users] Using SNES Shell to create a new solution scheme
In-Reply-To: References: Message-ID: 

I think the "correct" solution is to introduce a new variable z = a + b, which introduces new equations that satisfy a + b - z = 0, and then apply the bound constraint on z.

Barry

On Jul 16, 2013, at 7:52 PM, subramanya sadasiva wrote:

> Hi,
> In addition to simple a > 0 and b > 0 constraints at every point in the domain, I need to apply an additional a + b < 1 constraint over the domain. The first two, I understand, are pretty easy to do using SNES_VI. To apply the third using the reduced space approach, I'd need to modify the bounds function to have the linesearch direction available.
> The strategies that seem feasible to me are as follows:
> 1. Program a solver using SNES Shell that does it.
> 2. Program a linesearch and use SNES Linesearch Shell to solve the problem with a standard SNES VI solver.
> 3. Modify SNES VI in a local installation.
>
> To me, it seems that no. 2 is the simplest approach. Are any other approaches feasible?
> Thanks,
> Subramanya
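A rough, untested sketch of that setup, assuming the unknowns are stored interlaced as (a, b, z) with 3 dof per grid point, and that the residual function has been extended with the extra equation a + b - z = 0 (SNES_VI_INF/SNES_VI_NINF stand for +/- infinity in the bounds; x and snes come from the user's existing code):

/* Sketch: variable bounds for the augmented system (a, b, z), z = a + b. */
Vec          xl, xu;
PetscScalar *l, *u;
PetscInt     i, nlocal;

VecDuplicate(x, &xl);                  /* x is the (a,b,z) solution vector */
VecDuplicate(x, &xu);
VecGetLocalSize(xl, &nlocal);
VecGetArray(xl, &l);
VecGetArray(xu, &u);
for (i = 0; i < nlocal; i += 3) {
  l[i]   = 0.0;          u[i]   = SNES_VI_INF;   /* a >= 0         */
  l[i+1] = 0.0;          u[i+1] = SNES_VI_INF;   /* b >= 0         */
  l[i+2] = SNES_VI_NINF; u[i+2] = 1.0;           /* z = a + b <= 1 */
}
VecRestoreArray(xl, &l);
VecRestoreArray(xu, &u);
SNESVISetVariableBounds(snes, xl, xu);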
From i.gutheil at fz-juelich.de  Wed Jul 17 02:47:51 2013
From: i.gutheil at fz-juelich.de (Inge Gutheil)
Date: Wed, 17 Jul 2013 09:47:51 +0200
Subject: [petsc-users] installing petsc with scalapack from mkl
In-Reply-To: References: <51E563FE.6040803@purdue.edu> Message-ID: <51E64C27.3040709@fz-juelich.de>

Hello,
I tried this and got the following error:

================================================================================
TEST configureLibrary from PETSc.packages.scalapack(/lustre/jhome5/software/mathprod/PETSc/petsc-3.4.2/config/BuildSystem/config/package.py:464)
TESTING: configureLibrary from PETSc.packages.scalapack(/lustre/jhome5/software/mathprod/PETSc/petsc-3.4.2/config/BuildSystem/config/package.py:464)
  Find an installation and check if it can work with PETSc
*******************************************************************************
         UNABLE to CONFIGURE with GIVEN OPTIONS    (see configure.log for details):
-------------------------------------------------------------------------------
You must specify a path for scalapack with --with-scalapack-dir=<directory>
If you do not want scalapack, then give --with-scalapack=0
You might also consider using --download-scalapack instead
*******************************************************************************
  File "/lustre/jhome5/software/mathprod/PETSc/petsc-3.4.2/config/configure.py", line 293, in petsc_configure
    framework.configure(out = sys.stdout)
  File "/lustre/jhome5/software/mathprod/PETSc/petsc-3.4.2/config/BuildSystem/config/framework.py", line 933, in configure
    child.configure()
  File "/lustre/jhome5/software/mathprod/PETSc/petsc-3.4.2/config/BuildSystem/config/package.py", line 556, in configure
    self.executeTest(self.configureLibrary)
  File "/lustre/jhome5/software/mathprod/PETSc/petsc-3.4.2/config/BuildSystem/config/base.py", line 115, in executeTest
    ret = apply(test, args,kargs)
  File "/lustre/jhome5/software/mathprod/PETSc/petsc-3.4.2/config/BuildSystem/config/package.py", line 484, in configureLibrary
    for location, directory, lib, incl in self.generateGuesses():
  File "/lustre/jhome5/software/mathprod/PETSc/petsc-3.4.2/config/BuildSystem/config/package.py", line 314, in generateGuesses
    raise RuntimeError('You must specify a path for '+self.name+' with --with-'+self.package+'-dir=<directory>\nIf you do not want '+self.name+', then give --with-'+self.package+'=0\nYou might also consider using --download-'+self.package+' instead')
=====================================================================

See the attached complete configure.log.
This is on an Intel cluster with the Intel compiler.
Thanks
Inge Gutheil

On 07/16/13 17:27, Satish Balay wrote:
> Try:
>
> --with-scalapack-lib="-L/opt/intel/mkl//lib/intel64 -lmkl_scalapack_lp64 -lmkl_blacs_intelmpi_lp64"
>
>
> BLACS is now part of scalapack-2 [which petsc-3.4 uses] - but mkl has blas/scalapack split.
> So you would have to specify both libs with the --with-scalapack-lib option.
>
> Satish
>
> On Tue, 16 Jul 2013, Michael Povolotskyi wrote:
>
>> Dear Petsc developers and users,
>> I'm trying to configure petsc with scalapack from mkl library.
>>
>> From the configure.log (see attached) it seems that when PETSc checks for the
>> scalapack functionality it does not link blacs.
>> Please advise.
>> Thank you,
>> Michael.
>>
>>

--
Inge Gutheil
Juelich Supercomputing Centre
Institute for Advanced Simulation
Forschungszentrum Juelich GmbH
52425 Juelich, Germany

Phone: +49-2461-61-3135
Fax:   +49-2461-61-6656
E-mail: i.gutheil at fz-juelich.de

-------------- next part --------------
A non-text attachment was scrubbed...
Name: configure.log.gz
Type: application/x-gzip
Size: 524555 bytes
Desc: not available
URL: 

From knepley at gmail.com  Wed Jul 17 06:06:10 2013
From: knepley at gmail.com (Matthew Knepley)
Date: Wed, 17 Jul 2013 06:06:10 -0500
Subject: [petsc-users] installing petsc with scalapack from mkl
In-Reply-To: <51E64C27.3040709@fz-juelich.de> References: <51E563FE.6040803@purdue.edu> <51E64C27.3040709@fz-juelich.de> Message-ID: 

On Wed, Jul 17, 2013 at 2:47 AM, Inge Gutheil wrote:

> Hello,
> I tried this and got the following error:

1) Please do not send logs to petsc-users, send them to petsc-maint at mcs.anl.gov

2) The logic is wrong here. We are requiring you to also specify an empty --with-scalapack-include=[], which you should not have to. Please confirm that this works. We will put a fix in the next patch update.
Thanks,

    Matt

> ================================================================================
> TEST configureLibrary from
> PETSc.packages.scalapack(/lustre/jhome5/software/mathprod/PETSc/petsc-3.4.2/config/BuildSystem/config/package.py:464)
> TESTING: configureLibrary from
> PETSc.packages.scalapack(/lustre/jhome5/software/mathprod/PETSc/petsc-3.4.2/config/BuildSystem/config/package.py:464)
>   Find an installation and check if it can work with PETSc
> *******************************************************************************
>          UNABLE to CONFIGURE with GIVEN OPTIONS    (see configure.log
> for details):
> -------------------------------------------------------------------------------
> You must specify a path for scalapack with --with-scalapack-dir=<directory>
> If you do not want scalapack, then give --with-scalapack=0
> You might also consider using --download-scalapack instead
> *******************************************************************************
>   File "/lustre/jhome5/software/mathprod/PETSc/petsc-3.4.2/config/configure.py", line
> 293, in petsc_configure
>     framework.configure(out = sys.stdout)
>   File
> "/lustre/jhome5/software/mathprod/PETSc/petsc-3.4.2/config/BuildSystem/config/framework.py",
> line 933, in configure
>     child.configure()
>   File
> "/lustre/jhome5/software/mathprod/PETSc/petsc-3.4.2/config/BuildSystem/config/package.py",
> line 556, in configure
>     self.executeTest(self.configureLibrary)
>   File
> "/lustre/jhome5/software/mathprod/PETSc/petsc-3.4.2/config/BuildSystem/config/base.py",
> line 115, in executeTest
>     ret = apply(test, args,kargs)
>   File
> "/lustre/jhome5/software/mathprod/PETSc/petsc-3.4.2/config/BuildSystem/config/package.py",
> line 484, in configureLibrary
>     for location, directory, lib, incl in self.generateGuesses():
>   File
> "/lustre/jhome5/software/mathprod/PETSc/petsc-3.4.2/config/BuildSystem/config/package.py",
> line 314, in generateGuesses
>     raise RuntimeError('You must specify a path for '+self.name+' with
> --with-'+self.package+'-dir=<directory>\nIf you do not want
> '+self.name+', then give --with-'+self.package+'=0\nYou might also
> consider using --download-'+self.package+' instead')
> =====================================================================
> See the attached complete configure.log
> This is on an intel cluster with intel compiler.
> Thanks
> Inge Gutheil
>
> On 07/16/13 17:27, Satish Balay wrote:
>
>> Try:
>>
>> --with-scalapack-lib="-L/opt/intel/mkl//lib/intel64 -lmkl_scalapack_lp64 -lmkl_blacs_intelmpi_lp64"
>>
>> BLACS is now part of scalapack-2 [which petsc-3.4 uses] - but mkl has blas/scalapack split.
>> So you would have to specify both libs with the --with-scalapack-lib option.
>>
>> Satish
>>
>> On Tue, 16 Jul 2013, Michael Povolotskyi wrote:
>>
>>> Dear Petsc developers and users,
>>> I'm trying to configure petsc with scalapack from mkl library.
>>>
>>> From the configure.log (see attached) it seems that when PETSc checks for the
>>> scalapack functionality it does not link blacs.
>>> Please advise.
>>> Thank you,
>>> Michael.
-- 
What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead.
-- Norbert Wiener
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From balay at mcs.anl.gov Wed Jul 17 09:01:45 2013
From: balay at mcs.anl.gov (Satish Balay)
Date: Wed, 17 Jul 2013 09:01:45 -0500 (CDT)
Subject: Re: [petsc-users] installing petsc with scalapack from mkl
In-Reply-To: References: <51E563FE.6040803@purdue.edu> <51E64C27.3040709@fz-juelich.de>
Message-ID:

On Wed, 17 Jul 2013, Matthew Knepley wrote:
> > 1) Please do not send logs to petsc-users, send them to
> > petsc-maint at mcs.anl.gov

Since the reorg of the mailing lists it's now acceptable to post logs on the mailing lists. [And one doesn't need to subscribe to the lists.]

Satish

From bisheshkh at gmail.com Wed Jul 17 09:31:24 2013
From: bisheshkh at gmail.com (Bishesh Khanal)
Date: Wed, 17 Jul 2013 16:31:24 +0200
Subject: [petsc-users] discontinuous viscosity stokes equation 3D staggered grid
Message-ID:

Dear all,
(I apologize for a long email!! I think it's better to clearly explain the problem, even if it is boring, than to be short and confusing.)

I need to solve the following set of equations in 3D on a domain of around 250^3 grid size:

div(K grad(v)) - grad(p) = f1   (momentum eqn)
div(v) = f2                     (continuity eqn)

v is a vector of 3 components, p a scalar. K is piecewise discontinuous; say it has two values K1, K2 with the ratio K1/K2 reaching up to 10^5. f1, f2 are not zero and are functions of the space variables (x,y,z). This is thus pretty much a standard Stokes flow equation (except for the non-zero f2).

Now, I implemented two different approaches, each for both 2D and 3D, in MATLAB. It works for the smaller sizes but I have problems solving it for the problem size I need (250^3 grid size). I use a staggered grid with p on cell centers and the components of v on cell faces, with a similar split-up of K to cell centers and faces to account for the variable viscosity case.

The two approaches I take are:
M1) Setting up Ax=b where x stacks up the components of v and the scalar p for all discretized points. Thus A has a zero diagonal block. Then use the available iterative solvers. This requires a preconditioner, and that's where I had problems.
I could not find a suitable preconditioner and iterative solver pair in MATLAB that could produce a result for a saddle point problem of this size.

M2) Using an Augmented Lagrangian / penalty method: "uncouple" the pressure and velocity, i.e. use a pseudo-timestep to iteratively update p starting from the initial guess. After each iteration, use the new p value to solve the momentum equation. In this case the Ax=b would have an A with no zeros on the diagonal, and we are solving for an x which has only the components of v. Here a preconditioner such as ILU worked with GMRES etc. But for the bigger problem size (the size I need!!) it does not scale up.

Now, this is when I started learning PETSc, and after having played around a bit with it, I have a preliminary implementation of the 2D problem with method M1. What I did was basically use DMDA and KSP. To adapt DMDA to the staggered grid, I simply used ghost values as explained in fig 7.5, page 96 of:
http://books.google.co.uk/books?id=W83gxp165SkC&printsec=frontcover&dq=Introduction+to+Numerical+Geodynamic+Modelling&hl=en&sa=X&ei=v6TmUaP_L4PuOs3agJgE&ved=0CDIQ6AEwAA
That means for the 2D case, I have a DMDA with 3 dof; I set KSP with this DM object and solve the system.

Now my actual questions (you are patient :) ):

1) From the 2D example, I can see that in 3D the issue will again be to have a proper preconditioner. I saw some discussions in the archive about using PCFieldSplit or a Schur complement. Before implementing the 3D case, I'd like some suggestions on whether I could directly use some combination of solver and preconditioner to get a result for my problem size in 3D. Can I use multigrid with a proper preconditioner in this case? And by the way, is there a possibility of having support in PETSc for this implementation (Matt from petsc-dev is one of the authors of the paper, so I thought I could ask it here):
http://hpc.sagepub.com/content/early/2013/03/03/1094342013478640.abstract

2) The other option is to use M2 (see above), which worked in MATLAB for smaller 3D sizes. But the tricky part there would be to tune the pseudo-time parameter based on the different viscosity values. Furthermore, I'm not sure if this will scale up in parallel to give me the results. I will have to try, but it'd be nice if you can suggest which of the two methods I should try first. My interest in the first method was due to the possibility of having an implementation in PETSc using multigrid with GPUs (if that speeds up the whole thing).

Thanks,
Bishesh
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From jedbrown at mcs.anl.gov Tue Jul 16 13:55:47 2013
From: jedbrown at mcs.anl.gov (Jed Brown)
Date: Tue, 16 Jul 2013 10:55:47 -0800
Subject: Re: [petsc-users] Extracting data from a Petsc matrix
In-Reply-To: <51E58D34.9050805@purdue.edu> References: <201307131552.r6DFqZIF006475@mailhub245.itcs.purdue.edu> <87ehb2fm1v.fsf@mcs.anl.gov> <51E58D34.9050805@purdue.edu>
Message-ID: <874nbucp1o.fsf@mcs.anl.gov>

Harshad Sahasrabudhe writes:
> 1) How should I start going about it?
> 2) How will I check-in the changes to Petsc?

Work in a branch and let us know where your fork is so we can comment on it. Early review will improve quality and save everyone time, so don't try to do everything in private.

https://bitbucket.org/petsc/petsc/wiki/pull-request-instructions-git

> 3) What version of Petsc will the changes be reflected in if I started
> working on it right now?

It will be in the next feature release after it is accepted. So most likely petsc-3.5.
> 4) How many hours does it generally take to get this done? Hours to days, depending on your familiarity with the packages. > 5) How is the peer review done? Pull requests are good for reviewing code. General design discussions should happen on petsc-dev, and can include patches. -------------- next part -------------- A non-text attachment was scrubbed... Name: not available Type: application/pgp-signature Size: 835 bytes Desc: not available URL: From mpovolot at purdue.edu Wed Jul 17 14:26:56 2013 From: mpovolot at purdue.edu (Michael Povolotskyi) Date: Wed, 17 Jul 2013 15:26:56 -0400 Subject: [petsc-users] valgrind and petsc Message-ID: <51E6F000.9060906@purdue.edu> Hello, at some point the configuration script prints a warning that valgrind is not found in the system. Is petsc going to link some valgrind related libraries if valgrind is installed? Thank you, Michael. -- Michael Povolotskyi, PhD Research Assistant Professor Network for Computational Nanotechnology 207 S Martin Jischke Drive Purdue University, DLR, room 441-10 West Lafayette, Indiana 47907 phone: +1-765-494-9396 fax: +1-765-496-6026 From knepley at gmail.com Wed Jul 17 14:30:16 2013 From: knepley at gmail.com (Matthew Knepley) Date: Wed, 17 Jul 2013 14:30:16 -0500 Subject: [petsc-users] valgrind and petsc In-Reply-To: <51E6F000.9060906@purdue.edu> References: <51E6F000.9060906@purdue.edu> Message-ID: On Wed, Jul 17, 2013 at 2:26 PM, Michael Povolotskyi wrote: > Hello, > at some point the configuration script prints a warning that valgrind is > not found in the system. > Is petsc going to link some valgrind related libraries if valgrind is > installed? > No, we just recommend using it to debug. Matt > Thank you, > Michael. > > -- > Michael Povolotskyi, PhD > Research Assistant Professor > Network for Computational Nanotechnology > 207 S Martin Jischke Drive > Purdue University, DLR, room 441-10 > West Lafayette, Indiana 47907 > > phone: +1-765-494-9396 > fax: +1-765-496-6026 > > -- What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. -- Norbert Wiener -------------- next part -------------- An HTML attachment was scrubbed... URL: From bsmith at mcs.anl.gov Wed Jul 17 14:31:31 2013 From: bsmith at mcs.anl.gov (Barry Smith) Date: Wed, 17 Jul 2013 14:31:31 -0500 Subject: [petsc-users] valgrind and petsc In-Reply-To: <51E6F000.9060906@purdue.edu> References: <51E6F000.9060906@purdue.edu> Message-ID: On Jul 17, 2013, at 2:26 PM, Michael Povolotskyi wrote: > Hello, > at some point the configuration script prints a warning that valgrind is not found in the system. > Is petsc going to link some valgrind related libraries if valgrind is installed? PETSc does not link any valgrind libraries, but if valgrind.h is found PETSc does include it and use it to determine if valgrind is running (when PETSc is run under valgrind we turn off all the PETSc memory error detection so that valgrinds will find any problems and not get confused by our own memory checking). For C and C++ programming valgrind is an enormously powerful tool, taking seconds to find memory bugs that manually would take hours by experts; unless you are one of the few who never put bugs into code we highly recommend installing it and learning a little about it. Barry > Thank you, > Michael. 
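What that detection looks like mechanically -- a minimal sketch using the client-request macro from valgrind's header, not PETSc's actual code:

  #include <valgrind/valgrind.h>  /* header-only client requests; nothing extra to link */

  static int use_internal_malloc_tracking(void)
  {
    /* RUNNING_ON_VALGRIND is nonzero when the process runs under valgrind,
       so a library can hand memory checking over to valgrind instead of
       doing its own. */
    return RUNNING_ON_VALGRIND ? 0 : 1;
  }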
From mpovolot at purdue.edu Wed Jul 17 14:32:12 2013
From: mpovolot at purdue.edu (Michael Povolotskyi)
Date: Wed, 17 Jul 2013 15:32:12 -0400
Subject: Re: [petsc-users] valgrind and petsc
In-Reply-To: References: <51E6F000.9060906@purdue.edu>
Message-ID: <51E6F13C.9030103@purdue.edu>

Thank you! This is very helpful information.
Michael.

On 07/17/2013 03:31 PM, Barry Smith wrote:
> [...]

From jedbrown at mcs.anl.gov Wed Jul 17 14:48:07 2013
From: jedbrown at mcs.anl.gov (Jed Brown)
Date: Wed, 17 Jul 2013 11:48:07 -0800
Subject: Re: [petsc-users] discontinuous viscosity stokes equation 3D staggered grid
In-Reply-To: References: Message-ID: <87li5555oo.fsf@mcs.anl.gov>

Bishesh Khanal writes:
> Now, I implemented two different approaches, each for both 2D and 3D, in
> MATLAB. It works for the smaller sizes but I have problems solving it for
> the problem size I need (250^3 grid size).
> I use a staggered grid with p on cell centers and the components of v on cell
> faces, with a similar split-up of K to cell centers and faces to account for
> the variable viscosity case.

Okay, you're using a staggered-grid finite difference discretization of variable-viscosity Stokes. This is a common problem and I recommend starting with PCFieldSplit with Schur complement reduction (make that work first, then switch to block preconditioner). You can use PCLSC or (probably better for you) assemble a preconditioning matrix containing the inverse viscosity in the pressure-pressure block. This diagonal matrix is a spectrally equivalent (or nearly so, depending on discretization) approximation of the Schur complement. The velocity block can be solved with algebraic multigrid. Read the PCFieldSplit docs (follow papers as appropriate) and let us know if you get stuck.
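For a first pass at Jed's recipe, the runtime options could look something like this (option spellings from the petsc-3.4 era -- check -help output; the 'user' Schur preconditioner expects the assembled inverse-viscosity matrix to be supplied from code, see the PCFieldSplit manual pages). This is an illustration of the approach, not a tuned configuration:

  -ksp_type fgmres -pc_type fieldsplit -pc_fieldsplit_type schur
  -pc_fieldsplit_schur_precondition user
  -fieldsplit_0_ksp_type preonly -fieldsplit_0_pc_type gamg
  -fieldsplit_1_ksp_type gmres -fieldsplit_1_pc_type jacobi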
From rtm at eecs.utk.edu Wed Jul 17 15:26:38 2013
From: rtm at eecs.utk.edu (Richard Tran Mills)
Date: Wed, 17 Jul 2013 16:26:38 -0400
Subject: [petsc-users] Fwd: PETSc installation on Intrepid
In-Reply-To: References: Message-ID: <51E6FDFE.2010709@eecs.utk.edu>

Hi Satish,

I think something for handling the libz problem must be missing.
>>> >>> [perhaps >>> /soft/apps/libraries/petsc/3.3-p6/xl-opt/conf/reconfigure-arch-bgp-ibm-opt.py] >>> >>> Satish >>> >>> On Tue, 16 Jul 2013, Jitendra Kumar wrote: >>> >>>> I ran into following errors while trying to build PETSc-dev on Intrepid >>>> @ALCF. (configure.log attached) >>>> >>>> >>> ******************************************************************************* >>>> UNABLE to CONFIGURE with GIVEN OPTIONS (see configure.log for >>>> details): >>>> >>> ------------------------------------------------------------------------------- >>>> Cannot run executable to determine size of char. If this machine uses a >>>> batch system >>>> to submit jobs you will need to configure using ./configure with the >>>> additional option --with-batch. >>>> Otherwise there is problem with the compilers. Can you compile and run >>>> code with your C/C++ (and maybe Fortran) compilers? >>>> >>> ******************************************************************************* >>>> File "/gpfs/home/jkumar/lib/petsc/config/configure.py", line 293, in >>>> petsc_configure >>>> framework.configure(out = sys.stdout) >>>> File >>>> "/gpfs/home/jkumar/lib/petsc/config/BuildSystem/config/framework.py", >>> line >>>> 933, in configure >>>> child.configure() >>>> File "/gpfs/home/jkumar/lib/petsc/config/BuildSystem/config/types.py", >>>> line 386, in configure >>>> map(lambda type: self.executeTest(self.checkSizeof, type), >>>> ['char','void *', 'short', 'int', 'long', 'long long', 'float', 'double', >>>> 'size_t']) >>>> File "/gpfs/home/jkumar/lib/petsc/config/BuildSystem/config/types.py", >>>> line 386, in >>>> map(lambda type: self.executeTest(self.checkSizeof, type), >>>> ['char','void *', 'short', 'int', 'long', 'long long', 'float', 'double', >>>> 'size_t']) >>>> File "/gpfs/home/jkumar/lib/petsc/config/BuildSystem/config/base.py", >>>> line 115, in executeTest >>>> ret = apply(test, args,kargs) >>>> File "/gpfs/home/jkumar/lib/petsc/config/BuildSystem/config/types.py", >>>> line 296, in checkSizeof >>>> raise RuntimeError(msg) >>>> >>>> This is what my configuration looks like (adapted from >>>> config/examples/arch-bgp-ibm-opt.py) >>>> configure_options = [ >>>> '--with-cc=mpixlc', >>>> '--with-fc=mpixlf90', >>>> '--with-cxx=mpixlcxx', >>>> 'COPTFLAGS=-O3', >>>> 'FOPTFLAGS=-O3', >>>> '--with-debugging=0', >>>> '--with-cmake=/soft/apps/fen/cmake-2.8.3/bin/cmake', >>>> # '--with-hdf5=/soft/apps/hdf5-1.8.0', >>>> '--download-parmetis=1', >>>> '--download-metis=1', >>>> '--download-plapack=1', >>>> '--download-hdf5=1' >>>> ] >>>> >>>> I would appreciate any help building the llbrary there. >>>> >>>> Thanks, >>>> Jitu >>>> >>> -- Richard Tran Mills, Ph.D. Computational Earth Scientist | Joint Assistant Professor Hydrogeochemical Dynamics Team | EECS and Earth & Planetary Sciences Oak Ridge National Laboratory | University of Tennessee, Knoxville E-mail: rmills at ornl.gov V: 865-241-3198 http://climate.ornl.gov/~rmills From balay at mcs.anl.gov Wed Jul 17 15:29:44 2013 From: balay at mcs.anl.gov (Satish Balay) Date: Wed, 17 Jul 2013 15:29:44 -0500 (CDT) Subject: [petsc-users] Fwd: PETSc installation on Intrepid In-Reply-To: <51E6FDFE.2010709@eecs.utk.edu> References: <51E6FDFE.2010709@eecs.utk.edu> Message-ID: You can try using the configure option: LIBS=/soft/apps/zlib-1.2.3/lib/libz.a Satish On Wed, 17 Jul 2013, Richard Tran Mills wrote: > Hi Satish, > > I think something for handling the libz problem must be missing. 
> I just tried configuring on Intrepid with the configure script appended
> to this email and I get the same problem:
>
> Compression library [libz.a or equivalent] not found
>
> [...]

From ling.zou at inl.gov Wed Jul 17 18:02:28 2013
From: ling.zou at inl.gov (Zou (Non-US), Ling)
Date: Wed, 17 Jul 2013 17:02:28 -0600
Subject: [petsc-users] example on manually setup matrix coloring for finite difference Jacobian?
Message-ID:

Hi All,

If I have a 2-d problem with unstructured triangle mesh, I suppose I need to handle the coloring manually in case I need the finite differencing Jacobian.
Also I guess it is doable as I know the mesh connectivity. I noticed there are several examples handling the structured mesh case. Is there any example to set up the matrix coloring for this kind of case?

Best,

Ling

From knepley at gmail.com Wed Jul 17 18:06:05 2013
From: knepley at gmail.com (Matthew Knepley)
Date: Wed, 17 Jul 2013 18:06:05 -0500
Subject: Re: [petsc-users] example on manually setup matrix coloring for finite difference Jacobian?
In-Reply-To: References: Message-ID:

On Wed, Jul 17, 2013 at 6:02 PM, Zou (Non-US), Ling wrote:
> Hi All,
>
> If I have a 2-d problem with unstructured triangle mesh, I suppose I
> need to handle the coloring manually in case I need the finite
> differencing Jacobian. Also I guess it is doable as I know the mesh
> connectivity. I noticed there are several examples handling the structured
> mesh case. Is there any example to set up the matrix coloring for this
> kind of case?

We have no examples of this, and have not pushed it in our development. I would also consider forming a simplified operator for preconditioning while using the finite difference approximation for the action.

   Matt

-- 
What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead.
-- Norbert Wiener
-------------- next part --------------
An HTML attachment was scrubbed...
URL:
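A minimal sketch of the pattern Matt describes -- the Jacobian action applied matrix-free by finite differencing, with a user-assembled simplified matrix for the preconditioner (petsc-3.4-era calls; FormSimplifiedJacobian and user are placeholder names, not from any PETSc example):

  Mat J, P;
  MatCreateSNESMF(snes, &J);      /* J y ~= (F(x + h y) - F(x))/h; no entries stored */
  /* ... create and preallocate P from the mesh connectivity ... */
  SNESSetJacobian(snes, J, P, FormSimplifiedJacobian, &user);
  /* FormSimplifiedJacobian() fills only the simplified matrix P each Newton step */

The same split is available from the command line via -snes_mf_operator together with a routine that assembles P.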
From ling.zou at inl.gov Wed Jul 17 18:15:01 2013
From: ling.zou at inl.gov (Zou (Non-US), Ling)
Date: Wed, 17 Jul 2013 17:15:01 -0600
Subject: Re: [petsc-users] example on manually setup matrix coloring for finite difference Jacobian?
In-Reply-To: References: Message-ID:

(meant to send to the list)

Thanks Matt for your reply.

In my case, 2-d problem with unstructured mesh, what would be a practical way to move forward?
1, using fully analytical Jacobian, it however will be very difficult to make everything right eventually
2, using finite differencing Jacobian, but use keyword SNESDefaultComputeJacobian, which is very slow
3, using matrix free operation to avoid the Jacobian calculation

Any suggestion?

On Wed, Jul 17, 2013 at 5:06 PM, Matthew Knepley wrote:
> [...]

From knepley at gmail.com Wed Jul 17 18:17:13 2013
From: knepley at gmail.com (Matthew Knepley)
Date: Wed, 17 Jul 2013 18:17:13 -0500
Subject: Re: [petsc-users] example on manually setup matrix coloring for finite difference Jacobian?
In-Reply-To: References: Message-ID:

On Wed, Jul 17, 2013 at 6:15 PM, Zou (Non-US), Ling wrote:
> (meant to send to the list)
>
> Thanks Matt for your reply.
>
> In my case, 2-d problem with unstructured mesh, what would be a
> practical way to move forward?
> 1, using fully analytical Jacobian, it however will be very difficult
> to make everything right eventually

This is, of course, preferable if preconditioning is difficult.

> 2, using finite differencing Jacobian, but use keyword
> SNESDefaultComputeJacobian, which is very slow

This is not really practical for anything but toy problems.

> 3, using matrix free operation to avoid the Jacobian calculation

What I was suggesting is that you use MF FD action for the action of the Jacobian, which is generally an accurate operation, and then assemble a simplified operator to use for constructing the preconditioner.

   Matt

> Any suggestion?
> [...]

-- 
What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead.
-- Norbert Wiener
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From jedbrown at mcs.anl.gov Wed Jul 17 17:36:14 2013
From: jedbrown at mcs.anl.gov (Jed Brown)
Date: Wed, 17 Jul 2013 14:36:14 -0800
Subject: [petsc-users] AGU Session: DI012. State of the Art in Computational Geoscience
Message-ID: <87mwpk4xwh.fsf@mcs.anl.gov>

If you are thinking about attending the American Geophysical Union Fall Meeting (Dec 9-13 in San Francisco), please consider submitting an abstract to this diverse session.

DI012. State of the Art in Computational Geoscience

This session highlights computational advances in areas such as lithospheric and mantle dynamics, magma and fluid transport, landscape evolution, polar ice, subsurface flow, Earth structure inversion and Earth material properties. We seek contributions from all aspects of geophysical computation including: accurate, robust multiscale discretizations and efficient, scalable solvers; multilevel, block decomposed, and structure-preserving representations of operators, addressing model non-smoothness, and utilizing next generation hardware; efficient and flexible implementations for the community; data assimilation with uncertainty and experimental design.
Featured in both computational SWIRLs:

* Computational Methods Across Scales
* Characterizing Uncertainty

Invited presenters:

* Paul Tackley (ETH Zürich)
* David Ham (Imperial College London)
* Reed Maxwell (Colorado School of Mines)
* Jack Poulson (Georgia Institute of Technology)

Conveners:

* Jed Brown (Argonne National Laboratory)
* Matthew Knepley (University of Chicago)
* Dave May (ETH Zürich)
* Eh Tan (Academia Sinica)

http://fallmeeting.agu.org/2013/scientific-program/session-search/sessions/di012-state-of-the-art-in-computational-geoscience-2/

Abstracts are due August 6.
-------------- next part --------------
A non-text attachment was scrubbed...
Name: not available
Type: application/pgp-signature
Size: 835 bytes
Desc: not available
URL:

From bsmith at mcs.anl.gov Wed Jul 17 20:23:06 2013
From: bsmith at mcs.anl.gov (Barry Smith)
Date: Wed, 17 Jul 2013 20:23:06 -0500
Subject: Re: [petsc-users] example on manually setup matrix coloring for finite difference Jacobian?
In-Reply-To: References: Message-ID: <9B8B5972-797A-4D7B-AA9F-CB2EB576F0A6@mcs.anl.gov>

On Jul 17, 2013, at 6:02 PM, "Zou (Non-US), Ling" wrote:
> Hi All,
>
> If I have a 2-d problem with unstructured triangle mesh, I suppose I
> need to handle the coloring manually in case I need the finite
> differencing Jacobian. Also I guess it is doable as I know the mesh
> connectivity. I noticed there are several examples handling the structured
> mesh case. Is there any example to set up the matrix coloring for this
> kind of case?

See the section Finite Difference Jacobian Approximations in the users manual; this tells you exactly what needs to be done. See also src/snes/examples/tutorials/ex1.c, the code that begins with

  if (fd_coloring) {

Similar code also for Fortran programmers in ex1f.F and in ex10d/ex10.c.

Should be straightforward; let us know if you have any problems,

  Barry

> Best,
>
> Ling
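A compressed sketch of the sequence that manual section and ex1.c walk through (petsc-3.4-era names; J is the preallocated Jacobian built from the mesh connectivity and FormFunction the residual routine, both placeholders):

  ISColoring    iscoloring;
  MatFDColoring fdcoloring;
  MatGetColoring(J, MATCOLORINGSL, &iscoloring);   /* color J's nonzero pattern */
  MatFDColoringCreate(J, iscoloring, &fdcoloring);
  MatFDColoringSetFunction(fdcoloring, (PetscErrorCode (*)(void))FormFunction, &user);
  MatFDColoringSetFromOptions(fdcoloring);
  ISColoringDestroy(&iscoloring);
  /* then pass fdcoloring as the context when setting the SNES Jacobian, as ex1.c does */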
From olivier.bonnefon at avignon.inra.fr Thu Jul 18 05:08:47 2013
From: olivier.bonnefon at avignon.inra.fr (Olivier Bonnefon)
Date: Thu, 18 Jul 2013 12:08:47 +0200
Subject: [petsc-users] FEM on 2D poisson equation
Message-ID: <51E7BEAF.4090901@avignon.inra.fr>

Hello,

I have a 2-d heat equation that I want to simulate with the Finite Element Method; to do this, I'm looking for an example solving the 2D Poisson equation with FEM (DMDA or DMPlex). Is there an example like this?

Thanks a lot.

Olivier Bonnefon

-- 
Olivier Bonnefon
INRA PACA-Avignon, Unité BioSP
Tel: +33 (0)4 32 72 21 58

From knepley at gmail.com Thu Jul 18 06:12:31 2013
From: knepley at gmail.com (Matthew Knepley)
Date: Thu, 18 Jul 2013 06:12:31 -0500
Subject: Re: [petsc-users] FEM on 2D poisson equation
In-Reply-To: <51E7BEAF.4090901@avignon.inra.fr> References: <51E7BEAF.4090901@avignon.inra.fr>
Message-ID:

On Thu, Jul 18, 2013 at 5:08 AM, Olivier Bonnefon wrote:
> Hello,
>
> I have a 2-d heat equation that I want to simulate with the Finite Element
> Method; to do this, I'm looking for an example solving the 2D Poisson
> equation with FEM (DMDA or DMPlex). Is there an example like this?

There is, but it is still somewhat problematic. I use FIAT to generate the basis function tabulation, so you have to configure with

  --download-fiat --download-scientificpython --download-generator

and you need mesh generation and partitioning

  --download-triangle --download-chaco

and then you can run SNES ex12 using Builder (which will make the header file)

  python2.7 ./config/builder2.py check src/snes/examples/tutorials/ex12.c

Jed and I are working on an all-C version of tabulation which would mean that you could bypass the Python code generation step. Once the header is generated for the element you want, then you can just run the example as normal.

   Matt

-- 
What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead.
-- Norbert Wiener
-------------- next part --------------
An HTML attachment was scrubbed...
URL:
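Collected into one line, that configuration would be something like (any other site-specific options go on the same command line):

  ./configure --download-fiat --download-scientificpython --download-generator --download-triangle --download-chaco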
From olivier.bonnefon at avignon.inra.fr Thu Jul 18 08:17:17 2013
From: olivier.bonnefon at avignon.inra.fr (Olivier Bonnefon)
Date: Thu, 18 Jul 2013 15:17:17 +0200
Subject: Re: [petsc-users] FEM on 2D poisson equation
In-Reply-To: References: <51E7BEAF.4090901@avignon.inra.fr>
Message-ID: <51E7EADD.6010401@avignon.inra.fr>

It is what I wanted, it works. If I understand the code correctly, ex12.h contains the P1 implementation. To simulate another system, with time dependence for example (du/dt), I have to adapt the plugin functions.

Thanks a lot.

Olivier B

On 07/18/2013 01:12 PM, Matthew Knepley wrote:
> [...]

-- 
Olivier Bonnefon
INRA PACA-Avignon, Unité BioSP
Tel: +33 (0)4 32 72 21 58

From knepley at gmail.com Thu Jul 18 08:26:29 2013
From: knepley at gmail.com (Matthew Knepley)
Date: Thu, 18 Jul 2013 08:26:29 -0500
Subject: Re: [petsc-users] FEM on 2D poisson equation
In-Reply-To: <51E7EADD.6010401@avignon.inra.fr> References: <51E7BEAF.4090901@avignon.inra.fr> <51E7EADD.6010401@avignon.inra.fr>
Message-ID:

On Thu, Jul 18, 2013 at 8:17 AM, Olivier Bonnefon wrote:
> It is what I wanted, it works.
> If I understand the code correctly, ex12.h contains the P1 implementation.
> To simulate another system, with time dependence for example (du/dt), I
> have to adapt the plugin functions.

The way I would add time dependence is to convert this from a SNES example into a TS example. I can help you do this since I want to start using TS by default. Does this sound reasonable?

  Thanks,

     Matt

From olivier.bonnefon at avignon.inra.fr Thu Jul 18 08:39:29 2013
From: olivier.bonnefon at avignon.inra.fr (Olivier Bonnefon)
Date: Thu, 18 Jul 2013 15:39:29 +0200
Subject: Re: [petsc-users] FEM on 2D poisson equation
In-Reply-To: References: <51E7BEAF.4090901@avignon.inra.fr> <51E7EADD.6010401@avignon.inra.fr>
Message-ID: <51E7F011.2020408@avignon.inra.fr>

On 07/18/2013 03:26 PM, Matthew Knepley wrote:
> [...]
> The way I would add time dependence is to convert this from a SNES
> example into a TS example. I can help you do this since I want to start
> using TS by default. Does this sound reasonable?
Yes, of course. My goal is to simulate a diffusion equation with nonlinear sources, for example Lotka-Volterra competition.

Olivier B
> [...]
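A skeletal TS driver for that kind of reaction-diffusion system might look as follows (petsc-3.4-era calling sequences, heavily abbreviated; FormIFunction, user, dm and u are placeholders, and the spatial part is assumed to come from the DM as in ex12):

  TS ts;
  TSCreate(PETSC_COMM_WORLD, &ts);
  TSSetDM(ts, dm);                                 /* reuse the mesh/discretization DM */
  TSSetIFunction(ts, NULL, FormIFunction, &user);  /* F(t,u,u_t) = u_t - div(k grad u) - f(u) */
  TSSetFromOptions(ts);
  TSSolve(ts, u);
  TSDestroy(&ts);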
From i.gutheil at fz-juelich.de Thu Jul 18 09:07:21 2013
From: i.gutheil at fz-juelich.de (Inge Gutheil)
Date: Thu, 18 Jul 2013 16:07:21 +0200
Subject: Re: [petsc-users] installing petsc with scalapack from mkl
In-Reply-To: References: <51E563FE.6040803@purdue.edu> <51E64C27.3040709@fz-juelich.de>
Message-ID: <51E7F699.4090403@fz-juelich.de>

The configure finally worked when I added
'--with-scalapack-include=/usr/local/intel/Composer/composer_xe_2011_sp1.10.319/mkl/include',
There is an mkl_scalapack.h in that directory. It did not work with MKLPATH, and with an empty path I got a python error; perhaps I added a blank where I should not have, I don't know how to use python correctly. Unfortunately I can't find out whether it compiles and runs, because the compiler is in maintenance on the test cluster where I have to try the installations.

Thanks
Inge Gutheil

On 07/17/13 13:06, Matthew Knepley wrote:
> [...]
> We will put a fix in the next patch update.
> [...]

--
--
Inge Gutheil
Juelich Supercomputing Centre
Institute for Advanced Simulation
Forschungszentrum Juelich GmbH
52425 Juelich, Germany

Phone: +49-2461-61-3135
Fax: +49-2461-61-6656
E-mail: i.gutheil at fz-juelich.de

From armeliusc at gmail.com Thu Jul 18 12:09:01 2013
From: armeliusc at gmail.com (Armelius Cameron)
Date: Thu, 18 Jul 2013 13:09:01 -0400
Subject: [petsc-users] Getting Different Solution from KSP with different resolution
Message-ID:

Hello,
I am trying to work on getting to know PETSc by doing an example myself. Basically, I am trying to solve Ax = b using KSP, where A is the 1D Laplacian operator (a tridiagonal banded matrix with {-1, 2, -1} on the diagonals) and b is the forcing term, so it's just a simple 1-D Poisson equation.

The size of the matrix is n by n, and the vectors have size n. The issue I am getting is that when I change n, I get a different answer for the vector x. When I plot the result x, the shape still looks like the shape of the potential I expect, except it's scaled somehow, and the scale is related to n somehow.

I am at a loss in trying to figure out what would cause this, so any help would be appreciated. I've attached the code (fortran) I have.
Thank you.

AC
-------------- next part --------------
An HTML attachment was scrubbed...
URL:
-------------- next part --------------
A non-text attachment was scrubbed...
Name: petsc_poisson.F90
Type: application/octet-stream
Size: 2847 bytes
Desc: not available
URL:

From ling.zou at inl.gov Thu Jul 18 12:24:15 2013
From: ling.zou at inl.gov (Zou (Non-US), Ling)
Date: Thu, 18 Jul 2013 11:24:15 -0600
Subject: Re: [petsc-users] Getting Different Solution from KSP with different resolution
Message-ID:

I suppose you need at least one Dirichlet boundary condition for your problem? i.e., you could not do this:
2T(1) - T(2) = 0
and
-T(N-1) + 2T(N) = 0
at the same time.

On Thu, Jul 18, 2013 at 11:09 AM, Armelius Cameron wrote:
> [...]
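A sketch of the assembly Ling is pointing at -- Dirichlet rows at both ends plus the h^2 scaling of the right-hand side (in C rather than the poster's Fortran; all names are illustrative):

  PetscInt    i, Istart, Iend, col[3];
  PetscScalar v[3], h = 1.0/(n - 1);
  MatGetOwnershipRange(A, &Istart, &Iend);
  for (i = Istart; i < Iend; i++) {
    if (i == 0 || i == n-1) {            /* Dirichlet rows pin the solution */
      MatSetValue(A, i, i, 1.0, INSERT_VALUES);
    } else {
      col[0] = i-1; col[1] = i; col[2] = i+1;
      v[0] = -1.0;  v[1] = 2.0; v[2] = -1.0;
      MatSetValues(A, 1, &i, 3, col, v, INSERT_VALUES);
    }
  }
  MatAssemblyBegin(A, MAT_FINAL_ASSEMBLY);
  MatAssemblyEnd(A, MAT_FINAL_ASSEMBLY);
  /* interior entries of b carry f(x_i)*h*h; rows 0 and n-1 carry boundary values */

With the stencil left unscaled like this, the h^2 lands on the right-hand side, which is exactly the factor the poster found missing.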
From armeliusc at gmail.com Thu Jul 18 12:48:09 2013
From: armeliusc at gmail.com (Armelius Cameron)
Date: Thu, 18 Jul 2013 13:48:09 -0400
Subject: Re: [petsc-users] Getting Different Solution from KSP with different resolution
Message-ID:

Thanks for the pointer; I include the Dirichlet boundary in the matrix operator now, but your comment also reminded me that I was missing the scale factor dx^2! (which is a function of the resolution nsize). So it seems to all work as expected now.
Thanks again.

AC

On Thu, Jul 18, 2013 at 1:24 PM, Zou (Non-US), Ling wrote:
> [...]
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From bisheshkh at gmail.com Thu Jul 18 12:51:03 2013
From: bisheshkh at gmail.com (Bishesh Khanal)
Date: Thu, 18 Jul 2013 19:51:03 +0200
Subject: Re: [petsc-users] discontinuous viscosity stokes equation 3D staggered grid
In-Reply-To: <87li5555oo.fsf@mcs.anl.gov> References: <87li5555oo.fsf@mcs.anl.gov>
Message-ID:

Thanks Jed.
I implemented a 2D case which worked when running the program with the options: -pc_type fieldsplit -pc_fieldsplit_type schur -pc_fieldsplit_detect_saddle_point But I encountered some problems when using following options (this set of options is based on one of the tutorial slides in the website for multiphysics problem): -ksp_type fgmres -pc_type mg -mg_levels_ksp_type fgmres -mg_levels_ksp_max_it 2 -mg_levels_pc_type fieldsplit -mg_levels_pc_fieldsplit_detect_saddle_point ?mg_levels_pc_fieldsplit_type schur -mg_levels_pc_fieldsplit_factorization_type full -mg_levels_pc_fieldsplit_schur_precondition user -mg_levels_fieldsplit_0_ksp_type preonly -mg_levels_fieldsplit_0_pc_type sor -mg_levels_fieldsplit_0_pc_sor_forward -mg_levels_fieldsplit_0_ksp_type gmres -mg_levels_fieldsplit_0_pc_type none -mg_levels_fieldsplit_ksp_max_it 5 -mg_coarse_pc_type svd The relevant error messages: [1]PETSC ERROR: --------------------- Error Message ------------------------------------ [1]PETSC ERROR: Invalid argument! [1]PETSC ERROR: Unknown logical value: ?mg_levels_pc_fieldsplit_type! [1]PETSC ERROR: ------------------------------------------------------------------------ [1]PETSC ERROR: Petsc Release Version 3.4.1, Jun, 10, 2013 ... [1]PETSC ERROR: PetscOptionsStringToBool() line 173 in /home/bkhanal/Documents/softwares/petsc-3.4.1/src/sys/objects/options.c [1]PETSC ERROR: PetscOptionsGetBool() line 1530 in /home/bkhanal/Documents/softwares/petsc-3.4.1/src/sys/objects/options.c [1]PETSC ERROR: PCSetFromOptions_FieldSplit() line 1060 in /home/bkhanal/Documents/softwares/petsc-3.4.1/src/ksp/pc/impls/fieldsplit/fieldsplit.c [1]PETSC ERROR: PCSetFromOptions() line 174 in /home/bkhanal/Documents/softwares/petsc-3.4.1/src/ksp/pc/interface/pcset.c [1]PETSC ERROR: KSPSetFromOptions() line 357 in /home/bkhanal/Documents/softwares/petsc-3.4.1/src/ksp/ksp/interface/itcl.c [1]PETSC ERROR: PCSetUp_MG() line 677 in /home/bkhanal/Documents/softwares/petsc-3.4.1/src/ksp/pc/impls/mg/mg.c [1]PETSC ERROR: PCSetUp() line 890 in /home/bkhanal/Documents/softwares/petsc-3.4.1/src/ksp/pc/interface/precon.c [1]PETSC ERROR: KSPSetUp() line 278 in /home/bkhanal/Documents/softwares/petsc-3.4.1/src/ksp/ksp/interface/itfunc.c ... And the warnings: WARNING! There are options you set that were not used! WARNING! could be spelling mistake, etc! Option left: name:-mg_coarse_pc_type value: svd Option left: name:-mg_levels_fieldsplit_0_ksp_type value: gmres Option left: name:-mg_levels_fieldsplit_0_pc_sor_forward (no value) Option left: name:-mg_levels_fieldsplit_0_pc_type value: none Option left: name:-mg_levels_fieldsplit_ksp_max_it value: 5 Option left: name:-mg_levels_ksp_max_it value: 2 Option left: name:-mg_levels_ksp_type value: fgmres Option left: name:-mg_levels_pc_fieldsplit_factorization_type value: full Option left: name:-mg_levels_pc_fieldsplit_schur_precondition value: user Is it that -mg_levels_pc_fieldsplit_type should be followed by some "bool" value ? 1 ? true ? I tried using "true", then it did not give any error but the following warnings: WARNING! There are options you set that were not used! WARNING! could be spelling mistake, etc! Option left: name:-mg_coarse_pc_type value: svd Option left: name:-mg_levels_fieldsplit_0_pc_sor_forward (no value) Option left: name:-mg_levels_pc_fieldsplit_factorization_type value: full I ran the program for the small size. 
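[A note on the two failure modes above, since they are easy to hit when copying options from slides: "Unknown logical value: ?mg_levels_pc_fieldsplit_type!" is the usual symptom of a non-ASCII dash -- the token no longer begins with a plain '-', so PETSc parses the whole thing as the (boolean) value of the preceding -mg_levels_pc_fieldsplit_detect_saddle_point option. Retyped with an ordinary ASCII dash the option reads

  -mg_levels_pc_fieldsplit_type schur

and the "Option left" warnings list options that no component ever queried, which is also what a misspelled option name looks like.]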
Now, before I implement the 3D case with this approach (instead of trying the Augmented Lagrangian method, M2 in my previous email), my question is: if I use PETSc with the method you suggested (PCFieldSplit, multigrid for momentum, etc.), roughly how much computing resource and time would it require to solve a problem of the size I want (around a 250^3 grid)? On Wed, Jul 17, 2013 at 9:48 PM, Jed Brown wrote: > Bishesh Khanal writes: > > > Now, I implemented two different approaches, each for both 2D and 3D, in > > MATLAB. It works for the smaller sizes but I have problems solving it for > > the problem size I need (250^3 grid size). > > I use staggered grid with p on cell centers, and components of v on cell > > faces. Similar split up of K to cell center and faces to account for the > > variable viscosity case) > > Okay, you're using a staggered-grid finite difference discretization of > variable-viscosity Stokes. This is a common problem and I recommend > starting with PCFieldSplit with Schur complement reduction (make that > work first, then switch to block preconditioner). You can use PCLSC or > (probably better for you), assemble a preconditioning matrix containing > the inverse viscosity in the pressure-pressure block. This diagonal > matrix is a spectrally equivalent (or nearly so, depending on > discretization) approximation of the Schur complement. The velocity > block can be solved with algebraic multigrid. Read the PCFieldSplit > docs (follow papers as appropriate) and let us know if you get stuck. > -------------- next part -------------- An HTML attachment was scrubbed... URL:
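As a concrete starting point for the recipe Jed describes above, here is one plausible option set for the outer Schur-complement solve (option names spelled as in the petsc-3.4 generation; a sketch to adapt, not a tested recommendation):

    -ksp_type fgmres
    -pc_type fieldsplit -pc_fieldsplit_type schur
    -pc_fieldsplit_schur_factorization_type full
    -pc_fieldsplit_detect_saddle_point
    -fieldsplit_0_ksp_type preonly -fieldsplit_0_pc_type gamg
    -fieldsplit_1_ksp_type gmres  -fieldsplit_1_pc_type jacobi

Here split 0 is the velocity block (algebraic multigrid on momentum) and split 1 is pressure. To use the inverse-viscosity diagonal matrix as the Schur-complement preconditioner, assemble a matrix with diag(1/eta) in the pressure-pressure block, pass it as the preconditioning matrix, and run with -pc_fieldsplit_schur_precondition user (the matching C call in this PETSc generation should be PCFieldSplitSchurPrecondition() with the USER flag; check the PCFieldSplit manual page for the exact spelling in your version).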
From knepley at gmail.com Thu Jul 18 12:59:33 2013 From: knepley at gmail.com (Matthew Knepley) Date: Thu, 18 Jul 2013 12:59:33 -0500 Subject: [petsc-users] discontinuous viscosity stokes equation 3D staggered grid In-Reply-To: References: <87li5555oo.fsf@mcs.anl.gov> Message-ID: On Thu, Jul 18, 2013 at 12:51 PM, Bishesh Khanal wrote: > Thanks Jed. I implemented a 2D case which worked when running the program > with the options: -pc_type fieldsplit -pc_fieldsplit_type schur > -pc_fieldsplit_detect_saddle_point > But I encountered some problems when using following options (this set of > options is based on one of the tutorial slides in the website for > multiphysics problem): > -ksp_type fgmres -pc_type mg -mg_levels_ksp_type fgmres > -mg_levels_ksp_max_it 2 -mg_levels_pc_type fieldsplit > -mg_levels_pc_fieldsplit_detect_saddle_point ?mg_levels_pc_fieldsplit_type > schur -mg_levels_pc_fieldsplit_factorization_type full > -mg_levels_pc_fieldsplit_schur_precondition user > -mg_levels_fieldsplit_0_ksp_type preonly -mg_levels_fieldsplit_0_pc_type > sor -mg_levels_fieldsplit_0_pc_sor_forward -mg_levels_fieldsplit_0_ksp_type > gmres -mg_levels_fieldsplit_0_pc_type none -mg_levels_fieldsplit_ksp_max_it > 5 -mg_coarse_pc_type svd > > The relevant error messages: > [1]PETSC ERROR: --------------------- Error Message > ------------------------------------ > [1]PETSC ERROR: Invalid argument! > [1]PETSC ERROR: Unknown logical value: ?mg_levels_pc_fieldsplit_type! > [1]PETSC ERROR: > ------------------------------------------------------------------------ > [1]PETSC ERROR: Petsc Release Version 3.4.1, Jun, 10, 2013 > ... > [1]PETSC ERROR: PetscOptionsStringToBool() line 173 in > /home/bkhanal/Documents/softwares/petsc-3.4.1/src/sys/objects/options.c > [1]PETSC ERROR: PetscOptionsGetBool() line 1530 in > /home/bkhanal/Documents/softwares/petsc-3.4.1/src/sys/objects/options.c > [1]PETSC ERROR: PCSetFromOptions_FieldSplit() line 1060 in > /home/bkhanal/Documents/softwares/petsc-3.4.1/src/ksp/pc/impls/fieldsplit/fieldsplit.c > These line numbers do not match the source. How did you install? https://bitbucket.org/petsc/petsc/src/160ea6873d9fa631d01b6f8a2d8b12aece7dfb61/src/ksp/pc/impls/fieldsplit/fieldsplit.c?at=maint#cl-1060 Matt > [1]PETSC ERROR: PCSetFromOptions() line 174 in > /home/bkhanal/Documents/softwares/petsc-3.4.1/src/ksp/pc/interface/pcset.c > [1]PETSC ERROR: KSPSetFromOptions() line 357 in > /home/bkhanal/Documents/softwares/petsc-3.4.1/src/ksp/ksp/interface/itcl.c > [1]PETSC ERROR: PCSetUp_MG() line 677 in > /home/bkhanal/Documents/softwares/petsc-3.4.1/src/ksp/pc/impls/mg/mg.c > [1]PETSC ERROR: PCSetUp() line 890 in > /home/bkhanal/Documents/softwares/petsc-3.4.1/src/ksp/pc/interface/precon.c > [1]PETSC ERROR: KSPSetUp() line 278 in > /home/bkhanal/Documents/softwares/petsc-3.4.1/src/ksp/ksp/interface/itfunc.c > ... > And the warnings: > > WARNING! There are options you set that were not used! > WARNING! could be spelling mistake, etc! > Option left: name:-mg_coarse_pc_type value: svd > Option left: name:-mg_levels_fieldsplit_0_ksp_type value: gmres > Option left: name:-mg_levels_fieldsplit_0_pc_sor_forward (no value) > Option left: name:-mg_levels_fieldsplit_0_pc_type value: none > Option left: name:-mg_levels_fieldsplit_ksp_max_it value: 5 > Option left: name:-mg_levels_ksp_max_it value: 2 > Option left: name:-mg_levels_ksp_type value: fgmres > Option left: name:-mg_levels_pc_fieldsplit_factorization_type value: full > Option left: name:-mg_levels_pc_fieldsplit_schur_precondition value: user > > Is it that -mg_levels_pc_fieldsplit_type should be followed by some "bool" > value ? 1 ? true ? > I tried using "true", then it did not give any error but the following > warnings: > WARNING! There are options you set that were not used! > WARNING! could be spelling mistake, etc! > Option left: name:-mg_coarse_pc_type value: svd > Option left: name:-mg_levels_fieldsplit_0_pc_sor_forward (no value) > Option left: name:-mg_levels_pc_fieldsplit_factorization_type value: full > > I ran the program for the small size. Now, before I implement the 3D case > with this approach instead of trying the implementation of Augmented > Lagrangian method (M2 in previous email) my question is: > If I use the Petsc with the method you suggested (PCFieldSplit, multigrid > for momentum etc), tentatively how much of computing resources and the > corresponding time would it require to solve the problem of the size I want > (around 250^3 grid sized domain) ? > > > > On Wed, Jul 17, 2013 at 9:48 PM, Jed Brown wrote: > >> Bishesh Khanal writes: >> >> > Now, I implemented two different approaches, each for both 2D and 3D, in >> > MATLAB. It works for the smaller sizes but I have problems solving it >> for >> > the problem size I need (250^3 grid size). >> > I use staggered grid with p on cell centers, and components of v on cell >> > faces. Similar split up of K to cell center and faces to account for the >> > variable viscosity case) >> >> Okay, you're using a staggered-grid finite difference discretization of >> variable-viscosity Stokes.
This is a common problem and I recommend >> starting with PCFieldSplit with Schur complement reduction (make that >> work first, then switch to block preconditioner). You can use PCLSC or >> (probably better for you), assemble a preconditioning matrix containing >> the inverse viscosity in the pressure-pressure block. This diagonal >> matrix is a spectrally equivalent (or nearly so, depending on >> discretization) approximation of the Schur complement. The velocity >> block can be solved with algebraic multigrid. Read the PCFieldSplit >> docs (follow papers as appropriate) and let us know if you get stuck. >> > > -- What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. -- Norbert Wiener -------------- next part -------------- An HTML attachment was scrubbed... URL: From bisheshkh at gmail.com Fri Jul 19 03:56:51 2013 From: bisheshkh at gmail.com (Bishesh Khanal) Date: Fri, 19 Jul 2013 10:56:51 +0200 Subject: [petsc-users] discontinuous viscosity stokes equation 3D staggered grid In-Reply-To: References: <87li5555oo.fsf@mcs.anl.gov> Message-ID: On Thu, Jul 18, 2013 at 7:59 PM, Matthew Knepley wrote: > On Thu, Jul 18, 2013 at 12:51 PM, Bishesh Khanal wrote: > >> Thanks Jed. I implemented a 2D case which worked when running the program >> with the options: -pc_type fieldsplit -pc_fieldsplit_type schur >> -pc_fieldsplit_detect_saddle_point >> But I encountered some problems when using following options (this set >> of options is based on one of the tutorial slides in the website for >> multiphysics problem): >> -ksp_type fgmres -pc_type mg -mg_levels_ksp_type fgmres >> -mg_levels_ksp_max_it 2 -mg_levels_pc_type fieldsplit >> -mg_levels_pc_fieldsplit_detect_saddle_point ?mg_levels_pc_fieldsplit_type >> schur -mg_levels_pc_fieldsplit_factorization_type full >> -mg_levels_pc_fieldsplit_schur_precondition user >> -mg_levels_fieldsplit_0_ksp_type preonly -mg_levels_fieldsplit_0_pc_type >> sor -mg_levels_fieldsplit_0_pc_sor_forward -mg_levels_fieldsplit_0_ksp_type >> gmres -mg_levels_fieldsplit_0_pc_type none -mg_levels_fieldsplit_ksp_max_it >> 5 -mg_coarse_pc_type svd >> >> The relevant error messages: >> [1]PETSC ERROR: --------------------- Error Message >> ------------------------------------ >> [1]PETSC ERROR: Invalid argument! >> [1]PETSC ERROR: Unknown logical value: ?mg_levels_pc_fieldsplit_type! >> [1]PETSC ERROR: >> ------------------------------------------------------------------------ >> [1]PETSC ERROR: Petsc Release Version 3.4.1, Jun, 10, 2013 >> ... >> [1]PETSC ERROR: PetscOptionsStringToBool() line 173 in >> /home/bkhanal/Documents/softwares/petsc-3.4.1/src/sys/objects/options.c >> [1]PETSC ERROR: PetscOptionsGetBool() line 1530 in >> /home/bkhanal/Documents/softwares/petsc-3.4.1/src/sys/objects/options.c >> [1]PETSC ERROR: PCSetFromOptions_FieldSplit() line 1060 in >> /home/bkhanal/Documents/softwares/petsc-3.4.1/src/ksp/pc/impls/fieldsplit/fieldsplit.c >> > > These line numbers do not match the source. How did you install? > > > https://bitbucket.org/petsc/petsc/src/160ea6873d9fa631d01b6f8a2d8b12aece7dfb61/src/ksp/pc/impls/fieldsplit/fieldsplit.c?at=maint#cl-1060 > > I had installed it in pretty much a standard way: downloading the tarball from the petsc webpage, then using ./configure with a bunch of options. I checked the fieldsplit.c file in my local installation, it seemed the difference to be in the first line. 
My local copy the code starts at 2nd line with first being just a newline unlike the one in bitbucket. In any case, I tried to install a newer version: petsc-3.4.2 but I encountered some problems! I will send another email regarding installation issues at petsc-maint at mcs.anl.gov while keeping this thread intact for my stokes problem! > Matt > > >> [1]PETSC ERROR: PCSetFromOptions() line 174 in >> /home/bkhanal/Documents/softwares/petsc-3.4.1/src/ksp/pc/interface/pcset.c >> [1]PETSC ERROR: KSPSetFromOptions() line 357 in >> /home/bkhanal/Documents/softwares/petsc-3.4.1/src/ksp/ksp/interface/itcl.c >> [1]PETSC ERROR: PCSetUp_MG() line 677 in >> /home/bkhanal/Documents/softwares/petsc-3.4.1/src/ksp/pc/impls/mg/mg.c >> [1]PETSC ERROR: PCSetUp() line 890 in >> /home/bkhanal/Documents/softwares/petsc-3.4.1/src/ksp/pc/interface/precon.c >> [1]PETSC ERROR: KSPSetUp() line 278 in >> /home/bkhanal/Documents/softwares/petsc-3.4.1/src/ksp/ksp/interface/itfunc.c >> ... >> And the warnings: >> >> WARNING! There are options you set that were not used! >> WARNING! could be spelling mistake, etc! >> Option left: name:-mg_coarse_pc_type value: svd >> Option left: name:-mg_levels_fieldsplit_0_ksp_type value: gmres >> Option left: name:-mg_levels_fieldsplit_0_pc_sor_forward (no value) >> Option left: name:-mg_levels_fieldsplit_0_pc_type value: none >> Option left: name:-mg_levels_fieldsplit_ksp_max_it value: 5 >> Option left: name:-mg_levels_ksp_max_it value: 2 >> Option left: name:-mg_levels_ksp_type value: fgmres >> Option left: name:-mg_levels_pc_fieldsplit_factorization_type value: full >> Option left: name:-mg_levels_pc_fieldsplit_schur_precondition value: user >> >> Is it that -mg_levels_pc_fieldsplit_type should be followed by some >> "bool" value ? 1 ? true ? >> I tried using "true", then it did not give any error but the following >> warnings: >> WARNING! There are options you set that were not used! >> WARNING! could be spelling mistake, etc! >> Option left: name:-mg_coarse_pc_type value: svd >> Option left: name:-mg_levels_fieldsplit_0_pc_sor_forward (no value) >> Option left: name:-mg_levels_pc_fieldsplit_factorization_type value: full >> >> I ran the program for the small size. Now, before I implement the 3D case >> with this approach instead of trying the implementation of Augmented >> Lagrangian method (M2 in previous email) my question is: >> If I use the Petsc with the method you suggested (PCFieldSplit, multigrid >> for momentum etc), tentatively how much of computing resources and the >> corresponding time would it require to solve the problem of the size I want >> (around 250^3 grid sized domain) ? >> >> >> >> On Wed, Jul 17, 2013 at 9:48 PM, Jed Brown wrote: >> >>> Bishesh Khanal writes: >>> >>> > Now, I implemented two different approaches, each for both 2D and 3D, >>> in >>> > MATLAB. It works for the smaller sizes but I have problems solving it >>> for >>> > the problem size I need (250^3 grid size). >>> > I use staggered grid with p on cell centers, and components of v on >>> cell >>> > faces. Similar split up of K to cell center and faces to account for >>> the >>> > variable viscosity case) >>> >>> Okay, you're using a staggered-grid finite difference discretization of >>> variable-viscosity Stokes. This is a common problem and I recommend >>> starting with PCFieldSplit with Schur complement reduction (make that >>> work first, then switch to block preconditioner). 
You can use PCLSC or >>> (probably better for you), assemble a preconditioning matrix containing >>> the inverse viscosity in the pressure-pressure block. This diagonal >>> matrix is a spectrally equivalent (or nearly so, depending on >>> discretization) approximation of the Schur complement. The velocity >>> block can be solved with algebraic multigrid. Read the PCFieldSplit >>> docs (follow papers as appropriate) and let us know if you get stuck. >>> >> >> > > > -- > What most experimenters take for granted before they begin their > experiments is infinitely more interesting than any results to which their > experiments lead. > -- Norbert Wiener > -------------- next part -------------- An HTML attachment was scrubbed... URL: From knepley at gmail.com Fri Jul 19 06:41:24 2013 From: knepley at gmail.com (Matthew Knepley) Date: Fri, 19 Jul 2013 06:41:24 -0500 Subject: [petsc-users] discontinuous viscosity stokes equation 3D staggered grid In-Reply-To: References: <87li5555oo.fsf@mcs.anl.gov> Message-ID: On Fri, Jul 19, 2013 at 3:56 AM, Bishesh Khanal wrote: > > > > On Thu, Jul 18, 2013 at 7:59 PM, Matthew Knepley wrote: > >> On Thu, Jul 18, 2013 at 12:51 PM, Bishesh Khanal wrote: >> >>> Thanks Jed. I implemented a 2D case which worked when running the >>> program with the options: -pc_type fieldsplit -pc_fieldsplit_type schur >>> -pc_fieldsplit_detect_saddle_point >>> But I encountered some problems when using following options (this set >>> of options is based on one of the tutorial slides in the website for >>> multiphysics problem): >>> -ksp_type fgmres -pc_type mg -mg_levels_ksp_type fgmres >>> -mg_levels_ksp_max_it 2 -mg_levels_pc_type fieldsplit >>> -mg_levels_pc_fieldsplit_detect_saddle_point ?mg_levels_pc_fieldsplit_type >>> schur -mg_levels_pc_fieldsplit_factorization_type full >>> -mg_levels_pc_fieldsplit_schur_precondition user >>> -mg_levels_fieldsplit_0_ksp_type preonly -mg_levels_fieldsplit_0_pc_type >>> sor -mg_levels_fieldsplit_0_pc_sor_forward -mg_levels_fieldsplit_0_ksp_type >>> gmres -mg_levels_fieldsplit_0_pc_type none -mg_levels_fieldsplit_ksp_max_it >>> 5 -mg_coarse_pc_type svd >>> >>> The relevant error messages: >>> [1]PETSC ERROR: --------------------- Error Message >>> ------------------------------------ >>> [1]PETSC ERROR: Invalid argument! >>> [1]PETSC ERROR: Unknown logical value: ?mg_levels_pc_fieldsplit_type! >>> [1]PETSC ERROR: >>> ------------------------------------------------------------------------ >>> [1]PETSC ERROR: Petsc Release Version 3.4.1, Jun, 10, 2013 >>> ... >>> [1]PETSC ERROR: PetscOptionsStringToBool() line 173 in >>> /home/bkhanal/Documents/softwares/petsc-3.4.1/src/sys/objects/options.c >>> [1]PETSC ERROR: PetscOptionsGetBool() line 1530 in >>> /home/bkhanal/Documents/softwares/petsc-3.4.1/src/sys/objects/options.c >>> [1]PETSC ERROR: PCSetFromOptions_FieldSplit() line 1060 in >>> /home/bkhanal/Documents/softwares/petsc-3.4.1/src/ksp/pc/impls/fieldsplit/fieldsplit.c >>> >> >> These line numbers do not match the source. How did you install? >> >> >> https://bitbucket.org/petsc/petsc/src/160ea6873d9fa631d01b6f8a2d8b12aece7dfb61/src/ksp/pc/impls/fieldsplit/fieldsplit.c?at=maint#cl-1060 >> >> I had installed it in pretty much a standard way: downloading the tarball > from the petsc webpage, then using ./configure with a bunch of options. > I checked the fieldsplit.c file in my local installation, it seemed the > difference to be in the first line. 
My local copy the code starts at 2nd > line with first being just a newline unlike the one in bitbucket. > Okay, you have some kind of invisible character in there (maybe \r?) or the dash is another ASCII character instead of -, so it thinks that ?mg_levels_pc_fieldsplit_type is the argument. Retype the input by hand. Matt > In any case, I tried to install a newer version: petsc-3.4.2 but I > encountered some problems! I will send another email regarding installation > issues at petsc-maint at mcs.anl.gov while keeping this thread intact for > my stokes problem! > > > > >> Matt >> >> >>> [1]PETSC ERROR: PCSetFromOptions() line 174 in >>> /home/bkhanal/Documents/softwares/petsc-3.4.1/src/ksp/pc/interface/pcset.c >>> [1]PETSC ERROR: KSPSetFromOptions() line 357 in >>> /home/bkhanal/Documents/softwares/petsc-3.4.1/src/ksp/ksp/interface/itcl.c >>> [1]PETSC ERROR: PCSetUp_MG() line 677 in >>> /home/bkhanal/Documents/softwares/petsc-3.4.1/src/ksp/pc/impls/mg/mg.c >>> [1]PETSC ERROR: PCSetUp() line 890 in >>> /home/bkhanal/Documents/softwares/petsc-3.4.1/src/ksp/pc/interface/precon.c >>> [1]PETSC ERROR: KSPSetUp() line 278 in >>> /home/bkhanal/Documents/softwares/petsc-3.4.1/src/ksp/ksp/interface/itfunc.c >>> ... >>> And the warnings: >>> >>> WARNING! There are options you set that were not used! >>> WARNING! could be spelling mistake, etc! >>> Option left: name:-mg_coarse_pc_type value: svd >>> Option left: name:-mg_levels_fieldsplit_0_ksp_type value: gmres >>> Option left: name:-mg_levels_fieldsplit_0_pc_sor_forward (no value) >>> Option left: name:-mg_levels_fieldsplit_0_pc_type value: none >>> Option left: name:-mg_levels_fieldsplit_ksp_max_it value: 5 >>> Option left: name:-mg_levels_ksp_max_it value: 2 >>> Option left: name:-mg_levels_ksp_type value: fgmres >>> Option left: name:-mg_levels_pc_fieldsplit_factorization_type value: full >>> Option left: name:-mg_levels_pc_fieldsplit_schur_precondition value: user >>> >>> Is it that -mg_levels_pc_fieldsplit_type should be followed by some >>> "bool" value ? 1 ? true ? >>> I tried using "true", then it did not give any error but the following >>> warnings: >>> WARNING! There are options you set that were not used! >>> WARNING! could be spelling mistake, etc! >>> Option left: name:-mg_coarse_pc_type value: svd >>> Option left: name:-mg_levels_fieldsplit_0_pc_sor_forward (no value) >>> Option left: name:-mg_levels_pc_fieldsplit_factorization_type value: full >>> >>> I ran the program for the small size. Now, before I implement the 3D >>> case with this approach instead of trying the implementation of Augmented >>> Lagrangian method (M2 in previous email) my question is: >>> If I use the Petsc with the method you suggested (PCFieldSplit, >>> multigrid for momentum etc), tentatively how much of computing resources >>> and the corresponding time would it require to solve the problem of the >>> size I want (around 250^3 grid sized domain) ? >>> >>> >>> >>> On Wed, Jul 17, 2013 at 9:48 PM, Jed Brown wrote: >>> >>>> Bishesh Khanal writes: >>>> >>>> > Now, I implemented two different approaches, each for both 2D and 3D, >>>> in >>>> > MATLAB. It works for the smaller sizes but I have problems solving it >>>> for >>>> > the problem size I need (250^3 grid size). >>>> > I use staggered grid with p on cell centers, and components of v on >>>> cell >>>> > faces. 
Similar split up of K to cell center and faces to account for >>>> the >>>> > variable viscosity case) >>>> >>>> Okay, you're using a staggered-grid finite difference discretization of >>>> variable-viscosity Stokes. This is a common problem and I recommend >>>> starting with PCFieldSplit with Schur complement reduction (make that >>>> work first, then switch to block preconditioner). You can use PCLSC or >>>> (probably better for you), assemble a preconditioning matrix containing >>>> the inverse viscosity in the pressure-pressure block. This diagonal >>>> matrix is a spectrally equivalent (or nearly so, depending on >>>> discretization) approximation of the Schur complement. The velocity >>>> block can be solved with algebraic multigrid. Read the PCFieldSplit >>>> docs (follow papers as appropriate) and let us know if you get stuck. >>>> >>> >>> >> >> >> -- >> What most experimenters take for granted before they begin their >> experiments is infinitely more interesting than any results to which their >> experiments lead. >> -- Norbert Wiener >> > > -- What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. -- Norbert Wiener -------------- next part -------------- An HTML attachment was scrubbed... URL: From balay at mcs.anl.gov Fri Jul 19 11:15:06 2013 From: balay at mcs.anl.gov (Satish Balay) Date: Fri, 19 Jul 2013 11:15:06 -0500 (CDT) Subject: [petsc-users] error with PAMI (fwd) Message-ID: Forwarding to petsc-users - perhaps other BG users have more experience with PAMI. Satish ---------- Forwarded message ---------- Date: Fri, 19 Jul 2013 09:09:24 +0000 From: g.giangaspero at utwente.nl To: petsc-maint at mcs.anl.gov Subject: [petsc-maint] error with PAMI Dear Authors, I am trying to run PETSc 3.3-p5 on a IBM Blue Gene/Q machine which features PAMI as MPI implementation and the IBM xl compiler. I would like to solve a non-linear system but as soon as the function SNESSolve is called the code crashes and I get the following error message (MooseMBFlow is the executable): MooseMBFlow: /bgsys/source/srcV1R2M0.14091/comm/sys/buildtools/pami/common/bgq/Memregion.h:58: pami_result_t PAMI::Memregion::createMemregion_impl(size_t*, size_t, void*, uint64_t): Assertion `rc == 0' failed These lines are the only ones being printed therefore I cannot tell you much more about the error. I know this is not directly related to PETSc but I wonder if anybody had already experienced such a problem, I did not get any real help from the user support of that machine. Thank you very much for your help. Best Regards, Giorgio Giangaspero From luqiyue at gmail.com Fri Jul 19 15:40:07 2013 From: luqiyue at gmail.com (Lu Qiyue) Date: Fri, 19 Jul 2013 15:40:07 -0500 Subject: [petsc-users] Fwd: MatCreateSeqAIJ( ) Quesion In-Reply-To: References: Message-ID: ---------- Forwarded message ---------- From: Lu Qiyue Date: Fri, Jul 19, 2013 at 3:39 PM Subject: Re: [petsc-users] MatCreateSeqAIJ( ) Quesion To: Barry Smith Thanks Barry. I am trying to premalloc the correct value for each row. Assume the matrix is in COO format and N is the dimension, NNZ is total non-zeros. 
My workflow is as below: 1) generate a file holding the correct number of non-zeros for each row and read it in as cnt, following http://www.mcs.anl.gov/petsc/petsc-current/src/mat/examples/tutorials/ex12.c.html It looks like cnt should have dimension N+1, with the last value being the total number of non-zeros NNZ. One question here: at line 45, why are the dimensions of the matrix set to (n+1)? Should it be n? In http://www.mcs.anl.gov/petsc/petsc-dev/src/mat/examples/tests/ex72.c.html they are just set to m, n. For ex72.c, the code copies the entries that are not on the diagonal so that the symmetric COO matrix becomes FULL. But when we do the MatCreateSeqAIJ, cnt should hold the number of non-zeros per row of the FULL matrix, right? In short, what are the content (and dimension) of the cnt array, and how should MatCreateSeqAIJ() be called, assuming an N-dimensional matrix? Thanks Qiyue Lu On Tue, Jul 9, 2013 at 8:56 PM, Barry Smith wrote: > > > On Jul 9, 2013, at 8:49 PM, Lu Qiyue wrote: > > > Dear All: > > I am using a modified version of ex72.c in > > /src/mat/examples/tests > > directory to create a matrix with COO format. > > > > In the line: > > ierr = MatCreateSeqAIJ(PETSC_COMM_WORLD,m,n, > (m*n/nnz),PETSC_NULL,&A);CHKERRQ(ierr); > > > > The 'nz' is set to (m*n/nnz). > > Hmm, perhaps you are looking at an older version of the code. The > current version > http://www.mcs.anl.gov/petsc/petsc-dev/src/mat/examples/tests/ex72.c.html has > > MatCreateSeqAIJ(PETSC_COMM_WORLD,m,n,nnz*2/m,0,&A); > > > > > > > And from documents, nz is: > > nz - number of nonzeros per row (same for all rows) > > > > > http://www.mcs.anl.gov/petsc/petsc-current/docs/manualpages/Mat/MatCreateSeqAIJ.html > > > > > > I am wondering, why this value is set to be m*n/nnz? Looks obvious this > is not the number of nonzeros per row. What's the rule for choosing nz? > > > > If only one value here, should it be the largest number of non-zeroes > among all rows? > > Yes it should be the largest, this will lead to the fastest mat assembly > (at an expense of using extra memory). We recommend preallocating the > correct value for each row except in the most trivial codes. > > Barry > > > > > Thanks > > > > Qiyue Lu > > -------------- next part -------------- An HTML attachment was scrubbed... URL:
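To make Barry's recommendation concrete: for MatCreateSeqAIJ() the nnz argument (cnt here) has length N, one entry per row, each entry being that row's nonzero count in the assembled FULL matrix; when nnz is supplied, the scalar nz argument is ignored, so 0 is fine there. A short sketch, assuming COO arrays I[] and J[] of length NNZ that store one triangle of a symmetric matrix (these array names are illustrative, not from the poster's code):

    PetscInt *cnt,k;
    ierr = PetscMalloc(N*sizeof(PetscInt),&cnt);CHKERRQ(ierr);
    ierr = PetscMemzero(cnt,N*sizeof(PetscInt));CHKERRQ(ierr);
    for (k = 0; k < NNZ; k++) {
      cnt[I[k]]++;                    /* entry (I[k],J[k]) lands in row I[k]   */
      if (I[k] != J[k]) cnt[J[k]]++;  /* its mirror lands in row J[k]          */
    }
    ierr = MatCreateSeqAIJ(PETSC_COMM_SELF,N,N,0,cnt,&A);CHKERRQ(ierr);
    ierr = PetscFree(cnt);CHKERRQ(ierr);
    /* then MatSetValues() each triplet and its mirror, and assemble */

Whatever extra N+1st value an example's input file carries is part of that file format; MatCreateSeqAIJ() itself only wants the N per-row counts.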
From potaman at outlook.com Fri Jul 19 19:56:39 2013 From: potaman at outlook.com (subramanya sadasiva) Date: Fri, 19 Jul 2013 20:56:39 -0400 Subject: [petsc-users] Trying to set up a field-split preconditioner Message-ID: Hi, I am trying to set up a fieldsplit preconditioner for my Cahn Hilliard solver and I get the following error, [0]PETSC ERROR: --------------------- Error Message ------------------------------------ [0]PETSC ERROR: Petsc has generated inconsistent data! [0]PETSC ERROR: Unhandled case, must have at least two fields, not 0! [0]PETSC ERROR: ------------------------------------------------------------------------ These are the options that I am using, -ch_solve is just a prefix. -ch_solve_pc_type fieldsplit -ch_solve_pc_fieldsplit_type schur -ch_solve_fieldsplit_block_size 2 -ch_solve_fieldsplit_0_field 1 -ch_solve_fieldsplit_1_field 0 -ch_solve_fieldsplit_0_ksp_type cg -ch_solve_fieldsplit_0_pc_type hypre -ch_solve_fieldsplit_0_pc_type_hypre boomeramg -ch_solve_fieldsplit_1_ksp_type cg -ch_solve_fieldsplit_1_pc_type hypre -ch_solve_fieldsplit_1_pc_type_hypre boomeramg Any ideas? Thanks, Subramanya -------------- next part -------------- An HTML attachment was scrubbed... URL: From potaman at outlook.com Fri Jul 19 20:12:23 2013 From: potaman at outlook.com (subramanya sadasiva) Date: Fri, 19 Jul 2013 21:12:23 -0400 Subject: [petsc-users] [Libmesh-users] Trying to set up a field-split preconditioner In-Reply-To: References: Message-ID: I forgot to mention it, but I am using this with the petsc virs solver. > From: potaman at outlook.com > To: petsc-users at mcs.anl.gov; libmesh-users at lists.sourceforge.net > Date: Fri, 19 Jul 2013 20:56:39 -0400 > Subject: [Libmesh-users] Trying to set up a field-split preconditioner > > > > > Hi, > I am trying to set up a fieldsplit preconditioner for my Cahn Hilliard solver and I get the following error, > > > [0]PETSC ERROR: --------------------- Error Message ------------------------------------ [0]PETSC ERROR: Petsc has generated inconsistent data! [0]PETSC ERROR: Unhandled case, must have at least two fields, not 0! [0]PETSC ERROR: ------------------------------------------------------------------------ > These are the options that I am using, -ch_solve is just a prefix. > > > -ch_solve_pc_type fieldsplit -ch_solve_pc_fieldsplit_type schur -ch_solve_fieldsplit_block_size 2 -ch_solve_fieldsplit_0_field 1 -ch_solve_fieldsplit_1_field 0 -ch_solve_fieldsplit_0_ksp_type cg -ch_solve_fieldsplit_0_pc_type hypre -ch_solve_fieldsplit_0_pc_type_hypre boomeramg -ch_solve_fieldsplit_1_ksp_type cg -ch_solve_fieldsplit_1_pc_type hypre -ch_solve_fieldsplit_1_pc_type_hypre boomeramg > Any ideas? > Thanks, Subramanya > > > ------------------------------------------------------------------------------ > See everything from the browser to the database with AppDynamics > Get end-to-end visibility with application monitoring from AppDynamics > Isolate bottlenecks and diagnose root cause in seconds. > Start your free trial of AppDynamics Pro today! > http://pubads.g.doubleclick.net/gampad/clk?id=48808831&iu=/4140/ostg.clktrk > _______________________________________________ > Libmesh-users mailing list > Libmesh-users at lists.sourceforge.net > https://lists.sourceforge.net/lists/listinfo/libmesh-users -------------- next part -------------- An HTML attachment was scrubbed... URL: From knepley at gmail.com Fri Jul 19 20:54:22 2013 From: knepley at gmail.com (Matthew Knepley) Date: Fri, 19 Jul 2013 20:54:22 -0500 Subject: [petsc-users] Trying to set up a field-split preconditioner In-Reply-To: References: Message-ID: On Fri, Jul 19, 2013 at 7:56 PM, subramanya sadasiva wrote: > Hi, > I am trying to set up a fieldsplit preconditioner for my Cahn Hilliard > solver and I get the following error, > You have to tell the PCFIELDSPLIT about the dofs in each field. So 1) You are probably not using a DA, since it would tell it automatically 2) If you have a saddle point, you can use -pc_fieldsplit_detect_saddle_point 3) If none of those apply, you can set a PetscSection describing your layout to the DM for the solver. Since this is new, I suspect you will need help, so mail back. Thanks, Matt > > [0]PETSC ERROR: --------------------- Error Message > ------------------------------------ > [0]PETSC ERROR: Petsc has generated inconsistent data! > [0]PETSC ERROR: Unhandled case, must have at least two fields, not 0! > [0]PETSC ERROR: > ------------------------------------------------------------------------ > > These are the options that I am using, > -ch_solve is just a prefix.
> > > > -ch_solve_pc_type fieldsplit > -ch_solve_pc_fieldsplit_type schur > -ch_solve_fieldsplit_block_size 2 > -ch_solve_fieldsplit_0_field 1 > -ch_solve_fieldsplit_1_field 0 > -ch_solve_fieldsplit_0_ksp_type cg > -ch_solve_fieldsplit_0_pc_type hypre > -ch_solve_fieldsplit_0_pc_type_hypre boomeramg > -ch_solve_fieldsplit_1_ksp_type cg > -ch_solve_fieldsplit_1_pc_type hypre > -ch_solve_fieldsplit_1_pc_type_hypre boomeramg > > Any ideas? > > Thanks, > Subramanya > > -- What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. -- Norbert Wiener -------------- next part -------------- An HTML attachment was scrubbed... URL: From potaman at outlook.com Fri Jul 19 21:17:18 2013 From: potaman at outlook.com (subramanya sadasiva) Date: Fri, 19 Jul 2013 22:17:18 -0400 Subject: [petsc-users] Trying to set up a field-split preconditioner In-Reply-To: References: , Message-ID: Hi Matt, I am using Libmesh so the DM stuff is actually in the background, and unfortunately the matrix doesn't have a saddle point. I thought that -ch_solve_fieldsplit_block_size 2 -ch_solve_fieldsplit_0_fields 0 -ch_solve_fieldsplit_1_fields 1 would inform the solver of the structure. If this doesn't work owing to the fact that the problem is only being solved on a section of the mesh (because of the reduced space method), I guess I will have to use the PetscSection. Does that sound right? Thanks, Subramanya Subject: Re: [petsc-users] Trying to set up a field-split preconditioner From: knepley at gmail.com To: potaman at outlook.com CC: petsc-users at mcs.anl.gov; libmesh-users at lists.sourceforge.net On Fri, Jul 19, 2013 at 7:56 PM, subramanya sadasiva wrote: Hi, I am trying to set up a fieldsplit preconditioner for my Cahn Hilliard solver and I get the following error, You have to tell the PCFIELDSPLIT about the dofs in each field. So 1) You are probably not using a DA, since it would tell it automatically 2) If you have a saddle point, you can use -pc_fieldsplit_detect_saddle_point 3) If none of those apply, you can set a PetscSection describing your layout to the DM for the solver. Since this is new, I suspect you will need help, so mail back. Thanks, Matt [0]PETSC ERROR: --------------------- Error Message ------------------------------------ [0]PETSC ERROR: Petsc has generated inconsistent data! [0]PETSC ERROR: Unhandled case, must have at least two fields, not 0! [0]PETSC ERROR: ------------------------------------------------------------------------ These are the options that I am using, -ch_solve is just a prefix. -ch_solve_pc_type fieldsplit -ch_solve_pc_fieldsplit_type schur -ch_solve_fieldsplit_block_size 2 -ch_solve_fieldsplit_0_field 1 -ch_solve_fieldsplit_1_field 0 -ch_solve_fieldsplit_0_ksp_type cg -ch_solve_fieldsplit_0_pc_type hypre -ch_solve_fieldsplit_0_pc_type_hypre boomeramg -ch_solve_fieldsplit_1_ksp_type cg -ch_solve_fieldsplit_1_pc_type hypre -ch_solve_fieldsplit_1_pc_type_hypre boomeramg Any ideas? Thanks, Subramanya -- What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. -- Norbert Wiener -------------- next part -------------- An HTML attachment was scrubbed...
URL: From knepley at gmail.com Fri Jul 19 21:33:11 2013 From: knepley at gmail.com (Matthew Knepley) Date: Fri, 19 Jul 2013 21:33:11 -0500 Subject: [petsc-users] Trying to set up a field-split preconditioner In-Reply-To: References: Message-ID: On Fri, Jul 19, 2013 at 9:17 PM, subramanya sadasiva wrote: > Hi Matt, > I am using Libmesh so the DM stuff is actually in the background, and > unfortunately the matrix doesn't have a saddle point, > I thought that > > -ch_solve_fieldsplit_block_size 2 > -ch_solve_fieldsplit_0_fields 0 > -ch_solve_fieldsplit_1_fields 1 > The block_size argument presumes you are using a DA. Are you? The other two options just say select the first DM field as field 0 in this PC, and the same with the second field. The DM must inform the PC about the initial field decomposition. > would inform the solver of the structure. If this doesn't work owing to > the fact that the problem is only being solved on a section of the mesh > (because of the reduced space method), I guess I will have to use the > PetscSection. Does that sound right? > First, I think the right people to do this are the Libmesh people (we will of course help them). Second, you have not said whether you are using a structured or unstructured mesh. What DM class does the solver actually see? Thanks, Matt > Thanks, > Subramanya > > > Subject: Re: [petsc-users] Trying to set up a field-split preconditioner > From: knepley at gmail.com > To: potaman at outlook.com > CC: petsc-users at mcs.anl.gov; libmesh-users at lists.sourceforge.net > > On Fri, Jul 19, 2013 at 7:56 PM, subramanya sadasiva wrote: > > Hi, > I am trying to set up a fieldsplit preconditioner for my Cahn Hilliard > solver and I get the following error, > > > You have to tell the PCFIELDSPLIT about the dofs in each field. So > > 1) You are probably not using a DA, since it would tell it automatically > > 2) If you have a saddle point, you can use > -pc_fieldsplit_detect_saddle_point > > 3) If none of those apply, you can set a PetscSection describing your > layout to the DM for the solver. > Since this is new, I suspect you will need help, so mail back. > > Thanks, > > Matt > > > > [0]PETSC ERROR: --------------------- Error Message > ------------------------------------ > [0]PETSC ERROR: Petsc has generated inconsistent data! > [0]PETSC ERROR: Unhandled case, must have at least two fields, not 0! > [0]PETSC ERROR: > ------------------------------------------------------------------------ > > These are the options that I am using, > -ch_solve is just a prefix. > > > > -ch_solve_pc_type fieldsplit > -ch_solve_pc_fieldsplit_type schur > -ch_solve_fieldsplit_block_size 2 > -ch_solve_fieldsplit_0_field 1 > -ch_solve_fieldsplit_1_field 0 > -ch_solve_fieldsplit_0_ksp_type cg > -ch_solve_fieldsplit_0_pc_type hypre > -ch_solve_fieldsplit_0_pc_type_hypre boomeramg > -ch_solve_fieldsplit_1_ksp_type cg > -ch_solve_fieldsplit_1_pc_type hypre > -ch_solve_fieldsplit_1_pc_type_hypre boomeramg > > Any ideas? > > Thanks, > Subramanya > > > > > -- > What most experimenters take for granted before they begin their > experiments is infinitely more interesting than any results to which their > experiments lead. > -- Norbert Wiener > -- What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. -- Norbert Wiener -------------- next part -------------- An HTML attachment was scrubbed... 
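URL:

When no DM supplies the decomposition, PCFieldSplit can also be told about the fields directly with index sets, bypassing the DM route discussed below. A hedged sketch in C, assuming two interleaved dofs per node (block size 2); ksp is the already-created solver carrying the -ch_solve_ prefix, and nLocal/firstDof are hypothetical placeholders for the local dof count and first locally owned dof:

    PC       pc;
    IS       isU,isV;
    PetscInt nLocal,firstDof;   /* set these from your dof layout */

    ierr = KSPGetPC(ksp,&pc);CHKERRQ(ierr);
    ierr = PCSetType(pc,PCFIELDSPLIT);CHKERRQ(ierr);
    /* dofs 0,2,4,... belong to field "0"; dofs 1,3,5,... to field "1" */
    ierr = ISCreateStride(PETSC_COMM_WORLD,nLocal/2,firstDof,2,&isU);CHKERRQ(ierr);
    ierr = ISCreateStride(PETSC_COMM_WORLD,nLocal/2,firstDof+1,2,&isV);CHKERRQ(ierr);
    ierr = PCFieldSplitSetIS(pc,"0",isU);CHKERRQ(ierr);
    ierr = PCFieldSplitSetIS(pc,"1",isV);CHKERRQ(ierr);

With the splits named "0" and "1", the -ch_solve_fieldsplit_0_* and -ch_solve_fieldsplit_1_* options then attach to them. Whether the splits carry over to the reduced-space (VI) sub-problems is a separate question, taken up later in this thread.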
From potaman at outlook.com Fri Jul 19 21:59:08 2013 From: potaman at outlook.com (subramanya sadasiva) Date: Fri, 19 Jul 2013 22:59:08 -0400 Subject: [petsc-users] Trying to set up a field-split preconditioner In-Reply-To: References: , , , Message-ID: Hi Matt, The DM being created is here (this is from the Libmesh code, petscdmlibmesh.C): { PetscErrorCode ierr; PetscFunctionBegin; ierr = DMCreate(comm, dm); CHKERRQ(ierr); ierr = DMSetType(*dm, DMLIBMESH); CHKERRQ(ierr); ierr = DMLibMeshSetSystem(*dm, sys); CHKERRQ(ierr); PetscFunctionReturn(0); } This file has methods to access the variables assigned to the DM (this seems to be stored in a struct.) So, I guess one should be able to add a bit of code to create sections as you mentioned somewhere around here. Thanks, Subramanya Date: Fri, 19 Jul 2013 21:33:11 -0500 Subject: Re: [petsc-users] Trying to set up a field-split preconditioner From: knepley at gmail.com To: potaman at outlook.com CC: petsc-users at mcs.anl.gov On Fri, Jul 19, 2013 at 9:17 PM, subramanya sadasiva wrote: Hi Matt, I am using Libmesh so the DM stuff is actually in the background, and unfortunately the matrix doesn't have a saddle point, I thought that -ch_solve_fieldsplit_block_size 2 -ch_solve_fieldsplit_0_fields 0 -ch_solve_fieldsplit_1_fields 1 The block_size argument presumes you are using a DA. Are you? The other two options just say select the first DM field as field 0 in this PC, and the same with the second field. The DM must inform the PC about the initial field decomposition. would inform the solver of the structure. If this doesn't work owing to the fact that the problem is only being solved on a section of the mesh (because of the reduced space method), I guess I will have to use the PetscSection. Does that sound right? First, I think the right people to do this are the Libmesh people (we will of course help them). Second, you have not said whether you are using a structured or unstructured mesh. What DM class does the solver actually see? Thanks, Matt Thanks, Subramanya Subject: Re: [petsc-users] Trying to set up a field-split preconditioner From: knepley at gmail.com To: potaman at outlook.com CC: petsc-users at mcs.anl.gov; libmesh-users at lists.sourceforge.net On Fri, Jul 19, 2013 at 7:56 PM, subramanya sadasiva wrote: Hi, I am trying to set up a fieldsplit preconditioner for my Cahn Hilliard solver and I get the following error, You have to tell the PCFIELDSPLIT about the dofs in each field. So 1) You are probably not using a DA, since it would tell it automatically 2) If you have a saddle point, you can use -pc_fieldsplit_detect_saddle_point 3) If none of those apply, you can set a PetscSection describing your layout to the DM for the solver. Since this is new, I suspect you will need help, so mail back. Thanks, Matt [0]PETSC ERROR: --------------------- Error Message ------------------------------------ [0]PETSC ERROR: Petsc has generated inconsistent data! [0]PETSC ERROR: Unhandled case, must have at least two fields, not 0! [0]PETSC ERROR: ------------------------------------------------------------------------ These are the options that I am using, -ch_solve is just a prefix.
-ch_solve_pc_type fieldsplit -ch_solve_pc_fieldsplit_type schur -ch_solve_fieldsplit_block_size 2 -ch_solve_fieldsplit_0_field 1 -ch_solve_fieldsplit_1_field 0 -ch_solve_fieldsplit_0_ksp_type cg -ch_solve_fieldsplit_0_pc_type hypre -ch_solve_fieldsplit_0_pc_type_hypre boomeramg -ch_solve_fieldsplit_1_ksp_type cg -ch_solve_fieldsplit_1_pc_type hypre -ch_solve_fieldsplit_1_pc_type_hypre boomeramg Any ideas? Thanks, Subramanya -- What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. -- Norbert Wiener -- What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. -- Norbert Wiener -------------- next part -------------- An HTML attachment was scrubbed... URL: From knepley at gmail.com Fri Jul 19 23:09:10 2013 From: knepley at gmail.com (Matthew Knepley) Date: Fri, 19 Jul 2013 23:09:10 -0500 Subject: [petsc-users] Trying to set up a field-split preconditioner In-Reply-To: References: Message-ID: On Fri, Jul 19, 2013 at 9:59 PM, subramanya sadasiva wrote: > Hi Matt, > The DM being created is here (this is from Libmesh code > (petscdmlibmesh.C ) > > 01047 { > 01048 PetscErrorCode ierr; > 01049 PetscFunctionBegin; > 01050 ierr = DMCreate(comm, dm); CHKERRQ(ierr); > 01051 ierr = DMSetType(*dm, DMLIBMESH); CHKERRQ(ierr); > 01052 ierr = DMLibMeshSetSystem(*dm, sys); CHKERRQ(ierr); > 01053 PetscFunctionReturn(0); > 01054 } > > > This file has methods to access the variables assigned to the DM (this > seems to be stored in a struct.) > So , I guess one should be able to add a bit of code to create sections as > you mentioned somewhere around here. > Okay, they have their own DM. It must implement one of the interfaces for field specification. They could provide http://www.mcs.anl.gov/petsc/petsc-current/docs/manualpages/DM/DMCreateFieldDecomposition.html or at a lower level http://www.mcs.anl.gov/petsc/petsc-current/docs/manualpages/DM/DMCreateSubDM.html which in turn can be constructed by specifying a default PetscSection http://www.mcs.anl.gov/petsc/petsc-current/docs/manualpages/DM/DMSetDefaultSection.html Matt > Thanks, > Subramanya > > > > > > Date: Fri, 19 Jul 2013 21:33:11 -0500 > Subject: Re: [petsc-users] Trying to set up a field-split preconditioner > From: knepley at gmail.com > To: potaman at outlook.com > CC: petsc-users at mcs.anl.gov > > On Fri, Jul 19, 2013 at 9:17 PM, subramanya sadasiva > wrote: > Hi Matt, > I am using Libmesh so the DM stuff is actually in the background, and > unfortunately the matrix doesn't have a saddle point, > I thought that > > -ch_solve_fieldsplit_block_size 2 > -ch_solve_fieldsplit_0_fields 0 > -ch_solve_fieldsplit_1_fields 1 > > The block_size argument presumes you are using a DA. Are you? > > The other two options just say select the first DM field as field 0 in > this PC, and the same with the second field. The > DM must inform the PC about the initial field decomposition. > > would inform the solver of the structure. If this doesn't work owing to > the fact that the problem is only being solved on a section of the mesh > (because of the reduced space method), I guess I will have to use the > PetscSection. Does that sound right? > > First, I think the right people to do this are the Libmesh people (we will > of course help them). Second, you have not said > whether you are using a structured or unstructured mesh. 
What DM class > does the solver actually see? > > Thanks, > > Matt > > Thanks, > Subramanya > > > Subject: Re: [petsc-users] Trying to set up a field-split preconditioner > From: knepley at gmail.com > To: potaman at outlook.com > CC: petsc-users at mcs.anl.gov; libmesh-users at lists.sourceforge.net > > On Fri, Jul 19, 2013 at 7:56 PM, subramanya sadasiva > wrote: > Hi, > I am trying to set up a fieldsplit preconditioner for my Cahn Hilliard > solver and I get the following error, > > You have to tell the PCFIELDSPLIT about the dofs in each field. So > > 1) You are probably not using a DA, since it would tell it automatically > > 2) If you have a saddle point, you can use > -pc_fieldsplit_detect_saddle_point > > 3) If none of those apply, you can set a PetscSection describing your > layout to the DM for the solver. > Since this is new, I suspect you will need help, so mail back. > > Thanks, > > Matt > > > [0]PETSC ERROR: --------------------- Error Message > ------------------------------------ > [0]PETSC ERROR: Petsc has generated inconsistent data! > [0]PETSC ERROR: Unhandled case, must have at least two fields, not 0! > [0]PETSC ERROR: > ------------------------------------------------------------------------ > > These are the options that I am using, > -ch_solve is just a prefix. > > > > -ch_solve_pc_type fieldsplit > -ch_solve_pc_fieldsplit_type schur > -ch_solve_fieldsplit_block_size 2 > -ch_solve_fieldsplit_0_field 1 > -ch_solve_fieldsplit_1_field 0 > -ch_solve_fieldsplit_0_ksp_type cg > -ch_solve_fieldsplit_0_pc_type hypre > -ch_solve_fieldsplit_0_pc_type_hypre boomeramg > -ch_solve_fieldsplit_1_ksp_type cg > -ch_solve_fieldsplit_1_pc_type hypre > -ch_solve_fieldsplit_1_pc_type_hypre boomeramg > > Any ideas? > > Thanks, > Subramanya > > > > > -- > What most experimenters take for granted before they begin their > experiments is infinitely more interesting than any results to which their > experiments lead. > -- Norbert Wiener > > > > -- > What most experimenters take for granted before they begin their > experiments is infinitely more interesting than any results to which their > experiments lead. > -- Norbert Wiener > -------------- next part -------------- An HTML attachment was scrubbed... URL: From potaman at outlook.com Fri Jul 19 23:50:58 2013 From: potaman at outlook.com (subramanya sadasiva) Date: Sat, 20 Jul 2013 00:50:58 -0400 Subject: [petsc-users] Trying to set up a field-split preconditioner In-Reply-To: References: , , , , , Message-ID: Hi Matt, I see that there is an implementation of the interface to DMCreateFieldDecomposition.html. So does this sound right? 1. I get index sets and variable names from the dm create field decomposition 2. Once I have these, I create fieldsplits and name them using this. 3. And I guess I should be ready to go. One question that remains is that the fieldsplit is created on a full matrix. However, an algorithm such as VIRS operates only on a subset of this full DM.
Will the fieldsplit and preconditioner created on the full DM carry over to the subsidiary DMs? Thanks for all the help! Subramanya Date: Fri, 19 Jul 2013 23:09:10 -0500 Subject: Re: [petsc-users] Trying to set up a field-split preconditioner From: knepley at gmail.com To: potaman at outlook.com CC: petsc-users at mcs.anl.gov On Fri, Jul 19, 2013 at 9:59 PM, subramanya sadasiva wrote: Hi Matt, The DM being created is here (this is from the Libmesh code, petscdmlibmesh.C): { PetscErrorCode ierr; PetscFunctionBegin; ierr = DMCreate(comm, dm); CHKERRQ(ierr); ierr = DMSetType(*dm, DMLIBMESH); CHKERRQ(ierr); ierr = DMLibMeshSetSystem(*dm, sys); CHKERRQ(ierr); PetscFunctionReturn(0); } This file has methods to access the variables assigned to the DM (this seems to be stored in a struct.) So, I guess one should be able to add a bit of code to create sections as you mentioned somewhere around here. Okay, they have their own DM. It must implement one of the interfaces for field specification. They could provide http://www.mcs.anl.gov/petsc/petsc-current/docs/manualpages/DM/DMCreateFieldDecomposition.html or at a lower level http://www.mcs.anl.gov/petsc/petsc-current/docs/manualpages/DM/DMCreateSubDM.html which in turn can be constructed by specifying a default PetscSection http://www.mcs.anl.gov/petsc/petsc-current/docs/manualpages/DM/DMSetDefaultSection.html Matt Thanks, Subramanya Date: Fri, 19 Jul 2013 21:33:11 -0500 Subject: Re: [petsc-users] Trying to set up a field-split preconditioner From: knepley at gmail.com To: potaman at outlook.com CC: petsc-users at mcs.anl.gov On Fri, Jul 19, 2013 at 9:17 PM, subramanya sadasiva wrote: Hi Matt, I am using Libmesh so the DM stuff is actually in the background, and unfortunately the matrix doesn't have a saddle point, I thought that -ch_solve_fieldsplit_block_size 2 -ch_solve_fieldsplit_0_fields 0 -ch_solve_fieldsplit_1_fields 1 The block_size argument presumes you are using a DA. Are you? The other two options just say select the first DM field as field 0 in this PC, and the same with the second field. The DM must inform the PC about the initial field decomposition. would inform the solver of the structure. If this doesn't work owing to the fact that the problem is only being solved on a section of the mesh (because of the reduced space method), I guess I will have to use the PetscSection. Does that sound right? First, I think the right people to do this are the Libmesh people (we will of course help them). Second, you have not said whether you are using a structured or unstructured mesh. What DM class does the solver actually see?
Thanks, Matt [0]PETSC ERROR: --------------------- Error Message ------------------------------------ [0]PETSC ERROR: Petsc has generated inconsistent data! [0]PETSC ERROR: Unhandled case, must have at least two fields, not 0! [0]PETSC ERROR: ------------------------------------------------------------------------ These are the options that I am using, -ch_solve is just a prefix. -ch_solve_pc_type fieldsplit -ch_solve_pc_fieldsplit_type schur -ch_solve_fieldsplit_block_size 2 -ch_solve_fieldsplit_0_field 1 -ch_solve_fieldsplit_1_field 0 -ch_solve_fieldsplit_0_ksp_type cg -ch_solve_fieldsplit_0_pc_type hypre -ch_solve_fieldsplit_0_pc_type_hypre boomeramg -ch_solve_fieldsplit_1_ksp_type cg -ch_solve_fieldsplit_1_pc_type hypre -ch_solve_fieldsplit_1_pc_type_hypre boomeramg Any ideas? Thanks, Subramanya -- What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. -- Norbert Wiener -- What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. -- Norbert Wiener -- What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. -- Norbert Wiener -------------- next part -------------- An HTML attachment was scrubbed... URL: From jedbrown at mcs.anl.gov Sat Jul 20 04:03:33 2013 From: jedbrown at mcs.anl.gov (Jed Brown) Date: Sat, 20 Jul 2013 01:03:33 -0800 Subject: [petsc-users] error with PAMI (fwd) In-Reply-To: References: Message-ID: <87ob9x8uxm.fsf@mcs.anl.gov> Satish Balay writes: > I am trying to run PETSc 3.3-p5 on a IBM Blue Gene/Q machine which > features PAMI as MPI implementation PAMI is a library that sits underneath the MPI implementation. PAMI errors are most likely either memory corruption, otherwise a bug in the lower level software. If this error is reproducible, I would (a) try Valgrind on a different machine, (b) try to reduce the circumstances for the crash, and (c) report the issue to IBM. > and the IBM xl compiler. I would like to solve a non-linear system but > as soon as the function SNESSolve is called the code crashes and I get > the following error message (MooseMBFlow is the executable): > > MooseMBFlow: /bgsys/source/srcV1R2M0.14091/comm/sys/buildtools/pami/common/bgq/Memregion.h:58: pami_result_t PAMI::Memregion::createMemregion_impl(size_t*, size_t, void*, uint64_t): Assertion `rc == 0' failed > > These lines are the only ones being printed therefore I cannot tell > you much more about the error. I know this is not directly related to > PETSc but I wonder if anybody had already experienced such a problem, > I did not get any real help from the user support of that machine. > > Thank you very much for your help. > > Best Regards, > Giorgio Giangaspero -------------- next part -------------- A non-text attachment was scrubbed... Name: not available Type: application/pgp-signature Size: 835 bytes Desc: not available URL: From knepley at gmail.com Sat Jul 20 06:07:47 2013 From: knepley at gmail.com (Matthew Knepley) Date: Sat, 20 Jul 2013 06:07:47 -0500 Subject: [petsc-users] Trying to set up a field-split preconditioner In-Reply-To: References: Message-ID: On Fri, Jul 19, 2013 at 11:50 PM, subramanya sadasiva wrote: > > Hi Matt, > I see that there is an implementation of the interface to > DMCreateFieldDecomposition.html . > > So does this sound right? 
> Yes, that should automatically provide the field decomposition. Since this is not happening, something is wrong in the code. Is the DM set in your solver? > 1. I get index sets ,and variable names from the dm create field > decomposition > 2. Once I have these , I create fieldsplits and name them using this.. > 3. And I guess I should be ready to go.. > One question that remains is that the fieldsplit is created on a full > matrix. However, an algorithm such as VIRS operates only on a subset of > this full DM. Will the fieldsplit and preconditioner created on the full DM > carry over to the subsidiary DMs? > VI is still new, and I have not tested in this case, but it is supposed to work. Thanks, Matt > Thanks for all the help! > Subramanya > > > ------------------------------ > Date: Fri, 19 Jul 2013 23:09:10 -0500 > Subject: Re: [petsc-users] Trying to set up a field-split preconditioner > From: knepley at gmail.com > To: potaman at outlook.com > CC: petsc-users at mcs.anl.gov > > On Fri, Jul 19, 2013 at 9:59 PM, subramanya sadasiva wrote: > > Hi Matt, > The DM being created is here (this is from Libmesh code > (petscdmlibmesh.C ) > > 01047 { > 01048 PetscErrorCode ierr; > 01049 PetscFunctionBegin; > 01050 ierr = DMCreate(comm, dm); CHKERRQ(ierr); > 01051 ierr = DMSetType(*dm, DMLIBMESH); CHKERRQ(ierr); > 01052 ierr = DMLibMeshSetSystem(*dm, sys); CHKERRQ(ierr); > 01053 PetscFunctionReturn(0); > 01054 } > > > This file has methods to access the variables assigned to the DM (this > seems to be stored in a struct.) > So , I guess one should be able to add a bit of code to create sections as > you mentioned somewhere around here. > > > Okay, they have their own DM. It must implement one of the interfaces for > field specification. They could provide > > > http://www.mcs.anl.gov/petsc/petsc-current/docs/manualpages/DM/DMCreateFieldDecomposition.html > > or at a lower level > > > http://www.mcs.anl.gov/petsc/petsc-current/docs/manualpages/DM/DMCreateSubDM.html > > which in turn can be constructed by specifying a default PetscSection > > > http://www.mcs.anl.gov/petsc/petsc-current/docs/manualpages/DM/DMSetDefaultSection.html > > Matt > > > Thanks, > Subramanya > > > > > > Date: Fri, 19 Jul 2013 21:33:11 -0500 > Subject: Re: [petsc-users] Trying to set up a field-split preconditioner > From: knepley at gmail.com > To: potaman at outlook.com > CC: petsc-users at mcs.anl.gov > > On Fri, Jul 19, 2013 at 9:17 PM, subramanya sadasiva > wrote: > Hi Matt, > I am using Libmesh so the DM stuff is actually in the background, and > unfortunately the matrix doesn't have a saddle point, > I thought that > > -ch_solve_fieldsplit_block_size 2 > -ch_solve_fieldsplit_0_fields 0 > -ch_solve_fieldsplit_1_fields 1 > > The block_size argument presumes you are using a DA. Are you? > > The other two options just say select the first DM field as field 0 in > this PC, and the same with the second field. The > DM must inform the PC about the initial field decomposition. > > would inform the solver of the structure. If this doesn't work owing to > the fact that the problem is only being solved on a section of the mesh > (because of the reduced space method), I guess I will have to use the > PetscSection. Does that sound right? > > First, I think the right people to do this are the Libmesh people (we will > of course help them). Second, you have not said > whether you are using a structured or unstructured mesh. What DM class > does the solver actually see? 
> > Thanks,
> >
> >    Matt
>
> Thanks,
> Subramanya
>
> Subject: Re: [petsc-users] Trying to set up a field-split preconditioner
> From: knepley at gmail.com
> To: potaman at outlook.com
> CC: petsc-users at mcs.anl.gov; libmesh-users at lists.sourceforge.net
>
> On Fri, Jul 19, 2013 at 7:56 PM, subramanya sadasiva wrote:
>
> Hi,
> I am trying to set up a fieldsplit preconditioner for my Cahn Hilliard
> solver and I get the following error,
>
> You have to tell the PCFIELDSPLIT about the dofs in each field. So
>
> 1) You are probably not using a DA, since it would tell it automatically
>
> 2) If you have a saddle point, you can use -pc_fieldsplit_detect_saddle_point
>
> 3) If none of those apply, you can set a PetscSection describing your
> layout to the DM for the solver.
> Since this is new, I suspect you will need help, so mail back.
>
> Thanks,
>
>    Matt
>
> [0]PETSC ERROR: --------------------- Error Message ------------------------------------
> [0]PETSC ERROR: Petsc has generated inconsistent data!
> [0]PETSC ERROR: Unhandled case, must have at least two fields, not 0!
> [0]PETSC ERROR: ------------------------------------------------------------------------
>
> These are the options that I am using; -ch_solve is just a prefix.
>
> -ch_solve_pc_type fieldsplit
> -ch_solve_pc_fieldsplit_type schur
> -ch_solve_fieldsplit_block_size 2
> -ch_solve_fieldsplit_0_field 1
> -ch_solve_fieldsplit_1_field 0
> -ch_solve_fieldsplit_0_ksp_type cg
> -ch_solve_fieldsplit_0_pc_type hypre
> -ch_solve_fieldsplit_0_pc_type_hypre boomeramg
> -ch_solve_fieldsplit_1_ksp_type cg
> -ch_solve_fieldsplit_1_pc_type hypre
> -ch_solve_fieldsplit_1_pc_type_hypre boomeramg
>
> Any ideas?
>
> Thanks,
> Subramanya

--
What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead.
-- Norbert Wiener

From potaman at outlook.com  Sat Jul 20 06:19:15 2013
From: potaman at outlook.com (subramanya sadasiva)
Date: Sat, 20 Jul 2013 07:19:15 -0400
Subject: [petsc-users] Trying to set up a field-split preconditioner
In-Reply-To: References: Message-ID:

Hi Matt,
The DM is created by the LibMesh code. The only thing I do directly with PETSc is set the solver prefixes, which LibMesh doesn't have an interface for at present. I have been able to set most options directly through command-line options; this is the one case where that is not helping, and it might just be that I don't know how. Let me see if I am able to get this working.
Thanks,
Subramanya
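To make option 3) quoted above concrete: a minimal sketch, assuming a two-field (c, mu) Cahn-Hilliard layout with one dof per field at every mesh point. The helper name, the field names, and the layout are illustrative assumptions, not LibMesh code. With such a section attached, PCFIELDSPLIT can pull per-field index sets out of the DM instead of finding zero fields.

#include <petsc.h>

/* Sketch only: describe a two-field layout to a DM through a PetscSection.
   pStart/pEnd is the range of mesh points; one dof per field per point is
   an assumption for illustration. */
static PetscErrorCode AttachTwoFieldSection(DM dm, PetscInt pStart, PetscInt pEnd)
{
  PetscSection   s;
  PetscInt       p;
  PetscErrorCode ierr;

  PetscFunctionBegin;
  ierr = PetscSectionCreate(PetscObjectComm((PetscObject)dm), &s);CHKERRQ(ierr);
  ierr = PetscSectionSetNumFields(s, 2);CHKERRQ(ierr);
  ierr = PetscSectionSetFieldName(s, 0, "c");CHKERRQ(ierr);   /* illustrative names */
  ierr = PetscSectionSetFieldName(s, 1, "mu");CHKERRQ(ierr);
  ierr = PetscSectionSetChart(s, pStart, pEnd);CHKERRQ(ierr);
  for (p = pStart; p < pEnd; ++p) {
    ierr = PetscSectionSetDof(s, p, 2);CHKERRQ(ierr);         /* total dofs at point p */
    ierr = PetscSectionSetFieldDof(s, p, 0, 1);CHKERRQ(ierr);
    ierr = PetscSectionSetFieldDof(s, p, 1, 1);CHKERRQ(ierr);
  }
  ierr = PetscSectionSetUp(s);CHKERRQ(ierr);
  ierr = DMSetDefaultSection(dm, s);CHKERRQ(ierr);
  ierr = PetscSectionDestroy(&s);CHKERRQ(ierr);
  PetscFunctionReturn(0);
}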
From knepley at gmail.com  Sat Jul 20 06:22:30 2013
From: knepley at gmail.com (Matthew Knepley)
Date: Sat, 20 Jul 2013 06:22:30 -0500
Subject: [petsc-users] Trying to set up a field-split preconditioner
In-Reply-To: References: Message-ID:

On Sat, Jul 20, 2013 at 6:19 AM, subramanya sadasiva wrote:

> Hi Matt,
> The DM is created by the LibMesh code. The only thing I do directly with
> PETSc is set the solver prefixes, which LibMesh doesn't have an interface
> for at present. I have been able to set most options directly through
> command-line options; this is the one case where that is not helping, and
> it might just be that I don't know how. Let me see if I am able to get
> this working.

So does Libmesh create the PETSc solver? If so, it should be calling KSPSetDM() or SNESSetDM() or TSSetDM(). This is all I want to know. It seems like this is not the case, since the PCFIELDSPLIT says it has no fields.
Either that, or they have a bug in the DMCreateFieldDecomposition() implementation.

Either way, it seems like the thing to do is run with -start_in_debugger and break in PCFieldSplitSetDefaults() where that is called.

   Matt
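As a quick check for the "Is the DM set in your solver?" question, one can view whatever DM the SNES holds. The helper below is an assumed fragment, not libMesh or PETSc API; note that SNESGetDM() should hand back a default shell DM if nothing was attached, so the type printed by DMView() is the informative part.

#include <petsc.h>

/* Assumed diagnostic fragment: print the type of the DM the solver sees. */
static PetscErrorCode ReportSolverDM(SNES snes)
{
  DM             dm;
  PetscErrorCode ierr;

  PetscFunctionBegin;
  ierr = SNESGetDM(snes, &dm);CHKERRQ(ierr);
  ierr = DMView(dm, PETSC_VIEWER_STDOUT_WORLD);CHKERRQ(ierr);
  PetscFunctionReturn(0);
}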
From potaman at outlook.com  Sat Jul 20 06:35:21 2013
From: potaman at outlook.com (subramanya sadasiva)
Date: Sat, 20 Jul 2013 07:35:21 -0400
Subject: [petsc-users] Trying to set up a field-split preconditioner
In-Reply-To: References: Message-ID:

Hi Matt,
Libmesh does create the DM. This is the relevant code:

ierr = DMCreateLibMesh(libMesh::COMM_WORLD, this->system(), &dm); CHKERRABORT(libMesh::COMM_WORLD, ierr);
ierr = DMSetFromOptions(dm); CHKERRABORT(libMesh::COMM_WORLD, ierr);
ierr = DMSetUp(dm); CHKERRABORT(libMesh::COMM_WORLD, ierr);
ierr = SNESSetDM(this->_snes, dm); CHKERRABORT(libMesh::COMM_WORLD, ierr);

I am unable to tell from the code whether DMCreateFieldDecomposition is run at all. I will see if running in the debugger helps me figure it out.

Thanks,
Subramanya
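Short of breaking in PCFieldSplitSetDefaults() under a debugger, one way to see whether the DM serves a decomposition at all is to call the interface directly after SNESSetDM(). A hedged sketch follows; the helper name is invented, and whether the name/IS/DM arrays come back non-NULL depends on the DM implementation.

#include <petsc.h>

/* Invented diagnostic helper: if nfields comes back 0, PCFIELDSPLIT will see
   no fields either, matching the "must have at least two fields, not 0!" error. */
static PetscErrorCode CheckFieldDecomposition(SNES snes)
{
  DM             dm;
  PetscInt       nfields = 0, f;
  char         **names   = NULL;
  IS            *islist  = NULL;
  DM            *dmlist  = NULL;
  PetscErrorCode ierr;

  PetscFunctionBegin;
  ierr = SNESGetDM(snes, &dm);CHKERRQ(ierr);
  ierr = DMCreateFieldDecomposition(dm, &nfields, &names, &islist, &dmlist);CHKERRQ(ierr);
  ierr = PetscPrintf(PETSC_COMM_WORLD, "DM provides %D field(s)\n", nfields);CHKERRQ(ierr);
  for (f = 0; f < nfields; ++f) {
    if (names) {
      ierr = PetscPrintf(PETSC_COMM_WORLD, "  field %D: %s\n", f, names[f]);CHKERRQ(ierr);
      ierr = PetscFree(names[f]);CHKERRQ(ierr);
    }
    if (islist) {ierr = ISDestroy(&islist[f]);CHKERRQ(ierr);}
    if (dmlist) {ierr = DMDestroy(&dmlist[f]);CHKERRQ(ierr);}
  }
  /* The caller is responsible for freeing the returned arrays. */
  ierr = PetscFree(names);CHKERRQ(ierr);
  ierr = PetscFree(islist);CHKERRQ(ierr);
  ierr = PetscFree(dmlist);CHKERRQ(ierr);
  PetscFunctionReturn(0);
}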
From zonexo at gmail.com  Sat Jul 20 08:39:13 2013
From: zonexo at gmail.com (TAY wee-beng)
Date: Sat, 20 Jul 2013 15:39:13 +0200
Subject: [petsc-users] Using GAMG to speed up solving of Poisson eqn
Message-ID: <51EA9301.8000208@gmail.com>

Hi,

I'm trying to use GAMG to speed up solving of Poisson eqn.
I used:

call KSPSetOptionsPrefix(ksp,"poisson_",ierr)

-poisson_pc_gamg_agg_nsmooths 1 -poisson_pc_type gamg

I remember it used to work in some problems but now it can't work, with error:

[0]PETSC ERROR: ------------------------------------------------------------------------
[0]PETSC ERROR: Caught signal number 8 FPE: Floating Point Exception,probably divide by zero
[0]PETSC ERROR: Try option -start_in_debugger or -on_error_attach_debugger
[0]PETSC ERROR: or see http://www.mcs.anl.gov/petsc/documentation/faq.html#valgrind
[0]PETSC ERROR: or try http://valgrind.org on GNU/linux and Apple Mac OS X to find memory corruption errors
[0]PETSC ERROR: likely location of problem given in stack below
[0]PETSC ERROR: --------------------- Stack Frames ------------------------------------
[0]PETSC ERROR: Note: The EXACT line numbers in the stack are not available,
[0]PETSC ERROR:       INSTEAD the line number of the start of the function
[0]PETSC ERROR:       is given.
[0]PETSC ERROR: [0] KSPComputeExtremeSingularValues_GMRES line 24 src/ksp/ksp/impls/gmres/C:\wtay\DOWNLO~1\Codes\PETSC-~1\src\ksp\ksp\impls\gmres\gmreig.c
[0]PETSC ERROR: [0] KSPComputeExtremeSingularValues line 40 src/ksp/ksp/interface/C:\wtay\DOWNLO~1\Codes\PETSC-~1\src\ksp\ksp\INTERF~1\itfunc.c
[0]PETSC ERROR: [0] PCGAMGOptprol_AGG line 1295 src/ksp/pc/impls/gamg/C:\wtay\DOWNLO~1\Codes\PETSC-~1\src\ksp\pc\impls\gamg\agg.c
[0]PETSC ERROR: [0] PCSetUp_GAMG line 564 src/ksp/pc/impls/gamg/C:\wtay\DOWNLO~1\Codes\PETSC-~1\src\ksp\pc\impls\gamg\gamg.c
[0]PETSC ERROR: [0] PCSetUp line 810 src/ksp/pc/interface/C:\wtay\DOWNLO~1\Codes\PETSC-~1\src\ksp\pc\INTERF~1\precon.c
[0]PETSC ERROR: [0] KSPSetUp line 182 src/ksp/ksp/interface/C:\wtay\DOWNLO~1\Codes\PETSC-~1\src\ksp\ksp\INTERF~1\itfunc.c
[0]PETSC ERROR: [0] KSPSolve line 351 src/ksp/ksp/interface/C:\wtay\DOWNLO~1\Codes\PETSC-~1\src\ksp\ksp\INTERF~1\itfunc.c
[0]PETSC ERROR: --------------------- Error Message ------------------------------------
[0]PETSC ERROR: Signal received!
[0]PETSC ERROR: ------------------------------------------------------------------------
[0]PETSC ERROR: Petsc Development HG revision: 9850aeb5d33f0b33bc931843c4b3b3b4f8df6a3b  HG Date: Tue Oct 02 22:18:53 2012 -0500
[0]PETSC ERROR: See docs/changes/index.html for recent updates.
[0]PETSC ERROR: See docs/faq.html for hints about trouble shooting.
[0]PETSC ERROR: See docs/index.html for manual pages.
[0]PETSC ERROR: ------------------------------------------------------------------------
[0]PETSC ERROR: C:\Obj_tmp\ibm3d_high_Re_staggered_AB2\Debug\ibm3d_high_Re_staggered_AB2.exe on a petsc-3.3 named USER-PC by User Sat Jul 20 15:37:40 2013
[0]PETSC ERROR: Libraries linked from /cygdrive/d/wtay/Lib/petsc-3.3-dev_win32_vs2008/lib
[0]PETSC ERROR: Configure run at Thu Oct  4 10:01:13 2012
[0]PETSC ERROR: Configure options --with-cc="win32fe cl" --with-fc="win32fe ifort" --with-cxx="win32fe cl" --with-mpi-dir=/cygdrive/c/MPICH2/ --download-f-blas-lapack=1 --prefix=/cygdrive/d/wtay/Lib/petsc-3.3-dev_win32_vs2008 --with-debugging=1 --useThreads=0
[0]PETSC ERROR: ------------------------------------------------------------------------
[0]PETSC ERROR: User provided function() line 0 in unknown directory unknown file
application called MPI_Abort(MPI_COMM_WORLD, 59) - process 0

job aborted:
rank: node: exit code[: error message]
0: User-PC: 59: process 0 exited without calling finalize

I read in one of the threads that I can use:

-pc_type gamg -pc_gamg_agg_nsmooths 1 -mg_levels_ksp_type richardson -mg_levels_pc_type sor

It worked but I got the msg:

WARNING! There are options you set that were not used!
WARNING! could be spelling mistake, etc!
Option left: name:-mg_levels_ksp_type value: richardson
Option left: name:-mg_levels_pc_type value: sor
Option left: name:-pc_gamg_agg_nsmooths value: 1
Option left: name:-pc_type value: gamg
Press any key to continue . . .

If I used this:

-poisson_pc_type gamg -poisson_pc_gamg_agg_nsmooths 1 -poisson_mg_levels_ksp_type richardson -poisson_mg_levels_pc_type sor

It aborts with error:

[0]PETSC ERROR: ------------------------------------------------------------------------
[0]PETSC ERROR: Caught signal number 8 FPE: Floating Point Exception,probably divide by zero
[0]PETSC ERROR: Try option -start_in_debugger or -on_error_attach_debugger
[0]PETSC ERROR: or see http://www.mcs.anl.gov/petsc/documentation/faq.html#valgrind
[0]PETSC ERROR: or try http://valgrind.org on GNU/linux and Apple Mac OS X to find memory corruption errors
[0]PETSC ERROR: likely location of problem given in stack below
[0]PETSC ERROR: --------------------- Stack Frames ------------------------------------
[0]PETSC ERROR: Note: The EXACT line numbers in the stack are not available,
[0]PETSC ERROR:       INSTEAD the line number of the start of the function
[0]PETSC ERROR:       is given.
[0]PETSC ERROR: [0] KSPComputeExtremeSingularValues_GMRES line 24 src/ksp/ksp/impls/gmres/C:\wtay\DOWNLO~1\Codes\PETSC-~1\src\ksp\ksp\impls\gmres\gmreig.c
[0]PETSC ERROR: [0] KSPComputeExtremeSingularValues line 40 src/ksp/ksp/interface/C:\wtay\DOWNLO~1\Codes\PETSC-~1\src\ksp\ksp\INTERF~1\itfunc.c
[0]PETSC ERROR: [0] PCGAMGOptprol_AGG line 1295 src/ksp/pc/impls/gamg/C:\wtay\DOWNLO~1\Codes\PETSC-~1\src\ksp\pc\impls\gamg\agg.c
[0]PETSC ERROR: [0] PCSetUp_GAMG line 564 src/ksp/pc/impls/gamg/C:\wtay\DOWNLO~1\Codes\PETSC-~1\src\ksp\pc\impls\gamg\gamg.c
[0]PETSC ERROR: [0] PCSetUp line 810 src/ksp/pc/interface/C:\wtay\DOWNLO~1\Codes\PETSC-~1\src\ksp\pc\INTERF~1\precon.c
[0]PETSC ERROR: [0] KSPSetUp line 182 src/ksp/ksp/interface/C:\wtay\DOWNLO~1\Codes\PETSC-~1\src\ksp\ksp\INTERF~1\itfunc.c
[0]PETSC ERROR: [0] KSPSolve line 351 src/ksp/ksp/interface/C:\wtay\DOWNLO~1\Codes\PETSC-~1\src\ksp\ksp\INTERF~1\itfunc.c
[0]PETSC ERROR: --------------------- Error Message ------------------------------------
[0]PETSC ERROR: Signal received!
[0]PETSC ERROR: ------------------------------------------------------------------------
[0]PETSC ERROR: Petsc Development HG revision: 9850aeb5d33f0b33bc931843c4b3b3b4f8df6a3b  HG Date: Tue Oct 02 22:18:53 2012 -0500
[0]PETSC ERROR: See docs/changes/index.html for recent updates.
[0]PETSC ERROR: See docs/faq.html for hints about trouble shooting.
[0]PETSC ERROR: See docs/index.html for manual pages.
[0]PETSC ERROR: ------------------------------------------------------------------------
[0]PETSC ERROR: C:\Obj_tmp\ibm3d_high_Re_staggered_AB2\Debug\ibm3d_high_Re_staggered_AB2.exe on a petsc-3.3 named USER-PC by User Sat Jul 20 15:36:24 2013
[0]PETSC ERROR: Libraries linked from /cygdrive/d/wtay/Lib/petsc-3.3-dev_win32_vs2008/lib
[0]PETSC ERROR: Configure run at Thu Oct  4 10:01:13 2012
[0]PETSC ERROR: Configure options --with-cc="win32fe cl" --with-fc="win32fe ifort" --with-cxx="win32fe cl" --with-mpi-dir=/cygdrive/c/MPICH2/ --download-f-blas-lapack=1 --prefix=/cygdrive/d/wtay/Lib/petsc-3.3-dev_win32_vs2008 --with-debugging=1 --useThreads=0
[0]PETSC ERROR: ------------------------------------------------------------------------
[0]PETSC ERROR: User provided function() line 0 in unknown directory unknown file
application called MPI_Abort(MPI_COMM_WORLD, 59) - process 0

job aborted:
rank: node: exit code[: error message]
0: User-PC: 59: process 0 exited without calling finalize

So is there a recommended test command or method?

Thanks!

--
Yours sincerely,

TAY wee-beng

From knepley at gmail.com  Sat Jul 20 09:08:23 2013
From: knepley at gmail.com (Matthew Knepley)
Date: Sat, 20 Jul 2013 09:08:23 -0500
Subject: [petsc-users] Using GAMG to speed up solving of Poisson eqn
In-Reply-To: <51EA9301.8000208@gmail.com> References: <51EA9301.8000208@gmail.com> Message-ID:

On Sat, Jul 20, 2013 at 8:39 AM, TAY wee-beng wrote:

> Hi,
>
> I'm trying to use GAMG to speed up solving of Poisson eqn. I used:
>
> call KSPSetOptionsPrefix(ksp,"poisson_",ierr)
>
> -poisson_pc_gamg_agg_nsmooths 1 -poisson_pc_type gamg
>
> I remember it used to work in some problems but now it can't work, with
> error:

Upgrade to the latest release and we will help you debug this.

   Matt
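For reference, the "options you set that were not used" warning above is what one would expect when bare options such as -pc_type gamg never reach a KSP that was created with the "poisson_" prefix: only -poisson_-prefixed options are routed to it. A sketch of the coupling (function and variable names are illustrative, with the PETSc 3.3/3.4-style KSPSetOperators signature):

#include <petsc.h>

/* Illustrative sketch: options with the "poisson_" prefix are picked up by
   this KSP at KSPSetFromOptions(); unprefixed ones are left unused. */
PetscErrorCode SetupPoissonSolver(MPI_Comm comm, Mat A, KSP *ksp)
{
  PetscErrorCode ierr;

  PetscFunctionBegin;
  ierr = KSPCreate(comm, ksp);CHKERRQ(ierr);
  ierr = KSPSetOptionsPrefix(*ksp, "poisson_");CHKERRQ(ierr);
  ierr = KSPSetOperators(*ksp, A, A, SAME_NONZERO_PATTERN);CHKERRQ(ierr);
  /* Reads -poisson_pc_type gamg, -poisson_pc_gamg_agg_nsmooths 1, ... */
  ierr = KSPSetFromOptions(*ksp);CHKERRQ(ierr);
  PetscFunctionReturn(0);
}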
From zonexo at gmail.com  Sat Jul 20 10:45:16 2013
From: zonexo at gmail.com (TAY wee-beng)
Date: Sat, 20 Jul 2013 17:45:16 +0200
Subject: [petsc-users] Using GAMG to speed up solving of Poisson eqn
In-Reply-To: References: <51EA9301.8000208@gmail.com> Message-ID: <51EAB08C.6080605@gmail.com>

On 20/7/2013 4:08 PM, Matthew Knepley wrote:
> Upgrade to the latest release and we will help you debug this.
>
>    Matt

Hi Matt,

I've used the latest 3.4.2. What else do I need to provide? Thanks!
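When the version in the error banner and the version one believes one built disagree, a tiny standalone program (assumed test code, not from this thread) settles which library the environment actually links:

#include <petsc.h>

/* Print the PETSc version string the binary was compiled against. */
int main(int argc, char **argv)
{
  char           version[256];
  PetscErrorCode ierr;

  ierr = PetscInitialize(&argc, &argv, NULL, NULL);if (ierr) return ierr;
  ierr = PetscGetVersion(version, sizeof(version));CHKERRQ(ierr);
  ierr = PetscPrintf(PETSC_COMM_WORLD, "%s\n", version);CHKERRQ(ierr);
  ierr = PetscFinalize();
  return 0;
}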
From jedbrown at mcs.anl.gov  Sat Jul 20 11:03:28 2013
From: jedbrown at mcs.anl.gov (Jed Brown)
Date: Sat, 20 Jul 2013 08:03:28 -0800
Subject: [petsc-users] Using GAMG to speed up solving of Poisson eqn
In-Reply-To: <51EAB08C.6080605@gmail.com> References: <51EA9301.8000208@gmail.com> <51EAB08C.6080605@gmail.com> Message-ID: <87ip058bhr.fsf@mcs.anl.gov>

TAY wee-beng writes:

> I've used the latest 3.4.2.

No, you are not.
> [0]PETSC ERROR: Petsc Development HG revision: 9850aeb5d33f0b33bc931843c4b3b3b4f8df6a3b  HG Date: Tue Oct 02 22:18:53 2012 -0500
> [0]PETSC ERROR: C:\Obj_tmp\ibm3d_high_Re_staggered_AB2\Debug\ibm3d_high_Re_staggered_AB2.exe on a petsc-3.3 named USER-PC by User Sat Jul 20 15:37:40 2013
> [0]PETSC ERROR: Libraries linked from /cygdrive/d/wtay/Lib/petsc-3.3-dev_win32_vs2008/lib
> [0]PETSC ERROR: Configure run at Thu Oct  4 10:01:13 2012
> [0]PETSC ERROR: Configure options --with-cc="win32fe cl" --with-fc="win32fe ifort" --with-cxx="win32fe cl" --with-mpi-dir=/cygdrive/c/MPICH2/ --download-f-blas-lapack=1 --prefix=/cygdrive/d/wtay/Lib/petsc-3.3-dev_win32_vs2008 --with-debugging=1 --useThreads=0

> What else do I need to provide?

You are either intentionally or accidentally running an old PETSc. I suspect an environment problem before any sort of code problem, so please get your versions straight first.

From zonexo at gmail.com  Sat Jul 20 12:53:39 2013
From: zonexo at gmail.com (TAY wee-beng)
Date: Sat, 20 Jul 2013 19:53:39 +0200
Subject: [petsc-users] Using GAMG to speed up solving of Poisson eqn
In-Reply-To: <87ip058bhr.fsf@mcs.anl.gov> References: <51EA9301.8000208@gmail.com> <51EAB08C.6080605@gmail.com> <87ip058bhr.fsf@mcs.anl.gov> Message-ID: <51EACEA3.2010205@gmail.com>

Oops, sorry, I just built the newest version but its environment variable was still not changed. I have restarted it. Now:

-poisson_pc_gamg_agg_nsmooths 1 -poisson_pc_type gamg

hangs there for more than 10 mins. Same for

-poisson_pc_type gamg -poisson_pc_gamg_agg_nsmooths 1 -poisson_mg_levels_ksp_type richardson -poisson_mg_levels_pc_type sor

and

-poisson_pc_type gamg -poisson_pc_gamg_agg_nsmooths 1 -poisson_mg_levels_ksp_type richardson -poisson_mg_levels_pc_type sor

Yours sincerely,

TAY wee-beng

On 20/7/2013 6:03 PM, Jed Brown wrote:
> TAY wee-beng writes:
>> I've used the latest 3.4.2.
> No, you are not.

From jedbrown at mcs.anl.gov  Sat Jul 20 13:36:11 2013
From: jedbrown at mcs.anl.gov (Jed Brown)
Date: Sat, 20 Jul 2013 10:36:11 -0800
Subject: [petsc-users] Using GAMG to speed up solving of Poisson eqn
In-Reply-To: <51EACEA3.2010205@gmail.com> References: <51EA9301.8000208@gmail.com> <51EAB08C.6080605@gmail.com> <87ip058bhr.fsf@mcs.anl.gov> <51EACEA3.2010205@gmail.com> Message-ID: <87fvv984f8.fsf@mcs.anl.gov>

TAY wee-beng writes:

> Oops, sorry, I just built the newest version but its environment variable
> was still not changed. I have restarted it. Now:
>
> -poisson_pc_gamg_agg_nsmooths 1 -poisson_pc_type gamg
>
> hangs there for more than 10 mins.

Use a debugger to find out where it has hung.

From jedbrown at mcs.anl.gov  Sat Jul 20 14:08:06 2013
From: jedbrown at mcs.anl.gov (Jed Brown)
Date: Sat, 20 Jul 2013 11:08:06 -0800
Subject: [petsc-users] Fwd: MatCreateSeqAIJ( ) Question
In-Reply-To: References: Message-ID: <874nbp82y1.fsf@mcs.anl.gov>

Lu Qiyue writes:

> ---------- Forwarded message ----------
> From: Lu Qiyue
> Date: Fri, Jul 19, 2013 at 3:39 PM
> Subject: Re: [petsc-users] MatCreateSeqAIJ( ) Question
> To: Barry Smith
>
> Thanks, Barry.
> I am trying to preallocate the correct value for each row. Assume the matrix
> is in COO format, N is the dimension, and NNZ is the total number of non-zeros.
> My workflow is as below:
> 1) generate a file holding the correct number of non-zeros for each
> row and read it in as cnt
>
> From
> http://www.mcs.anl.gov/petsc/petsc-current/src/mat/examples/tutorials/ex12.c.html
> it looks like cnt should have dimension N+1 and the last value is the total
> number of non-zeros NNZ.
> One question here is at line 45: why are the dimensions of the matrix set to
> (n+1)? Should it be n?
> In http://www.mcs.anl.gov/petsc/petsc-dev/src/mat/examples/tests/ex72.c.html
> they are just set to m, n.

That example is creating a matrix with an extra row and column.

> For ex72.c,

I don't know which ex72 you are looking at.

> the code copies the entries which are not on the diagonal and makes the
> symmetric COO matrix FULL. But when we do the MatCreateSeqAIJ, the cnt
> should hold the number of non-zeros per row of the FULL matrix, right?
>
> In one word, what's the content (and dimension) of the cnt array, and how
> should MatCreateSeqAIJ() be set up, assuming an N-dimension matrix?
MatCreateSeqAIJ(PETSC_COMM_SELF,N,N,0,cnt,&A); where cnt[] is an array of length N, with cnt[i] containing the number of nonzeros in row i. -------------- next part -------------- A non-text attachment was scrubbed... Name: not available Type: application/pgp-signature Size: 835 bytes Desc: not available URL: From karpeev at mcs.anl.gov Sat Jul 20 16:41:03 2013 From: karpeev at mcs.anl.gov (Dmitry Karpeyev) Date: Sat, 20 Jul 2013 16:41:03 -0500 Subject: [petsc-users] Trying to set up a field-split preconditioner In-Reply-To: References: Message-ID: What version of petsc are you using? DMlibMesh needs to be updated to serve the splits via DMCreateFieldDecomposition() correctly to petsc-3.4. Note that DMCreateFieldDecomposition() is called by PCFieldSplit from PETSc, so you wouldn't see those calls in the libMesh source. Dmitry. On Sat, Jul 20, 2013 at 6:35 AM, subramanya sadasiva wrote: > Hi Matt, > Libmesh does create the DM. This is the relevant code. > > ierr = DMCreateLibMesh(libMesh::COMM_WORLD, this->system(), > &dm);CHKERRABORT(libMesh::COMM_WORLD, ierr); > ierr = DMSetFromOptions(dm); CHKERRABORT(libMesh::COMM_WORLD, ierr); > ierr = DMSetUp(dm); CHKERRABORT(libMesh::COMM_WORLD, ierr); > ierr = SNESSetDM(this->_snes, dm); CHKERRABORT(libMesh::COMM_WORLD, ierr); > > I am unable to tell from the code whether the DMCreateFieldDecomposition > is run at all. I will see if running in the debugger helps me figure it out. > Thanks, > Subramanya > > > > Date: Sat, 20 Jul 2013 06:22:30 -0500 > Subject: Re: [petsc-users] Trying to set up a field-split preconditioner > From: knepley at gmail.com > To: potaman at outlook.com > CC: petsc-users at mcs.anl.gov > > On Sat, Jul 20, 2013 at 6:19 AM, subramanya sadasiva > wrote: > Hi Matt, > The DM is created by the LibMesh code. The only thing I do directly with > Petsc is set the solver prefixes which libmesh doesn't have an interface > for at present . I have been able to set most options directly through > command line options. this is the one case where that is not helping, and > it might just be that I don't know how. Let me see if I am able to get this > working. > > So does Libmesh create the PETSc solver? If so, it should be calling > KSPSetDM() or SNESSetDM() or TSSetDM(). This > is all I want to know. It seems like this is not the case since the > PCFIELDSPLIT says it has no fields. Either that, or they have > a bug in the DMCreateFieldDecomposition() implementation. > > Either way, it seems like the thing to do is run with -start_in_debugger, > and break in PCFieldSplitSetDefaults() where that is called. > > Matt > > > Thanks, > Subramanya > > Date: Sat, 20 Jul 2013 06:07:47 -0500 > Subject: Re: [petsc-users] Trying to set up a field-split preconditioner > From: knepley at gmail.com > To: potaman at outlook.com > CC: libmesh-users at lists.sourceforge.net; petsc-users at mcs.anl.gov > > On Fri, Jul 19, 2013 at 11:50 PM, subramanya sadasiva > wrote: > > Hi Matt, > I see that there is an implementation of the interface > to DMCreateFieldDecomposition.html . > So does this sound right? > > Yes, that should automatically provide the field decomposition. Since this > is not happening, something is > wrong in the code. Is the DM set in your solver? > > 1. I get index sets ,and variable names from the dm create field > decomposition > 2. Once I have these , I create fieldsplits and name them using this.. > 3. And I guess I should be ready to go.. > One question that remains is that the fieldsplit is created on a full > matrix. 
However, an algorithm such as VIRS operates only on a subset of > this full DM. Will the fieldsplit and preconditioner created on the full DM > carry over to the subsidiary DMs? > > VI is still new, and I have not tested in this case, but it is supposed to > work. > > Thanks, > > Matt > > Thanks for all the help! > Subramanya > > > Date: Fri, 19 Jul 2013 23:09:10 -0500 > Subject: Re: [petsc-users] Trying to set up a field-split preconditioner > From: knepley at gmail.com > To: potaman at outlook.com > CC: petsc-users at mcs.anl.gov > > On Fri, Jul 19, 2013 at 9:59 PM, subramanya sadasiva > wrote: > Hi Matt, > The DM being created is here (this is from Libmesh code > (petscdmlibmesh.C ) > > 01047 { > 01048 PetscErrorCode ierr; > 01049 PetscFunctionBegin; > 01050 ierr = DMCreate(comm, dm); CHKERRQ(ierr); > 01051 ierr = DMSetType(*dm, DMLIBMESH); CHKERRQ(ierr); > 01052 ierr = DMLibMeshSetSystem(*dm, sys); CHKERRQ(ierr); > 01053 PetscFunctionReturn(0); > 01054 } > > > This file has methods to access the variables assigned to the DM (this > seems to be stored in a struct.) > So , I guess one should be able to add a bit of code to create sections as > you mentioned somewhere around here. > > Okay, they have their own DM. It must implement one of the interfaces for > field specification. They could provide > > > http://www.mcs.anl.gov/petsc/petsc-current/docs/manualpages/DM/DMCreateFieldDecomposition.html > > or at a lower level > > > http://www.mcs.anl.gov/petsc/petsc-current/docs/manualpages/DM/DMCreateSubDM.html > > which in turn can be constructed by specifying a default PetscSection > > > http://www.mcs.anl.gov/petsc/petsc-current/docs/manualpages/DM/DMSetDefaultSection.html > > Matt > > Thanks, > Subramanya > > > > > > Date: Fri, 19 Jul 2013 21:33:11 -0500 > Subject: Re: [petsc-users] Trying to set up a field-split preconditioner > From: knepley at gmail.com > To: potaman at outlook.com > CC: petsc-users at mcs.anl.gov > > On Fri, Jul 19, 2013 at 9:17 PM, subramanya sadasiva > wrote: > Hi Matt, > I am using Libmesh so the DM stuff is actually in the background, and > unfortunately the matrix doesn't have a saddle point, > I thought that > > -ch_solve_fieldsplit_block_size 2 > -ch_solve_fieldsplit_0_fields 0 > -ch_solve_fieldsplit_1_fields 1 > > The block_size argument presumes you are using a DA. Are you? > > The other two options just say select the first DM field as field 0 in > this PC, and the same with the second field. The > DM must inform the PC about the initial field decomposition. > > would inform the solver of the structure. If this doesn't work owing to > the fact that the problem is only being solved on a section of the mesh > (because of the reduced space method), I guess I will have to use the > PetscSection. Does that sound right? > > First, I think the right people to do this are the Libmesh people (we will > of course help them). Second, you have not said > whether you are using a structured or unstructured mesh. What DM class > does the solver actually see? > > Thanks, > > Matt > > Thanks, > Subramanya > > > Subject: Re: [petsc-users] Trying to set up a field-split preconditioner > From: knepley at gmail.com > To: potaman at outlook.com > CC: petsc-users at mcs.anl.gov; libmesh-users at lists.sourceforge.net > > On Fri, Jul 19, 2013 at 7:56 PM, subramanya sadasiva > wrote: > Hi, > I am trying to set up a fieldsplit preconditioner for my Cahn Hilliard > solver and I get the following error, > > You have to tell the PCFIELDSPLIT about the dofs in each field. 
So

1) You are probably not using a DA, since it would tell it automatically

2) If you have a saddle point, you can use -pc_fieldsplit_detect_saddle_point

3) If none of those apply, you can set a PetscSection describing your layout to the DM for the solver. Since this is new, I suspect you will need help, so mail back.

Thanks,

Matt

[0]PETSC ERROR: --------------------- Error Message ------------------------------------
[0]PETSC ERROR: Petsc has generated inconsistent data!
[0]PETSC ERROR: Unhandled case, must have at least two fields, not 0!
[0]PETSC ERROR: ------------------------------------------------------------------------

These are the options that I am using; -ch_solve is just a prefix.

-ch_solve_pc_type fieldsplit
-ch_solve_pc_fieldsplit_type schur
-ch_solve_fieldsplit_block_size 2
-ch_solve_fieldsplit_0_field 1
-ch_solve_fieldsplit_1_field 0
-ch_solve_fieldsplit_0_ksp_type cg
-ch_solve_fieldsplit_0_pc_type hypre
-ch_solve_fieldsplit_0_pc_type_hypre boomeramg
-ch_solve_fieldsplit_1_ksp_type cg
-ch_solve_fieldsplit_1_pc_type hypre
-ch_solve_fieldsplit_1_pc_type_hypre boomeramg

Any ideas?

Thanks,
Subramanya

--
What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead.
-- Norbert Wiener
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From potaman at outlook.com  Sat Jul 20 16:57:51 2013
From: potaman at outlook.com (subramanya sadasiva)
Date: Sat, 20 Jul 2013 17:57:51 -0400
Subject: [petsc-users] Trying to set up a field-split preconditioner
In-Reply-To: References: Message-ID: 

Hi Dmitry,
I am using petsc 3.4.2. I should have been clearer about the problem that I had. I was unable to see a call to DMCreateFieldDecomposition in gdb when I ran the code with a field split preconditioner specified. If the routine has not yet been updated, I can understand why it is causing problems.
Thanks,
Subramanya

From: karpeev at mcs.anl.gov
Date: Sat, 20 Jul 2013 16:42:50 -0500
Subject: Fwd: [petsc-users] Trying to set up a field-split preconditioner
To: potaman at outlook.com; libmesh-users at lists.sourceforge.net

What version of petsc are you using? DMlibMesh needs to be updated to serve the splits via DMCreateFieldDecomposition() correctly to petsc-3.4. Note that DMCreateFieldDecomposition() is called by PCFieldSplit from PETSc, so you wouldn't see those calls in the libMesh source.

Dmitry.

On Sat, Jul 20, 2013 at 6:35 AM, subramanya sadasiva wrote:

Hi Matt,
Libmesh does create the DM. This is the relevant code.
ierr = DMCreateLibMesh(libMesh::COMM_WORLD, this->system(), &dm);CHKERRABORT(libMesh::COMM_WORLD, ierr); ierr = DMSetFromOptions(dm); CHKERRABORT(libMesh::COMM_WORLD, ierr); ierr = DMSetUp(dm); CHKERRABORT(libMesh::COMM_WORLD, ierr); ierr = SNESSetDM(this->_snes, dm); CHKERRABORT(libMesh::COMM_WORLD, ierr); I am unable to tell from the code whether the DMCreateFieldDecomposition is run at all. I will see if running in the debugger helps me figure it out. Thanks, Subramanya Date: Sat, 20 Jul 2013 06:22:30 -0500 Subject: Re: [petsc-users] Trying to set up a field-split preconditioner From: knepley at gmail.com To: potaman at outlook.com CC: petsc-users at mcs.anl.gov On Sat, Jul 20, 2013 at 6:19 AM, subramanya sadasiva wrote: Hi Matt, The DM is created by the LibMesh code. The only thing I do directly with Petsc is set the solver prefixes which libmesh doesn't have an interface for at present . I have been able to set most options directly through command line options. this is the one case where that is not helping, and it might just be that I don't know how. Let me see if I am able to get this working. So does Libmesh create the PETSc solver? If so, it should be calling KSPSetDM() or SNESSetDM() or TSSetDM(). This is all I want to know. It seems like this is not the case since the PCFIELDSPLIT says it has no fields. Either that, or they have a bug in the DMCreateFieldDecomposition() implementation. Either way, it seems like the thing to do is run with -start_in_debugger, and break in PCFieldSplitSetDefaults() where that is called. Matt Thanks, Subramanya Date: Sat, 20 Jul 2013 06:07:47 -0500 Subject: Re: [petsc-users] Trying to set up a field-split preconditioner From: knepley at gmail.com To: potaman at outlook.com CC: libmesh-users at lists.sourceforge.net; petsc-users at mcs.anl.gov On Fri, Jul 19, 2013 at 11:50 PM, subramanya sadasiva wrote: Hi Matt, I see that there is an implementation of the interface to DMCreateFieldDecomposition.html . So does this sound right? Yes, that should automatically provide the field decomposition. Since this is not happening, something is wrong in the code. Is the DM set in your solver? 1. I get index sets ,and variable names from the dm create field decomposition 2. Once I have these , I create fieldsplits and name them using this.. 3. And I guess I should be ready to go.. One question that remains is that the fieldsplit is created on a full matrix. However, an algorithm such as VIRS operates only on a subset of this full DM. Will the fieldsplit and preconditioner created on the full DM carry over to the subsidiary DMs? VI is still new, and I have not tested in this case, but it is supposed to work. Thanks, Matt Thanks for all the help! Subramanya Date: Fri, 19 Jul 2013 23:09:10 -0500 Subject: Re: [petsc-users] Trying to set up a field-split preconditioner From: knepley at gmail.com To: potaman at outlook.com CC: petsc-users at mcs.anl.gov On Fri, Jul 19, 2013 at 9:59 PM, subramanya sadasiva wrote: Hi Matt, The DM being created is here (this is from Libmesh code (petscdmlibmesh.C ) 01047 { 01048 PetscErrorCode ierr; 01049 PetscFunctionBegin; 01050 ierr = DMCreate(comm, dm); CHKERRQ(ierr); 01051 ierr = DMSetType(*dm, DMLIBMESH); CHKERRQ(ierr); 01052 ierr = DMLibMeshSetSystem(*dm, sys); CHKERRQ(ierr); 01053 PetscFunctionReturn(0); 01054 } This file has methods to access the variables assigned to the DM (this seems to be stored in a struct.) So , I guess one should be able to add a bit of code to create sections as you mentioned somewhere around here. 
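(Roughly, that bit of code could look like the following sketch. The sizes and field names are purely illustrative, error checking is omitted, and in the petsc-3.4 series the attach call is DMSetDefaultSection().)

PetscSection s;
PetscInt     p, pStart = 0, pEnd = nLocalPoints;   /* locally owned mesh points; hypothetical */

PetscSectionCreate(PETSC_COMM_WORLD, &s);
PetscSectionSetNumFields(s, 2);
PetscSectionSetFieldName(s, 0, "c");               /* e.g. concentration */
PetscSectionSetFieldName(s, 1, "mu");              /* e.g. chemical potential */
PetscSectionSetChart(s, pStart, pEnd);
for (p = pStart; p < pEnd; ++p) {
  PetscSectionSetDof(s, p, 2);                     /* total dofs at this point */
  PetscSectionSetFieldDof(s, p, 0, 1);             /* one dof for field 0 */
  PetscSectionSetFieldDof(s, p, 1, 1);             /* one dof for field 1 */
}
PetscSectionSetUp(s);
DMSetDefaultSection(dm, s);                        /* PCFieldSplit can then find the fields */
PetscSectionDestroy(&s);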
Okay, they have their own DM. It must implement one of the interfaces for field specification. They could provide http://www.mcs.anl.gov/petsc/petsc-current/docs/manualpages/DM/DMCreateFieldDecomposition.html or at a lower level http://www.mcs.anl.gov/petsc/petsc-current/docs/manualpages/DM/DMCreateSubDM.html which in turn can be constructed by specifying a default PetscSection http://www.mcs.anl.gov/petsc/petsc-current/docs/manualpages/DM/DMSetDefaultSection.html Matt Thanks, Subramanya Date: Fri, 19 Jul 2013 21:33:11 -0500 Subject: Re: [petsc-users] Trying to set up a field-split preconditioner From: knepley at gmail.com To: potaman at outlook.com CC: petsc-users at mcs.anl.gov On Fri, Jul 19, 2013 at 9:17 PM, subramanya sadasiva wrote: Hi Matt, I am using Libmesh so the DM stuff is actually in the background, and unfortunately the matrix doesn't have a saddle point, I thought that -ch_solve_fieldsplit_block_size 2 -ch_solve_fieldsplit_0_fields 0 -ch_solve_fieldsplit_1_fields 1 The block_size argument presumes you are using a DA. Are you? The other two options just say select the first DM field as field 0 in this PC, and the same with the second field. The DM must inform the PC about the initial field decomposition. would inform the solver of the structure. If this doesn't work owing to the fact that the problem is only being solved on a section of the mesh (because of the reduced space method), I guess I will have to use the PetscSection. Does that sound right? First, I think the right people to do this are the Libmesh people (we will of course help them). Second, you have not said whether you are using a structured or unstructured mesh. What DM class does the solver actually see? Thanks, Matt Thanks, Subramanya Subject: Re: [petsc-users] Trying to set up a field-split preconditioner From: knepley at gmail.com To: potaman at outlook.com CC: petsc-users at mcs.anl.gov; libmesh-users at lists.sourceforge.net On Fri, Jul 19, 2013 at 7:56 PM, subramanya sadasiva wrote: Hi, I am trying to set up a fieldsplit preconditioner for my Cahn Hilliard solver and I get the following error, You have to tell the PCFIELDSPLIT about the dofs in each field. So 1) You are probably not using a DA, since it would tell it automatically 2) If you have a saddle point, you can use -pc_fieldsplit_detect_saddle_point 3) If none of those apply, you can set a PetscSection describing your layout to the DM for the solver. Since this is new, I suspect you will need help, so mail back. Thanks, Matt [0]PETSC ERROR: --------------------- Error Message ------------------------------------ [0]PETSC ERROR: Petsc has generated inconsistent data! [0]PETSC ERROR: Unhandled case, must have at least two fields, not 0! [0]PETSC ERROR: ------------------------------------------------------------------------ These are the options that I am using, -ch_solve is just a prefix. -ch_solve_pc_type fieldsplit -ch_solve_pc_fieldsplit_type schur -ch_solve_fieldsplit_block_size 2 -ch_solve_fieldsplit_0_field 1 -ch_solve_fieldsplit_1_field 0 -ch_solve_fieldsplit_0_ksp_type cg -ch_solve_fieldsplit_0_pc_type hypre -ch_solve_fieldsplit_0_pc_type_hypre boomeramg -ch_solve_fieldsplit_1_ksp_type cg -ch_solve_fieldsplit_1_pc_type hypre -ch_solve_fieldsplit_1_pc_type_hypre boomeramg Any ideas? Thanks, Subramanya -- What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. 
-- Norbert Wiener -- What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. -- Norbert Wiener -- What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. -- Norbert Wiener -- What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. -- Norbert Wiener -- What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. -- Norbert Wiener -------------- next part -------------- An HTML attachment was scrubbed... URL: From karpeev at mcs.anl.gov Sat Jul 20 17:29:42 2013 From: karpeev at mcs.anl.gov (Dmitry Karpeyev) Date: Sat, 20 Jul 2013 17:29:42 -0500 Subject: [petsc-users] Trying to set up a field-split preconditioner In-Reply-To: References: Message-ID: Are you running with --use-petsc-dm? Otherwise a DM might not be getting attached to SNES, hence, no call to DMCreateFieldDecomposition(). I need to fix that API in DMlibMesh -- if fell behind the analogous DMMoose API, which works correctly with petsc-3.4. It's unlikely to make it into HEAD before the release, though. In the meantime you can work around this, if you can use -node_major_dofs (I think) to interlace the degrees of freedom, if all your variables are of the same type. Then you can use the more basic PETSc field definitions on the command line. Dmitry. On Sat, Jul 20, 2013 at 4:57 PM, subramanya sadasiva wrote: > Hi Dmitry, > I am using petsc 3.4.2 . I should have been clearer about the problem > that I had. I was unable to see a call to DMCreateFieldDecomposition in > gdb when I ran the code with a field split preconditioner specified. If the > routine has not yet been updated, I can understand why it is causing > problems. > Thanks, > Subramanya > > ------------------------------ > From: karpeev at mcs.anl.gov > Date: Sat, 20 Jul 2013 16:42:50 -0500 > Subject: Fwd: [petsc-users] Trying to set up a field-split preconditioner > To: potaman at outlook.com; libmesh-users at lists.sourceforge.net > > > > > What version of petsc are you using? > DMlibMesh needs to be updated to serve the splits via > DMCreateFieldDecomposition() correctly to petsc-3.4. Note that > DMCreateFieldDecomposition() is called by PCFieldSplit from PETSc, so you > wouldn't see those calls in the libMesh source. > > Dmitry. > > > On Sat, Jul 20, 2013 at 6:35 AM, subramanya sadasiva wrote: > > Hi Matt, > Libmesh does create the DM. This is the relevant code. > > ierr = DMCreateLibMesh(libMesh::COMM_WORLD, this->system(), > &dm);CHKERRABORT(libMesh::COMM_WORLD, ierr); > ierr = DMSetFromOptions(dm); CHKERRABORT(libMesh::COMM_WORLD, ierr); > ierr = DMSetUp(dm); CHKERRABORT(libMesh::COMM_WORLD, ierr); > ierr = SNESSetDM(this->_snes, dm); CHKERRABORT(libMesh::COMM_WORLD, ierr); > > I am unable to tell from the code whether the DMCreateFieldDecomposition > is run at all. I will see if running in the debugger helps me figure it out. > Thanks, > Subramanya > > > > Date: Sat, 20 Jul 2013 06:22:30 -0500 > Subject: Re: [petsc-users] Trying to set up a field-split preconditioner > From: knepley at gmail.com > To: potaman at outlook.com > CC: petsc-users at mcs.anl.gov > > On Sat, Jul 20, 2013 at 6:19 AM, subramanya sadasiva > wrote: > Hi Matt, > The DM is created by the LibMesh code. 
The only thing I do directly with > Petsc is set the solver prefixes which libmesh doesn't have an interface > for at present . I have been able to set most options directly through > command line options. this is the one case where that is not helping, and > it might just be that I don't know how. Let me see if I am able to get this > working. > > So does Libmesh create the PETSc solver? If so, it should be calling > KSPSetDM() or SNESSetDM() or TSSetDM(). This > is all I want to know. It seems like this is not the case since the > PCFIELDSPLIT says it has no fields. Either that, or they have > a bug in the DMCreateFieldDecomposition() implementation. > > Either way, it seems like the thing to do is run with -start_in_debugger, > and break in PCFieldSplitSetDefaults() where that is called. > > Matt > > > Thanks, > Subramanya > > Date: Sat, 20 Jul 2013 06:07:47 -0500 > Subject: Re: [petsc-users] Trying to set up a field-split preconditioner > From: knepley at gmail.com > To: potaman at outlook.com > CC: libmesh-users at lists.sourceforge.net; petsc-users at mcs.anl.gov > > On Fri, Jul 19, 2013 at 11:50 PM, subramanya sadasiva > wrote: > > Hi Matt, > I see that there is an implementation of the interface > to DMCreateFieldDecomposition.html . > So does this sound right? > > Yes, that should automatically provide the field decomposition. Since this > is not happening, something is > wrong in the code. Is the DM set in your solver? > > 1. I get index sets ,and variable names from the dm create field > decomposition > 2. Once I have these , I create fieldsplits and name them using this.. > 3. And I guess I should be ready to go.. > One question that remains is that the fieldsplit is created on a full > matrix. However, an algorithm such as VIRS operates only on a subset of > this full DM. Will the fieldsplit and preconditioner created on the full DM > carry over to the subsidiary DMs? > > VI is still new, and I have not tested in this case, but it is supposed to > work. > > Thanks, > > Matt > > Thanks for all the help! > Subramanya > > > Date: Fri, 19 Jul 2013 23:09:10 -0500 > Subject: Re: [petsc-users] Trying to set up a field-split preconditioner > From: knepley at gmail.com > To: potaman at outlook.com > CC: petsc-users at mcs.anl.gov > > On Fri, Jul 19, 2013 at 9:59 PM, subramanya sadasiva > wrote: > Hi Matt, > The DM being created is here (this is from Libmesh code > (petscdmlibmesh.C ) > > 01047 { > 01048 PetscErrorCode ierr; > 01049 PetscFunctionBegin; > 01050 ierr = DMCreate(comm, dm); CHKERRQ(ierr); > 01051 ierr = DMSetType(*dm, DMLIBMESH); CHKERRQ(ierr); > 01052 ierr = DMLibMeshSetSystem(*dm, sys); CHKERRQ(ierr); > 01053 PetscFunctionReturn(0); > 01054 } > > > This file has methods to access the variables assigned to the DM (this > seems to be stored in a struct.) > So , I guess one should be able to add a bit of code to create sections as > you mentioned somewhere around here. > > Okay, they have their own DM. It must implement one of the interfaces for > field specification. 
They could provide > > > http://www.mcs.anl.gov/petsc/petsc-current/docs/manualpages/DM/DMCreateFieldDecomposition.html > > or at a lower level > > > http://www.mcs.anl.gov/petsc/petsc-current/docs/manualpages/DM/DMCreateSubDM.html > > which in turn can be constructed by specifying a default PetscSection > > > http://www.mcs.anl.gov/petsc/petsc-current/docs/manualpages/DM/DMSetDefaultSection.html > > Matt > > Thanks, > Subramanya > > > > > > Date: Fri, 19 Jul 2013 21:33:11 -0500 > Subject: Re: [petsc-users] Trying to set up a field-split preconditioner > From: knepley at gmail.com > To: potaman at outlook.com > CC: petsc-users at mcs.anl.gov > > On Fri, Jul 19, 2013 at 9:17 PM, subramanya sadasiva > wrote: > Hi Matt, > I am using Libmesh so the DM stuff is actually in the background, and > unfortunately the matrix doesn't have a saddle point, > I thought that > > -ch_solve_fieldsplit_block_size 2 > -ch_solve_fieldsplit_0_fields 0 > -ch_solve_fieldsplit_1_fields 1 > > The block_size argument presumes you are using a DA. Are you? > > The other two options just say select the first DM field as field 0 in > this PC, and the same with the second field. The > DM must inform the PC about the initial field decomposition. > > would inform the solver of the structure. If this doesn't work owing to > the fact that the problem is only being solved on a section of the mesh > (because of the reduced space method), I guess I will have to use the > PetscSection. Does that sound right? > > First, I think the right people to do this are the Libmesh people (we will > of course help them). Second, you have not said > whether you are using a structured or unstructured mesh. What DM class > does the solver actually see? > > Thanks, > > Matt > > Thanks, > Subramanya > > > Subject: Re: [petsc-users] Trying to set up a field-split preconditioner > From: knepley at gmail.com > To: potaman at outlook.com > CC: petsc-users at mcs.anl.gov; libmesh-users at lists.sourceforge.net > > On Fri, Jul 19, 2013 at 7:56 PM, subramanya sadasiva > wrote: > Hi, > I am trying to set up a fieldsplit preconditioner for my Cahn Hilliard > solver and I get the following error, > > You have to tell the PCFIELDSPLIT about the dofs in each field. So > > 1) You are probably not using a DA, since it would tell it automatically > > 2) If you have a saddle point, you can use > -pc_fieldsplit_detect_saddle_point > > 3) If none of those apply, you can set a PetscSection describing your > layout to the DM for the solver. > Since this is new, I suspect you will need help, so mail back. > > Thanks, > > Matt > > > [0]PETSC ERROR: --------------------- Error Message > ------------------------------------ > [0]PETSC ERROR: Petsc has generated inconsistent data! > [0]PETSC ERROR: Unhandled case, must have at least two fields, not 0! > [0]PETSC ERROR: > ------------------------------------------------------------------------ > > These are the options that I am using, > -ch_solve is just a prefix. > > > > -ch_solve_pc_type fieldsplit > -ch_solve_pc_fieldsplit_type schur > -ch_solve_fieldsplit_block_size 2 > -ch_solve_fieldsplit_0_field 1 > -ch_solve_fieldsplit_1_field 0 > -ch_solve_fieldsplit_0_ksp_type cg > -ch_solve_fieldsplit_0_pc_type hypre > -ch_solve_fieldsplit_0_pc_type_hypre boomeramg > -ch_solve_fieldsplit_1_ksp_type cg > -ch_solve_fieldsplit_1_pc_type hypre > -ch_solve_fieldsplit_1_pc_type_hypre boomeramg > > Any ideas? 
> > Thanks, > Subramanya > > > > > -- > What most experimenters take for granted before they begin their > experiments is infinitely more interesting than any results to which their > experiments lead. > -- Norbert Wiener > > > > -- > What most experimenters take for granted before they begin their > experiments is infinitely more interesting than any results to which their > experiments lead. > -- Norbert Wiener > > > > -- > What most experimenters take for granted before they begin their > experiments is infinitely more interesting than any results to which their > experiments lead. > -- Norbert Wiener > > > > -- > What most experimenters take for granted before they begin their > experiments is infinitely more interesting than any results to which their > experiments lead. > -- Norbert Wiener > > > > -- > What most experimenters take for granted before they begin their > experiments is infinitely more interesting than any results to which their > experiments lead. > -- Norbert Wiener > > > > > > > > > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From ling.zou at inl.gov Sun Jul 21 10:28:02 2013 From: ling.zou at inl.gov (Zou (Non-US), Ling) Date: Sun, 21 Jul 2013 09:28:02 -0600 Subject: [petsc-users] SNES and TS Message-ID: Dear All, I've been playing with TS with my transient problem. It so far works fine and I was able to deal with forward Euler and backward Euler time integration scheme very easy pretty much without changing anything in my code. I understand that TS works fine when the non-linear equations look like: du/dt = f dv/dt = g However, I currently have a non-linear equation system like h(du/dt, dv/dt) = f k(du/dt, dv/dt) = g both h and k are nonlinear function of du/dt and dv/dt. I wonder for this kind of situation, is TS still the best option for time integration. For my understanding, I can solve the transient system using SNES directly like: h(du/dt, dv/dt) - f = 0 k(du/dt, dv/dt) - g = 0 and handle the time integration manually as it also would give me flexibility to use my own time integration schemes. I'd appreciate it if anyone could give some suggestions, and examples if possible (I love examples :) Best, Ling From knepley at gmail.com Sun Jul 21 13:37:45 2013 From: knepley at gmail.com (Matthew Knepley) Date: Sun, 21 Jul 2013 13:37:45 -0500 Subject: [petsc-users] SNES and TS In-Reply-To: References: Message-ID: On Sun, Jul 21, 2013 at 10:28 AM, Zou (Non-US), Ling wrote: > Dear All, > > I've been playing with TS with my transient problem. It so far works > fine and I was able to deal with forward Euler and backward Euler time > integration scheme very easy pretty much without changing anything in > my code. > > I understand that TS works fine when the non-linear equations look like: > du/dt = f > dv/dt = g > > However, I currently have a non-linear equation system like > h(du/dt, dv/dt) = f > k(du/dt, dv/dt) = g > > both h and k are nonlinear function of du/dt and dv/dt. I wonder for > this kind of situation, is TS still the best option for time > Yes, the implicit interface (IFunction and IJacobian) are the right way to address this. TS ex22 is an example that uses this interface, even though it is linear in the derivatives. Matt > integration. For my understanding, I can solve the transient system > using SNES directly like: > h(du/dt, dv/dt) - f = 0 > k(du/dt, dv/dt) - g = 0 > and handle the time integration manually as it also would give me > flexibility to use my own time integration schemes. 
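(To make the implicit interface concrete: a minimal sketch of an IFunction for a two-component system of this form, with made-up h, k, f, g and hypothetical names, not taken from the original problem.)

#include <petscts.h>

/* Residual for the fully implicit form F(t,u,udot) = 0, i.e.
   F1 = h(du/dt,dv/dt) - f and F2 = k(du/dt,dv/dt) - g,
   with illustrative nonlinear h and k. Serial, two dofs. */
static PetscErrorCode IFunction(TS ts, PetscReal t, Vec U, Vec Udot, Vec F, void *ctx)
{
  const PetscScalar *u, *udot;
  PetscScalar       *f;
  PetscErrorCode    ierr;

  PetscFunctionBeginUser;
  ierr = VecGetArrayRead(U, &u);CHKERRQ(ierr);
  ierr = VecGetArrayRead(Udot, &udot);CHKERRQ(ierr);
  ierr = VecGetArray(F, &f);CHKERRQ(ierr);
  f[0] = udot[0]*udot[1] + udot[0] - u[1];   /* h(du/dt,dv/dt) - f */
  f[1] = udot[1]*udot[1] + udot[1] - u[0];   /* k(du/dt,dv/dt) - g */
  ierr = VecRestoreArrayRead(U, &u);CHKERRQ(ierr);
  ierr = VecRestoreArrayRead(Udot, &udot);CHKERRQ(ierr);
  ierr = VecRestoreArray(F, &f);CHKERRQ(ierr);
  PetscFunctionReturn(0);
}

/* registered with TSSetIFunction(ts, NULL, IFunction, NULL) and solved
   with an implicit integrator, e.g. -ts_type beuler */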
> I'd appreciate it if anyone could give some suggestions, and examples if possible (I love examples :)
>
> Best,
>
> Ling

--
What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead.
-- Norbert Wiener
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From heikki.a.virtanen at hotmail.com  Sun Jul 21 15:53:35 2013
From: heikki.a.virtanen at hotmail.com (Heikki Virtanen)
Date: Sun, 21 Jul 2013 23:53:35 +0300
Subject: [petsc-users] MatMPIAIJSetPreallocationCSR and complex scalars
Message-ID: 

Hi, I have tried to solve a generalized complex eigenvalue problem using SLEPc and PETSc. Both matrices are also sparse and I know with very good accuracy where I have non-zero entries. So, it sounds like the function MatMPIAIJSetPreallocationCSR() is a very good fit for my case. I have a test problem and my code works well with real scalars, but when I change real scalars to complex I get errors like

Floating point exception!
[0]PETSC ERROR: Inserting -nan+iG at matrix entry (0,-524288)!
[0]PETSC ERROR: MatSetValues() line 1092 in /opt/petsc-3.4.2/src/mat/interface/matrix.c
[0]PETSC ERROR: MatAXPY_BasicWithPreallocation() line 122 in /opt/petsc-3.4.2/src/mat/utils/axpy.c
[0]PETSC ERROR: MatAXPY_MPIAIJ() line 2343 in /opt/petsc-3.4.2/src/mat/impls/aij/mpi/mpiaij.c
[0]PETSC ERROR: MatAXPY() line 39 in /opt/petsc-3.4.2/src/mat/utils/axpy.c
[0]PETSC ERROR: STMatGAXPY_Private() line 366 in /opt/slepc-3.4.0/src/st/interface/stsolve.c
[0]PETSC ERROR: STSetUp_Shift() line 113 in /opt/slepc-3.4.0/src/st/impls/shift/shift.c
[0]PETSC ERROR: STSetUp() line 285 in /opt/slepc-3.4.0/src/st/interface/stsolve.c
[0]PETSC ERROR: EPSSetUp() line 215 in /opt/slepc-3.4.0/src/eps/interface/setup.c
[0]PETSC ERROR: EPSSolve() line 90 in /opt/slepc-3.4.0/src/eps/interface/solve.c

I have also printed out the matrices, after they are assembled using MatSetValues(), in both the complex and the real case, and both are ok (there is nothing suspicious, and PETSc does not try to access entries which are not initialized). Also MatZeroRowsColumns() works when I apply boundary conditions. I don't know, but I would guess that something is wrong when I initialize the matrices. Basically, I do it this way:

ierr = MatMPIAIJSetPreallocationCSR (matrix,i,j,0); CHKERRQ(ierr);
ierr = MatAssemblyBegin (matrix,MAT_FINAL_ASSEMBLY); CHKERRQ(ierr);
ierr = MatAssemblyEnd (matrix,MAT_FINAL_ASSEMBLY); CHKERRQ(ierr);
ierr = MatSetOption (matrix, MAT_NEW_NONZERO_LOCATIONS, PETSC_FALSE); CHKERRQ(ierr);
ierr = MatSetOption (matrix, MAT_KEEP_NONZERO_PATTERN, PETSC_TRUE); CHKERRQ(ierr);

The i and j arrays work fine in the real case, so they should be ok. Are there any examples where MatMPIAIJSetPreallocationCSR() is used to initialize the matrices of an eigenvalue problem?

-Heikki
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From jroman at dsic.upv.es  Mon Jul 22 05:10:56 2013
From: jroman at dsic.upv.es (Jose E. Roman)
Date: Mon, 22 Jul 2013 12:10:56 +0200
Subject: [petsc-users] MatMPIAIJSetPreallocationCSR and complex scalars
In-Reply-To: References: Message-ID: 

Could you send the test program to slepc-maint?
Jose

On 21/07/2013, at 22:53, Heikki Virtanen wrote:

> Hi, I have tried to solve a generalized complex eigenvalue problem using SLEPc and PETSc. Both matrices are also sparse and I know with very good accuracy where I have non-zero entries. So, it sounds like the function MatMPIAIJSetPreallocationCSR() is a very good fit for my case.
> I have a test problem and my code works well with real scalars, but when I change real scalars to complex I get errors like
>
> Floating point exception!
> [0]PETSC ERROR: Inserting -nan+iG at matrix entry (0,-524288)!
> [0]PETSC ERROR: MatSetValues() line 1092 in /opt/petsc-3.4.2/src/mat/interface/matrix.c
> [0]PETSC ERROR: MatAXPY_BasicWithPreallocation() line 122 in /opt/petsc-3.4.2/src/mat/utils/axpy.c
> [0]PETSC ERROR: MatAXPY_MPIAIJ() line 2343 in /opt/petsc-3.4.2/src/mat/impls/aij/mpi/mpiaij.c
> [0]PETSC ERROR: MatAXPY() line 39 in /opt/petsc-3.4.2/src/mat/utils/axpy.c
> [0]PETSC ERROR: STMatGAXPY_Private() line 366 in /opt/slepc-3.4.0/src/st/interface/stsolve.c
> [0]PETSC ERROR: STSetUp_Shift() line 113 in /opt/slepc-3.4.0/src/st/impls/shift/shift.c
> [0]PETSC ERROR: STSetUp() line 285 in /opt/slepc-3.4.0/src/st/interface/stsolve.c
> [0]PETSC ERROR: EPSSetUp() line 215 in /opt/slepc-3.4.0/src/eps/interface/setup.c
> [0]PETSC ERROR: EPSSolve() line 90 in /opt/slepc-3.4.0/src/eps/interface/solve.c
>
> I have also printed out the matrices, after they are assembled using MatSetValues(), in both the complex and the real case, and both are ok (there is nothing suspicious, and PETSc does not try to access entries which are not initialized). Also MatZeroRowsColumns() works when I apply boundary conditions. I don't know, but I would guess that something is wrong when I initialize the matrices. Basically, I do it this way:
>
> ierr = MatMPIAIJSetPreallocationCSR (matrix,i,j,0); CHKERRQ(ierr);
> ierr = MatAssemblyBegin (matrix,MAT_FINAL_ASSEMBLY); CHKERRQ(ierr);
> ierr = MatAssemblyEnd (matrix,MAT_FINAL_ASSEMBLY); CHKERRQ(ierr);
> ierr = MatSetOption (matrix, MAT_NEW_NONZERO_LOCATIONS, PETSC_FALSE); CHKERRQ(ierr);
> ierr = MatSetOption (matrix, MAT_KEEP_NONZERO_PATTERN, PETSC_TRUE); CHKERRQ(ierr);
>
> The i and j arrays work fine in the real case, so they should be ok. Are there any examples where MatMPIAIJSetPreallocationCSR() is used to initialize the matrices of an eigenvalue problem?
>
> -Heikki

From i.gutheil at fz-juelich.de  Tue Jul 23 01:17:54 2013
From: i.gutheil at fz-juelich.de (Inge Gutheil)
Date: Tue, 23 Jul 2013 08:17:54 +0200
Subject: [petsc-users] error with PAMI (fwd)
In-Reply-To: References: Message-ID: <51EE2012.5020208@fz-juelich.de>

Hello,
a user here in Juelich got the same error and he said he only got it on BlueGene with PETSc, so I believe there is a problem with PAMI and PETSc at least PETSc 3.3-p5. I am just now installing PETSc 3.4.2 and want to ask the user whether the problem still occurs. I also had thought of posting it to the PETSc list but I do not have the complete error code, the user only sent us the PAMI error, and it was exactly the same place. If it is still there with PETSc 3.4.2, it might be a problem with PAMI, but IBM won't help if we can't give them a small reproducible example. Anyway, nice to hear that we are not alone in Juelich.

Inge

On 07/19/13 18:15, Satish Balay wrote:
> Forwarding to petsc-users - perhaps other BG users have more experience with PAMI.
>
> Satish
>
> ---------- Forwarded message ----------
> Date: Fri, 19 Jul 2013 09:09:24 +0000
> From: g.giangaspero at utwente.nl
> To: petsc-maint at mcs.anl.gov
> Subject: [petsc-maint] error with PAMI
>
> Dear Authors,
>
> I am trying to run PETSc 3.3-p5 on an IBM Blue Gene/Q machine which features PAMI as MPI implementation and the IBM xl compiler.
> I would like to solve a non-linear system but as soon as the function SNESSolve is called the code crashes and I get the following error message (MooseMBFlow is the executable):
>
> MooseMBFlow: /bgsys/source/srcV1R2M0.14091/comm/sys/buildtools/pami/common/bgq/Memregion.h:58: pami_result_t PAMI::Memregion::createMemregion_impl(size_t*, size_t, void*, uint64_t): Assertion `rc == 0' failed
>
> These lines are the only ones being printed therefore I cannot tell you much more about the error.
> I know this is not directly related to PETSc but I wonder if anybody had already experienced such a problem, I did not get any real help from the user support of that machine.
>
> Thank you very much for your help.
>
> Best Regards,
> Giorgio Giangaspero

--
Inge Gutheil
Juelich Supercomputing Centre
Institute for Advanced Simulation
Forschungszentrum Juelich GmbH
52425 Juelich, Germany

Phone: +49-2461-61-3135
Fax: +49-2461-61-6656
E-mail: i.gutheil at fz-juelich.de

------------------------------------------------------------------------------------------------
Forschungszentrum Juelich GmbH
52425 Juelich
Sitz der Gesellschaft: Juelich
Eingetragen im Handelsregister des Amtsgerichts Dueren Nr. HR B 3498
Vorsitzender des Aufsichtsrats: MinDir Dr. Karl Eugen Huthmacher
Geschaeftsfuehrung: Prof. Dr. Achim Bachem (Vorsitzender), Karsten Beneke (stellv. Vorsitzender), Prof. Dr.-Ing. Harald Bolt, Prof. Dr. Sebastian M. Schmidt
------------------------------------------------------------------------------------------------

Das Forschungszentrum oeffnet seine Tueren am Sonntag, 29. September, von 10:00 bis 17:00 Uhr: http://www.tagderneugier.de

From i.gutheil at fz-juelich.de  Tue Jul 23 02:06:34 2013
From: i.gutheil at fz-juelich.de (Inge Gutheil)
Date: Tue, 23 Jul 2013 09:06:34 +0200
Subject: [petsc-users] error with PAMI (fwd)
In-Reply-To: <51EE2012.5020208@fz-juelich.de>
References: <51EE2012.5020208@fz-juelich.de>
Message-ID: <51EE2B7A.5060803@fz-juelich.de>

Sorry, I just found out that it is the user of our BlueGene/Q who sent the error message.

Inge

On 07/23/13 08:17, Inge Gutheil wrote:
> Hello,
> a user here in Juelich got the same error and he said he only got it on BlueGene with PETSc, so I believe there is a problem with PAMI and PETSc at least PETSc 3.3-p5. I am just now installing PETSc 3.4.2 and want to ask the user whether the problem still occurs. I also had thought of posting it to the PETSc list but I do not have the complete error code, the user only sent us the PAMI error, and it was exactly the same place. If it is still there with PETSc 3.4.2, it might be a problem with PAMI, but IBM won't help if we can't give them a small reproducible example. Anyway, nice to hear that we are not alone in Juelich.
>
> Inge
>
> On 07/19/13 18:15, Satish Balay wrote:
>> Forwarding to petsc-users - perhaps other BG users have more experience with PAMI.
>>
>> Satish
>>
>> ---------- Forwarded message ----------
>> Date: Fri, 19 Jul 2013 09:09:24 +0000
>> From: g.giangaspero at utwente.nl
>> To: petsc-maint at mcs.anl.gov
>> Subject: [petsc-maint] error with PAMI
>>
>> Dear Authors,
>>
>> I am trying to run PETSc 3.3-p5 on an IBM Blue Gene/Q machine which features PAMI as MPI implementation and the IBM xl compiler.
I would >> like to solve a non-linear system but as soon as the function >> SNESSolve is called the code crashes and I get the following error >> message (MooseMBFlow is the executable): >> >> MooseMBFlow: >> /bgsys/source/srcV1R2M0.14091/comm/sys/buildtools/pami/common/bgq/Memregion.h:58: >> pami_result_t PAMI::Memregion::createMemregion_impl(size_t*, size_t, >> void*, uint64_t): Assertion `rc == 0' failed >> >> These lines are the only ones being printed therefore I cannot tell >> you much more about the error. >> I know this is not directly related to PETSc but I wonder if anybody >> had already experienced such a problem, I did not get any real help >> from the user support of that machine. >> >> Thank you very much for your help. >> >> Best Regards, >> Giorgio Giangaspero >> >> > > > -- > -- > > Inge Gutheil > Juelich Supercomputing Centre > Institute for Advanced Simulation > Forschungszentrum Juelich GmbH > 52425 Juelich, Germany > > Phone: +49-2461-61-3135 > Fax: +49-2461-61-6656 > E-mail:i.gutheil at fz-juelich.de > > > > ------------------------------------------------------------------------------------------------ > > ------------------------------------------------------------------------------------------------ > > Forschungszentrum Juelich GmbH > 52425 Juelich > Sitz der Gesellschaft: Juelich > Eingetragen im Handelsregister des Amtsgerichts Dueren Nr. HR B 3498 > Vorsitzender des Aufsichtsrats: MinDir Dr. Karl Eugen Huthmacher > Geschaeftsfuehrung: Prof. Dr. Achim Bachem (Vorsitzender), > Karsten Beneke (stellv. Vorsitzender), Prof. Dr.-Ing. Harald Bolt, > Prof. Dr. Sebastian M. Schmidt > ------------------------------------------------------------------------------------------------ > > ------------------------------------------------------------------------------------------------ > > > Das Forschungszentrum oeffnet seine Tueren am Sonntag, 29. September, > von 10:00 bis 17:00 Uhr: http://www.tagderneugier.de -- -- Inge Gutheil Juelich Supercomputing Centre Institute for Advanced Simulation Forschungszentrum Juelich GmbH 52425 Juelich, Germany Phone: +49-2461-61-3135 Fax: +49-2461-61-6656 E-mail:i.gutheil at fz-juelich.de From jedbrown at mcs.anl.gov Tue Jul 23 08:45:55 2013 From: jedbrown at mcs.anl.gov (Jed Brown) Date: Tue, 23 Jul 2013 08:45:55 -0500 Subject: [petsc-users] error with PAMI (fwd) In-Reply-To: <51EE2012.5020208@fz-juelich.de> References: <51EE2012.5020208@fz-juelich.de> Message-ID: <87zjtd75kc.fsf@mcs.anl.gov> Inge Gutheil writes: > Hello, > a user here in Juelich got the same error and he said he only got it on > BlueGene with PETSc, so I believe there is a problem with PAMI and PETSc > at least PETSc 3.3-p5. We ran on BG/Q from before petsc-3.3 was released until now and have not seen this error message. You'll have to find a way to reproduce before we can do anything else, but I'm skeptical of it being a PETSc problem. >> MooseMBFlow: /bgsys/source/srcV1R2M0.14091/comm/sys/buildtools/pami/common/bgq/Memregion.h:58: pami_result_t PAMI::Memregion::createMemregion_impl(size_t*, size_t, void*, uint64_t): Assertion `rc == 0' failed This is the relevant bit of code (from V1R2M0): // Determine the physical address of the source buffer. uint32_t rc; rc = Kernel_CreateMemoryRegion (&memregion, base, bytes_in); PAMI_assert ( rc == 0 ); You can use a debugger to determine what arguments were used. Either the arguments are invalid (application corruption) or there is some deeper corruption (or a kernel bug). 
-------------- next part --------------
A non-text attachment was scrubbed...
Name: not available
Type: application/pgp-signature
Size: 835 bytes
Desc: not available
URL:

From suyan0 at gmail.com  Tue Jul 23 20:59:12 2013
From: suyan0 at gmail.com (Su Yan)
Date: Tue, 23 Jul 2013 20:59:12 -0500
Subject: [petsc-users] From AIJ to CSR
Message-ID: 

Hi, is there any way that I can extract the index information and element array from an AIJ matrix? Say, I have a Mat A, which is created using the CSR format with MatCreateSeqAIJ(). How can I get int *ia, int *ja, and double *array out of A?

Thanks,
Su
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From bsmith at mcs.anl.gov  Tue Jul 23 21:03:50 2013
From: bsmith at mcs.anl.gov (Barry Smith)
Date: Tue, 23 Jul 2013 21:03:50 -0500
Subject: [petsc-users] From AIJ to CSR
In-Reply-To: References: Message-ID: <28AAA2B2-1FF7-4A67-AF46-911D44DDE3B3@mcs.anl.gov>

On Jul 23, 2013, at 8:59 PM, Su Yan wrote:

> Hi, is there any way that I can extract the index information and element array from an AIJ matrix? Say, I have a Mat A, which is created using the CSR format with MatCreateSeqAIJ(). How can I get int *ia, int *ja, and double *array out of A?

MatSeqAIJGetArray() for array and MatGetIJ() for ia and ja, but we hope that generally users will never need to get access to these low-level underlying structures. Perhaps if you tell us why you want them we may suggest alternatives.

   Barry

> Thanks,
> Su

From suyan0 at gmail.com  Tue Jul 23 21:09:30 2013
From: suyan0 at gmail.com (Su Yan)
Date: Tue, 23 Jul 2013 21:09:30 -0500
Subject: [petsc-users] From AIJ to CSR
In-Reply-To: <28AAA2B2-1FF7-4A67-AF46-911D44DDE3B3@mcs.anl.gov>
References: <28AAA2B2-1FF7-4A67-AF46-911D44DDE3B3@mcs.anl.gov>
Message-ID: 

Thanks, Barry.

The reason I need to access these structures is that I am trying to interface PETSc with MKL through the PCSHELL interface. I need access to the basic CSR structure in order to invoke the ILUT subroutine in the MKL package.

BTW, should it be MatGetRowIJ() instead of MatGetIJ()?

Thanks,
Su

On Tue, Jul 23, 2013 at 9:03 PM, Barry Smith wrote:

> On Jul 23, 2013, at 8:59 PM, Su Yan wrote:
> > Hi, is there any way that I can extract the index information and element array from an AIJ matrix? Say, I have a Mat A, which is created using the CSR format with MatCreateSeqAIJ(). How can I get int *ia, int *ja, and double *array out of A?
>
> MatSeqAIJGetArray() for array and MatGetIJ() for ia and ja, but we hope that generally users will never need to get access to these low-level underlying structures. Perhaps if you tell us why you want them we may suggest alternatives.
>
>    Barry
>
> > Thanks,
> > Su
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From bsmith at mcs.anl.gov  Tue Jul 23 21:12:04 2013
From: bsmith at mcs.anl.gov (Barry Smith)
Date: Tue, 23 Jul 2013 21:12:04 -0500
Subject: [petsc-users] From AIJ to CSR
In-Reply-To: References: <28AAA2B2-1FF7-4A67-AF46-911D44DDE3B3@mcs.anl.gov>
Message-ID: <138838DF-5372-4B81-B275-6A4754993708@mcs.anl.gov>

On Jul 23, 2013, at 9:09 PM, Su Yan wrote:

> Thanks, Barry.
>
> The reason I need to access these structures is that I am trying to interface PETSc with MKL through the PCSHELL interface. I need access to the basic CSR structure in order to invoke the ILUT subroutine in the MKL package.

Ok

> BTW, should it be MatGetRowIJ() instead of MatGetIJ()?

Yes, sorry.
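In outline, the calling sequence for that pair of routines looks like the following (a rough sketch only, with error checking and the PCSHELL wrapper omitted; the exact MatGetRowIJ prototype, e.g. the const qualifiers, differs slightly between PETSc versions, so check the manual page for yours):

PetscInt       n;
const PetscInt *ia, *ja;
PetscScalar    *a;
PetscBool      done;

/* shift=1 instead of 0 would give 1-based (Fortran-style) indices,
   which some MKL routines expect */
MatGetRowIJ(A, 0, PETSC_FALSE, PETSC_FALSE, &n, &ia, &ja, &done);
MatSeqAIJGetArray(A, &a);
if (done) {
  /* CSR view: row i holds values a[ia[i]..ia[i+1]-1] in columns
     ja[ia[i]..ia[i+1]-1]; hand ia, ja, a to the external routine here */
}
MatSeqAIJRestoreArray(A, &a);
MatRestoreRowIJ(A, 0, PETSC_FALSE, PETSC_FALSE, &n, &ia, &ja, &done);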
   Barry

> Thanks,
> Su
>
> On Tue, Jul 23, 2013 at 9:03 PM, Barry Smith wrote:
>> On Jul 23, 2013, at 8:59 PM, Su Yan wrote:
>>> Hi, is there any way that I can extract the index information and element array from an AIJ matrix? Say, I have a Mat A, which is created using the CSR format with MatCreateSeqAIJ(). How can I get int *ia, int *ja, and double *array out of A?
>>
>> MatSeqAIJGetArray() for array and MatGetIJ() for ia and ja, but we hope that generally users will never need to get access to these low-level underlying structures. Perhaps if you tell us why you want them we may suggest alternatives.
>>
>>    Barry
>>
>>> Thanks,
>>> Su

From i.gutheil at fz-juelich.de  Wed Jul 24 09:00:02 2013
From: i.gutheil at fz-juelich.de (Inge Gutheil)
Date: Wed, 24 Jul 2013 16:00:02 +0200
Subject: [petsc-users] Fortran Example problem with MPI_COMM_WORLD
In-Reply-To: <51EE2B7A.5060803@fz-juelich.de>
References: <51EE2012.5020208@fz-juelich.de> <51EE2B7A.5060803@fz-juelich.de>
Message-ID: <51EFDDE2.2010503@fz-juelich.de>

Hello,
in petsc-3.4.2/src/snes/examples/tests/ex12f.F it says

8 ! In this example the application context is a Fortran integer array:
9 ! ctx(1) = da - distributed array
10 ! 2 = F - global vector where the function is stored
11 ! 3 = xl - local work vector
12 ! 4 = comm - MPI communicator
13 ! 5 = unused
14 ! 6 = N - system size

This array is declared as

34 PetscFortranAddr ctx(6)

thus for a 64-bit machine this will be an 8-byte integer. Now the address of the distributed array and of the vector F should be 64-bit, but comm, the MPI communicator, must not be 64-bit on many machines, for example on BlueGene/Q, where it always results in an invalid communicator if you have

INTEGER*8 ICOMM
ICOMM=MPI_COMM_WORLD
call MPI_COMM_SIZE(ICOMM,size,ierr)

The communicator must be given to the subroutines as INTEGER*4, even on some 64-bit machines. Also the system size N is INTEGER*4 if you did not compile with PETSC_INT=8, so I think you will need two arrays, one with the addresses and one with the normal PetscInt for comm and N.

Regards
Inge Gutheil

--
Inge Gutheil
Juelich Supercomputing Centre
Institute for Advanced Simulation
Forschungszentrum Juelich GmbH
52425 Juelich, Germany

Phone: +49-2461-61-3135
Fax: +49-2461-61-6656
E-mail: i.gutheil at fz-juelich.de

------------------------------------------------------------------------------------------------
Forschungszentrum Juelich GmbH
52425 Juelich
Sitz der Gesellschaft: Juelich
Eingetragen im Handelsregister des Amtsgerichts Dueren Nr. HR B 3498
Vorsitzender des Aufsichtsrats: MinDir Dr. Karl Eugen Huthmacher
Geschaeftsfuehrung: Prof. Dr. Achim Bachem (Vorsitzender), Karsten Beneke (stellv. Vorsitzender), Prof. Dr.-Ing. Harald Bolt, Prof. Dr. Sebastian M. Schmidt
------------------------------------------------------------------------------------------------

Das Forschungszentrum oeffnet seine Tueren am Sonntag, 29. September, von 10:00 bis 17:00 Uhr: http://www.tagderneugier.de
From jedbrown at mcs.anl.gov  Wed Jul 24 09:11:49 2013
From: jedbrown at mcs.anl.gov (Jed Brown)
Date: Wed, 24 Jul 2013 09:11:49 -0500
Subject: [petsc-users] Fortran Example problem with MPI_COMM_WORLD
In-Reply-To: <51EFDDE2.2010503@fz-juelich.de>
References: <51EE2012.5020208@fz-juelich.de> <51EE2B7A.5060803@fz-juelich.de> <51EFDDE2.2010503@fz-juelich.de>
Message-ID: <87ppu8avyy.fsf@mcs.anl.gov>

Inge Gutheil writes:

> Hello,
> in petsc-3.4.2/src/snes/examples/tests/ex12f.F it says
>
> 8 ! In this example the application context is a Fortran integer array:
> 9 ! ctx(1) = da - distributed array
> 10 ! 2 = F - global vector where the function is stored
> 11 ! 3 = xl - local work vector
> 12 ! 4 = comm - MPI communicator
> 13 ! 5 = unused
> 14 ! 6 = N - system size
>
> This array is declared as
>
> 34 PetscFortranAddr ctx(6)

True, this is not portable. F77 is terrible, but we can fix this example by getting rid of the last three fields, instead opting for

  MPI_Comm comm
  call PetscObjectGetComm(da,comm,ierr)
  call VecGetSize(F,N,ierr)

Also, this is totally bogus:

  ! Write results if first processor
  if (ctx(4) .eq. 0) then
    write(6,100) its
  endif

Thanks for finding this.
-------------- next part --------------
A non-text attachment was scrubbed...
Name: not available
Type: application/pgp-signature
Size: 835 bytes
Desc: not available
URL:

From stephen.wornom at inria.fr  Wed Jul 24 11:27:04 2013
From: stephen.wornom at inria.fr (Stephen Wornom)
Date: Wed, 24 Jul 2013 18:27:04 +0200
Subject: [petsc-users] How to read/write unstructured mesh in parallel
In-Reply-To: References: Message-ID: <51F00058.6060204@inria.fr>

The unstructured code I use collects the different core solutions on processor 0, then writes the global solution file to be used at a restart. When I have a mesh with more than 2 million nodes, I cannot execute, as the RAM/core is 3.4 GB, which is sufficient to execute, but the virtual memory is too large and I get an error.

Three questions:
1- If I used mpi-io to write the file, would that reduce the virtual memory needed and permit me to run larger meshes?
2- How would I use the mpi-io routines in the PETSc library to write the files? The mesh is partitioned using metis.
3- Is there a coding example of mpi-io writes that I could use to understand how PETSc functions?

Thanks in advance,
Stephen

--
stephen.wornom at inria.fr
2004 route des lucioles - BP93
Sophia Antipolis
06902 CEDEX

Tel: 04 92 38 50 54
Fax: 04 97 15 53 51
-------------- next part --------------
A non-text attachment was scrubbed...
Name: stephen_wornom.vcf
Type: text/x-vcard
Size: 160 bytes
Desc: not available
URL:

From knepley at gmail.com  Wed Jul 24 11:48:58 2013
From: knepley at gmail.com (Matthew Knepley)
Date: Wed, 24 Jul 2013 11:48:58 -0500
Subject: [petsc-users] How to read/write unstructured mesh in parallel
In-Reply-To: <51F00058.6060204@inria.fr>
References: <51F00058.6060204@inria.fr>
Message-ID: 

On Wed, Jul 24, 2013 at 11:27 AM, Stephen Wornom wrote:

> The unstructured code I use collects the different core solutions on processor 0, then writes the global solution file to be used at a restart. When I have a mesh with more than 2 million nodes, I cannot execute, as the RAM/core is 3.4 GB, which is sufficient to execute, but the virtual memory is too large and I get an error.
>
> Three questions:
> 1- If I used mpi-io to write the file, would that reduce the virtual memory needed and permit me to run larger meshes?

Yes, since you would not be using only the memory on one process

> 2- How would I use the mpi-io routines in the PETSc library to write the files? The mesh is partitioned using metis.
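(As a concrete illustration of the suggestion that follows: a minimal sketch with hypothetical names and error checking omitted. Whether MPI-IO is actually used underneath depends on how PETSc was configured; the binary viewer can be switched to MPI-IO at run time, see the PetscViewerBinary manual pages for the option.)

Vec         u;       /* distributed solution vector, already assembled */
PetscViewer viewer;

PetscViewerBinaryOpen(PETSC_COMM_WORLD, "restart.dat", FILE_MODE_WRITE, &viewer);
VecView(u, viewer);  /* pieces are streamed out, so no full global copy is held in application memory */
PetscViewerDestroy(&viewer);
/* at restart, read it back with VecLoad(u, viewer) from a FILE_MODE_READ viewer */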
Make a Vec with your information, and use VecView(). I recognize that this only writes doubles. We do not have infrastructure for MPI/IO with other data types.

> 3- Is there a coding example of mpi-io writes that I could use to understand how PETSc functions?

Should not be necessary.

   Matt

> Thanks in advance,
> Stephen
>
> --
> stephen.wornom at inria.fr
> 2004 route des lucioles - BP93
> Sophia Antipolis
> 06902 CEDEX
>
> Tel: 04 92 38 50 54
> Fax: 04 97 15 53 51

--
What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead.
-- Norbert Wiener
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From yaakoub at tacc.utexas.edu  Wed Jul 24 17:13:10 2013
From: yaakoub at tacc.utexas.edu (Yaakoub El Khamra)
Date: Wed, 24 Jul 2013 18:13:10 -0400
Subject: [petsc-users] undefined reference to MPI_Comm_f2c
Message-ID: 

With petsc 3.4.1 and --download-parmetis=1, I keep getting undefined references to MPI_Comm_f2c in the configure.log. Checking the mpi.h for mvapich2 1.9, this is what I find:

#define MPI_Comm_f2c(comm) (MPI_Comm)(comm)

Even when using the mpi compilers, I keep getting the undefined reference. Am I doing something wrong?

Regards
Yaakoub El Khamra
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From balay at mcs.anl.gov  Wed Jul 24 17:39:49 2013
From: balay at mcs.anl.gov (Satish Balay)
Date: Wed, 24 Jul 2013 17:39:49 -0500 (CDT)
Subject: [petsc-users] undefined reference to MPI_Comm_f2c
In-Reply-To: References: Message-ID: 

On Wed, 24 Jul 2013, Yaakoub El Khamra wrote:

> With petsc 3.4.1 and --download-parmetis=1, I keep getting undefined references to MPI_Comm_f2c in the configure.log. Checking the mpi.h for mvapich2 1.9, this is what I find:
>
> #define MPI_Comm_f2c(comm) (MPI_Comm)(comm)

This is the same as mpich - which works for many of us.

> Even when using the mpi compilers, I keep getting the undefined reference. Am I doing something wrong?

Send us configure.log [preferably compressed]

Satish

From fd.kong at siat.ac.cn  Wed Jul 24 22:12:17 2013
From: fd.kong at siat.ac.cn (Fande Kong)
Date: Thu, 25 Jul 2013 11:12:17 +0800
Subject: [petsc-users] How to know the communication time for solver?
Message-ID: 

Hi all,

With the option -log_summary, I can get the time of KSPSolve(). I also want to know how much time is used for communicating data in the KSPSolve().

Regards,
Fande Kong
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From jedbrown at mcs.anl.gov  Wed Jul 24 22:29:55 2013
From: jedbrown at mcs.anl.gov (Jed Brown)
Date: Wed, 24 Jul 2013 22:29:55 -0500
Subject: [petsc-users] How to know the communication time for solver?
In-Reply-To: References: Message-ID: <87k3kfb9l8.fsf@mcs.anl.gov>

Fande Kong writes:

> Hi all,
>
> With the option -log_summary, I can get the time of KSPSolve(). I also want to know how much time is used for communicating data in the KSPSolve().

When communication is in progress (MPI_Irecv and MPI_Isend posted) but you are also doing something else, would you like to call that communication or computation? Look at the reductions (VecDot, VecNorm, etc.) and the other major communication events like VecScatterEnd and MatAssemblyBegin/End to get a sense for how much time is spent in communication. Note that -log_summary contains various measures of how much communication occurred in each event.
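(One practical way to localize that in the -log_summary output is to put the solve in its own logging stage. A minimal sketch; ksp, b, x are assumed to be set up already, and error checking is omitted.)

PetscLogStage stage;

PetscLogStageRegister("KSP solve", &stage);
PetscLogStagePush(stage);
KSPSolve(ksp, b, x);
PetscLogStagePop();
/* the stage's table in -log_summary then shows, per event (VecDot,
   VecNorm, VecScatterEnd, ...), the time, message counts, and message
   lengths incurred during the solve alone */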
From T.W.Searle at sms.ed.ac.uk  Thu Jul 25 04:38:32 2013
From: T.W.Searle at sms.ed.ac.uk (Toby)
Date: Thu, 25 Jul 2013 10:38:32 +0100
Subject: [petsc-users] Zero pivot in LU factorisation
Message-ID: <51F0F218.1080203@sms.ed.ac.uk>

Dear all,

I have a generalised matrix problem: A x = lambda B x. My matrix B is
diagonal and positive semi-definite. My matrix A is a non-Hermitian
complex matrix.

My problem is essentially that when using the SLEPc generalised
eigenvalue solver I get the error "zero pivot in LU factorisation". The
rest of this message gives details about the problem and the things I
have tried so far.

The matrix will be at its largest about 48000 by 48000, and I want to
find the eigenvalues. The eigenvalues I am interested in are the ones
with the largest real part near 0+0i. Ideally, I want to be able to find
them even if they are internal (i.e. when there are other eigenvalues
with larger positive real part in the spectrum). However, I would be
happy if I could get it to work for problems where all eigenvalues have
real parts < 0 apart from the eigenvalue of interest.

At the moment I have used the scipy linalg.eig and sparse.eigs
functions. As far as I know, these use LAPACK and ARPACK respectively to
do the heavy lifting. I have decided to see if I can achieve better
performance through using the SLEPc library. If this is a bad decision,
let me know!

I want to move on to using PETSc with the SLEPc eigenvalue solvers. I
have been trying out SLEPc using the examples provided as part of the
tutorial. Exercise 7 reads matrices A and B from a file and outputs the
solutions. I got this to work fine using the matrices provided. However,
if I substitute a smaller test version of my problem (6000x6000), I get
a variety of errors depending on the command line arguments I supply.

The main problem I have is the error "zero pivot in LU factorisation!"
when I use the default settings.

I think this might be related to the fact that B contains rows of zeros,
although my understanding of linear algebra is somewhat basic. Is this
true?

I have tried setting the options suggested on the petsc website,
-pc_factor_shift_type NONZERO etc., but all I get is an additional
warning that these options were not used.

I assumed that this was a problem with the preconditioner, so I tried
setting -eps_target to 0.1, both with and without specifying -st_type
sinvert and shift. Still I get the same error.

Then I tried -st_pc_type jacobi and -st_pc_type bjacobi. jacobi runs,
but does not produce any eigenvalues. Block jacobi does an LU
factorisation and gives me the same error again.

The default method is krylov-schur, so I have experimented with the
-eps_type gd and -eps_type jd options. Unfortunately these seem to
produce nonsense eigenvalues, which do not appear on the spectrum at all
when I solve using LAPACK in scipy.

I know my matrix problem is not singular, because I can solve it using
scipy.

Do you know of any books/guides I might need to read besides the PETSc
and SLEPc manuals to understand the behaviour of all these different
solvers?

The output from the case with no command line options is given below.

Thanks a lot for taking the time to read this!

Kind Regards,
Toby

tobymac:SLEPC toby$ mpiexec ./ex7 -f1
LHS-N7-M40-Re0.0-b0.1-Wi5.0-amp0.02.petsc -f2
RHS-N7-M40-Re0.0-b0.1-Wi5.0-amp0.02.petsc -eps_view

Generalized eigenproblem stored in file.
[0]PETSC ERROR: --------------------- Error Message ------------------------------------ [0]PETSC ERROR: Detected zero pivot in LU factorization: see http://www.mcs.anl.gov/petsc/documentation/faq.html#ZeroPivot! [0]PETSC ERROR: Empty row in matrix: row in original ordering 2395 in permuted ordering 3600! [0]PETSC ERROR: ------------------------------------------------------------------------ [0]PETSC ERROR: Petsc Release Version 3.3.0, Patch 5, Sat Dec 1 15:10:41 CST 2012 [0]PETSC ERROR: See docs/changes/index.html for recent updates. [0]PETSC ERROR: See docs/faq.html for hints about trouble shooting. [0]PETSC ERROR: See docs/index.html for manual pages. [0]PETSC ERROR: ------------------------------------------------------------------------ [0]PETSC ERROR: ./ex7 on a arch-darw named tobymac by toby Thu Jul 25 10:20:40 2013 [0]PETSC ERROR: Libraries linked from /opt/local/lib [0]PETSC ERROR: Configure run at Tue Jul 23 15:11:27 2013 [0]PETSC ERROR: Configure options --prefix=/opt/local --with-valgrind-dir=/opt/local --with-shared-libraries --with-scalar-type=complex --with-clanguage=C++ --with-superlu-dir=/opt/local --with-blacs-dir=/opt/local --with-scalapack-dir=/opt/local --with-mumps-dir=/opt/local --with-metis-dir=/opt/local --with-parmetis-dir=/opt/local --COPTFLAGS=-O2 --CXXOPTFLAGS=-O2 --FOPTFLAGS=-O2 --LDFLAGS=-L/opt/local/lib --CFLAGS="-O2 -mtune=native" --CXXFLAGS="-O2 -mtune=native" [0]PETSC ERROR: ------------------------------------------------------------------------ [0]PETSC ERROR: MatLUFactorSymbolic_SeqAIJ() line 334 in /opt/local/var/macports/build/_Users_toby_MyPorts_scienceports_math_petsc/petsc/work/petsc-3.3-p5/src/mat/impls/aij/seq/aijfact.c [0]PETSC ERROR: MatLUFactorSymbolic() line 2750 in /opt/local/var/macports/build/_Users_toby_MyPorts_scienceports_math_petsc/petsc/work/petsc-3.3-p5/src/mat/interface/matrix.c [0]PETSC ERROR: PCSetUp_LU() line 135 in /opt/local/var/macports/build/_Users_toby_MyPorts_scienceports_math_petsc/petsc/work/petsc-3.3-p5/src/ksp/pc/impls/factor/lu/lu.c Number of iterations of the method: 0 Number of linear iterations of the method: 0 Number of requested eigenvalues: 1 Stopping condition: tol=1e-08, maxit=750 Number of converged approximate eigenpairs: 0 [0]PETSC ERROR: PCSetUp() line 832 in /opt/local/var/macports/build/_Users_toby_MyPorts_scienceports_math_petsc/petsc/work/petsc-3.3-p5/src/ksp/pc/interface/precon.c [0]PETSC ERROR: KSPSetUp() line 278 in /opt/local/var/macports/build/_Users_toby_MyPorts_scienceports_math_petsc/petsc/work/petsc-3.3-p5/src/ksp/ksp/interface/itfunc.c [0]PETSC ERROR: PCSetUp_Redundant() line 176 in /opt/local/var/macports/build/_Users_toby_MyPorts_scienceports_math_petsc/petsc/work/petsc-3.3-p5/src/ksp/pc/impls/redundant/redundant.c [0]PETSC ERROR: PCSetUp() line 832 in /opt/local/var/macports/build/_Users_toby_MyPorts_scienceports_math_petsc/petsc/work/petsc-3.3-p5/src/ksp/pc/interface/precon.c [0]PETSC ERROR: KSPSetUp() line 278 in /opt/local/var/macports/build/_Users_toby_MyPorts_scienceports_math_petsc/petsc/work/petsc-3.3-p5/src/ksp/ksp/interface/itfunc.c [0]PETSC ERROR: STSetUp_Shift() line 94 in src/st/impls/shift/shift.c [0]PETSC ERROR: STSetUp() line 280 in src/st/interface/stsolve.c [0]PETSC ERROR: EPSSetUp() line 204 in src/eps/interface/setup.c [0]PETSC ERROR: EPSSolve() line 109 in src/eps/interface/solve.c tobymac:SLEPC toby$ -- The University of Edinburgh is a charitable body, registered in Scotland, with registration number SC005336. 
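For reference while reading the replies that follow: the shift-and-invert
route they converge on can also be set up in code instead of via -st_type
sinvert / -eps_target. A hedged sketch with the SLEPc C interface,
assuming A and B are already assembled Mats; the -0.2 target is a
placeholder and error checking is omitted:

  #include <slepceps.h>

  /* Solve A x = lambda B x for eigenvalues near a target via shift-and-invert */
  PetscErrorCode SolveGeneralized(Mat A, Mat B)
  {
    EPS eps;
    ST  st;

    EPSCreate(PETSC_COMM_WORLD, &eps);
    EPSSetOperators(eps, A, B);
    EPSSetProblemType(eps, EPS_GNHEP);          /* generalized non-Hermitian */
    EPSSetTarget(eps, -0.2);
    EPSSetWhichEigenpairs(eps, EPS_TARGET_MAGNITUDE);
    EPSGetST(eps, &st);
    STSetType(st, STSINVERT);                   /* works with (A - sigma*B)^{-1} B */
    EPSSetFromOptions(eps);                     /* command-line options still override */
    EPSSolve(eps);
    EPSDestroy(&eps);
    return 0;
  }

Note that shift-and-invert still factors A - sigma*B, so the LU zero-pivot
issue discussed below has to be resolved either way.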
From knepley at gmail.com  Thu Jul 25 05:26:03 2013
From: knepley at gmail.com (Matthew Knepley)
Date: Thu, 25 Jul 2013 05:26:03 -0500
Subject: [petsc-users] Zero pivot in LU factorisation
In-Reply-To: <51F0F218.1080203@sms.ed.ac.uk>
References: <51F0F218.1080203@sms.ed.ac.uk>
Message-ID: 

On Thu, Jul 25, 2013 at 4:38 AM, Toby wrote:

> Dear all,
>
> I have a generalised matrix problem: A x = lambda B x. My matrix B is
> diagonal and positive semi-definite. My matrix A is a non-Hermitian
> complex matrix.
>
> My problem is essentially that when using the SLEPc generalised
> eigenvalue solver I get the error "zero pivot in LU factorisation". The
> rest of this message gives details about the problem and the things I
> have tried so far.
>
> The matrix will be at its largest about 48000 by 48000, and I want to
> find the eigenvalues. The eigenvalues I am interested in are the ones
> with the largest real part near 0+0i. Ideally, I want to be able to find
> them even if they are internal (i.e. when there are other eigenvalues
> with larger positive real part in the spectrum). However, I would be
> happy if I could get it to work for problems where all eigenvalues have
> real parts < 0 apart from the eigenvalue of interest.
>
> At the moment I have used the scipy linalg.eig and sparse.eigs
> functions. As far as I know, these use LAPACK and ARPACK respectively to
> do the heavy lifting. I have decided to see if I can achieve better
> performance through using the SLEPc library. If this is a bad decision,
> let me know!
>
> I want to move on to using PETSc with the SLEPc eigenvalue solvers. I
> have been trying out SLEPc using the examples provided as part of the
> tutorial. Exercise 7 reads matrices A and B from a file and outputs the
> solutions. I got this to work fine using the matrices provided. However,
> if I substitute a smaller test version of my problem (6000x6000), I get
> a variety of errors depending on the command line arguments I supply.
>
> The main problem I have is the error "zero pivot in LU factorisation!"
> when I use the default settings.
>
> I think this might be related to the fact that B contains rows of zeros,
> although my understanding of linear algebra is somewhat basic. Is this
> true?
>
> I have tried setting the options suggested on the petsc website,
> -pc_factor_shift_type NONZERO etc., but all I get is an additional
> warning that these options were not used.
>
1) You probably need the correct prefix for these options, e.g.
-st_pc_factor_shift_type NONZERO

2) We would like to see the output of -st_ksp_view, but you probably need
-st_pc_type jacobi for it to finish

   Matt

> I assumed that this was a problem with the preconditioner, so I tried
> setting -eps_target to 0.1, both with and without specifying -st_type
> sinvert and shift. Still I get the same error.
>
> Then I tried -st_pc_type jacobi and -st_pc_type bjacobi. jacobi runs,
> but does not produce any eigenvalues. Block jacobi does an LU
> factorisation and gives me the same error again.
>
> The default method is krylov-schur, so I have experimented with the
> -eps_type gd and -eps_type jd options. Unfortunately these seem to
> produce nonsense eigenvalues, which do not appear on the spectrum at all
> when I solve using LAPACK in scipy.
>
> I know my matrix problem is not singular, because I can solve it using
> scipy.
>
> Do you know of any books/guides I might need to read besides the PETSc
> and SLEPc manuals to understand the behaviour of all these different
> solvers?
>
> The output from the case with no command line options is given below.
>
> Thanks a lot for taking the time to read this!
>
> Kind Regards,
> Toby
>
> tobymac:SLEPC toby$ mpiexec ./ex7 -f1
> LHS-N7-M40-Re0.0-b0.1-Wi5.0-amp0.02.petsc -f2
> RHS-N7-M40-Re0.0-b0.1-Wi5.0-amp0.02.petsc -eps_view
>
> Generalized eigenproblem stored in file.
>
> [0]PETSC ERROR: --------------------- Error Message ------------------------------------
> [0]PETSC ERROR: Detected zero pivot in LU factorization:
> see http://www.mcs.anl.gov/petsc/documentation/faq.html#ZeroPivot!
> [0]PETSC ERROR: Empty row in matrix: row in original ordering 2395 in
> permuted ordering 3600!
> [0]PETSC ERROR: ------------------------------------------------------------------------
> [0]PETSC ERROR: Petsc Release Version 3.3.0, Patch 5, Sat Dec 1 15:10:41 CST 2012
> [0]PETSC ERROR: See docs/changes/index.html for recent updates.
> [0]PETSC ERROR: See docs/faq.html for hints about trouble shooting.
> [0]PETSC ERROR: See docs/index.html for manual pages.
> [0]PETSC ERROR: ------------------------------------------------------------------------
> [0]PETSC ERROR: ./ex7 on a arch-darw named tobymac by toby Thu Jul 25 10:20:40 2013
> [0]PETSC ERROR: Libraries linked from /opt/local/lib
> [0]PETSC ERROR: Configure run at Tue Jul 23 15:11:27 2013
> [0]PETSC ERROR: Configure options --prefix=/opt/local
> --with-valgrind-dir=/opt/local --with-shared-libraries
> --with-scalar-type=complex --with-clanguage=C++
> --with-superlu-dir=/opt/local --with-blacs-dir=/opt/local
> --with-scalapack-dir=/opt/local --with-mumps-dir=/opt/local
> --with-metis-dir=/opt/local --with-parmetis-dir=/opt/local --COPTFLAGS=-O2
> --CXXOPTFLAGS=-O2 --FOPTFLAGS=-O2 --LDFLAGS=-L/opt/local/lib --CFLAGS="-O2
> -mtune=native" --CXXFLAGS="-O2 -mtune=native"
> [0]PETSC ERROR: ------------------------------------------------------------------------
> [0]PETSC ERROR: MatLUFactorSymbolic_SeqAIJ() line 334 in
> /opt/local/var/macports/build/_Users_toby_MyPorts_scienceports_math_petsc/petsc/work/petsc-3.3-p5/src/mat/impls/aij/seq/aijfact.c
> [0]PETSC ERROR: MatLUFactorSymbolic() line 2750 in
> /opt/local/var/macports/build/_Users_toby_MyPorts_scienceports_math_petsc/petsc/work/petsc-3.3-p5/src/mat/interface/matrix.c
> [0]PETSC ERROR: PCSetUp_LU() line 135 in
> /opt/local/var/macports/build/_Users_toby_MyPorts_scienceports_math_petsc/petsc/work/petsc-3.3-p5/src/ksp/pc/impls/factor/lu/lu.c
> Number of iterations of the method: 0
> Number of linear iterations of the method: 0
> Number of requested eigenvalues: 1
> Stopping condition: tol=1e-08, maxit=750
> Number of converged approximate eigenpairs: 0
>
> [0]PETSC ERROR: PCSetUp() line 832 in
> /opt/local/var/macports/build/_Users_toby_MyPorts_scienceports_math_petsc/petsc/work/petsc-3.3-p5/src/ksp/pc/interface/precon.c
> [0]PETSC ERROR: KSPSetUp() line 278 in
> /opt/local/var/macports/build/_Users_toby_MyPorts_scienceports_math_petsc/petsc/work/petsc-3.3-p5/src/ksp/ksp/interface/itfunc.c
> [0]PETSC ERROR: PCSetUp_Redundant() line 176 in
> /opt/local/var/macports/build/_Users_toby_MyPorts_scienceports_math_petsc/petsc/work/petsc-3.3-p5/src/ksp/pc/impls/redundant/redundant.c
> [0]PETSC ERROR: PCSetUp() line 832 in
> /opt/local/var/macports/build/_Users_toby_MyPorts_scienceports_math_petsc/petsc/work/petsc-3.3-p5/src/ksp/pc/interface/precon.c
> [0]PETSC ERROR: KSPSetUp() line 278 in
> /opt/local/var/macports/build/_Users_toby_MyPorts_scienceports_math_petsc/petsc/work/petsc-3.3-p5/src/ksp/ksp/interface/itfunc.c
> [0]PETSC ERROR: STSetUp_Shift() line 94 in src/st/impls/shift/shift.c
> [0]PETSC ERROR: STSetUp() line 280 in src/st/interface/stsolve.c
> [0]PETSC ERROR: EPSSetUp() line 204 in src/eps/interface/setup.c
> [0]PETSC ERROR: EPSSolve() line 109 in src/eps/interface/solve.c
> tobymac:SLEPC toby$
>
> --
> The University of Edinburgh is a charitable body, registered in
> Scotland, with registration number SC005336.
>
-- 
What most experimenters take for granted before they begin their
experiments is infinitely more interesting than any results to which
their experiments lead.
-- Norbert Wiener
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From jroman at dsic.upv.es  Thu Jul 25 05:44:06 2013
From: jroman at dsic.upv.es (Jose E. Roman)
Date: Thu, 25 Jul 2013 12:44:06 +0200
Subject: [petsc-users] Zero pivot in LU factorisation
In-Reply-To: <51F0F218.1080203@sms.ed.ac.uk>
References: <51F0F218.1080203@sms.ed.ac.uk>
Message-ID: <5E1E43BF-BF81-4D68-8191-FA25394F1166@dsic.upv.es>

The default is to solve as B^{-1} A x = lambda x
With -st_type sinvert -eps_target 0 then it is solved as A^{-1} B x = theta x
The latter is probably what you need (or with a different target). Are you
sure your A-matrix is non-singular?

Jose

On 25/07/2013, at 11:38, Toby wrote:

> Dear all,
>
> I have a generalised matrix problem: A x = lambda B x. My matrix B is
> diagonal and positive semi-definite. My matrix A is a non-Hermitian
> complex matrix.
>
> My problem is essentially that when using the SLEPc generalised
> eigenvalue solver I get the error "zero pivot in LU factorisation". The
> rest of this message gives details about the problem and the things I
> have tried so far.
>
> The matrix will be at its largest about 48000 by 48000, and I want to
> find the eigenvalues. The eigenvalues I am interested in are the ones
> with the largest real part near 0+0i. Ideally, I want to be able to find
> them even if they are internal (i.e. when there are other eigenvalues
> with larger positive real part in the spectrum). However, I would be
> happy if I could get it to work for problems where all eigenvalues have
> real parts < 0 apart from the eigenvalue of interest.
>
> At the moment I have used the scipy linalg.eig and sparse.eigs
> functions. As far as I know, these use LAPACK and ARPACK respectively to
> do the heavy lifting. I have decided to see if I can achieve better
> performance through using the SLEPc library. If this is a bad decision,
> let me know!
>
> I want to move on to using PETSc with the SLEPc eigenvalue solvers. I
> have been trying out SLEPc using the examples provided as part of the
> tutorial. Exercise 7 reads matrices A and B from a file and outputs the
> solutions. I got this to work fine using the matrices provided. However,
> if I substitute a smaller test version of my problem (6000x6000), I get
> a variety of errors depending on the command line arguments I supply.
>
> The main problem I have is the error "zero pivot in LU factorisation!"
> when I use the default settings.
>
> I think this might be related to the fact that B contains rows of zeros,
> although my understanding of linear algebra is somewhat basic. Is this
> true?
> > I have tried setting the options suggested on the petsc website, -pc_factor_shift_type NONZERO etc but all I get is an additional warning that these options were not used > > I assumed that this was a problem with the preconditioner, so I tried setting -eps_target to 0.1 and both with and without specifying -st_type sinvert and shift. Still I get the same error. > > Then I tried -st_pc_type jacobi and st_pc_type bjacobi. jacobi runs, but does not produce any eigenvalues. Block jacobi does an LU factorisation and gives me the same error again. > > The default method is krylov-schur, so I have experimented with the -eps_type gd and -eps_type jd options. Unfortunately these seem to produce nonsense eigenvalues, which do not appear on the spectrum at all when I solve using LAPACK in scipy. > > I know my matrix problem is not singular, because I can solve it using scipy. > > Do you know of any books/guides I might need to read besides the PETSC and SLEPC manuals to understand the behaviour of all these different solvers? > > The output from the case with no command line options is given below. > > Thanks a lot for taking the time to read this! > > Kind Regards, > Toby > From T.W.Searle at sms.ed.ac.uk Thu Jul 25 06:30:25 2013 From: T.W.Searle at sms.ed.ac.uk (Toby) Date: Thu, 25 Jul 2013 12:30:25 +0100 Subject: [petsc-users] Zero pivot in LU factorisation In-Reply-To: References: <51F0F218.1080203@sms.ed.ac.uk> Message-ID: <51F10C51.4000305@sms.ed.ac.uk> Hi Matt, Thanks for the speedy reply. I tried using the options you suggested: mpiexec ./ex7 -f1 LHS-N7-M40-Re0.0-b0.1-Wi5.0-amp0.02.petsc -f2 RHS-N7-M40-Re0.0-b0.1-Wi5.0-amp0.02.petsc -st_pc_factor_shift_type NONZERO -st_pc_shift_type_amount 1 But I still get the warnings: Option left: name:-st_pc_factor_shift_type value: NONZERO Option left: name:-st_pc_shift_type_amount value: 1 I have tried it with just the -st_pc_type jacobi and -st_ksp_view. This gives me some eigenvalues, but I don't believe them (they do not appear on the spectrum which I solve using LAPACK where all eigenvalues have a real part less than 0). The output was very large, but consists of repetitions of this: mpiexec ./ex7 -f1 LHS-N7-M40-Re0.0-b0.1-Wi5.0-amp0.02.petsc -f2 RHS-N7-M40-Re0.0-b0.1-Wi5.0-amp0.02.petsc -st_pc_type jacobi -st_ksp_view Generalized eigenproblem stored in file. Reading COMPLEX matrices from binary files... KSP Object:(st_) 1 MPI processes type: preonly maximum iterations=10000, initial guess is zero tolerances: relative=1e-08, absolute=1e-50, divergence=10000 left preconditioning using NONE norm type for convergence test PC Object:(st_) 1 MPI processes type: jacobi linear system matrix = precond matrix: Matrix Object: 1 MPI processes type: seqaij rows=6000, cols=6000 total: nonzeros=3600, allocated nonzeros=3600 total number of mallocs used during MatSetValues calls =0 using I-node routines: found 4080 nodes, limit used is 5 ... Number of iterations of the method: 189 Number of linear iterations of the method: 1520 Number of requested eigenvalues: 1 Stopping condition: tol=1e-08, maxit=750 Number of converged approximate eigenpairs: 2 k ||Ax-kBx||/||kx|| ----------------- ------------------ 1388.774454+0.001823 i 0.751726 1388.774441+0.001820 i 0.912665 Thanks again, Toby On 25/07/2013 11:26, Matthew Knepley wrote: > On Thu, Jul 25, 2013 at 4:38 AM, Toby > wrote: > > Dear all, > > I have a generalised matrix problem: A x = lambda B x. My matrix B > is diagonal and positive semi-definite. My matrix A is a > non-hermitian complex matrix. 
> > My problem is essentially that when using the SLEPc generalised > eigenvalue solver I get the error "zero pivot in LU > factorisation". The rest of the below is details about the problem > and things I have tried so far. > > > > The matrix will be at its largest about 48000 by 48000, and I want > to find the eigenvalues. The eigenvalues I am interested in are > ones with the largest real part near 0+0i. Ideally, I want to be > able to find them even if they are internal (i.e when there are > other eigenvalues with larger positive real part in the spectrum). > However, I would be happy if I could get it to work for problems > where all eigenvalues have real parts < 0 apart from the > eigenvalue of interest. > > At the moment I have used the scipy linalg.eig and sparse.eigs > functions. As far as I know, these use LAPACK and ARPACK > respectively to do the heavy lifting. I have decided to see if I > can achieve better performance through using the SLEPc library. If > this is a bad decision, let me know! > > I want to move onto using PETSc with the SLEPc eigenvalue solvers. > I have been trying out SLEPc using the examples provided as part > of the tutorial. Exercise 7 reads matricies A and B from a file > and outputs the solutions. I got this to work fine using the > matrices provided. However, if I substitute a smaller sized test > version of my problem (6000x6000), I get a variety of errors > depending on the command line arguments I supply. > > The main problem I have is the error: "zero pivot in LU > factorisation!" when I use the default settings. > > I think this might be related to the fact that B contains rows of > zeros, although my understanding of linear algebra is somewhat > basic. Is this true? > > I have tried setting the options suggested on the petsc website, > -pc_factor_shift_type NONZERO etc but all I get is an additional > warning that these options were not used > > > 1) You probably need the correct prefix for this options, e.g. > -st_pc_factor_shift_type NONZERO > > 2) We would like to see the output of -st_ksp_view, but you probably > need -st_pc_type jacobi for it to finish > > Matt > > I assumed that this was a problem with the preconditioner, so I > tried setting -eps_target to 0.1 and both with and without > specifying -st_type sinvert and shift. Still I get the same error. > > Then I tried -st_pc_type jacobi and st_pc_type bjacobi. jacobi > runs, but does not produce any eigenvalues. Block jacobi does an > LU factorisation and gives me the same error again. > > The default method is krylov-schur, so I have experimented with > the -eps_type gd and -eps_type jd options. Unfortunately these > seem to produce nonsense eigenvalues, which do not appear on the > spectrum at all when I solve using LAPACK in scipy. > > I know my matrix problem is not singular, because I can solve it > using scipy. > > Do you know of any books/guides I might need to read besides the > PETSC and SLEPC manuals to understand the behaviour of all these > different solvers? > > The output from the case with no command line options is given below. > > Thanks a lot for taking the time to read this! > > Kind Regards, > Toby > > > tobymac:SLEPC toby$ mpiexec ./ex7 -f1 > LHS-N7-M40-Re0.0-b0.1-Wi5.0-amp0.02.petsc -f2 > RHS-N7-M40-Re0.0-b0.1-Wi5.0-amp0.02.petsc -eps_view > > Generalized eigenproblem stored in file. 
> > [0]PETSC ERROR: --------------------- Error Message > ------------------------------------ > [0]PETSC ERROR: Detected zero pivot in LU factorization: > see http://www.mcs.anl.gov/petsc/documentation/faq.html#ZeroPivot! > [0]PETSC ERROR: Empty row in matrix: row in original ordering 2395 > in permuted ordering 3600! > [0]PETSC ERROR: > ------------------------------------------------------------------------ > [0]PETSC ERROR: Petsc Release Version 3.3.0, Patch 5, Sat Dec 1 > 15:10:41 CST 2012 > [0]PETSC ERROR: See docs/changes/index.html for recent updates. > [0]PETSC ERROR: See docs/faq.html for hints about trouble shooting. > [0]PETSC ERROR: See docs/index.html for manual pages. > [0]PETSC ERROR: > ------------------------------------------------------------------------ > [0]PETSC ERROR: ./ex7 on a arch-darw named tobymac by toby Thu Jul > 25 10:20:40 2013 > [0]PETSC ERROR: Libraries linked from /opt/local/lib > [0]PETSC ERROR: Configure run at Tue Jul 23 15:11:27 2013 > [0]PETSC ERROR: Configure options --prefix=/opt/local > --with-valgrind-dir=/opt/local --with-shared-libraries > --with-scalar-type=complex --with-clanguage=C++ > --with-superlu-dir=/opt/local --with-blacs-dir=/opt/local > --with-scalapack-dir=/opt/local --with-mumps-dir=/opt/local > --with-metis-dir=/opt/local --with-parmetis-dir=/opt/local > --COPTFLAGS=-O2 --CXXOPTFLAGS=-O2 --FOPTFLAGS=-O2 > --LDFLAGS=-L/opt/local/lib --CFLAGS="-O2 -mtune=native" > --CXXFLAGS="-O2 -mtune=native" > [0]PETSC ERROR: > ------------------------------------------------------------------------ > [0]PETSC ERROR: MatLUFactorSymbolic_SeqAIJ() line 334 in > /opt/local/var/macports/build/_Users_toby_MyPorts_scienceports_math_petsc/petsc/work/petsc-3.3-p5/src/mat/impls/aij/seq/aijfact.c > [0]PETSC ERROR: MatLUFactorSymbolic() line 2750 in > /opt/local/var/macports/build/_Users_toby_MyPorts_scienceports_math_petsc/petsc/work/petsc-3.3-p5/src/mat/interface/matrix.c > [0]PETSC ERROR: PCSetUp_LU() line 135 in > /opt/local/var/macports/build/_Users_toby_MyPorts_scienceports_math_petsc/petsc/work/petsc-3.3-p5/src/ksp/pc/impls/factor/lu/lu.c > Number of iterations of the method: 0 > Number of linear iterations of the method: 0 > Number of requested eigenvalues: 1 > Stopping condition: tol=1e-08, maxit=750 > Number of converged approximate eigenpairs: 0 > > [0]PETSC ERROR: PCSetUp() line 832 in > /opt/local/var/macports/build/_Users_toby_MyPorts_scienceports_math_petsc/petsc/work/petsc-3.3-p5/src/ksp/pc/interface/precon.c > [0]PETSC ERROR: KSPSetUp() line 278 in > /opt/local/var/macports/build/_Users_toby_MyPorts_scienceports_math_petsc/petsc/work/petsc-3.3-p5/src/ksp/ksp/interface/itfunc.c > [0]PETSC ERROR: PCSetUp_Redundant() line 176 in > /opt/local/var/macports/build/_Users_toby_MyPorts_scienceports_math_petsc/petsc/work/petsc-3.3-p5/src/ksp/pc/impls/redundant/redundant.c > [0]PETSC ERROR: PCSetUp() line 832 in > /opt/local/var/macports/build/_Users_toby_MyPorts_scienceports_math_petsc/petsc/work/petsc-3.3-p5/src/ksp/pc/interface/precon.c > [0]PETSC ERROR: KSPSetUp() line 278 in > /opt/local/var/macports/build/_Users_toby_MyPorts_scienceports_math_petsc/petsc/work/petsc-3.3-p5/src/ksp/ksp/interface/itfunc.c > [0]PETSC ERROR: STSetUp_Shift() line 94 in src/st/impls/shift/shift.c > [0]PETSC ERROR: STSetUp() line 280 in src/st/interface/stsolve.c > [0]PETSC ERROR: EPSSetUp() line 204 in src/eps/interface/setup.c > [0]PETSC ERROR: EPSSolve() line 109 in src/eps/interface/solve.c > tobymac:SLEPC toby$ > > > -- > The University of Edinburgh is a 
charitable body, registered in
> Scotland, with registration number SC005336.
>
> --
> What most experimenters take for granted before they begin their
> experiments is infinitely more interesting than any results to which
> their experiments lead.
> -- Norbert Wiener
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 
-------------- next part --------------
An embedded and charset-unspecified text was scrubbed...
Name: not available
URL: 

From jroman at dsic.upv.es  Thu Jul 25 06:34:03 2013
From: jroman at dsic.upv.es (Jose E. Roman)
Date: Thu, 25 Jul 2013 13:34:03 +0200
Subject: [petsc-users] Zero pivot in LU factorisation
In-Reply-To: <51F10C51.4000305@sms.ed.ac.uk>
References: <51F0F218.1080203@sms.ed.ac.uk> <51F10C51.4000305@sms.ed.ac.uk>
Message-ID: 

On 25/07/2013, at 13:30, Toby wrote:

> Hi Matt,
>
> Thanks for the speedy reply. I tried using the options you suggested:
>
> mpiexec ./ex7 -f1 LHS-N7-M40-Re0.0-b0.1-Wi5.0-amp0.02.petsc -f2
> RHS-N7-M40-Re0.0-b0.1-Wi5.0-amp0.02.petsc -st_pc_factor_shift_type
> NONZERO -st_pc_shift_type_amount 1
>
> But I still get the warnings:
> Option left: name:-st_pc_factor_shift_type value: NONZERO
> Option left: name:-st_pc_shift_type_amount value: 1

By default, ST is using PCREDUNDANT. In order to use the above options you
must change it to e.g. PCLU.
That is, add -st_pc_type lu

Jose

> I have tried it with just the -st_pc_type jacobi and -st_ksp_view. This
> gives me some eigenvalues, but I don't believe them (they do not appear on
> the spectrum which I solve using LAPACK where all eigenvalues have a real
> part less than 0). The output was very large, but consists of repetitions
> of this:
>
> mpiexec ./ex7 -f1 LHS-N7-M40-Re0.0-b0.1-Wi5.0-amp0.02.petsc -f2
> RHS-N7-M40-Re0.0-b0.1-Wi5.0-amp0.02.petsc -st_pc_type jacobi -st_ksp_view
>
> Generalized eigenproblem stored in file.
>
> Reading COMPLEX matrices from binary files...
> KSP Object:(st_) 1 MPI processes
>   type: preonly
>   maximum iterations=10000, initial guess is zero
>   tolerances:  relative=1e-08, absolute=1e-50, divergence=10000
>   left preconditioning
>   using NONE norm type for convergence test
> PC Object:(st_) 1 MPI processes
>   type: jacobi
>   linear system matrix = precond matrix:
>   Matrix Object:   1 MPI processes
>     type: seqaij
>     rows=6000, cols=6000
>     total: nonzeros=3600, allocated nonzeros=3600
>     total number of mallocs used during MatSetValues calls =0
>       using I-node routines: found 4080 nodes, limit used is 5
>
> ...
>
> Number of iterations of the method: 189
> Number of linear iterations of the method: 1520
> Number of requested eigenvalues: 1
> Stopping condition: tol=1e-08, maxit=750
> Number of converged approximate eigenpairs: 2
>
>            k              ||Ax-kBx||/||kx||
>    -----------------    ------------------
>    1388.774454+0.001823 i     0.751726
>    1388.774441+0.001820 i     0.912665
>
> Thanks again,
> Toby

From T.W.Searle at sms.ed.ac.uk  Thu Jul 25 06:37:07 2013
From: T.W.Searle at sms.ed.ac.uk (Toby)
Date: Thu, 25 Jul 2013 12:37:07 +0100
Subject: [petsc-users] Zero pivot in LU factorisation
In-Reply-To: <5E1E43BF-BF81-4D68-8191-FA25394F1166@dsic.upv.es>
References: <51F0F218.1080203@sms.ed.ac.uk>
	<5E1E43BF-BF81-4D68-8191-FA25394F1166@dsic.upv.es>
Message-ID: <51F10DE3.4060602@sms.ed.ac.uk>

Hi Jose,

Thanks for the help. I am sure my matrix is non-singular because I have
used the eigenvalue solver from scipy for this particular problem. I can
send you the spectrum if that would be helpful.
-st_type sinvert with -eps_target -0.2 (which is where a large number of
eigenvalues ought to be) does not work either. Similar to before, I get:

mpiexec ./ex7 -f1 LHS-N7-M40-Re0.0-b0.1-Wi5.0-amp0.02.petsc -f2
RHS-N7-M40-Re0.0-b0.1-Wi5.0-amp0.02.petsc -st_type sinvert -eps_target -0.2

Generalized eigenproblem stored in file.

Reading COMPLEX matrices from binary files...
Read in matrices...
Creating Eigensolver...
Solving system...
[0]PETSC ERROR: --------------------- Error Message ------------------------------------
[0]PETSC ERROR: Detected zero pivot in LU factorization:
see http://www.mcs.anl.gov/petsc/documentation/faq.html#ZeroPivot!
[0]PETSC ERROR: Zero pivot row 0 value 0 tolerance 2.22045e-14!
[0]PETSC ERROR: ----------------------------------------------------------------
...

Kind Regards,
Toby

On 25/07/2013 11:44, Jose E. Roman wrote:
> The default is to solve as B^{-1} A x = lambda x
> With -st_type sinvert -eps_target 0 then it is solved as A^{-1} B x = theta x
> The latter is probably what you need (or with a different target). Are you
> sure your A-matrix is non-singular?
>
> Jose
>
> On 25/07/2013, at 11:38, Toby wrote:
>
>> Dear all,
>>
>> I have a generalised matrix problem: A x = lambda B x. My matrix B is
>> diagonal and positive semi-definite. My matrix A is a non-Hermitian
>> complex matrix.
>>
>> My problem is essentially that when using the SLEPc generalised
>> eigenvalue solver I get the error "zero pivot in LU factorisation". The
>> rest of this message gives details about the problem and the things I
>> have tried so far.
>>
>> The matrix will be at its largest about 48000 by 48000, and I want to
>> find the eigenvalues. The eigenvalues I am interested in are the ones
>> with the largest real part near 0+0i. Ideally, I want to be able to find
>> them even if they are internal (i.e. when there are other eigenvalues
>> with larger positive real part in the spectrum). However, I would be
>> happy if I could get it to work for problems where all eigenvalues have
>> real parts < 0 apart from the eigenvalue of interest.
>>
>> At the moment I have used the scipy linalg.eig and sparse.eigs
>> functions. As far as I know, these use LAPACK and ARPACK respectively to
>> do the heavy lifting. I have decided to see if I can achieve better
>> performance through using the SLEPc library. If this is a bad decision,
>> let me know!
>>
>> I want to move on to using PETSc with the SLEPc eigenvalue solvers. I
>> have been trying out SLEPc using the examples provided as part of the
>> tutorial. Exercise 7 reads matrices A and B from a file and outputs the
>> solutions. I got this to work fine using the matrices provided. However,
>> if I substitute a smaller test version of my problem (6000x6000), I get
>> a variety of errors depending on the command line arguments I supply.
>>
>> The main problem I have is the error "zero pivot in LU factorisation!"
>> when I use the default settings.
>>
>> I think this might be related to the fact that B contains rows of zeros,
>> although my understanding of linear algebra is somewhat basic. Is this
>> true?
>>
>> I have tried setting the options suggested on the petsc website,
>> -pc_factor_shift_type NONZERO etc., but all I get is an additional
>> warning that these options were not used.
>>
>> I assumed that this was a problem with the preconditioner, so I tried
>> setting -eps_target to 0.1, both with and without specifying -st_type
>> sinvert and shift. Still I get the same error.
>>
>> Then I tried -st_pc_type jacobi and -st_pc_type bjacobi. jacobi runs,
>> but does not produce any eigenvalues.
>> Block jacobi does an LU factorisation and gives me the same error
>> again.
>>
>> The default method is krylov-schur, so I have experimented with the
>> -eps_type gd and -eps_type jd options. Unfortunately these seem to
>> produce nonsense eigenvalues, which do not appear on the spectrum at all
>> when I solve using LAPACK in scipy.
>>
>> I know my matrix problem is not singular, because I can solve it using
>> scipy.
>>
>> Do you know of any books/guides I might need to read besides the PETSc
>> and SLEPc manuals to understand the behaviour of all these different
>> solvers?
>>
>> The output from the case with no command line options is given below.
>>
>> Thanks a lot for taking the time to read this!
>>
>> Kind Regards,
>> Toby
>
-- 
The University of Edinburgh is a charitable body, registered in
Scotland, with registration number SC005336.

From T.W.Searle at sms.ed.ac.uk  Thu Jul 25 06:46:12 2013
From: T.W.Searle at sms.ed.ac.uk (Toby)
Date: Thu, 25 Jul 2013 12:46:12 +0100
Subject: [petsc-users] Zero pivot in LU factorisation
In-Reply-To: 
References: <51F0F218.1080203@sms.ed.ac.uk> <51F10C51.4000305@sms.ed.ac.uk>
Message-ID: <51F11004.6000408@sms.ed.ac.uk>

Ok, it looks like the -st_pc_factor_shift_type option is now being used,
thanks. Unfortunately it hasn't fixed the problem.

If I use the NONZERO type and a shift amount, the shift amount option is
not recognised and I still get the zero pivot error. If I use the
POSITIVE_DEFINITE type I also get the pivot error. Output is below.

Thanks,
Toby

tobymac:SLEPC toby$ mpiexec ./ex7 -f1
LHS-N7-M40-Re0.0-b0.1-Wi5.0-amp0.02.petsc -f2
RHS-N7-M40-Re0.0-b0.1-Wi5.0-amp0.02.petsc -st_pc_type lu
-st_pc_factor_shift_type NONZERO -st_pc_shift_type_amount 1

Generalized eigenproblem stored in file.

Reading COMPLEX matrices from binary files...
[0]PETSC ERROR: --------------------- Error Message ------------------------------------
[0]PETSC ERROR: Detected zero pivot in LU factorization:
see http://www.mcs.anl.gov/petsc/documentation/faq.html#ZeroPivot!
[0]PETSC ERROR: Empty row in matrix: row in original ordering 2395 in
permuted ordering 3600!
[0]PETSC ERROR: ------------------------------------------------------------------------
[0]PETSC ERROR: Petsc Release Version 3.3.0, Patch 5, Sat Dec 1 15:10:41 CST 2012
[0]PETSC ERROR: See docs/changes/index.html for recent updates.
[0]PETSC ERROR: See docs/faq.html for hints about trouble shooting.
[0]PETSC ERROR: See docs/index.html for manual pages.
[0]PETSC ERROR: ------------------------------------------------------------------------ [0]PETSC ERROR: ./ex7 on a arch-darw named tobymac by toby Thu Jul 25 12:41:57 2013 [0]PETSC ERROR: Libraries linked from /opt/local/lib [0]PETSC ERROR: Configure run at Tue Jul 23 15:11:27 2013 [0]PETSC ERROR: Configure options --prefix=/opt/local --with-valgrind-dir=/opt/local --with-shared-libraries --with-scalar-type=complex --with-clanguage=C++ --with-superlu-dir=/opt/local --with-blacs-dir=/opt/local --with-scalapack-dir=/opt/local --with-mumps-dir=/opt/local --with-metis-dir=/opt/local --with-parmetis-dir=/opt/local --COPTFLAGS=-O2 --CXXOPTFLAGS=-O2 --FOPTFLAGS=-O2 --LDFLAGS=-L/opt/local/lib --CFLAGS="-O2 -mtune=native" --CXXFLAGS="-O2 -mtune=native" [0]PETSC ERROR: ------------------------------------------------------------------------ [0]PETSC ERROR: MatLUFactorSymbolic_SeqAIJ() line 334 in /opt/local/var/macports/build/_Users_toby_MyPorts_scienceports_math_petsc/petsc/work/petsc-3.3-p5/src/mat/impls/aij/seq/aijfact.c [0]PETSC ERROR: MatLUFactorSymbolic() line 2750 in /opt/local/var/macports/build/_Users_toby_MyPorts_scienceports_math_petsc/petsc/work/petsc-3.3-p5/src/mat/interface/matrix.c [0]PETSC ERROR: PCSetUp_LU() line 135 in /opt/local/var/macports/build/_Users_toby_MyPorts_scienceports_math_petsc/petsc/work/petsc-3.3-p5/src/ksp/pc/impls/factor/lu/lu.c [0]PETSC ERROR: PCSetUp() line 832 in /opt/local/var/macports/build/_Users_toby_MyPorts_scienceports_math_petsc/petsc/work/petsc-3.3-p5/src/ksp/pc/interface/precon.c [0]PETSC ERROR: KSPSetUp() line 278 in /opt/local/var/macports/build/_Users_toby_MyPorts_scienceports_math_petsc/petsc/work/petsc-3.3-p5/src/ksp/ksp/interface/itfunc.c [0]PETSC ERROR: STSetUp_Shift() line 94 in src/st/impls/shift/shift.c [0]PETSC ERROR: STSetUp() line 280 in src/st/interface/stsolve.c [0]PETSC ERROR: EPSSetUp() line 204 in src/eps/interface/setup.c [0]PETSC ERROR: EPSSolve() line 109 in src/eps/interface/solve.c Number of iterations of the method: 0 Number of linear iterations of the method: 0 Number of requested eigenvalues: 1 Stopping condition: tol=1e-08, maxit=750 Number of converged approximate eigenpairs: 0 WARNING! There are options you set that were not used! WARNING! could be spelling mistake, etc! Option left: name:-st_pc_shift_type_amount value: 1 mpiexec ./ex7 -f1 LHS-N7-M40-Re0.0-b0.1-Wi5.0-amp0.02.petsc -f2 RHS-N7-M40-Re0.0-b0.1-Wi5.0-amp0.02.petsc -st_pc_type lu -st_pc_factor_shift_type POSITIVE_DEFINITE Generalized eigenproblem stored in file. Reading COMPLEX matrices from binary files... [0]PETSC ERROR: --------------------- Error Message ------------------------------------ [0]PETSC ERROR: Detected zero pivot in LU factorization: see http://www.mcs.anl.gov/petsc/documentation/faq.html#ZeroPivot! [0]PETSC ERROR: Empty row in matrix: row in original ordering 2395 in permuted ordering 3600! [0]PETSC ERROR: ------------------------------------------------------------------------ [0]PETSC ERROR: Petsc Release Version 3.3.0, Patch 5, Sat Dec 1 15:10:41 CST 2012 [0]PETSC ERROR: See docs/changes/index.html for recent updates. [0]PETSC ERROR: See docs/faq.html for hints about trouble shooting. [0]PETSC ERROR: See docs/index.html for manual pages. 
[0]PETSC ERROR: ------------------------------------------------------------------------
[0]PETSC ERROR: ./ex7 on a arch-darw named tobymac by toby Thu Jul 25 12:40:20 2013
[0]PETSC ERROR: Libraries linked from /opt/local/lib
[0]PETSC ERROR: Configure run at Tue Jul 23 15:11:27 2013
[0]PETSC ERROR: Configure options --prefix=/opt/local
--with-valgrind-dir=/opt/local --with-shared-libraries
--with-scalar-type=complex --with-clanguage=C++
--with-superlu-dir=/opt/local --with-blacs-dir=/opt/local
--with-scalapack-dir=/opt/local --with-mumps-dir=/opt/local
--with-metis-dir=/opt/local --with-parmetis-dir=/opt/local --COPTFLAGS=-O2
--CXXOPTFLAGS=-O2 --FOPTFLAGS=-O2 --LDFLAGS=-L/opt/local/lib --CFLAGS="-O2
-mtune=native" --CXXFLAGS="-O2 -mtune=native"
[0]PETSC ERROR: ------------------------------------------------------------------------
[0]PETSC ERROR: MatLUFactorSymbolic_SeqAIJ() line 334 in
/opt/local/var/macports/build/_Users_toby_MyPorts_scienceports_math_petsc/petsc/work/petsc-3.3-p5/src/mat/impls/aij/seq/aijfact.c
[0]PETSC ERROR: MatLUFactorSymbolic() line 2750 in
/opt/local/var/macports/build/_Users_toby_MyPorts_scienceports_math_petsc/petsc/work/petsc-3.3-p5/src/mat/interface/matrix.c
[0]PETSC ERROR: PCSetUp_LU() line 135 in
/opt/local/var/macports/build/_Users_toby_MyPorts_scienceports_math_petsc/petsc/work/petsc-3.3-p5/src/ksp/pc/impls/factor/lu/lu.c
[0]PETSC ERROR: PCSetUp() line 832 in
/opt/local/var/macports/build/_Users_toby_MyPorts_scienceports_math_petsc/petsc/work/petsc-3.3-p5/src/ksp/pc/interface/precon.c
[0]PETSC ERROR: KSPSetUp() line 278 in
/opt/local/var/macports/build/_Users_toby_MyPorts_scienceports_math_petsc/petsc/work/petsc-3.3-p5/src/ksp/ksp/interface/itfunc.c
[0]PETSC ERROR: STSetUp_Shift() line 94 in src/st/impls/shift/shift.c
[0]PETSC ERROR: STSetUp() line 280 in src/st/interface/stsolve.c
[0]PETSC ERROR: EPSSetUp() line 204 in src/eps/interface/setup.c
[0]PETSC ERROR: EPSSolve() line 109 in src/eps/interface/solve.c
Number of iterations of the method: 0
Number of linear iterations of the method: 0
Number of requested eigenvalues: 1
Stopping condition: tol=1e-08, maxit=750
Number of converged approximate eigenpairs: 0

On 25/07/2013 12:34, Jose E. Roman wrote:
> On 25/07/2013, at 13:30, Toby wrote:
>
>> Hi Matt,
>>
>> Thanks for the speedy reply. I tried using the options you suggested:
>>
>> mpiexec ./ex7 -f1 LHS-N7-M40-Re0.0-b0.1-Wi5.0-amp0.02.petsc -f2
>> RHS-N7-M40-Re0.0-b0.1-Wi5.0-amp0.02.petsc -st_pc_factor_shift_type
>> NONZERO -st_pc_shift_type_amount 1
>>
>> But I still get the warnings:
>> Option left: name:-st_pc_factor_shift_type value: NONZERO
>> Option left: name:-st_pc_shift_type_amount value: 1
> By default, ST is using PCREDUNDANT. In order to use the above options
> you must change it to e.g. PCLU.
> That is, add -st_pc_type lu
>
> Jose
>
>> I have tried it with just the -st_pc_type jacobi and -st_ksp_view. This
>> gives me some eigenvalues, but I don't believe them (they do not appear on
>> the spectrum which I solve using LAPACK where all eigenvalues have a real
>> part less than 0). The output was very large, but consists of repetitions
>> of this:
>>
>> mpiexec ./ex7 -f1 LHS-N7-M40-Re0.0-b0.1-Wi5.0-amp0.02.petsc -f2
>> RHS-N7-M40-Re0.0-b0.1-Wi5.0-amp0.02.petsc -st_pc_type jacobi -st_ksp_view
>>
>> Generalized eigenproblem stored in file.
>> Reading COMPLEX matrices from binary files...
>> KSP Object:(st_) 1 MPI processes
>>   type: preonly
>>   maximum iterations=10000, initial guess is zero
>>   tolerances:  relative=1e-08, absolute=1e-50, divergence=10000
>>   left preconditioning
>>   using NONE norm type for convergence test
>> PC Object:(st_) 1 MPI processes
>>   type: jacobi
>>   linear system matrix = precond matrix:
>>   Matrix Object:   1 MPI processes
>>     type: seqaij
>>     rows=6000, cols=6000
>>     total: nonzeros=3600, allocated nonzeros=3600
>>     total number of mallocs used during MatSetValues calls =0
>>       using I-node routines: found 4080 nodes, limit used is 5
>>
>> ...
>>
>> Number of iterations of the method: 189
>> Number of linear iterations of the method: 1520
>> Number of requested eigenvalues: 1
>> Stopping condition: tol=1e-08, maxit=750
>> Number of converged approximate eigenpairs: 2
>>
>>            k              ||Ax-kBx||/||kx||
>>    -----------------    ------------------
>>    1388.774454+0.001823 i     0.751726
>>    1388.774441+0.001820 i     0.912665
>>
>> Thanks again,
>> Toby
>
-- 
The University of Edinburgh is a charitable body, registered in
Scotland, with registration number SC005336.

From knepley at gmail.com  Thu Jul 25 06:51:24 2013
From: knepley at gmail.com (Matthew Knepley)
Date: Thu, 25 Jul 2013 06:51:24 -0500
Subject: [petsc-users] Zero pivot in LU factorisation
In-Reply-To: <51F11004.6000408@sms.ed.ac.uk>
References: <51F0F218.1080203@sms.ed.ac.uk> <51F10C51.4000305@sms.ed.ac.uk>
	<51F11004.6000408@sms.ed.ac.uk>
Message-ID: 

On Thu, Jul 25, 2013 at 6:46 AM, Toby wrote:

>
> Ok, it looks like the -st_pc_factor_shift_type option is now being used,
> thanks. Unfortunately it hasn't fixed the problem.
>
> If I use the NONZERO type and a shift amount, the shift amount option is
> not recognised and I still get the zero pivot error. If I use the
> POSITIVE_DEFINITE type I also get the pivot error. Output is below.
>
> Thanks,
> Toby
>
> tobymac:SLEPC toby$ mpiexec ./ex7 -f1
> LHS-N7-M40-Re0.0-b0.1-Wi5.0-amp0.02.petsc -f2
> RHS-N7-M40-Re0.0-b0.1-Wi5.0-amp0.02.petsc -st_pc_type lu
> -st_pc_factor_shift_type NONZERO -st_pc_shift_type_amount 1
>
> Generalized eigenproblem stored in file.
>
> Reading COMPLEX matrices from binary files...
> [0]PETSC ERROR: --------------------- Error Message ------------------------------------
> [0]PETSC ERROR: Detected zero pivot in LU factorization:
> see http://www.mcs.anl.gov/petsc/documentation/faq.html#ZeroPivot!
> [0]PETSC ERROR: Empty row in matrix: row in original ordering 2395 in
> permuted ordering 3600!
>

1) You misspelled the option: -st_pc_factor_shift_amount 1

2) It was used, and fixed your problem on row 0; now you have a problem on
row 2395, namely that it is missing. No LU factorization routine will
succeed here. You must at least put a diagonal 0.0 on all rows.

3) Same goes for the run below.

4) A completely empty row means your matrix is indeed rank deficient. I
have no idea what results are coming out of scipy, but they are not using
factorization.

   Matt

> [0]PETSC ERROR: ------------------------------------------------------------------------
> [0]PETSC ERROR: Petsc Release Version 3.3.0, Patch 5, Sat Dec 1 15:10:41 CST 2012
> [0]PETSC ERROR: See docs/changes/index.html for recent updates.
> [0]PETSC ERROR: See docs/faq.html for hints about trouble shooting.
> [0]PETSC ERROR: See docs/index.html for manual pages.
> [0]PETSC ERROR: ------------------------------------------------------------------------
> [0]PETSC ERROR: ./ex7 on a arch-darw named tobymac by toby Thu Jul 25 12:41:57 2013
> [0]PETSC ERROR: Libraries linked from /opt/local/lib
> [0]PETSC ERROR: Configure run at Tue Jul 23 15:11:27 2013
> [0]PETSC ERROR: Configure options --prefix=/opt/local
> --with-valgrind-dir=/opt/local --with-shared-libraries
> --with-scalar-type=complex --with-clanguage=C++
> --with-superlu-dir=/opt/local --with-blacs-dir=/opt/local
> --with-scalapack-dir=/opt/local --with-mumps-dir=/opt/local
> --with-metis-dir=/opt/local --with-parmetis-dir=/opt/local --COPTFLAGS=-O2
> --CXXOPTFLAGS=-O2 --FOPTFLAGS=-O2 --LDFLAGS=-L/opt/local/lib --CFLAGS="-O2
> -mtune=native" --CXXFLAGS="-O2 -mtune=native"
> [0]PETSC ERROR: ------------------------------------------------------------------------
> [0]PETSC ERROR: MatLUFactorSymbolic_SeqAIJ() line 334 in
> /opt/local/var/macports/build/_Users_toby_MyPorts_scienceports_math_petsc/petsc/work/petsc-3.3-p5/src/mat/impls/aij/seq/aijfact.c
> [0]PETSC ERROR: MatLUFactorSymbolic() line 2750 in
> /opt/local/var/macports/build/_Users_toby_MyPorts_scienceports_math_petsc/petsc/work/petsc-3.3-p5/src/mat/interface/matrix.c
> [0]PETSC ERROR: PCSetUp_LU() line 135 in
> /opt/local/var/macports/build/_Users_toby_MyPorts_scienceports_math_petsc/petsc/work/petsc-3.3-p5/src/ksp/pc/impls/factor/lu/lu.c
> [0]PETSC ERROR: PCSetUp() line 832 in
> /opt/local/var/macports/build/_Users_toby_MyPorts_scienceports_math_petsc/petsc/work/petsc-3.3-p5/src/ksp/pc/interface/precon.c
> [0]PETSC ERROR: KSPSetUp() line 278 in
> /opt/local/var/macports/build/_Users_toby_MyPorts_scienceports_math_petsc/petsc/work/petsc-3.3-p5/src/ksp/ksp/interface/itfunc.c
> [0]PETSC ERROR: STSetUp_Shift() line 94 in src/st/impls/shift/shift.c
> [0]PETSC ERROR: STSetUp() line 280 in src/st/interface/stsolve.c
> [0]PETSC ERROR: EPSSetUp() line 204 in src/eps/interface/setup.c
> [0]PETSC ERROR: EPSSolve() line 109 in src/eps/interface/solve.c
> Number of iterations of the method: 0
> Number of linear iterations of the method: 0
> Number of requested eigenvalues: 1
> Stopping condition: tol=1e-08, maxit=750
> Number of converged approximate eigenpairs: 0
>
> WARNING! There are options you set that were not used!
> WARNING! could be spelling mistake, etc!
> Option left: name:-st_pc_shift_type_amount value: 1
>
> mpiexec ./ex7 -f1 LHS-N7-M40-Re0.0-b0.1-Wi5.0-amp0.02.petsc -f2
> RHS-N7-M40-Re0.0-b0.1-Wi5.0-amp0.02.petsc -st_pc_type lu
> -st_pc_factor_shift_type POSITIVE_DEFINITE
>
> Generalized eigenproblem stored in file.
>
> Reading COMPLEX matrices from binary files...
> [0]PETSC ERROR: --------------------- Error Message ------------------------------------
> [0]PETSC ERROR: Detected zero pivot in LU factorization:
> see http://www.mcs.anl.gov/petsc/documentation/faq.html#ZeroPivot!
> [0]PETSC ERROR: Empty row in matrix: row in original ordering 2395 in
> permuted ordering 3600!
> [0]PETSC ERROR: ------------------------------------------------------------------------
> [0]PETSC ERROR: Petsc Release Version 3.3.0, Patch 5, Sat Dec 1 15:10:41 CST 2012
> [0]PETSC ERROR: See docs/changes/index.html for recent updates.
> [0]PETSC ERROR: See docs/faq.html for hints about trouble shooting.
> [0]PETSC ERROR: See docs/index.html for manual pages.
> [0]PETSC ERROR: ------------------------------------------------------------------------
> [0]PETSC ERROR: ./ex7 on a arch-darw named tobymac by toby Thu Jul 25 12:40:20 2013
> [0]PETSC ERROR: Libraries linked from /opt/local/lib
> [0]PETSC ERROR: Configure run at Tue Jul 23 15:11:27 2013
> [0]PETSC ERROR: Configure options --prefix=/opt/local
> --with-valgrind-dir=/opt/local --with-shared-libraries
> --with-scalar-type=complex --with-clanguage=C++
> --with-superlu-dir=/opt/local --with-blacs-dir=/opt/local
> --with-scalapack-dir=/opt/local --with-mumps-dir=/opt/local
> --with-metis-dir=/opt/local --with-parmetis-dir=/opt/local --COPTFLAGS=-O2
> --CXXOPTFLAGS=-O2 --FOPTFLAGS=-O2 --LDFLAGS=-L/opt/local/lib --CFLAGS="-O2
> -mtune=native" --CXXFLAGS="-O2 -mtune=native"
> [0]PETSC ERROR: ------------------------------------------------------------------------
> [0]PETSC ERROR: MatLUFactorSymbolic_SeqAIJ() line 334 in
> /opt/local/var/macports/build/_Users_toby_MyPorts_scienceports_math_petsc/petsc/work/petsc-3.3-p5/src/mat/impls/aij/seq/aijfact.c
> [0]PETSC ERROR: MatLUFactorSymbolic() line 2750 in
> /opt/local/var/macports/build/_Users_toby_MyPorts_scienceports_math_petsc/petsc/work/petsc-3.3-p5/src/mat/interface/matrix.c
> [0]PETSC ERROR: PCSetUp_LU() line 135 in
> /opt/local/var/macports/build/_Users_toby_MyPorts_scienceports_math_petsc/petsc/work/petsc-3.3-p5/src/ksp/pc/impls/factor/lu/lu.c
> [0]PETSC ERROR: PCSetUp() line 832 in
> /opt/local/var/macports/build/_Users_toby_MyPorts_scienceports_math_petsc/petsc/work/petsc-3.3-p5/src/ksp/pc/interface/precon.c
> [0]PETSC ERROR: KSPSetUp() line 278 in
> /opt/local/var/macports/build/_Users_toby_MyPorts_scienceports_math_petsc/petsc/work/petsc-3.3-p5/src/ksp/ksp/interface/itfunc.c
> [0]PETSC ERROR: STSetUp_Shift() line 94 in src/st/impls/shift/shift.c
> [0]PETSC ERROR: STSetUp() line 280 in src/st/interface/stsolve.c
> [0]PETSC ERROR: EPSSetUp() line 204 in src/eps/interface/setup.c
> [0]PETSC ERROR: EPSSolve() line 109 in src/eps/interface/solve.c
> Number of iterations of the method: 0
> Number of linear iterations of the method: 0
> Number of requested eigenvalues: 1
> Stopping condition: tol=1e-08, maxit=750
> Number of converged approximate eigenpairs: 0
>
> On 25/07/2013 12:34, Jose E. Roman wrote:
>> On 25/07/2013, at 13:30, Toby wrote:
>>
>>> Hi Matt,
>>>
>>> Thanks for the speedy reply. I tried using the options you suggested:
>>>
>>> mpiexec ./ex7 -f1 LHS-N7-M40-Re0.0-b0.1-Wi5.0-amp0.02.petsc -f2
>>> RHS-N7-M40-Re0.0-b0.1-Wi5.0-amp0.02.petsc -st_pc_factor_shift_type
>>> NONZERO -st_pc_shift_type_amount 1
>>>
>>> But I still get the warnings:
>>> Option left: name:-st_pc_factor_shift_type value: NONZERO
>>> Option left: name:-st_pc_shift_type_amount value: 1
>> By default, ST is using PCREDUNDANT. In order to use the above options
>> you must change it to e.g. PCLU.
>> That is, add -st_pc_type lu
>>
>> Jose
>>
>>> I have tried it with just the -st_pc_type jacobi and -st_ksp_view. This
>>> gives me some eigenvalues, but I don't believe them (they do not appear on
>>> the spectrum which I solve using LAPACK where all eigenvalues have a real
>>> part less than 0). The output was very large, but consists of repetitions
>>> of this:
>>>
>>> mpiexec ./ex7 -f1 LHS-N7-M40-Re0.0-b0.1-Wi5.0-amp0.02.petsc -f2
>>> RHS-N7-M40-Re0.0-b0.1-Wi5.0-amp0.02.petsc -st_pc_type jacobi
>>> -st_ksp_view
>>>
>>> Generalized eigenproblem stored in file.
>>> Reading COMPLEX matrices from binary files...
>>> KSP Object:(st_) 1 MPI processes
>>>   type: preonly
>>>   maximum iterations=10000, initial guess is zero
>>>   tolerances:  relative=1e-08, absolute=1e-50, divergence=10000
>>>   left preconditioning
>>>   using NONE norm type for convergence test
>>> PC Object:(st_) 1 MPI processes
>>>   type: jacobi
>>>   linear system matrix = precond matrix:
>>>   Matrix Object:   1 MPI processes
>>>     type: seqaij
>>>     rows=6000, cols=6000
>>>     total: nonzeros=3600, allocated nonzeros=3600
>>>     total number of mallocs used during MatSetValues calls =0
>>>       using I-node routines: found 4080 nodes, limit used is 5
>>>
>>> ...
>>>
>>> Number of iterations of the method: 189
>>> Number of linear iterations of the method: 1520
>>> Number of requested eigenvalues: 1
>>> Stopping condition: tol=1e-08, maxit=750
>>> Number of converged approximate eigenpairs: 2
>>>
>>>            k              ||Ax-kBx||/||kx||
>>>    -----------------    ------------------
>>>    1388.774454+0.001823 i     0.751726
>>>    1388.774441+0.001820 i     0.912665
>>>
>>> Thanks again,
>>> Toby
>
-- 
The University of Edinburgh is a charitable body, registered in
Scotland, with registration number SC005336.

-- 
What most experimenters take for granted before they begin their
experiments is infinitely more interesting than any results to which
their experiments lead.
-- Norbert Wiener
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From zonexo at gmail.com  Thu Jul 25 07:57:25 2013
From: zonexo at gmail.com (TAY wee-beng)
Date: Thu, 25 Jul 2013 14:57:25 +0200
Subject: [petsc-users] Using GAMG to speed up solving of Poisson eqn
In-Reply-To: <87fvv984f8.fsf@mcs.anl.gov>
References: <51EA9301.8000208@gmail.com> <51EAB08C.6080605@gmail.com>
	<87ip058bhr.fsf@mcs.anl.gov> <51EACEA3.2010205@gmail.com>
	<87fvv984f8.fsf@mcs.anl.gov>
Message-ID: <51F120B5.7000407@gmail.com>

Hi,

Part of my code is:

  do ...
    call MatSetValues(A_mat,1,II,1,JJ,big_A(ijk,kk),ADD_VALUES,ierr)
  end do

  call MatAssemblyBegin(A_mat,MAT_FINAL_ASSEMBLY,ierr)
  call MatAssemblyEnd(A_mat,MAT_FINAL_ASSEMBLY,ierr)

  call MatSetOption(A_mat,MAT_STRUCTURALLY_SYMMETRIC,PETSC_TRUE,ierr)
  call MatSetOption(A_mat,MAT_NEW_NONZERO_LOCATIONS,PETSC_FALSE,ierr)

  call KSPSetOperators(ksp,A_mat,A_mat,SAME_NONZERO_PATTERN,ierr)
  call KSPGetPC(ksp,pc,ierr)
  call KSPSetOptionsPrefix(ksp,"poisson_",ierr)

  ksptype = KSPBCGS
  call KSPSetType(ksp,ksptype,ierr)
  !call PCSetType(pc,PCGAMG,ierr)
  call KSPSetFromOptions(ksp,ierr)

  tol = 1.e-5
  call KSPSetTolerances(ksp,tol,PETSC_DEFAULT_DOUBLE_PRECISION,PETSC_DEFAULT_DOUBLE_PRECISION,PETSC_DEFAULT_INTEGER,ierr)

  do ...
    call VecSetValue(b_rhs,II,q_p(ijk),INSERT_VALUES,ierr)
  end do

  call VecAssemblyBegin(b_rhs,ierr)
  call VecAssemblyEnd(b_rhs,ierr)
  call VecAssemblyBegin(xx,ierr)
  call VecAssemblyEnd(xx,ierr)

  call KSPSolve(ksp,b_rhs,xx,ierr)   ! <- hangs here

  call KSPGetConvergedReason(ksp,reason,ierr)

It hangs at KSPSolve, when used with the options:

-poisson_pc_gamg_agg_nsmooths 1 -poisson_pc_type gamg

Yours sincerely,

TAY wee-beng

On 20/7/2013 8:36 PM, Jed Brown wrote:
> -poisson_pc_gamg_agg_nsmooths 1 -poisson_pc_type gamg
-------------- next part --------------
An HTML attachment was scrubbed...
From jedbrown at mcs.anl.gov Thu Jul 25 08:16:51 2013
From: jedbrown at mcs.anl.gov (Jed Brown)
Date: Thu, 25 Jul 2013 08:16:51 -0500
Subject: [petsc-users] Using GAMG to speed up solving of Poisson eqn
In-Reply-To: <51F120B5.7000407@gmail.com>
References: <51EA9301.8000208@gmail.com> <51EAB08C.6080605@gmail.com>
 <87ip058bhr.fsf@mcs.anl.gov> <51EACEA3.2010205@gmail.com>
 <87fvv984f8.fsf@mcs.anl.gov> <51F120B5.7000407@gmail.com>
Message-ID: <87r4emaif0.fsf@mcs.anl.gov>

TAY wee-beng writes:

> /**/call KSPSolve(ksp,b_rhs,xx,ierr)/**/- hang
> /**/
> /**/call KSPGetConvergedReason(ksp,reason,ierr)/*

If there was any question, this commenting style is insane.

> It hangs at */KSPSolve/*, when used with the option :
>
> -poisson_pc_gamg_agg_nsmooths 1 -poisson_pc_type gamg

Run in a debugger so you can find where it "hangs". Add -ksp_monitor
-pc_gamg_verbose so you can get more information about what setup steps
complete successfully.
-------------- next part --------------
A non-text attachment was scrubbed...
Name: not available
Type: application/pgp-signature
Size: 835 bytes
Desc: not available
URL: 

From T.W.Searle at sms.ed.ac.uk Thu Jul 25 08:44:27 2013
From: T.W.Searle at sms.ed.ac.uk (Toby)
Date: Thu, 25 Jul 2013 14:44:27 +0100
Subject: [petsc-users] Zero pivot in LU factorisation
In-Reply-To: 
References: <51F0F218.1080203@sms.ed.ac.uk> <51F10C51.4000305@sms.ed.ac.uk>
 <51F11004.6000408@sms.ed.ac.uk>
Message-ID: <51F12BBB.4080707@sms.ed.ac.uk>

Thanks, I have corrected the shift amount option and I get the same error
as when using the -st_pc_factor_shift_amount 1

I have been using the PETSc IO module to convert my matrix from sparse
coordinate format into PETSc binary. I am guessing there is some problem
with this step. I use the function:
PetscBinaryIO.PetscBinaryIO().writeMatSciPy( fh, matrix)
to convert into Petsc binary format. Is there something wrong with this?
Do I need to manually put zeros into the diagonal elements of zero rows
of B?

When I solve the problem using scipy I use dense matrices. There are
definitely entries in the matrix A at row 2395. I used python to confirm
this before I convert it to petsc format:

column | value (LHS)
  1795 | 10.9955742876j
  1198 | (76+0j)
  1196 | (72+0j)
   595 | 0.02j

However, the matrix is all zeros at row 2395 in B. As far as I
understand, this doesn't make my problem singular. I am sure that the
problem is not rank deficient when I solve it in scipy.

Thanks again,
Toby

On 25/07/2013 12:51, Matthew Knepley wrote:
> On Thu, Jul 25, 2013 at 6:46 AM, Toby > wrote:
>
> Ok, it looks like the -st_pc_factor_shift_type option is now being
> used, thanks. Unfortunately it hasn't fixed the problem.
>
> if I use the NONZERO type and a shift amount, the shift amount
> option is not recognised and I still get the zero pivot error. If
> I use the POSITIVE_DEFINITE type I also get the pivot error.
> Output is below
>
> Thanks,
> Toby
>
> tobymac:SLEPC toby$ mpiexec ./ex7 -f1
> LHS-N7-M40-Re0.0-b0.1-Wi5.0-amp0.02.petsc -f2
> RHS-N7-M40-Re0.0-b0.1-Wi5.0-amp0.02.petsc -st_pc_type lu
> -st_pc_factor_shift_type NONZERO -st_pc_shift_type_amount 1
>
> Generalized eigenproblem stored in file.
>
> Reading COMPLEX matrices from binary files...
> [0]PETSC ERROR: --------------------- Error Message
> ------------------------------------
> [0]PETSC ERROR: Detected zero pivot in LU factorization:
> see http://www.mcs.anl.gov/petsc/documentation/faq.html#ZeroPivot!
> [0]PETSC ERROR: Empty row in matrix: row in original ordering 2395
> in permuted ordering 3600!
> > > 1) You misspelled the option: -st_pc_factor_shift_amount 1 > > 2) It was used, and fixed you problem on row 0, now you have a problem > on row 2395, namely that it is missing. No > LU factorization routine will succeed here. You must at least put > a diagonal 0.0 on all rows. > > 3) Same goes for the run below. > > 4) A completely empty row means your matrix is indeed rank deficient. > I have no idea what results are coming out of > scipy, but they are not using factorization. > > Matt > > [0]PETSC ERROR: > ------------------------------------------------------------------------ > [0]PETSC ERROR: Petsc Release Version 3.3.0, Patch 5, Sat Dec 1 > 15:10:41 CST 2012 > [0]PETSC ERROR: See docs/changes/index.html for recent updates. > [0]PETSC ERROR: See docs/faq.html for hints about trouble shooting. > [0]PETSC ERROR: See docs/index.html for manual pages. > [0]PETSC ERROR: > ------------------------------------------------------------------------ > [0]PETSC ERROR: ./ex7 on a arch-darw named tobymac by toby Thu Jul > 25 12:41:57 2013 > [0]PETSC ERROR: Libraries linked from /opt/local/lib > [0]PETSC ERROR: Configure run at Tue Jul 23 15:11:27 2013 > [0]PETSC ERROR: Configure options --prefix=/opt/local > --with-valgrind-dir=/opt/local --with-shared-libraries > --with-scalar-type=complex --with-clanguage=C++ > --with-superlu-dir=/opt/local --with-blacs-dir=/opt/local > --with-scalapack-dir=/opt/local --with-mumps-dir=/opt/local > --with-metis-dir=/opt/local --with-parmetis-dir=/opt/local > --COPTFLAGS=-O2 --CXXOPTFLAGS=-O2 --FOPTFLAGS=-O2 > --LDFLAGS=-L/opt/local/lib --CFLAGS="-O2 -mtune=native" > --CXXFLAGS="-O2 -mtune=native" > [0]PETSC ERROR: > ------------------------------------------------------------------------ > [0]PETSC ERROR: MatLUFactorSymbolic_SeqAIJ() line 334 in > /opt/local/var/macports/build/_Users_toby_MyPorts_scienceports_math_petsc/petsc/work/petsc-3.3-p5/src/mat/impls/aij/seq/aijfact.c > [0]PETSC ERROR: MatLUFactorSymbolic() line 2750 in > /opt/local/var/macports/build/_Users_toby_MyPorts_scienceports_math_petsc/petsc/work/petsc-3.3-p5/src/mat/interface/matrix.c > [0]PETSC ERROR: PCSetUp_LU() line 135 in > /opt/local/var/macports/build/_Users_toby_MyPorts_scienceports_math_petsc/petsc/work/petsc-3.3-p5/src/ksp/pc/impls/factor/lu/lu.c > [0]PETSC ERROR: PCSetUp() line 832 in > /opt/local/var/macports/build/_Users_toby_MyPorts_scienceports_math_petsc/petsc/work/petsc-3.3-p5/src/ksp/pc/interface/precon.c > [0]PETSC ERROR: KSPSetUp() line 278 in > /opt/local/var/macports/build/_Users_toby_MyPorts_scienceports_math_petsc/petsc/work/petsc-3.3-p5/src/ksp/ksp/interface/itfunc.c > [0]PETSC ERROR: STSetUp_Shift() line 94 in src/st/impls/shift/shift.c > [0]PETSC ERROR: STSetUp() line 280 in src/st/interface/stsolve.c > [0]PETSC ERROR: EPSSetUp() line 204 in src/eps/interface/setup.c > [0]PETSC ERROR: EPSSolve() line 109 in src/eps/interface/solve.c > Number of iterations of the method: 0 > Number of linear iterations of the method: 0 > Number of requested eigenvalues: 1 > Stopping condition: tol=1e-08, maxit=750 > Number of converged approximate eigenpairs: 0 > > WARNING! There are options you set that were not used! > WARNING! could be spelling mistake, etc! > Option left: name:-st_pc_shift_type_amount value: 1 > > mpiexec ./ex7 -f1 LHS-N7-M40-Re0.0-b0.1-Wi5.0-amp0.02.petsc -f2 > RHS-N7-M40-Re0.0-b0.1-Wi5.0-amp0.02.petsc -st_pc_type lu > -st_pc_factor_shift_type POSITIVE_DEFINITE > > Generalized eigenproblem stored in file. > > Reading COMPLEX matrices from binary files... 
> [0]PETSC ERROR: --------------------- Error Message > ------------------------------------ > [0]PETSC ERROR: Detected zero pivot in LU factorization: > see http://www.mcs.anl.gov/petsc/documentation/faq.html#ZeroPivot! > [0]PETSC ERROR: Empty row in matrix: row in original ordering 2395 > in permuted ordering 3600! > [0]PETSC ERROR: > ------------------------------------------------------------------------ > [0]PETSC ERROR: Petsc Release Version 3.3.0, Patch 5, Sat Dec 1 > 15:10:41 CST 2012 > [0]PETSC ERROR: See docs/changes/index.html for recent updates. > [0]PETSC ERROR: See docs/faq.html for hints about trouble shooting. > [0]PETSC ERROR: See docs/index.html for manual pages. > [0]PETSC ERROR: > ------------------------------------------------------------------------ > [0]PETSC ERROR: ./ex7 on a arch-darw named tobymac by toby Thu Jul > 25 12:40:20 2013 > [0]PETSC ERROR: Libraries linked from /opt/local/lib > [0]PETSC ERROR: Configure run at Tue Jul 23 15:11:27 2013 > [0]PETSC ERROR: Configure options --prefix=/opt/local > --with-valgrind-dir=/opt/local --with-shared-libraries > --with-scalar-type=complex --with-clanguage=C++ > --with-superlu-dir=/opt/local --with-blacs-dir=/opt/local > --with-scalapack-dir=/opt/local --with-mumps-dir=/opt/local > --with-metis-dir=/opt/local --with-parmetis-dir=/opt/local > --COPTFLAGS=-O2 --CXXOPTFLAGS=-O2 --FOPTFLAGS=-O2 > --LDFLAGS=-L/opt/local/lib --CFLAGS="-O2 -mtune=native" > --CXXFLAGS="-O2 -mtune=native" > [0]PETSC ERROR: > ------------------------------------------------------------------------ > [0]PETSC ERROR: MatLUFactorSymbolic_SeqAIJ() line 334 in > /opt/local/var/macports/build/_Users_toby_MyPorts_scienceports_math_petsc/petsc/work/petsc-3.3-p5/src/mat/impls/aij/seq/aijfact.c > [0]PETSC ERROR: MatLUFactorSymbolic() line 2750 in > /opt/local/var/macports/build/_Users_toby_MyPorts_scienceports_math_petsc/petsc/work/petsc-3.3-p5/src/mat/interface/matrix.c > [0]PETSC ERROR: PCSetUp_LU() line 135 in > /opt/local/var/macports/build/_Users_toby_MyPorts_scienceports_math_petsc/petsc/work/petsc-3.3-p5/src/ksp/pc/impls/factor/lu/lu.c > [0]PETSC ERROR: PCSetUp() line 832 in > /opt/local/var/macports/build/_Users_toby_MyPorts_scienceports_math_petsc/petsc/work/petsc-3.3-p5/src/ksp/pc/interface/precon.c > [0]PETSC ERROR: KSPSetUp() line 278 in > /opt/local/var/macports/build/_Users_toby_MyPorts_scienceports_math_petsc/petsc/work/petsc-3.3-p5/src/ksp/ksp/interface/itfunc.c > [0]PETSC ERROR: STSetUp_Shift() line 94 in src/st/impls/shift/shift.c > [0]PETSC ERROR: STSetUp() line 280 in src/st/interface/stsolve.c > [0]PETSC ERROR: EPSSetUp() line 204 in src/eps/interface/setup.c > [0]PETSC ERROR: EPSSolve() line 109 in src/eps/interface/solve.c > Number of iterations of the method: 0 > Number of linear iterations of the method: 0 > Number of requested eigenvalues: 1 > Stopping condition: tol=1e-08, maxit=750 > Number of converged approximate eigenpairs: 0 > > > > > On 25/07/2013 12:34, Jose E. Roman wrote: > > El 25/07/2013, a las 13:30, Toby escribi?: > > Hi Matt, > > Thanks for the speedy reply. I tried using the options you > suggested: > > mpiexec ./ex7 -f1 > LHS-N7-M40-Re0.0-b0.1-Wi5.0-amp0.02.petsc -f2 > RHS-N7-M40-Re0.0-b0.1-Wi5.0-amp0.02.petsc > -st_pc_factor_shift_type NONZERO -st_pc_shift_type_amount 1 > > But I still get the warnings: > Option left: name:-st_pc_factor_shift_type value: NONZERO > Option left: name:-st_pc_shift_type_amount value: 1 > > By default, ST is using PCREDUNDANT. 
In order to use the above > options you must change it to e.g. PCLU. > That is, add -st_pc_type lu > > Jose > > > I have tried it with just the -st_pc_type jacobi and > -st_ksp_view. This gives me some eigenvalues, but I don't > believe them (they do not appear on the spectrum which I > solve using LAPACK where all eigenvalues have a real part > less than 0). The output was very large, but consists of > repetitions of this: > > mpiexec ./ex7 -f1 > LHS-N7-M40-Re0.0-b0.1-Wi5.0-amp0.02.petsc -f2 > RHS-N7-M40-Re0.0-b0.1-Wi5.0-amp0.02.petsc -st_pc_type > jacobi -st_ksp_view > > Generalized eigenproblem stored in file. > > Reading COMPLEX matrices from binary files... > KSP Object:(st_) 1 MPI processes > type: preonly > maximum iterations=10000, initial guess is zero > tolerances: relative=1e-08, absolute=1e-50, > divergence=10000 > left preconditioning > using NONE norm type for convergence test > PC Object:(st_) 1 MPI processes > type: jacobi > linear system matrix = precond matrix: > Matrix Object: 1 MPI processes > type: seqaij > rows=6000, cols=6000 > total: nonzeros=3600, allocated nonzeros=3600 > total number of mallocs used during MatSetValues calls =0 > using I-node routines: found 4080 nodes, limit used > is 5 > > ... > > Number of iterations of the method: 189 > Number of linear iterations of the method: 1520 > Number of requested eigenvalues: 1 > Stopping condition: tol=1e-08, maxit=750 > Number of converged approximate eigenpairs: 2 > > k ||Ax-kBx||/||kx|| > ----------------- ------------------ > 1388.774454+0.001823 i 0.751726 > 1388.774441+0.001820 i 0.912665 > > > Thanks again, > Toby > > > > > -- > The University of Edinburgh is a charitable body, registered in > Scotland, with registration number SC005336. > > > > > -- > What most experimenters take for granted before they begin their > experiments is infinitely more interesting than any results to which > their experiments lead. > -- Norbert Wiener -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- An embedded and charset-unspecified text was scrubbed... Name: not available URL: From zonexo at gmail.com Thu Jul 25 08:57:29 2013 From: zonexo at gmail.com (TAY wee-beng) Date: Thu, 25 Jul 2013 15:57:29 +0200 Subject: [petsc-users] Using GAMG to speed up solving of Poisson eqn In-Reply-To: <87r4emaif0.fsf@mcs.anl.gov> References: <51EA9301.8000208@gmail.com> <51EAB08C.6080605@gmail.com> <87ip058bhr.fsf@mcs.anl.gov> <51EACEA3.2010205@gmail.com> <87fvv984f8.fsf@mcs.anl.gov> <51F120B5.7000407@gmail.com> <87r4emaif0.fsf@mcs.anl.gov> Message-ID: <51F12EC9.7090503@gmail.com> Dear Jed, I'm sorry if I did it wrongly, and thank you for the enlightenment. I am running the debugger under VS2008 and after adding the options: -poisson_ksp_monitor -poisson_pc_gamg_verbose or -ksp_monitor -pc_gamg_verbose I still did not get any additional information. Do you have any suggestion? Yours sincerely, TAY wee-beng On 25/7/2013 3:16 PM, Jed Brown wrote: > TAY wee-beng writes: >> /**/call KSPSolve(ksp,b_rhs,xx,ierr)/**/- hang >> /**/ >> /**/call KSPGetConvergedReason(ksp,reason,ierr)/* > If there was any question, this commenting style is insane. > >> It hangs at */KSPSolve/*, when used with the option : >> >> -poisson_pc_gamg_agg_nsmooths 1 -poisson_pc_type gamg > Run in a debugger so you can find where it "hangs". Add -ksp_monitor > -pc_gamg_verbose so you can get more information about what setup steps > complete successfully. 
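A note in passing for archive readers: before reaching for the debugger, PETSc's generic diagnostics can help localize where a solve stalls. A hypothetical invocation (the executable name here is assumed) would be along the lines of

  mpiexec -n 4 ./solver -poisson_ksp_monitor -poisson_ksp_converged_reason -info

-poisson_ksp_monitor only prints once KSPSolve() is actually iterating, while -info traces object setup, so the last message emitted before the hang points at the stage that never completed.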
From jedbrown at mcs.anl.gov Thu Jul 25 09:52:40 2013 From: jedbrown at mcs.anl.gov (Jed Brown) Date: Thu, 25 Jul 2013 09:52:40 -0500 Subject: [petsc-users] Using GAMG to speed up solving of Poisson eqn In-Reply-To: <51F12EC9.7090503@gmail.com> References: <51EA9301.8000208@gmail.com> <51EAB08C.6080605@gmail.com> <87ip058bhr.fsf@mcs.anl.gov> <51EACEA3.2010205@gmail.com> <87fvv984f8.fsf@mcs.anl.gov> <51F120B5.7000407@gmail.com> <87r4emaif0.fsf@mcs.anl.gov> <51F12EC9.7090503@gmail.com> Message-ID: <87iozyadzb.fsf@mcs.anl.gov> TAY wee-beng writes: > Dear Jed, > > I'm sorry if I did it wrongly, and thank you for the enlightenment. I am > running the debugger under VS2008 and after adding the options: > > -poisson_ksp_monitor -poisson_pc_gamg_verbose > > or > > -ksp_monitor -pc_gamg_verbose > > I still did not get any additional information. Do you have any suggestion? Yeah, break the debugger so you can get a stack trace and look around. What do you normally do with a debugger? -------------- next part -------------- A non-text attachment was scrubbed... Name: not available Type: application/pgp-signature Size: 835 bytes Desc: not available URL: From l.nagy at ed.ac.uk Thu Jul 25 09:53:20 2013 From: l.nagy at ed.ac.uk (Lesleis Nagy) Date: Thu, 25 Jul 2013 15:53:20 +0100 Subject: [petsc-users] Setting PETSc Sundials Options Using PETSC_OPTIONS Message-ID: Hello, I currently use PETSc (version: 3.3-p7) and petsc4py along with the Sundials interface to perform time stepping for some FEM code that I have written (for completeness I make use of FEniCS). I would like to pass options to PETSc from the command line using the PETSC_OPTIONS environment variable. I set it with the following values: $ export PETSC_OPTIONS="-ts_sundials_type bdf -ts_sundials_gramschmidt_type modified -ts_sundials_monitor_steps -options_table -options_left" I then run the code using: $ mpiexec -n 2 python script.py Everything appears to work and my code terminates. However at the end of the run, '-options_table' / '-options_left' reports that #PETSc Option Table entries: -options_left -options_table -ts_sundials_gramschmidt_type modified -ts_sundials_monitor_steps -ts_sundials_type bdf #End of PETSc Option Table entries There are 3 unused database options. They are: Option left: name:-ts_sundials_gramschmidt_type value: modified Option left: name:-ts_sundials_monitor_steps no value Option left: name:-ts_sundials_type value: bdf Could someone explain to me why PETSc is reading the Sundials options that I've supplied but is not using them? Kind regards Les -- The University of Edinburgh is a charitable body, registered in Scotland, with registration number SC005336. From knepley at gmail.com Thu Jul 25 10:54:19 2013 From: knepley at gmail.com (Matthew Knepley) Date: Thu, 25 Jul 2013 10:54:19 -0500 Subject: [petsc-users] Setting PETSc Sundials Options Using PETSC_OPTIONS In-Reply-To: References: Message-ID: On Thu, Jul 25, 2013 at 9:53 AM, Lesleis Nagy wrote: > Hello, > > I currently use PETSc (version: 3.3-p7) and petsc4py along with the > Sundials interface to perform time stepping for some FEM code that I have > written (for completeness I make use of FEniCS). I would like to pass > options to PETSc from the command line using the PETSC_OPTIONS environment > variable. 
I set it with the following values: > > $ export PETSC_OPTIONS="-ts_sundials_type bdf > -ts_sundials_gramschmidt_type modified -ts_sundials_monitor_steps > -options_table -options_left" > > I then run the code using: > > $ mpiexec -n 2 python script.py > > Everything appears to work and my code terminates. However at the end of > the run, '-options_table' / '-options_left' reports that > > #PETSc Option Table entries: > -options_left > -options_table > -ts_sundials_gramschmidt_type modified > -ts_sundials_monitor_steps > -ts_sundials_type bdf > #End of PETSc Option Table entries > There are 3 unused database options. They are: > Option left: name:-ts_sundials_gramschmidt_type value: modified > Option left: name:-ts_sundials_monitor_steps no value > Option left: name:-ts_sundials_type value: bdf > > Could someone explain to me why PETSc is reading the Sundials options that > I've supplied but is not using them? > 1) Your solver might have an options prefix 2) You might forget to call TSSetFromOptions() The easiest thing to do is run with -start_in_debugger and set a breakpoint in TSSetFromOptions() Matt > Kind regards > Les > -- > The University of Edinburgh is a charitable body, registered in > Scotland, with registration number SC005336. > > -- What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. -- Norbert Wiener -------------- next part -------------- An HTML attachment was scrubbed... URL: From balay at mcs.anl.gov Thu Jul 25 11:15:56 2013 From: balay at mcs.anl.gov (Satish Balay) Date: Thu, 25 Jul 2013 11:15:56 -0500 (CDT) Subject: [petsc-users] Setting PETSc Sundials Options Using PETSC_OPTIONS In-Reply-To: References: Message-ID: On Thu, 25 Jul 2013, Matthew Knepley wrote: > On Thu, Jul 25, 2013 at 9:53 AM, Lesleis Nagy wrote: > > > Hello, > > > > I currently use PETSc (version: 3.3-p7) and petsc4py along with the > > Sundials interface to perform time stepping for some FEM code that I have > > written (for completeness I make use of FEniCS). I would like to pass > > options to PETSc from the command line using the PETSC_OPTIONS environment > > variable. I set it with the following values: > > > > $ export PETSC_OPTIONS="-ts_sundials_type bdf > > -ts_sundials_gramschmidt_type modified -ts_sundials_monitor_steps > > -options_table -options_left" > > > > I then run the code using: > > > > $ mpiexec -n 2 python script.py > > > > Everything appears to work and my code terminates. However at the end of > > the run, '-options_table' / '-options_left' reports that > > > > #PETSc Option Table entries: > > -options_left > > -options_table > > -ts_sundials_gramschmidt_type modified > > -ts_sundials_monitor_steps > > -ts_sundials_type bdf > > #End of PETSc Option Table entries > > There are 3 unused database options. They are: > > Option left: name:-ts_sundials_gramschmidt_type value: modified > > Option left: name:-ts_sundials_monitor_steps no value > > Option left: name:-ts_sundials_type value: bdf > > > > Could someone explain to me why PETSc is reading the Sundials options that > > I've supplied but is not using them? > > > > 1) Your solver might have an options prefix > > 2) You might forget to call TSSetFromOptions() > > The easiest thing to do is run with -start_in_debugger and set a breakpoint > in TSSetFromOptions() Also use -ts_view to check if sundials is actually getting set/used.. 
Satish > > Matt > > > > Kind regards > > Les > > -- > > The University of Edinburgh is a charitable body, registered in > > Scotland, with registration number SC005336. > > > > > > > From l.nagy at ed.ac.uk Thu Jul 25 11:27:37 2013 From: l.nagy at ed.ac.uk (Lesleis Nagy) Date: Thu, 25 Jul 2013 17:27:37 +0100 Subject: [petsc-users] Setting PETSc Sundials Options Using PETSC_OPTIONS In-Reply-To: References: Message-ID: <7FD7269B-72D4-4C13-A848-3160B4B3D658@ed.ac.uk> Hi Satish, Matthew, I'm just about to compile a version of PETSc with debugging symbols included, so that I can use gdb. Satish, I've passed the option that you suggest and the output is as follows: #PETSc Option Table entries: -options_left -options_table -ts_sundials_gramschmidt_type modified -ts_sundials_monitor_steps -ts_sundials_type bdf -ts_view #End of PETSc Option Table entries There are 7 unused database options. They are: Option left: name:-ts_sundials_gramschmidt_type value: modified Option left: name:-ts_sundials_monitor_steps no value Option left: name:-ts_sundials_type value: bdf Option left: name:-ts_view no value It appears as though PETSc ignores the 'ts_view' option. Les On 25 Jul 2013, at 17:15, Satish Balay wrote: > On Thu, 25 Jul 2013, Matthew Knepley wrote: > >> On Thu, Jul 25, 2013 at 9:53 AM, Lesleis Nagy wrote: >> >>> Hello, >>> >>> I currently use PETSc (version: 3.3-p7) and petsc4py along with the >>> Sundials interface to perform time stepping for some FEM code that I have >>> written (for completeness I make use of FEniCS). I would like to pass >>> options to PETSc from the command line using the PETSC_OPTIONS environment >>> variable. I set it with the following values: >>> >>> $ export PETSC_OPTIONS="-ts_sundials_type bdf >>> -ts_sundials_gramschmidt_type modified -ts_sundials_monitor_steps >>> -options_table -options_left" >>> >>> I then run the code using: >>> >>> $ mpiexec -n 2 python script.py >>> >>> Everything appears to work and my code terminates. However at the end of >>> the run, '-options_table' / '-options_left' reports that >>> >>> #PETSc Option Table entries: >>> -options_left >>> -options_table >>> -ts_sundials_gramschmidt_type modified >>> -ts_sundials_monitor_steps >>> -ts_sundials_type bdf >>> #End of PETSc Option Table entries >>> There are 3 unused database options. They are: >>> Option left: name:-ts_sundials_gramschmidt_type value: modified >>> Option left: name:-ts_sundials_monitor_steps no value >>> Option left: name:-ts_sundials_type value: bdf >>> >>> Could someone explain to me why PETSc is reading the Sundials options that >>> I've supplied but is not using them? >>> >> >> 1) Your solver might have an options prefix >> >> 2) You might forget to call TSSetFromOptions() >> >> The easiest thing to do is run with -start_in_debugger and set a breakpoint >> in TSSetFromOptions() > > Also use -ts_view to check if sundials is actually getting set/used.. > > Satish > >> >> Matt >> >> >>> Kind regards >>> Les >>> -- >>> The University of Edinburgh is a charitable body, registered in >>> Scotland, with registration number SC005336. >>> >>> >> >> >> > > -- The University of Edinburgh is a charitable body, registered in Scotland, with registration number SC005336. 
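For reference, a minimal sketch of the stepping pattern being debugged here, written in C with hypothetical names (petsc4py exposes the same calls as ts.setFromOptions(), ts.step(), and ts.view()). Two details matter for the options above: TSSetFromOptions() must be called before stepping, otherwise the -ts_sundials_* options are never consulted, and -ts_view is only processed inside TSSolve(), so a hand-rolled TSStep() loop has to call TSView() itself, as is confirmed later in this thread:

  TS        ts;
  PetscBool done = PETSC_FALSE;           /* set by the application's exit test */
  TSCreate(PETSC_COMM_WORLD,&ts);
  TSSetType(ts,TSSUNDIALS);               /* needs a PETSc build configured with Sundials */
  /* ... register the RHS function, set the initial state, etc. ... */
  TSSetFromOptions(ts);                   /* reads -ts_type, -ts_sundials_*, monitors */
  while (!done) {
    TSStep(ts);
    /* ... evaluate the exit criterion and update 'done' ... */
  }
  TSView(ts,PETSC_VIEWER_STDOUT_WORLD);   /* what -ts_view would have printed via TSSolve() */
  TSDestroy(&ts);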
From balay at mcs.anl.gov Thu Jul 25 11:29:25 2013 From: balay at mcs.anl.gov (Satish Balay) Date: Thu, 25 Jul 2013 11:29:25 -0500 (CDT) Subject: [petsc-users] Setting PETSc Sundials Options Using PETSC_OPTIONS In-Reply-To: <7FD7269B-72D4-4C13-A848-3160B4B3D658@ed.ac.uk> References: <7FD7269B-72D4-4C13-A848-3160B4B3D658@ed.ac.uk> Message-ID: Then as Matt refered to - TSSetFromOptions() isn't getting called. Satish On Thu, 25 Jul 2013, Lesleis Nagy wrote: > Hi Satish, Matthew, > > I'm just about to compile a version of PETSc with debugging symbols included, so that I can use gdb. Satish, I've passed the option that you suggest and the output is as follows: > > #PETSc Option Table entries: > -options_left > -options_table > -ts_sundials_gramschmidt_type modified > -ts_sundials_monitor_steps > -ts_sundials_type bdf > -ts_view > #End of PETSc Option Table entries > There are 7 unused database options. They are: > Option left: name:-ts_sundials_gramschmidt_type value: modified > Option left: name:-ts_sundials_monitor_steps no value > Option left: name:-ts_sundials_type value: bdf > Option left: name:-ts_view no value > > It appears as though PETSc ignores the 'ts_view' option. > > Les > > On 25 Jul 2013, at 17:15, Satish Balay wrote: > > > On Thu, 25 Jul 2013, Matthew Knepley wrote: > > > >> On Thu, Jul 25, 2013 at 9:53 AM, Lesleis Nagy wrote: > >> > >>> Hello, > >>> > >>> I currently use PETSc (version: 3.3-p7) and petsc4py along with the > >>> Sundials interface to perform time stepping for some FEM code that I have > >>> written (for completeness I make use of FEniCS). I would like to pass > >>> options to PETSc from the command line using the PETSC_OPTIONS environment > >>> variable. I set it with the following values: > >>> > >>> $ export PETSC_OPTIONS="-ts_sundials_type bdf > >>> -ts_sundials_gramschmidt_type modified -ts_sundials_monitor_steps > >>> -options_table -options_left" > >>> > >>> I then run the code using: > >>> > >>> $ mpiexec -n 2 python script.py > >>> > >>> Everything appears to work and my code terminates. However at the end of > >>> the run, '-options_table' / '-options_left' reports that > >>> > >>> #PETSc Option Table entries: > >>> -options_left > >>> -options_table > >>> -ts_sundials_gramschmidt_type modified > >>> -ts_sundials_monitor_steps > >>> -ts_sundials_type bdf > >>> #End of PETSc Option Table entries > >>> There are 3 unused database options. They are: > >>> Option left: name:-ts_sundials_gramschmidt_type value: modified > >>> Option left: name:-ts_sundials_monitor_steps no value > >>> Option left: name:-ts_sundials_type value: bdf > >>> > >>> Could someone explain to me why PETSc is reading the Sundials options that > >>> I've supplied but is not using them? > >>> > >> > >> 1) Your solver might have an options prefix > >> > >> 2) You might forget to call TSSetFromOptions() > >> > >> The easiest thing to do is run with -start_in_debugger and set a breakpoint > >> in TSSetFromOptions() > > > > Also use -ts_view to check if sundials is actually getting set/used.. > > > > Satish > > > >> > >> Matt > >> > >> > >>> Kind regards > >>> Les > >>> -- > >>> The University of Edinburgh is a charitable body, registered in > >>> Scotland, with registration number SC005336. 
> >>> > >>> > >> > >> > >> > > > > > > > From l.nagy at ed.ac.uk Thu Jul 25 11:59:09 2013 From: l.nagy at ed.ac.uk (Lesleis Nagy) Date: Thu, 25 Jul 2013 17:59:09 +0100 Subject: [petsc-users] Setting PETSc Sundials Options Using PETSC_OPTIONS In-Reply-To: References: <7FD7269B-72D4-4C13-A848-3160B4B3D658@ed.ac.uk> Message-ID: Hi Satish, Matthew, Thank you very much for your help. Everything seems to work with the exception that PETSc now doesn't pick up the 'ts_view' option! For completeness, when using PETSc / petsc4py / FEniCS I simply called 'ts.setFromOptions()' on my on my petsc4py time stepper object (called 'ts'). Is the appearance of 'ts_view' in the unused database options list something I should be concerned about? When I call the 'view()' method on my time stepper object it seems to output information that I would expect to see. Many thanks Les On 25 Jul 2013, at 17:29, Satish Balay wrote: > Then as Matt refered to - TSSetFromOptions() isn't getting called. > > Satish > > On Thu, 25 Jul 2013, Lesleis Nagy wrote: > >> Hi Satish, Matthew, >> >> I'm just about to compile a version of PETSc with debugging symbols included, so that I can use gdb. Satish, I've passed the option that you suggest and the output is as follows: >> >> #PETSc Option Table entries: >> -options_left >> -options_table >> -ts_sundials_gramschmidt_type modified >> -ts_sundials_monitor_steps >> -ts_sundials_type bdf >> -ts_view >> #End of PETSc Option Table entries >> There are 7 unused database options. They are: >> Option left: name:-ts_sundials_gramschmidt_type value: modified >> Option left: name:-ts_sundials_monitor_steps no value >> Option left: name:-ts_sundials_type value: bdf >> Option left: name:-ts_view no value >> >> It appears as though PETSc ignores the 'ts_view' option. >> >> Les >> >> On 25 Jul 2013, at 17:15, Satish Balay wrote: >> >>> On Thu, 25 Jul 2013, Matthew Knepley wrote: >>> >>>> On Thu, Jul 25, 2013 at 9:53 AM, Lesleis Nagy wrote: >>>> >>>>> Hello, >>>>> >>>>> I currently use PETSc (version: 3.3-p7) and petsc4py along with the >>>>> Sundials interface to perform time stepping for some FEM code that I have >>>>> written (for completeness I make use of FEniCS). I would like to pass >>>>> options to PETSc from the command line using the PETSC_OPTIONS environment >>>>> variable. I set it with the following values: >>>>> >>>>> $ export PETSC_OPTIONS="-ts_sundials_type bdf >>>>> -ts_sundials_gramschmidt_type modified -ts_sundials_monitor_steps >>>>> -options_table -options_left" >>>>> >>>>> I then run the code using: >>>>> >>>>> $ mpiexec -n 2 python script.py >>>>> >>>>> Everything appears to work and my code terminates. However at the end of >>>>> the run, '-options_table' / '-options_left' reports that >>>>> >>>>> #PETSc Option Table entries: >>>>> -options_left >>>>> -options_table >>>>> -ts_sundials_gramschmidt_type modified >>>>> -ts_sundials_monitor_steps >>>>> -ts_sundials_type bdf >>>>> #End of PETSc Option Table entries >>>>> There are 3 unused database options. They are: >>>>> Option left: name:-ts_sundials_gramschmidt_type value: modified >>>>> Option left: name:-ts_sundials_monitor_steps no value >>>>> Option left: name:-ts_sundials_type value: bdf >>>>> >>>>> Could someone explain to me why PETSc is reading the Sundials options that >>>>> I've supplied but is not using them? 
>>>>> >>>> >>>> 1) Your solver might have an options prefix >>>> >>>> 2) You might forget to call TSSetFromOptions() >>>> >>>> The easiest thing to do is run with -start_in_debugger and set a breakpoint >>>> in TSSetFromOptions() >>> >>> Also use -ts_view to check if sundials is actually getting set/used.. >>> >>> Satish >>> >>>> >>>> Matt >>>> >>>> >>>>> Kind regards >>>>> Les >>>>> -- >>>>> The University of Edinburgh is a charitable body, registered in >>>>> Scotland, with registration number SC005336. >>>>> >>>>> >>>> >>>> >>>> >>> >>> >> >> >> > > -- The University of Edinburgh is a charitable body, registered in Scotland, with registration number SC005336. From suyan0 at gmail.com Thu Jul 25 12:01:01 2013 From: suyan0 at gmail.com (Su Yan) Date: Thu, 25 Jul 2013 12:01:01 -0500 Subject: [petsc-users] PCSHELL Message-ID: Hi, can I use PCSetType(myPC, PCSHELL); together with PCFactorSetMatOrderingType(myPC, MATORDERINGRCM)? Will the reordering still take effect? Thanks, Su -------------- next part -------------- An HTML attachment was scrubbed... URL: From balay at mcs.anl.gov Thu Jul 25 12:02:13 2013 From: balay at mcs.anl.gov (Satish Balay) Date: Thu, 25 Jul 2013 12:02:13 -0500 (CDT) Subject: [petsc-users] Setting PETSc Sundials Options Using PETSC_OPTIONS In-Reply-To: References: <7FD7269B-72D4-4C13-A848-3160B4B3D658@ed.ac.uk> Message-ID: Hm - ts_view should be checked in TSSolve() - so somehow thats not hapenning? Satish On Thu, 25 Jul 2013, Lesleis Nagy wrote: > Hi Satish, Matthew, > > Thank you very much for your help. Everything seems to work with the exception that PETSc now doesn't pick up the 'ts_view' option! > > For completeness, when using PETSc / petsc4py / FEniCS I simply called 'ts.setFromOptions()' on my on my petsc4py time stepper object (called 'ts'). > > Is the appearance of 'ts_view' in the unused database options list something I should be concerned about? When I call the 'view()' method on my time stepper object it seems to output information that I would expect to see. > > Many thanks > Les > > On 25 Jul 2013, at 17:29, Satish Balay wrote: > > > Then as Matt refered to - TSSetFromOptions() isn't getting called. > > > > Satish > > > > On Thu, 25 Jul 2013, Lesleis Nagy wrote: > > > >> Hi Satish, Matthew, > >> > >> I'm just about to compile a version of PETSc with debugging symbols included, so that I can use gdb. Satish, I've passed the option that you suggest and the output is as follows: > >> > >> #PETSc Option Table entries: > >> -options_left > >> -options_table > >> -ts_sundials_gramschmidt_type modified > >> -ts_sundials_monitor_steps > >> -ts_sundials_type bdf > >> -ts_view > >> #End of PETSc Option Table entries > >> There are 7 unused database options. They are: > >> Option left: name:-ts_sundials_gramschmidt_type value: modified > >> Option left: name:-ts_sundials_monitor_steps no value > >> Option left: name:-ts_sundials_type value: bdf > >> Option left: name:-ts_view no value > >> > >> It appears as though PETSc ignores the 'ts_view' option. > >> > >> Les > >> > >> On 25 Jul 2013, at 17:15, Satish Balay wrote: > >> > >>> On Thu, 25 Jul 2013, Matthew Knepley wrote: > >>> > >>>> On Thu, Jul 25, 2013 at 9:53 AM, Lesleis Nagy wrote: > >>>> > >>>>> Hello, > >>>>> > >>>>> I currently use PETSc (version: 3.3-p7) and petsc4py along with the > >>>>> Sundials interface to perform time stepping for some FEM code that I have > >>>>> written (for completeness I make use of FEniCS). 
I would like to pass > >>>>> options to PETSc from the command line using the PETSC_OPTIONS environment > >>>>> variable. I set it with the following values: > >>>>> > >>>>> $ export PETSC_OPTIONS="-ts_sundials_type bdf > >>>>> -ts_sundials_gramschmidt_type modified -ts_sundials_monitor_steps > >>>>> -options_table -options_left" > >>>>> > >>>>> I then run the code using: > >>>>> > >>>>> $ mpiexec -n 2 python script.py > >>>>> > >>>>> Everything appears to work and my code terminates. However at the end of > >>>>> the run, '-options_table' / '-options_left' reports that > >>>>> > >>>>> #PETSc Option Table entries: > >>>>> -options_left > >>>>> -options_table > >>>>> -ts_sundials_gramschmidt_type modified > >>>>> -ts_sundials_monitor_steps > >>>>> -ts_sundials_type bdf > >>>>> #End of PETSc Option Table entries > >>>>> There are 3 unused database options. They are: > >>>>> Option left: name:-ts_sundials_gramschmidt_type value: modified > >>>>> Option left: name:-ts_sundials_monitor_steps no value > >>>>> Option left: name:-ts_sundials_type value: bdf > >>>>> > >>>>> Could someone explain to me why PETSc is reading the Sundials options that > >>>>> I've supplied but is not using them? > >>>>> > >>>> > >>>> 1) Your solver might have an options prefix > >>>> > >>>> 2) You might forget to call TSSetFromOptions() > >>>> > >>>> The easiest thing to do is run with -start_in_debugger and set a breakpoint > >>>> in TSSetFromOptions() > >>> > >>> Also use -ts_view to check if sundials is actually getting set/used.. > >>> > >>> Satish > >>> > >>>> > >>>> Matt > >>>> > >>>> > >>>>> Kind regards > >>>>> Les > >>>>> -- > >>>>> The University of Edinburgh is a charitable body, registered in > >>>>> Scotland, with registration number SC005336. > >>>>> > >>>>> > >>>> > >>>> > >>>> > >>> > >>> > >> > >> > >> > > > > > > > From l.nagy at ed.ac.uk Thu Jul 25 12:05:36 2013 From: l.nagy at ed.ac.uk (Lesleis Nagy) Date: Thu, 25 Jul 2013 18:05:36 +0100 Subject: [petsc-users] Setting PETSc Sundials Options Using PETSC_OPTIONS In-Reply-To: References: <7FD7269B-72D4-4C13-A848-3160B4B3D658@ed.ac.uk> Message-ID: <04D11310-3E49-4B4C-B0E5-BAE1729A5DDB@ed.ac.uk> Ah, I don't make a call to TSSolve(). I have a loop and then make calls to TSStep until my exit criteria is met. On 25 Jul 2013, at 18:02, Satish Balay wrote: > Hm - ts_view should be checked in TSSolve() - so somehow thats not hapenning? > > Satish > > On Thu, 25 Jul 2013, Lesleis Nagy wrote: > >> Hi Satish, Matthew, >> >> Thank you very much for your help. Everything seems to work with the exception that PETSc now doesn't pick up the 'ts_view' option! >> >> For completeness, when using PETSc / petsc4py / FEniCS I simply called 'ts.setFromOptions()' on my on my petsc4py time stepper object (called 'ts'). >> >> Is the appearance of 'ts_view' in the unused database options list something I should be concerned about? When I call the 'view()' method on my time stepper object it seems to output information that I would expect to see. >> >> Many thanks >> Les >> >> On 25 Jul 2013, at 17:29, Satish Balay wrote: >> >>> Then as Matt refered to - TSSetFromOptions() isn't getting called. >>> >>> Satish >>> >>> On Thu, 25 Jul 2013, Lesleis Nagy wrote: >>> >>>> Hi Satish, Matthew, >>>> >>>> I'm just about to compile a version of PETSc with debugging symbols included, so that I can use gdb. 
Satish, I've passed the option that you suggest and the output is as follows: >>>> >>>> #PETSc Option Table entries: >>>> -options_left >>>> -options_table >>>> -ts_sundials_gramschmidt_type modified >>>> -ts_sundials_monitor_steps >>>> -ts_sundials_type bdf >>>> -ts_view >>>> #End of PETSc Option Table entries >>>> There are 7 unused database options. They are: >>>> Option left: name:-ts_sundials_gramschmidt_type value: modified >>>> Option left: name:-ts_sundials_monitor_steps no value >>>> Option left: name:-ts_sundials_type value: bdf >>>> Option left: name:-ts_view no value >>>> >>>> It appears as though PETSc ignores the 'ts_view' option. >>>> >>>> Les >>>> >>>> On 25 Jul 2013, at 17:15, Satish Balay wrote: >>>> >>>>> On Thu, 25 Jul 2013, Matthew Knepley wrote: >>>>> >>>>>> On Thu, Jul 25, 2013 at 9:53 AM, Lesleis Nagy wrote: >>>>>> >>>>>>> Hello, >>>>>>> >>>>>>> I currently use PETSc (version: 3.3-p7) and petsc4py along with the >>>>>>> Sundials interface to perform time stepping for some FEM code that I have >>>>>>> written (for completeness I make use of FEniCS). I would like to pass >>>>>>> options to PETSc from the command line using the PETSC_OPTIONS environment >>>>>>> variable. I set it with the following values: >>>>>>> >>>>>>> $ export PETSC_OPTIONS="-ts_sundials_type bdf >>>>>>> -ts_sundials_gramschmidt_type modified -ts_sundials_monitor_steps >>>>>>> -options_table -options_left" >>>>>>> >>>>>>> I then run the code using: >>>>>>> >>>>>>> $ mpiexec -n 2 python script.py >>>>>>> >>>>>>> Everything appears to work and my code terminates. However at the end of >>>>>>> the run, '-options_table' / '-options_left' reports that >>>>>>> >>>>>>> #PETSc Option Table entries: >>>>>>> -options_left >>>>>>> -options_table >>>>>>> -ts_sundials_gramschmidt_type modified >>>>>>> -ts_sundials_monitor_steps >>>>>>> -ts_sundials_type bdf >>>>>>> #End of PETSc Option Table entries >>>>>>> There are 3 unused database options. They are: >>>>>>> Option left: name:-ts_sundials_gramschmidt_type value: modified >>>>>>> Option left: name:-ts_sundials_monitor_steps no value >>>>>>> Option left: name:-ts_sundials_type value: bdf >>>>>>> >>>>>>> Could someone explain to me why PETSc is reading the Sundials options that >>>>>>> I've supplied but is not using them? >>>>>>> >>>>>> >>>>>> 1) Your solver might have an options prefix >>>>>> >>>>>> 2) You might forget to call TSSetFromOptions() >>>>>> >>>>>> The easiest thing to do is run with -start_in_debugger and set a breakpoint >>>>>> in TSSetFromOptions() >>>>> >>>>> Also use -ts_view to check if sundials is actually getting set/used.. >>>>> >>>>> Satish >>>>> >>>>>> >>>>>> Matt >>>>>> >>>>>> >>>>>>> Kind regards >>>>>>> Les >>>>>>> -- >>>>>>> The University of Edinburgh is a charitable body, registered in >>>>>>> Scotland, with registration number SC005336. >>>>>>> >>>>>>> >>>>>> >>>>>> >>>>>> >>>>> >>>>> >>>> >>>> >>>> >>> >>> >> >> >> > > -- The University of Edinburgh is a charitable body, registered in Scotland, with registration number SC005336. From knepley at gmail.com Thu Jul 25 12:18:30 2013 From: knepley at gmail.com (Matthew Knepley) Date: Thu, 25 Jul 2013 12:18:30 -0500 Subject: [petsc-users] PCSHELL In-Reply-To: References: Message-ID: On Thu, Jul 25, 2013 at 12:01 PM, Su Yan wrote: > Hi, can I use PCSetType(myPC, PCSHELL); together > with PCFactorSetMatOrderingType(myPC, MATORDERINGRCM)? > > Will the reordering still take effect? > No. 
Thanks,

   Matt

> Thanks,
> Su
>

--
What most experimenters take for granted before they begin their
experiments is infinitely more interesting than any results to which
their experiments lead.
-- Norbert Wiener
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From u.tabak at tudelft.nl Thu Jul 25 12:29:01 2013
From: u.tabak at tudelft.nl (Umut Tabak)
Date: Thu, 25 Jul 2013 19:29:01 +0200
Subject: [petsc-users] Schur complement with a preconditioner
Message-ID: <51F1605D.5060104@tudelft.nl>

Dear all,

I have a system that I would like to solve for multiple rhs, represented
in block notation as

[ A    C ] [x1]   [b1]
[ C^T  B ] [x2] = [b2]

I could solve the system

(B - C^T A^{-1} C) x2 = b_updated

with the Minres algorithm in MATLAB by using the incomplete factorization
of B, in decent iteration counts, like 43. The problem is that B is not
SPD and it has one negative eigenvalue. That is the reason to use MINRES.

Just as a try, I saved the matrix represented by (B - C^T A^{-1} C) in
sparse format and used the hypre euclid preconditioner in PETSc, which
resulted in 25 iterations to convergence. But since for large problems
this approach is not viable, I was wondering if it is possible to use the
complete Cholesky factorization of B+alpha*diag(B), where alpha is given
as

alpha = max(sum(abs(A),2)./diag(A)) - 2

as a preconditioner for the above Schur complement. Or, in general, to use
an external preconditioner for the matrix operator.

Any pointers are appreciated.
Best,
Umut
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From knepley at gmail.com Thu Jul 25 13:07:48 2013
From: knepley at gmail.com (Matthew Knepley)
Date: Thu, 25 Jul 2013 13:07:48 -0500
Subject: [petsc-users] Schur complement with a preconditioner
In-Reply-To: <51F1605D.5060104@tudelft.nl>
References: <51F1605D.5060104@tudelft.nl>
Message-ID: 

On Thu, Jul 25, 2013 at 12:29 PM, Umut Tabak wrote:

> Dear all,
>
> I have a system that I would like to solve for multiple rhs, represented
> in block notation as
>
> [ A    C ] [x1]   [b1]
> [ C^T  B ] [x2] = [b2]
>
> I could solve the system
>
> (B - C^T A^{-1} C) x2 = b_updated
>
> with the Minres algorithm in MATLAB by using the incomplete factorization
> of B, in decent iteration counts, like 43. The problem is that B is not
> SPD and it has one negative eigenvalue. That is the reason to use MINRES.
>
> Just as a try, I saved the matrix represented by (B - C^T A^{-1} C) in
> sparse format and used the hypre euclid preconditioner in PETSc, which
> resulted in 25 iterations to convergence. But since for large problems
> this approach is not viable, I was wondering if it is possible to use the
> complete Cholesky factorization of B+alpha*diag(B), where alpha is given
> as
>
> alpha = max(sum(abs(A),2)./diag(A)) - 2
>
> as a preconditioner for the above Schur complement. Or, in general, to use
> an external preconditioner for the matrix operator.
>

You can use PCFIELDSPLIT as the outer PC and then -pc_fieldsplit_type
schur, and then

  -pc_fieldsplit_schur_precondition a11

which will use B to form the preconditioner for S, or

  -pc_fieldsplit_schur_precondition user

for which you could provide B+alpha*diag(B)

   Thanks,

      Matt


>
> Any pointers are appreciated.
> Best,
> Umut
>

--
What most experimenters take for granted before they begin their
experiments is infinitely more interesting than any results to which
their experiments lead.
-- Norbert Wiener
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 
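A sketch of the "user" route in C, for archive readers (assumptions: pc is the PCFIELDSPLIT preconditioner obtained from the KSP via KSPGetPC(), B is the assembled block in question, and a PETSc 3.4-era API in which the call is PCFieldSplitSetSchurPre(); in 3.3 the same call was spelled PCFieldSplitSchurPrecondition()). It builds Bhat = B + alpha*diag(B) and hands it to fieldsplit as the matrix from which the Schur-complement preconditioner is constructed:

  Mat         Bhat;
  Vec         diagB;
  PetscScalar alpha = 1.0;  /* placeholder; e.g. the max(sum(abs(A),2)./diag(A)) - 2 estimate above */

  MatDuplicate(B,MAT_COPY_VALUES,&Bhat);
  MatGetVecs(B,&diagB,NULL);              /* renamed MatCreateVecs() in later releases */
  MatGetDiagonal(B,diagB);
  VecScale(diagB,alpha);
  MatDiagonalSet(Bhat,diagB,ADD_VALUES);  /* Bhat = B + alpha*diag(B) */
  PCFieldSplitSetSchurPre(pc,PC_FIELDSPLIT_SCHUR_PRE_USER,Bhat);

Used together with -pc_fieldsplit_type schur, this is the programmatic equivalent of the -pc_fieldsplit_schur_precondition user option mentioned above.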
From mfadams at lbl.gov Thu Jul 25 13:48:16 2013
From: mfadams at lbl.gov (Mark F. Adams)
Date: Thu, 25 Jul 2013 14:48:16 -0400
Subject: [petsc-users] Using GAMG to speed up solving of Poisson eqn
In-Reply-To: <87iozyadzb.fsf@mcs.anl.gov>
References: <51EA9301.8000208@gmail.com> <51EAB08C.6080605@gmail.com>
 <87ip058bhr.fsf@mcs.anl.gov> <51EACEA3.2010205@gmail.com>
 <87fvv984f8.fsf@mcs.anl.gov> <51F120B5.7000407@gmail.com>
 <87r4emaif0.fsf@mcs.anl.gov> <51F12EC9.7090503@gmail.com>
 <87iozyadzb.fsf@mcs.anl.gov>
Message-ID: <46723B2D-C09F-449C-AFF5-7FF545756C17@lbl.gov>

>>
>>
>> -poisson_ksp_monitor -poisson_pc_gamg_verbose

"verbose" takes an integer so give it > 0. 0 is the default.

But getting a stack trace in the debugger is much better.

Does this code run correctly with simple default (ILU) preconditioning?

As I recall you were getting floating point exceptions when computing the
extreme eigen values. Try running with ILU and add
-poisson_ksp_monitor_singular_value to see if this is a general problem or
a GAMG specific problem.

From bsmith at mcs.anl.gov Thu Jul 25 15:09:55 2013
From: bsmith at mcs.anl.gov (Barry Smith)
Date: Thu, 25 Jul 2013 15:09:55 -0500
Subject: [petsc-users] PCSHELL
In-Reply-To: 
References: 
Message-ID: <1F1B8079-879E-46E6-AFD1-15F459BD464B@mcs.anl.gov>


On Jul 25, 2013, at 12:01 PM, Su Yan wrote:

> Hi, can I use PCSetType(myPC, PCSHELL); together with PCFactorSetMatOrderingType(myPC, MATORDERINGRCM)?

    PCFactorSetMatOrdering() is only effective if the PC is a subclass of PCFactor, for example PCLU is a subclass of PCFactor as are PCILU and PCCHOLESKY and PCICC

    If you want to use things like this you cannot use PCSHELL (which is a subclass of only the basic PC) you can copy the code for PCCreate_LU() for example and modify it for what you want to do. Or you can register new factorization and solver package as a MatSolverPackage; see for example superlu.c in mat/impls/aij/seq/superlu/

    What is it you want to do?

    Barry

>
> Will the reordering still take effect?
>
> Thanks,
> Su
>

From suyan0 at gmail.com Thu Jul 25 16:04:25 2013
From: suyan0 at gmail.com (Su Yan)
Date: Thu, 25 Jul 2013 16:04:25 -0500
Subject: [petsc-users] PCSHELL
In-Reply-To: <1F1B8079-879E-46E6-AFD1-15F459BD464B@mcs.anl.gov>
References: <1F1B8079-879E-46E6-AFD1-15F459BD464B@mcs.anl.gov>
Message-ID: 

Thanks, Barry.

I have interfaced PETSc with MKL ILUT preconditioner through PCSHELL, but
want to use reordering technique comes with the PETSc package. Probably I
can do it by extracting the reordering matrices, but the system matrix is
the Jacobian set by SNESSetJacobian() and solved by KSP. Since the
Jacobian matrix is changing for different Newton steps, I am strill trying
to figure out how to implement the reordering my self.

Best,
Su


On Thu, Jul 25, 2013 at 3:09 PM, Barry Smith wrote:

>
> On Jul 25, 2013, at 12:01 PM, Su Yan wrote:
>
> > Hi, can I use PCSetType(myPC, PCSHELL); together with
> PCFactorSetMatOrderingType(myPC, MATORDERINGRCM)?
>
>     PCFactorSetMatOrdering() is only effective if the PC is a subclass of
> PCFactor, for example PCLU is a subclass of PCFactor as are PCILU and
> PCCHOLESKY and PCICC
>
>     If you want to use things like this you cannot use PCSHELL (which is a
> subclass of only the basic PC) you can copy the code for PCCreate_LU() for
> example and modify it for what you want to do. Or you can register new
> factorization and solver package as a MatSolverPackage; see for example
> superlu.c in mat/impls/aij/seq/superlu/
>
>     What is it you want to do?
> >     Barry
> >
> >
> > > Will the reordering still take effect?
> > >
> > > Thanks,
> > > Su
> > >
> >
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From bsmith at mcs.anl.gov Thu Jul 25 16:08:54 2013
From: bsmith at mcs.anl.gov (Barry Smith)
Date: Thu, 25 Jul 2013 16:08:54 -0500
Subject: [petsc-users] PCSHELL
In-Reply-To: 
References: <1F1B8079-879E-46E6-AFD1-15F459BD464B@mcs.anl.gov>
Message-ID: 


On Jul 25, 2013, at 4:04 PM, Su Yan wrote:

> Thanks, Barry.
>
> I have interfaced PETSc with MKL ILUT preconditioner through PCSHELL,

    Does MKL ILUT take a reordering (that is, a permutation of the set {0,1,2,...,n-1}) as an input argument? Or do they want you to reorder the matrix in the new ordering explicitly and pass that new matrix into the MKL routine?

    Barry

> but want to use reordering technique comes with the PETSc package. Probably I can do it by extracting the reordering matrices, but the system matrix is the Jacobian set by SNESSetJacobian() and solved by KSP. Since the Jacobian matrix is changing for different Newton steps, I am strill trying to figure out how to implement the reordering my self.
>
> Best,
> Su
>
>
> On Thu, Jul 25, 2013 at 3:09 PM, Barry Smith wrote:
>
> On Jul 25, 2013, at 12:01 PM, Su Yan wrote:
>
> > Hi, can I use PCSetType(myPC, PCSHELL); together with PCFactorSetMatOrderingType(myPC, MATORDERINGRCM)?
>
>     PCFactorSetMatOrdering() is only effective if the PC is a subclass of PCFactor, for example PCLU is a subclass of PCFactor as are PCILU and PCCHOLESKY and PCICC
>
>     If you want to use things like this you cannot use PCSHELL (which is a subclass of only the basic PC) you can copy the code for PCCreate_LU() for example and modify it for what you want to do. Or you can register new factorization and solver package as a MatSolverPackage; see for example superlu.c in mat/impls/aij/seq/superlu/
>
>     What is it you want to do?
>
>     Barry
>
>
>
>
> > Will the reordering still take effect?
> >
> > Thanks,
> > Su
> >
>

From suyan0 at gmail.com Thu Jul 25 16:17:02 2013
From: suyan0 at gmail.com (Su Yan)
Date: Thu, 25 Jul 2013 16:17:02 -0500
Subject: [petsc-users] PCSHELL
In-Reply-To: 
References: <1F1B8079-879E-46E6-AFD1-15F459BD464B@mcs.anl.gov>
Message-ID: 

I do not think they take the reordering as an input. I will need to
reorder it myself and then pass it to the ILUT subroutine.


On Thu, Jul 25, 2013 at 4:08 PM, Barry Smith wrote:

>
> On Jul 25, 2013, at 4:04 PM, Su Yan wrote:
>
> > Thanks, Barry.
> >
> > I have interfaced PETSc with MKL ILUT preconditioner through PCSHELL,
>
>     Does MKL ILUT take a reordering (that is, a permutation of the set
> {0,1,2,...,n-1}) as an input argument? Or do they want you to reorder the
> matrix in the new ordering explicitly and pass that new matrix into the MKL
> routine?
>
>     Barry
>
> > but want to use reordering technique comes with the PETSc package.
> Probably I can do it by extracting the reordering matrices, but the system
> matrix is the Jacobian set by SNESSetJacobian() and solved by KSP. Since
> the Jacobian matrix is changing for different Newton steps, I am strill
> trying to figure out how to implement the reordering my self.
> >
> > Best,
> > Su
> >
> >
> > On Thu, Jul 25, 2013 at 3:09 PM, Barry Smith wrote:
> >
> > On Jul 25, 2013, at 12:01 PM, Su Yan wrote:
> >
> > > Hi, can I use PCSetType(myPC, PCSHELL); together with
> PCFactorSetMatOrderingType(myPC, MATORDERINGRCM)?
> > > > PCFactorSetMatOrdering() is only effective if the PC is a subclass of > PCFactor, for example PCLU is a subclass of PCFactor as are PCILU and > PCCHOLESKY and PCICC > > > > If you want to use things like this you cannot use PCSHELL (which is > a subclass of only the basic PC) you can copy the code for PCCreate_LU() > for example and modify it for what you want to do. Or you can register new > factorization and solver package as a MatSolverPackage; see for example > superlu.c in mat/impls/aij/seq/superlu/ > > > > What is it you want to do? > > > > Barry > > > > > > > > > > Will the reordering still take effect? > > > > > > Thanks, > > > Su > > > > > > > > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From jedbrown at mcs.anl.gov Thu Jul 25 17:35:58 2013 From: jedbrown at mcs.anl.gov (Jed Brown) Date: Thu, 25 Jul 2013 17:35:58 -0500 Subject: [petsc-users] PCSHELL In-Reply-To: <1F1B8079-879E-46E6-AFD1-15F459BD464B@mcs.anl.gov> References: <1F1B8079-879E-46E6-AFD1-15F459BD464B@mcs.anl.gov> Message-ID: <87fvv28dyp.fsf@mcs.anl.gov> Barry Smith writes: > PCFactorSetMatOrdering() is only effective if the PC is a subclass > of PCFactor, for example PCLU is a subclass of PCFactor as are > PCILU and PCCHOLESKY and PCICC One could register their own function without deriving from PCFactor. PetscObjectComposeFunction((PetscObject)pc,"PCFactorSetMatOrderingType_C",PCFactorSetMatOrderingType_YourPCShell); -------------- next part -------------- A non-text attachment was scrubbed... Name: not available Type: application/pgp-signature Size: 835 bytes Desc: not available URL: From jedbrown at mcs.anl.gov Thu Jul 25 18:18:45 2013 From: jedbrown at mcs.anl.gov (Jed Brown) Date: Thu, 25 Jul 2013 18:18:45 -0500 Subject: [petsc-users] Setting PETSc Sundials Options Using PETSC_OPTIONS In-Reply-To: <04D11310-3E49-4B4C-B0E5-BAE1729A5DDB@ed.ac.uk> References: <7FD7269B-72D4-4C13-A848-3160B4B3D658@ed.ac.uk> <04D11310-3E49-4B4C-B0E5-BAE1729A5DDB@ed.ac.uk> Message-ID: Call TSView yourself if you don't use TSSolve. On Jul 25, 2013 12:06 PM, "Lesleis Nagy" wrote: > Ah, I don't make a call to TSSolve(). I have a loop and then make calls to > TSStep until my exit criteria is met. > > On 25 Jul 2013, at 18:02, Satish Balay wrote: > > > Hm - ts_view should be checked in TSSolve() - so somehow thats not > hapenning? > > > > Satish > > > > On Thu, 25 Jul 2013, Lesleis Nagy wrote: > > > >> Hi Satish, Matthew, > >> > >> Thank you very much for your help. Everything seems to work with the > exception that PETSc now doesn't pick up the 'ts_view' option! > >> > >> For completeness, when using PETSc / petsc4py / FEniCS I simply called > 'ts.setFromOptions()' on my on my petsc4py time stepper object (called > 'ts'). > >> > >> Is the appearance of 'ts_view' in the unused database options list > something I should be concerned about? When I call the 'view()' method on > my time stepper object it seems to output information that I would expect > to see. > >> > >> Many thanks > >> Les > >> > >> On 25 Jul 2013, at 17:29, Satish Balay wrote: > >> > >>> Then as Matt refered to - TSSetFromOptions() isn't getting called. > >>> > >>> Satish > >>> > >>> On Thu, 25 Jul 2013, Lesleis Nagy wrote: > >>> > >>>> Hi Satish, Matthew, > >>>> > >>>> I'm just about to compile a version of PETSc with debugging symbols > included, so that I can use gdb. 
Satish, I've passed the option that you > suggest and the output is as follows: > >>>> > >>>> #PETSc Option Table entries: > >>>> -options_left > >>>> -options_table > >>>> -ts_sundials_gramschmidt_type modified > >>>> -ts_sundials_monitor_steps > >>>> -ts_sundials_type bdf > >>>> -ts_view > >>>> #End of PETSc Option Table entries > >>>> There are 7 unused database options. They are: > >>>> Option left: name:-ts_sundials_gramschmidt_type value: modified > >>>> Option left: name:-ts_sundials_monitor_steps no value > >>>> Option left: name:-ts_sundials_type value: bdf > >>>> Option left: name:-ts_view no value > >>>> > >>>> It appears as though PETSc ignores the 'ts_view' option. > >>>> > >>>> Les > >>>> > >>>> On 25 Jul 2013, at 17:15, Satish Balay wrote: > >>>> > >>>>> On Thu, 25 Jul 2013, Matthew Knepley wrote: > >>>>> > >>>>>> On Thu, Jul 25, 2013 at 9:53 AM, Lesleis Nagy > wrote: > >>>>>> > >>>>>>> Hello, > >>>>>>> > >>>>>>> I currently use PETSc (version: 3.3-p7) and petsc4py along with the > >>>>>>> Sundials interface to perform time stepping for some FEM code that > I have > >>>>>>> written (for completeness I make use of FEniCS). I would like to > pass > >>>>>>> options to PETSc from the command line using the PETSC_OPTIONS > environment > >>>>>>> variable. I set it with the following values: > >>>>>>> > >>>>>>> $ export PETSC_OPTIONS="-ts_sundials_type bdf > >>>>>>> -ts_sundials_gramschmidt_type modified -ts_sundials_monitor_steps > >>>>>>> -options_table -options_left" > >>>>>>> > >>>>>>> I then run the code using: > >>>>>>> > >>>>>>> $ mpiexec -n 2 python script.py > >>>>>>> > >>>>>>> Everything appears to work and my code terminates. However at the > end of > >>>>>>> the run, '-options_table' / '-options_left' reports that > >>>>>>> > >>>>>>> #PETSc Option Table entries: > >>>>>>> -options_left > >>>>>>> -options_table > >>>>>>> -ts_sundials_gramschmidt_type modified > >>>>>>> -ts_sundials_monitor_steps > >>>>>>> -ts_sundials_type bdf > >>>>>>> #End of PETSc Option Table entries > >>>>>>> There are 3 unused database options. They are: > >>>>>>> Option left: name:-ts_sundials_gramschmidt_type value: modified > >>>>>>> Option left: name:-ts_sundials_monitor_steps no value > >>>>>>> Option left: name:-ts_sundials_type value: bdf > >>>>>>> > >>>>>>> Could someone explain to me why PETSc is reading the Sundials > options that > >>>>>>> I've supplied but is not using them? > >>>>>>> > >>>>>> > >>>>>> 1) Your solver might have an options prefix > >>>>>> > >>>>>> 2) You might forget to call TSSetFromOptions() > >>>>>> > >>>>>> The easiest thing to do is run with -start_in_debugger and set a > breakpoint > >>>>>> in TSSetFromOptions() > >>>>> > >>>>> Also use -ts_view to check if sundials is actually getting set/used.. > >>>>> > >>>>> Satish > >>>>> > >>>>>> > >>>>>> Matt > >>>>>> > >>>>>> > >>>>>>> Kind regards > >>>>>>> Les > >>>>>>> -- > >>>>>>> The University of Edinburgh is a charitable body, registered in > >>>>>>> Scotland, with registration number SC005336. > >>>>>>> > >>>>>>> > >>>>>> > >>>>>> > >>>>>> > >>>>> > >>>>> > >>>> > >>>> > >>>> > >>> > >>> > >> > >> > >> > > > > > > > -- > The University of Edinburgh is a charitable body, registered in > Scotland, with registration number SC005336. > > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From bsmith at mcs.anl.gov Thu Jul 25 18:27:38 2013 From: bsmith at mcs.anl.gov (Barry Smith) Date: Thu, 25 Jul 2013 18:27:38 -0500 Subject: [petsc-users] PCSHELL In-Reply-To: References: <1F1B8079-879E-46E6-AFD1-15F459BD464B@mcs.anl.gov> Message-ID: On Jul 25, 2013, at 4:17 PM, Su Yan wrote: > I do not think they take the reordering as an input. I will need to reorder it myself and then pass it to the ILUT subroutine. Ok, this is likely not particularly good for performance; completely reordering a sparse matrix explicitly is expensive. The way most sparse solvers handle this is to perform the factorization in the new ordering while keeping the matrix stored in the old ordering, simply traversing the old-ordering matrix according to the permutation; this avoids both a second copy of the matrix and the explicit reordering. We never fully reorder a matrix explicitly and then factor it. Barry > > > On Thu, Jul 25, 2013 at 4:08 PM, Barry Smith wrote: > > On Jul 25, 2013, at 4:04 PM, Su Yan wrote: > > > Thanks, Barry. > > > > I have interfaced PETSc with the MKL ILUT preconditioner through PCSHELL, > > Does MKL ILUT take a reordering (that is, a permutation of the set {0,1,2,...,n-1}) as an input argument? Or do they want you to reorder the matrix in the new ordering explicitly and pass that new matrix into the MKL routine? > > Barry > > > but want to use the reordering techniques that come with the PETSc package. Probably I can do it by extracting the reordering matrices, but the system matrix is the Jacobian set by SNESSetJacobian() and solved by KSP. Since the Jacobian matrix is changing for different Newton steps, I am still trying to figure out how to implement the reordering myself. > > > > Best, > > Su > > > > > > On Thu, Jul 25, 2013 at 3:09 PM, Barry Smith wrote: > > > > On Jul 25, 2013, at 12:01 PM, Su Yan wrote: > > > > > Hi, can I use PCSetType(myPC, PCSHELL); together with PCFactorSetMatOrderingType(myPC, MATORDERINGRCM)? > > > > PCFactorSetMatOrdering() is only effective if the PC is a subclass of PCFactor, for example PCLU is a subclass of PCFactor as are PCILU and PCCHOLESKY and PCICC > > > > If you want to use things like this you cannot use PCSHELL (which is a subclass of only the basic PC) you can copy the code for PCCreate_LU() for example and modify it for what you want to do. Or you can register new factorization and solver package as a MatSolverPackage; see for example superlu.c in mat/impls/aij/seq/superlu/ > > > > What is it you want to do? > > > > Barry > > > > > > > > > > Will the reordering still take effect? > > > > > > Thanks, > > > Su > > > > > > > > > From ztdepyahoo at 163.com Fri Jul 26 02:49:30 2013 From: ztdepyahoo at 163.com (=?GBK?B?tqHAz8qm?=) Date: Fri, 26 Jul 2013 15:49:30 +0800 (CST) Subject: [petsc-users] can i use the Vec created with VecCreateGhost in the kspsolve; Message-ID: <3cd612ca.10cf9.14019f4388b.Coremail.ztdepyahoo@163.com> -------------- next part -------------- An HTML attachment was scrubbed... URL: From knepley at gmail.com Fri Jul 26 05:56:30 2013 From: knepley at gmail.com (Matthew Knepley) Date: Fri, 26 Jul 2013 05:56:30 -0500 Subject: [petsc-users] can i use the Vec created with VecCreateGhost in the kspsolve; In-Reply-To: <3cd612ca.10cf9.14019f4388b.Coremail.ztdepyahoo@163.com> References: <3cd612ca.10cf9.14019f4388b.Coremail.ztdepyahoo@163.com> Message-ID: Yes Matt On Fri, Jul 26, 2013 at 2:49 AM, ??? wrote: > > > > > -- What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. -- Norbert Wiener -------------- next part -------------- An HTML attachment was scrubbed... URL:
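To make Matt's "Yes" concrete: a Vec created with VecCreateGhost() is an ordinary parallel Vec with extra local storage for the ghost entries, so it can be handed straight to KSPSolve(). A minimal sketch (hypothetical sizes and index array nlocal, nghost, ghosts, and a pre-built ksp, A, b; error checking omitted):

    Vec x;
    /* nlocal owned entries per process; the nghost entries listed in
       ghosts[] are the global indices this process keeps local copies of */
    VecCreateGhost(PETSC_COMM_WORLD,nlocal,PETSC_DECIDE,nghost,ghosts,&x);
    /* ... assemble the matrix A and right-hand side b as usual ... */
    KSPSolve(ksp,b,x);   /* x behaves like any other MPI Vec here */
    /* copy the freshly computed owned values into the ghost storage */
    VecGhostUpdateBegin(x,INSERT_VALUES,SCATTER_FORWARD);
    VecGhostUpdateEnd(x,INSERT_VALUES,SCATTER_FORWARD);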
From bisheshkh at gmail.com Fri Jul 26 07:28:16 2013 From: bisheshkh at gmail.com (Bishesh Khanal) Date: Fri, 26 Jul 2013 14:28:16 +0200 Subject: [petsc-users] discontinuous viscosity stokes equation 3D staggered grid In-Reply-To: <87li5555oo.fsf@mcs.anl.gov> References: <87li5555oo.fsf@mcs.anl.gov> Message-ID: On Wed, Jul 17, 2013 at 9:48 PM, Jed Brown wrote: > Bishesh Khanal writes: > > > Now, I implemented two different approaches, each for both 2D and 3D, in > > MATLAB. It works for the smaller sizes but I have problems solving it for > > the problem size I need (250^3 grid size). > > I use staggered grid with p on cell centers, and components of v on cell > > faces. Similar split up of K to cell center and faces to account for the > > variable viscosity case) > > Okay, you're using a staggered-grid finite difference discretization of > variable-viscosity Stokes. This is a common problem and I recommend > starting with PCFieldSplit with Schur complement reduction (make that > work first, then switch to block preconditioner). Ok, I made my 3D problem work with PCFieldSplit with Schur complement reduction using the options: -pc_fieldsplit_type schur -pc_fieldsplit_detect_saddle_point -fieldsplit_1_ksp_constant_null_space > You can use PCLSC or > (probably better for you), assemble a preconditioning matrix containing > the inverse viscosity in the pressure-pressure block. This diagonal > matrix is a spectrally equivalent (or nearly so, depending on > discretization) approximation of the Schur complement. The velocity > block can be solved with algebraic multigrid. Read the PCFieldSplit > docs (follow papers as appropriate) and let us know if you get stuck. > Now, I got a little confused about how exactly to use command line options to use multigrid for the velocity block and PCLSC for the pressure block. After going through the manual I tried the following: -pc_type fieldsplit -pc_fieldsplit_type schur -pc_fieldsplit_detect_saddle_point -fieldsplit_0_pc_type mg -fieldsplit_0_pc_mg_levels 2 -fieldsplit_0_pc_mg_galerkin -fieldsplit_1_ksp_type fgmres -fieldsplit_1_ksp_constant_null_space -fieldsplit_1_ksp_monitor_short -fieldsplit_1_pc_type lsc -ksp_converged_reason but I get the following error: [0]PETSC ERROR: --------------------- Error Message ------------------------------------ [0]PETSC ERROR: Null argument, when expecting valid pointer! [0]PETSC ERROR: Null Object: Parameter # 2! [0]PETSC ERROR: ------------------------------------------------------------------------ [0]PETSC ERROR: Petsc Release Version 3.4.2, Jul, 02, 2013 [0]PETSC ERROR: See docs/changes/index.html for recent updates. [0]PETSC ERROR: See docs/faq.html for hints about trouble shooting. [0]PETSC ERROR: See docs/index.html for manual pages.
[0]PETSC ERROR: ------------------------------------------------------------------------ [0]PETSC ERROR: src/AdLemMain on a arch-linux2-cxx-debug named edwards by bkhanal Fri Jul 26 14:23:40 2013 [0]PETSC ERROR: Libraries linked from /home/bkhanal/Documents/softwares/petsc-3.4.2/arch-linux2-cxx-debug/lib [0]PETSC ERROR: Configure run at Fri Jul 19 14:25:01 2013 [0]PETSC ERROR: Configure options --with-cc=gcc --with-fc=g77 --with-cxx=g++ --download-f-blas-lapack=1 --download-mpich=1 -with-clanguage=cxx --download-hypre=1 [0]PETSC ERROR: ------------------------------------------------------------------------ [0]PETSC ERROR: MatPtAP() line 8166 in /home/bkhanal/Documents/softwares/petsc-3.4.2/src/mat/interface/matrix.c [0]PETSC ERROR: PCSetUp_MG() line 628 in /home/bkhanal/Documents/softwares/petsc-3.4.2/src/ksp/pc/impls/mg/mg.c [0]PETSC ERROR: PCSetUp() line 890 in /home/bkhanal/Documents/softwares/petsc-3.4.2/src/ksp/pc/interface/precon.c [0]PETSC ERROR: KSPSetUp() line 278 in /home/bkhanal/Documents/softwares/petsc-3.4.2/src/ksp/ksp/interface/itfunc.c [0]PETSC ERROR: KSPSolve() line 399 in /home/bkhanal/Documents/softwares/petsc-3.4.2/src/ksp/ksp/interface/itfunc.c [0]PETSC ERROR: PCApply_FieldSplit_Schur() line 807 in /home/bkhanal/Documents/softwares/petsc-3.4.2/src/ksp/pc/impls/fieldsplit/fieldsplit.c [0]PETSC ERROR: PCApply() line 442 in /home/bkhanal/Documents/softwares/petsc-3.4.2/src/ksp/pc/interface/precon.c [0]PETSC ERROR: KSP_PCApply() line 227 in /home/bkhanal/Documents/softwares/petsc-3.4.2/include/petsc-private/kspimpl.h [0]PETSC ERROR: KSPInitialResidual() line 64 in /home/bkhanal/Documents/softwares/petsc-3.4.2/src/ksp/ksp/interface/itres.c [0]PETSC ERROR: KSPSolve_GMRES() line 239 in /home/bkhanal/Documents/softwares/petsc-3.4.2/src/ksp/ksp/impls/gmres/gmres.c [0]PETSC ERROR: KSPSolve() line 441 in /home/bkhanal/Documents/softwares/petsc-3.4.2/src/ksp/ksp/interface/itfunc.c -------------- next part -------------- An HTML attachment was scrubbed... URL: From knepley at gmail.com Fri Jul 26 07:32:57 2013 From: knepley at gmail.com (Matthew Knepley) Date: Fri, 26 Jul 2013 07:32:57 -0500 Subject: [petsc-users] discontinuous viscosity stokes equation 3D staggered grid In-Reply-To: References: <87li5555oo.fsf@mcs.anl.gov> Message-ID: On Fri, Jul 26, 2013 at 7:28 AM, Bishesh Khanal wrote: > > > > On Wed, Jul 17, 2013 at 9:48 PM, Jed Brown wrote: > >> Bishesh Khanal writes: >> >> > Now, I implemented two different approaches, each for both 2D and 3D, in >> > MATLAB. It works for the smaller sizes but I have problems solving it >> for >> > the problem size I need (250^3 grid size). >> > I use staggered grid with p on cell centers, and components of v on cell >> > faces. Similar split up of K to cell center and faces to account for the >> > variable viscosity case) >> >> Okay, you're using a staggered-grid finite difference discretization of >> variable-viscosity Stokes. This is a common problem and I recommend >> starting with PCFieldSplit with Schur complement reduction (make that >> work first, then switch to block preconditioner). > > > Ok, I made my 3D problem work with PCFieldSplit with Schur complement > reduction using the options: > -pc_fieldsplit_type schur -pc_fieldsplit_detect_saddle_point > -fieldsplit_1_ksp_constant_null_space > > > You can use PCLSC or >> (probably better for you), assemble a preconditioning matrix containing >> the inverse viscosity in the pressure-pressure block. 
This diagonal >> matrix is a spectrally equivalent (or nearly so, depending on >> discretization) approximation of the Schur complement. The velocity >> block can be solved with algebraic multigrid. Read the PCFieldSplit >> docs (follow papers as appropriate) and let us know if you get stuck. >> > > Now, I got a little confused in how exactly to use command line options > to use multigrid for the velocity bock and PCLS for the pressure block. > After going through the manual I tried the following: > You want Algebraic Multigrid -pc_type fieldsplit -pc_fieldsplit_detect_saddle_point -pc_fieldsplit_type schur -fieldsplit_0_pc_type gamg -fieldsplit_1_ksp_type fgmres -fieldsplit_1_ksp_constant_null_space -fieldsplit_1_ksp_monitor_short -fieldsplit_1_pc_type lsc -ksp_converged_reason Matt > but I get the following errror: > > [0]PETSC ERROR: --------------------- Error Message > ------------------------------------ > [0]PETSC ERROR: Null argument, when expecting valid pointer! > [0]PETSC ERROR: Null Object: Parameter # 2! > [0]PETSC ERROR: > ------------------------------------------------------------------------ > [0]PETSC ERROR: Petsc Release Version 3.4.2, Jul, 02, 2013 > [0]PETSC ERROR: See docs/changes/index.html for recent updates. > [0]PETSC ERROR: See docs/faq.html for hints about trouble shooting. > [0]PETSC ERROR: See docs/index.html for manual pages. > [0]PETSC ERROR: > ------------------------------------------------------------------------ > [0]PETSC ERROR: src/AdLemMain on a arch-linux2-cxx-debug named edwards by > bkhanal Fri Jul 26 14:23:40 2013 > [0]PETSC ERROR: Libraries linked from > /home/bkhanal/Documents/softwares/petsc-3.4.2/arch-linux2-cxx-debug/lib > [0]PETSC ERROR: Configure run at Fri Jul 19 14:25:01 2013 > [0]PETSC ERROR: Configure options --with-cc=gcc --with-fc=g77 > --with-cxx=g++ --download-f-blas-lapack=1 --download-mpich=1 > -with-clanguage=cxx --download-hypre=1 > [0]PETSC ERROR: > ------------------------------------------------------------------------ > [0]PETSC ERROR: MatPtAP() line 8166 in > /home/bkhanal/Documents/softwares/petsc-3.4.2/src/mat/interface/matrix.c > [0]PETSC ERROR: PCSetUp_MG() line 628 in > /home/bkhanal/Documents/softwares/petsc-3.4.2/src/ksp/pc/impls/mg/mg.c > [0]PETSC ERROR: PCSetUp() line 890 in > /home/bkhanal/Documents/softwares/petsc-3.4.2/src/ksp/pc/interface/precon.c > [0]PETSC ERROR: KSPSetUp() line 278 in > /home/bkhanal/Documents/softwares/petsc-3.4.2/src/ksp/ksp/interface/itfunc.c > [0]PETSC ERROR: KSPSolve() line 399 in > /home/bkhanal/Documents/softwares/petsc-3.4.2/src/ksp/ksp/interface/itfunc.c > [0]PETSC ERROR: PCApply_FieldSplit_Schur() line 807 in > /home/bkhanal/Documents/softwares/petsc-3.4.2/src/ksp/pc/impls/fieldsplit/fieldsplit.c > [0]PETSC ERROR: PCApply() line 442 in > /home/bkhanal/Documents/softwares/petsc-3.4.2/src/ksp/pc/interface/precon.c > [0]PETSC ERROR: KSP_PCApply() line 227 in > /home/bkhanal/Documents/softwares/petsc-3.4.2/include/petsc-private/kspimpl.h > [0]PETSC ERROR: KSPInitialResidual() line 64 in > /home/bkhanal/Documents/softwares/petsc-3.4.2/src/ksp/ksp/interface/itres.c > [0]PETSC ERROR: KSPSolve_GMRES() line 239 in > /home/bkhanal/Documents/softwares/petsc-3.4.2/src/ksp/ksp/impls/gmres/gmres.c > [0]PETSC ERROR: KSPSolve() line 441 in > /home/bkhanal/Documents/softwares/petsc-3.4.2/src/ksp/ksp/interface/itfunc.c > > > -- What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. 
-- Norbert Wiener -------------- next part -------------- An HTML attachment was scrubbed... URL: From bisheshkh at gmail.com Fri Jul 26 09:11:40 2013 From: bisheshkh at gmail.com (Bishesh Khanal) Date: Fri, 26 Jul 2013 16:11:40 +0200 Subject: Re: [petsc-users] discontinuous viscosity stokes equation 3D staggered grid In-Reply-To: References: <87li5555oo.fsf@mcs.anl.gov> Message-ID: On Fri, Jul 26, 2013 at 2:32 PM, Matthew Knepley wrote: > On Fri, Jul 26, 2013 at 7:28 AM, Bishesh Khanal wrote: > >> >> >> >> On Wed, Jul 17, 2013 at 9:48 PM, Jed Brown wrote: >> >>> Bishesh Khanal writes: >>> >>> > Now, I implemented two different approaches, each for both 2D and 3D, in >>> > MATLAB. It works for the smaller sizes but I have problems solving it >>> for >>> > the problem size I need (250^3 grid size). >>> > I use staggered grid with p on cell centers, and components of v on cell >>> > faces. Similar split up of K to cell center and faces to account for the >>> > variable viscosity case) >>> >>> Okay, you're using a staggered-grid finite difference discretization of >>> variable-viscosity Stokes. This is a common problem and I recommend >>> starting with PCFieldSplit with Schur complement reduction (make that >>> work first, then switch to block preconditioner). >> >> >> Ok, I made my 3D problem work with PCFieldSplit with Schur complement >> reduction using the options: >> -pc_fieldsplit_type schur -pc_fieldsplit_detect_saddle_point >> -fieldsplit_1_ksp_constant_null_space >> >> >> You can use PCLSC or >>> (probably better for you), assemble a preconditioning matrix containing >>> the inverse viscosity in the pressure-pressure block. This diagonal >>> matrix is a spectrally equivalent (or nearly so, depending on >>> discretization) approximation of the Schur complement. The velocity >>> block can be solved with algebraic multigrid. Read the PCFieldSplit >>> docs (follow papers as appropriate) and let us know if you get stuck. >>> >> >> Now, I got a little confused in how exactly to use command line options >> to use multigrid for the velocity bock and PCLS for the pressure block. >> After going through the manual I tried the following: >> > > You want Algebraic Multigrid > > -pc_type fieldsplit -pc_fieldsplit_detect_saddle_point > -pc_fieldsplit_type schur > -fieldsplit_0_pc_type gamg > -fieldsplit_1_ksp_type fgmres > -fieldsplit_1_ksp_constant_null_space > -fieldsplit_1_ksp_monitor_short > -fieldsplit_1_pc_type lsc > -ksp_converged_reason > > I tried the above set of options but the solution I get does not seem to be correct. The velocity field I get is quite different from the one I got before without using gamg, which was the expected one. Note: I also had to add one extra option, -fieldsplit_1_ksp_gmres_restart 100, because the fieldsplit_1_ksp residual norm did not converge within the default 30 iterations before restarting. > Matt > > >> but I get the following errror: >> >> [0]PETSC ERROR: --------------------- Error Message >> ------------------------------------ >> [0]PETSC ERROR: Null argument, when expecting valid pointer! >> [0]PETSC ERROR: Null Object: Parameter # 2! >> [0]PETSC ERROR: >> ------------------------------------------------------------------------ >> [0]PETSC ERROR: Petsc Release Version 3.4.2, Jul, 02, 2013 >> [0]PETSC ERROR: See docs/changes/index.html for recent updates. >> [0]PETSC ERROR: See docs/faq.html for hints about trouble shooting. >> [0]PETSC ERROR: See docs/index.html for manual pages.
>> [0]PETSC ERROR: >> ------------------------------------------------------------------------ >> [0]PETSC ERROR: src/AdLemMain on a arch-linux2-cxx-debug named edwards by >> bkhanal Fri Jul 26 14:23:40 2013 >> [0]PETSC ERROR: Libraries linked from >> /home/bkhanal/Documents/softwares/petsc-3.4.2/arch-linux2-cxx-debug/lib >> [0]PETSC ERROR: Configure run at Fri Jul 19 14:25:01 2013 >> [0]PETSC ERROR: Configure options --with-cc=gcc --with-fc=g77 >> --with-cxx=g++ --download-f-blas-lapack=1 --download-mpich=1 >> -with-clanguage=cxx --download-hypre=1 >> [0]PETSC ERROR: >> ------------------------------------------------------------------------ >> [0]PETSC ERROR: MatPtAP() line 8166 in >> /home/bkhanal/Documents/softwares/petsc-3.4.2/src/mat/interface/matrix.c >> [0]PETSC ERROR: PCSetUp_MG() line 628 in >> /home/bkhanal/Documents/softwares/petsc-3.4.2/src/ksp/pc/impls/mg/mg.c >> [0]PETSC ERROR: PCSetUp() line 890 in >> /home/bkhanal/Documents/softwares/petsc-3.4.2/src/ksp/pc/interface/precon.c >> [0]PETSC ERROR: KSPSetUp() line 278 in >> /home/bkhanal/Documents/softwares/petsc-3.4.2/src/ksp/ksp/interface/itfunc.c >> [0]PETSC ERROR: KSPSolve() line 399 in >> /home/bkhanal/Documents/softwares/petsc-3.4.2/src/ksp/ksp/interface/itfunc.c >> [0]PETSC ERROR: PCApply_FieldSplit_Schur() line 807 in >> /home/bkhanal/Documents/softwares/petsc-3.4.2/src/ksp/pc/impls/fieldsplit/fieldsplit.c >> [0]PETSC ERROR: PCApply() line 442 in >> /home/bkhanal/Documents/softwares/petsc-3.4.2/src/ksp/pc/interface/precon.c >> [0]PETSC ERROR: KSP_PCApply() line 227 in >> /home/bkhanal/Documents/softwares/petsc-3.4.2/include/petsc-private/kspimpl.h >> [0]PETSC ERROR: KSPInitialResidual() line 64 in >> /home/bkhanal/Documents/softwares/petsc-3.4.2/src/ksp/ksp/interface/itres.c >> [0]PETSC ERROR: KSPSolve_GMRES() line 239 in >> /home/bkhanal/Documents/softwares/petsc-3.4.2/src/ksp/ksp/impls/gmres/gmres.c >> [0]PETSC ERROR: KSPSolve() line 441 in >> /home/bkhanal/Documents/softwares/petsc-3.4.2/src/ksp/ksp/interface/itfunc.c >> >> >> > > > -- > What most experimenters take for granted before they begin their > experiments is infinitely more interesting than any results to which their > experiments lead. > -- Norbert Wiener > -------------- next part -------------- An HTML attachment was scrubbed... URL: From knepley at gmail.com Fri Jul 26 09:22:01 2013 From: knepley at gmail.com (Matthew Knepley) Date: Fri, 26 Jul 2013 09:22:01 -0500 Subject: [petsc-users] discontinuous viscosity stokes equation 3D staggered grid In-Reply-To: References: <87li5555oo.fsf@mcs.anl.gov> Message-ID: On Fri, Jul 26, 2013 at 9:11 AM, Bishesh Khanal wrote: > > > > On Fri, Jul 26, 2013 at 2:32 PM, Matthew Knepley wrote: > >> On Fri, Jul 26, 2013 at 7:28 AM, Bishesh Khanal wrote: >> >>> >>> >>> >>> On Wed, Jul 17, 2013 at 9:48 PM, Jed Brown wrote: >>> >>>> Bishesh Khanal writes: >>>> >>>> > Now, I implemented two different approaches, each for both 2D and 3D, >>>> in >>>> > MATLAB. It works for the smaller sizes but I have problems solving it >>>> for >>>> > the problem size I need (250^3 grid size). >>>> > I use staggered grid with p on cell centers, and components of v on >>>> cell >>>> > faces. Similar split up of K to cell center and faces to account for >>>> the >>>> > variable viscosity case) >>>> >>>> Okay, you're using a staggered-grid finite difference discretization of >>>> variable-viscosity Stokes. 
This is a common problem and I recommend >>>> starting with PCFieldSplit with Schur complement reduction (make that >>>> work first, then switch to block preconditioner). >>> >>> >>> Ok, I made my 3D problem work with PCFieldSplit with Schur complement >>> reduction using the options: >>> -pc_fieldsplit_type schur -pc_fieldsplit_detect_saddle_point >>> -fieldsplit_1_ksp_constant_null_space >>> >>> >>> You can use PCLSC or >>>> (probably better for you), assemble a preconditioning matrix containing >>>> the inverse viscosity in the pressure-pressure block. This diagonal >>>> matrix is a spectrally equivalent (or nearly so, depending on >>>> discretization) approximation of the Schur complement. The velocity >>>> block can be solved with algebraic multigrid. Read the PCFieldSplit >>>> docs (follow papers as appropriate) and let us know if you get stuck. >>>> >>> >>> Now, I got a little confused in how exactly to use command line options >>> to use multigrid for the velocity bock and PCLS for the pressure block. >>> After going through the manual I tried the following: >>> >> >> You want Algebraic Multigrid >> >> -pc_type fieldsplit -pc_fieldsplit_detect_saddle_point >> -pc_fieldsplit_type schur >> -fieldsplit_0_pc_type gamg >> -fieldsplit_1_ksp_type fgmres >> -fieldsplit_1_ksp_constant_null_space >> -fieldsplit_1_ksp_monitor_short >> -fieldsplit_1_pc_type lsc >> -ksp_converged_reason >> >> I tried the above set of options but the solution I get seem to be not > correct. The velocity field I get are quite different than the one I got > before without using gamg which were the expected one. > Note: (Also, I had to add one extra option of > -fieldsplit_1_ksp_gmres_restart 100 , because the fieldsplit_1_ksp residual > norm did not converge within default 30 iterations before restarting). > These are all iterative solvers. You have to make sure everything converges. I do this problem in the tutorial with a constant viscosity. Matt > Matt >> >> >>> but I get the following errror: >>> >>> [0]PETSC ERROR: --------------------- Error Message >>> ------------------------------------ >>> [0]PETSC ERROR: Null argument, when expecting valid pointer! >>> [0]PETSC ERROR: Null Object: Parameter # 2! >>> [0]PETSC ERROR: >>> ------------------------------------------------------------------------ >>> [0]PETSC ERROR: Petsc Release Version 3.4.2, Jul, 02, 2013 >>> [0]PETSC ERROR: See docs/changes/index.html for recent updates. >>> [0]PETSC ERROR: See docs/faq.html for hints about trouble shooting. >>> [0]PETSC ERROR: See docs/index.html for manual pages. 
>>> [0]PETSC ERROR: >>> ------------------------------------------------------------------------ >>> [0]PETSC ERROR: src/AdLemMain on a arch-linux2-cxx-debug named edwards >>> by bkhanal Fri Jul 26 14:23:40 2013 >>> [0]PETSC ERROR: Libraries linked from >>> /home/bkhanal/Documents/softwares/petsc-3.4.2/arch-linux2-cxx-debug/lib >>> [0]PETSC ERROR: Configure run at Fri Jul 19 14:25:01 2013 >>> [0]PETSC ERROR: Configure options --with-cc=gcc --with-fc=g77 >>> --with-cxx=g++ --download-f-blas-lapack=1 --download-mpich=1 >>> -with-clanguage=cxx --download-hypre=1 >>> [0]PETSC ERROR: >>> ------------------------------------------------------------------------ >>> [0]PETSC ERROR: MatPtAP() line 8166 in >>> /home/bkhanal/Documents/softwares/petsc-3.4.2/src/mat/interface/matrix.c >>> [0]PETSC ERROR: PCSetUp_MG() line 628 in >>> /home/bkhanal/Documents/softwares/petsc-3.4.2/src/ksp/pc/impls/mg/mg.c >>> [0]PETSC ERROR: PCSetUp() line 890 in >>> /home/bkhanal/Documents/softwares/petsc-3.4.2/src/ksp/pc/interface/precon.c >>> [0]PETSC ERROR: KSPSetUp() line 278 in >>> /home/bkhanal/Documents/softwares/petsc-3.4.2/src/ksp/ksp/interface/itfunc.c >>> [0]PETSC ERROR: KSPSolve() line 399 in >>> /home/bkhanal/Documents/softwares/petsc-3.4.2/src/ksp/ksp/interface/itfunc.c >>> [0]PETSC ERROR: PCApply_FieldSplit_Schur() line 807 in >>> /home/bkhanal/Documents/softwares/petsc-3.4.2/src/ksp/pc/impls/fieldsplit/fieldsplit.c >>> [0]PETSC ERROR: PCApply() line 442 in >>> /home/bkhanal/Documents/softwares/petsc-3.4.2/src/ksp/pc/interface/precon.c >>> [0]PETSC ERROR: KSP_PCApply() line 227 in >>> /home/bkhanal/Documents/softwares/petsc-3.4.2/include/petsc-private/kspimpl.h >>> [0]PETSC ERROR: KSPInitialResidual() line 64 in >>> /home/bkhanal/Documents/softwares/petsc-3.4.2/src/ksp/ksp/interface/itres.c >>> [0]PETSC ERROR: KSPSolve_GMRES() line 239 in >>> /home/bkhanal/Documents/softwares/petsc-3.4.2/src/ksp/ksp/impls/gmres/gmres.c >>> [0]PETSC ERROR: KSPSolve() line 441 in >>> /home/bkhanal/Documents/softwares/petsc-3.4.2/src/ksp/ksp/interface/itfunc.c >>> >>> >>> >> >> >> -- >> What most experimenters take for granted before they begin their >> experiments is infinitely more interesting than any results to which their >> experiments lead. >> -- Norbert Wiener >> > > -- What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. -- Norbert Wiener -------------- next part -------------- An HTML attachment was scrubbed... URL: From bisheshkh at gmail.com Fri Jul 26 10:13:07 2013 From: bisheshkh at gmail.com (Bishesh Khanal) Date: Fri, 26 Jul 2013 17:13:07 +0200 Subject: [petsc-users] discontinuous viscosity stokes equation 3D staggered grid In-Reply-To: References: <87li5555oo.fsf@mcs.anl.gov> Message-ID: On Fri, Jul 26, 2013 at 4:22 PM, Matthew Knepley wrote: > On Fri, Jul 26, 2013 at 9:11 AM, Bishesh Khanal wrote: > >> >> >> >> On Fri, Jul 26, 2013 at 2:32 PM, Matthew Knepley wrote: >> >>> On Fri, Jul 26, 2013 at 7:28 AM, Bishesh Khanal wrote: >>> >>>> >>>> >>>> >>>> On Wed, Jul 17, 2013 at 9:48 PM, Jed Brown wrote: >>>> >>>>> Bishesh Khanal writes: >>>>> >>>>> > Now, I implemented two different approaches, each for both 2D and >>>>> 3D, in >>>>> > MATLAB. It works for the smaller sizes but I have problems solving >>>>> it for >>>>> > the problem size I need (250^3 grid size). >>>>> > I use staggered grid with p on cell centers, and components of v on >>>>> cell >>>>> > faces. 
Similar split up of K to cell center and faces to account for >>>>> the >>>>> > variable viscosity case) >>>>> >>>>> Okay, you're using a staggered-grid finite difference discretization of >>>>> variable-viscosity Stokes. This is a common problem and I recommend >>>>> starting with PCFieldSplit with Schur complement reduction (make that >>>>> work first, then switch to block preconditioner). >>>> >>>> >>>> Ok, I made my 3D problem work with PCFieldSplit with Schur complement >>>> reduction using the options: >>>> -pc_fieldsplit_type schur -pc_fieldsplit_detect_saddle_point >>>> -fieldsplit_1_ksp_constant_null_space >>>> >>>> >>>> You can use PCLSC or >>>>> (probably better for you), assemble a preconditioning matrix containing >>>>> the inverse viscosity in the pressure-pressure block. This diagonal >>>>> matrix is a spectrally equivalent (or nearly so, depending on >>>>> discretization) approximation of the Schur complement. The velocity >>>>> block can be solved with algebraic multigrid. Read the PCFieldSplit >>>>> docs (follow papers as appropriate) and let us know if you get stuck. >>>>> >>>> >>>> Now, I got a little confused in how exactly to use command line >>>> options to use multigrid for the velocity bock and PCLS for the pressure >>>> block. After going through the manual I tried the following: >>>> >>> >>> You want Algebraic Multigrid >>> >>> -pc_type fieldsplit -pc_fieldsplit_detect_saddle_point >>> -pc_fieldsplit_type schur >>> -fieldsplit_0_pc_type gamg >>> -fieldsplit_1_ksp_type fgmres >>> -fieldsplit_1_ksp_constant_null_space >>> -fieldsplit_1_ksp_monitor_short >>> -fieldsplit_1_pc_type lsc >>> -ksp_converged_reason >>> >>> I tried the above set of options but the solution I get seem to be not >> correct. The velocity field I get are quite different than the one I got >> before without using gamg which were the expected one. >> Note: (Also, I had to add one extra option of >> -fieldsplit_1_ksp_gmres_restart 100 , because the fieldsplit_1_ksp residual >> norm did not converge within default 30 iterations before restarting). >> > > These are all iterative solvers. You have to make sure everything > converges. > When I set restart to 100, and do -ksp_monitor, it does converge (for the fieldsplit_1_ksp). Are you saying that in spite of having -ksp_converged_reason in the option and petsc completing the run with the message "Linear solve converged due to CONVERGED_RTOL .." not enough to make sure that everything is converging ? If that is the case what should I do for this particular problem ? I have used the MAC scheme with indexing as shown in: fig 7.5, page 96 of: http://books.google.co.uk/books?id=W83gxp165SkC&printsec=frontcover&dq=Introduction+to+Numerical+Geodynamic+Modelling&hl=en&sa=X&ei=v6TmUaP_L4PuOs3agJgE&ved=0CDIQ6AEwAA Thus I have a DM with 4 dof but there are several "ghost values" set to 0. Would this cause any problem when using the multigrid ? (This has worked fine when not using the multigrid.) > I do this problem in the tutorial with > a constant viscosity. > Which tutorial are you referring to ? Could you please provide me the link please ? > Matt > > >> Matt >>> >>> >>>> but I get the following errror: >>>> >>>> [0]PETSC ERROR: --------------------- Error Message >>>> ------------------------------------ >>>> [0]PETSC ERROR: Null argument, when expecting valid pointer! >>>> [0]PETSC ERROR: Null Object: Parameter # 2! 
>>>> [0]PETSC ERROR: >>>> ------------------------------------------------------------------------ >>>> [0]PETSC ERROR: Petsc Release Version 3.4.2, Jul, 02, 2013 >>>> [0]PETSC ERROR: See docs/changes/index.html for recent updates. >>>> [0]PETSC ERROR: See docs/faq.html for hints about trouble shooting. >>>> [0]PETSC ERROR: See docs/index.html for manual pages. >>>> [0]PETSC ERROR: >>>> ------------------------------------------------------------------------ >>>> [0]PETSC ERROR: src/AdLemMain on a arch-linux2-cxx-debug named edwards >>>> by bkhanal Fri Jul 26 14:23:40 2013 >>>> [0]PETSC ERROR: Libraries linked from >>>> /home/bkhanal/Documents/softwares/petsc-3.4.2/arch-linux2-cxx-debug/lib >>>> [0]PETSC ERROR: Configure run at Fri Jul 19 14:25:01 2013 >>>> [0]PETSC ERROR: Configure options --with-cc=gcc --with-fc=g77 >>>> --with-cxx=g++ --download-f-blas-lapack=1 --download-mpich=1 >>>> -with-clanguage=cxx --download-hypre=1 >>>> [0]PETSC ERROR: >>>> ------------------------------------------------------------------------ >>>> [0]PETSC ERROR: MatPtAP() line 8166 in >>>> /home/bkhanal/Documents/softwares/petsc-3.4.2/src/mat/interface/matrix.c >>>> [0]PETSC ERROR: PCSetUp_MG() line 628 in >>>> /home/bkhanal/Documents/softwares/petsc-3.4.2/src/ksp/pc/impls/mg/mg.c >>>> [0]PETSC ERROR: PCSetUp() line 890 in >>>> /home/bkhanal/Documents/softwares/petsc-3.4.2/src/ksp/pc/interface/precon.c >>>> [0]PETSC ERROR: KSPSetUp() line 278 in >>>> /home/bkhanal/Documents/softwares/petsc-3.4.2/src/ksp/ksp/interface/itfunc.c >>>> [0]PETSC ERROR: KSPSolve() line 399 in >>>> /home/bkhanal/Documents/softwares/petsc-3.4.2/src/ksp/ksp/interface/itfunc.c >>>> [0]PETSC ERROR: PCApply_FieldSplit_Schur() line 807 in >>>> /home/bkhanal/Documents/softwares/petsc-3.4.2/src/ksp/pc/impls/fieldsplit/fieldsplit.c >>>> [0]PETSC ERROR: PCApply() line 442 in >>>> /home/bkhanal/Documents/softwares/petsc-3.4.2/src/ksp/pc/interface/precon.c >>>> [0]PETSC ERROR: KSP_PCApply() line 227 in >>>> /home/bkhanal/Documents/softwares/petsc-3.4.2/include/petsc-private/kspimpl.h >>>> [0]PETSC ERROR: KSPInitialResidual() line 64 in >>>> /home/bkhanal/Documents/softwares/petsc-3.4.2/src/ksp/ksp/interface/itres.c >>>> [0]PETSC ERROR: KSPSolve_GMRES() line 239 in >>>> /home/bkhanal/Documents/softwares/petsc-3.4.2/src/ksp/ksp/impls/gmres/gmres.c >>>> [0]PETSC ERROR: KSPSolve() line 441 in >>>> /home/bkhanal/Documents/softwares/petsc-3.4.2/src/ksp/ksp/interface/itfunc.c >>>> >>>> >>>> >>> >>> >>> -- >>> What most experimenters take for granted before they begin their >>> experiments is infinitely more interesting than any results to which their >>> experiments lead. >>> -- Norbert Wiener >>> >> >> > > > -- > What most experimenters take for granted before they begin their > experiments is infinitely more interesting than any results to which their > experiments lead. > -- Norbert Wiener > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From knepley at gmail.com Fri Jul 26 10:42:35 2013 From: knepley at gmail.com (Matthew Knepley) Date: Fri, 26 Jul 2013 10:42:35 -0500 Subject: [petsc-users] discontinuous viscosity stokes equation 3D staggered grid In-Reply-To: References: <87li5555oo.fsf@mcs.anl.gov> Message-ID: On Fri, Jul 26, 2013 at 10:13 AM, Bishesh Khanal wrote: > > > > On Fri, Jul 26, 2013 at 4:22 PM, Matthew Knepley wrote: > >> On Fri, Jul 26, 2013 at 9:11 AM, Bishesh Khanal wrote: >> >>> >>> >>> >>> On Fri, Jul 26, 2013 at 2:32 PM, Matthew Knepley wrote: >>> >>>> On Fri, Jul 26, 2013 at 7:28 AM, Bishesh Khanal wrote: >>>> >>>>> >>>>> >>>>> >>>>> On Wed, Jul 17, 2013 at 9:48 PM, Jed Brown wrote: >>>>> >>>>>> Bishesh Khanal writes: >>>>>> >>>>>> > Now, I implemented two different approaches, each for both 2D and >>>>>> 3D, in >>>>>> > MATLAB. It works for the smaller sizes but I have problems solving >>>>>> it for >>>>>> > the problem size I need (250^3 grid size). >>>>>> > I use staggered grid with p on cell centers, and components of v on >>>>>> cell >>>>>> > faces. Similar split up of K to cell center and faces to account >>>>>> for the >>>>>> > variable viscosity case) >>>>>> >>>>>> Okay, you're using a staggered-grid finite difference discretization >>>>>> of >>>>>> variable-viscosity Stokes. This is a common problem and I recommend >>>>>> starting with PCFieldSplit with Schur complement reduction (make that >>>>>> work first, then switch to block preconditioner). >>>>> >>>>> >>>>> Ok, I made my 3D problem work with PCFieldSplit with Schur complement >>>>> reduction using the options: >>>>> -pc_fieldsplit_type schur -pc_fieldsplit_detect_saddle_point >>>>> -fieldsplit_1_ksp_constant_null_space >>>>> >>>>> >>>>> You can use PCLSC or >>>>>> (probably better for you), assemble a preconditioning matrix >>>>>> containing >>>>>> the inverse viscosity in the pressure-pressure block. This diagonal >>>>>> matrix is a spectrally equivalent (or nearly so, depending on >>>>>> discretization) approximation of the Schur complement. The velocity >>>>>> block can be solved with algebraic multigrid. Read the PCFieldSplit >>>>>> docs (follow papers as appropriate) and let us know if you get stuck. >>>>>> >>>>> >>>>> Now, I got a little confused in how exactly to use command line >>>>> options to use multigrid for the velocity bock and PCLS for the pressure >>>>> block. After going through the manual I tried the following: >>>>> >>>> >>>> You want Algebraic Multigrid >>>> >>>> -pc_type fieldsplit -pc_fieldsplit_detect_saddle_point >>>> -pc_fieldsplit_type schur >>>> -fieldsplit_0_pc_type gamg >>>> -fieldsplit_1_ksp_type fgmres >>>> -fieldsplit_1_ksp_constant_null_space >>>> -fieldsplit_1_ksp_monitor_short >>>> -fieldsplit_1_pc_type lsc >>>> -ksp_converged_reason >>>> >>>> I tried the above set of options but the solution I get seem to be not >>> correct. The velocity field I get are quite different than the one I got >>> before without using gamg which were the expected one. >>> Note: (Also, I had to add one extra option of >>> -fieldsplit_1_ksp_gmres_restart 100 , because the fieldsplit_1_ksp residual >>> norm did not converge within default 30 iterations before restarting). >>> >> >> These are all iterative solvers. You have to make sure everything >> converges. >> > > When I set restart to 100, and do -ksp_monitor, it does converge (for the > fieldsplit_1_ksp). 
Are you saying that in spite of having > -ksp_converged_reason in the option and petsc completing the run with the > message "Linear solve converged due to CONVERGED_RTOL .." not enough to > make sure that everything is converging ? If that is the case what should I > do for this particular problem ? > If your outer iteration converges, and you do not like the solution, there are usually two possibilities: 1) Your tolerance is too high, start with it cranked down all the way (1e-10) and slowly relax it 2) You have a null space that you are not accounting for I have used the MAC scheme with indexing as shown in: fig 7.5, page 96 of: > http://books.google.co.uk/books?id=W83gxp165SkC&printsec=frontcover&dq=Introduction+to+Numerical+Geodynamic+Modelling&hl=en&sa=X&ei=v6TmUaP_L4PuOs3agJgE&ved=0CDIQ6AEwAA > > Thus I have a DM with 4 dof but there are several "ghost values" set to 0. > Would this cause any problem when using the multigrid ? (This has worked > fine when not using the multigrid.) > I don't know exactly how you have implemented this. These should be rows of the identity. > I do this problem in the tutorial with >> a constant viscosity. >> > > Which tutorial are you referring to ? Could you please provide me the link > please ? > There are a few on the PETSc Tutorials page, but you can look at this http://www.geodynamics.org/cig/community/workinggroups/short/workshops/cdm2013/presentations/SessionIV_Solvers.pdf for a step-by-step example of a saddle-point problem at the end. Matt > > >> Matt >> >> >>> Matt >>>> >>>> >>>>> but I get the following errror: >>>>> >>>>> [0]PETSC ERROR: --------------------- Error Message >>>>> ------------------------------------ >>>>> [0]PETSC ERROR: Null argument, when expecting valid pointer! >>>>> [0]PETSC ERROR: Null Object: Parameter # 2! >>>>> [0]PETSC ERROR: >>>>> ------------------------------------------------------------------------ >>>>> [0]PETSC ERROR: Petsc Release Version 3.4.2, Jul, 02, 2013 >>>>> [0]PETSC ERROR: See docs/changes/index.html for recent updates. >>>>> [0]PETSC ERROR: See docs/faq.html for hints about trouble shooting. >>>>> [0]PETSC ERROR: See docs/index.html for manual pages. 
>>>>> [0]PETSC ERROR: >>>>> ------------------------------------------------------------------------ >>>>> [0]PETSC ERROR: src/AdLemMain on a arch-linux2-cxx-debug named edwards >>>>> by bkhanal Fri Jul 26 14:23:40 2013 >>>>> [0]PETSC ERROR: Libraries linked from >>>>> /home/bkhanal/Documents/softwares/petsc-3.4.2/arch-linux2-cxx-debug/lib >>>>> [0]PETSC ERROR: Configure run at Fri Jul 19 14:25:01 2013 >>>>> [0]PETSC ERROR: Configure options --with-cc=gcc --with-fc=g77 >>>>> --with-cxx=g++ --download-f-blas-lapack=1 --download-mpich=1 >>>>> -with-clanguage=cxx --download-hypre=1 >>>>> [0]PETSC ERROR: >>>>> ------------------------------------------------------------------------ >>>>> [0]PETSC ERROR: MatPtAP() line 8166 in >>>>> /home/bkhanal/Documents/softwares/petsc-3.4.2/src/mat/interface/matrix.c >>>>> [0]PETSC ERROR: PCSetUp_MG() line 628 in >>>>> /home/bkhanal/Documents/softwares/petsc-3.4.2/src/ksp/pc/impls/mg/mg.c >>>>> [0]PETSC ERROR: PCSetUp() line 890 in >>>>> /home/bkhanal/Documents/softwares/petsc-3.4.2/src/ksp/pc/interface/precon.c >>>>> [0]PETSC ERROR: KSPSetUp() line 278 in >>>>> /home/bkhanal/Documents/softwares/petsc-3.4.2/src/ksp/ksp/interface/itfunc.c >>>>> [0]PETSC ERROR: KSPSolve() line 399 in >>>>> /home/bkhanal/Documents/softwares/petsc-3.4.2/src/ksp/ksp/interface/itfunc.c >>>>> [0]PETSC ERROR: PCApply_FieldSplit_Schur() line 807 in >>>>> /home/bkhanal/Documents/softwares/petsc-3.4.2/src/ksp/pc/impls/fieldsplit/fieldsplit.c >>>>> [0]PETSC ERROR: PCApply() line 442 in >>>>> /home/bkhanal/Documents/softwares/petsc-3.4.2/src/ksp/pc/interface/precon.c >>>>> [0]PETSC ERROR: KSP_PCApply() line 227 in >>>>> /home/bkhanal/Documents/softwares/petsc-3.4.2/include/petsc-private/kspimpl.h >>>>> [0]PETSC ERROR: KSPInitialResidual() line 64 in >>>>> /home/bkhanal/Documents/softwares/petsc-3.4.2/src/ksp/ksp/interface/itres.c >>>>> [0]PETSC ERROR: KSPSolve_GMRES() line 239 in >>>>> /home/bkhanal/Documents/softwares/petsc-3.4.2/src/ksp/ksp/impls/gmres/gmres.c >>>>> [0]PETSC ERROR: KSPSolve() line 441 in >>>>> /home/bkhanal/Documents/softwares/petsc-3.4.2/src/ksp/ksp/interface/itfunc.c >>>>> >>>>> >>>>> >>>> >>>> >>>> -- >>>> What most experimenters take for granted before they begin their >>>> experiments is infinitely more interesting than any results to which their >>>> experiments lead. >>>> -- Norbert Wiener >>>> >>> >>> >> >> >> -- >> What most experimenters take for granted before they begin their >> experiments is infinitely more interesting than any results to which their >> experiments lead. >> -- Norbert Wiener >> > > -- What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. -- Norbert Wiener -------------- next part -------------- An HTML attachment was scrubbed... URL: From dave.mayhem23 at gmail.com Fri Jul 26 11:05:02 2013 From: dave.mayhem23 at gmail.com (Dave May) Date: Fri, 26 Jul 2013 18:05:02 +0200 Subject: [petsc-users] discontinuous viscosity stokes equation 3D staggered grid In-Reply-To: References: <87li5555oo.fsf@mcs.anl.gov> Message-ID: Yes, the nullspace is important. There is definitely a pressure null space of constants which needs to be considered. I don't believe ONLY using -fieldsplit_1_ksp_constant_null_space is not sufficient. You need to inform the KSP for the outer syetem (the coupled u-p system) that there is a null space of constants in the pressure system. This cannot (as far as I'm aware) be set via command line args. 
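In code, attaching that null space to the outer KSP might look like the following minimal sketch (hypothetical and untested; it assumes the 4-dof DMDA da used in this thread with pressure as component 3, the outer coupled KSP ksp, and it already skips the fictitious face dofs discussed next):

    Vec          pconst;
    MatNullSpace nsp;
    PetscScalar  ****arr;
    PetscInt     i,j,k,xs,ys,zs,xm,ym,zm,mx,my,mz;

    DMDAGetInfo(da,NULL,&mx,&my,&mz,NULL,NULL,NULL,NULL,NULL,NULL,NULL,NULL,NULL);
    DMDAGetCorners(da,&xs,&ys,&zs,&xm,&ym,&zm);
    DMCreateGlobalVector(da,&pconst);
    VecSet(pconst,0.0);
    DMDAVecGetArrayDOF(da,pconst,&arr);
    for (k=zs; k<zs+zm; k++)
      for (j=ys; j<ys+ym; j++)
        for (i=xs; i<xs+xm; i++)
          if (i < mx-1 && j < my-1 && k < mz-1) arr[k][j][i][3] = 1.0; /* real pressure cells only */
    DMDAVecRestoreArrayDOF(da,pconst,&arr);
    VecNormalize(pconst,NULL);                                /* MatNullSpaceCreate() expects unit vectors */
    MatNullSpaceCreate(PETSC_COMM_WORLD,PETSC_FALSE,1,&pconst,&nsp);
    KSPSetNullSpace(ksp,nsp);                                 /* the outer (coupled) KSP, not a fieldsplit sub-KSP */
    MatNullSpaceDestroy(&nsp);
    VecDestroy(&pconst);

Alternatively, MatNullSpaceSetFunction() lets you supply exactly the kind of custom removal callback described below.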
You need to write a null space removal function which accepts the complete (u,p) vector and only modifies the pressure dofs. Take care when you write this, though. Because of the DMDA formulation you are using, you have introduced dummy/fictitious pressure dofs on the right/top/front faces of your mesh. Thus your null space removal function should ignore those dofs when you define the constant pressure constraint. Cheers, Dave On 26 July 2013 17:42, Matthew Knepley wrote: > On Fri, Jul 26, 2013 at 10:13 AM, Bishesh Khanal wrote: > >> >> >> >> On Fri, Jul 26, 2013 at 4:22 PM, Matthew Knepley wrote: >> >>> On Fri, Jul 26, 2013 at 9:11 AM, Bishesh Khanal wrote: >>> >>>> >>>> >>>> >>>> On Fri, Jul 26, 2013 at 2:32 PM, Matthew Knepley wrote: >>>> >>>>> On Fri, Jul 26, 2013 at 7:28 AM, Bishesh Khanal wrote: >>>>> >>>>>> >>>>>> >>>>>> >>>>>> On Wed, Jul 17, 2013 at 9:48 PM, Jed Brown wrote: >>>>>> >>>>>>> Bishesh Khanal writes: >>>>>>> >>>>>>> > Now, I implemented two different approaches, each for both 2D and >>>>>>> 3D, in >>>>>>> > MATLAB. It works for the smaller sizes but I have problems >>>>>>> solving it for >>>>>>> > the problem size I need (250^3 grid size). >>>>>>> > I use staggered grid with p on cell centers, and components of v >>>>>>> on cell >>>>>>> > faces. Similar split up of K to cell center and faces to account >>>>>>> for the >>>>>>> > variable viscosity case) >>>>>>> >>>>>>> Okay, you're using a staggered-grid finite difference >>>>>>> discretization of >>>>>>> variable-viscosity Stokes. This is a common problem and I recommend >>>>>>> starting with PCFieldSplit with Schur complement reduction (make >>>>>>> that >>>>>>> work first, then switch to block preconditioner). >>>>>> >>>>>> >>>>>> Ok, I made my 3D problem work with PCFieldSplit with Schur >>>>>> complement reduction using the options: >>>>>> -pc_fieldsplit_type schur -pc_fieldsplit_detect_saddle_point >>>>>> -fieldsplit_1_ksp_constant_null_space >>>>>> >>>>>> >>>>>> You can use PCLSC or >>>>>>> (probably better for you), assemble a preconditioning matrix >>>>>>> containing >>>>>>> the inverse viscosity in the pressure-pressure block. This diagonal >>>>>>> matrix is a spectrally equivalent (or nearly so, depending on >>>>>>> discretization) approximation of the Schur complement. The velocity >>>>>>> block can be solved with algebraic multigrid. Read the PCFieldSplit >>>>>>> docs (follow papers as appropriate) and let us know if you get >>>>>>> stuck. >>>>>>> >>>>>> >>>>>> Now, I got a little confused in how exactly to use command line >>>>>> options to use multigrid for the velocity bock and PCLS for the pressure >>>>>> block. After going through the manual I tried the following: >>>>>> >>>>> >>>>> You want Algebraic Multigrid >>>>> >>>>> -pc_type fieldsplit -pc_fieldsplit_detect_saddle_point >>>>> -pc_fieldsplit_type schur >>>>> -fieldsplit_0_pc_type gamg >>>>> -fieldsplit_1_ksp_type fgmres >>>>> -fieldsplit_1_ksp_constant_null_space >>>>> -fieldsplit_1_ksp_monitor_short >>>>> -fieldsplit_1_pc_type lsc >>>>> -ksp_converged_reason >>>>> >>>>> I tried the above set of options but the solution I get seem to be >>>> not correct. The velocity field I get are quite different than the one I >>>> got before without using gamg which were the expected one. >>>> Note: (Also, I had to add one extra option of >>>> -fieldsplit_1_ksp_gmres_restart 100 , because the fieldsplit_1_ksp residual >>>> norm did not converge within default 30 iterations before restarting). >>>> >>> >>> These are all iterative solvers. You have to make sure everything
You have to make sure everything >>> converges. >>> >> >> When I set restart to 100, and do -ksp_monitor, it does converge (for the >> fieldsplit_1_ksp). Are you saying that in spite of having >> -ksp_converged_reason in the option and petsc completing the run with the >> message "Linear solve converged due to CONVERGED_RTOL .." not enough to >> make sure that everything is converging ? If that is the case what should I >> do for this particular problem ? >> > > If your outer iteration converges, and you do not like the solution, there > are usually two possibilities: > > 1) Your tolerance is too high, start with it cranked down all the way > (1e-10) and slowly relax it > > 2) You have a null space that you are not accounting for > > I have used the MAC scheme with indexing as shown in: fig 7.5, page 96 >> of: >> http://books.google.co.uk/books?id=W83gxp165SkC&printsec=frontcover&dq=Introduction+to+Numerical+Geodynamic+Modelling&hl=en&sa=X&ei=v6TmUaP_L4PuOs3agJgE&ved=0CDIQ6AEwAA >> >> Thus I have a DM with 4 dof but there are several "ghost values" set to 0. >> Would this cause any problem when using the multigrid ? (This has worked >> fine when not using the multigrid.) >> > > I don't know exactly how you have implemented this. These should be rows > of the identity. > > >> I do this problem in the tutorial with >>> a constant viscosity. >>> >> >> Which tutorial are you referring to ? Could you please provide me the >> link please ? >> > > There are a few on the PETSc Tutorials page, but you can look at this > > > http://www.geodynamics.org/cig/community/workinggroups/short/workshops/cdm2013/presentations/SessionIV_Solvers.pdf > > for a step-by-step example of a saddle-point problem at the end. > > Matt > > >> >> >>> Matt >>> >>> >>>> Matt >>>>> >>>>> >>>>>> but I get the following errror: >>>>>> >>>>>> [0]PETSC ERROR: --------------------- Error Message >>>>>> ------------------------------------ >>>>>> [0]PETSC ERROR: Null argument, when expecting valid pointer! >>>>>> [0]PETSC ERROR: Null Object: Parameter # 2! >>>>>> [0]PETSC ERROR: >>>>>> ------------------------------------------------------------------------ >>>>>> [0]PETSC ERROR: Petsc Release Version 3.4.2, Jul, 02, 2013 >>>>>> [0]PETSC ERROR: See docs/changes/index.html for recent updates. >>>>>> [0]PETSC ERROR: See docs/faq.html for hints about trouble shooting. >>>>>> [0]PETSC ERROR: See docs/index.html for manual pages. 
>>>>>> [0]PETSC ERROR: >>>>>> ------------------------------------------------------------------------ >>>>>> [0]PETSC ERROR: src/AdLemMain on a arch-linux2-cxx-debug named >>>>>> edwards by bkhanal Fri Jul 26 14:23:40 2013 >>>>>> [0]PETSC ERROR: Libraries linked from >>>>>> /home/bkhanal/Documents/softwares/petsc-3.4.2/arch-linux2-cxx-debug/lib >>>>>> [0]PETSC ERROR: Configure run at Fri Jul 19 14:25:01 2013 >>>>>> [0]PETSC ERROR: Configure options --with-cc=gcc --with-fc=g77 >>>>>> --with-cxx=g++ --download-f-blas-lapack=1 --download-mpich=1 >>>>>> -with-clanguage=cxx --download-hypre=1 >>>>>> [0]PETSC ERROR: >>>>>> ------------------------------------------------------------------------ >>>>>> [0]PETSC ERROR: MatPtAP() line 8166 in >>>>>> /home/bkhanal/Documents/softwares/petsc-3.4.2/src/mat/interface/matrix.c >>>>>> [0]PETSC ERROR: PCSetUp_MG() line 628 in >>>>>> /home/bkhanal/Documents/softwares/petsc-3.4.2/src/ksp/pc/impls/mg/mg.c >>>>>> [0]PETSC ERROR: PCSetUp() line 890 in >>>>>> /home/bkhanal/Documents/softwares/petsc-3.4.2/src/ksp/pc/interface/precon.c >>>>>> [0]PETSC ERROR: KSPSetUp() line 278 in >>>>>> /home/bkhanal/Documents/softwares/petsc-3.4.2/src/ksp/ksp/interface/itfunc.c >>>>>> [0]PETSC ERROR: KSPSolve() line 399 in >>>>>> /home/bkhanal/Documents/softwares/petsc-3.4.2/src/ksp/ksp/interface/itfunc.c >>>>>> [0]PETSC ERROR: PCApply_FieldSplit_Schur() line 807 in >>>>>> /home/bkhanal/Documents/softwares/petsc-3.4.2/src/ksp/pc/impls/fieldsplit/fieldsplit.c >>>>>> [0]PETSC ERROR: PCApply() line 442 in >>>>>> /home/bkhanal/Documents/softwares/petsc-3.4.2/src/ksp/pc/interface/precon.c >>>>>> [0]PETSC ERROR: KSP_PCApply() line 227 in >>>>>> /home/bkhanal/Documents/softwares/petsc-3.4.2/include/petsc-private/kspimpl.h >>>>>> [0]PETSC ERROR: KSPInitialResidual() line 64 in >>>>>> /home/bkhanal/Documents/softwares/petsc-3.4.2/src/ksp/ksp/interface/itres.c >>>>>> [0]PETSC ERROR: KSPSolve_GMRES() line 239 in >>>>>> /home/bkhanal/Documents/softwares/petsc-3.4.2/src/ksp/ksp/impls/gmres/gmres.c >>>>>> [0]PETSC ERROR: KSPSolve() line 441 in >>>>>> /home/bkhanal/Documents/softwares/petsc-3.4.2/src/ksp/ksp/interface/itfunc.c >>>>>> >>>>>> >>>>>> >>>>> >>>>> >>>>> -- >>>>> What most experimenters take for granted before they begin their >>>>> experiments is infinitely more interesting than any results to which their >>>>> experiments lead. >>>>> -- Norbert Wiener >>>>> >>>> >>>> >>> >>> >>> -- >>> What most experimenters take for granted before they begin their >>> experiments is infinitely more interesting than any results to which their >>> experiments lead. >>> -- Norbert Wiener >>> >> >> > > > -- > What most experimenters take for granted before they begin their > experiments is infinitely more interesting than any results to which their > experiments lead. > -- Norbert Wiener > -------------- next part -------------- An HTML attachment was scrubbed... URL: From dave.mayhem23 at gmail.com Fri Jul 26 11:06:20 2013 From: dave.mayhem23 at gmail.com (Dave May) Date: Fri, 26 Jul 2013 18:06:20 +0200 Subject: [petsc-users] discontinuous viscosity stokes equation 3D staggered grid In-Reply-To: References: <87li5555oo.fsf@mcs.anl.gov> Message-ID: Sorry - I meant I believe ONLY using -fieldsplit_1_ksp_constant_null_space is NOT sufficient. On 26 July 2013 18:05, Dave May wrote: > Yes, the nullspace is important. > There is definitely a pressure null space of constants which needs to be > considered. > I don't believe ONLY using > -fieldsplit_1_ksp_constant_null_space > is not sufficient. 
> You need to inform the KSP for the outer system (the coupled u-p system)
> that there is a null space of constants in the pressure system. This cannot
> (as far as I'm aware) be set via command line args. You need to write a
> null space removal function which accepts the complete (u,p) vector and
> which only modifies the pressure dofs.
>
> Take care when you write this, though. Because of the DMDA formulation you
> are using, you have introduced dummy/fictitious pressure dofs on the
> right/top/front faces of your mesh. Thus your null space removal function
> should ignore those dofs when you define the constant pressure constraint.
>
> Cheers,
> Dave
>
> On 26 July 2013 17:42, Matthew Knepley wrote:
>> On Fri, Jul 26, 2013 at 10:13 AM, Bishesh Khanal wrote:
>>> On Fri, Jul 26, 2013 at 4:22 PM, Matthew Knepley wrote:
>>>> On Fri, Jul 26, 2013 at 9:11 AM, Bishesh Khanal wrote:
>>>>> On Fri, Jul 26, 2013 at 2:32 PM, Matthew Knepley wrote:
>>>>>> On Fri, Jul 26, 2013 at 7:28 AM, Bishesh Khanal wrote:
>>>>>>> On Wed, Jul 17, 2013 at 9:48 PM, Jed Brown wrote:
>>>>>>>> Bishesh Khanal writes:
>>>>>>>>> Now, I implemented two different approaches, each for both 2D and
>>>>>>>>> 3D, in MATLAB. It works for the smaller sizes but I have problems
>>>>>>>>> solving it for the problem size I need (250^3 grid size).
>>>>>>>>> I use staggered grid with p on cell centers, and components of v
>>>>>>>>> on cell faces. (Similar split up of K to cell center and faces to
>>>>>>>>> account for the variable viscosity case)
>>>>>>>>
>>>>>>>> Okay, you're using a staggered-grid finite difference
>>>>>>>> discretization of variable-viscosity Stokes. This is a common
>>>>>>>> problem and I recommend starting with PCFieldSplit with Schur
>>>>>>>> complement reduction (make that work first, then switch to block
>>>>>>>> preconditioner).
>>>>>>>
>>>>>>> Ok, I made my 3D problem work with PCFieldSplit with Schur
>>>>>>> complement reduction using the options:
>>>>>>> -pc_fieldsplit_type schur -pc_fieldsplit_detect_saddle_point
>>>>>>> -fieldsplit_1_ksp_constant_null_space
>>>>>>>
>>>>>>>> You can use PCLSC or (probably better for you), assemble a
>>>>>>>> preconditioning matrix containing the inverse viscosity in the
>>>>>>>> pressure-pressure block. This diagonal matrix is a spectrally
>>>>>>>> equivalent (or nearly so, depending on discretization)
>>>>>>>> approximation of the Schur complement. The velocity block can be
>>>>>>>> solved with algebraic multigrid. Read the PCFieldSplit docs
>>>>>>>> (follow papers as appropriate) and let us know if you get stuck.
>>>>>>>
>>>>>>> Now, I got a little confused about how exactly to use command line
>>>>>>> options to use multigrid for the velocity block and PCLSC for the
>>>>>>> pressure block. After going through the manual I tried the
>>>>>>> following:
>>>>>>
>>>>>> You want Algebraic Multigrid
>>>>>>
>>>>>> -pc_type fieldsplit -pc_fieldsplit_detect_saddle_point
>>>>>> -pc_fieldsplit_type schur
>>>>>> -fieldsplit_0_pc_type gamg
>>>>>> -fieldsplit_1_ksp_type fgmres
>>>>>> -fieldsplit_1_ksp_constant_null_space
>>>>>> -fieldsplit_1_ksp_monitor_short
>>>>>> -fieldsplit_1_pc_type lsc
>>>>>> -ksp_converged_reason
>>>>>
>>>>> I tried the above set of options but the solution I get seems to be
>>>>> incorrect. The velocity fields I get are quite different from the
>>>>> ones I got before without using gamg, which were the expected ones.
>>>>> Note: (Also, I had to add one extra option,
>>>>> -fieldsplit_1_ksp_gmres_restart 100, because the fieldsplit_1_ksp
>>>>> residual norm did not converge within the default 30 iterations
>>>>> before restarting.)
>>>>
>>>> These are all iterative solvers. You have to make sure everything
>>>> converges.
>>>
>>> When I set restart to 100, and do -ksp_monitor, it does converge (for
>>> the fieldsplit_1_ksp). Are you saying that, in spite of having
>>> -ksp_converged_reason in the options and petsc completing the run with
>>> the message "Linear solve converged due to CONVERGED_RTOL ..", this is
>>> not enough to make sure that everything is converging? If that is the
>>> case, what should I do for this particular problem?
>>
>> If your outer iteration converges, and you do not like the solution,
>> there are usually two possibilities:
>>
>> 1) Your tolerance is too high, start with it cranked down all the way
>> (1e-10) and slowly relax it
>>
>> 2) You have a null space that you are not accounting for
>>
>>> I have used the MAC scheme with indexing as shown in fig 7.5, page 96
>>> of:
>>> http://books.google.co.uk/books?id=W83gxp165SkC&printsec=frontcover&dq=Introduction+to+Numerical+Geodynamic+Modelling&hl=en&sa=X&ei=v6TmUaP_L4PuOs3agJgE&ved=0CDIQ6AEwAA
>>>
>>> Thus I have a DM with 4 dof but there are several "ghost values" set
>>> to 0. Would this cause any problem when using the multigrid? (This has
>>> worked fine when not using the multigrid.)
>>
>> I don't know exactly how you have implemented this. These should be
>> rows of the identity.
>>
>>>> I do this problem in the tutorial with a constant viscosity.
>>>
>>> Which tutorial are you referring to? Could you please provide me the
>>> link?
>>
>> There are a few on the PETSc Tutorials page, but you can look at this
>>
>> http://www.geodynamics.org/cig/community/workinggroups/short/workshops/cdm2013/presentations/SessionIV_Solvers.pdf
>>
>> for a step-by-step example of a saddle-point problem at the end.
>>
>>    Matt
>>
>>>>>>> but I get the following error:
>>>>>>>
>>>>>>> [0]PETSC ERROR: --------------------- Error Message ------------------------------------
>>>>>>> [0]PETSC ERROR: Null argument, when expecting valid pointer!
>>>>>>> [0]PETSC ERROR: Null Object: Parameter # 2!
>>>>>>> [0]PETSC ERROR: ------------------------------------------------------------------------
>>>>>>> [0]PETSC ERROR: Petsc Release Version 3.4.2, Jul, 02, 2013
>>>>>>> [0]PETSC ERROR: See docs/changes/index.html for recent updates.
>>>>>>> [0]PETSC ERROR: See docs/faq.html for hints about trouble shooting.
>>>>>>> [0]PETSC ERROR: See docs/index.html for manual pages.
>>>>>>> [0]PETSC ERROR: ------------------------------------------------------------------------
>>>>>>> [0]PETSC ERROR: src/AdLemMain on a arch-linux2-cxx-debug named edwards by bkhanal Fri Jul 26 14:23:40 2013
>>>>>>> [0]PETSC ERROR: Libraries linked from /home/bkhanal/Documents/softwares/petsc-3.4.2/arch-linux2-cxx-debug/lib
>>>>>>> [0]PETSC ERROR: Configure run at Fri Jul 19 14:25:01 2013
>>>>>>> [0]PETSC ERROR: Configure options --with-cc=gcc --with-fc=g77 --with-cxx=g++ --download-f-blas-lapack=1 --download-mpich=1 -with-clanguage=cxx --download-hypre=1
>>>>>>> [0]PETSC ERROR: ------------------------------------------------------------------------
>>>>>>> [0]PETSC ERROR: MatPtAP() line 8166 in /home/bkhanal/Documents/softwares/petsc-3.4.2/src/mat/interface/matrix.c
>>>>>>> [0]PETSC ERROR: PCSetUp_MG() line 628 in /home/bkhanal/Documents/softwares/petsc-3.4.2/src/ksp/pc/impls/mg/mg.c
>>>>>>> [0]PETSC ERROR: PCSetUp() line 890 in /home/bkhanal/Documents/softwares/petsc-3.4.2/src/ksp/pc/interface/precon.c
>>>>>>> [0]PETSC ERROR: KSPSetUp() line 278 in /home/bkhanal/Documents/softwares/petsc-3.4.2/src/ksp/ksp/interface/itfunc.c
>>>>>>> [0]PETSC ERROR: KSPSolve() line 399 in /home/bkhanal/Documents/softwares/petsc-3.4.2/src/ksp/ksp/interface/itfunc.c
>>>>>>> [0]PETSC ERROR: PCApply_FieldSplit_Schur() line 807 in /home/bkhanal/Documents/softwares/petsc-3.4.2/src/ksp/pc/impls/fieldsplit/fieldsplit.c
>>>>>>> [0]PETSC ERROR: PCApply() line 442 in /home/bkhanal/Documents/softwares/petsc-3.4.2/src/ksp/pc/interface/precon.c
>>>>>>> [0]PETSC ERROR: KSP_PCApply() line 227 in /home/bkhanal/Documents/softwares/petsc-3.4.2/include/petsc-private/kspimpl.h
>>>>>>> [0]PETSC ERROR: KSPInitialResidual() line 64 in /home/bkhanal/Documents/softwares/petsc-3.4.2/src/ksp/ksp/interface/itres.c
>>>>>>> [0]PETSC ERROR: KSPSolve_GMRES() line 239 in /home/bkhanal/Documents/softwares/petsc-3.4.2/src/ksp/ksp/impls/gmres/gmres.c
>>>>>>> [0]PETSC ERROR: KSPSolve() line 441 in /home/bkhanal/Documents/softwares/petsc-3.4.2/src/ksp/ksp/interface/itfunc.c
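A minimal sketch of the "rows of the identity" suggestion above, for one fictitious pressure dof of a 4-component DMDA. The component index 3 for p, the matrix A, and the index variables are assumptions for illustration, not taken from the code under discussion:

MatStencil  row;
PetscScalar one = 1.0;

/* make the dummy p dof at (i,j,k) a row of the identity, so it stays
   decoupled from the rest of the system */
row.i = i; row.j = j; row.k = k; row.c = 3;   /* p assumed to be component 3 */
ierr = MatSetValuesStencil(A,1,&row,1,&row,&one,INSERT_VALUES);CHKERRQ(ierr);
/* and keep the corresponding rhs entry at 0, so the dummy dof solves to 0 */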
From popov at uni-mainz.de  Sat Jul 27 07:03:27 2013
From: popov at uni-mainz.de (Anton Popov)
Date: Sat, 27 Jul 2013 14:03:27 +0200
Subject: [petsc-users] discontinuous viscosity stokes equation 3D staggered grid
In-Reply-To: 
References: <87li5555oo.fsf@mcs.anl.gov>
Message-ID: <51F3B70F.4040707@uni-mainz.de>

Dave,

coupled staggered grid possesses a non-trivial null space. That means it
probably contains not just three translations, three rotations and a
constant pressure. It would be nice if somebody finally figured out what
it looks like. Any ideas? This information is valuable for building a
proper coupled algebraic multigrid preconditioner.

As for the fake dofs, I really don't like them. It's perfectly possible
to design a numbering scheme that would contain only real dofs, and to
implement it with PETSc. I'm working on it.

Anton

On 7/26/13 6:05 PM, Dave May wrote:
> Yes, the nullspace is important.
> There is definitely a pressure null space of constants which needs to
> be considered. I don't believe ONLY using
> -fieldsplit_1_ksp_constant_null_space
> is sufficient.
> [...]
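One piece that is at least known for the velocity block alone: gamg can be given the rigid-body modes as a near-null space, which it uses to build its tentative prolongator. A sketch, assuming A00 is the assembled velocity operator and daVel its DMDA (both placeholder names); how to extend this to the coupled staggered system is exactly the open question above:

Vec          coords;
MatNullSpace nearNull;

ierr = DMGetCoordinates(daVel,&coords);CHKERRQ(ierr);       /* daVel: placeholder velocity DMDA */
ierr = MatNullSpaceCreateRigidBody(coords,&nearNull);CHKERRQ(ierr);
ierr = MatSetNearNullSpace(A00,nearNull);CHKERRQ(ierr);     /* A00: placeholder velocity block */
ierr = MatNullSpaceDestroy(&nearNull);CHKERRQ(ierr);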
From knepley at gmail.com  Sat Jul 27 07:14:28 2013
From: knepley at gmail.com (Matthew Knepley)
Date: Sat, 27 Jul 2013 07:14:28 -0500
Subject: [petsc-users] discontinuous viscosity stokes equation 3D staggered grid
In-Reply-To: <51F3B70F.4040707@uni-mainz.de>
References: <87li5555oo.fsf@mcs.anl.gov> <51F3B70F.4040707@uni-mainz.de>
Message-ID: 

On Sat, Jul 27, 2013 at 7:03 AM, Anton Popov wrote:
> coupled staggered grid possesses a non-trivial null space. [...]
>
> As for the fake dofs, I really don't like them. It's perfectly possible
> to design a numbering scheme that would contain only real dofs, and to
> implement it with PETSc. I'm working on it.
My recommended way to do this is to use PetscSection() to define the dofs
over a DMDA. This currently works correctly in parallel for vertex and
cell dofs. The code for face dofs is in there, but I have never tested it.
You have to use the 'next' branch because this is experimental stuff.

   Thanks,

      Matt

> [...]
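For readers following this later, a rough sketch of what such a PetscSection layout might look like. This assumes the experimental 'next'-branch interface mentioned above; the point ranges pStart/pEnd, cStart/cEnd and fStart/fEnd are placeholders for whatever numbering the DM provides, and the sketch is untested:

PetscSection s;
PetscInt     p;

ierr = PetscSectionCreate(PETSC_COMM_WORLD,&s);CHKERRQ(ierr);
ierr = PetscSectionSetChart(s,pStart,pEnd);CHKERRQ(ierr);
for (p = cStart; p < cEnd; ++p) {ierr = PetscSectionSetDof(s,p,1);CHKERRQ(ierr);}  /* pressure on cells */
for (p = fStart; p < fEnd; ++p) {ierr = PetscSectionSetDof(s,p,1);CHKERRQ(ierr);}  /* normal velocity on faces */
ierr = PetscSectionSetUp(s);CHKERRQ(ierr);
ierr = DMSetDefaultSection(dm,s);CHKERRQ(ierr);   /* dm: the DMDA, placeholder */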
From popov at uni-mainz.de  Mon Jul 29 05:01:31 2013
From: popov at uni-mainz.de (Anton Popov)
Date: Mon, 29 Jul 2013 12:01:31 +0200
Subject: [petsc-users] discontinuous viscosity stokes equation 3D staggered grid
In-Reply-To: 
References: <87li5555oo.fsf@mcs.anl.gov> <51F3B70F.4040707@uni-mainz.de>
Message-ID: <51F63D7B.4080103@uni-mainz.de>

On 7/27/13 2:14 PM, Matthew Knepley wrote:
> My recommended way to do this is to use PetscSection() to define the
> dofs over a DMDA. [...]
Thanks, Matt, I'll check this.
Do you have any info about PetscSection() (a presentation, tutorial, or
whatever), or should I just use the sources?

Anton

From knepley at gmail.com  Mon Jul 29 07:27:44 2013
From: knepley at gmail.com (Matthew Knepley)
Date: Mon, 29 Jul 2013 07:27:44 -0500
Subject: [petsc-users] discontinuous viscosity stokes equation 3D staggered grid
In-Reply-To: <51F63D7B.4080103@uni-mainz.de>
References: <87li5555oo.fsf@mcs.anl.gov> <51F3B70F.4040707@uni-mainz.de> <51F63D7B.4080103@uni-mainz.de>
Message-ID: 

On Mon, Jul 29, 2013 at 5:01 AM, Anton Popov wrote:
> Do you have any info about PetscSection() (a presentation, tutorial, or
> whatever), or should I just use the sources?

It's a really simple class, so the manpages should be alright. If not, let
me know. There are also some slides on it from my recent Paris tutorial.

   Thanks,

      Matt

From heikki.a.virtanen at hotmail.com  Mon Jul 29 07:54:34 2013
From: heikki.a.virtanen at hotmail.com (Heikki Virtanen)
Date: Mon, 29 Jul 2013 15:54:34 +0300
Subject: [petsc-users] MATMPIBAIJ matrix allocation
In-Reply-To: 
References: 
Message-ID: 

Hi, I try to solve an eigenvalue problem (Ax = lambda B x) where A and B
are complex matrices. Unfortunately, my data structures are not yet
capable of handling complex numbers directly. So, I have to use

        [ re(A)  -im(A) ]
RealA = [                ]
        [ im(A)   re(A) ]

matrices instead. How should I allocate the matrices if I know the
compressed sparse row (CSR) structure of the A and B matrices? I have used
something like this:

ierr = MatCreate(PETSC_COMM_WORLD,&RealA); CHKERRQ(ierr);
ierr = MatSetSizes(RealA,PETSC_DECIDE,PETSC_DECIDE,2*n,2*n); CHKERRQ(ierr);
ierr = MatSetType(RealA,MATMPIBAIJ); CHKERRQ(ierr);
ierr = MatSetBlockSize(RealA,2); CHKERRQ(ierr);
ierr = MatSetFromOptions(RealA); CHKERRQ(ierr);

ierr = MatMPIBAIJSetPreallocationCSR(RealA,2,rows,cols,0); CHKERRQ(ierr);
ierr = MatAssemblyBegin(RealA,MAT_FINAL_ASSEMBLY); CHKERRQ(ierr);
ierr = MatAssemblyEnd(RealA,MAT_FINAL_ASSEMBLY); CHKERRQ(ierr);

where n is the global size of matrix A, and rows and cols are the CSR
arrays of A. Each submatrix of RealA has the same nonzero pattern as A.
But I am not sure if this is correct, because when I print my matrices out
they look more like band matrices than block matrices.

-Heikki

From jroman at dsic.upv.es  Mon Jul 29 08:20:49 2013
From: jroman at dsic.upv.es (Jose E. Roman)
Date: Mon, 29 Jul 2013 15:20:49 +0200
Subject: [petsc-users] MATMPIBAIJ matrix allocation
In-Reply-To: 
References: 
Message-ID: 

On 29/07/2013, at 14:54, Heikki Virtanen wrote:

> Hi, I try to solve an eigenvalue problem (Ax = lambda B x) where A and B
> are complex matrices. [...]

You are creating a BAIJ matrix with the nonzero pattern defined by
(rows,cols), where each entry of the matrix is a 2x2 block, in your case

[ re(a_ij)  -im(a_ij) ]
[ im(a_ij)   re(a_ij) ]

So the matrix you are creating is not this one

[ re(A)  -im(A) ]
[ im(A)   re(A) ]

but the one resulting from a perfect shuffle permutation. That's why you
are not seeing a 2x2 block structure for the whole matrix.

Jose
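If the interleaved (perfect-shuffle) layout is acceptable for the eigensolve, each complex entry can be inserted directly as one 2x2 block, along the lines of Jose's description. A sketch; the helper name and its arguments are illustrative, not from the code above:

/* Sketch: insert the complex entry a = re + i*im of A as the 2x2 real
   block [ re -im ; im re ] at block row i, block column j of RealA.
   MatSetValuesBlocked takes the block values in row-major order. */
PetscErrorCode InsertComplexEntry(Mat RealA,PetscInt i,PetscInt j,PetscReal re,PetscReal im)
{
  PetscScalar    v[4];
  PetscErrorCode ierr;

  v[0] = re;  v[1] = -im;
  v[2] = im;  v[3] = re;
  ierr = MatSetValuesBlocked(RealA,1,&i,1,&j,v,INSERT_VALUES);CHKERRQ(ierr);
  return 0;
}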
From bisheshkh at gmail.com  Mon Jul 29 08:42:06 2013
From: bisheshkh at gmail.com (Bishesh Khanal)
Date: Mon, 29 Jul 2013 15:42:06 +0200
Subject: [petsc-users] discontinuous viscosity stokes equation 3D staggered grid
In-Reply-To: 
References: <87li5555oo.fsf@mcs.anl.gov>
Message-ID: 

On Fri, Jul 26, 2013 at 6:05 PM, Dave May wrote:
> You need to write a null space removal function which accepts the
> complete (u,p) vector and which only modifies the pressure dofs. [...]
> Thus your null space removal function should ignore those dofs when you
> define the constant pressure constraint.

I'm aware of the constant pressure null space but did not realize that
just having -fieldsplit_1_ksp_constant_null_space would not be sufficient,
thanks. Now I tried doing what you said but I get some problems!
I created a null basis vector, say mNullBasis (creating a global vector
using the DMDA formulation). I set all its values to zero except the
non-dummy pressure dofs, whose values are set to one. Now I use
MatNullSpaceCreate to create a MatNullSpace, say mNullSpace, using the
mNullBasis vector.

The relevant lines in my solver function are (mKsp and mDa are my relevant
KSP and DM objects):

ierr = DMKSPSetComputeRHS(mDa,computeRHSTaras3D,this);CHKERRQ(ierr);
ierr = DMKSPSetComputeOperators(mDa,computeMatrixTaras3D,this);CHKERRQ(ierr);
ierr = KSPSetDM(mKsp,mDa);CHKERRQ(ierr);
ierr = KSPSetNullSpace(mKsp,mNullSpace);CHKERRQ(ierr);
ierr = KSPSetFromOptions(mKsp);CHKERRQ(ierr);
ierr = KSPSetUp(mKsp);CHKERRQ(ierr);
ierr = KSPSolve(mKsp,NULL,NULL);CHKERRQ(ierr);

And a corresponding addition in the function computeMatrixTaras3D is
something equivalent to:

ierr = MatNullSpaceRemove(mNullSpace,b,NULL);CHKERRQ(ierr); // b holds the rhs values at this point

When using -ksp_view I see that a null space now gets attached to the
outer solver, while I still need to keep the
-fieldsplit_1_ksp_constant_null_space option to have a null space attached
to the inner pressure block solver. However, the problem is that I do not
get the expected result with gamg, and the resulting solution blows up.
Am I doing something wrong or completely missing some point here with the
null space removal idea?
(Note that I am imposing a Dirichlet boundary condition, so in my case I
think the only null space is indeed the constant pressure, and not the
translations and rigid body rotations which would be present if using
e.g. a Neumann boundary condition.)

> [...]
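A minimal sketch of such a null-basis construction. It assumes the 4 dofs are ordered (vx, vy, vz, p) with the dummy pressure dofs on the i = mx-1, j = my-1 and k = mz-1 faces, and that A is the assembled system matrix; those names and orderings are assumptions, not taken from the code discussed here. Note that MatNullSpaceCreate expects an orthonormal basis, hence the VecNormalize:

Vec            v;
PetscScalar ****a;
PetscInt       i,j,k,xs,ys,zs,xm,ym,zm,mx,my,mz;
PetscBool      isNull;
MatNullSpace   nsp;

ierr = DMCreateGlobalVector(da,&v);CHKERRQ(ierr);
ierr = VecSet(v,0.0);CHKERRQ(ierr);
ierr = DMDAGetInfo(da,0,&mx,&my,&mz,0,0,0,0,0,0,0,0,0);CHKERRQ(ierr);
ierr = DMDAGetCorners(da,&xs,&ys,&zs,&xm,&ym,&zm);CHKERRQ(ierr);
ierr = DMDAVecGetArrayDOF(da,v,&a);CHKERRQ(ierr);
for (k = zs; k < zs+zm; ++k)
  for (j = ys; j < ys+ym; ++j)
    for (i = xs; i < xs+xm; ++i)
      if (i < mx-1 && j < my-1 && k < mz-1) a[k][j][i][3] = 1.0;  /* real p dofs only */
ierr = DMDAVecRestoreArrayDOF(da,v,&a);CHKERRQ(ierr);
ierr = VecNormalize(v,NULL);CHKERRQ(ierr);               /* basis must be orthonormal */
ierr = MatNullSpaceCreate(PETSC_COMM_WORLD,PETSC_FALSE,1,&v,&nsp);CHKERRQ(ierr);
ierr = MatNullSpaceTest(nsp,A,&isNull);CHKERRQ(ierr);    /* sanity check against the operator */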
From bisheshkh at gmail.com  Mon Jul 29 11:07:00 2013
From: bisheshkh at gmail.com (Bishesh Khanal)
Date: Mon, 29 Jul 2013 18:07:00 +0200
Subject: [petsc-users] discontinuous viscosity stokes equation 3D staggered grid
In-Reply-To: 
References: <87li5555oo.fsf@mcs.anl.gov>
Message-ID: 

On Mon, Jul 29, 2013 at 3:42 PM, Bishesh Khanal wrote:
> [...]
> However, the problem is that I do not get the expected result with gamg,
> and the resulting solution blows up.

Never mind, I found out that I had forgotten to normalize mNullBasis to
form the basis of the null space.

> (Note that I am imposing a Dirichlet boundary condition, so in my case I
> think the only null space is indeed the constant pressure [...])

So now I think PCFieldSplit with algebraic multigrid on the velocity block
works fine for the constant viscosity case. Now I'll look at the variable
viscosity case.
I'll have a look at Jed's suggestions for this from the beginning of the
thread and come back with a few more questions, thanks.

> [...]
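For reference, Jed's suggestion earlier in the thread amounts to assembling a diagonal matrix with the inverse cell viscosity over the pressure dofs and handing it to fieldsplit as the Schur-complement preconditioner. A sketch under petsc-3.4 naming; nLocalP and the eta array are placeholders:

Mat             Sp;
PetscInt        i,pStart,pEnd;
const PetscReal *eta;   /* placeholder: local cell-centered viscosities */

/* one nonzero per row: the diagonal holds 1/eta for each pressure dof */
ierr = MatCreateAIJ(PETSC_COMM_WORLD,nLocalP,nLocalP,PETSC_DETERMINE,PETSC_DETERMINE,1,NULL,0,NULL,&Sp);CHKERRQ(ierr);
ierr = MatGetOwnershipRange(Sp,&pStart,&pEnd);CHKERRQ(ierr);
for (i = pStart; i < pEnd; ++i) {
  ierr = MatSetValue(Sp,i,i,1.0/eta[i-pStart],INSERT_VALUES);CHKERRQ(ierr);
}
ierr = MatAssemblyBegin(Sp,MAT_FINAL_ASSEMBLY);CHKERRQ(ierr);
ierr = MatAssemblyEnd(Sp,MAT_FINAL_ASSEMBLY);CHKERRQ(ierr);
/* hand it to fieldsplit; in petsc-3.4 the call is, if I am not mistaken:
   PCFieldSplitSchurPrecondition(pc,PC_FIELDSPLIT_SCHUR_PRE_USER,Sp);   */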
From popov at uni-mainz.de  Mon Jul 29 11:06:53 2013
From: popov at uni-mainz.de (Anton Popov)
Date: Mon, 29 Jul 2013 18:06:53 +0200
Subject: [petsc-users] discontinuous viscosity stokes equation 3D staggered grid
In-Reply-To: 
References: <87li5555oo.fsf@mcs.anl.gov>
Message-ID: <51F6931D.4070700@uni-mainz.de>

On 7/29/13 3:42 PM, Bishesh Khanal wrote:
> [...]
> (Note that I am imposing a Dirichlet boundary condition, so in my case I
> think the only null space is indeed the constant pressure [...])

Bishesh, yes, pressure is the only field that is defined up to a constant
in the incompressible case, if you properly set the velocity Dirichlet BC.

What I was saying is that you need to know the full null space of your
linear system matrix if you want to set up an efficient algebraic
multigrid preconditioner. The (near) null space is the starting point for
the tentative prolongator. I don't know what it looks like for a staggered
grid, and I don't know how to make PETSc use this information (the latter
should be easy, I suspect).

Geometric multigrid for a staggered grid is also not a straightforward
option, because you need to define and pass to PETSc a custom
interpolation-restriction operator with PCMGSetRestriction or
PCMGSetInterpolation.
Without efficient multigrid your code will be severely limited for
large-scale 3D problems (especially with variable viscosity, if this is
your goal).

Which problem with variable viscosity are you solving?

In my opinion, the easiest way to remove the pressure null space in your
case is to add minor compressibility. In many cases you can easily get
rid of the strict incompressibility assumption, because everything is
compressible (at least slightly).

If you still want to project out constant pressure, then Dave is right:
-fieldsplit_1_ksp_constant_null_space will most likely not work, because
of the dummy DOFs. Another complication is that you should define the
null space only for the pressure. You probably should retrieve the KSP
for the pressure Schur complement using PCFieldSplitGetSubKSP after
setup of the fieldsplit preconditioner, and try to set your custom null
space directly for it. I'm not sure, however, whether this will work
out. We have a version of a staggered grid solver without dummy pressure
DOFs. For us it works both with and without ..._ksp_constant_null_space
for pressure.

Anton
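A minimal sketch of the PCFieldSplitGetSubKSP route described above
(untested; it assumes the outer PC is an already set-up Schur-type
PCFIELDSPLIT, and pNullVec is a hypothetical normalized vector that is
nonzero only at the genuine, non-dummy pressure dofs; KSPSetNullSpace is
the petsc-3.4-era call used elsewhere in this thread):

    KSP          *subksp;
    PetscInt      nsplits;
    MatNullSpace  nsp;
    ierr = PCFieldSplitGetSubKSP(pc,&nsplits,&subksp);CHKERRQ(ierr); /* valid only after PCSetUp() */
    ierr = MatNullSpaceCreate(PETSC_COMM_WORLD,PETSC_FALSE,1,&pNullVec,&nsp);CHKERRQ(ierr);
    ierr = KSPSetNullSpace(subksp[1],nsp);CHKERRQ(ierr);             /* split 1 = pressure block */
    ierr = MatNullSpaceDestroy(&nsp);CHKERRQ(ierr);
    ierr = PetscFree(subksp);CHKERRQ(ierr);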
From bisheshkh at gmail.com  Mon Jul 29 12:05:12 2013
From: bisheshkh at gmail.com (Bishesh Khanal)
Date: Mon, 29 Jul 2013 19:05:12 +0200
Subject: [petsc-users] petsc-users Digest, Vol 55, Issue 70
In-Reply-To: 
References: 
Message-ID: 

On Mon, Jul 29, 2013 at 6:07 PM, wrote:

> Today's Topics:
>
>    1. Re: discontinuous viscosity stokes equation 3D staggered
>       grid (Bishesh Khanal)
>    2. Re: discontinuous viscosity stokes equation 3D staggered
>       grid (Anton Popov)
>
> ----------------------------------------------------------------------
>
> Message: 1
> Date: Mon, 29 Jul 2013 18:07:00 +0200
> From: Bishesh Khanal
> Subject: Re: [petsc-users] discontinuous viscosity stokes equation 3D
>         staggered grid

> > However, the problem is that I do not get the expected result with
> > gamg, and the resulting solution blows up.

Never mind, I found out that I had forgotten to normalize the mNullBasis
to form the basis of the null space.
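In code, the missing step amounts to something like this (a sketch only,
using the names from the earlier message; MatNullSpaceCreate expects the
supplied basis vectors to be normalized):

    PetscReal nrm;
    ierr = VecNormalize(mNullBasis,&nrm);CHKERRQ(ierr);
    ierr = MatNullSpaceCreate(PETSC_COMM_WORLD,PETSC_FALSE,1,&mNullBasis,&mNullSpace);CHKERRQ(ierr);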
So now I think the PCFieldSplit with the algebraic multigrid on the
velocity block works fine for the constant viscosity case. Now I'll look
at the variable viscosity case. I'll have a look at Jed's suggestions
from the beginning of the thread and come back with a few more
questions, thanks.
> ------------------------------
>
> Message: 2
> Date: Mon, 29 Jul 2013 18:06:53 +0200
> From: Anton Popov
> Subject: Re: [petsc-users] discontinuous viscosity stokes equation 3D
>         staggered grid
> Bishesh,
>
> yes, pressure is the only field that is defined up to a constant in the
> incompressible case, if you properly set the velocity Dirichlet BC.
> What I was saying is that you need to know the full null space of your
> linear system matrix if you want to set up an efficient algebraic
> multigrid preconditioner. Without efficient multigrid your code will be
> severely limited for large-scale 3D problems (especially with variable
> viscosity, if this is your goal).

I do need to solve the system for a variable viscosity case. Yes, the
null space does not seem to be trivial when we have the staggered grid
case.

> Which problem with variable viscosity are you solving?

I am working on modelling a biological phenomenon, and with certain
assumptions I ended up deriving a set of equations which happened to be
this Stokes equation except for a non-zero divergence. (This led me to
this whole "new" world of {fluid dynamics, mechanics, solving pde's
numerically, large-scale linear systems}!! :) )
Actually I do not assume the material to be incompressible, hence the
non-zero divergence field. But it must satisfy the "compressibility
constraint" div(v) = f2 strictly, everywhere in the domain. It basically
results from the constrained minimization of some sort of linear elastic
energy, and for my problem it is important that it satisfies the
"compressibility constraint" everywhere. And this deforming object has
different parts with very different viscosity (or actually Lame's
parameters, if you look at it from an elastic material point of view).

> In my opinion, the easiest way to remove the pressure null space in
> your case is to add minor compressibility. If you still want to project
> out constant pressure, then Dave is right:
> -fieldsplit_1_ksp_constant_null_space will most likely not work,
> because of the dummy DOFs. You probably should retrieve the KSP for the
> pressure Schur complement using PCFieldSplitGetSubKSP after setup of
> the fieldsplit preconditioner, and try to set your custom null space
> directly for it. I'm not sure, however, whether this will work out.

It seems to work for me (only for the constant viscosity) when I create
a global vector using the DMDA I have and set to a non-zero constant the
components corresponding to only those non-dummy pressure dofs. Then I
normalize it and use it as the null space basis for the MatNullSpace
context. This I use in KSPSetNullSpace() on the outermost KSP solver,
and then I use MatNullSpaceRemove() in the rhs-computing function.
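A rough sketch of that construction (illustrative only; it assumes a 3D
DMDA with 4 dof per node ordered {vx,vy,vz,p} and dummy pressure dofs on
the i=mx-1, j=my-1, k=mz-1 faces, one common staggered layout; the
VecNormalize/MatNullSpaceCreate step shown earlier then follows):

    typedef struct { PetscScalar vx, vy, vz, p; } Field;
    Field    ***a;
    Vec         mNullBasis;
    PetscInt    i, j, k, xs, ys, zs, xm, ym, zm, mx, my, mz;
    ierr = DMCreateGlobalVector(mDa,&mNullBasis);CHKERRQ(ierr);
    ierr = VecSet(mNullBasis,0.0);CHKERRQ(ierr);
    ierr = DMDAGetInfo(mDa,0,&mx,&my,&mz,0,0,0,0,0,0,0,0,0);CHKERRQ(ierr);
    ierr = DMDAGetCorners(mDa,&xs,&ys,&zs,&xm,&ym,&zm);CHKERRQ(ierr);
    ierr = DMDAVecGetArray(mDa,mNullBasis,&a);CHKERRQ(ierr);
    for (k = zs; k < zs+zm; k++)
      for (j = ys; j < ys+ym; j++)
        for (i = xs; i < xs+xm; i++)
          if (i < mx-1 && j < my-1 && k < mz-1) a[k][j][i].p = 1.0; /* skip dummy p dofs */
    ierr = DMDAVecRestoreArray(mDa,mNullBasis,&a);CHKERRQ(ierr);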
Then for the pressure Schur complement, I could simply use the run-time
option -fieldsplit_1_ksp_constant_null_space.

> We have a version of a staggered grid solver without dummy pressure
> DOFs. For us it works both with and without ..._ksp_constant_null_space
> for pressure.

Does your version work with the strict incompressibility constraint? Do
you use your own specialized multigrid preconditioner or the
PETSc-provided one for this case? And do you use PetscSection, as
suggested in the thread before (I'm not familiar with PetscSection yet),
for creating a staggered grid without dummy pressure dofs?

Thanks,
Bishesh
From tkubis at purdue.edu  Mon Jul 29 12:38:40 2013
From: tkubis at purdue.edu (Kubis, Tillmann C)
Date: Mon, 29 Jul 2013 17:38:40 +0000
Subject: [petsc-users] question about special matrix-matrix-matrix product
Message-ID: <637DBFF26F82E542A0F37B0EA99F7F3D17811CC9@WPVEXCMBX07.purdue.lcl>

Hello,

I need the diagonal of GR*Gamma*GR^dagger, where Gamma is a sparse and GR
is a dense matrix. Is there a good/already-coded way to do that? I am
prepared to write most of the product myself, using the method MatGetRow
on GR and Gamma and multiplying the elements directly. So far these are
serial complex matrices, but all matrices will be distributed in the end.

Thanks,
Tillmann Kubis

____________________________________
Tillmann Kubis, PhD
Research Assistant Professor
Network for Computational Nanotechnology
207 S Martin Jischke Drive
Purdue University, DLR, room 441-1
West Lafayette, Indiana 47907-1971
phone: +1-765-496-7312
fax: +1-765-496-6026
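For the serial case, a rough sketch of the accumulation proposed above:
diag(GR*Gamma*GR^dagger)_i = sum_{j,k} GR(i,j)*Gamma(j,k)*conj(GR(i,k)).
It visits each nonzero of the sparse Gamma once and reads the raw
column-major array of the dense GR; all names are illustrative, and the
matrices are assumed square n x n (SeqDense, leading dimension n):

    PetscInt           n, j, k, nc;
    const PetscInt    *cols;
    const PetscScalar *gamvals;
    PetscScalar       *gr, *d;
    ierr = MatGetSize(GR,&n,NULL);CHKERRQ(ierr);
    ierr = PetscMalloc(n*sizeof(PetscScalar),&d);CHKERRQ(ierr);
    ierr = PetscMemzero(d,n*sizeof(PetscScalar));CHKERRQ(ierr);
    ierr = MatDenseGetArray(GR,&gr);CHKERRQ(ierr);
    for (j = 0; j < n; j++) {                      /* rows of the sparse Gamma */
      ierr = MatGetRow(Gamma,j,&nc,&cols,&gamvals);CHKERRQ(ierr);
      for (k = 0; k < nc; k++) {                   /* nonzero Gamma(j,cols[k]) */
        PetscInt i;
        for (i = 0; i < n; i++)
          d[i] += gr[i+j*n]*gamvals[k]*PetscConj(gr[i+cols[k]*n]);
      }
      ierr = MatRestoreRow(Gamma,j,&nc,&cols,&gamvals);CHKERRQ(ierr);
    }
    ierr = MatDenseRestoreArray(GR,&gr);CHKERRQ(ierr);
    /* d[0..n-1] now holds the diagonal; the cost is O(nnz(Gamma)*n) */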
From jitendra.ornl at gmail.com  Mon Jul 29 13:59:06 2013
From: jitendra.ornl at gmail.com (Jitendra Kumar)
Date: Mon, 29 Jul 2013 14:59:06 -0400
Subject: [petsc-users] Fwd: PETSc installation on Intrepid
In-Reply-To: 
References: 
Message-ID: 

Satish,

By building my own zlib library, specified through the LIBS flag to PETSc
configure, I was able to compile petsc-dev with HDF5 support on Intrepid.
However, I ran into errors while building my application, and it looks
like XLF is using case-sensitive settings and is failing due to mixed
uppercase/lowercase variable names in the code.

I have followed
/soft/apps/libraries/petsc/3.3-p6/xl-opt/conf/reconfigure-arch-bgp-ibm-opt.py
and am not sure if there is something included or missing in the
configuration causing this. I believe the default is for XLF not to be
case sensitive, but I am not sure what I have that is making it
otherwise.

I would appreciate it if you could point me to anything I may be missing
in my configuration file. Attached are the petsc configuration file and a
log of my application compilation errors.

Thanks,
Jitu

On Tue, Jul 16, 2013 at 12:59 PM, Satish Balay wrote:

> --download-package might not work on all machines. --download-hdf5=1
> does not work on bg/p.
>
> However there is hdf5 installed on it. You can try using the
> --with-hdf5-include/--with-hdf5-lib options.
>
> There could still be an issue with "Compression library [libz.a or
> equivalent] not found" but I think the workaround is already in
> petsc-dev.
>
> Satish

From luqiyue at gmail.com  Mon Jul 29 14:22:06 2013
From: luqiyue at gmail.com (Lu Qiyue)
Date: Mon, 29 Jul 2013 14:22:06 -0500
Subject: [petsc-users] PETSC_INT 32bits / 64 bits problem
Message-ID: 

Dear All:

I am solving a huge system AX=b. Matrix A is a sparse matrix, but its
number of non-zeros has exceeded the upper limit of PetscInt (4 bytes).
In the online documentation,
http://www.mcs.anl.gov/petsc/petsc-current/docs/manualpages/Sys/PetscInt.html
it says: "Its size can be configured with the option --with-64-bit-indices
- to be either 32bit or 64bit [default 32 bits int]."

Also, from the document
http://www.mcs.anl.gov/petsc/petsc-current/include/petscsys.h.html#PetscInt
it looks like the BLAS and LAPACK libraries need to be configured with
64-bit options as well.

If I just change the PetscInt declarations in the driver to long int,
warnings show up when calling functions which use PetscInt data as their
parameters.

Should I recompile (re-install) PETSc to add the 64-bit configuration?
And the same for the BLAS and LAPACK libs? Or is there any other
solution?

Thanks

Qiyue Lu
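For illustration only, a hedged sketch of the rebuild the quoted manual
page points to: rerun configure with the named option (the rest of the
existing options, elided here, stay the same) and recompile the library,

    ./configure --with-64-bit-indices [...your other configure options...]

Keeping the index variables declared as PetscInt in the driver, rather
than hard-coding long int, then lets their width follow the
configuration.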
From balay at mcs.anl.gov  Mon Jul 29 14:25:44 2013
From: balay at mcs.anl.gov (Satish Balay)
Date: Mon, 29 Jul 2013 14:25:44 -0500 (CDT)
Subject: [petsc-users] Fwd: PETSc installation on Intrepid
In-Reply-To: 
References: 
Message-ID: 

>>>>>>
mpixlf2003_r -qnosave -c -O3 -qarch=450d -qtune=450 -qmaxmem=-1 -I/gpfs/home/jkumar/lib/petsc-hg/include -I/gpfs/home/jkumar/lib/petsc-hg/arch-bgp-ibm-opt/include -I/soft/apps/current/hdf5-1.8.9/include -I/bgsys/drivers/V1R4M2_200_2010-100508P/ppc/comm/default/include -I/bgsys/drivers/V1R4M2_200_2010-100508P/ppc/comm/sys/include -I. -DMUALEM_SPLINE -DICE -o utility.o utility.F90
bgxlf2003_r: 1501-214 (W) command option M reserved for future use - ignored
"utility.F90", line 61.14: 1516-036 (S) Entity M1 has undefined type.
"utility.F90", line 62.14: 1516-036 (S) Entity IA1 has undefined type.
"utility.F90", line 63.14: 1516-036 (S) Entity IC1 has undefined type.
"utility.F90", line 65.14: 1516-036 (S) Entity M2 has undefined type.
"utility.F90", line 66.14: 1516-036 (S) Entity IA2 has undefined type.
"utility.F90", line 67.14: 1516-036 (S) Entity IC2 has undefined type.
"utility.F90", line 69.14: 1516-036 (S) Entity M3 has undefined type.
"utility.F90", line 70.14: 1516-036 (S) Entity IA3 has undefined type.
"utility.F90", line 71.14: 1516-036 (S) Entity IC3 has undefined type.
"utility.F90", 1520-031 (W) Option DLINES is ignored within Fortran 90 free form and IBM free form.
** Utility_module   === End of Compilation 1 ===
1501-511  Compilation failed for file utility.F90.
<<<<<<

On Mon, 29 Jul 2013, Jitendra Kumar wrote:

> However, I ran into errors while building my application, and it looks
> like XLF is using case-sensitive settings and is failing due to mixed
> uppercase/lowercase variable names in the code.

Are you sure this is the problem? I can't reproduce it with a simple code..

Satish

--------
[balay at vestalac1 junk]$ cat foo.F
      program main
      implicit none
      integer i
      I=5
      write(*,*) I
      end
[balay at vestalac1 junk]$ mpixlf2003_r foo.F
** main   === End of Compilation 1 ===
1501-510  Compilation successful for file foo.F.
[balay at vestalac1 junk]$ ./a.out
 5
From rtm at eecs.utk.edu  Mon Jul 29 14:45:35 2013
From: rtm at eecs.utk.edu (Richard Tran Mills)
Date: Mon, 29 Jul 2013 15:45:35 -0400
Subject: [petsc-users] Fwd: PETSc installation on Intrepid
In-Reply-To: 
References: 
Message-ID: <51F6C65F.2090104@eecs.utk.edu>

Hi Jitu,

I suspect that this problem doesn't have anything to do with a PETSc
configure problem on the machine. Strangely, I am not seeing the same
error when I attempt to build the same file:

mpixlf2003_r -qnosave -qxlf2003=polymorphic -c -g -qarch=450d -qtune=450 -qmaxmem=-1 -I/home/rmills/proj/petsc/include -I/home/rmills/proj/petsc/bgp-ibm_g/include -I/soft/apps/current/hdf5-1.8.9/include -I/bgsys/drivers/V1R4M2_200_2010-100508P/ppc/comm/default/include -I/bgsys/drivers/V1R4M2_200_2010-100508P/ppc/comm/sys/include -I. -o utility.o utility.F90
** utility_module   === End of Compilation 1 ===
1501-510  Compilation successful for file utility.F90.

I do note that you need to add the compiler option '-qxlf2003=polymorphic'
so that Fortran 2003 classes will work. I did this in my PETSc configure
file by specifying '--with-fc=mpixlf2003_r -qnosave -qxlf2003=polymorphic',
though I believe that the proper way to do this is to specify the Fortran
flags separately using '--FFLAGS='. You can have a look at my PETSc
configure file on Intrepid at
Also working on GCC, though the "official" version of the compilers on the machine is too old. (An ALCF staff member did point me to a more recent build to try.) We should probably take this discussion off the petsc-users list unless we determine that it really is a problem with the PETSc configure. Best regards, Richard On 7/29/13 3:25 PM, Satish Balay wrote: > mpixlf2003_r -qnosave -c -O3 -qarch=450d -qtune=450 -qmaxmem=-1 -I/gpfs/home/jkumar/lib/petsc-hg/include -I/gpfs/home/jkumar/lib/petsc-hg/arch-bgp-ibm-opt/i\ > nclude -I/soft/apps/current/hdf5-1.8.9/include -I/bgsys/drivers/V1R4M2_200_2010-100508P/ppc/comm/default/include -I/bgsys/drivers/V1R4M2_200_2010-100508P/ppc/c\ > omm/sys/include -I. -DMUALEM_SPLINE -DICE -o utility.o utility.F90 > bgxlf2003_r: 1501-214 (W) command option M reserved for future use - ignored > "utility.F90", line 61.14: 1516-036 (S) Entity M1 has undefined type. > "utility.F90", line 62.14: 1516-036 (S) Entity IA1 has undefined type. > "utility.F90", line 63.14: 1516-036 (S) Entity IC1 has undefined type. > "utility.F90", line 65.14: 1516-036 (S) Entity M2 has undefined type. > "utility.F90", line 66.14: 1516-036 (S) Entity IA2 has undefined type. > "utility.F90", line 67.14: 1516-036 (S) Entity IC2 has undefined type. > "utility.F90", line 69.14: 1516-036 (S) Entity M3 has undefined type. > "utility.F90", line 70.14: 1516-036 (S) Entity IA3 has undefined type. > "utility.F90", line 71.14: 1516-036 (S) Entity IC3 has undefined type. > "utility.F90", 1520-031 (W) Option DLINES is ignored within Fortran 90 free form and IBM free form. > ** Utility_module === End of Compilation 1 === > 1501-511 Compilation failed for file utility.F90. > <<<<<< > > On Mon, 29 Jul 2013, Jitendra Kumar wrote: > >> Satish, >> By building my own zlib library specifying it through LIBS flag to PETSc >> configure, I was able to compile PETSC-Dev with HDF5 support on Intrepid. >> However, I ran into errors while building my application and it looks like >> the XLF Fortran is using case sensitive settings and is failing due to >> mixed uppercase/lowercase variable names in the code. > Are you sure this is the problem? I can't reporduce it with a simple code.. > > > Satish > > -------- > [balay at vestalac1 junk]$ cat foo.F > program main > implicit none > integer i > I=5 > write(*,*) I > end > [balay at vestalac1 junk]$ mpixlf2003_r foo.F > ** main === End of Compilation 1 === > 1501-510 Compilation successful for file foo.F. > [balay at vestalac1 junk]$ ./a.out > 5 > > >> I have followed the >> /soft/apps/libraries/petsc/3.3-p6/xl-opt/conf/reconfigure-arch-bgp-ibm-opt.py >> and am not sure if there's something included or missing in the >> configuration causing this. I believe default is for XLF to not to be case >> sensitive, but I am not sure what do I have that's making it otherwise. >> >> I would appreciate if you can point me to anything I may be missing in my >> configuration file. Attached are petsc configuration file and log of my >> application compilation errors. >> >> Thanks, >> Jitu >> >> >> >> On Tue, Jul 16, 2013 at 12:59 PM, Satish Balay wrote: >> >>> --download-package might not work on all machines. --download-hdf5=1 does >>> not work on bg/p >>> >>> However there is hdf5 installed on it. You can try using >>> --with-hdf5-include/--with-hdf5-lib options. >>> >>> There could still be an issue with "Compression library [libz.a or >>> equivalent] not found" but I think the workarround is already in >>> petsc-dev. 
>>> >>> Satish >>> >>> >>> >>> On Tue, 16 Jul 2013, Jitendra Kumar wrote: >>> >>>> Thanks Satish. I tried using the configuration you pointed me to with the >>>> addition of --download-hdf5=1 and got error "Compression library [libz.a >>> or >>>> equivalent] not found >>>> " >>>> >>>> Do I need to load some package to get this? >>>> >>>> Jitu >>>> >>>> >>>> On Tue, Jul 16, 2013 at 11:59 AM, Satish Balay >>> wrote: >>>>> As the message indicates you need '--with-batch' option on this machine >>>>> >>>>> Check one of the default builds on intrepid for configure options to >>> use.. >>>>> [perhaps >>>>> >>> /soft/apps/libraries/petsc/3.3-p6/xl-opt/conf/reconfigure-arch-bgp-ibm-opt.py] >>>>> Satish >>>>> >>>>> On Tue, 16 Jul 2013, Jitendra Kumar wrote: >>>>> >>>>>> I ran into following errors while trying to build PETSc-dev on >>> Intrepid >>>>>> @ALCF. (configure.log attached) >>>>>> >>>>>> >>> ******************************************************************************* >>>>>> UNABLE to CONFIGURE with GIVEN OPTIONS (see >>> configure.log for >>>>>> details): >>>>>> >>> ------------------------------------------------------------------------------- >>>>>> Cannot run executable to determine size of char. If this machine >>> uses a >>>>>> batch system >>>>>> to submit jobs you will need to configure using ./configure with the >>>>>> additional option --with-batch. >>>>>> Otherwise there is problem with the compilers. Can you compile and >>> run >>>>>> code with your C/C++ (and maybe Fortran) compilers? >>>>>> >>> ******************************************************************************* >>>>>> File "/gpfs/home/jkumar/lib/petsc/config/configure.py", line 293, >>> in >>>>>> petsc_configure >>>>>> framework.configure(out = sys.stdout) >>>>>> File >>>>>> "/gpfs/home/jkumar/lib/petsc/config/BuildSystem/config/framework.py", >>>>> line >>>>>> 933, in configure >>>>>> child.configure() >>>>>> File >>> "/gpfs/home/jkumar/lib/petsc/config/BuildSystem/config/types.py", >>>>>> line 386, in configure >>>>>> map(lambda type: self.executeTest(self.checkSizeof, type), >>>>>> ['char','void *', 'short', 'int', 'long', 'long long', 'float', >>> 'double', >>>>>> 'size_t']) >>>>>> File >>> "/gpfs/home/jkumar/lib/petsc/config/BuildSystem/config/types.py", >>>>>> line 386, in >>>>>> map(lambda type: self.executeTest(self.checkSizeof, type), >>>>>> ['char','void *', 'short', 'int', 'long', 'long long', 'float', >>> 'double', >>>>>> 'size_t']) >>>>>> File >>> "/gpfs/home/jkumar/lib/petsc/config/BuildSystem/config/base.py", >>>>>> line 115, in executeTest >>>>>> ret = apply(test, args,kargs) >>>>>> File >>> "/gpfs/home/jkumar/lib/petsc/config/BuildSystem/config/types.py", >>>>>> line 296, in checkSizeof >>>>>> raise RuntimeError(msg) >>>>>> >>>>>> This is what my configuration looks like (adapted from >>>>>> config/examples/arch-bgp-ibm-opt.py) >>>>>> configure_options = [ >>>>>> '--with-cc=mpixlc', >>>>>> '--with-fc=mpixlf90', >>>>>> '--with-cxx=mpixlcxx', >>>>>> 'COPTFLAGS=-O3', >>>>>> 'FOPTFLAGS=-O3', >>>>>> '--with-debugging=0', >>>>>> '--with-cmake=/soft/apps/fen/cmake-2.8.3/bin/cmake', >>>>>> # '--with-hdf5=/soft/apps/hdf5-1.8.0', >>>>>> '--download-parmetis=1', >>>>>> '--download-metis=1', >>>>>> '--download-plapack=1', >>>>>> '--download-hdf5=1' >>>>>> ] >>>>>> >>>>>> I would appreciate any help building the llbrary there. >>>>>> >>>>>> Thanks, >>>>>> Jitu >>>>>> >>>>> >>> -- Richard Tran Mills, Ph.D. 
Computational Earth Scientist | Joint Assistant Professor Hydrogeochemical Dynamics Team | EECS and Earth & Planetary Sciences Oak Ridge National Laboratory | University of Tennessee, Knoxville E-mail: rmills at ornl.gov V: 865-241-3198 http://climate.ornl.gov/~rmills From balay at mcs.anl.gov Mon Jul 29 14:49:41 2013 From: balay at mcs.anl.gov (Satish Balay) Date: Mon, 29 Jul 2013 14:49:41 -0500 (CDT) Subject: [petsc-users] Fwd: PETSc installation on Intrepid In-Reply-To: <51F6C65F.2090104@eecs.utk.edu> References: <51F6C65F.2090104@eecs.utk.edu> Message-ID: On Mon, 29 Jul 2013, Richard Tran Mills wrote: > Hi Jitu, > > I suspect that this problem doesn't have anything to do with a PETSc configure > problem on the machine. Strangely, I am not seeing the same error when I > attempt to build the same file: > > mpixlf2003_r -qnosave -qxlf2003=polymorphic -c -g -qarch=450d -qtune=450 > -qmaxmem=-1 -I/home/rmills/proj/petsc/include > -I/home/rmills/proj/petsc/bgp-ibm_g/include > -I/soft/apps/current/hdf5-1.8.9/include > -I/bgsys/drivers/V1R4M2_200_2010-100508P/ppc/comm/default/include > -I/bgsys/drivers/V1R4M2_200_2010-100508P/ppc/comm/sys/include -I. -o utility.o > utility.F90 > ** utility_module === End of Compilation 1 === > 1501-510 Compilation successful for file utility.F90. > > I do note that you need to add the compiler option '-qxlf2003=polymorphic' so > that Fortran 2003 classes will work. I did this in my PETSc configure file by > specifying ''--with-fc=mpixlf2003_r -qnosave -qxlf2003=polymorphic'', though I > believe that the proper way to do this is specify the Fortran flags separately > using '--FFLAGS='. You can have a look at my PETSc configure file on > Intrepid at These extra compiler flags can be added in pflotran makefile or invoke make with these options.. make FFLAGS=-qxlf2003=polymorphic flow etc.. > > /home/rmills/proj/petsc/config/bgp-gnu_g.py > > I'm still seeing lots of compile errors with the IBM compilers, which I am > working through. Also working on GCC, though the "official" version of the > compilers on the machine is too old. (An ALCF staff member did point me to a > more recent build to try.) > > We should probably take this discussion off the petsc-users list unless we > determine that it really is a problem with the PETSc configure. > > Best regards, > Richard > > > On 7/29/13 3:25 PM, Satish Balay wrote: > > mpixlf2003_r -qnosave -c -O3 -qarch=450d -qtune=450 -qmaxmem=-1 > > -I/gpfs/home/jkumar/lib/petsc-hg/include > > -I/gpfs/home/jkumar/lib/petsc-hg/arch-bgp-ibm-opt/i\ > > nclude -I/soft/apps/current/hdf5-1.8.9/include > > -I/bgsys/drivers/V1R4M2_200_2010-100508P/ppc/comm/default/include > > -I/bgsys/drivers/V1R4M2_200_2010-100508P/ppc/c\ > > omm/sys/include -I. -DMUALEM_SPLINE -DICE -o utility.o utility.F90 > > bgxlf2003_r: 1501-214 (W) command option M reserved for future use - ignored > > "utility.F90", line 61.14: 1516-036 (S) Entity M1 has undefined type. > > "utility.F90", line 62.14: 1516-036 (S) Entity IA1 has undefined type. > > "utility.F90", line 63.14: 1516-036 (S) Entity IC1 has undefined type. > > "utility.F90", line 65.14: 1516-036 (S) Entity M2 has undefined type. > > "utility.F90", line 66.14: 1516-036 (S) Entity IA2 has undefined type. > > "utility.F90", line 67.14: 1516-036 (S) Entity IC2 has undefined type. > > "utility.F90", line 69.14: 1516-036 (S) Entity M3 has undefined type. > > "utility.F90", line 70.14: 1516-036 (S) Entity IA3 has undefined type. > > "utility.F90", line 71.14: 1516-036 (S) Entity IC3 has undefined type. 
> > "utility.F90", 1520-031 (W) Option DLINES is ignored within Fortran 90 free > > form and IBM free form. > > ** Utility_module === End of Compilation 1 === > > 1501-511 Compilation failed for file utility.F90. > > <<<<<< > > > > On Mon, 29 Jul 2013, Jitendra Kumar wrote: > > > > > Satish, > > > By building my own zlib library specifying it through LIBS flag to PETSc > > > configure, I was able to compile PETSC-Dev with HDF5 support on Intrepid. > > > However, I ran into errors while building my application and it looks like > > > the XLF Fortran is using case sensitive settings and is failing due to > > > mixed uppercase/lowercase variable names in the code. > > Are you sure this is the problem? I can't reporduce it with a simple code.. > > > > > > Satish > > > > -------- > > [balay at vestalac1 junk]$ cat foo.F > > program main > > implicit none > > integer i > > I=5 > > write(*,*) I > > end > > [balay at vestalac1 junk]$ mpixlf2003_r foo.F > > ** main === End of Compilation 1 === > > 1501-510 Compilation successful for file foo.F. > > [balay at vestalac1 junk]$ ./a.out > > 5 > > > > > > > I have followed the > > > /soft/apps/libraries/petsc/3.3-p6/xl-opt/conf/reconfigure-arch-bgp-ibm-opt.py > > > and am not sure if there's something included or missing in the > > > configuration causing this. I believe default is for XLF to not to be case > > > sensitive, but I am not sure what do I have that's making it otherwise. > > > > > > I would appreciate if you can point me to anything I may be missing in my > > > configuration file. Attached are petsc configuration file and log of my > > > application compilation errors. > > > > > > Thanks, > > > Jitu > > > > > > > > > > > > On Tue, Jul 16, 2013 at 12:59 PM, Satish Balay wrote: > > > > > > > --download-package might not work on all machines. --download-hdf5=1 > > > > does > > > > not work on bg/p > > > > > > > > However there is hdf5 installed on it. You can try using > > > > --with-hdf5-include/--with-hdf5-lib options. > > > > > > > > There could still be an issue with "Compression library [libz.a or > > > > equivalent] not found" but I think the workarround is already in > > > > petsc-dev. > > > > > > > > Satish > > > > > > > > > > > > > > > > On Tue, 16 Jul 2013, Jitendra Kumar wrote: > > > > > > > > > Thanks Satish. I tried using the configuration you pointed me to with > > > > > the > > > > > addition of --download-hdf5=1 and got error "Compression library > > > > > [libz.a > > > > or > > > > > equivalent] not found > > > > > " > > > > > > > > > > Do I need to load some package to get this? > > > > > > > > > > Jitu > > > > > > > > > > > > > > > On Tue, Jul 16, 2013 at 11:59 AM, Satish Balay > > > > wrote: > > > > > > As the message indicates you need '--with-batch' option on this > > > > > > machine > > > > > > > > > > > > Check one of the default builds on intrepid for configure options to > > > > use.. > > > > > > [perhaps > > > > > > > > > > /soft/apps/libraries/petsc/3.3-p6/xl-opt/conf/reconfigure-arch-bgp-ibm-opt.py] > > > > > > Satish > > > > > > > > > > > > On Tue, 16 Jul 2013, Jitendra Kumar wrote: > > > > > > > > > > > > > I ran into following errors while trying to build PETSc-dev on > > > > Intrepid > > > > > > > @ALCF. 
(configure.log attached) > > > > > > > > > > > > > > > > > > ******************************************************************************* > > > > > > > UNABLE to CONFIGURE with GIVEN OPTIONS (see > > > > configure.log for > > > > > > > details): > > > > > > > > > > > > > > > ------------------------------------------------------------------------------- > > > > > > > Cannot run executable to determine size of char. If this machine > > > > uses a > > > > > > > batch system > > > > > > > to submit jobs you will need to configure using ./configure with > > > > > > > the > > > > > > > additional option --with-batch. > > > > > > > Otherwise there is problem with the compilers. Can you compile > > > > > > > and > > > > run > > > > > > > code with your C/C++ (and maybe Fortran) compilers? > > > > > > > > > > > ******************************************************************************* > > > > > > > File "/gpfs/home/jkumar/lib/petsc/config/configure.py", line > > > > > > > 293, > > > > in > > > > > > > petsc_configure > > > > > > > framework.configure(out = sys.stdout) > > > > > > > File > > > > > > > "/gpfs/home/jkumar/lib/petsc/config/BuildSystem/config/framework.py", > > > > > > line > > > > > > > 933, in configure > > > > > > > child.configure() > > > > > > > File > > > > "/gpfs/home/jkumar/lib/petsc/config/BuildSystem/config/types.py", > > > > > > > line 386, in configure > > > > > > > map(lambda type: self.executeTest(self.checkSizeof, type), > > > > > > > ['char','void *', 'short', 'int', 'long', 'long long', 'float', > > > > 'double', > > > > > > > 'size_t']) > > > > > > > File > > > > "/gpfs/home/jkumar/lib/petsc/config/BuildSystem/config/types.py", > > > > > > > line 386, in > > > > > > > map(lambda type: self.executeTest(self.checkSizeof, type), > > > > > > > ['char','void *', 'short', 'int', 'long', 'long long', 'float', > > > > 'double', > > > > > > > 'size_t']) > > > > > > > File > > > > "/gpfs/home/jkumar/lib/petsc/config/BuildSystem/config/base.py", > > > > > > > line 115, in executeTest > > > > > > > ret = apply(test, args,kargs) > > > > > > > File > > > > "/gpfs/home/jkumar/lib/petsc/config/BuildSystem/config/types.py", > > > > > > > line 296, in checkSizeof > > > > > > > raise RuntimeError(msg) > > > > > > > > > > > > > > This is what my configuration looks like (adapted from > > > > > > > config/examples/arch-bgp-ibm-opt.py) > > > > > > > configure_options = [ > > > > > > > '--with-cc=mpixlc', > > > > > > > '--with-fc=mpixlf90', > > > > > > > '--with-cxx=mpixlcxx', > > > > > > > 'COPTFLAGS=-O3', > > > > > > > 'FOPTFLAGS=-O3', > > > > > > > '--with-debugging=0', > > > > > > > '--with-cmake=/soft/apps/fen/cmake-2.8.3/bin/cmake', > > > > > > > # '--with-hdf5=/soft/apps/hdf5-1.8.0', > > > > > > > '--download-parmetis=1', > > > > > > > '--download-metis=1', > > > > > > > '--download-plapack=1', > > > > > > > '--download-hdf5=1' > > > > > > > ] > > > > > > > > > > > > > > I would appreciate any help building the llbrary there. > > > > > > > > > > > > > > Thanks, > > > > > > > Jitu > > > > > > > > > > > > > > > > > > > > From balay at mcs.anl.gov Mon Jul 29 14:51:49 2013 From: balay at mcs.anl.gov (Satish Balay) Date: Mon, 29 Jul 2013 14:51:49 -0500 (CDT) Subject: [petsc-users] PETSC_INT 32bits / 64 bits problem In-Reply-To: References: Message-ID: On Mon, 29 Jul 2013, Lu Qiyue wrote: > Dear All: > > I am solving a huge system of AX=b, Matrix A is a sparse matrix but its > number of non-zeros has been out of range of the upper limit of PETSC_INT > (4 bytes). 
> > In the online documents > http://www.mcs.anl.gov/petsc/petsc-current/docs/manualpages/Sys/PetscInt.html > It shows: > Its size can be configured with the option --with-64-bit-indices - to be > either 32bit or 64bit [default 32 bits int]. > Also, in the document: > http://www.mcs.anl.gov/petsc/petsc-current/include/petscsys.h.html#PetscInt > Looks need to configure BLAS and LAPACK lib with 64 bits options as well. > > If I just change the PETSC_INT declaration in the driver to LONG INT, > warnings prompt out when calling functions which used PETSC_INT data as its > parameters. > > Should I recompile (re-install) PETSC to add the 64 bits configurations? Yes - you need to reconfigure with --with-64-bit-indices. Suggest using different PETSC_ARCH for this build - so that your current build is still useable. > And for BLAS and LAPACK libs? Or, is there any other solutions? We use 32bit blas for --with-64-bit-indices build aswell - so nothing special is needed for it. Note: this relies on the subproblem [per mpi proc] to be within the 32bit int limit.. Satish > > Thanks > > Qiyue Lu > From bsmith at mcs.anl.gov Mon Jul 29 14:52:18 2013 From: bsmith at mcs.anl.gov (Barry Smith) Date: Mon, 29 Jul 2013 14:52:18 -0500 Subject: [petsc-users] PETSC_INT 32bits / 64 bits problem In-Reply-To: References: Message-ID: Configure PETSc with --with-64-bit-indices just leave BLAS LAPACK alone it can use the standard blas and lapack. Barry On Jul 29, 2013, at 2:22 PM, Lu Qiyue wrote: > Dear All: > > I am solving a huge system of AX=b, Matrix A is a sparse matrix but its number of non-zeros has been out of range of the upper limit of PETSC_INT (4 bytes). > > In the online documents > http://www.mcs.anl.gov/petsc/petsc-current/docs/manualpages/Sys/PetscInt.html > It shows: > Its size can be configured with the option --with-64-bit-indices - to be either 32bit or 64bit [default 32 bits int]. > Also, in the document: > http://www.mcs.anl.gov/petsc/petsc-current/include/petscsys.h.html#PetscInt > Looks need to configure BLAS and LAPACK lib with 64 bits options as well. > > If I just change the PETSC_INT declaration in the driver to LONG INT, warnings prompt out when calling functions which used PETSC_INT data as its parameters. > > Should I recompile (re-install) PETSC to add the 64 bits configurations? And for BLAS and LAPACK libs? Or, is there any other solutions? > > Thanks > > Qiyue Lu From jitendra.ornl at gmail.com Mon Jul 29 16:03:35 2013 From: jitendra.ornl at gmail.com (Jitendra Kumar) Date: Mon, 29 Jul 2013 17:03:35 -0400 Subject: [petsc-users] Fwd: PETSc installation on Intrepid In-Reply-To: References: Message-ID: Upper/lower case errors appears to be for variables defined as "parameter". I tried changing the cases to be consistent in utility.F90 and that makes the compilation errors go away. But again I cannot reproduce the same error in a simple test program. The exact version of PFLOTRAN builds fine with GCC and PGI compilers on other machines. I am continuing to look into this. Jitu On Mon, Jul 29, 2013 at 3:25 PM, Satish Balay wrote: > >>>>>> > mpixlf2003_r -qnosave -c -O3 -qarch=450d -qtune=450 -qmaxmem=-1 > -I/gpfs/home/jkumar/lib/petsc-hg/include > -I/gpfs/home/jkumar/lib/petsc-hg/arch-bgp-ibm-opt/i\ > nclude -I/soft/apps/current/hdf5-1.8.9/include > -I/bgsys/drivers/V1R4M2_200_2010-100508P/ppc/comm/default/include > -I/bgsys/drivers/V1R4M2_200_2010-100508P/ppc/c\ > omm/sys/include -I. 
-DMUALEM_SPLINE -DICE -o utility.o utility.F90 > bgxlf2003_r: 1501-214 (W) command option M reserved for future use - > ignored > "utility.F90", line 61.14: 1516-036 (S) Entity M1 has undefined type. > "utility.F90", line 62.14: 1516-036 (S) Entity IA1 has undefined type. > "utility.F90", line 63.14: 1516-036 (S) Entity IC1 has undefined type. > "utility.F90", line 65.14: 1516-036 (S) Entity M2 has undefined type. > "utility.F90", line 66.14: 1516-036 (S) Entity IA2 has undefined type. > "utility.F90", line 67.14: 1516-036 (S) Entity IC2 has undefined type. > "utility.F90", line 69.14: 1516-036 (S) Entity M3 has undefined type. > "utility.F90", line 70.14: 1516-036 (S) Entity IA3 has undefined type. > "utility.F90", line 71.14: 1516-036 (S) Entity IC3 has undefined type. > "utility.F90", 1520-031 (W) Option DLINES is ignored within Fortran 90 > free form and IBM free form. > ** Utility_module === End of Compilation 1 === > 1501-511 Compilation failed for file utility.F90. > <<<<<< > > On Mon, 29 Jul 2013, Jitendra Kumar wrote: > > > Satish, > > By building my own zlib library specifying it through LIBS flag to PETSc > > configure, I was able to compile PETSC-Dev with HDF5 support on Intrepid. > > However, I ran into errors while building my application and it looks > like > > the XLF Fortran is using case sensitive settings and is failing due to > > mixed uppercase/lowercase variable names in the code. > > Are you sure this is the problem? I can't reporduce it with a simple code.. > > > Satish > > -------- > [balay at vestalac1 junk]$ cat foo.F > program main > implicit none > integer i > I=5 > write(*,*) I > end > [balay at vestalac1 junk]$ mpixlf2003_r foo.F > ** main === End of Compilation 1 === > 1501-510 Compilation successful for file foo.F. > [balay at vestalac1 junk]$ ./a.out > 5 > > > > > > I have followed the > > > /soft/apps/libraries/petsc/3.3-p6/xl-opt/conf/reconfigure-arch-bgp-ibm-opt.py > > and am not sure if there's something included or missing in the > > configuration causing this. I believe default is for XLF to not to be > case > > sensitive, but I am not sure what do I have that's making it otherwise. > > > > I would appreciate if you can point me to anything I may be missing in my > > configuration file. Attached are petsc configuration file and log of my > > application compilation errors. > > > > Thanks, > > Jitu > > > > > > > > On Tue, Jul 16, 2013 at 12:59 PM, Satish Balay > wrote: > > > > > --download-package might not work on all machines. --download-hdf5=1 > does > > > not work on bg/p > > > > > > However there is hdf5 installed on it. You can try using > > > --with-hdf5-include/--with-hdf5-lib options. > > > > > > There could still be an issue with "Compression library [libz.a or > > > equivalent] not found" but I think the workarround is already in > > > petsc-dev. > > > > > > Satish > > > > > > > > > > > > On Tue, 16 Jul 2013, Jitendra Kumar wrote: > > > > > > > Thanks Satish. I tried using the configuration you pointed me to > with the > > > > addition of --download-hdf5=1 and got error "Compression library > [libz.a > > > or > > > > equivalent] not found > > > > " > > > > > > > > Do I need to load some package to get this? > > > > > > > > Jitu > > > > > > > > > > > > On Tue, Jul 16, 2013 at 11:59 AM, Satish Balay > > > wrote: > > > > > > > > > As the message indicates you need '--with-batch' option on this > machine > > > > > > > > > > Check one of the default builds on intrepid for configure options > to > > > use.. 
> > > > > > > > > > [perhaps > > > > > > > > > /soft/apps/libraries/petsc/3.3-p6/xl-opt/conf/reconfigure-arch-bgp-ibm-opt.py] > > > > > > > > > > Satish > > > > > > > > > > On Tue, 16 Jul 2013, Jitendra Kumar wrote: > > > > > > > > > > > I ran into following errors while trying to build PETSc-dev on > > > Intrepid > > > > > > @ALCF. (configure.log attached) > > > > > > > > > > > > > > > > > > > > > ******************************************************************************* > > > > > > UNABLE to CONFIGURE with GIVEN OPTIONS (see > > > configure.log for > > > > > > details): > > > > > > > > > > > > > > > ------------------------------------------------------------------------------- > > > > > > Cannot run executable to determine size of char. If this machine > > > uses a > > > > > > batch system > > > > > > to submit jobs you will need to configure using ./configure with > the > > > > > > additional option --with-batch. > > > > > > Otherwise there is problem with the compilers. Can you compile > and > > > run > > > > > > code with your C/C++ (and maybe Fortran) compilers? > > > > > > > > > > > > > > > ******************************************************************************* > > > > > > File "/gpfs/home/jkumar/lib/petsc/config/configure.py", line > 293, > > > in > > > > > > petsc_configure > > > > > > framework.configure(out = sys.stdout) > > > > > > File > > > > > > > "/gpfs/home/jkumar/lib/petsc/config/BuildSystem/config/framework.py", > > > > > line > > > > > > 933, in configure > > > > > > child.configure() > > > > > > File > > > "/gpfs/home/jkumar/lib/petsc/config/BuildSystem/config/types.py", > > > > > > line 386, in configure > > > > > > map(lambda type: self.executeTest(self.checkSizeof, type), > > > > > > ['char','void *', 'short', 'int', 'long', 'long long', 'float', > > > 'double', > > > > > > 'size_t']) > > > > > > File > > > "/gpfs/home/jkumar/lib/petsc/config/BuildSystem/config/types.py", > > > > > > line 386, in > > > > > > map(lambda type: self.executeTest(self.checkSizeof, type), > > > > > > ['char','void *', 'short', 'int', 'long', 'long long', 'float', > > > 'double', > > > > > > 'size_t']) > > > > > > File > > > "/gpfs/home/jkumar/lib/petsc/config/BuildSystem/config/base.py", > > > > > > line 115, in executeTest > > > > > > ret = apply(test, args,kargs) > > > > > > File > > > "/gpfs/home/jkumar/lib/petsc/config/BuildSystem/config/types.py", > > > > > > line 296, in checkSizeof > > > > > > raise RuntimeError(msg) > > > > > > > > > > > > This is what my configuration looks like (adapted from > > > > > > config/examples/arch-bgp-ibm-opt.py) > > > > > > configure_options = [ > > > > > > '--with-cc=mpixlc', > > > > > > '--with-fc=mpixlf90', > > > > > > '--with-cxx=mpixlcxx', > > > > > > 'COPTFLAGS=-O3', > > > > > > 'FOPTFLAGS=-O3', > > > > > > '--with-debugging=0', > > > > > > '--with-cmake=/soft/apps/fen/cmake-2.8.3/bin/cmake', > > > > > > # '--with-hdf5=/soft/apps/hdf5-1.8.0', > > > > > > '--download-parmetis=1', > > > > > > '--download-metis=1', > > > > > > '--download-plapack=1', > > > > > > '--download-hdf5=1' > > > > > > ] > > > > > > > > > > > > I would appreciate any help building the llbrary there. > > > > > > > > > > > > Thanks, > > > > > > Jitu > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From hgbk2008 at gmail.com Mon Jul 29 16:41:16 2013 From: hgbk2008 at gmail.com (Hoang Giang Bui) Date: Mon, 29 Jul 2013 23:41:16 +0200 Subject: [petsc-users] error linking Message-ID: Hi When I link my program to petsc. I have this linking error: /opt/petsc/petsc-3.4.2-build1/lib/libmetis.a(error.c.o): In function `errexit': error.c:(.text+0x80): multiple definition of `errexit' /opt/petsc/petsc-3.4.2-build1/lib/libparms.a(sets.o):sets.c:(.text+0x0): first defined here I have linked to the libraries as the sequence in $PETSC_WITH_EXTERNAL_LIB. I also sent my configure.log (to petsc-maint at mcs.anl.gov) for your information Ciao Bui -------------- next part -------------- An HTML attachment was scrubbed... URL: From knepley at gmail.com Mon Jul 29 17:02:38 2013 From: knepley at gmail.com (Matthew Knepley) Date: Mon, 29 Jul 2013 17:02:38 -0500 Subject: [petsc-users] error linking In-Reply-To: References: Message-ID: On Mon, Jul 29, 2013 at 4:41 PM, Hoang Giang Bui wrote: > > Hi > > When I link my program to petsc. I have this linking error: > > /opt/petsc/petsc-3.4.2-build1/lib/libmetis.a(error.c.o): In function > `errexit': > error.c:(.text+0x80): multiple definition of `errexit' > /opt/petsc/petsc-3.4.2-build1/lib/libparms.a(sets.o):sets.c:(.text+0x0): > first defined here > This looks like a conflict between PARMs and and Metis. Do you need both? Thanks, Matt > I have linked to the libraries as the sequence in > $PETSC_WITH_EXTERNAL_LIB. I also sent my configure.log (to > petsc-maint at mcs.anl.gov) > for your information > > Ciao > Bui > > -- What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. -- Norbert Wiener -------------- next part -------------- An HTML attachment was scrubbed... URL: From hgbk2008 at gmail.com Mon Jul 29 17:10:04 2013 From: hgbk2008 at gmail.com (Hoang Giang Bui) Date: Tue, 30 Jul 2013 00:10:04 +0200 Subject: [petsc-users] Fwd: error linking In-Reply-To: References: Message-ID: Forgot to forward to the list. Ciao again ---------- Forwarded message ---------- From: Hoang Giang Bui Date: Tue, Jul 30, 2013 at 12:08 AM Subject: Re: [petsc-users] error linking To: Matthew Knepley The fact is that I compiled petsc with both metis and parms (I like to use preconditioner from parms) and I think there are no conflict since no compilation error is thrown. Now I want to link my program to petsc I have to link to both. If not it causes more error. Ciao Bui On Tue, Jul 30, 2013 at 12:02 AM, Matthew Knepley wrote: > On Mon, Jul 29, 2013 at 4:41 PM, Hoang Giang Bui wrote: > >> >> Hi >> >> When I link my program to petsc. I have this linking error: >> >> /opt/petsc/petsc-3.4.2-build1/lib/libmetis.a(error.c.o): In function >> `errexit': >> error.c:(.text+0x80): multiple definition of `errexit' >> /opt/petsc/petsc-3.4.2-build1/lib/libparms.a(sets.o):sets.c:(.text+0x0): >> first defined here >> > > This looks like a conflict between PARMs and and Metis. Do you need both? > > Thanks, > > Matt > > >> I have linked to the libraries as the sequence in >> $PETSC_WITH_EXTERNAL_LIB. I also sent my configure.log (to >> petsc-maint at mcs.anl.gov) >> for your information >> >> Ciao >> Bui >> >> > > > -- > What most experimenters take for granted before they begin their > experiments is infinitely more interesting than any results to which their > experiments lead. > -- Norbert Wiener > -- With Best Regards ! Giang Bui To learn and to excel -- With Best Regards ! 
Giang Bui To learn and to excel -------------- next part -------------- An HTML attachment was scrubbed... URL: From knepley at gmail.com Mon Jul 29 17:15:39 2013 From: knepley at gmail.com (Matthew Knepley) Date: Mon, 29 Jul 2013 17:15:39 -0500 Subject: [petsc-users] error linking In-Reply-To: References: Message-ID: On Mon, Jul 29, 2013 at 5:08 PM, Hoang Giang Bui wrote: > > The fact is that I compiled petsc with both metis and parms (I like to use > preconditioner from parms) and I think there are no conflict since no > compilation error is thrown. Now I want to link my program to petsc I have > to link to both. If not it causes more error. > Unfortunately, compiling does not find link conflicts. PARMs and Metis has chosen the same name for a routine. We cannot fix that. Matt > Ciao > Bui > > > > On Tue, Jul 30, 2013 at 12:02 AM, Matthew Knepley wrote: > >> On Mon, Jul 29, 2013 at 4:41 PM, Hoang Giang Bui wrote: >> >>> >>> Hi >>> >>> When I link my program to petsc. I have this linking error: >>> >>> /opt/petsc/petsc-3.4.2-build1/lib/libmetis.a(error.c.o): In function >>> `errexit': >>> error.c:(.text+0x80): multiple definition of `errexit' >>> /opt/petsc/petsc-3.4.2-build1/lib/libparms.a(sets.o):sets.c:(.text+0x0): >>> first defined here >>> >> >> This looks like a conflict between PARMs and and Metis. Do you need both? >> >> Thanks, >> >> Matt >> >> >>> I have linked to the libraries as the sequence in >>> $PETSC_WITH_EXTERNAL_LIB. I also sent my configure.log (to >>> petsc-maint at mcs.anl.gov) >>> for your information >>> >>> Ciao >>> Bui >>> >>> >> >> >> -- >> What most experimenters take for granted before they begin their >> experiments is infinitely more interesting than any results to which their >> experiments lead. >> -- Norbert Wiener >> > > > > -- > With Best Regards ! > Giang Bui > To learn and to excel > -- What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. -- Norbert Wiener -------------- next part -------------- An HTML attachment was scrubbed... URL: From balay at mcs.anl.gov Mon Jul 29 17:18:51 2013 From: balay at mcs.anl.gov (Satish Balay) Date: Mon, 29 Jul 2013 17:18:51 -0500 (CDT) Subject: [petsc-users] error linking In-Reply-To: References: Message-ID: changes to Parms looks trivial. You can try the attached patch.. cd externalpackages/pARMS_3.2 patch -Np1 < parms.patch and then: rm -f /home/hbui/sw/petsc-3.4.2-build1/arch-linux2-cxx-opt/conf/pARMS and rerun PETSc configure as before.. Satish On Mon, 29 Jul 2013, Matthew Knepley wrote: > On Mon, Jul 29, 2013 at 5:08 PM, Hoang Giang Bui wrote: > > > > > The fact is that I compiled petsc with both metis and parms (I like to use > > preconditioner from parms) and I think there are no conflict since no > > compilation error is thrown. Now I want to link my program to petsc I have > > to link to both. If not it causes more error. > > > > Unfortunately, compiling does not find link conflicts. PARMs and Metis has > chosen the same name for a routine. We cannot fix that. > > Matt > > > > Ciao > > Bui > > > > > > > > On Tue, Jul 30, 2013 at 12:02 AM, Matthew Knepley wrote: > > > >> On Mon, Jul 29, 2013 at 4:41 PM, Hoang Giang Bui wrote: > >> > >>> > >>> Hi > >>> > >>> When I link my program to petsc. 
I have this linking error: > >>> > >>> /opt/petsc/petsc-3.4.2-build1/lib/libmetis.a(error.c.o): In function > >>> `errexit': > >>> error.c:(.text+0x80): multiple definition of `errexit' > >>> /opt/petsc/petsc-3.4.2-build1/lib/libparms.a(sets.o):sets.c:(.text+0x0): > >>> first defined here > >>> > >> > >> This looks like a conflict between PARMs and and Metis. Do you need both? > >> > >> Thanks, > >> > >> Matt > >> > >> > >>> I have linked to the libraries as the sequence in > >>> $PETSC_WITH_EXTERNAL_LIB. I also sent my configure.log (to > >>> petsc-maint at mcs.anl.gov) > >>> for your information > >>> > >>> Ciao > >>> Bui > >>> > >>> > >> > >> > >> -- > >> What most experimenters take for granted before they begin their > >> experiments is infinitely more interesting than any results to which their > >> experiments lead. > >> -- Norbert Wiener > >> > > > > > > > > -- > > With Best Regards ! > > Giang Bui > > To learn and to excel > > > > > > -------------- next part -------------- diff --git a/src/DDPQ/protos.h b/src/DDPQ/protos.h index 8b675af..7cb4885 100644 --- a/src/DDPQ/protos.h +++ b/src/DDPQ/protos.h @@ -33,7 +33,7 @@ #endif /* sets */ -extern void errexit(char *f_str, ...); +extern void parms_errexit(char *f_str, ...); extern void *Malloc(int nbytes, char *msg); extern int setupP4 (p4ptr amat, int Bn, int Cn, csptr F, csptr E); extern int cleanP4(p4ptr amat); diff --git a/src/DDPQ/sets.c b/src/DDPQ/sets.c index 49067e6..1cfb68b 100644 --- a/src/DDPQ/sets.c +++ b/src/DDPQ/sets.c @@ -8,7 +8,7 @@ #endif #include "protos.h" -void errexit( char *f_str, ... ) +void parms_errexit( char *f_str, ... ) { va_list argp; char out1[256], out2[256]; @@ -34,7 +34,7 @@ void *Malloc( int nbytes, char *msg ) ptr = (void *)malloc(nbytes); if (ptr == NULL) - errexit( "Not enough mem for %s. Requested size: %d bytes", msg, nbytes ); + parms_errexit( "Not enough mem for %s. Requested size: %d bytes", msg, nbytes ); return ptr; } From hgbk2008 at gmail.com Mon Jul 29 17:47:11 2013 From: hgbk2008 at gmail.com (Hoang Giang Bui) Date: Tue, 30 Jul 2013 00:47:11 +0200 Subject: [petsc-users] error linking In-Reply-To: References: Message-ID: Perfect. The patch solution works like a charm. Ciao Bui On Tue, Jul 30, 2013 at 12:18 AM, Satish Balay wrote: > changes to Parms looks trivial. You can try the attached patch.. > > cd externalpackages/pARMS_3.2 > patch -Np1 < parms.patch > > and then: > > rm -f /home/hbui/sw/petsc-3.4.2-build1/arch-linux2-cxx-opt/conf/pARMS > > and rerun PETSc configure as before.. > > Satish > > On Mon, 29 Jul 2013, Matthew Knepley wrote: > > > On Mon, Jul 29, 2013 at 5:08 PM, Hoang Giang Bui > wrote: > > > > > > > > The fact is that I compiled petsc with both metis and parms (I like to > use > > > preconditioner from parms) and I think there are no conflict since no > > > compilation error is thrown. Now I want to link my program to petsc I > have > > > to link to both. If not it causes more error. > > > > > > > Unfortunately, compiling does not find link conflicts. PARMs and Metis > has > > chosen the same name for a routine. We cannot fix that. > > > > Matt > > > > > > > Ciao > > > Bui > > > > > > > > > > > > On Tue, Jul 30, 2013 at 12:02 AM, Matthew Knepley >wrote: > > > > > >> On Mon, Jul 29, 2013 at 4:41 PM, Hoang Giang Bui >wrote: > > >> > > >>> > > >>> Hi > > >>> > > >>> When I link my program to petsc. 
I have this linking error: > > >>> > > >>> /opt/petsc/petsc-3.4.2-build1/lib/libmetis.a(error.c.o): In function > > >>> `errexit': > > >>> error.c:(.text+0x80): multiple definition of `errexit' > > >>> > /opt/petsc/petsc-3.4.2-build1/lib/libparms.a(sets.o):sets.c:(.text+0x0): > > >>> first defined here > > >>> > > >> > > >> This looks like a conflict between PARMs and and Metis. Do you need > both? > > >> > > >> Thanks, > > >> > > >> Matt > > >> > > >> > > >>> I have linked to the libraries as the sequence in > > >>> $PETSC_WITH_EXTERNAL_LIB. I also sent my configure.log (to > > >>> petsc-maint at mcs.anl.gov< > http://www.mcs.anl.gov/petsc/documentation/bugreporting.html>) > > >>> for your information > > >>> > > >>> Ciao > > >>> Bui > > >>> > > >>> > > >> > > >> > > >> -- > > >> What most experimenters take for granted before they begin their > > >> experiments is infinitely more interesting than any results to which > their > > >> experiments lead. > > >> -- Norbert Wiener > > >> > > > > > > > > > > > > -- > > > With Best Regards ! > > > Giang Bui > > > To learn and to excel > > > > > > > > > > > > -- With Best Regards ! Giang Bui To learn and to excel -------------- next part -------------- An HTML attachment was scrubbed... URL: From bsmith at mcs.anl.gov Mon Jul 29 18:14:59 2013 From: bsmith at mcs.anl.gov (Barry Smith) Date: Mon, 29 Jul 2013 18:14:59 -0500 Subject: [petsc-users] error linking In-Reply-To: References: Message-ID: <0137CAF1-42D2-481E-9124-53C7A57AF994@mcs.anl.gov> Satish, Thanks. Please make sure that our downloaded Parms has this patch. Barry On Jul 29, 2013, at 5:18 PM, Satish Balay wrote: > changes to Parms looks trivial. You can try the attached patch.. > > cd externalpackages/pARMS_3.2 > patch -Np1 < parms.patch > > and then: > > rm -f /home/hbui/sw/petsc-3.4.2-build1/arch-linux2-cxx-opt/conf/pARMS > > and rerun PETSc configure as before.. > > Satish > > On Mon, 29 Jul 2013, Matthew Knepley wrote: > >> On Mon, Jul 29, 2013 at 5:08 PM, Hoang Giang Bui wrote: >> >>> >>> The fact is that I compiled petsc with both metis and parms (I like to use >>> preconditioner from parms) and I think there are no conflict since no >>> compilation error is thrown. Now I want to link my program to petsc I have >>> to link to both. If not it causes more error. >>> >> >> Unfortunately, compiling does not find link conflicts. PARMs and Metis has >> chosen the same name for a routine. We cannot fix that. >> >> Matt >> >> >>> Ciao >>> Bui >>> >>> >>> >>> On Tue, Jul 30, 2013 at 12:02 AM, Matthew Knepley wrote: >>> >>>> On Mon, Jul 29, 2013 at 4:41 PM, Hoang Giang Bui wrote: >>>> >>>>> >>>>> Hi >>>>> >>>>> When I link my program to petsc. I have this linking error: >>>>> >>>>> /opt/petsc/petsc-3.4.2-build1/lib/libmetis.a(error.c.o): In function >>>>> `errexit': >>>>> error.c:(.text+0x80): multiple definition of `errexit' >>>>> /opt/petsc/petsc-3.4.2-build1/lib/libparms.a(sets.o):sets.c:(.text+0x0): >>>>> first defined here >>>>> >>>> >>>> This looks like a conflict between PARMs and and Metis. Do you need both? >>>> >>>> Thanks, >>>> >>>> Matt >>>> >>>> >>>>> I have linked to the libraries as the sequence in >>>>> $PETSC_WITH_EXTERNAL_LIB. I also sent my configure.log (to >>>>> petsc-maint at mcs.anl.gov) >>>>> for your information >>>>> >>>>> Ciao >>>>> Bui >>>>> >>>>> >>>> >>>> >>>> -- >>>> What most experimenters take for granted before they begin their >>>> experiments is infinitely more interesting than any results to which their >>>> experiments lead. 
>>>> -- Norbert Wiener
>>>
>>> --
>>> With Best Regards !
>>> Giang Bui
>>> To learn and to excel
>>

From ztdepyahoo at 163.com  Tue Jul 30 02:56:56 2013
From: ztdepyahoo at 163.com (=?GBK?B?tqHAz8qm?=)
Date: Tue, 30 Jul 2013 15:56:56 +0800 (CST)
Subject: [petsc-users] how to scale the diagonal of a matrix
Message-ID: <15edb16f.17ebd.1402e947817.Coremail.ztdepyahoo@163.com>

I want to implement the under-relaxation version of the equation. It needs
to scale the diagonal values of a matrix with the relaxation factor; could
you please tell me how to achieve this goal?

From jedbrown at mcs.anl.gov  Tue Jul 30 03:47:22 2013
From: jedbrown at mcs.anl.gov (Jed Brown)
Date: Tue, 30 Jul 2013 16:47:22 +0800
Subject: [petsc-users] how to scale the diagonal of a matrix
In-Reply-To: <15edb16f.17ebd.1402e947817.Coremail.ztdepyahoo@163.com>
References: <15edb16f.17ebd.1402e947817.Coremail.ztdepyahoo@163.com>
Message-ID: <87r4egxwmd.fsf@mcs.anl.gov>

丁老师 writes:

> I want to implement the under-relaxation version of the equation. It
> needs to scale the diagonal values of a matrix with the relaxation
> factor; could you please tell me how to achieve this goal?

This sounds like damped Richardson, perhaps preconditioned by Jacobi, but
if you really want to do in-place scaling, there is MatShift and
MatDiagonalScale, depending on what you're after.

From olivier.bonnefon at avignon.inra.fr  Tue Jul 30 09:06:24 2013
From: olivier.bonnefon at avignon.inra.fr (Olivier Bonnefon)
Date: Tue, 30 Jul 2013 16:06:24 +0200
Subject: [petsc-users] FEM on 2D poisson equation
In-Reply-To: <51E7F011.2020408@avignon.inra.fr>
References: <51E7BEAF.4090901@avignon.inra.fr> <51E7EADD.6010401@avignon.inra.fr> <51E7F011.2020408@avignon.inra.fr>
Message-ID: <51F7C860.3030704@avignon.inra.fr>

Hello,

I want to use PETSc for a large problem involving a diffusive model in the
epidemiological field. I'm using the slides 'Advanced PETSc Tutorial,
Maison de la Simulation, Orsay, France, June 2013 (Matt)'.

My first step is to simulate a problem 0=-\nabla u + f(u), where f is a
linear or non-linear function, with FEM. To do this I'm adapting the
example ex12.c for the linear problem

0=-\nabla u + w*w*u

with the exact solution u(x,y)=exp(w*x)+exp(w*y). The Dirichlet boundary
conditions are defined from the exact solution, like in example ex12.

To do this, I change only the two following functions:

double WW=1.0;
void quadratic_u_2d(const PetscReal x[], PetscScalar *u)
{
  *u = exp(WW*(x[0])) + exp(WW*(x[1]));
}

void f0_u(const PetscScalar u[], const PetscScalar gradU[], const PetscReal x[], PetscScalar f0[])
{
  const PetscInt Ncomp = NUM_BASIS_COMPONENTS_0;
  PetscInt comp;

  for (comp = 0; comp < Ncomp; ++comp) f0[comp] = WW*WW*u[comp];
}

The result is:

$ ./ex12 -refinement_limit 0.01 -snes_monitor_short -snes_converged_reason
  0 SNES Function norm 22.1518
  1 SNES Function norm 0.364312
  2 SNES Function norm 0.0165162
  3 SNES Function norm 0.000792446
  4 SNES Function norm 3.81143e-05
  5 SNES Function norm 1.83353e-06
  6 SNES Function norm 8.8206e-08
Nonlinear solve converged due to CONVERGED_FNORM_RELATIVE iterations 6
Number of SNES iterations = 6
L_2 Error: 0.00511

Something is wrong because this is a linear problem: why didn't the SNES
converge in one iteration?

Thanks a lot.
Olivier B On 07/18/2013 03:39 PM, Olivier Bonnefon wrote: > On 07/18/2013 03:26 PM, Matthew Knepley wrote: >> On Thu, Jul 18, 2013 at 8:17 AM, Olivier Bonnefon >> > > wrote: >> >> It is what I wanted, it works. >> If I well understand the code, ex12.h contains the P1 >> implementation. To simulate an other system, with time >> dependences for examples (du/dt), I have to adapt the plugin >> functions. >> >> >> The way I would add time dependence is to convert this from a SNES >> example into a TS example. I can help you >> do this since I want to start using TS by default. Does this sound >> reasonable? > Yes, of course. My goal is to simulate diffusive equation with non > linear sources, for example Lotka-Voltera competion. > > Olivier B >> >> Thanks, >> >> Matt >> >> Thanks a lot. >> >> Olivier B >> >> On 07/18/2013 01:12 PM, Matthew Knepley wrote: >>> On Thu, Jul 18, 2013 at 5:08 AM, Olivier Bonnefon >>> >> > wrote: >>> >>> Hello, >>> >>> I have a 2-d heat equation that I want to simulate with >>> Finit Element Method, to do this, I'm looking for an example >>> solving 2D poisson equation with FEM (DMDA or DMPlex). Is >>> there an example like this ? >>> >>> >>> There is, but there it is still somewhat problematic. I use FIAT >>> to generate the basis function tabulation, >>> so you have to configure with >>> >>> --download-fiat --download-scientificpython --download-generator >>> >>> and you need mesh generation and partitioning >>> >>> --download-triangle --download-chaco >>> >>> and then you can run SNES ex12 using Builder (which will make >>> the header file) >>> >>> python2.7 ./config/builder2.py check >>> src/snes/examples/tutorials/ex12.c >>> >>> Jed and I are working on an all C version of tabulation which >>> would mean that you could bypass >>> the Python code generation step. Once the header is generated >>> for the element you want, then >>> you can just run the example as normal. >>> >>> Matt >>> >>> Thanks a lot. >>> >>> Olivier Bonnefon >>> >>> -- >>> Olivier Bonnefon >>> INRA PACA-Avignon, Unit? BioSP >>> Tel: +33 (0)4 32 72 21 58 >>> >>> >>> >>> >>> >>> -- >>> What most experimenters take for granted before they begin their >>> experiments is infinitely more interesting than any results to >>> which their experiments lead. >>> -- Norbert Wiener >> >> >> -- >> Olivier Bonnefon >> INRA PACA-Avignon, Unit? BioSP >> Tel:+33 (0)4 32 72 21 58 >> >> >> >> >> -- >> What most experimenters take for granted before they begin their >> experiments is infinitely more interesting than any results to which >> their experiments lead. >> -- Norbert Wiener > > > -- > Olivier Bonnefon > INRA PACA-Avignon, Unit? BioSP > Tel: +33 (0)4 32 72 21 58 -- Olivier Bonnefon INRA PACA-Avignon, Unit? BioSP Tel: +33 (0)4 32 72 21 58 -------------- next part -------------- An HTML attachment was scrubbed... URL: From luqiyue at gmail.com Tue Jul 30 10:54:07 2013 From: luqiyue at gmail.com (Lu Qiyue) Date: Tue, 30 Jul 2013 10:54:07 -0500 Subject: [petsc-users] Question on MatCreateSeqAIJ() and 32-bit integer Message-ID: Dear All: I am solving a huge system AX=b, A is a sparse matrix but its number of non-zeros is 3.29*10^9. I noticed that the 32-bit integer upper limit is ~2.15*10^9. A is in COO format here. When I prepare the input *.bin file for Petsc, the line ierr = MatCreateSeqAIJ(PETSC_COMM_WORLD,m,n,0,cnt,&A);CHKERRQ(ierr); can not pass. m =n =39979380 here, cnt is an array holding the number of non-zeros per row. The error message is: [0]PETSC ERROR: Out of memory. 
This could be due to allocating [0]PETSC ERROR: too large an object or bleeding by not properly [0]PETSC ERROR: destroying unneeded objects. [0]PETSC ERROR: Memory allocated 0 Memory used by process 492802048 [0]PETSC ERROR: Try running with -malloc_dump or -malloc_log for info. [0]PETSC ERROR: Memory requested 18446744061813993472! Calling MatCreateSeqAIJ() doesn't involve NNZ information there, m/n/cnt are all integers less than 32-bit integer limit. And the total size of data in COO format: nnz*8 bytes(values)+nnz*4 bytes(rows)+nnz*4 bytes(cols) are less than the memory limit of our system. The code works on a system with half size of this failed one. I am wondering, Does this because MatCreateSeqAIJ() might do some 'internal' counting things which exceeds the integer limit in this case? Thanks Qiyue Lu -------------- next part -------------- An HTML attachment was scrubbed... URL: From mfadams at lbl.gov Tue Jul 30 11:38:35 2013 From: mfadams at lbl.gov (Mark F. Adams) Date: Tue, 30 Jul 2013 12:38:35 -0400 Subject: [petsc-users] FEM on 2D poisson equation In-Reply-To: <51F7C860.3030704@avignon.inra.fr> References: <51E7BEAF.4090901@avignon.inra.fr> <51E7EADD.6010401@avignon.inra.fr> <51E7F011.2020408@avignon.inra.fr> <51F7C860.3030704@avignon.inra.fr> Message-ID: On Jul 30, 2013, at 10:06 AM, Olivier Bonnefon wrote: > Hello, > > I want to use PETSC for large problem about diffusive model in epidemiological field. > I'm using the slides 'Advanced PETSc Tutorial, Maison de la Simulation, Orsay, France, June 2013 (Matt)'. > > My first step is to simulated a problem 0=-\nabla u + f(u), where f is a linear or non-linear function, with FEM. To do this I'm adapting the example ex12.c for the linear problem: > > 0=-\nabla u + w*w*u > with the exact solution u(x,y)=exp(w*x)+exp(w*y). The Dirichlet boundary condition are defined from the exact solution, like in example ex12. > > To do this, I change only the two following functions: > > double WW=1.0; > void quadratic_u_2d(const PetscReal x[], PetscScalar *u) > { > *u = exp(WW*(x[0])) +exp(WW*(x[1])) ; > } > > void f0_u(const PetscScalar u[], const PetscScalar gradU[], const PetscReal x[], PetscScalar f0[]) > { > const PetscInt Ncomp = NUM_BASIS_COMPONENTS_0; > PetscInt comp; > > for (comp = 0; comp < Ncomp; ++comp) f0[comp] = WW*WW*u[comp] ; > } > > > The result is : > $ ./ex12 -refinement_limit 0.01 -snes_monitor_short -snes_converged_reason > 0 SNES Function norm 22.1518 > 1 SNES Function norm 0.364312 > 2 SNES Function norm 0.0165162 > 3 SNES Function norm 0.000792446 > 4 SNES Function norm 3.81143e-05 > 5 SNES Function norm 1.83353e-06 > 6 SNES Function norm 8.8206e-08 > Nonlinear solve converged due to CONVERGED_FNORM_RELATIVE iterations 6 > Number of SNES iterations = 6 > L_2 Error: 0.00511 > -ksp_monitor will show the linear solver residuals and these should match from one nonlinear iteration to the next if the problem is linear. Your function and Jacobean might not be consistent (my guess). > > Something is wrong because it is a linear problem, why the snes didn't converge in one iteration ? > > Thanks a lot. > > Olivier B > > On 07/18/2013 03:39 PM, Olivier Bonnefon wrote: >> >> On 07/18/2013 03:26 PM, Matthew Knepley wrote: >>> >>> On Thu, Jul 18, 2013 at 8:17 AM, Olivier Bonnefon wrote: >>> It is what I wanted, it works. >>> If I well understand the code, ex12.h contains the P1 implementation. To simulate an other system, with time dependences for examples (du/dt), I have to adapt the plugin functions. 
>>> >>> The way I would add time dependence is to convert this from a SNES example into a TS example. I can help you >>> do this since I want to start using TS by default. Does this sound reasonable? >> Yes, of course. My goal is to simulate diffusive equation with non linear sources, for example Lotka-Voltera competion. >> >> Olivier B >>> >>> Thanks, >>> >>> Matt >>> >>> Thanks a lot. >>> >>> Olivier B >>> >>> On 07/18/2013 01:12 PM, Matthew Knepley wrote: >>>> >>>> On Thu, Jul 18, 2013 at 5:08 AM, Olivier Bonnefon wrote: >>>> Hello, >>>> >>>> I have a 2-d heat equation that I want to simulate with Finit Element Method, to do this, I'm looking for an example solving 2D poisson equation with FEM (DMDA or DMPlex). Is there an example like this ? >>>> >>>> There is, but there it is still somewhat problematic. I use FIAT to generate the basis function tabulation, >>>> so you have to configure with >>>> >>>> --download-fiat --download-scientificpython --download-generator >>>> >>>> and you need mesh generation and partitioning >>>> >>>> --download-triangle --download-chaco >>>> >>>> and then you can run SNES ex12 using Builder (which will make the header file) >>>> >>>> python2.7 ./config/builder2.py check src/snes/examples/tutorials/ex12.c >>>> >>>> Jed and I are working on an all C version of tabulation which would mean that you could bypass >>>> the Python code generation step. Once the header is generated for the element you want, then >>>> you can just run the example as normal. >>>> >>>> Matt >>>> >>>> Thanks a lot. >>>> >>>> Olivier Bonnefon >>>> >>>> -- >>>> Olivier Bonnefon >>>> INRA PACA-Avignon, Unit? BioSP >>>> Tel: +33 (0)4 32 72 21 58 >>>> >>>> >>>> >>>> >>>> -- >>>> What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. >>>> -- Norbert Wiener >>> >>> >>> -- >>> Olivier Bonnefon >>> INRA PACA-Avignon, Unit? BioSP >>> Tel: +33 (0)4 32 72 21 58 >>> >>> >>> >>> -- >>> What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. >>> -- Norbert Wiener >> >> >> -- >> Olivier Bonnefon >> INRA PACA-Avignon, Unit? BioSP >> Tel: +33 (0)4 32 72 21 58 > > > -- > Olivier Bonnefon > INRA PACA-Avignon, Unit? BioSP > Tel: +33 (0)4 32 72 21 58 -------------- next part -------------- An HTML attachment was scrubbed... URL: From bsmith at mcs.anl.gov Tue Jul 30 13:38:29 2013 From: bsmith at mcs.anl.gov (Barry Smith) Date: Tue, 30 Jul 2013 13:38:29 -0500 Subject: [petsc-users] Question on MatCreateSeqAIJ() and 32-bit integer In-Reply-To: References: Message-ID: Lu, You need to switch to 64 bit indices. Even though the individual number of non zeros per row is less than the 32 bit limit the total number of non zeros in the matrix is higher than the limit; how much physical memory you have is not relevant. As you note it is because of internal counts inside of PETSc (actually the way we store the matrices) it requires 64 bit indices. Barry Switching to 64 bit indices should not be difficult, please let us know if you have any problems. On Jul 30, 2013, at 10:54 AM, Lu Qiyue wrote: > Dear All: > > I am solving a huge system AX=b, A is a sparse matrix but its number of non-zeros is 3.29*10^9. I noticed that the 32-bit integer upper limit is ~2.15*10^9. > > A is in COO format here. 
When I prepare the input *.bin file for Petsc, the line > > ierr = MatCreateSeqAIJ(PETSC_COMM_WORLD,m,n,0,cnt,&A);CHKERRQ(ierr); > > can not pass. > > m =n =39979380 here, cnt is an array holding the number of non-zeros per row. The error message is: > > [0]PETSC ERROR: Out of memory. This could be due to allocating > [0]PETSC ERROR: too large an object or bleeding by not properly > [0]PETSC ERROR: destroying unneeded objects. > [0]PETSC ERROR: Memory allocated 0 Memory used by process 492802048 > [0]PETSC ERROR: Try running with -malloc_dump or -malloc_log for info. > [0]PETSC ERROR: Memory requested 18446744061813993472! > > Calling MatCreateSeqAIJ() doesn't involve NNZ information there, m/n/cnt are all integers less than 32-bit integer limit. > And the total size of data in COO format: nnz*8 bytes(values)+nnz*4 bytes(rows)+nnz*4 bytes(cols) are less than the memory limit of our system. > > The code works on a system with half size of this failed one. > > I am wondering, Does this because MatCreateSeqAIJ() might do some 'internal' counting things which exceeds the integer limit in this case? > > Thanks > > Qiyue Lu From bsmith at mcs.anl.gov Tue Jul 30 13:45:37 2013 From: bsmith at mcs.anl.gov (Barry Smith) Date: Tue, 30 Jul 2013 13:45:37 -0500 Subject: [petsc-users] FEM on 2D poisson equation In-Reply-To: References: <51E7BEAF.4090901@avignon.inra.fr> <51E7EADD.6010401@avignon.inra.fr> <51E7F011.2020408@avignon.inra.fr> <51F7C860.3030704@avignon.inra.fr> Message-ID: Olivier, I concur with Mark that the most likely problem is a wrong Jacobian. You can look at http://www.mcs.anl.gov/petsc/documentation/faq.html#newton and it gives you a series of things to check. They also pretty much apply to your case, likely if you work through them you'll determine the problem. If you continue to have difficulties do not hesitate to contact us again, Barry On Jul 30, 2013, at 11:38 AM, Mark F. Adams wrote: > > On Jul 30, 2013, at 10:06 AM, Olivier Bonnefon wrote: > >> Hello, >> >> I want to use PETSC for large problem about diffusive model in epidemiological field. >> I'm using the slides 'Advanced PETSc Tutorial, Maison de la Simulation, Orsay, France, June 2013 (Matt)'. >> >> My first step is to simulated a problem 0=-\nabla u + f(u), where f is a linear or non-linear function, with FEM. To do this I'm adapting the example ex12.c for the linear problem: >> >> 0=-\nabla u + w*w*u >> with the exact solution u(x,y)=exp(w*x)+exp(w*y). The Dirichlet boundary condition are defined from the exact solution, like in example ex12. 
>> >> To do this, I change only the two following functions: >> >> double WW=1.0; >> void quadratic_u_2d(const PetscReal x[], PetscScalar *u) >> { >> *u = exp(WW*(x[0])) +exp(WW*(x[1])) ; >> } >> >> void f0_u(const PetscScalar u[], const PetscScalar gradU[], const PetscReal x[], PetscScalar f0[]) >> { >> const PetscInt Ncomp = NUM_BASIS_COMPONENTS_0; >> PetscInt comp; >> >> for (comp = 0; comp < Ncomp; ++comp) f0[comp] = WW*WW*u[comp] ; >> } >> >> >> The result is : >> $ ./ex12 -refinement_limit 0.01 -snes_monitor_short -snes_converged_reason >> 0 SNES Function norm 22.1518 >> 1 SNES Function norm 0.364312 >> 2 SNES Function norm 0.0165162 >> 3 SNES Function norm 0.000792446 >> 4 SNES Function norm 3.81143e-05 >> 5 SNES Function norm 1.83353e-06 >> 6 SNES Function norm 8.8206e-08 >> Nonlinear solve converged due to CONVERGED_FNORM_RELATIVE iterations 6 >> Number of SNES iterations = 6 >> L_2 Error: 0.00511 >> > > -ksp_monitor will show the linear solver residuals and these should match from one nonlinear iteration to the next if the problem is linear. Your function and Jacobean might not be consistent (my guess). > >> >> Something is wrong because it is a linear problem, why the snes didn't converge in one iteration ? >> >> Thanks a lot. >> >> Olivier B >> >> On 07/18/2013 03:39 PM, Olivier Bonnefon wrote: >>> On 07/18/2013 03:26 PM, Matthew Knepley wrote: >>>> On Thu, Jul 18, 2013 at 8:17 AM, Olivier Bonnefon wrote: >>>> It is what I wanted, it works. >>>> If I well understand the code, ex12.h contains the P1 implementation. To simulate an other system, with time dependences for examples (du/dt), I have to adapt the plugin functions. >>>> >>>> The way I would add time dependence is to convert this from a SNES example into a TS example. I can help you >>>> do this since I want to start using TS by default. Does this sound reasonable? >>> Yes, of course. My goal is to simulate diffusive equation with non linear sources, for example Lotka-Voltera competion. >>> >>> Olivier B >>>> >>>> Thanks, >>>> >>>> Matt >>>> >>>> Thanks a lot. >>>> >>>> Olivier B >>>> >>>> On 07/18/2013 01:12 PM, Matthew Knepley wrote: >>>>> On Thu, Jul 18, 2013 at 5:08 AM, Olivier Bonnefon wrote: >>>>> Hello, >>>>> >>>>> I have a 2-d heat equation that I want to simulate with Finit Element Method, to do this, I'm looking for an example solving 2D poisson equation with FEM (DMDA or DMPlex). Is there an example like this ? >>>>> >>>>> There is, but there it is still somewhat problematic. I use FIAT to generate the basis function tabulation, >>>>> so you have to configure with >>>>> >>>>> --download-fiat --download-scientificpython --download-generator >>>>> >>>>> and you need mesh generation and partitioning >>>>> >>>>> --download-triangle --download-chaco >>>>> >>>>> and then you can run SNES ex12 using Builder (which will make the header file) >>>>> >>>>> python2.7 ./config/builder2.py check src/snes/examples/tutorials/ex12.c >>>>> >>>>> Jed and I are working on an all C version of tabulation which would mean that you could bypass >>>>> the Python code generation step. Once the header is generated for the element you want, then >>>>> you can just run the example as normal. >>>>> >>>>> Matt >>>>> >>>>> Thanks a lot. >>>>> >>>>> Olivier Bonnefon >>>>> >>>>> -- >>>>> Olivier Bonnefon >>>>> INRA PACA-Avignon, Unit? 
>>>>> Tel: +33 (0)4 32 72 21 58
>>>>>
>>>>> --
>>>>> What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead.
>>>>> -- Norbert Wiener
>>>>
>>>> --
>>>> Olivier Bonnefon
>>>> INRA PACA-Avignon, Unité BioSP
>>>> Tel: +33 (0)4 32 72 21 58
>>>>
>>>> --
>>>> What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead.
>>>> -- Norbert Wiener
>>>
>>> --
>>> Olivier Bonnefon
>>> INRA PACA-Avignon, Unité BioSP
>>> Tel: +33 (0)4 32 72 21 58
>>
>> --
>> Olivier Bonnefon
>> INRA PACA-Avignon, Unité BioSP
>> Tel: +33 (0)4 32 72 21 58

From luqiyue at gmail.com  Tue Jul 30 14:05:01 2013
From: luqiyue at gmail.com (Lu Qiyue)
Date: Tue, 30 Jul 2013 14:05:01 -0500
Subject: [petsc-users] Question on MatCreateSeqAIJ() and 32-bit integer
In-Reply-To: References: Message-ID:

Hello Barry:
Switching to 64-bit means reinstalling PETSc and adding --with-64-bit-indices when doing the configuration, right?

Because reinstalling PETSc is not easy here, I am wondering: is there any other way to solve this problem, like adding some '-bigint' option while compiling the driver? That looks insufficient for this case, because PETSc itself would still be at 32 bits.

From what you said, that 'because of internal counts inside of PETSc (actually the way we store the matrices) it requires 64-bit indices', I am afraid that reinstalling is the only solution?

Thanks

Qiyue Lu

On Tue, Jul 30, 2013 at 1:38 PM, Barry Smith wrote:

>    Lu,
>
>    You need to switch to 64-bit indices. Even though the individual number of non-zeros per row is less than the 32-bit limit, the total number of non-zeros in the matrix is higher than the limit; how much physical memory you have is not relevant. As you note, it is because of internal counts inside of PETSc (actually the way we store the matrices) that it requires 64-bit indices.
>
>    Barry
>
>    Switching to 64-bit indices should not be difficult; please let us know if you have any problems.
From bsmith at mcs.anl.gov  Tue Jul 30 14:07:39 2013
From: bsmith at mcs.anl.gov (Barry Smith)
Date: Tue, 30 Jul 2013 14:07:39 -0500
Subject: [petsc-users] question about special matrix-matrix-matrix product
In-Reply-To: <637DBFF26F82E542A0F37B0EA99F7F3D17811CC9@WPVEXCMBX07.purdue.lcl>
References: <637DBFF26F82E542A0F37B0EA99F7F3D17811CC9@WPVEXCMBX07.purdue.lcl>
Message-ID:

   Tillmann,

     If you only need the diagonal of the product then computing just the diagonal is the way to go. Use MatGetRow() on Gamma but use MatDenseGetArray() on the dense matrix. Then each entry of the diagonal is one row of Gamma (sparse) times one column of the dense matrix. (PETSc dense matrices are stored column-oriented, so the entries of a column sit next to each other.)

    Do this for Seq first. Once you have the seq version understood and working well you can do the parallel one. The parallel case is a bit more involved, since each process will need parts of the dense matrix that are stored on other processes. I would start by using MatMPIAIJGetSeqAIJ(mat,&Ad,&Ao,&colmap); the final argument tells you which off-process rows of the dense matrix are needed on each process to do the computation locally (each process will need all the columns of the dense matrix associated with its "owned/local" columns). You move those needed values over using MPI and then do two local products for each diagonal entry: row of Ad * local column of the dense matrix + row of Ao * (the part of the dense matrix collected from the other processes).

   Barry

On Jul 29, 2013, at 12:38 PM, "Kubis, Tillmann C" wrote:

> Hello,
> I need the diagonal of GR*Gamma*GR^dagger, where Gamma is a sparse and GR is a dense matrix. Is there a good/already coded way to do that?
> I am about to write most of the product myself, using the method MatGetRow on GR and Gamma and multiplying the elements directly. So far these are serial complex matrices, but all matrices will be distributed in the end.
> Thanks,
> Tillmann Kubis
> ____________________________________
> Tillmann Kubis, PhD
> Research Assistant Professor
> Network for Computational Nanotechnology
> 207 S Martin Jischke Drive
> Purdue University, DLR, room 441-1
> West Lafayette, Indiana 47907-1971
> phone: +1-765-496-7312
> fax: +1-765-496-6026

From rupp at mcs.anl.gov  Tue Jul 30 14:10:01 2013
From: rupp at mcs.anl.gov (Karl Rupp)
Date: Tue, 30 Jul 2013 14:10:01 -0500
Subject: [petsc-users] Question on MatCreateSeqAIJ() and 32-bit integer
In-Reply-To: References: Message-ID: <51F80F89.8060000@mcs.anl.gov>

Hi,

yes, you really need to re-compile the PETSc library to get 64-bit integers. You can build the library below your home folder; there is no need to beg your admin for this :-) (As an additional benefit, you can also enable additional packages which may not be available in your default installation.)

Best regards,
Karli
From bsmith at mcs.anl.gov  Tue Jul 30 14:09:40 2013
From: bsmith at mcs.anl.gov (Barry Smith)
Date: Tue, 30 Jul 2013 14:09:40 -0500
Subject: [petsc-users] Question on MatCreateSeqAIJ() and 32-bit integer
In-Reply-To: References: Message-ID: <3F3F4105-422A-45EA-98E1-6517B7F4C76F@mcs.anl.gov>

   Select a new PETSC_ARCH (so you don't overwrite the old install) and make a new install, using all the same other ./configure options. Why is an install difficult? You can run an install in your home directory without help from anyone else; an entire install takes about 5 minutes on my laptop, so it should not be too time consuming. Send the configure.log and make.log to petsc-maint at mcs.anl.gov if the new install fails.

   Barry

On Jul 30, 2013, at 2:05 PM, Lu Qiyue wrote:

> Because reinstalling PETSc is not easy here, I am wondering: is there any other way to solve this problem, like adding some '-bigint' option while compiling the driver?
From ztdepyahoo at 163.com  Tue Jul 30 18:37:58 2013
From: ztdepyahoo at 163.com (丁老师)
Date: Wed, 31 Jul 2013 07:37:58 +0800 (CST)
Subject: [petsc-users] how to select values from certain positions of a vec from different process to process 0.
Message-ID: <706b0a8.d79.14031f201a5.Coremail.ztdepyahoo@163.com>

An HTML attachment was scrubbed...

From bsmith at mcs.anl.gov  Tue Jul 30 20:08:26 2013
From: bsmith at mcs.anl.gov (Barry Smith)
Date: Tue, 30 Jul 2013 20:08:26 -0500
Subject: [petsc-users] how to select values from certain positions of a vec from different process to process 0.
In-Reply-To: <706b0a8.d79.14031f201a5.Coremail.ztdepyahoo@163.com>
References: <706b0a8.d79.14031f201a5.Coremail.ztdepyahoo@163.com>
Message-ID: <81B5DEFB-C245-47AF-BF95-3E4CB1EEE74F@mcs.anl.gov>

   Create an IS with the global indices you want on process 0 and a seq Vec of the same size;

   create an IS of size 0 on all the other processes and a seq Vec of size zero on all the other processes;

   create a VecScatter using these ISs, the global Vec and the seq Vec;

   then use VecScatterBegin/End to send over the values.

   Barry

On Jul 30, 2013, at 6:37 PM, 丁老师
wrote:

From heikki.a.virtanen at hotmail.com  Wed Jul 31 05:03:35 2013
From: heikki.a.virtanen at hotmail.com (Heikki Virtanen)
Date: Wed, 31 Jul 2013 13:03:35 +0300
Subject: [petsc-users] MATMPIBAIJ matrix allocation
In-Reply-To: References: Message-ID:

> El 29/07/2013, a las 14:54, Heikki Virtanen escribió:
>
> > Hi, I try to solve an eigenvalue problem (Ax = lambda B x) where A
> > and B are complex matrices. Unfortunately, my data structures
> > are not capable of handling complex numbers directly, yet. So,
> > I have to use
> >
> > RealA = [ re(A)  -im(A) ]
> >         [ im(A)   re(A) ]
> >
> > matrices instead. How should I allocate the matrices if I know the
> > compressed sparse row format of the A and B matrices? I have used
> > something like this:
> >
> > ierr = MatCreate(PETSC_COMM_WORLD,&RealA); CHKERRQ(ierr);
> > ierr = MatSetSizes(RealA,PETSC_DECIDE,PETSC_DECIDE,2*n,2*n); CHKERRQ(ierr);
> > ierr = MatSetType(RealA,MATMPIBAIJ); CHKERRQ(ierr);
> > ierr = MatSetBlockSize(RealA,2);CHKERRQ(ierr);
> > ierr = MatSetFromOptions(RealA);CHKERRQ(ierr);
> >
> > ierr = MatMPIBAIJSetPreallocationCSR(RealA,2,rows,cols,0); CHKERRQ(ierr);
> > ierr = MatAssemblyBegin(RealA,MAT_FINAL_ASSEMBLY); CHKERRQ(ierr);
> > ierr = MatAssemblyEnd(RealA,MAT_FINAL_ASSEMBLY); CHKERRQ(ierr);
> >
> > where n is the global size of matrix A, and rows and cols are the CSR arrays of A.
> > Each submatrix of RealA has the same nonzero pattern as A.
> > But I am not sure if this is correct, because when I print my matrices out
> > they look more like band matrices than block matrices.
> >
> > -Heikki
>
> You are creating a BAIJ matrix with the nonzero pattern defined by (rows,cols), where each entry of the matrix is a 2x2 block, in your case
> [ re(a_ij)  -im(a_ij) ]
> [ im(a_ij)   re(a_ij) ]
>
> So the matrix you are creating is not this one
> [ re(A)  -im(A) ]
> [ im(A)   re(A) ]
> but the one resulting from a perfect shuffle permutation. That's why you are not seeing a 2x2 block structure for the whole matrix.
>
> Jose

Hi, sorry to bother you, but I still have problems with MATMPIBAIJ matrices. It is said here

http://www.mcs.anl.gov/petsc/petsc-current/docs/manualpages/Mat/MatCreateBAIJ.html

that these matrices are created using a block compressed row format. Does it mean this one

http://www.cs.colostate.edu/~mroberts/toolbox/c++/sparseMatrix/sparse_matrix_compression.html

or something else? (Indices start there from 1, not from 0, but if this is fixed, then the format should be the same?)

I also have a second question. What does "block size" mean? Is it the number of elements per side of the block?

-Heikki

From jroman at dsic.upv.es  Wed Jul 31 05:54:50 2013
From: jroman at dsic.upv.es (Jose E. Roman)
Date: Wed, 31 Jul 2013 12:54:50 +0200
Subject: [petsc-users] MATMPIBAIJ matrix allocation
In-Reply-To: References: Message-ID: <85D2F943-D894-4993-AD23-821B9AAEE4EC@dsic.upv.es>

El 31/07/2013, a las 12:03, Heikki Virtanen escribió:

> Hi, sorry to bother you, but I still have problems with MATMPIBAIJ matrices. It is said here
>
> http://www.mcs.anl.gov/petsc/petsc-current/docs/manualpages/Mat/MatCreateBAIJ.html
>
> that these matrices are created using a block compressed row format. Does it mean this one
>
> http://www.cs.colostate.edu/~mroberts/toolbox/c++/sparseMatrix/sparse_matrix_compression.html
>
> or something else? (Indices start there from 1, not from 0, but if this is fixed, then the format should be the same?)
No, the format explained on that page supports variable block sizes. In PETSc all blocks have the same dimension (square blocks). The sparsity pattern (rows,cols) refers to blocks, not matrix entries. See http://www.mcs.anl.gov/petsc/petsc-3.4/docs/manualpages/Mat/MatSetValuesBlocked.html

> I also have a second question. What does "block size" mean? Is it the number of elements per side of the block?

Yes.

Jose

From hgbk2008 at gmail.com  Wed Jul 31 07:02:45 2013
From: hgbk2008 at gmail.com (Hoang Giang Bui)
Date: Wed, 31 Jul 2013 14:02:45 +0200
Subject: [petsc-users] preallocation of mpiaij matrix
Message-ID: <51F8FCE5.80600@gmail.com>

Hi

I want to ask about the optimal way to allocate an MPIAIJ matrix for an FEM application. Typically, the elements on each process contribute only part of a global matrix row, so on each process I can determine the number of nonzeros of a row locally but not globally. Is there a method in PETSc to allocate the matrix based on the sparsity pattern (i.e., I provide PETSc with the rows & columns I am going to fill later in the assembly process)?

Best regards
Bui

From tkubis at purdue.edu  Wed Jul 31 07:36:59 2013
From: tkubis at purdue.edu (Kubis, Tillmann C)
Date: Wed, 31 Jul 2013 12:36:59 +0000
Subject: [petsc-users] question about special matrix-matrix-matrix product
In-Reply-To: References: <637DBFF26F82E542A0F37B0EA99F7F3D17811CC9@WPVEXCMBX07.purdue.lcl>
Message-ID: <637DBFF26F82E542A0F37B0EA99F7F3D17812108@WPVEXCMBX07.purdue.lcl>

Hi,
Thanks a lot. The serial version is working fine now. It will take a while until I can try the parallel one.
Thanks,
Tillmann Kubis
____________________________________
Tillmann Kubis, PhD
Research Assistant Professor
Network for Computational Nanotechnology
207 S Martin Jischke Drive
Purdue University, DLR, room 441-5
West Lafayette, Indiana 47907-1971
phone: +1-765-496-7312
fax: +1-765-496-6026
From ztdepyahoo at 163.com  Wed Jul 31 07:46:23 2013
From: ztdepyahoo at 163.com (丁老师)
Date: Wed, 31 Jul 2013 20:46:23 +0800 (CST)
Subject: [petsc-users] Segmentation fault
Message-ID: <43ee585c.11ae0.14034c3d34b.Coremail.ztdepyahoo@163.com>

I wrote an FVM code for the convection-diffusion problem on a 2D Cartesian grid. The code runs correctly with a grid of 720*720, and it uses only 26% of the system memory (8G) while running. But if I increase the grid to 800*800, it gives me the following error. Could you please give me some information on how to resolve it?

mpiexec -np 4 ./main
===================================================================================
=   BAD TERMINATION OF ONE OF YOUR APPLICATION PROCESSES
=   EXIT CODE: 11
=   CLEANING UP REMAINING PROCESSES
=   YOU CAN IGNORE THE BELOW CLEANUP MESSAGES
===================================================================================
YOUR APPLICATION TERMINATED WITH THE EXIT STRING: Segmentation fault (signal 11)
This typically refers to a problem with your application.
Please see the FAQ page for debugging suggestions
ztdep at linuxdell:~/Projects/CartesianPFlowingHeat2D-KDE/build> mpiexec -np 4 ./main
write the grid to Grid.cgns successfully

From ztdepyahoo at 163.com  Wed Jul 31 07:55:12 2013
From: ztdepyahoo at 163.com (丁老师)
Date: Wed, 31 Jul 2013 20:55:12 +0800 (CST)
Subject: [petsc-users] how to select values from certain positions of a vec from different process to process 0.
In-Reply-To: <81B5DEFB-C245-47AF-BF95-3E4CB1EEE74F@mcs.anl.gov>
References: <706b0a8.d79.14031f201a5.Coremail.ztdepyahoo@163.com> <81B5DEFB-C245-47AF-BF95-3E4CB1EEE74F@mcs.anl.gov>
Message-ID: <5c9b1ad1.11c4b.14034cbe647.Coremail.ztdepyahoo@163.com>

Does the function VecScatterCreateToZero resolve this problem?

At 2013-07-31 09:08:26, "Barry Smith" wrote:
> create an IS with the global indices you want on process 0 and a seq Vec of the same size,
> create an IS of size 0 on all the other processes and a seq Vec of size zero on all other processes,
> create a VecScatter using these ISs, the global Vec and the seq Vec,
> use VecScatterBegin/End to send over the values.
>
>    Barry

From olivier.bonnefon at avignon.inra.fr  Wed Jul 31 10:28:47 2013
From: olivier.bonnefon at avignon.inra.fr (Olivier Bonnefon)
Date: Wed, 31 Jul 2013 17:28:47 +0200
Subject: [petsc-users] FEM on 2D Poisson equation
In-Reply-To: References: <51E7BEAF.4090901@avignon.inra.fr> <51E7EADD.6010401@avignon.inra.fr> <51E7F011.2020408@avignon.inra.fr> <51F7C860.3030704@avignon.inra.fr>
Message-ID: <51F92D2F.1080106@avignon.inra.fr>

Hello,

You are right: I have to define the Jacobian function of the variational formulation. I'm using SNES and the PetscFEM struct (like in ex12).
I need some information about the PetscFEM struct. I didn't find any documentation about it; is there any?

The field f0Funcs is used for the term \int f_0(u, gradu, x) * v
The field f1Funcs is used for the term \int f_1(u, gradu, x) . grad v

Are f0 and f1 used for the rhs of the linearized problem? And what about the g0, g1, g2 and g3 functions? I guess I have to use them to define the Jacobian?

Thanks a lot.

Olivier B

On 07/30/2013 08:45 PM, Barry Smith wrote:
> I concur with Mark that the most likely problem is a wrong Jacobian. You can look at http://www.mcs.anl.gov/petsc/documentation/faq.html#newton and it gives you a series of things to check.
--
Olivier Bonnefon
INRA PACA-Avignon, Unité BioSP
Tel: +33 (0)4 32 72 21 58

From jedbrown at mcs.anl.gov  Wed Jul 31 10:35:12 2013
From: jedbrown at mcs.anl.gov (Jed Brown)
Date: Wed, 31 Jul 2013 23:35:12 +0800
Subject: [petsc-users] preallocation of mpiaij matrix
In-Reply-To: <51F8FCE5.80600@gmail.com>
References: <51F8FCE5.80600@gmail.com>
Message-ID: <87li4m68un.fsf@mcs.anl.gov>

Hoang Giang Bui writes:

> I want to ask about the optimal way to allocate an MPIAIJ matrix for an
> FEM application. Typically, the elements on each process contribute
> only part of a global matrix row, so on each process I can determine
> the number of nonzeros of a row locally but not globally. Is there a
> method in PETSc to allocate the matrix based on the sparsity pattern?

There are a few ways to do it, but you should be providing full preallocation information, not just for the parts the local process is contributing. Consider doing something like this, for example:

http://mail-archive.com/search?l=mid&q=3BEF7FCC-AF4A-461A-B068-9DB01EF94B7A at mcs.anl.gov
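To make the "full preallocation" Jed describes concrete, here is a minimal sketch (an illustration under stated assumptions, not code from this thread): each process counts, for every row it owns, how many nonzeros fall in the diagonal block (columns owned by the same process) and how many fall outside it, and only then creates the matrix. The connectivity loop is application specific and only indicated by a comment, and the function name is made up for the example.

#include <petscmat.h>

/* Sketch: build an MPIAIJ matrix with exact d_nnz/o_nnz counts.
   m_local is the number of rows (and columns) owned by this process. */
PetscErrorCode CreatePreallocatedAIJ(MPI_Comm comm, PetscInt m_local, Mat *A)
{
  PetscInt       rstart, rend, i, *d_nnz, *o_nnz;
  PetscErrorCode ierr;

  PetscFunctionBegin;
  /* Ownership range from a prefix sum over the local sizes. */
  ierr = MPI_Scan(&m_local, &rend, 1, MPIU_INT, MPI_SUM, comm);CHKERRQ(ierr);
  rstart = rend - m_local;

  ierr = PetscMalloc(m_local*sizeof(PetscInt), &d_nnz);CHKERRQ(ierr);
  ierr = PetscMalloc(m_local*sizeof(PetscInt), &o_nnz);CHKERRQ(ierr);
  for (i = 0; i < m_local; i++) d_nnz[i] = o_nnz[i] = 0;

  /* Application-specific counting: for each owned row i and each global
     column j it couples to (known from the FEM connectivity), increment
     d_nnz[i-rstart] if rstart <= j < rend, otherwise o_nnz[i-rstart].
     Contributions coming from elements on other processes must be
     included in these counts as well. */

  ierr = MatCreate(comm, A);CHKERRQ(ierr);
  ierr = MatSetSizes(*A, m_local, m_local, PETSC_DETERMINE, PETSC_DETERMINE);CHKERRQ(ierr);
  ierr = MatSetType(*A, MATMPIAIJ);CHKERRQ(ierr);
  ierr = MatMPIAIJSetPreallocation(*A, 0, d_nnz, 0, o_nnz);CHKERRQ(ierr);

  ierr = PetscFree(d_nnz);CHKERRQ(ierr);
  ierr = PetscFree(o_nnz);CHKERRQ(ierr);
  PetscFunctionReturn(0);
}

With the counts exact, MatSetValues() during assembly triggers no further mallocs, which is the point of preallocating.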
From jedbrown at mcs.anl.gov  Wed Jul 31 10:36:38 2013
From: jedbrown at mcs.anl.gov (Jed Brown)
Date: Wed, 31 Jul 2013 23:36:38 +0800
Subject: [petsc-users] Segmentation fault
In-Reply-To: <43ee585c.11ae0.14034c3d34b.Coremail.ztdepyahoo@163.com>
References: <43ee585c.11ae0.14034c3d34b.Coremail.ztdepyahoo@163.com>
Message-ID: <87iozq68s9.fsf@mcs.anl.gov>

丁老师 writes:

> I wrote an FVM code for the convection-diffusion problem on a 2D
> Cartesian grid. The code runs correctly with a grid of 720*720, and it
> uses only 26% of the system memory (8G) while running.

How are you testing the total application memory usage?

> But if I increase the grid to 800*800, it gives me the following
> error. Could you please give me some information on how to resolve it?

From jedbrown at mcs.anl.gov  Wed Jul 31 10:43:59 2013
From: jedbrown at mcs.anl.gov (Jed Brown)
Date: Wed, 31 Jul 2013 23:43:59 +0800
Subject: [petsc-users] FEM on 2D Poisson equation
In-Reply-To: <51F92D2F.1080106@avignon.inra.fr>
References: <51E7BEAF.4090901@avignon.inra.fr> <51E7EADD.6010401@avignon.inra.fr> <51E7F011.2020408@avignon.inra.fr> <51F7C860.3030704@avignon.inra.fr> <51F92D2F.1080106@avignon.inra.fr>
Message-ID: <87bo5i68g0.fsf@mcs.anl.gov>

Olivier Bonnefon writes:

> Hello,
>
> You are right: I have to define the Jacobian function of the variational
> formulation. I'm using SNES and the PetscFEM struct (like in ex12).
>
> I need some information about the PetscFEM struct. I didn't find any
> documentation about it; is there any?
>
> The field f0Funcs is used for the term \int f_0(u, gradu, x) * v
> The field f1Funcs is used for the term \int f_1(u, gradu, x) . grad v
>
> Are f0 and f1 used for the rhs of the linearized problem?

We think about these problems as being nonlinear whether they are or not. For a linear problem, you can apply one iteration of Newton's method using '-snes_type ksponly'. The Jacobian consists of the derivatives of f_0 and f_1 with respect to u.

> And what about the g0, g1, g2 and g3 functions? I guess I have to use
> them to define the Jacobian?

Those are the derivatives of f_0 and f_1. For example, see the notation in Eq. 3 and 5 of this paper:

http://59A2.org/na/Brown-EfficientNonlinearSolversNodalHighOrder3D-2010.pdf
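For the linear example earlier in this thread, the g0 term is easy to write down: with f0 = WW*WW*u, its derivative with respect to u is the constant WW*WW, and the derivative of f1 = grad u with respect to grad u is the identity (the g3 term). A sketch of the g0 function, mirroring the argument list of the f0_u quoted above — the exact signature expected by ex12's function tables may differ, so treat this as an assumption:

extern double WW; /* the coefficient defined alongside f0_u earlier in the thread */

/* Pointwise Jacobian term g0 = d f0 / d u for f0[0] = WW*WW*u[0]. */
void g0_uu(const PetscScalar u[], const PetscScalar gradU[], const PetscReal x[], PetscScalar g0[])
{
  g0[0] = WW*WW; /* constant, since f0 is linear in u */
}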
From Wadud.Miah at awe.co.uk  Wed Jul 31 10:46:47 2013
From: Wadud.Miah at awe.co.uk (Wadud.Miah at awe.co.uk)
Date: Wed, 31 Jul 2013 15:46:47 +0000
Subject: [petsc-users] PETSc problems with MVAPICH2
Message-ID: <201307311546.r6VFkqP6011998@msw2.awe.co.uk>

Hello,

I have built PETSc 3.4.2 with MVAPICH2 1.9b, and my application code crashes when I run it with 16 processes; it works with 4 and 8 processes. The crash occurs in the MatAssemblyEnd subroutine, when the non-local values are broadcast to other processes. Has anyone else had issues with MVAPICH2?

Regards,
Wadud.

From jedbrown at mcs.anl.gov  Wed Jul 31 10:56:06 2013
From: jedbrown at mcs.anl.gov (Jed Brown)
Date: Wed, 31 Jul 2013 23:56:06 +0800
Subject: [petsc-users] PETSc problems with MVAPICH2
In-Reply-To: <201307311546.r6VFkqP6011998@msw2.awe.co.uk>
References: <201307311546.r6VFkqP6011998@msw2.awe.co.uk>
Message-ID: <871u6e67vt.fsf@mcs.anl.gov>

Wadud.Miah at awe.co.uk writes:

> I have built PETSc 3.4.2 with MVAPICH2 1.9b, and my application code
> crashes when I run it with 16 processes; it works with 4 and 8
> processes. The crash occurs in the MatAssemblyEnd subroutine, when the
> non-local values are broadcast to other processes. Has anyone else had
> issues with MVAPICH2?

Can you get a stack trace? Does it work correctly with other MPI implementations? Does a basic PETSc example show the same problem?

From bsmith at mcs.anl.gov  Wed Jul 31 11:34:31 2013
From: bsmith at mcs.anl.gov (Barry Smith)
Date: Wed, 31 Jul 2013 11:34:31 -0500
Subject: [petsc-users] how to select values from certain positions of a vec from different process to process 0.
In-Reply-To: <5c9b1ad1.11c4b.14034cbe647.Coremail.ztdepyahoo@163.com>
References: <706b0a8.d79.14031f201a5.Coremail.ztdepyahoo@163.com> <81B5DEFB-C245-47AF-BF95-3E4CB1EEE74F@mcs.anl.gov> <5c9b1ad1.11c4b.14034cbe647.Coremail.ztdepyahoo@163.com>
Message-ID: <6077ED32-482B-4437-B878-B403C9C454DD@mcs.anl.gov>

On Jul 31, 2013, at 7:55 AM, 丁老师 wrote:

> Does the function VecScatterCreateToZero resolve this problem?

   What problem? I told you how to do it; there is no problem. VecScatterCreateToZero() moves ALL the values to process 0, which for large problems is not scalable. So if you only want some of the values, do what I suggested; otherwise use VecScatterCreateToZero().
   Barry

From bsmith at mcs.anl.gov  Wed Jul 31 13:05:43 2013
From: bsmith at mcs.anl.gov (Barry Smith)
Date: Wed, 31 Jul 2013 13:05:43 -0500
Subject: [petsc-users] Segmentation fault
In-Reply-To: <43ee585c.11ae0.14034c3d34b.Coremail.ztdepyahoo@163.com>
References: <43ee585c.11ae0.14034c3d34b.Coremail.ztdepyahoo@163.com>
Message-ID: <765D6B84-8429-4C98-96BE-F06C451AC499@mcs.anl.gov>

   http://www.mcs.anl.gov/petsc/documentation/faq.html#valgrind

On Jul 31, 2013, at 7:46 AM, 丁老师 wrote:

> I wrote an FVM code for the convection-diffusion problem on a 2D Cartesian grid.
> The code runs correctly with a grid of 720*720, and it uses only 26% of the system memory (8G) while running.
> But if I increase the grid to 800*800, it gives me the following error. Could you please give me some information on how to resolve it?

From bsmith at mcs.anl.gov  Wed Jul 31 16:38:33 2013
From: bsmith at mcs.anl.gov (Barry Smith)
Date: Wed, 31 Jul 2013 16:38:33 -0500
Subject: [petsc-users] PETSc problems with MVAPICH2
In-Reply-To: <871u6e67vt.fsf@mcs.anl.gov>
References: <201307311546.r6VFkqP6011998@msw2.awe.co.uk> <871u6e67vt.fsf@mcs.anl.gov>
Message-ID: <7A94A33D-AB2D-4C24-871D-59677C9FCC07@mcs.anl.gov>

   http://www.mcs.anl.gov/petsc/documentation/faq.html#valgrind

On Jul 31, 2013, at 10:56 AM, Jed Brown wrote:

> Can you get a stack trace? Does it work correctly with other MPI
> implementations? Does a basic PETSc example show the same problem?

From ztdepyahoo at 163.com  Wed Jul 31 17:41:41 2013
From: ztdepyahoo at 163.com (丁老师)
Date: Thu, 1 Aug 2013 06:41:41 +0800 (CST)
Subject: [petsc-users] Segmentation fault
In-Reply-To: <87iozq68s9.fsf@mcs.anl.gov>
References: <43ee585c.11ae0.14034c3d34b.Coremail.ztdepyahoo@163.com> <87iozq68s9.fsf@mcs.anl.gov>
Message-ID: <5fd2b791.f18.14036e4d6df.Coremail.ztdepyahoo@163.com>

I get the info from the system load monitor.

At 2013-07-31 23:36:38, "Jed Brown" wrote:
> How are you testing the total application memory usage?

From danyang.su at gmail.com  Wed Jul 31 18:35:36 2013
From: danyang.su at gmail.com (Danyang Su)
Date: Wed, 31 Jul 2013 16:35:36 -0700
Subject: [petsc-users] Is it possible to build PETSc project in Visual Studio?
Message-ID: <51F99F48.8060002@gmail.com>

Hi All,

I am currently developing code under Windows using Intel Visual Fortran + Visual Studio.
When I need to build the project, I switch to Cygwin and compile the project with a makefile, and that works fine.

Recently, I found an open-source project, pflotran, that also uses PETSc (https://bitbucket.org/pflotran/pflotran-dev/wiki/Home), and it is said the project can be built under Windows with Visual Studio (https://bitbucket.org/pflotran/pflotran-dev/wiki/Installation/Windows). I tried a simple example following the pflotran instructions to see if it works; unfortunately, it failed in linking, showing lots of unresolved external symbols.

The following directories have been included in the project property - General:
C:\cygwin\packages\petsc-3.3-p4;C:\cygwin\packages\petsc-3.3-p4\include;C:\cygwin\packages\petsc-3.3-p4\arch-mswin-c-debug\include;C:\Program Files\MPICH2\include

And the following directories have been included in the project property - Linker:
C:\cygwin\packages\petsc-3.4.2\arch-mswin-c-debug\lib;C:\Program Files\MPICH2\lib;
libpetsc.lib fmpich2.lib fmpich2g.lib mpi.lib

I just want to confirm whether a PETSc project can be built under Visual Studio, and if the answer is yes, how to configure the project.

Thanks and regards,

Danyang

-------------- next part --------------
A non-text attachment was scrubbed...
Name: Petsc-windows-test.zip
Type: application/x-zip-compressed
Size: 16392 bytes

From bsmith at mcs.anl.gov  Wed Jul 31 19:38:41 2013
From: bsmith at mcs.anl.gov (Barry Smith)
Date: Wed, 31 Jul 2013 19:38:41 -0500
Subject: [petsc-users] Is it possible to build PETSc project in Visual Studio?
In-Reply-To: <51F99F48.8060002@gmail.com>
References: <51F99F48.8060002@gmail.com>
Message-ID:

   I have forwarded this to the pflotran developer who uses Developer Studio; hopefully he will be able to answer your questions.

   Barry

On Jul 31, 2013, at 6:35 PM, Danyang Su wrote:

> I just want to confirm whether a PETSc project can be built under Visual Studio, and if the answer is yes, how to configure the project.

From ztdepyahoo at 163.com  Wed Jul 31 23:58:19 2013
From: ztdepyahoo at 163.com (丁老师)
Date: Thu, 1 Aug 2013 12:58:19 +0800 (CST)
Subject: [petsc-users] Use kspsolve repeatedly
Message-ID: <2fc48037.8f45.140383da796.Coremail.ztdepyahoo@163.com>

I need to use KSPSolve(A,b,x) repeatedly in my code, in the following style, but I have noticed from the system load monitor that the code allocates new memory at every step while it runs. I use the default settings for the PC. Could you please tell me how to resolve this problem?

for (int i=0;i
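For a loop of repeated KSPSolve() calls like the one begun above, the usual pattern is to create the KSP once, outside the loop, associate the operators once, and reuse it at every step, so no new objects are created per iteration. A minimal sketch under that assumption — UpdateRHS() is a hypothetical application routine, and KSPSetOperators() is shown with its petsc-3.4-era signature:

KSP            ksp;
PetscErrorCode ierr;
PetscInt       i;

ierr = KSPCreate(PETSC_COMM_WORLD, &ksp);CHKERRQ(ierr);
ierr = KSPSetOperators(ksp, A, A, SAME_NONZERO_PATTERN);CHKERRQ(ierr);
ierr = KSPSetFromOptions(ksp);CHKERRQ(ierr);
for (i = 0; i < nsteps; i++) {
  ierr = UpdateRHS(b, i);CHKERRQ(ierr);     /* hypothetical: refill b each step; A unchanged */
  ierr = KSPSolve(ksp, b, x);CHKERRQ(ierr); /* reuses the preconditioner built on the first solve */
}
ierr = KSPDestroy(&ksp);CHKERRQ(ierr);

Creating and destroying the KSP (or matrices and vectors) inside the loop instead is what typically makes the memory footprint grow from step to step.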