From zonexo at gmail.com Sun Apr 4 10:32:59 2010 From: zonexo at gmail.com (Wee-Beng TAY) Date: Sun, 04 Apr 2010 23:32:59 +0800 Subject: [petsc-users] Version of HYPRE installed in PETSc and installation in windows cygwin Message-ID: <4BB8B12B.6080805@gmail.com> Hi, May I know what is the version of HYPRE when I use the --download-hypre? I understand that the new version of HYPRE 2.6b can be installed under cygwin. Hence if I install PETSc with HYPRE, will it auto install HYPRE as well? -- Thank you very much and have a nice day! Yours sincerely, Wee-Beng Tay From knepley at gmail.com Sun Apr 4 10:34:41 2010 From: knepley at gmail.com (Matthew Knepley) Date: Sun, 4 Apr 2010 10:34:41 -0500 Subject: [petsc-users] Version of HYPRE installed in PETSc and installation in windows cygwin In-Reply-To: <4BB8B12B.6080805@gmail.com> References: <4BB8B12B.6080805@gmail.com> Message-ID: Yes, that is the version we use. Matt On Sun, Apr 4, 2010 at 10:32 AM, Wee-Beng TAY wrote: > Hi, > > May I know what is the version of HYPRE when I use the --download-hypre? > > I understand that the new version of HYPRE 2.6b can be installed under > cygwin. Hence if I install PETSc with HYPRE, will it auto install HYPRE as > well? > > -- > Thank you very much and have a nice day! > > Yours sincerely, > > Wee-Beng Tay > > -- What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. -- Norbert Wiener -------------- next part -------------- An HTML attachment was scrubbed... URL: From ivoroghair at gmail.com Tue Apr 6 17:52:18 2010 From: ivoroghair at gmail.com (Ivo Roghair) Date: Wed, 7 Apr 2010 00:52:18 +0200 Subject: [petsc-users] No speedup of tutorial in parallel Message-ID: Hi, I am trying to build PETSc into my CFD code. I just built the 3.1-p0 against a version of open-mpi 1.4.1. I thought src/ksp/ksp/ex/tut/ex2.c seemed like a comparable case (though much smaller and fewer matrix bands) of what I need. I tried running the default ex2.c example on a single-core and on my dualcore (laptop), without any changes. The code however runs much, much faster on a singlecore (mpirun -np 1 ./ex2) than on 2 cores. I also tried a loop over all pc_types and over all ksp_types (loop-in-loop), but the problem remains. Is this normal, am I doing something wrong or should I take a look at a different example? Thanks, Ivo From bsmith at mcs.anl.gov Tue Apr 6 18:14:32 2010 From: bsmith at mcs.anl.gov (Barry Smith) Date: Tue, 6 Apr 2010 18:14:32 -0500 Subject: [petsc-users] No speedup of tutorial in parallel In-Reply-To: References: Message-ID: <82E3D220-9BA0-4A14-A242-3347E7FAEB16@mcs.anl.gov> http://www.mcs.anl.gov/petsc/petsc-as/documentation/faq.html#computers Also run both cases with -log_summary and see what parts of the run are not speeding up. Barry On Apr 6, 2010, at 5:52 PM, Ivo Roghair wrote: > Hi, > > I am trying to build PETSc into my CFD code. I just built the 3.1-p0 > against a version of open-mpi 1.4.1. I thought > src/ksp/ksp/ex/tut/ex2.c seemed like a comparable case (though much > smaller and fewer matrix bands) of what I need. I tried running the > default ex2.c example on a single-core and on my dualcore (laptop), > without any changes. The code however runs much, much faster on a > singlecore (mpirun -np 1 ./ex2) than on 2 cores. I also tried a loop > over all pc_types and over all ksp_types (loop-in-loop), but the > problem remains. 
Is this normal, am I doing something wrong or should > I take a look at a different example? > > Thanks, > Ivo From gdiso at ustc.edu Tue Apr 6 19:04:08 2010 From: gdiso at ustc.edu (Gong Ding) Date: Wed, 7 Apr 2010 08:04:08 +0800 Subject: [petsc-users] PaStiX does not work Message-ID: Dear Petsc developer, I found a problem that PASTIX solver can not be loaded from PETSC, even for 3.1 version. When I use following code, the ksp_view told me that PC is ilu. And the snes solver can not convergence at all. ierr = KSPSetType (ksp, (char*) KSPPREONLY); assert(!ierr);//it seems PaStiX don't work with KSPPREONLY ierr = PCSetType (pc, (char*) PCLU); ierr = PCFactorSetMatSolverPackage (pc, MAT_SOLVER_PASTIX); assert(!ierr); Is it a bug or I made something wrong? Yours Gong Ding From bsmith at mcs.anl.gov Tue Apr 6 19:55:39 2010 From: bsmith at mcs.anl.gov (Barry Smith) Date: Tue, 6 Apr 2010 19:55:39 -0500 Subject: [petsc-users] PaStiX does not work In-Reply-To: References: Message-ID: <51C598FB-5BE3-4861-AD17-7996AA1DBE69@mcs.anl.gov> On Apr 6, 2010, at 7:04 PM, Gong Ding wrote: > Dear Petsc developer, > > I found a problem that PASTIX solver can not be loaded from PETSC, > even for 3.1 version. > > When I use following code, the ksp_view told me that PC is ilu. And > the snes solver can not convergence at all. > > ierr = KSPSetType (ksp, (char*) KSPPREONLY); assert(!ierr);//it > seems PaStiX don't work with KSPPREONLY > ierr = PCSetType (pc, (char*) PCLU); > ierr = PCFactorSetMatSolverPackage (pc, MAT_SOLVER_PASTIX); assert(! > ierr); > > Is it a bug or I made something wrong? You are doing something wrong but one cannot tell from just this code fragment. Send the entire code to petsc-maint at mcs.anl.gov and the options you used to run it and the output it produced. Barry > > Yours > Gong Ding From gdiso at ustc.edu Tue Apr 6 21:33:10 2010 From: gdiso at ustc.edu (Gong Ding) Date: Wed, 7 Apr 2010 10:33:10 +0800 Subject: [petsc-users] PaStiX does not work References: <51C598FB-5BE3-4861-AD17-7996AA1DBE69@mcs.anl.gov> Message-ID: <439BAEC166F24F5FAA3FF8746C047C73@cogendaeda> Dear Barry The mainly petsc related file is attached. The whole project is very large, it is a 3D/parallel code for semiconductor simulation. You can download the source code from http://www.cogenda.com/downloads/category/7-genius-open-source-edition.html It can be compiled with petsc 3.0. I am now adding 3.1 support. I had use MUMPS solver for quite a long time by the same code: PCFactorSetMatSolverPackage (pc, MAT_SOLVER_MUMPS); It works well for most of the time. However, it may crash (when other solver like KSP type works). I'd like to try PASTIX if it is faster and more stable. Yours Gong Ding > > On Apr 6, 2010, at 7:04 PM, Gong Ding wrote: > >> Dear Petsc developer, >> >> I found a problem that PASTIX solver can not be loaded from PETSC, >> even for 3.1 version. >> >> When I use following code, the ksp_view told me that PC is ilu. And >> the snes solver can not convergence at all. >> >> ierr = KSPSetType (ksp, (char*) KSPPREONLY); assert(!ierr);//it >> seems PaStiX don't work with KSPPREONLY >> ierr = PCSetType (pc, (char*) PCLU); >> ierr = PCFactorSetMatSolverPackage (pc, MAT_SOLVER_PASTIX); assert(! >> ierr); >> >> Is it a bug or I made something wrong? > > You are doing something wrong but one cannot tell from just this > code fragment. Send the entire code to petsc-maint at mcs.anl.gov and > the options you used to run it and the output it produced. 
> > Barry > >> >> Yours >> Gong Ding > -------------- next part -------------- An embedded and charset-unspecified text was scrubbed... Name: fvm_nonlinear_solver.cc URL: From bsmith at mcs.anl.gov Tue Apr 6 21:40:38 2010 From: bsmith at mcs.anl.gov (Barry Smith) Date: Tue, 6 Apr 2010 21:40:38 -0500 Subject: [petsc-users] PaStiX does not work In-Reply-To: <439BAEC166F24F5FAA3FF8746C047C73@cogendaeda> References: <51C598FB-5BE3-4861-AD17-7996AA1DBE69@mcs.anl.gov> <439BAEC166F24F5FAA3FF8746C047C73@cogendaeda> Message-ID: If you use MUMPS then you should be able to use PaStiX just by changing that one line. It should not suddenly say that it is using ILU when you have indicated LU. Are you sure that PETSc was installed to use PasTIX? PasTIX certainly will work with KSPREONLY (just like MUMPS). Perhaps you should run the entire application with valgrind (www.valgrind.org ) http://www.mcs.anl.gov/petsc/petsc-as/documentation/ faq.html#valgrind to see if there is some memory corruption that is causing the problem. So run with MUMPS, change that one line and run with PASTIX and send us the output (with like -ksp_view) for the two cases all to petsc-maint at mcs.anl.gov not petsc-users Barry On Apr 6, 2010, at 9:33 PM, Gong Ding wrote: > Dear Barry > The mainly petsc related file is attached. > The whole project is very large, it is a 3D/parallel code for > semiconductor simulation. > You can download the source code from http://www.cogenda.com/downloads/category/7-genius-open-source-edition.html > It can be compiled with petsc 3.0. I am now adding 3.1 support. > > I had use MUMPS solver for quite a long time by the same code: > PCFactorSetMatSolverPackage (pc, MAT_SOLVER_MUMPS); > It works well for most of the time. However, it may crash (when > other solver like KSP type works). > > I'd like to try PASTIX if it is faster and more stable. > > Yours > Gong Ding > > >> >> On Apr 6, 2010, at 7:04 PM, Gong Ding wrote: >> >>> Dear Petsc developer, >>> >>> I found a problem that PASTIX solver can not be loaded from PETSC, >>> even for 3.1 version. >>> >>> When I use following code, the ksp_view told me that PC is ilu. And >>> the snes solver can not convergence at all. >>> >>> ierr = KSPSetType (ksp, (char*) KSPPREONLY); assert(!ierr);//it >>> seems PaStiX don't work with KSPPREONLY >>> ierr = PCSetType (pc, (char*) PCLU); >>> ierr = PCFactorSetMatSolverPackage (pc, MAT_SOLVER_PASTIX); assert(! >>> ierr); >>> >>> Is it a bug or I made something wrong? >> >> You are doing something wrong but one cannot tell from just this >> code fragment. Send the entire code to petsc-maint at mcs.anl.gov and >> the options you used to run it and the output it produced. >> >> Barry >> >>> >>> Yours >>> Gong Ding > From sekikawa at msi.co.jp Wed Apr 7 03:08:03 2010 From: sekikawa at msi.co.jp (Takuya Sekikawa) Date: Wed, 07 Apr 2010 17:08:03 +0900 Subject: [petsc-users] configure failed on MPI and Intel Compiler environment Message-ID: <20100407165824.FDB5.SEKIKAWA@msi.co.jp> Dear Petsc developpers, I tried to configure PETSc (ver 3.0) on Linux (64bit), MPI-enabled, Intel Compiler (icc/icpc) environment, but it failed. 
./config/configure.py --with-mpi=1 --with-mpi-dir=/opt/home/sekikawa/bin-personal/mpich2 \ --with-x=0 --with-fc=0 --with-debugging=0 --with-blas-lapack-dir=${MKL_DIR} ================================================================================= Configuring PETSc to compile on your system ================================================================================= ================================================================================= WARNING! Compiling PETSc with no debugging, this should only be done for timing and production runs. All development should be done when configured using --with-debugging=1 ================================================================================= TESTING: checkLib from config.packages.BlasLapack(config/BuildSystem/config/packages/BlasL********************************************************************************* UNABLE to CONFIGURE with GIVEN OPTIONS (see configure.log for details): --------------------------------------------------------------------------------------- You set a value for --with-blas-lapack-dir=, but /opt/intel/mkl/10.0.010/lib/em64t cannot be used ********************************************************************************* MKL_DIR is set to intel mkl library. without MPI (--with-mpi=0), this configure succeeded with no problem. and MPICH2 is successfully compiled and running with Intel compiler. I would very much like for any suggestion. Thanks in advance. Takuya --------------------------------------------------------------- Takuya Sekikawa Mathematical Systems, Inc sekikawa at msi.co.jp --------------------------------------------------------------- From knepley at gmail.com Wed Apr 7 05:22:45 2010 From: knepley at gmail.com (Matthew Knepley) Date: Wed, 7 Apr 2010 12:22:45 +0200 Subject: [petsc-users] configure failed on MPI and Intel Compiler environment In-Reply-To: <20100407165824.FDB5.SEKIKAWA@msi.co.jp> References: <20100407165824.FDB5.SEKIKAWA@msi.co.jp> Message-ID: You must send configure.log to petsc-maint at mcs.anl.gov in order for us to help you. Matt On Wed, Apr 7, 2010 at 10:08 AM, Takuya Sekikawa wrote: > Dear Petsc developpers, > > I tried to configure PETSc (ver 3.0) > on Linux (64bit), MPI-enabled, Intel Compiler (icc/icpc) environment, > but it failed. > > ./config/configure.py --with-mpi=1 > --with-mpi-dir=/opt/home/sekikawa/bin-personal/mpich2 \ > --with-x=0 --with-fc=0 --with-debugging=0 --with-blas-lapack-dir=${MKL_DIR} > > > ================================================================================= > Configuring PETSc to compile on your system > > ================================================================================= > > ================================================================================= > WARNING! Compiling PETSc with no debugging, this should > only be done for timing and production runs. 
All > development should > be done when configured using --with-debugging=1 > > ================================================================================= > TESTING: checkLib from > config.packages.BlasLapack(config/BuildSystem/config/packages/BlasL********************************************************************************* > UNABLE to CONFIGURE with GIVEN OPTIONS (see configure.log for > details): > > --------------------------------------------------------------------------------------- > You set a value for --with-blas-lapack-dir=, but > /opt/intel/mkl/10.0.010/lib/em64t cannot be used > > ********************************************************************************* > > MKL_DIR is set to intel mkl library. > without MPI (--with-mpi=0), this configure succeeded with no problem. > and MPICH2 is successfully compiled and running with Intel compiler. > > I would very much like for any suggestion. > Thanks in advance. > > Takuya > --------------------------------------------------------------- > Takuya Sekikawa > Mathematical Systems, Inc > sekikawa at msi.co.jp > --------------------------------------------------------------- > > > -- What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. -- Norbert Wiener -------------- next part -------------- An HTML attachment was scrubbed... URL: From u.tabak at tudelft.nl Wed Apr 7 05:47:32 2010 From: u.tabak at tudelft.nl (Umut Tabak) Date: Wed, 07 Apr 2010 12:47:32 +0200 Subject: [petsc-users] sqrt for a PetscScalar Message-ID: <4BBC62C4.5040401@tudelft.nl> Dear all, I am getting a compilation error from the sqrt(cmath header) function with PETSc 3.1, I configured PETSc with complex scalar type so I am suspecting that this is the source of the problem. Is there a simple workaround for this? And Is there an sqrt function for a 'PetscScalar', I could not find that in the documentation? PetscScalar xTMx; ... VecScale(x, 1/sqrt(xTMx)); /home/utabak/thesis/C++/c++Projects/workDirectory/trunk/src/evpSolver1.cc:502: error: no match for ?operator/? in ?1 / std::sqrt [with _Tp = double](((const std::complex&)((const std::complex*)(& xTMx))))? /usr/include/c++/4.3/bits/stl_pair.h: In constructor ?std::pair<_T1, _T2>::pair(const std::pair<_U1, _U2>&) [with _U1 = std::complex, _U2 = int, _T1 = const double, _T2 = int]?: Best regards, Umut From knepley at gmail.com Wed Apr 7 05:54:31 2010 From: knepley at gmail.com (Matthew Knepley) Date: Wed, 7 Apr 2010 12:54:31 +0200 Subject: [petsc-users] sqrt for a PetscScalar In-Reply-To: <4BBC62C4.5040401@tudelft.nl> References: <4BBC62C4.5040401@tudelft.nl> Message-ID: On Wed, Apr 7, 2010 at 12:47 PM, Umut Tabak wrote: > Dear all, > > I am getting a compilation error from the sqrt(cmath header) function with > PETSc 3.1, I configured PETSc with complex scalar type so I am suspecting > that this is the source of the problem. > > Is there a simple workaround for this? And Is there an sqrt function for a > 'PetscScalar', I could not find that in the documentation? > > PetscScalar xTMx; > My guess is that you want PetscReal xTMx; Matt > ... > VecScale(x, 1/sqrt(xTMx)); > > /home/utabak/thesis/C++/c++Projects/workDirectory/trunk/src/evpSolver1.cc:502: > error: no match for ?operator/? in ?1 / std::sqrt [with _Tp = > double](((const std::complex&)((const std::complex*)(& > xTMx))))? 
> /usr/include/c++/4.3/bits/stl_pair.h: In constructor ?std::pair<_T1, > _T2>::pair(const std::pair<_U1, _U2>&) [with _U1 = std::complex, _U2 > = int, _T1 = const double, _T2 = int]?: > > > Best regards, > Umut > -- What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. -- Norbert Wiener -------------- next part -------------- An HTML attachment was scrubbed... URL: From u.tabak at tudelft.nl Wed Apr 7 06:03:14 2010 From: u.tabak at tudelft.nl (Umut Tabak) Date: Wed, 07 Apr 2010 13:03:14 +0200 Subject: [petsc-users] sqrt for a PetscScalar In-Reply-To: References: <4BBC62C4.5040401@tudelft.nl> Message-ID: <4BBC6672.1090204@tudelft.nl> Matthew Knepley wrote: > > > My guess is that you want > > PetscReal xTMx; > > Dear Matthew, Thanks for the quick reply, Not sure, here is the code for the related part, I used a VecTDot before that so I need PetscScalar I guess. PetscErrorCode ierr; Vec x, Bx; int rSz, cSz; PetscScalar xTMx; ierr = MatGetSize(modalMat.getMatrix(), &rSz, &cSz); CHKERRQ(ierr); // create the vectors to be used ierr = VecCreate(MPI_COMM_SELF, &x); CHKERRQ(ierr); ierr = VecCreate(MPI_COMM_SELF, &Bx); CHKERRQ(ierr); ierr = VecSetSizes(x, rSz, PETSC_DECIDE); CHKERRQ(ierr); ierr = VecSetSizes(Bx, rSz, PETSC_DECIDE); CHKERRQ(ierr); VecSetFromOptions(x); VecSetFromOptions(Bx); // for(int k=0; k References: <4BBC62C4.5040401@tudelft.nl> <4BBC6672.1090204@tudelft.nl> Message-ID: Yes, but do you understand what you mean by 1/sqrt(c) where c is complex? If so, cast 1 to complex. Matt On Wed, Apr 7, 2010 at 1:03 PM, Umut Tabak wrote: > Matthew Knepley wrote: > >> >> >> My guess is that you want >> >> PetscReal xTMx; >> >> >> > Dear Matthew, > > Thanks for the quick reply, Not sure, here is the code for the related > part, I used a VecTDot before that so I need PetscScalar I guess. > > PetscErrorCode ierr; > Vec x, Bx; > int rSz, cSz; > PetscScalar xTMx; > ierr = MatGetSize(modalMat.getMatrix(), &rSz, &cSz); CHKERRQ(ierr); > // create the vectors to be used > ierr = VecCreate(MPI_COMM_SELF, &x); CHKERRQ(ierr); > ierr = VecCreate(MPI_COMM_SELF, &Bx); CHKERRQ(ierr); > ierr = VecSetSizes(x, rSz, PETSC_DECIDE); CHKERRQ(ierr); > ierr = VecSetSizes(Bx, rSz, PETSC_DECIDE); CHKERRQ(ierr); > VecSetFromOptions(x); > VecSetFromOptions(Bx); > // > for(int k=0; k { > // retrive the mode vector > MatGetColumnVector(modalMat.getMatrix(), x, k); > MatMult(B, x, Bx); > VecTDot(Bx, x, &xTMx); > > VecScale(x, 1/sqrt(xTMx)); > > Best regards, > Umut > -- What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. -- Norbert Wiener -------------- next part -------------- An HTML attachment was scrubbed... URL: From gdiso at ustc.edu Wed Apr 7 08:06:23 2010 From: gdiso at ustc.edu (Gong Ding) Date: Wed, 7 Apr 2010 21:06:23 +0800 Subject: [petsc-users] PaStiX does not work References: <51C598FB-5BE3-4861-AD17-7996AA1DBE69@mcs.anl.gov><439BAEC166F24F5FAA3FF8746C047C73@cogendaeda> Message-ID: Dear Barry, Very sorry. It is my mistake. I set PC to ILU again in another function when PasTIX is used (but MUMPS is right). Now it works, thanks. Yours Gong Ding > > If you use MUMPS then you should be able to use PaStiX just by > changing that one line. It should not suddenly say that it is using > ILU when you have indicated LU. > Are you sure that PETSc was installed to use PasTIX? 
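(For reference, a minimal sketch of the setup under discussion; it assumes PETSc was actually configured with PaStiX support, e.g. via --download-pastix, and only the package-selection call differs from the MUMPS version:

    PetscErrorCode ierr;
    KSP            ksp;   /* e.g. obtained from SNESGetKSP() */
    PC             pc;
    ierr = KSPSetType(ksp, KSPPREONLY);CHKERRQ(ierr);
    ierr = KSPGetPC(ksp, &pc);CHKERRQ(ierr);
    ierr = PCSetType(pc, PCLU);CHKERRQ(ierr);
    /* the one line that changes between MUMPS and PaStiX */
    ierr = PCFactorSetMatSolverPackage(pc, MAT_SOLVER_PASTIX);CHKERRQ(ierr);

Whether the factorization package actually took effect then shows up in the -ksp_view output.)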
PasTIX certainly > will work with KSPREONLY (just like MUMPS). > > Perhaps you should run the entire application with valgrind (www.valgrind.org > ) http://www.mcs.anl.gov/petsc/petsc-as/documentation/ > faq.html#valgrind to see if there is some memory corruption that is > causing the problem. > > So run with MUMPS, change that one line and run with PASTIX and > send us the output (with like -ksp_view) for the two cases all to petsc-maint at mcs.anl.gov > not petsc-users > > > Barry > > On Apr 6, 2010, at 9:33 PM, Gong Ding wrote: > >> Dear Barry >> The mainly petsc related file is attached. >> The whole project is very large, it is a 3D/parallel code for >> semiconductor simulation. >> You can download the source code from http://www.cogenda.com/downloads/category/7-genius-open-source-edition.html >> It can be compiled with petsc 3.0. I am now adding 3.1 support. >> >> I had use MUMPS solver for quite a long time by the same code: >> PCFactorSetMatSolverPackage (pc, MAT_SOLVER_MUMPS); >> It works well for most of the time. However, it may crash (when >> other solver like KSP type works). >> >> I'd like to try PASTIX if it is faster and more stable. >> >> Yours >> Gong Ding >> >> >>> >>> On Apr 6, 2010, at 7:04 PM, Gong Ding wrote: >>> >>>> Dear Petsc developer, >>>> >>>> I found a problem that PASTIX solver can not be loaded from PETSC, >>>> even for 3.1 version. >>>> >>>> When I use following code, the ksp_view told me that PC is ilu. And >>>> the snes solver can not convergence at all. >>>> >>>> ierr = KSPSetType (ksp, (char*) KSPPREONLY); assert(!ierr);//it >>>> seems PaStiX don't work with KSPPREONLY >>>> ierr = PCSetType (pc, (char*) PCLU); >>>> ierr = PCFactorSetMatSolverPackage (pc, MAT_SOLVER_PASTIX); assert(! >>>> ierr); >>>> >>>> Is it a bug or I made something wrong? >>> >>> You are doing something wrong but one cannot tell from just this >>> code fragment. Send the entire code to petsc-maint at mcs.anl.gov and >>> the options you used to run it and the output it produced. >>> >>> Barry >>> >>>> >>>> Yours >>>> Gong Ding >> > From bsmith at mcs.anl.gov Wed Apr 7 11:28:52 2010 From: bsmith at mcs.anl.gov (Barry Smith) Date: Wed, 7 Apr 2010 11:28:52 -0500 Subject: [petsc-users] sqrt for a PetscScalar In-Reply-To: <4BBC62C4.5040401@tudelft.nl> References: <4BBC62C4.5040401@tudelft.nl> Message-ID: <6230C9A6-FED9-46D9-A8FE-0B3113A60757@mcs.anl.gov> > no match for ?operator/? in ?1 / std::sqrt [with ^^^^ It is complaining about the operator / not the sqrt. Try 1.0 / sqrt(xTMx) instead. Sometimes the complex class is real braindead about casting from ints. Barry On Apr 7, 2010, at 5:47 AM, Umut Tabak wrote: > Dear all, > > I am getting a compilation error from the sqrt(cmath header) > function with PETSc 3.1, I configured PETSc with complex scalar type > so I am suspecting that this is the source of the problem. > > Is there a simple workaround for this? And Is there an sqrt function > for a 'PetscScalar', I could not find that in the documentation? > > PetscScalar xTMx; > ... > VecScale(x, 1/sqrt(xTMx)); > > /home/utabak/thesis/C++/c++Projects/workDirectory/trunk/src/ > evpSolver1.cc:502: error: no match for ?operator/? in ?1 / std::sqrt > [with _Tp = double](((const std::complex&)((const > std::complex*)(& xTMx))))? 
> /usr/include/c++/4.3/bits/stl_pair.h: In constructor ?std::pair<_T1, > _T2>::pair(const std::pair<_U1, _U2>&) [with _U1 = > std::complex, _U2 = int, _T1 = const double, _T2 = int]?: > > > Best regards, > Umut From av.nova at gmail.com Wed Apr 7 17:57:43 2010 From: av.nova at gmail.com (NovA) Date: Thu, 8 Apr 2010 02:57:43 +0400 Subject: [petsc-users] configure failed on MPI and Intel Compiler environment In-Reply-To: <20100407165824.FDB5.SEKIKAWA@msi.co.jp> References: <20100407165824.FDB5.SEKIKAWA@msi.co.jp> Message-ID: Hi! I'm not a PETSc developer but have some thoughts about the issue... I've got similar errors trying to build with MKL-10 on Windows. Intel MKL v10+ has very different libraries layout as against previous versions (see users manual). It seems that PETSc configure script can't deal with this new layout yet. So you need to specify exact blas-lapack library file names manually via "--with-blas-lapack-lib" option. The value --with-blas-lapack-lib=["${MKL_DIR}/libmkl_intel_lp64.a", "${MKL_DIR}/libmkl_sequential.a", "${MKL_DIR}/libmkl_core.a"] worked for me. Regards, Andrey 2010/4/7 Takuya Sekikawa : > Dear Petsc developpers, > > I tried to configure PETSc (ver 3.0) > on Linux (64bit), MPI-enabled, Intel Compiler (icc/icpc) environment, > but it failed. > > ./config/configure.py --with-mpi=1 --with-mpi-dir=/opt/home/sekikawa/bin-personal/mpich2 \ > --with-x=0 --with-fc=0 --with-debugging=0 --with-blas-lapack-dir=${MKL_DIR} > > ================================================================================= > ? ? ? ? ? ? Configuring PETSc to compile on your system > ================================================================================= > ================================================================================= > ? ? ? ? ? ? ? ?WARNING! Compiling PETSc with no debugging, this should > ? ? ? ? ? ? ? ? ? ? ?only be done for timing and production runs. All development should > ? ? ? ? ? ? ? ? ? ? ?be done when configured using --with-debugging=1 > ================================================================================= > TESTING: checkLib from config.packages.BlasLapack(config/BuildSystem/config/packages/BlasL********************************************************************************* > ? ? ? ? UNABLE to CONFIGURE with GIVEN OPTIONS ? ?(see configure.log for details): > --------------------------------------------------------------------------------------- > You set a value for --with-blas-lapack-dir=, but /opt/intel/mkl/10.0.010/lib/em64t cannot be used > ********************************************************************************* > > MKL_DIR is set to intel mkl library. > without MPI (--with-mpi=0), this configure succeeded with no problem. > and MPICH2 is successfully compiled and running with Intel compiler. > > I would very much like for any suggestion. > Thanks in advance. > > Takuya > --------------------------------------------------------------- > ? Takuya Sekikawa > ? ? ? ? Mathematical Systems, Inc > ? ? ? ? ? ? ? ? ? ?sekikawa at msi.co.jp > --------------------------------------------------------------- > > > From u.tabak at tudelft.nl Wed Apr 7 17:58:32 2010 From: u.tabak at tudelft.nl (Umut Tabak) Date: Thu, 08 Apr 2010 00:58:32 +0200 Subject: [petsc-users] minres: PETSc&MATLAB Message-ID: <4BBD0E18.1090705@tudelft.nl> Dear all, I have tried on some ill-conditioned(indefinite actually) matrices some iterative solutions first with minres of Matlab. 
Matlab converged in something like 30 iterations to some reasonable tolerances, say 1e-5, without any preconditioner. However, the same matrices did not converge with the minres solver of PETSc. I wondered what could be the differences in implementation? I would expect PETSc to perform better, no supporting arguments for this expectation though. I read the matrices in binary format, and use on the command line(with some modifications on the ex1.c code) ./ex1 -ksp_type minres -pc_type none Best regards, Umut From balay at mcs.anl.gov Wed Apr 7 18:11:49 2010 From: balay at mcs.anl.gov (Satish Balay) Date: Wed, 7 Apr 2010 18:11:49 -0500 (CDT) Subject: [petsc-users] configure failed on MPI and Intel Compiler environment In-Reply-To: References: <20100407165824.FDB5.SEKIKAWA@msi.co.jp> Message-ID: petsc-3.0 [with latest patches] configure should be able to handle MLK-10. Don't remember MLK-11. > > without MPI (--with-mpi=0), this configure succeeded with no problem. Claims MLK-10 was detected properly without mpi. This is weird - but again configure.log for both builds will clarify the issue. Satish On Thu, 8 Apr 2010, NovA wrote: > Hi! > > I'm not a PETSc developer but have some thoughts about the issue... > > I've got similar errors trying to build with MKL-10 on Windows. Intel > MKL v10+ has very different libraries layout as against previous > versions (see users manual). It seems that PETSc configure script > can't deal with this new layout yet. So you need to specify exact > blas-lapack library file names manually via "--with-blas-lapack-lib" > option. The value > --with-blas-lapack-lib=["${MKL_DIR}/libmkl_intel_lp64.a", > "${MKL_DIR}/libmkl_sequential.a", "${MKL_DIR}/libmkl_core.a"] worked > for me. > > Regards, > Andrey > > > 2010/4/7 Takuya Sekikawa : > > Dear Petsc developpers, > > > > I tried to configure PETSc (ver 3.0) > > on Linux (64bit), MPI-enabled, Intel Compiler (icc/icpc) environment, > > but it failed. > > > > ./config/configure.py --with-mpi=1 --with-mpi-dir=/opt/home/sekikawa/bin-personal/mpich2 \ > > --with-x=0 --with-fc=0 --with-debugging=0 --with-blas-lapack-dir=${MKL_DIR} > > > > ================================================================================= > > ? ? ? ? ? ? Configuring PETSc to compile on your system > > ================================================================================= > > ================================================================================= > > ? ? ? ? ? ? ? ?WARNING! Compiling PETSc with no debugging, this should > > ? ? ? ? ? ? ? ? ? ? ?only be done for timing and production runs. All development should > > ? ? ? ? ? ? ? ? ? ? ?be done when configured using --with-debugging=1 > > ================================================================================= > > TESTING: checkLib from config.packages.BlasLapack(config/BuildSystem/config/packages/BlasL********************************************************************************* > > ? ? ? ? UNABLE to CONFIGURE with GIVEN OPTIONS ? ?(see configure.log for details): > > --------------------------------------------------------------------------------------- > > You set a value for --with-blas-lapack-dir=, but /opt/intel/mkl/10.0.010/lib/em64t cannot be used > > ********************************************************************************* > > > > MKL_DIR is set to intel mkl library. > > without MPI (--with-mpi=0), this configure succeeded with no problem. > > and MPICH2 is successfully compiled and running with Intel compiler. 
> > > > I would very much like for any suggestion. > > Thanks in advance. > > > > Takuya > > --------------------------------------------------------------- > > ? Takuya Sekikawa > > ? ? ? ? Mathematical Systems, Inc > > ? ? ? ? ? ? ? ? ? ?sekikawa at msi.co.jp > > --------------------------------------------------------------- > > > > > > > From bsmith at mcs.anl.gov Wed Apr 7 18:49:02 2010 From: bsmith at mcs.anl.gov (Barry Smith) Date: Wed, 7 Apr 2010 18:49:02 -0500 Subject: [petsc-users] minres: PETSc&MATLAB In-Reply-To: <4BBD0E18.1090705@tudelft.nl> References: <4BBD0E18.1090705@tudelft.nl> Message-ID: <3663AE1A-2AB2-4BFF-8064-5E7AD2748507@mcs.anl.gov> Umut, Send one of the matrices with this behavior to petsc-maint at mcs.anl.gov and we'll see what is going on. Also in that email tell use exactly how you run it in Matlab. Barry On Apr 7, 2010, at 5:58 PM, Umut Tabak wrote: > Dear all, > > I have tried on some ill-conditioned(indefinite actually) matrices > some iterative solutions first with minres of Matlab. Matlab > converged in something like 30 iterations to some reasonable > tolerances, say 1e-5, without any preconditioner. However, the same > matrices did not converge with the minres solver of PETSc. I > wondered what could be the differences in implementation? I would > expect PETSc to perform better, no supporting arguments for this > expectation though. > > I read the matrices in binary format, and use on the command > line(with some modifications on the ex1.c code) > > ./ex1 -ksp_type minres -pc_type none > > Best regards, > > Umut From sekikawa at msi.co.jp Wed Apr 7 19:19:29 2010 From: sekikawa at msi.co.jp (Takuya Sekikawa) Date: Thu, 08 Apr 2010 09:19:29 +0900 Subject: [petsc-users] configure failed on MPI and Intel Compiler environment In-Reply-To: References: Message-ID: <20100408090840.1A6D.SEKIKAWA@msi.co.jp> Thanks for everyone. I sent configure.log to petsc-maint at mcs.anl.gov. It is best if I can use MKL on MPI-enabled environment, but for now, I don't need lapack (because I solve my problem with KRYLOV-SCHUR). so it is enough if I can build petsc without lapack. (of course if KRYLOV-SCHUR method is independent with lapack) Takuya. On Wed, 7 Apr 2010 18:11:49 -0500 (CDT) Satish Balay wrote: > petsc-3.0 [with latest patches] configure should be able to handle > MLK-10. Don't remember MLK-11. > > > > without MPI (--with-mpi=0), this configure succeeded with no problem. > > Claims MLK-10 was detected properly without mpi. This is weird - but > again configure.log for both builds will clarify the issue. > > > Satish > > On Thu, 8 Apr 2010, NovA wrote: > > > Hi! > > > > I'm not a PETSc developer but have some thoughts about the issue... > > > > I've got similar errors trying to build with MKL-10 on Windows. Intel > > MKL v10+ has very different libraries layout as against previous > > versions (see users manual). It seems that PETSc configure script > > can't deal with this new layout yet. So you need to specify exact > > blas-lapack library file names manually via "--with-blas-lapack-lib" > > option. The value > > --with-blas-lapack-lib=["${MKL_DIR}/libmkl_intel_lp64.a", > > "${MKL_DIR}/libmkl_sequential.a", "${MKL_DIR}/libmkl_core.a"] worked > > for me. > > > > Regards, > > Andrey > > > > > > 2010/4/7 Takuya Sekikawa : > > > Dear Petsc developpers, > > > > > > I tried to configure PETSc (ver 3.0) > > > on Linux (64bit), MPI-enabled, Intel Compiler (icc/icpc) environment, > > > but it failed. 
> > > > > > ./config/configure.py --with-mpi=1 --with-mpi-dir=/opt/home/sekikawa/bin-personal/mpich2 \ > > > --with-x=0 --with-fc=0 --with-debugging=0 --with-blas-lapack-dir=${MKL_DIR} > > > > > > ================================================================================= > > > ?  ?  ?  ?  ?  ?  Configuring PETSc to compile on your system > > > ================================================================================= > > > ================================================================================= > > > ?  ?  ?  ?  ?  ?  ?  ? WARNING! Compiling PETSc with no debugging, this should > > > ?  ?  ?  ?  ?  ?  ?  ?  ?  ?  ? only be done for timing and production runs. All development should > > > ?  ?  ?  ?  ?  ?  ?  ?  ?  ?  ? be done when configured using --with-debugging=1 > > > ================================================================================= > > > TESTING: checkLib from config.packages.BlasLapack(config/BuildSystem/config/packages/BlasL********************************************************************************* > > > ?  ?  ?  ?  UNABLE to CONFIGURE with GIVEN OPTIONS ?  ? (see configure.log for details): > > > --------------------------------------------------------------------------------------- > > > You set a value for --with-blas-lapack-dir=, but /opt/intel/mkl/10.0.010/lib/em64t cannot be used > > > ********************************************************************************* > > > > > > MKL_DIR is set to intel mkl library. > > > without MPI (--with-mpi=0), this configure succeeded with no problem. > > > and MPICH2 is successfully compiled and running with Intel compiler. > > > > > > I would very much like for any suggestion. > > > Thanks in advance. > > > > > > Takuya > > > --------------------------------------------------------------- > > > ?  Takuya Sekikawa > > > ?  ?  ?  ?  Mathematical Systems, Inc > > > ?  ?  ?  ?  ?  ?  ?  ?  ?  ? sekikawa at msi.co.jp > > > --------------------------------------------------------------- > > > > > > > > > > > --------------------------------------------------------------- ? Takuya Sekikawa ??? Mathematical Systems, Inc ? sekikawa at msi.co.jp --------------------------------------------------------------- From torres.pedrozpk at gmail.com Sat Apr 10 21:20:07 2010 From: torres.pedrozpk at gmail.com (Pedro Torres) Date: Sat, 10 Apr 2010 23:20:07 -0300 Subject: [petsc-users] Load a parallel matrix with user-partitioned rows with MatLoad() Message-ID: Hello, I want to load (in parallel)a matrix from a binary file saved with MatView, but when I create the matrix I don't leave to PETSC 'decide' the number of rows on each process. So, how can I recovery the same distributions when I load the matrix from a file. Is it MatLoad() helps in this case?. What can I do?. I really appreciate any suggestions. Thanks a lot! -- Pedro Torres GESAR/UERJ Rua Fonseca Teles 121, S?o Crist?v?o Rio de Janeiro - Brasil -------------- next part -------------- An HTML attachment was scrubbed... URL: From bsmith at mcs.anl.gov Sun Apr 11 08:31:13 2010 From: bsmith at mcs.anl.gov (Barry Smith) Date: Sun, 11 Apr 2010 08:31:13 -0500 Subject: [petsc-users] Load a parallel matrix with user-partitioned rows with MatLoad() In-Reply-To: References: Message-ID: <42DF1C51-CDC7-41D2-9465-E53993A7B768@mcs.anl.gov> We plan a rework of the loading to allow this. Currently you need to edit src/mat/impls/aij/mpi/mpiaij.c MatLoad_MPIAIJ() and change the code for force the layout you want. 
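(As a rough sketch of the kind of edit meant here -- the variable names are invented for illustration and will not match mpiaij.c line for line -- the loader normally derives each process's local row count from an even split of the global row count, e.g.

    m = M/size + ((M % size) > rank);   /* default even split of M rows over 'size' processes */

and forcing your own layout amounts to replacing that with the number of rows this process is supposed to own, e.g.

    m = my_local_rows;   /* hypothetical value supplied by the application */

before the local pieces are preallocated and filled.)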
Barry On Apr 10, 2010, at 9:20 PM, Pedro Torres wrote: > Hello, > > I want to load (in parallel)a matrix from a binary file saved with > MatView, but when I create the matrix I don't leave to PETSC > 'decide' the number of rows on each process. So, how can I recovery > the same distributions when I load the matrix from a file. Is it > MatLoad() helps in this case?. What can I do?. I really appreciate > any suggestions. > > Thanks a lot! > -- > Pedro Torres > GESAR/UERJ > Rua Fonseca Teles 121, S?o Crist?v?o > Rio de Janeiro - Brasil From torres.pedrozpk at gmail.com Mon Apr 12 13:07:59 2010 From: torres.pedrozpk at gmail.com (Pedro Torres) Date: Mon, 12 Apr 2010 15:07:59 -0300 Subject: [petsc-users] Load a parallel matrix with user-partitioned rows with MatLoad() In-Reply-To: <42DF1C51-CDC7-41D2-9465-E53993A7B768@mcs.anl.gov> References: <42DF1C51-CDC7-41D2-9465-E53993A7B768@mcs.anl.gov> Message-ID: Thanks, I will try it. Best Regards Pedro 2010/4/11 Barry Smith > > We plan a rework of the loading to allow this. Currently you need to edit > src/mat/impls/aij/mpi/mpiaij.c MatLoad_MPIAIJ() and change the code for > force the layout you want. > > Barry > > > On Apr 10, 2010, at 9:20 PM, Pedro Torres wrote: > > Hello, >> >> I want to load (in parallel)a matrix from a binary file saved with >> MatView, but when I create the matrix I don't leave to PETSC 'decide' the >> number of rows on each process. So, how can I recovery the same >> distributions when I load the matrix from a file. Is it MatLoad() helps in >> this case?. What can I do?. I really appreciate any suggestions. >> >> Thanks a lot! >> -- >> Pedro Torres >> GESAR/UERJ >> Rua Fonseca Teles 121, S?o Crist?v?o >> Rio de Janeiro - Brasil >> > > -- Pedro Torres GESAR/UERJ Rua Fonseca Teles 121, S?o Crist?v?o Rio de Janeiro - Brasil -------------- next part -------------- An HTML attachment was scrubbed... URL: From tribur at vision.ee.ethz.ch Tue Apr 13 08:49:38 2010 From: tribur at vision.ee.ethz.ch (tribur at vision.ee.ethz.ch) Date: Tue, 13 Apr 2010 15:49:38 +0200 Subject: [petsc-users] ML and -pc_factor_shift_nonzero Message-ID: <20100413154938.20427cr0dgbluxki@email.ee.ethz.ch> Hi, using ML I got the error "[0]PETSC ERROR: Detected zero pivot in LU factorization" As recommended at http://www.mcs.anl.gov/petsc/petsc-as/documentation/troubleshooting.html, I tried -pc_factor_shift_nonzero but it doesn't have the desired effect using ML. How do I have to formulate the command line option? What does -[level]_pc_factor_shift_nonzero mean? What other parallel preconditioner could I try besides Hypre/Boomeramg or ML? Thanks in advance for your precious help, Kathrin From knepley at gmail.com Tue Apr 13 08:51:55 2010 From: knepley at gmail.com (Matthew Knepley) Date: Tue, 13 Apr 2010 14:51:55 +0100 Subject: [petsc-users] ML and -pc_factor_shift_nonzero In-Reply-To: <20100413154938.20427cr0dgbluxki@email.ee.ethz.ch> References: <20100413154938.20427cr0dgbluxki@email.ee.ethz.ch> Message-ID: On Tue, Apr 13, 2010 at 2:49 PM, wrote: > Hi, > > using ML I got the error > > "[0]PETSC ERROR: Detected zero pivot in LU factorization" > > As recommended at > http://www.mcs.anl.gov/petsc/petsc-as/documentation/troubleshooting.html, > I tried -pc_factor_shift_nonzero but it doesn't have the desired effect > using ML. > > How do I have to formulate the command line option? What does > -[level]_pc_factor_shift_nonzero mean? What other parallel preconditioner > could I try besides Hypre/Boomeramg or ML? > This means the MG level, like 2. 
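(Spelled out, that means options such as -mg_coarse_pc_factor_shift_nonzero for the coarse-grid factorization or -mg_levels_2_pc_factor_shift_nonzero for the smoother on level 2; these particular prefixes follow the usual PCMG option naming and are given only as an illustration.)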
You can see all available options using -help. Matt > Thanks in advance for your precious help, > > Kathrin > > > -- What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. -- Norbert Wiener -------------- next part -------------- An HTML attachment was scrubbed... URL: From bsmith at mcs.anl.gov Tue Apr 13 10:08:13 2010 From: bsmith at mcs.anl.gov (Barry Smith) Date: Tue, 13 Apr 2010 10:08:13 -0500 Subject: [petsc-users] ML and -pc_factor_shift_nonzero In-Reply-To: References: <20100413154938.20427cr0dgbluxki@email.ee.ethz.ch> Message-ID: <2557501C-6F7E-4F16-B9B1-218EFE7F7A1E@mcs.anl.gov> -mg_coarse_pc_factor_shift_nonzero since it is the coarse level of the multigrid that is producing the zero pivot. Barry On Apr 13, 2010, at 8:51 AM, Matthew Knepley wrote: > On Tue, Apr 13, 2010 at 2:49 PM, wrote: > Hi, > > using ML I got the error > > "[0]PETSC ERROR: Detected zero pivot in LU factorization" > > As recommended at http://www.mcs.anl.gov/petsc/petsc-as/documentation/troubleshooting.html > , I tried -pc_factor_shift_nonzero but it doesn't have the desired > effect using ML. > > How do I have to formulate the command line option? What does - > [level]_pc_factor_shift_nonzero mean? What other parallel > preconditioner could I try besides Hypre/Boomeramg or ML? > > This means the MG level, like 2. You can see all available options > using -help. > > Matt > > Thanks in advance for your precious help, > > Kathrin > > > > > > -- > What most experimenters take for granted before they begin their > experiments is infinitely more interesting than any results to which > their experiments lead. > -- Norbert Wiener -------------- next part -------------- An HTML attachment was scrubbed... URL: From balay at mcs.anl.gov Tue Apr 13 15:53:02 2010 From: balay at mcs.anl.gov (Satish Balay) Date: Tue, 13 Apr 2010 15:53:02 -0500 (CDT) Subject: [petsc-users] Improve of win32fe In-Reply-To: <16707556C1DA42C9A9C4EBD701838841@cogendaeda> References: <16707556C1DA42C9A9C4EBD701838841@cogendaeda> Message-ID: We have now added a copyright notice in the source tree. Its similar to petsc license - and is cygwin.dll compatible [but not GPL] http://petsc.cs.iit.edu/petsc/win32fe/raw-file/71989470a6d3/readme.html Satish On Sat, 27 Mar 2010, Gong Ding wrote: > Dear Petsc developers > For long time I am using win32fe as an excellent tool to port > my code to windows. However, I found it slows down the compiling speed too much. > > Yesterday I investigate the code and find the my_cygwin_conv_to_full_win32_path > is the bottleneck. For each unix path to windows path convertion, a cygwin command > 'cygpath -aw PATH' should be executed. My code has many include path and source file, as a result, > compiling each cc file involves ~20 calls of cygpath. > And convert path of .o file when linking takse more than 30s. > > I'd like to replace cygpath execution with cygwin function cygwin_conv_path. > Of course, this function only exist in cygwin system. > So I use cygwin-gcc to compile win32fe (only with some small changes) > > There's a new problem appear. The new win32fe can not accept environment variable loaded in cygwin.bat. > I then use batch file to wrap the win32fe.exe and set environment variable for cl/icl in the batch file. > > Now everything is ok. I tested with my code. The compiling time is greatly reduced, from 2h to 52min. > > I'd like to share this method. But there are some license problem. 
> First, I even don't know the license of win32fe. > Second, since new version of win32fe dependent on cygwin (cygwin1.dll), it must be GPL. > I wonder if I can receive a notice that I can release it under GPL. > > BTW, there seems a small bug at compilerfe.cpp 340-341 > linkarg.push_front(outfile); > OutputFlag = --linkarg.begin(); > I think it should be > OutputFlag = linkarg.begin(); > > Sincerely > Gong Ding > From chenleping at yahoo.cn Wed Apr 14 08:25:58 2010 From: chenleping at yahoo.cn (=?utf-8?B?6ZmI5LmQ5bmz?=) Date: Wed, 14 Apr 2010 21:25:58 +0800 (CST) Subject: [petsc-users] about snes Message-ID: <241271.99980.qm@web92401.mail.cnh.yahoo.com> petsc teams, When I used the snes,as follows, call SNESSetFunction(snes,r,FormFunction,PETSC_NULL_OBJECT,ierr) ? subroutine FormFunction(snes,x,r,dummy,ierr) I wonder if I can change the parameters of FormFunction(),for example, FormFunction(snes,x,r,dummy,K,max,ierr) and so on? Beacuse I need pass some parameters into FormFunction(),I don't want use common blocks. Thanks, ? Leping ________________________________ -------------- next part -------------- An HTML attachment was scrubbed... URL: From jed at 59A2.org Wed Apr 14 08:46:27 2010 From: jed at 59A2.org (Jed Brown) Date: Wed, 14 Apr 2010 15:46:27 +0200 Subject: [petsc-users] about snes In-Reply-To: <241271.99980.qm@web92401.mail.cnh.yahoo.com> References: <241271.99980.qm@web92401.mail.cnh.yahoo.com> Message-ID: <878w8qkuz0.fsf@59A2.org> On Wed, 14 Apr 2010 21:25:58 +0800 (CST), ??? wrote: > petsc teams, > > When I used the snes,as follows, > > call SNESSetFunction(snes,r,FormFunction,PETSC_NULL_OBJECT,ierr) > ? > subroutine FormFunction(snes,x,r,dummy,ierr) The fourth argument is for all of this data. Here is an F90 example. http://www.mcs.anl.gov/petsc/petsc-as/snapshots/petsc-current/src/snes/examples/tutorials/ex5f90.F.html Jed From torres.pedrozpk at gmail.com Wed Apr 14 16:02:11 2010 From: torres.pedrozpk at gmail.com (Pedro Torres) Date: Wed, 14 Apr 2010 18:02:11 -0300 Subject: [petsc-users] KSP_SpeedUP Message-ID: Hello, Sorry if this questions its not appropiate for the petsc-list, but I really want to known what happen when I'm getting differente KSP time results. For example, allocating two process in the same node I get 6.24 sec, and when allocating two process in two nodes (1 process per node) I get 4.7sec. Is there a memory contention problem in my node?? The problem get worst when increase the number of process. I send attached a snapshot of the results. I really appreciate any clue. Thanks in advance. -- Pedro Torres GESAR/UERJ Rua Fonseca Teles 121, S?o Crist?v?o Rio de Janeiro - Brasil -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: ksp_speedup.JPG Type: image/jpeg Size: 187533 bytes Desc: not available URL: From jed at 59A2.org Wed Apr 14 16:10:17 2010 From: jed at 59A2.org (Jed Brown) Date: Wed, 14 Apr 2010 23:10:17 +0200 Subject: [petsc-users] KSP_SpeedUP In-Reply-To: References: Message-ID: <87vdbtkafa.fsf@59A2.org> On Wed, 14 Apr 2010 18:02:11 -0300, Pedro Torres wrote: > Hello, > > Sorry if this questions its not appropiate for the petsc-list, but I really > want to known what happen when I'm getting differente KSP time results. For > example, allocating two process in the same node I get 6.24 sec, and when > allocating two process in two nodes (1 process per node) I get 4.7sec. Is > there a memory contention problem in my node?? 
The problem get worst when > increase the number of process. Sparse matrix kernels are primarily limited by memory bandwidth which does not increase much with multicore hardware (vendors rarely mention this). When you use multiple cores per socket, they have to share the available bandwidth, so you get lower performance. It's *usually* still faster to use the available cores, but the per-core performance is definitely lower than when using only one core per socket. Jed From bsmith at mcs.anl.gov Wed Apr 14 16:17:01 2010 From: bsmith at mcs.anl.gov (Barry Smith) Date: Wed, 14 Apr 2010 16:17:01 -0500 Subject: [petsc-users] KSP_SpeedUP In-Reply-To: <87vdbtkafa.fsf@59A2.org> References: <87vdbtkafa.fsf@59A2.org> Message-ID: <221DED12-BBFD-422F-B3B5-1B626CE78348@mcs.anl.gov> Second bullet at http://www.mcs.anl.gov/petsc/petsc-as/documentation/faq.html#computers Barry On Apr 14, 2010, at 4:10 PM, Jed Brown wrote: > On Wed, 14 Apr 2010 18:02:11 -0300, Pedro Torres > wrote: >> Hello, >> >> Sorry if this questions its not appropiate for the petsc-list, but >> I really >> want to known what happen when I'm getting differente KSP time >> results. For >> example, allocating two process in the same node I get 6.24 sec, >> and when >> allocating two process in two nodes (1 process per node) I get >> 4.7sec. Is >> there a memory contention problem in my node?? The problem get >> worst when >> increase the number of process. > > Sparse matrix kernels are primarily limited by memory bandwidth which > does not increase much with multicore hardware (vendors rarely mention > this). When you use multiple cores per socket, they have to share the > available bandwidth, so you get lower performance. It's *usually* > still > faster to use the available cores, but the per-core performance is > definitely lower than when using only one core per socket. > > Jed From torres.pedrozpk at gmail.com Wed Apr 14 17:07:10 2010 From: torres.pedrozpk at gmail.com (Pedro Torres) Date: Wed, 14 Apr 2010 19:07:10 -0300 Subject: [petsc-users] KSP_SpeedUP In-Reply-To: <221DED12-BBFD-422F-B3B5-1B626CE78348@mcs.anl.gov> References: <87vdbtkafa.fsf@59A2.org> <221DED12-BBFD-422F-B3B5-1B626CE78348@mcs.anl.gov> Message-ID: Thanks, for a quickly reply. I have four nodes (GigaEthernet), each node with two Quad Core E5410 at 2.33GHz, Mem 16Gb - DDR2 667Mhz., and definitely I'm not have the enough memory bandwidth for a reasonable speedup. This may be a dummy question but in the second bullet says "its own memory bandwith of roughly 2 or more gigabytes", this means gigabytes/seconds, or refers to amount of memory per core?. Thanks a lot. Pedro 2010/4/14 Barry Smith > > Second bullet at > http://www.mcs.anl.gov/petsc/petsc-as/documentation/faq.html#computers > > Barry > > > On Apr 14, 2010, at 4:10 PM, Jed Brown wrote: > > On Wed, 14 Apr 2010 18:02:11 -0300, Pedro Torres < >> torres.pedrozpk at gmail.com> wrote: >> >>> Hello, >>> >>> Sorry if this questions its not appropiate for the petsc-list, but I >>> really >>> want to known what happen when I'm getting differente KSP time results. >>> For >>> example, allocating two process in the same node I get 6.24 sec, and when >>> allocating two process in two nodes (1 process per node) I get 4.7sec. >>> Is >>> there a memory contention problem in my node?? The problem get worst when >>> increase the number of process. >>> >> >> Sparse matrix kernels are primarily limited by memory bandwidth which >> does not increase much with multicore hardware (vendors rarely mention >> this). 
When you use multiple cores per socket, they have to share the >> available bandwidth, so you get lower performance. It's *usually* still >> faster to use the available cores, but the per-core performance is >> definitely lower than when using only one core per socket. >> >> Jed >> > > -- Pedro Torres GESAR/UERJ Rua Fonseca Teles 121, S?o Crist?v?o Rio de Janeiro - Brasil -------------- next part -------------- An HTML attachment was scrubbed... URL: From dalcinl at gmail.com Wed Apr 14 19:05:22 2010 From: dalcinl at gmail.com (Lisandro Dalcin) Date: Wed, 14 Apr 2010 21:05:22 -0300 Subject: [petsc-users] KSP_SpeedUP In-Reply-To: References: <87vdbtkafa.fsf@59A2.org> <221DED12-BBFD-422F-B3B5-1B626CE78348@mcs.anl.gov> Message-ID: On 14 April 2010 19:07, Pedro Torres wrote: > Thanks, for a quickly reply. I have four nodes (GigaEthernet), each node > with two Quad Core E5410 at 2.33GHz, Mem 16Gb - DDR2 667Mhz., and definitely > I'm not have the enough memory bandwidth for a reasonable speedup. > Smart process to core binding may help you (mpiexec.hydra -n 4/8 --binding topo:sockets), Open MPI has equivalent functionality http://wiki.mcs.anl.gov/mpich2/index.php/Using_the_Hydra_Process_Manager#Process-core_Binding -- Lisandro Dalcin --------------- CIMEC (INTEC/CONICET-UNL) Predio CONICET-Santa Fe Colectora RN 168 Km 472, Paraje El Pozo Tel: +54-342-4511594 (ext 1011) Tel/Fax: +54-342-4511169 From torres.pedrozpk at gmail.com Wed Apr 14 20:27:17 2010 From: torres.pedrozpk at gmail.com (Pedro Torres) Date: Wed, 14 Apr 2010 22:27:17 -0300 Subject: [petsc-users] KSP_SpeedUP In-Reply-To: References: <87vdbtkafa.fsf@59A2.org> <221DED12-BBFD-422F-B3B5-1B626CE78348@mcs.anl.gov> Message-ID: 2010/4/14 Lisandro Dalcin > On 14 April 2010 19:07, Pedro Torres wrote: > > Thanks, for a quickly reply. I have four nodes (GigaEthernet), each node > > with two Quad Core E5410 at 2.33GHz, Mem 16Gb - DDR2 667Mhz., and > definitely > > I'm not have the enough memory bandwidth for a reasonable speedup. > > > > Smart process to core binding may help you (mpiexec.hydra -n 4/8 > --binding topo:sockets), Open MPI has equivalent functionality > > > http://wiki.mcs.anl.gov/mpich2/index.php/Using_the_Hydra_Process_Manager#Process-core_Binding > Yes, It really helped me. Thank you very much!. Regards > > > -- > Lisandro Dalcin > --------------- > CIMEC (INTEC/CONICET-UNL) > Predio CONICET-Santa Fe > Colectora RN 168 Km 472, Paraje El Pozo > Tel: +54-342-4511594 (ext 1011) > Tel/Fax: +54-342-4511169 > -- Pedro Torres GESAR/UERJ Rua Fonseca Teles 121, S?o Crist?v?o Rio de Janeiro - Brasil -------------- next part -------------- An HTML attachment was scrubbed... URL: From jed at 59A2.org Thu Apr 15 03:03:29 2010 From: jed at 59A2.org (Jed Brown) Date: Thu, 15 Apr 2010 10:03:29 +0200 Subject: [petsc-users] KSP_SpeedUP In-Reply-To: References: <87vdbtkafa.fsf@59A2.org> <221DED12-BBFD-422F-B3B5-1B626CE78348@mcs.anl.gov> Message-ID: <87r5mhjg6m.fsf@59A2.org> On Wed, 14 Apr 2010 19:07:10 -0300, Pedro Torres wrote: > This may be a dummy question but in the second bullet says "its own memory > bandwith of roughly 2 or more gigabytes", this means gigabytes/seconds Yes, but this is a bit dated and very architecture dependent. For example, relative to current AMD/Intel offerings, BlueGene has much slower cores and slightly slower memory, leading to (usually) better scalability. 
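(To put rough numbers on the bandwidth argument: a sparse matrix-vector product moves on the order of 12 bytes per nonzero -- an 8-byte double plus a 4-byte column index -- for 2 flops, so a core sustaining about 2 GB/s of memory traffic is capped near 2/12 * 2e9, i.e. roughly 330 Mflop/s, in MatMult regardless of clock rate, and cores sharing one memory bus split that between them. The 12-byte and 2 GB/s figures are ballpark illustrations, not measurements.)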
In addition to the factors of in-socket scalability, it can be faster (if your network and MPI support it) for the network hardware to perform the copies in send/recieve operations on the same node (i.e. even when you have shared memory, the copies are often better done by network hardware than by the kernel). Arguably the only meaningful scalability study is by choosing how to utilize each node and then increasing the number of nodes. Jed From hsharma.tgjobs at gmail.com Thu Apr 15 21:14:49 2010 From: hsharma.tgjobs at gmail.com (Harsh Sharma) Date: Thu, 15 Apr 2010 21:14:49 -0500 Subject: [petsc-users] PetscBinaryWrite/PetscBinarySynchronizedWrite SEGV fault Message-ID: Hi, I'm a first-time user of the PETSc toolkit. I'm getting a "Caught signal number 11 SEGV: Segmentation Violation, probably memory access out of range" error. My program (called "pPCA_makeC0" below) is doing simple stuff : create a bunch of vectors, set its components to random numbers using VecSetRandom(), then compute the NORM_2 type vector-norm for each of them, and finally write them to a binary file. My vectors are quite large (2821728 dimensions) and even if I create just one such vector, the above-mentioned error occurs. From the output of the program, it appears that MPI is having some issue with the binary-file-writing part of the program. This problem occurs regardless of the number of processes/processors I use when invoking petscmpiexec. I've pasted the erroneous output at the end of this mail, for two scenarios: 1 processor and 1 or 5 processes. Any help in resolving this would be much appreciated. Thanks! Best, Harsh ------------------------------------------------------------------------------------------------------------------------------------------------------------------------- output of my program (D = vector-dimension = 2821728 here, k = number of vectors = 5 here), 1 process, 1 processor ------------------------------------------------------------------------------------------------------------------------------------------------------------------------- /cworkspace/ifp-32-2/hasegawa/hsharma/apps/petsc-3.0.0p11-fltReal/bin/petscmpiexec -np 1 ./pPCA_makeC0 -c 1 -m /cworkspace/ifp-32-2/hasegawa/hsharma/testPPCA/C0.mat -D 2821728 -k 5 2-norm of column 1 of C0 = 969.272 2-norm of column 2 of C0 = 969.218 2-norm of column 3 of C0 = 969.087 2-norm of column 4 of C0 = 969.599 2-norm of column 5 of C0 = 969.547 [0]PETSC ERROR: ------------------------------------------------------------------------ [0]PETSC ERROR: Caught signal number 11 SEGV: Segmentation Violation, probably memory access out of range [0]PETSC ERROR: Try option -start_in_debugger or -on_error_attach_debugger [0]PETSC ERROR: or see http://www.mcs.anl.gov/petsc/petsc-as/documentation/troubleshooting.html#Signal[0]PETSCERROR: or try http://valgrind.org on linux or man libgmalloc on Apple to find memory corruption errors [0]PETSC ERROR: likely location of problem given in stack below [0]PETSC ERROR: --------------------- Stack Frames ------------------------------------ [0]PETSC ERROR: Note: The EXACT line numbers in the stack are not available, [0]PETSC ERROR: INSTEAD the line number of the start of the function [0]PETSC ERROR: is given. 
[0]PETSC ERROR: [0] PetscByteSwapScalar line 133 src/sys/fileio/sysio.c [0]PETSC ERROR: [0] PetscByteSwap line 179 src/sys/fileio/sysio.c [0]PETSC ERROR: [0] PetscBinaryWrite line 315 src/sys/fileio/sysio.c [0]PETSC ERROR: [0] PetscBinarySynchronizedWrite line 585 src/sys/fileio/sysio.c [0]PETSC ERROR: --------------------- Error Message ------------------------------------ [0]PETSC ERROR: Signal received! [0]PETSC ERROR: ------------------------------------------------------------------------ [0]PETSC ERROR: Petsc Release Version 3.0.0, Patch 11, Mon Feb 1 11:01:51 CST 2010 [0]PETSC ERROR: See docs/changes/index.html for recent updates. [0]PETSC ERROR: See docs/faq.html for hints about trouble shooting. [0]PETSC ERROR: See docs/index.html for manual pages. [0]PETSC ERROR: ------------------------------------------------------------------------ [0]PETSC ERROR: ./pPCA_makeC0 on a lx26-amd6 named ifp-32.ifp.uiuc.edu by hsharma Thu Apr 15 21:03:44 2010 [0]PETSC ERROR: Libraries linked from /cworkspace/ifp-32-2/hasegawa/hsharma/apps/petsc-3.0.0p11-fltReal/lib [0]PETSC ERROR: Configure run at Wed Mar 24 14:18:28 2010 [0]PETSC ERROR: Configure options --prefix=/cworkspace/ifp-32-2/hasegawa/hsharma/apps/petsc-3.0.0p11-fltReal --with-blas-lapack-dir=/cworkspace/ifp-32-2/hasegawa/hsharma/apps/myLib --with-mpi-dir=/cworkspace/ifp-32-2/hasegawa/hsharma/apps/mpich2-install --with-large-file-io=1 --with-shared=0 --with-scalar-type=real --with-precision=single --with-c++-support --with-c-support --with-64-bit-indices=0 --with-log=1 --with-info=1 [0]PETSC ERROR: ------------------------------------------------------------------------ [0]PETSC ERROR: User provided function() line 0 in unknown directory unknown file application called MPI_Abort(MPI_COMM_WORLD, 59) - process 0 rank 0 in job 3 ifp-32.ifp.uiuc.edu_57355 caused collective abort of all ranks exit status of rank 0: return code 59 ----------------------------------------------------------------------------------------------------------------------------------------------------------------------------- output of my program (D = vector-dimension = 2821728 here, k = number of vectors = 5 here), 5 processes, 1 processor ----------------------------------------------------------------------------------------------------------------------------------------------------------------------------- /cworkspace/ifp-32-2/hasegawa/hsharma/apps/petsc-3.0.0p11-fltReal/bin/petscmpiexec -np 5 ./pPCA_makeC0 -c 1 -m /cworkspace/ifp-32-2/hasegawa/hsharma/testPPCA/C0.mat -D 2821728 -k 5 2-norm of column 1 of C0 = 970.149 2-norm of column 2 of C0 = 969.699 2-norm of column 3 of C0 = 969.517 2-norm of column 4 of C0 = 970.253 2-norm of column 5 of C0 = 969.81 [0]PETSC ERROR: ------------------------------------------------------------------------ [0]PETSC ERROR: Caught signal number 11 SEGV: Segmentation Violation, probably memory access out of range [0]PETSC ERROR: Try option -start_in_debugger or -on_error_attach_debugger [0]PETSC ERROR: or see http://www.mcs.anl.gov/petsc/petsc-as/documentation/troubleshooting.html#Signal[0]PETSCERROR: or try http://valgrind.org on linux or man libgmalloc on Apple to find memory corruption errors [0]PETSC ERROR: likely location of problem given in stack below [0]PETSC ERROR: --------------------- Stack Frames ------------------------------------ [1]PETSC ERROR: --------------------- Error Message ------------------------------------ [1]PETSC ERROR: Write to file failed! [1]PETSC ERROR: Error writing to file.! 
[1]PETSC ERROR: ------------------------------------------------------------------------ [1]PETSC ERROR: Petsc Release Version 3.0.0, Patch 11, Mon Feb 1 11:01:51 CST 2010 [1]PETSC ERROR: See docs/changes/index.html for recent updates. [1]PETSC ERROR: See docs/faq.html for hints about trouble shooting. [1]PETSC ERROR: See docs/index.html for manual pages. [1]PETSC ERROR: ------------------------------------------------------------------------ [1]PETSC ERROR: ./pPCA_makeC0 on a lx26-amd6 named ifp-32.ifp.uiuc.edu by hsharma Thu Apr 15 21:09:23 2010 [1]PETSC ERROR: Libraries linked from /cworkspace/ifp-32-2/hasegawa/hsharma/apps/petsc-3.0.0p11-fltReal/lib [1]PETSC ERROR: Configure run at Wed Mar 24 14:18:28 2010 [1]PETSC ERROR: Configure options --prefix=/cworkspace/ifp-32-2/hasegawa/hsharma/apps/petsc-3.0.0p11-fltReal --with-blas-lapack-dir=/cwo[2]PETSC ERROR: --------------------- Error Message ------------------------------------ [2]PETSC ERROR: Write to file failed! [2]PETSC ERROR: Error writing to file.! [2]PETSC ERROR: ------------------------------------------------------------------------ [2]PETSC ERROR: Petsc Release Version 3.0.0, Patch 11, Mon Feb 1 11:01:51 CST 2010 [2]PETSC ERROR: See docs/changes/index.html for recent updates. [2]PETSC ERROR: See docs/faq.html for hints about trouble shooting. [2]PETSC ERROR: See docs/index.html for manual pages. [2]PETSC ERROR: ------------------------------------------------------------------------ [2]PETSC ERROR: ./pPCA_makeC0 on a lx26-amd6 named ifp-32.ifp.uiuc.edu by hsharma Thu Apr 15 21:09:23 2010 [2]PETSC ERROR: Libraries linked from /cworkspace/ifp-32-2/hasegawa/hsharma/apps/petsc-3.0.0p11-fltReal/lib [2]PETSC ERROR: Configure run at Wed Mar 24 14:18:28 2010 [2]PETSC ERROR: Configure options --prefix=/cworkspace/ifp-32-2/hasegawa/hsharma/apps/petsc-3.0.0p11-fltReal --with-blas-lapack-dir=/cworkspace/ifp-32-2/hasegawa/hsharma/apps/myLib --with-mpi-dir=/cworkspace/ifp-32-2/hasegawa/hsharma/apps/mpich2-install --with-large-file-io=1 --with-shared=0 --with-scalar-type=real --with-precision=single --with-c++-support --with-c-support --with-64-bit-indices=0 --with-log=1 --with-info=1 [1]PETSC ERROR: ------------------------------------------------------------------------ [1]PETSC ERROR: PetscBinaryWrite() line 337 in src/sys/fileio/sysio.c [1]PETSC ERROR: PetscBinarySynchronizedWrite() line 588 in src/sys/fileio/sysio.c [1]PETSC ERROR: --------------------- Error Message ------------------------------------ [1]PETSC ERROR: Error: Unable to write C0 row dimension D! [1]PETSC ERROR: ------------------------------------------------------------------------ [1]PETSC ERROR: Petsc Release Version 3.0.0, Patch 11, Mon Feb 1 11:01:51 CST 2010 [1]PETSC ERROR: See docs/changes/index.html for recent updates. [1]PETSC ERROR: See docs/faq.html for hints about trouble shooting. 
[1]PETSC ERROR: See docs/index.html forkspace/ifp-32-2/hasegawa/hsharma/apps/myLib --with-mpi-dir=/cworkspace/ifp-32-2/hasegawa/hsharma/apps/mpich2-install --with-large-file-io=1 --with-shared=0 --with-scalar-type=real --with-precision=single --with-c++-support --with-c-support --with-64-bit-indices=0 --with-log=1 --with-info=1 [2]PETSC ERROR: ------------------------------------------------------------------------ [2]PETSC ERROR: PetscBinaryWrite() line 337 in src/sys/fileio/sysio.c [2]PETSC ERROR: PetscBinarySynchronizedWrite() line 588 in src/sys/fileio/sysio.c [2]PETSC ERROR: --------------------- Error Message ------------------------------------ [2]PETSC ERROR: Error: Unable to write C0 row dimension D! [2]PETSC ERROR: ------------------------------------------------------------------------ [2]PETSC ERROR: Petsc Release Version 3.0.0, Patch 11, Mon Feb 1 11:01:51 CST 2010 [2]PETSC ERROR: See docs/changes/index.html for recent updates. [2]PETSC ERROR: See docs/faq.html for hints about trouble shooting. [2]PETSC ERROR: See docs/index.html for manual pages. [1]PETSC ERROR: ------------------------------------------------------------------------ [1]PETSC ERROR: ./pPCA_makeC0 on a lx26-amd6 named ifp-32.ifp.uiuc.edu by hsharma Thu Apr 15 21:09:23 2010 [1]PETSC ERROR: Libraries linked from /cworkspace/ifp-32-2/hasegawa/hsharma/apps/petsc-3.0.0p11-fltReal/lib [1]PETSC ERROR: Configure run at Wed Mar 24 14:18:28 2010 [1]PETSC ERROR: Configure options --prefix=/cworkspace/ifp-32-2/hasegawa/hsharma/apps/petsc-3.0.0p11-fltReal --with-blas-lapack-dir=/cworkspace/ifp-32-2/hasegawa/hsharma/apps/myLib --with-mpi-dir=/cworkspace/ifp-32-2/hasegawa/hsharma/apps/mpich2-install --with-large-file-io=1 --with-shared=0 --with-scalar-type=real --with-precision=single --with-c++-support --with-c-support --with-64-bit-indices=0 --with-log=1 --with-info=1 [1]PETSC ERROR: ------------------------------------------------------------------------ [1]PETSC ERROR: User provided function() line 123 in /cworkspace/ifp-32-2/hasegawa/hsharma/testPPCA/pPCA_makeC0.c r manual pages. [2]PETSC ERROR: ------------------------------------------------------------------------ [2]PETSC ERROR: ./pPCA_makeC0 on a lx26-amd6 named ifp-32.ifp.uiuc.edu by hsharma Thu Apr 15 21:09:23 2010 [2]PETSC ERROR: Libraries linked from /cworkspace/ifp-32-2/hasegawa/hsharma/apps/petsc-3.0.0p11-fltReal/lib [2]PETSC ERROR: Configure run at Wed Mar 24 14:18:28 2010 [2]PETSC ERROR: Configure options --prefix=/cworkspace/ifp-32-2/hasegawa/hsharma/apps/petsc-3.0.0p11-fltReal --with-blas-lapack-dir=/cworkspace/ifp-32-2/hasegawa/hsharma/apps/myLib --with-mpi-dir=/cworkspace/ifp-32-2/hasegawa/hsharma/apps/mpich2-install --with-large-file-io=1 --with-shared=0 --with-scalar-type=real --with-precision=single --with-c++-support --with-c-support --with-64-bit-indices=0 --with-log=1 --with-info=1 [2]PETSC ERROR: ------------------------------------------------------------------------ [2]PETSC ERROR: User provided function() line 123 in /cworkspace/ifp-32-2/hasegawa/hsharma/testPPCA/pPCA_makeC0.c [0]PETSC ERROR: Note: The EXACT line numbers in the stack are not available, [0]PETSC ERROR: INSTEAD the line number of the start of the function [0]PETSC ERROR: is given. 
[0]PETSC ERROR: [0] PetscByteSwapScalar line 133 src/sys/fileio/sysio.c [0]PETSC ERROR: [0] PetscByteSwap line 179 src/sys/fileio/sysio.c [0]PETSC ERROR: [0] PetscBinaryWrite line 315 src/sys/fileio/sysio.c [0]PETSC ERROR: [0] PetscBinarySynchronizedWrite line 585 src/sys/fileio/sysio.c [0]PETSC ERROR: --------------------- Error Message ------------------------------------ [0]PETSC ERROR: Signal received! [0]PETSC ERROR: ------------------------------------------------------------------------ [0]PETSC ERROR: Petsc Release Version 3.0.0, Patch 11, Mon Feb 1 11:01:51 CST 2010 [0]PETSC ERROR: See docs/changes/index.html for recent updates. [0]PETSC ERROR: See docs/faq.html for hints about trouble shooting. [0]PETSC ERROR: See docs/index.html for manual pages. [0]PETSC ERROR: ------------------------------------------------------------------------ [0]PETSC ERROR: ./pPCA_makeC0 on a lx26-amd6 named ifp-32.ifp.uiuc.edu by hsharma Thu Apr 15 21:09:23 2010 [0]PETSC ERROR: Libraries linked from /cworkspace/ifp-32-2/hasegawa/hsharma/apps/petsc-3.0.0p11-fltReal/lib [0]PETSC ERROR: Configure run at Wed Mar 24 14:18:28 2010 [0]PETSC ERROR: Configure options --prefix=/cworkspace/ifp-32-2/hasegawa/hsharma/apps/petsc-3.0.0p11-fltReal --with-blas-lapack-dir=/cworkspace/ifp-32-2/hasegawa/hsharma/apps/myLib --with-mpi-dir=/cworkspace/ifp-32-2/hasegawa/hsharma/apps/mpich2-install --with-large-file-io=1 --with-shared=0 --with-scalar-type=real --with-precision=single --with-c++-support --with-c-support --with-64-bit-indices=0 --with-log=1 --with-info=1 [0]PETSC ERROR: ------------------------------------------------------------------------ [0]PETSC ERROR: User provided function() line 0 in unknown directory unknown file [3]PETSC ERROR: rank 2 in job 4 ifp-32.ifp.uiuc.edu_57355 caused collective abort of all ranks exit status of rank 2: return code 1 rank 1 in job 4 ifp-32.ifp.uiuc.edu_57355 caused collective abort of all ranks exit status of rank 1: return code 1 ----------------------------------------------------------------------------------------------------------------------------------------------------------------------------- -------------- next part -------------- An HTML attachment was scrubbed... URL: From knepley at gmail.com Thu Apr 15 21:31:31 2010 From: knepley at gmail.com (Matthew Knepley) Date: Thu, 15 Apr 2010 21:31:31 -0500 Subject: [petsc-users] PetscBinaryWrite/PetscBinarySynchronizedWrite SEGV fault In-Reply-To: References: Message-ID: You can get a stack trace using the debugger. We also recommend tracking this down using valgrind. Matt On Thu, Apr 15, 2010 at 9:14 PM, Harsh Sharma wrote: > Hi, > > I'm a first-time user of the PETSc toolkit. > > I'm getting a "Caught signal number 11 SEGV: Segmentation Violation, > probably memory access out of range" error. > > My program (called "pPCA_makeC0" below) is doing simple stuff : create a > bunch of vectors, set its components to random numbers using VecSetRandom(), > then compute the NORM_2 type vector-norm for each of them, and finally write > them to a binary file. > > My vectors are quite large (2821728 dimensions) and even if I create just > one such vector, the above-mentioned error occurs. From the output of the > program, it appears that MPI is having some issue with the > binary-file-writing part of the program. > > This problem occurs regardless of the number of processes/processors I use > when invoking petscmpiexec. I've pasted the erroneous output at the end of > this mail, for two scenarios: 1 processor and 1 or 5 processes. 
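One concrete way to follow Matt's suggestion, sketched from the options PETSc itself prints in the traceback above (the executable name and arguments are the poster's, with the long paths shortened here):

    # attach a debugger to the faulting rank when the SEGV is caught
    mpiexec -np 1 ./pPCA_makeC0 -c 1 -m C0.mat -D 2821728 -k 5 -on_error_attach_debugger
    # or start every rank under gdb (one xterm per rank) right away
    mpiexec -np 1 ./pPCA_makeC0 -c 1 -m C0.mat -D 2821728 -k 5 -start_in_debugger gdb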
> > Any help in resolving this would be much appreciated. Thanks! > > Best, > Harsh > > > ------------------------------------------------------------------------------------------------------------------------------------------------------------------------- > output of my program (D = vector-dimension = 2821728 here, k = number of > vectors = 5 here), 1 process, 1 processor > > ------------------------------------------------------------------------------------------------------------------------------------------------------------------------- > > /cworkspace/ifp-32-2/hasegawa/hsharma/apps/petsc-3.0.0p11-fltReal/bin/petscmpiexec > -np 1 ./pPCA_makeC0 -c 1 -m > /cworkspace/ifp-32-2/hasegawa/hsharma/testPPCA/C0.mat -D 2821728 -k 5 > 2-norm of column 1 of C0 = 969.272 > 2-norm of column 2 of C0 = 969.218 > 2-norm of column 3 of C0 = 969.087 > 2-norm of column 4 of C0 = 969.599 > 2-norm of column 5 of C0 = 969.547 > [0]PETSC ERROR: > ------------------------------------------------------------------------ > [0]PETSC ERROR: Caught signal number 11 SEGV: Segmentation Violation, > probably memory access out of range > [0]PETSC ERROR: Try option -start_in_debugger or -on_error_attach_debugger > [0]PETSC ERROR: or see > http://www.mcs.anl.gov/petsc/petsc-as/documentation/troubleshooting.html#Signal[0]PETSCERROR: or try > http://valgrind.org on linux or man libgmalloc on Apple to find memory > corruption errors > [0]PETSC ERROR: likely location of problem given in stack below > [0]PETSC ERROR: --------------------- Stack Frames > ------------------------------------ > [0]PETSC ERROR: Note: The EXACT line numbers in the stack are not > available, > [0]PETSC ERROR: INSTEAD the line number of the start of the function > [0]PETSC ERROR: is given. > [0]PETSC ERROR: [0] PetscByteSwapScalar line 133 src/sys/fileio/sysio.c > [0]PETSC ERROR: [0] PetscByteSwap line 179 src/sys/fileio/sysio.c > [0]PETSC ERROR: [0] PetscBinaryWrite line 315 src/sys/fileio/sysio.c > [0]PETSC ERROR: [0] PetscBinarySynchronizedWrite line 585 > src/sys/fileio/sysio.c > [0]PETSC ERROR: --------------------- Error Message > ------------------------------------ > [0]PETSC ERROR: Signal received! > [0]PETSC ERROR: > ------------------------------------------------------------------------ > [0]PETSC ERROR: Petsc Release Version 3.0.0, Patch 11, Mon Feb 1 11:01:51 > CST 2010 > [0]PETSC ERROR: See docs/changes/index.html for recent updates. > [0]PETSC ERROR: See docs/faq.html for hints about trouble shooting. > [0]PETSC ERROR: See docs/index.html for manual pages. 
> [0]PETSC ERROR: > ------------------------------------------------------------------------ > [0]PETSC ERROR: ./pPCA_makeC0 on a lx26-amd6 named ifp-32.ifp.uiuc.edu by > hsharma Thu Apr 15 21:03:44 2010 > [0]PETSC ERROR: Libraries linked from > /cworkspace/ifp-32-2/hasegawa/hsharma/apps/petsc-3.0.0p11-fltReal/lib > [0]PETSC ERROR: Configure run at Wed Mar 24 14:18:28 2010 > [0]PETSC ERROR: Configure options > --prefix=/cworkspace/ifp-32-2/hasegawa/hsharma/apps/petsc-3.0.0p11-fltReal > --with-blas-lapack-dir=/cworkspace/ifp-32-2/hasegawa/hsharma/apps/myLib > --with-mpi-dir=/cworkspace/ifp-32-2/hasegawa/hsharma/apps/mpich2-install > --with-large-file-io=1 --with-shared=0 --with-scalar-type=real > --with-precision=single --with-c++-support --with-c-support > --with-64-bit-indices=0 --with-log=1 --with-info=1 > [0]PETSC ERROR: > ------------------------------------------------------------------------ > [0]PETSC ERROR: User provided function() line 0 in unknown directory > unknown file > application called MPI_Abort(MPI_COMM_WORLD, 59) - process 0 > rank 0 in job 3 ifp-32.ifp.uiuc.edu_57355 caused collective abort of all > ranks > exit status of rank 0: return code 59 > > > > ----------------------------------------------------------------------------------------------------------------------------------------------------------------------------- > output of my program (D = vector-dimension = 2821728 here, k = number of > vectors = 5 here), 5 processes, 1 processor > > ----------------------------------------------------------------------------------------------------------------------------------------------------------------------------- > > > /cworkspace/ifp-32-2/hasegawa/hsharma/apps/petsc-3.0.0p11-fltReal/bin/petscmpiexec > -np 5 ./pPCA_makeC0 -c 1 -m > /cworkspace/ifp-32-2/hasegawa/hsharma/testPPCA/C0.mat -D 2821728 -k 5 > 2-norm of column 1 of C0 = 970.149 > 2-norm of column 2 of C0 = 969.699 > 2-norm of column 3 of C0 = 969.517 > 2-norm of column 4 of C0 = 970.253 > 2-norm of column 5 of C0 = 969.81 > [0]PETSC ERROR: > ------------------------------------------------------------------------ > [0]PETSC ERROR: Caught signal number 11 SEGV: Segmentation Violation, > probably memory access out of range > [0]PETSC ERROR: Try option -start_in_debugger or -on_error_attach_debugger > [0]PETSC ERROR: or see > http://www.mcs.anl.gov/petsc/petsc-as/documentation/troubleshooting.html#Signal[0]PETSCERROR: or try > http://valgrind.org on linux or man libgmalloc on Apple to find memory > corruption errors > [0]PETSC ERROR: likely location of problem given in stack below > [0]PETSC ERROR: --------------------- Stack Frames > ------------------------------------ > [1]PETSC ERROR: --------------------- Error Message > ------------------------------------ > [1]PETSC ERROR: Write to file failed! > [1]PETSC ERROR: Error writing to file.! > [1]PETSC ERROR: > ------------------------------------------------------------------------ > [1]PETSC ERROR: Petsc Release Version 3.0.0, Patch 11, Mon Feb 1 11:01:51 > CST 2010 > [1]PETSC ERROR: See docs/changes/index.html for recent updates. > [1]PETSC ERROR: See docs/faq.html for hints about trouble shooting. > [1]PETSC ERROR: See docs/index.html for manual pages. 
> [1]PETSC ERROR: > ------------------------------------------------------------------------ > [1]PETSC ERROR: ./pPCA_makeC0 on a lx26-amd6 named ifp-32.ifp.uiuc.edu by > hsharma Thu Apr 15 21:09:23 2010 > [1]PETSC ERROR: Libraries linked from > /cworkspace/ifp-32-2/hasegawa/hsharma/apps/petsc-3.0.0p11-fltReal/lib > [1]PETSC ERROR: Configure run at Wed Mar 24 14:18:28 2010 > [1]PETSC ERROR: Configure options > --prefix=/cworkspace/ifp-32-2/hasegawa/hsharma/apps/petsc-3.0.0p11-fltReal > --with-blas-lapack-dir=/cwo[2]PETSC ERROR: --------------------- Error > Message ------------------------------------ > [2]PETSC ERROR: Write to file failed! > [2]PETSC ERROR: Error writing to file.! > [2]PETSC ERROR: > ------------------------------------------------------------------------ > [2]PETSC ERROR: Petsc Release Version 3.0.0, Patch 11, Mon Feb 1 11:01:51 > CST 2010 > [2]PETSC ERROR: See docs/changes/index.html for recent updates. > [2]PETSC ERROR: See docs/faq.html for hints about trouble shooting. > [2]PETSC ERROR: See docs/index.html for manual pages. > [2]PETSC ERROR: > ------------------------------------------------------------------------ > [2]PETSC ERROR: ./pPCA_makeC0 on a lx26-amd6 named ifp-32.ifp.uiuc.edu by > hsharma Thu Apr 15 21:09:23 2010 > [2]PETSC ERROR: Libraries linked from > /cworkspace/ifp-32-2/hasegawa/hsharma/apps/petsc-3.0.0p11-fltReal/lib > [2]PETSC ERROR: Configure run at Wed Mar 24 14:18:28 2010 > [2]PETSC ERROR: Configure options > --prefix=/cworkspace/ifp-32-2/hasegawa/hsharma/apps/petsc-3.0.0p11-fltReal > --with-blas-lapack-dir=/cworkspace/ifp-32-2/hasegawa/hsharma/apps/myLib > --with-mpi-dir=/cworkspace/ifp-32-2/hasegawa/hsharma/apps/mpich2-install > --with-large-file-io=1 --with-shared=0 --with-scalar-type=real > --with-precision=single --with-c++-support --with-c-support > --with-64-bit-indices=0 --with-log=1 --with-info=1 > [1]PETSC ERROR: > ------------------------------------------------------------------------ > [1]PETSC ERROR: PetscBinaryWrite() line 337 in src/sys/fileio/sysio.c > [1]PETSC ERROR: PetscBinarySynchronizedWrite() line 588 in > src/sys/fileio/sysio.c > [1]PETSC ERROR: --------------------- Error Message > ------------------------------------ > [1]PETSC ERROR: Error: Unable to write C0 row dimension D! > [1]PETSC ERROR: > ------------------------------------------------------------------------ > [1]PETSC ERROR: Petsc Release Version 3.0.0, Patch 11, Mon Feb 1 11:01:51 > CST 2010 > [1]PETSC ERROR: See docs/changes/index.html for recent updates. > [1]PETSC ERROR: See docs/faq.html for hints about trouble shooting. > [1]PETSC ERROR: See docs/index.html > forkspace/ifp-32-2/hasegawa/hsharma/apps/myLib > --with-mpi-dir=/cworkspace/ifp-32-2/hasegawa/hsharma/apps/mpich2-install > --with-large-file-io=1 --with-shared=0 --with-scalar-type=real > --with-precision=single --with-c++-support --with-c-support > --with-64-bit-indices=0 --with-log=1 --with-info=1 > [2]PETSC ERROR: > ------------------------------------------------------------------------ > [2]PETSC ERROR: PetscBinaryWrite() line 337 in src/sys/fileio/sysio.c > [2]PETSC ERROR: PetscBinarySynchronizedWrite() line 588 in > src/sys/fileio/sysio.c > [2]PETSC ERROR: --------------------- Error Message > ------------------------------------ > [2]PETSC ERROR: Error: Unable to write C0 row dimension D! 
> [2]PETSC ERROR: > ------------------------------------------------------------------------ > [2]PETSC ERROR: Petsc Release Version 3.0.0, Patch 11, Mon Feb 1 11:01:51 > CST 2010 > [2]PETSC ERROR: See docs/changes/index.html for recent updates. > [2]PETSC ERROR: See docs/faq.html for hints about trouble shooting. > [2]PETSC ERROR: See docs/index.html for manual pages. > [1]PETSC ERROR: > ------------------------------------------------------------------------ > [1]PETSC ERROR: ./pPCA_makeC0 on a lx26-amd6 named ifp-32.ifp.uiuc.edu by > hsharma Thu Apr 15 21:09:23 2010 > [1]PETSC ERROR: Libraries linked from > /cworkspace/ifp-32-2/hasegawa/hsharma/apps/petsc-3.0.0p11-fltReal/lib > [1]PETSC ERROR: Configure run at Wed Mar 24 14:18:28 2010 > [1]PETSC ERROR: Configure options > --prefix=/cworkspace/ifp-32-2/hasegawa/hsharma/apps/petsc-3.0.0p11-fltReal > --with-blas-lapack-dir=/cworkspace/ifp-32-2/hasegawa/hsharma/apps/myLib > --with-mpi-dir=/cworkspace/ifp-32-2/hasegawa/hsharma/apps/mpich2-install > --with-large-file-io=1 --with-shared=0 --with-scalar-type=real > --with-precision=single --with-c++-support --with-c-support > --with-64-bit-indices=0 --with-log=1 --with-info=1 > [1]PETSC ERROR: > ------------------------------------------------------------------------ > [1]PETSC ERROR: User provided function() line 123 in > /cworkspace/ifp-32-2/hasegawa/hsharma/testPPCA/pPCA_makeC0.c > r manual pages. > [2]PETSC ERROR: > ------------------------------------------------------------------------ > [2]PETSC ERROR: ./pPCA_makeC0 on a lx26-amd6 named ifp-32.ifp.uiuc.edu by > hsharma Thu Apr 15 21:09:23 2010 > [2]PETSC ERROR: Libraries linked from > /cworkspace/ifp-32-2/hasegawa/hsharma/apps/petsc-3.0.0p11-fltReal/lib > [2]PETSC ERROR: Configure run at Wed Mar 24 14:18:28 2010 > [2]PETSC ERROR: Configure options > --prefix=/cworkspace/ifp-32-2/hasegawa/hsharma/apps/petsc-3.0.0p11-fltReal > --with-blas-lapack-dir=/cworkspace/ifp-32-2/hasegawa/hsharma/apps/myLib > --with-mpi-dir=/cworkspace/ifp-32-2/hasegawa/hsharma/apps/mpich2-install > --with-large-file-io=1 --with-shared=0 --with-scalar-type=real > --with-precision=single --with-c++-support --with-c-support > --with-64-bit-indices=0 --with-log=1 --with-info=1 > [2]PETSC ERROR: > ------------------------------------------------------------------------ > [2]PETSC ERROR: User provided function() line 123 in > /cworkspace/ifp-32-2/hasegawa/hsharma/testPPCA/pPCA_makeC0.c > [0]PETSC ERROR: Note: The EXACT line numbers in the stack are not > available, > [0]PETSC ERROR: INSTEAD the line number of the start of the function > [0]PETSC ERROR: is given. > [0]PETSC ERROR: [0] PetscByteSwapScalar line 133 src/sys/fileio/sysio.c > [0]PETSC ERROR: [0] PetscByteSwap line 179 src/sys/fileio/sysio.c > [0]PETSC ERROR: [0] PetscBinaryWrite line 315 src/sys/fileio/sysio.c > [0]PETSC ERROR: [0] PetscBinarySynchronizedWrite line 585 > src/sys/fileio/sysio.c > [0]PETSC ERROR: --------------------- Error Message > ------------------------------------ > [0]PETSC ERROR: Signal received! > [0]PETSC ERROR: > ------------------------------------------------------------------------ > [0]PETSC ERROR: Petsc Release Version 3.0.0, Patch 11, Mon Feb 1 11:01:51 > CST 2010 > [0]PETSC ERROR: See docs/changes/index.html for recent updates. > [0]PETSC ERROR: See docs/faq.html for hints about trouble shooting. > [0]PETSC ERROR: See docs/index.html for manual pages. 
> [0]PETSC ERROR: > ------------------------------------------------------------------------ > [0]PETSC ERROR: ./pPCA_makeC0 on a lx26-amd6 named ifp-32.ifp.uiuc.edu by > hsharma Thu Apr 15 21:09:23 2010 > [0]PETSC ERROR: Libraries linked from > /cworkspace/ifp-32-2/hasegawa/hsharma/apps/petsc-3.0.0p11-fltReal/lib > [0]PETSC ERROR: Configure run at Wed Mar 24 14:18:28 2010 > [0]PETSC ERROR: Configure options > --prefix=/cworkspace/ifp-32-2/hasegawa/hsharma/apps/petsc-3.0.0p11-fltReal > --with-blas-lapack-dir=/cworkspace/ifp-32-2/hasegawa/hsharma/apps/myLib > --with-mpi-dir=/cworkspace/ifp-32-2/hasegawa/hsharma/apps/mpich2-install > --with-large-file-io=1 --with-shared=0 --with-scalar-type=real > --with-precision=single --with-c++-support --with-c-support > --with-64-bit-indices=0 --with-log=1 --with-info=1 > [0]PETSC ERROR: > ------------------------------------------------------------------------ > [0]PETSC ERROR: User provided function() line 0 in unknown directory > unknown file > [3]PETSC ERROR: rank 2 in job 4 ifp-32.ifp.uiuc.edu_57355 caused > collective abort of all ranks > exit status of rank 2: return code 1 > rank 1 in job 4 ifp-32.ifp.uiuc.edu_57355 caused collective abort of all > ranks > exit status of rank 1: return code 1 > > > ----------------------------------------------------------------------------------------------------------------------------------------------------------------------------- > -- What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. -- Norbert Wiener -------------- next part -------------- An HTML attachment was scrubbed... URL: From bsmith at mcs.anl.gov Thu Apr 15 21:39:15 2010 From: bsmith at mcs.anl.gov (Barry Smith) Date: Thu, 15 Apr 2010 21:39:15 -0500 Subject: [petsc-users] PetscBinaryWrite/PetscBinarySynchronizedWrite SEGV fault In-Reply-To: References: Message-ID: <0F441F8B-6BFF-429E-A5C4-90F616B534B4@mcs.anl.gov> You should run your code under valgrind http://www.mcs.anl.gov/petsc/petsc-as/documentation/faq.html#valgrind to find the memory corruption or use of uninitialized or out of range memory that is causing the problem. With valgrind you will find your bug in less than five minutes, without it you could waste hours futzing around before finding the exact problem. Barry On Apr 15, 2010, at 9:14 PM, Harsh Sharma wrote: > Hi, > > I'm a first-time user of the PETSc toolkit. > > I'm getting a "Caught signal number 11 SEGV: Segmentation Violation, > probably memory access out of range" error. > > My program (called "pPCA_makeC0" below) is doing simple stuff : > create a bunch of vectors, set its components to random numbers > using VecSetRandom(), then compute the NORM_2 type vector-norm for > each of them, and finally write them to a binary file. > > My vectors are quite large (2821728 dimensions) and even if I create > just one such vector, the above-mentioned error occurs. From the > output of the program, it appears that MPI is having some issue with > the binary-file-writing part of the program. > > This problem occurs regardless of the number of processes/processors > I use when invoking petscmpiexec. I've pasted the erroneous output > at the end of this mail, for two scenarios: 1 processor and 1 or 5 > processes. > > Any help in resolving this would be much appreciated. Thanks!
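A minimal sketch of Barry's valgrind suggestion (memcheck is valgrind's default tool; the arguments are the poster's, with paths shortened, and running a single rank first keeps the output readable):

    # run one MPI rank under valgrind's memcheck
    mpiexec -np 1 valgrind -q --leak-check=yes ./pPCA_makeC0 -c 1 -m C0.mat -D 2821728 -k 5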
> > Best, > Harsh > > ------------------------------------------------------------------------------------------------------------------------------------------------------------------------- > output of my program (D = vector-dimension = 2821728 here, k = > number of vectors = 5 here), 1 process, 1 processor > ------------------------------------------------------------------------------------------------------------------------------------------------------------------------- > > /cworkspace/ifp-32-2/hasegawa/hsharma/apps/petsc-3.0.0p11-fltReal/ > bin/petscmpiexec -np 1 ./pPCA_makeC0 -c 1 -m /cworkspace/ifp-32-2/ > hasegawa/hsharma/testPPCA/C0.mat -D 2821728 -k 5 > 2-norm of column 1 of C0 = 969.272 > 2-norm of column 2 of C0 = 969.218 > 2-norm of column 3 of C0 = 969.087 > 2-norm of column 4 of C0 = 969.599 > 2-norm of column 5 of C0 = 969.547 > [0]PETSC ERROR: > ------------------------------------------------------------------------ > [0]PETSC ERROR: Caught signal number 11 SEGV: Segmentation > Violation, probably memory access out of range > [0]PETSC ERROR: Try option -start_in_debugger or - > on_error_attach_debugger > [0]PETSC ERROR: or see http://www.mcs.anl.gov/petsc/petsc-as/documentation/troubleshooting.html#Signal > [0]PETSC ERROR: or try http://valgrind.org on linux or man > libgmalloc on Apple to find memory corruption errors > [0]PETSC ERROR: likely location of problem given in stack below > [0]PETSC ERROR: --------------------- Stack Frames > ------------------------------------ > [0]PETSC ERROR: Note: The EXACT line numbers in the stack are not > available, > [0]PETSC ERROR: INSTEAD the line number of the start of the > function > [0]PETSC ERROR: is given. > [0]PETSC ERROR: [0] PetscByteSwapScalar line 133 src/sys/fileio/ > sysio.c > [0]PETSC ERROR: [0] PetscByteSwap line 179 src/sys/fileio/sysio.c > [0]PETSC ERROR: [0] PetscBinaryWrite line 315 src/sys/fileio/sysio.c > [0]PETSC ERROR: [0] PetscBinarySynchronizedWrite line 585 src/sys/ > fileio/sysio.c > [0]PETSC ERROR: --------------------- Error Message > ------------------------------------ > [0]PETSC ERROR: Signal received! > [0]PETSC ERROR: > ------------------------------------------------------------------------ > [0]PETSC ERROR: Petsc Release Version 3.0.0, Patch 11, Mon Feb 1 > 11:01:51 CST 2010 > [0]PETSC ERROR: See docs/changes/index.html for recent updates. > [0]PETSC ERROR: See docs/faq.html for hints about trouble shooting. > [0]PETSC ERROR: See docs/index.html for manual pages. 
> [0]PETSC ERROR: > ------------------------------------------------------------------------ > [0]PETSC ERROR: ./pPCA_makeC0 on a lx26-amd6 named > ifp-32.ifp.uiuc.edu by hsharma Thu Apr 15 21:03:44 2010 > [0]PETSC ERROR: Libraries linked from /cworkspace/ifp-32-2/hasegawa/ > hsharma/apps/petsc-3.0.0p11-fltReal/lib > [0]PETSC ERROR: Configure run at Wed Mar 24 14:18:28 2010 > [0]PETSC ERROR: Configure options --prefix=/cworkspace/ifp-32-2/ > hasegawa/hsharma/apps/petsc-3.0.0p11-fltReal --with-blas-lapack-dir=/ > cworkspace/ifp-32-2/hasegawa/hsharma/apps/myLib --with-mpi-dir=/ > cworkspace/ifp-32-2/hasegawa/hsharma/apps/mpich2-install --with- > large-file-io=1 --with-shared=0 --with-scalar-type=real --with- > precision=single --with-c++-support --with-c-support --with-64-bit- > indices=0 --with-log=1 --with-info=1 > [0]PETSC ERROR: > ------------------------------------------------------------------------ > [0]PETSC ERROR: User provided function() line 0 in unknown directory > unknown file > application called MPI_Abort(MPI_COMM_WORLD, 59) - process 0 > rank 0 in job 3 ifp-32.ifp.uiuc.edu_57355 caused collective abort > of all ranks > exit status of rank 0: return code 59 > > > ----------------------------------------------------------------------------------------------------------------------------------------------------------------------------- > output of my program (D = vector-dimension = 2821728 here, k = > number of vectors = 5 here), 5 processes, 1 processor > ----------------------------------------------------------------------------------------------------------------------------------------------------------------------------- > > > /cworkspace/ifp-32-2/hasegawa/hsharma/apps/petsc-3.0.0p11-fltReal/ > bin/petscmpiexec -np 5 ./pPCA_makeC0 -c 1 -m /cworkspace/ifp-32-2/ > hasegawa/hsharma/testPPCA/C0.mat -D 2821728 -k 5 > 2-norm of column 1 of C0 = 970.149 > 2-norm of column 2 of C0 = 969.699 > 2-norm of column 3 of C0 = 969.517 > 2-norm of column 4 of C0 = 970.253 > 2-norm of column 5 of C0 = 969.81 > [0]PETSC ERROR: > ------------------------------------------------------------------------ > [0]PETSC ERROR: Caught signal number 11 SEGV: Segmentation > Violation, probably memory access out of range > [0]PETSC ERROR: Try option -start_in_debugger or - > on_error_attach_debugger > [0]PETSC ERROR: or see http://www.mcs.anl.gov/petsc/petsc-as/documentation/troubleshooting.html#Signal > [0]PETSC ERROR: or try http://valgrind.org on linux or man > libgmalloc on Apple to find memory corruption errors > [0]PETSC ERROR: likely location of problem given in stack below > [0]PETSC ERROR: --------------------- Stack Frames > ------------------------------------ > [1]PETSC ERROR: --------------------- Error Message > ------------------------------------ > [1]PETSC ERROR: Write to file failed! > [1]PETSC ERROR: Error writing to file.! > [1]PETSC ERROR: > ------------------------------------------------------------------------ > [1]PETSC ERROR: Petsc Release Version 3.0.0, Patch 11, Mon Feb 1 > 11:01:51 CST 2010 > [1]PETSC ERROR: See docs/changes/index.html for recent updates. > [1]PETSC ERROR: See docs/faq.html for hints about trouble shooting. > [1]PETSC ERROR: See docs/index.html for manual pages. 
> [1]PETSC ERROR: > ------------------------------------------------------------------------ > [1]PETSC ERROR: ./pPCA_makeC0 on a lx26-amd6 named > ifp-32.ifp.uiuc.edu by hsharma Thu Apr 15 21:09:23 2010 > [1]PETSC ERROR: Libraries linked from /cworkspace/ifp-32-2/hasegawa/ > hsharma/apps/petsc-3.0.0p11-fltReal/lib > [1]PETSC ERROR: Configure run at Wed Mar 24 14:18:28 2010 > [1]PETSC ERROR: Configure options --prefix=/cworkspace/ifp-32-2/ > hasegawa/hsharma/apps/petsc-3.0.0p11-fltReal --with-blas-lapack-dir=/ > cwo[2]PETSC ERROR: --------------------- Error Message > ------------------------------------ > [2]PETSC ERROR: Write to file failed! > [2]PETSC ERROR: Error writing to file.! > [2]PETSC ERROR: > ------------------------------------------------------------------------ > [2]PETSC ERROR: Petsc Release Version 3.0.0, Patch 11, Mon Feb 1 > 11:01:51 CST 2010 > [2]PETSC ERROR: See docs/changes/index.html for recent updates. > [2]PETSC ERROR: See docs/faq.html for hints about trouble shooting. > [2]PETSC ERROR: See docs/index.html for manual pages. > [2]PETSC ERROR: > ------------------------------------------------------------------------ > [2]PETSC ERROR: ./pPCA_makeC0 on a lx26-amd6 named > ifp-32.ifp.uiuc.edu by hsharma Thu Apr 15 21:09:23 2010 > [2]PETSC ERROR: Libraries linked from /cworkspace/ifp-32-2/hasegawa/ > hsharma/apps/petsc-3.0.0p11-fltReal/lib > [2]PETSC ERROR: Configure run at Wed Mar 24 14:18:28 2010 > [2]PETSC ERROR: Configure options --prefix=/cworkspace/ifp-32-2/ > hasegawa/hsharma/apps/petsc-3.0.0p11-fltReal --with-blas-lapack-dir=/ > cworkspace/ifp-32-2/hasegawa/hsharma/apps/myLib --with-mpi-dir=/ > cworkspace/ifp-32-2/hasegawa/hsharma/apps/mpich2-install --with- > large-file-io=1 --with-shared=0 --with-scalar-type=real --with- > precision=single --with-c++-support --with-c-support --with-64-bit- > indices=0 --with-log=1 --with-info=1 > [1]PETSC ERROR: > ------------------------------------------------------------------------ > [1]PETSC ERROR: PetscBinaryWrite() line 337 in src/sys/fileio/sysio.c > [1]PETSC ERROR: PetscBinarySynchronizedWrite() line 588 in src/sys/ > fileio/sysio.c > [1]PETSC ERROR: --------------------- Error Message > ------------------------------------ > [1]PETSC ERROR: Error: Unable to write C0 row dimension D! > [1]PETSC ERROR: > ------------------------------------------------------------------------ > [1]PETSC ERROR: Petsc Release Version 3.0.0, Patch 11, Mon Feb 1 > 11:01:51 CST 2010 > [1]PETSC ERROR: See docs/changes/index.html for recent updates. > [1]PETSC ERROR: See docs/faq.html for hints about trouble shooting. > [1]PETSC ERROR: See docs/index.html forkspace/ifp-32-2/hasegawa/ > hsharma/apps/myLib --with-mpi-dir=/cworkspace/ifp-32-2/hasegawa/ > hsharma/apps/mpich2-install --with-large-file-io=1 --with-shared=0 -- > with-scalar-type=real --with-precision=single --with-c++-support -- > with-c-support --with-64-bit-indices=0 --with-log=1 --with-info=1 > [2]PETSC ERROR: > ------------------------------------------------------------------------ > [2]PETSC ERROR: PetscBinaryWrite() line 337 in src/sys/fileio/sysio.c > [2]PETSC ERROR: PetscBinarySynchronizedWrite() line 588 in src/sys/ > fileio/sysio.c > [2]PETSC ERROR: --------------------- Error Message > ------------------------------------ > [2]PETSC ERROR: Error: Unable to write C0 row dimension D! 
> [2]PETSC ERROR: > ------------------------------------------------------------------------ > [2]PETSC ERROR: Petsc Release Version 3.0.0, Patch 11, Mon Feb 1 > 11:01:51 CST 2010 > [2]PETSC ERROR: See docs/changes/index.html for recent updates. > [2]PETSC ERROR: See docs/faq.html for hints about trouble shooting. > [2]PETSC ERROR: See docs/index.html for manual pages. > [1]PETSC ERROR: > ------------------------------------------------------------------------ > [1]PETSC ERROR: ./pPCA_makeC0 on a lx26-amd6 named > ifp-32.ifp.uiuc.edu by hsharma Thu Apr 15 21:09:23 2010 > [1]PETSC ERROR: Libraries linked from /cworkspace/ifp-32-2/hasegawa/ > hsharma/apps/petsc-3.0.0p11-fltReal/lib > [1]PETSC ERROR: Configure run at Wed Mar 24 14:18:28 2010 > [1]PETSC ERROR: Configure options --prefix=/cworkspace/ifp-32-2/ > hasegawa/hsharma/apps/petsc-3.0.0p11-fltReal --with-blas-lapack-dir=/ > cworkspace/ifp-32-2/hasegawa/hsharma/apps/myLib --with-mpi-dir=/ > cworkspace/ifp-32-2/hasegawa/hsharma/apps/mpich2-install --with- > large-file-io=1 --with-shared=0 --with-scalar-type=real --with- > precision=single --with-c++-support --with-c-support --with-64-bit- > indices=0 --with-log=1 --with-info=1 > [1]PETSC ERROR: > ------------------------------------------------------------------------ > [1]PETSC ERROR: User provided function() line 123 in /cworkspace/ > ifp-32-2/hasegawa/hsharma/testPPCA/pPCA_makeC0.c > r manual pages. > [2]PETSC ERROR: > ------------------------------------------------------------------------ > [2]PETSC ERROR: ./pPCA_makeC0 on a lx26-amd6 named > ifp-32.ifp.uiuc.edu by hsharma Thu Apr 15 21:09:23 2010 > [2]PETSC ERROR: Libraries linked from /cworkspace/ifp-32-2/hasegawa/ > hsharma/apps/petsc-3.0.0p11-fltReal/lib > [2]PETSC ERROR: Configure run at Wed Mar 24 14:18:28 2010 > [2]PETSC ERROR: Configure options --prefix=/cworkspace/ifp-32-2/ > hasegawa/hsharma/apps/petsc-3.0.0p11-fltReal --with-blas-lapack-dir=/ > cworkspace/ifp-32-2/hasegawa/hsharma/apps/myLib --with-mpi-dir=/ > cworkspace/ifp-32-2/hasegawa/hsharma/apps/mpich2-install --with- > large-file-io=1 --with-shared=0 --with-scalar-type=real --with- > precision=single --with-c++-support --with-c-support --with-64-bit- > indices=0 --with-log=1 --with-info=1 > [2]PETSC ERROR: > ------------------------------------------------------------------------ > [2]PETSC ERROR: User provided function() line 123 in /cworkspace/ > ifp-32-2/hasegawa/hsharma/testPPCA/pPCA_makeC0.c > [0]PETSC ERROR: Note: The EXACT line numbers in the stack are not > available, > [0]PETSC ERROR: INSTEAD the line number of the start of the > function > [0]PETSC ERROR: is given. > [0]PETSC ERROR: [0] PetscByteSwapScalar line 133 src/sys/fileio/ > sysio.c > [0]PETSC ERROR: [0] PetscByteSwap line 179 src/sys/fileio/sysio.c > [0]PETSC ERROR: [0] PetscBinaryWrite line 315 src/sys/fileio/sysio.c > [0]PETSC ERROR: [0] PetscBinarySynchronizedWrite line 585 src/sys/ > fileio/sysio.c > [0]PETSC ERROR: --------------------- Error Message > ------------------------------------ > [0]PETSC ERROR: Signal received! > [0]PETSC ERROR: > ------------------------------------------------------------------------ > [0]PETSC ERROR: Petsc Release Version 3.0.0, Patch 11, Mon Feb 1 > 11:01:51 CST 2010 > [0]PETSC ERROR: See docs/changes/index.html for recent updates. > [0]PETSC ERROR: See docs/faq.html for hints about trouble shooting. > [0]PETSC ERROR: See docs/index.html for manual pages. 
> [0]PETSC ERROR: > ------------------------------------------------------------------------ > [0]PETSC ERROR: ./pPCA_makeC0 on a lx26-amd6 named > ifp-32.ifp.uiuc.edu by hsharma Thu Apr 15 21:09:23 2010 > [0]PETSC ERROR: Libraries linked from /cworkspace/ifp-32-2/hasegawa/ > hsharma/apps/petsc-3.0.0p11-fltReal/lib > [0]PETSC ERROR: Configure run at Wed Mar 24 14:18:28 2010 > [0]PETSC ERROR: Configure options --prefix=/cworkspace/ifp-32-2/ > hasegawa/hsharma/apps/petsc-3.0.0p11-fltReal --with-blas-lapack-dir=/ > cworkspace/ifp-32-2/hasegawa/hsharma/apps/myLib --with-mpi-dir=/ > cworkspace/ifp-32-2/hasegawa/hsharma/apps/mpich2-install --with- > large-file-io=1 --with-shared=0 --with-scalar-type=real --with- > precision=single --with-c++-support --with-c-support --with-64-bit- > indices=0 --with-log=1 --with-info=1 > [0]PETSC ERROR: > ------------------------------------------------------------------------ > [0]PETSC ERROR: User provided function() line 0 in unknown directory > unknown file > [3]PETSC ERROR: rank 2 in job 4 ifp-32.ifp.uiuc.edu_57355 caused > collective abort of all ranks > exit status of rank 2: return code 1 > rank 1 in job 4 ifp-32.ifp.uiuc.edu_57355 caused collective abort > of all ranks > exit status of rank 1: return code 1 > > ----------------------------------------------------------------------------------------------------------------------------------------------------------------------------- -------------- next part -------------- An HTML attachment was scrubbed... URL: From zonexo at gmail.com Thu Apr 15 22:27:29 2010 From: zonexo at gmail.com (Wee-Beng Tay) Date: Fri, 16 Apr 2010 11:27:29 +0800 Subject: [petsc-users] Error during compiling my own code Message-ID: Hi, I have successfully built the PETSc libraries no my linux system. make ex1f also works. However, when compiling my own code, I got the error: [atlas5-c01]$ /app1/mvapich2/current/bin/mpif90 -c -O3 -I/home/svu/g0306332/codes/petsc-3.1-p0/atlas5-mpi-nodebug/include -I/home/svu/g0306332/codes/petsc-3.1-p0/include -I/home/svu/g0306332/lib/hypre-2.6.0b_atlas5/include -I/app1/mvapich2/1.4/include -I/app1/mvapich2/current/include -I/home/svu/g0306332/codes/petsc-3.1-p0/atlas5-mpi-nodebug/include -I/home/svu/g0306332/lib/hypre-2.6.0b_atlas5/include -I/app1/mvapich2/1.4/include -I/app1/mvapich2/current/include -o global.o global.F -132 /home/svu/g0306332/codes/petsc-3.1-p0/include/finclude/petscvec.h(10): error #6418: This name has already been assigned a data type. [NORM_1] integer(kind=selected_int_kind(5)) NORM_1 -----------------------------------------^ /home/svu/g0306332/codes/petsc-3.1-p0/include/finclude/petscvec.h(11): error #6418: This name has already been assigned a data type. [NORM_2] integer(kind=selected_int_kind(5)) NORM_2 -----------------------------------------^ /home/svu/g0306332/codes/petsc-3.1-p0/include/finclude/petscvec.h(12): error #6418: This name has already been assigned a data type. [NORM_FROBENIUS] integer(kind=selected_int_kind(5)) NORM_FROBENIUS -----------------------------------------^ /home/svu/g0306332/codes/petsc-3.1-p0/include/finclude/petscvec.h(13): error #6418: This name has already been assigned a data type. [NORM_INFINITY] integer(kind=selected_int_kind(5)) NORM_INFINITY -----------------------------------------^ /home/svu/g0306332/codes/petsc-3.1-p0/include/finclude/petscvec.h(14): error #6418: This name has already been assigned a data type. 
[NORM_MAX] integer(kind=selected_int_kind(5)) NORM_MAX -----------------------------------------^ /home/svu/g0306332/codes/petsc-3.1-p0/include/finclude/petscvec.h(15): error #6418: This name has already been assigned a data type. [NORM_1_AND_2] integer(kind=selected_int_kind(5)) NORM_1_AND_2 -----------------------------------------^ /home/svu/g0306332/codes/petsc-3.1-p0/include/finclude/petscvec.h(22): error #6418: This name has already been assigned a data type. [NOT_SET_VALUES] integer(kind=selected_int_kind(5)) NOT_SET_VALUES -----------------------------------------^ /home/svu/g0306332/codes/petsc-3.1-p0/include/finclude/petscvec.h(23): error #6418: This name has already been assigned a data type. [INSERT_VALUES] integer(kind=selected_int_kind(5)) INSERT_VALUES -----------------------------------------^ /home/svu/g0306332/codes/petsc-3.1-p0/include/finclude/petscvec.h(24): error #6418: This name has already been assigned a data type. [ADD_VALUES] integer(kind=selected_int_kind(5)) ADD_VALUES -----------------------------------------^ /home/svu/g0306332/codes/petsc-3.1-p0/include/finclude/petscvec.h(25): error #6418: This name has already been assigned a data type. [MAX_VALUES] integer(kind=selected_int_kind(5)) MAX_VALUES -----------------------------------------^ /home/svu/g0306332/codes/petsc-3.1-p0/include/finclude/petscvec.h(32): error #6418: This name has already been assigned a data type. [SCATTER_FORWARD] integer(kind=selected_int_kind(5)) SCATTER_FORWARD -----------------------------------------^ /home/svu/g0306332/codes/petsc-3.1-p0/include/finclude/petscvec.h(33): error #6418: This name has already been assigned a data type. [SCATTER_REVERSE] integer(kind=selected_int_kind(5)) SCATTER_REVERSE -----------------------------------------^ /home/svu/g0306332/codes/petsc-3.1-p0/include/finclude/petscvec.h(34): error #6418: This name has already been assigned a data type. [SCATTER_FORWARD_LOCAL] integer(kind=selected_int_kind(5)) SCATTER_FORWARD_LOCAL -----------------------------------------^ /home/svu/g0306332/codes/petsc-3.1-p0/include/finclude/petscvec.h(35): error #6418: This name has already been assigned a data type. [SCATTER_REVERSE_LOCAL] integer(kind=selected_int_kind(5)) SCATTER_REVERSE_LOCAL -----------------------------------------^ /home/svu/g0306332/codes/petsc-3.1-p0/include/finclude/petscvec.h(36): error #6418: This name has already been assigned a data type. [SCATTER_LOCAL] integer(kind=selected_int_kind(5)) SCATTER_LOCAL -----------------------------------------^ /home/svu/g0306332/codes/petsc-3.1-p0/include/finclude/petscvec.h(44): error #6418: This name has already been assigned a data type. [VEC_IGNORE_OFF_PROC_ENTRIES] integer(kind=selected_int_kind(5)) VEC_IGNORE_OFF_PROC_ENTRIES -----------------------------------------^ /home/svu/g0306332/codes/petsc-3.1-p0/include/finclude/petscvec.h(45): error #6418: This name has already been assigned a data type. [VEC_IGNORE_NEGATIVE_INDICES] integer(kind=selected_int_kind(5)) VEC_IGNORE_NEGATIVE_INDICES -----------------------------------------^ /home/svu/g0306332/codes/petsc-3.1-p0/include/finclude/petscvec.h(53): error #6418: This name has already been assigned a data type. [VECOP_VIEW] integer(kind=selected_int_kind(5)) VECOP_VIEW -----------------------------------------^ /home/svu/g0306332/codes/petsc-3.1-p0/include/finclude/petscvec.h(54): error #6418: This name has already been assigned a data type. 
[VECOP_LOADINTOVECTOR] integer(kind=selected_int_kind(5)) VECOP_LOADINTOVECTOR -----------------------------------------^ /home/svu/g0306332/codes/petsc-3.1-p0/include/finclude/petscmat.h(10): error #6418: This name has already been assigned a data type. [MAT_FLUSH_ASSEMBLY] integer(kind=selected_int_kind(5)) MAT_FLUSH_ASSEMBLY -----------------------------------------^ /home/svu/g0306332/codes/petsc-3.1-p0/include/finclude/petscmat.h(11): error #6418: This name has already been assigned a data type. [MAT_FINAL_ASSEMBLY] integer(kind=selected_int_kind(5)) MAT_FINAL_ASSEMBLY -----------------------------------------^ /home/svu/g0306332/codes/petsc-3.1-p0/include/finclude/petscmat.h(17): error #6418: This name has already been assigned a data type. [MAT_FACTO I don't remember having this error in prev version of PETSc. May I know what went wrong? The 1st few lines of my code are: module global_data implicit none save #include "finclude/petsc.h" #include "finclude/petscvec.h" #include "finclude/petscmat.h" #include "finclude/petscksp.h" #include "finclude/petscpc.h" #include "finclude/petscsys.h" integer :: size_x,size_y,Total_time_step,new_start,interval,gridgen,safe_int,OS,airfoil_no integer :: steady,quasi_steady,Total_k,time,mom_solver,poisson_solver,start_time,motion !size_x must be in multiples of 37/32/36/40/55/41, !size_y must be in multiples of 26/16/36 !gridgen1 - 32x20, gridgen4 - 30x22, gridgen5 - 30x24 real(8) :: CFL, Re, scheme, B, AA, BB,AB, ld,air_centy,Pi,hy0,k0,freq,phase_ang,theta0,loc_rot real(8) :: time_sta,act_time,vel_h,vel_hn,inv_Re Thanks alot! -------------- next part -------------- An HTML attachment was scrubbed... URL: From knepley at gmail.com Thu Apr 15 22:38:48 2010 From: knepley at gmail.com (Matthew Knepley) Date: Thu, 15 Apr 2010 22:38:48 -0500 Subject: [petsc-users] Error during compiling my own code In-Reply-To: References: Message-ID: If you are using petsc-dev, you only need petsc.h Matt On Thu, Apr 15, 2010 at 10:27 PM, Wee-Beng Tay wrote: > Hi, > > I have successfully built the PETSc libraries no my linux system. > > make ex1f also works. > > However, when compiling my own code, I got the error: > > [atlas5-c01]$ /app1/mvapich2/current/bin/mpif90 -c -O3 > -I/home/svu/g0306332/codes/petsc-3.1-p0/atlas5-mpi-nodebug/include > -I/home/svu/g0306332/codes/petsc-3.1-p0/include > -I/home/svu/g0306332/lib/hypre-2.6.0b_atlas5/include > -I/app1/mvapich2/1.4/include -I/app1/mvapich2/current/include > -I/home/svu/g0306332/codes/petsc-3.1-p0/atlas5-mpi-nodebug/include > -I/home/svu/g0306332/lib/hypre-2.6.0b_atlas5/include > -I/app1/mvapich2/1.4/include -I/app1/mvapich2/current/include -o global.o > global.F -132 > /home/svu/g0306332/codes/petsc-3.1-p0/include/finclude/petscvec.h(10): > error #6418: This name has already been assigned a data type. [NORM_1] > integer(kind=selected_int_kind(5)) NORM_1 > -----------------------------------------^ > /home/svu/g0306332/codes/petsc-3.1-p0/include/finclude/petscvec.h(11): > error #6418: This name has already been assigned a data type. [NORM_2] > integer(kind=selected_int_kind(5)) NORM_2 > -----------------------------------------^ > /home/svu/g0306332/codes/petsc-3.1-p0/include/finclude/petscvec.h(12): > error #6418: This name has already been assigned a data type. > [NORM_FROBENIUS] > integer(kind=selected_int_kind(5)) NORM_FROBENIUS > -----------------------------------------^ > /home/svu/g0306332/codes/petsc-3.1-p0/include/finclude/petscvec.h(13): > error #6418: This name has already been assigned a data type. 
> [NORM_INFINITY] > integer(kind=selected_int_kind(5)) NORM_INFINITY > -----------------------------------------^ > /home/svu/g0306332/codes/petsc-3.1-p0/include/finclude/petscvec.h(14): > error #6418: This name has already been assigned a data type. [NORM_MAX] > integer(kind=selected_int_kind(5)) NORM_MAX > -----------------------------------------^ > /home/svu/g0306332/codes/petsc-3.1-p0/include/finclude/petscvec.h(15): > error #6418: This name has already been assigned a data type. > [NORM_1_AND_2] > integer(kind=selected_int_kind(5)) NORM_1_AND_2 > -----------------------------------------^ > /home/svu/g0306332/codes/petsc-3.1-p0/include/finclude/petscvec.h(22): > error #6418: This name has already been assigned a data type. > [NOT_SET_VALUES] > integer(kind=selected_int_kind(5)) NOT_SET_VALUES > -----------------------------------------^ > /home/svu/g0306332/codes/petsc-3.1-p0/include/finclude/petscvec.h(23): > error #6418: This name has already been assigned a data type. > [INSERT_VALUES] > integer(kind=selected_int_kind(5)) INSERT_VALUES > -----------------------------------------^ > /home/svu/g0306332/codes/petsc-3.1-p0/include/finclude/petscvec.h(24): > error #6418: This name has already been assigned a data type. [ADD_VALUES] > integer(kind=selected_int_kind(5)) ADD_VALUES > -----------------------------------------^ > /home/svu/g0306332/codes/petsc-3.1-p0/include/finclude/petscvec.h(25): > error #6418: This name has already been assigned a data type. [MAX_VALUES] > integer(kind=selected_int_kind(5)) MAX_VALUES > -----------------------------------------^ > /home/svu/g0306332/codes/petsc-3.1-p0/include/finclude/petscvec.h(32): > error #6418: This name has already been assigned a data type. > [SCATTER_FORWARD] > integer(kind=selected_int_kind(5)) SCATTER_FORWARD > -----------------------------------------^ > /home/svu/g0306332/codes/petsc-3.1-p0/include/finclude/petscvec.h(33): > error #6418: This name has already been assigned a data type. > [SCATTER_REVERSE] > integer(kind=selected_int_kind(5)) SCATTER_REVERSE > -----------------------------------------^ > /home/svu/g0306332/codes/petsc-3.1-p0/include/finclude/petscvec.h(34): > error #6418: This name has already been assigned a data type. > [SCATTER_FORWARD_LOCAL] > integer(kind=selected_int_kind(5)) SCATTER_FORWARD_LOCAL > -----------------------------------------^ > /home/svu/g0306332/codes/petsc-3.1-p0/include/finclude/petscvec.h(35): > error #6418: This name has already been assigned a data type. > [SCATTER_REVERSE_LOCAL] > integer(kind=selected_int_kind(5)) SCATTER_REVERSE_LOCAL > -----------------------------------------^ > /home/svu/g0306332/codes/petsc-3.1-p0/include/finclude/petscvec.h(36): > error #6418: This name has already been assigned a data type. > [SCATTER_LOCAL] > integer(kind=selected_int_kind(5)) SCATTER_LOCAL > -----------------------------------------^ > /home/svu/g0306332/codes/petsc-3.1-p0/include/finclude/petscvec.h(44): > error #6418: This name has already been assigned a data type. > [VEC_IGNORE_OFF_PROC_ENTRIES] > integer(kind=selected_int_kind(5)) VEC_IGNORE_OFF_PROC_ENTRIES > -----------------------------------------^ > /home/svu/g0306332/codes/petsc-3.1-p0/include/finclude/petscvec.h(45): > error #6418: This name has already been assigned a data type. 
> [VEC_IGNORE_NEGATIVE_INDICES] > integer(kind=selected_int_kind(5)) VEC_IGNORE_NEGATIVE_INDICES > -----------------------------------------^ > /home/svu/g0306332/codes/petsc-3.1-p0/include/finclude/petscvec.h(53): > error #6418: This name has already been assigned a data type. [VECOP_VIEW] > integer(kind=selected_int_kind(5)) VECOP_VIEW > -----------------------------------------^ > /home/svu/g0306332/codes/petsc-3.1-p0/include/finclude/petscvec.h(54): > error #6418: This name has already been assigned a data type. > [VECOP_LOADINTOVECTOR] > integer(kind=selected_int_kind(5)) VECOP_LOADINTOVECTOR > -----------------------------------------^ > /home/svu/g0306332/codes/petsc-3.1-p0/include/finclude/petscmat.h(10): > error #6418: This name has already been assigned a data type. > [MAT_FLUSH_ASSEMBLY] > integer(kind=selected_int_kind(5)) MAT_FLUSH_ASSEMBLY > -----------------------------------------^ > /home/svu/g0306332/codes/petsc-3.1-p0/include/finclude/petscmat.h(11): > error #6418: This name has already been assigned a data type. > [MAT_FINAL_ASSEMBLY] > integer(kind=selected_int_kind(5)) MAT_FINAL_ASSEMBLY > -----------------------------------------^ > /home/svu/g0306332/codes/petsc-3.1-p0/include/finclude/petscmat.h(17): > error #6418: This name has already been assigned a data type. [MAT_FACTO > > I don't remember having this error in prev version of PETSc. May I know > what went wrong? > > The 1st few lines of my code are: > > module global_data > > implicit none > > save > > #include "finclude/petsc.h" > #include "finclude/petscvec.h" > #include "finclude/petscmat.h" > #include "finclude/petscksp.h" > #include "finclude/petscpc.h" > #include "finclude/petscsys.h" > > > > integer :: > size_x,size_y,Total_time_step,new_start,interval,gridgen,safe_int,OS,airfoil_no > > integer :: > steady,quasi_steady,Total_k,time,mom_solver,poisson_solver,start_time,motion > > !size_x must be in multiples of 37/32/36/40/55/41, !size_y must be in > multiples of 26/16/36 > > !gridgen1 - 32x20, gridgen4 - 30x22, gridgen5 - 30x24 > > real(8) :: CFL, Re, scheme, B, AA, BB,AB, > ld,air_centy,Pi,hy0,k0,freq,phase_ang,theta0,loc_rot > > real(8) :: time_sta,act_time,vel_h,vel_hn,inv_Re > > Thanks alot! > -- What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. -- Norbert Wiener -------------- next part -------------- An HTML attachment was scrubbed... URL: From zonexo at gmail.com Thu Apr 15 22:53:13 2010 From: zonexo at gmail.com (Wee-Beng Tay) Date: Fri, 16 Apr 2010 11:53:13 +0800 Subject: [petsc-users] Error during compiling my own code In-Reply-To: References: Message-ID: Hi Matt, I'm using petsc-3.1-p0. But it's now working. I only use 1 #include "finclude/petsc.h" now. However when another of my f90 file has mpi command inside, I got the error: /app1/mvapich2/current/include/mpif.h(9): error #6401: The attributes of this name conflict with those made accessible by a USE statement. [MPI_SOURCE] INTEGER MPI_SOURCE, MPI_TAG, MPI_ERROR ---------------^ /app1/mvapich2/current/include/mpif.h(9): error #6401: The attributes of this name conflict with those made accessible by a USE statement. [MPI_TAG] INTEGER MPI_SOURCE, MPI_TAG, MPI_ERROR ---------------------------^ /app1/mvapich2/current/include/mpif.h(9): error #6401: The attributes of this name conflict with those made accessible by a USE statement. 
[MPI_ERROR] INTEGER MPI_SOURCE, MPI_TAG, MPI_ERROR ------------------------------------^ What's happening now again? Thanks! On Fri, Apr 16, 2010 at 11:38 AM, Matthew Knepley wrote: > If you are using petsc-dev, you only need petsc.h > > Matt > > > On Thu, Apr 15, 2010 at 10:27 PM, Wee-Beng Tay wrote: > >> Hi, >> >> I have successfully built the PETSc libraries no my linux system. >> >> make ex1f also works. >> >> However, when compiling my own code, I got the error: >> >> [atlas5-c01]$ /app1/mvapich2/current/bin/mpif90 -c -O3 >> -I/home/svu/g0306332/codes/petsc-3.1-p0/atlas5-mpi-nodebug/include >> -I/home/svu/g0306332/codes/petsc-3.1-p0/include >> -I/home/svu/g0306332/lib/hypre-2.6.0b_atlas5/include >> -I/app1/mvapich2/1.4/include -I/app1/mvapich2/current/include >> -I/home/svu/g0306332/codes/petsc-3.1-p0/atlas5-mpi-nodebug/include >> -I/home/svu/g0306332/lib/hypre-2.6.0b_atlas5/include >> -I/app1/mvapich2/1.4/include -I/app1/mvapich2/current/include -o global.o >> global.F -132 >> /home/svu/g0306332/codes/petsc-3.1-p0/include/finclude/petscvec.h(10): >> error #6418: This name has already been assigned a data type. [NORM_1] >> integer(kind=selected_int_kind(5)) NORM_1 >> -----------------------------------------^ >> /home/svu/g0306332/codes/petsc-3.1-p0/include/finclude/petscvec.h(11): >> error #6418: This name has already been assigned a data type. [NORM_2] >> integer(kind=selected_int_kind(5)) NORM_2 >> -----------------------------------------^ >> /home/svu/g0306332/codes/petsc-3.1-p0/include/finclude/petscvec.h(12): >> error #6418: This name has already been assigned a data type. >> [NORM_FROBENIUS] >> integer(kind=selected_int_kind(5)) NORM_FROBENIUS >> -----------------------------------------^ >> /home/svu/g0306332/codes/petsc-3.1-p0/include/finclude/petscvec.h(13): >> error #6418: This name has already been assigned a data type. >> [NORM_INFINITY] >> integer(kind=selected_int_kind(5)) NORM_INFINITY >> -----------------------------------------^ >> /home/svu/g0306332/codes/petsc-3.1-p0/include/finclude/petscvec.h(14): >> error #6418: This name has already been assigned a data type. [NORM_MAX] >> integer(kind=selected_int_kind(5)) NORM_MAX >> -----------------------------------------^ >> /home/svu/g0306332/codes/petsc-3.1-p0/include/finclude/petscvec.h(15): >> error #6418: This name has already been assigned a data type. >> [NORM_1_AND_2] >> integer(kind=selected_int_kind(5)) NORM_1_AND_2 >> -----------------------------------------^ >> /home/svu/g0306332/codes/petsc-3.1-p0/include/finclude/petscvec.h(22): >> error #6418: This name has already been assigned a data type. >> [NOT_SET_VALUES] >> integer(kind=selected_int_kind(5)) NOT_SET_VALUES >> -----------------------------------------^ >> /home/svu/g0306332/codes/petsc-3.1-p0/include/finclude/petscvec.h(23): >> error #6418: This name has already been assigned a data type. >> [INSERT_VALUES] >> integer(kind=selected_int_kind(5)) INSERT_VALUES >> -----------------------------------------^ >> /home/svu/g0306332/codes/petsc-3.1-p0/include/finclude/petscvec.h(24): >> error #6418: This name has already been assigned a data type. [ADD_VALUES] >> integer(kind=selected_int_kind(5)) ADD_VALUES >> -----------------------------------------^ >> /home/svu/g0306332/codes/petsc-3.1-p0/include/finclude/petscvec.h(25): >> error #6418: This name has already been assigned a data type. 
[MAX_VALUES] >> integer(kind=selected_int_kind(5)) MAX_VALUES >> -----------------------------------------^ >> /home/svu/g0306332/codes/petsc-3.1-p0/include/finclude/petscvec.h(32): >> error #6418: This name has already been assigned a data type. >> [SCATTER_FORWARD] >> integer(kind=selected_int_kind(5)) SCATTER_FORWARD >> -----------------------------------------^ >> /home/svu/g0306332/codes/petsc-3.1-p0/include/finclude/petscvec.h(33): >> error #6418: This name has already been assigned a data type. >> [SCATTER_REVERSE] >> integer(kind=selected_int_kind(5)) SCATTER_REVERSE >> -----------------------------------------^ >> /home/svu/g0306332/codes/petsc-3.1-p0/include/finclude/petscvec.h(34): >> error #6418: This name has already been assigned a data type. >> [SCATTER_FORWARD_LOCAL] >> integer(kind=selected_int_kind(5)) SCATTER_FORWARD_LOCAL >> -----------------------------------------^ >> /home/svu/g0306332/codes/petsc-3.1-p0/include/finclude/petscvec.h(35): >> error #6418: This name has already been assigned a data type. >> [SCATTER_REVERSE_LOCAL] >> integer(kind=selected_int_kind(5)) SCATTER_REVERSE_LOCAL >> -----------------------------------------^ >> /home/svu/g0306332/codes/petsc-3.1-p0/include/finclude/petscvec.h(36): >> error #6418: This name has already been assigned a data type. >> [SCATTER_LOCAL] >> integer(kind=selected_int_kind(5)) SCATTER_LOCAL >> -----------------------------------------^ >> /home/svu/g0306332/codes/petsc-3.1-p0/include/finclude/petscvec.h(44): >> error #6418: This name has already been assigned a data type. >> [VEC_IGNORE_OFF_PROC_ENTRIES] >> integer(kind=selected_int_kind(5)) VEC_IGNORE_OFF_PROC_ENTRIES >> -----------------------------------------^ >> /home/svu/g0306332/codes/petsc-3.1-p0/include/finclude/petscvec.h(45): >> error #6418: This name has already been assigned a data type. >> [VEC_IGNORE_NEGATIVE_INDICES] >> integer(kind=selected_int_kind(5)) VEC_IGNORE_NEGATIVE_INDICES >> -----------------------------------------^ >> /home/svu/g0306332/codes/petsc-3.1-p0/include/finclude/petscvec.h(53): >> error #6418: This name has already been assigned a data type. [VECOP_VIEW] >> integer(kind=selected_int_kind(5)) VECOP_VIEW >> -----------------------------------------^ >> /home/svu/g0306332/codes/petsc-3.1-p0/include/finclude/petscvec.h(54): >> error #6418: This name has already been assigned a data type. >> [VECOP_LOADINTOVECTOR] >> integer(kind=selected_int_kind(5)) VECOP_LOADINTOVECTOR >> -----------------------------------------^ >> /home/svu/g0306332/codes/petsc-3.1-p0/include/finclude/petscmat.h(10): >> error #6418: This name has already been assigned a data type. >> [MAT_FLUSH_ASSEMBLY] >> integer(kind=selected_int_kind(5)) MAT_FLUSH_ASSEMBLY >> -----------------------------------------^ >> /home/svu/g0306332/codes/petsc-3.1-p0/include/finclude/petscmat.h(11): >> error #6418: This name has already been assigned a data type. >> [MAT_FINAL_ASSEMBLY] >> integer(kind=selected_int_kind(5)) MAT_FINAL_ASSEMBLY >> -----------------------------------------^ >> /home/svu/g0306332/codes/petsc-3.1-p0/include/finclude/petscmat.h(17): >> error #6418: This name has already been assigned a data type. [MAT_FACTO >> >> I don't remember having this error in prev version of PETSc. May I know >> what went wrong? 
>> >> The 1st few lines of my code are: >> >> module global_data >> >> implicit none >> >> save >> >> #include "finclude/petsc.h" >> #include "finclude/petscvec.h" >> #include "finclude/petscmat.h" >> #include "finclude/petscksp.h" >> #include "finclude/petscpc.h" >> #include "finclude/petscsys.h" >> >> >> >> integer :: >> size_x,size_y,Total_time_step,new_start,interval,gridgen,safe_int,OS,airfoil_no >> >> integer :: >> steady,quasi_steady,Total_k,time,mom_solver,poisson_solver,start_time,motion >> >> !size_x must be in multiples of 37/32/36/40/55/41, !size_y must be in >> multiples of 26/16/36 >> >> !gridgen1 - 32x20, gridgen4 - 30x22, gridgen5 - 30x24 >> >> real(8) :: CFL, Re, scheme, B, AA, BB,AB, >> ld,air_centy,Pi,hy0,k0,freq,phase_ang,theta0,loc_rot >> >> real(8) :: time_sta,act_time,vel_h,vel_hn,inv_Re >> >> Thanks alot! >> > > > > -- > What most experimenters take for granted before they begin their > experiments is infinitely more interesting than any results to which their > experiments lead. > -- Norbert Wiener > -------------- next part -------------- An HTML attachment was scrubbed... URL: From knepley at gmail.com Thu Apr 15 23:09:57 2010 From: knepley at gmail.com (Matthew Knepley) Date: Thu, 15 Apr 2010 23:09:57 -0500 Subject: [petsc-users] Error during compiling my own code In-Reply-To: References: Message-ID: It looks like you have conflicting definitions. Did you include "use mpi"? Matt On Thu, Apr 15, 2010 at 10:53 PM, Wee-Beng Tay wrote: > Hi Matt, > > I'm using petsc-3.1-p0. But it's now working. I only use 1 #include > "finclude/petsc.h" now. > > However when another of my f90 file has mpi command inside, I got the > error: > > /app1/mvapich2/current/include/mpif.h(9): error #6401: The attributes of > this name conflict with those made accessible by a USE statement. > [MPI_SOURCE] > INTEGER MPI_SOURCE, MPI_TAG, MPI_ERROR > ---------------^ > /app1/mvapich2/current/include/mpif.h(9): error #6401: The attributes of > this name conflict with those made accessible by a USE statement. > [MPI_TAG] > INTEGER MPI_SOURCE, MPI_TAG, MPI_ERROR > ---------------------------^ > /app1/mvapich2/current/include/mpif.h(9): error #6401: The attributes of > this name conflict with those made accessible by a USE statement. > [MPI_ERROR] > INTEGER MPI_SOURCE, MPI_TAG, MPI_ERROR > ------------------------------------^ > > What's happening now again? > > Thanks! > > > > On Fri, Apr 16, 2010 at 11:38 AM, Matthew Knepley wrote: > >> If you are using petsc-dev, you only need petsc.h >> >> Matt >> >> >> On Thu, Apr 15, 2010 at 10:27 PM, Wee-Beng Tay wrote: >> >>> Hi, >>> >>> I have successfully built the PETSc libraries no my linux system. >>> >>> make ex1f also works. >>> >>> However, when compiling my own code, I got the error: >>> >>> [atlas5-c01]$ /app1/mvapich2/current/bin/mpif90 -c -O3 >>> -I/home/svu/g0306332/codes/petsc-3.1-p0/atlas5-mpi-nodebug/include >>> -I/home/svu/g0306332/codes/petsc-3.1-p0/include >>> -I/home/svu/g0306332/lib/hypre-2.6.0b_atlas5/include >>> -I/app1/mvapich2/1.4/include -I/app1/mvapich2/current/include >>> -I/home/svu/g0306332/codes/petsc-3.1-p0/atlas5-mpi-nodebug/include >>> -I/home/svu/g0306332/lib/hypre-2.6.0b_atlas5/include >>> -I/app1/mvapich2/1.4/include -I/app1/mvapich2/current/include -o global.o >>> global.F -132 >>> /home/svu/g0306332/codes/petsc-3.1-p0/include/finclude/petscvec.h(10): >>> error #6418: This name has already been assigned a data type. 
[NORM_1] >>> integer(kind=selected_int_kind(5)) NORM_1 >>> -----------------------------------------^ >>> /home/svu/g0306332/codes/petsc-3.1-p0/include/finclude/petscvec.h(11): >>> error #6418: This name has already been assigned a data type. [NORM_2] >>> integer(kind=selected_int_kind(5)) NORM_2 >>> -----------------------------------------^ >>> /home/svu/g0306332/codes/petsc-3.1-p0/include/finclude/petscvec.h(12): >>> error #6418: This name has already been assigned a data type. >>> [NORM_FROBENIUS] >>> integer(kind=selected_int_kind(5)) NORM_FROBENIUS >>> -----------------------------------------^ >>> /home/svu/g0306332/codes/petsc-3.1-p0/include/finclude/petscvec.h(13): >>> error #6418: This name has already been assigned a data type. >>> [NORM_INFINITY] >>> integer(kind=selected_int_kind(5)) NORM_INFINITY >>> -----------------------------------------^ >>> /home/svu/g0306332/codes/petsc-3.1-p0/include/finclude/petscvec.h(14): >>> error #6418: This name has already been assigned a data type. [NORM_MAX] >>> integer(kind=selected_int_kind(5)) NORM_MAX >>> -----------------------------------------^ >>> /home/svu/g0306332/codes/petsc-3.1-p0/include/finclude/petscvec.h(15): >>> error #6418: This name has already been assigned a data type. >>> [NORM_1_AND_2] >>> integer(kind=selected_int_kind(5)) NORM_1_AND_2 >>> -----------------------------------------^ >>> /home/svu/g0306332/codes/petsc-3.1-p0/include/finclude/petscvec.h(22): >>> error #6418: This name has already been assigned a data type. >>> [NOT_SET_VALUES] >>> integer(kind=selected_int_kind(5)) NOT_SET_VALUES >>> -----------------------------------------^ >>> /home/svu/g0306332/codes/petsc-3.1-p0/include/finclude/petscvec.h(23): >>> error #6418: This name has already been assigned a data type. >>> [INSERT_VALUES] >>> integer(kind=selected_int_kind(5)) INSERT_VALUES >>> -----------------------------------------^ >>> /home/svu/g0306332/codes/petsc-3.1-p0/include/finclude/petscvec.h(24): >>> error #6418: This name has already been assigned a data type. [ADD_VALUES] >>> integer(kind=selected_int_kind(5)) ADD_VALUES >>> -----------------------------------------^ >>> /home/svu/g0306332/codes/petsc-3.1-p0/include/finclude/petscvec.h(25): >>> error #6418: This name has already been assigned a data type. [MAX_VALUES] >>> integer(kind=selected_int_kind(5)) MAX_VALUES >>> -----------------------------------------^ >>> /home/svu/g0306332/codes/petsc-3.1-p0/include/finclude/petscvec.h(32): >>> error #6418: This name has already been assigned a data type. >>> [SCATTER_FORWARD] >>> integer(kind=selected_int_kind(5)) SCATTER_FORWARD >>> -----------------------------------------^ >>> /home/svu/g0306332/codes/petsc-3.1-p0/include/finclude/petscvec.h(33): >>> error #6418: This name has already been assigned a data type. >>> [SCATTER_REVERSE] >>> integer(kind=selected_int_kind(5)) SCATTER_REVERSE >>> -----------------------------------------^ >>> /home/svu/g0306332/codes/petsc-3.1-p0/include/finclude/petscvec.h(34): >>> error #6418: This name has already been assigned a data type. >>> [SCATTER_FORWARD_LOCAL] >>> integer(kind=selected_int_kind(5)) SCATTER_FORWARD_LOCAL >>> -----------------------------------------^ >>> /home/svu/g0306332/codes/petsc-3.1-p0/include/finclude/petscvec.h(35): >>> error #6418: This name has already been assigned a data type. 
>>> [SCATTER_REVERSE_LOCAL] >>> integer(kind=selected_int_kind(5)) SCATTER_REVERSE_LOCAL >>> -----------------------------------------^ >>> /home/svu/g0306332/codes/petsc-3.1-p0/include/finclude/petscvec.h(36): >>> error #6418: This name has already been assigned a data type. >>> [SCATTER_LOCAL] >>> integer(kind=selected_int_kind(5)) SCATTER_LOCAL >>> -----------------------------------------^ >>> /home/svu/g0306332/codes/petsc-3.1-p0/include/finclude/petscvec.h(44): >>> error #6418: This name has already been assigned a data type. >>> [VEC_IGNORE_OFF_PROC_ENTRIES] >>> integer(kind=selected_int_kind(5)) VEC_IGNORE_OFF_PROC_ENTRIES >>> -----------------------------------------^ >>> /home/svu/g0306332/codes/petsc-3.1-p0/include/finclude/petscvec.h(45): >>> error #6418: This name has already been assigned a data type. >>> [VEC_IGNORE_NEGATIVE_INDICES] >>> integer(kind=selected_int_kind(5)) VEC_IGNORE_NEGATIVE_INDICES >>> -----------------------------------------^ >>> /home/svu/g0306332/codes/petsc-3.1-p0/include/finclude/petscvec.h(53): >>> error #6418: This name has already been assigned a data type. [VECOP_VIEW] >>> integer(kind=selected_int_kind(5)) VECOP_VIEW >>> -----------------------------------------^ >>> /home/svu/g0306332/codes/petsc-3.1-p0/include/finclude/petscvec.h(54): >>> error #6418: This name has already been assigned a data type. >>> [VECOP_LOADINTOVECTOR] >>> integer(kind=selected_int_kind(5)) VECOP_LOADINTOVECTOR >>> -----------------------------------------^ >>> /home/svu/g0306332/codes/petsc-3.1-p0/include/finclude/petscmat.h(10): >>> error #6418: This name has already been assigned a data type. >>> [MAT_FLUSH_ASSEMBLY] >>> integer(kind=selected_int_kind(5)) MAT_FLUSH_ASSEMBLY >>> -----------------------------------------^ >>> /home/svu/g0306332/codes/petsc-3.1-p0/include/finclude/petscmat.h(11): >>> error #6418: This name has already been assigned a data type. >>> [MAT_FINAL_ASSEMBLY] >>> integer(kind=selected_int_kind(5)) MAT_FINAL_ASSEMBLY >>> -----------------------------------------^ >>> /home/svu/g0306332/codes/petsc-3.1-p0/include/finclude/petscmat.h(17): >>> error #6418: This name has already been assigned a data type. [MAT_FACTO >>> >>> I don't remember having this error in prev version of PETSc. May I know >>> what went wrong? >>> >>> The 1st few lines of my code are: >>> >>> module global_data >>> >>> implicit none >>> >>> save >>> >>> #include "finclude/petsc.h" >>> #include "finclude/petscvec.h" >>> #include "finclude/petscmat.h" >>> #include "finclude/petscksp.h" >>> #include "finclude/petscpc.h" >>> #include "finclude/petscsys.h" >>> >>> >>> >>> integer :: >>> size_x,size_y,Total_time_step,new_start,interval,gridgen,safe_int,OS,airfoil_no >>> >>> integer :: >>> steady,quasi_steady,Total_k,time,mom_solver,poisson_solver,start_time,motion >>> >>> !size_x must be in multiples of 37/32/36/40/55/41, !size_y must be in >>> multiples of 26/16/36 >>> >>> !gridgen1 - 32x20, gridgen4 - 30x22, gridgen5 - 30x24 >>> >>> real(8) :: CFL, Re, scheme, B, AA, BB,AB, >>> ld,air_centy,Pi,hy0,k0,freq,phase_ang,theta0,loc_rot >>> >>> real(8) :: time_sta,act_time,vel_h,vel_hn,inv_Re >>> >>> Thanks alot! >>> >> >> >> >> -- >> What most experimenters take for granted before they begin their >> experiments is infinitely more interesting than any results to which their >> experiments lead. >> -- Norbert Wiener >> > > -- What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. 
-- Norbert Wiener -------------- next part -------------- An HTML attachment was scrubbed... URL: From zonexo at gmail.com Thu Apr 15 23:13:16 2010 From: zonexo at gmail.com (Wee-Beng Tay) Date: Fri, 16 Apr 2010 12:13:16 +0800 Subject: [petsc-users] Error during compiling my own code In-Reply-To: References: Message-ID: Hi Matt, Nope, not in the file I'm trying to compile. Everything seems fine before. On Fri, Apr 16, 2010 at 12:09 PM, Matthew Knepley wrote: > It looks like you have conflicting definitions. Did you include "use mpi"? > > Matt > > > On Thu, Apr 15, 2010 at 10:53 PM, Wee-Beng Tay wrote: > >> Hi Matt, >> >> I'm using petsc-3.1-p0. But it's now working. I only use 1 #include >> "finclude/petsc.h" now. >> >> However when another of my f90 file has mpi command inside, I got the >> error: >> >> /app1/mvapich2/current/include/mpif.h(9): error #6401: The attributes of >> this name conflict with those made accessible by a USE statement. >> [MPI_SOURCE] >> INTEGER MPI_SOURCE, MPI_TAG, MPI_ERROR >> ---------------^ >> /app1/mvapich2/current/include/mpif.h(9): error #6401: The attributes of >> this name conflict with those made accessible by a USE statement. >> [MPI_TAG] >> INTEGER MPI_SOURCE, MPI_TAG, MPI_ERROR >> ---------------------------^ >> /app1/mvapich2/current/include/mpif.h(9): error #6401: The attributes of >> this name conflict with those made accessible by a USE statement. >> [MPI_ERROR] >> INTEGER MPI_SOURCE, MPI_TAG, MPI_ERROR >> ------------------------------------^ >> >> What's happening now again? >> >> Thanks! >> >> >> >> On Fri, Apr 16, 2010 at 11:38 AM, Matthew Knepley wrote: >> >>> If you are using petsc-dev, you only need petsc.h >>> >>> Matt >>> >>> >>> On Thu, Apr 15, 2010 at 10:27 PM, Wee-Beng Tay wrote: >>> >>>> Hi, >>>> >>>> I have successfully built the PETSc libraries no my linux system. >>>> >>>> make ex1f also works. >>>> >>>> However, when compiling my own code, I got the error: >>>> >>>> [atlas5-c01]$ /app1/mvapich2/current/bin/mpif90 -c -O3 >>>> -I/home/svu/g0306332/codes/petsc-3.1-p0/atlas5-mpi-nodebug/include >>>> -I/home/svu/g0306332/codes/petsc-3.1-p0/include >>>> -I/home/svu/g0306332/lib/hypre-2.6.0b_atlas5/include >>>> -I/app1/mvapich2/1.4/include -I/app1/mvapich2/current/include >>>> -I/home/svu/g0306332/codes/petsc-3.1-p0/atlas5-mpi-nodebug/include >>>> -I/home/svu/g0306332/lib/hypre-2.6.0b_atlas5/include >>>> -I/app1/mvapich2/1.4/include -I/app1/mvapich2/current/include -o global.o >>>> global.F -132 >>>> /home/svu/g0306332/codes/petsc-3.1-p0/include/finclude/petscvec.h(10): >>>> error #6418: This name has already been assigned a data type. [NORM_1] >>>> integer(kind=selected_int_kind(5)) NORM_1 >>>> -----------------------------------------^ >>>> /home/svu/g0306332/codes/petsc-3.1-p0/include/finclude/petscvec.h(11): >>>> error #6418: This name has already been assigned a data type. [NORM_2] >>>> integer(kind=selected_int_kind(5)) NORM_2 >>>> -----------------------------------------^ >>>> /home/svu/g0306332/codes/petsc-3.1-p0/include/finclude/petscvec.h(12): >>>> error #6418: This name has already been assigned a data type. >>>> [NORM_FROBENIUS] >>>> integer(kind=selected_int_kind(5)) NORM_FROBENIUS >>>> -----------------------------------------^ >>>> /home/svu/g0306332/codes/petsc-3.1-p0/include/finclude/petscvec.h(13): >>>> error #6418: This name has already been assigned a data type. 
>>>> [NORM_INFINITY] >>>> integer(kind=selected_int_kind(5)) NORM_INFINITY >>>> -----------------------------------------^ >>>> /home/svu/g0306332/codes/petsc-3.1-p0/include/finclude/petscvec.h(14): >>>> error #6418: This name has already been assigned a data type. [NORM_MAX] >>>> integer(kind=selected_int_kind(5)) NORM_MAX >>>> -----------------------------------------^ >>>> /home/svu/g0306332/codes/petsc-3.1-p0/include/finclude/petscvec.h(15): >>>> error #6418: This name has already been assigned a data type. >>>> [NORM_1_AND_2] >>>> integer(kind=selected_int_kind(5)) NORM_1_AND_2 >>>> -----------------------------------------^ >>>> /home/svu/g0306332/codes/petsc-3.1-p0/include/finclude/petscvec.h(22): >>>> error #6418: This name has already been assigned a data type. >>>> [NOT_SET_VALUES] >>>> integer(kind=selected_int_kind(5)) NOT_SET_VALUES >>>> -----------------------------------------^ >>>> /home/svu/g0306332/codes/petsc-3.1-p0/include/finclude/petscvec.h(23): >>>> error #6418: This name has already been assigned a data type. >>>> [INSERT_VALUES] >>>> integer(kind=selected_int_kind(5)) INSERT_VALUES >>>> -----------------------------------------^ >>>> /home/svu/g0306332/codes/petsc-3.1-p0/include/finclude/petscvec.h(24): >>>> error #6418: This name has already been assigned a data type. [ADD_VALUES] >>>> integer(kind=selected_int_kind(5)) ADD_VALUES >>>> -----------------------------------------^ >>>> /home/svu/g0306332/codes/petsc-3.1-p0/include/finclude/petscvec.h(25): >>>> error #6418: This name has already been assigned a data type. [MAX_VALUES] >>>> integer(kind=selected_int_kind(5)) MAX_VALUES >>>> -----------------------------------------^ >>>> /home/svu/g0306332/codes/petsc-3.1-p0/include/finclude/petscvec.h(32): >>>> error #6418: This name has already been assigned a data type. >>>> [SCATTER_FORWARD] >>>> integer(kind=selected_int_kind(5)) SCATTER_FORWARD >>>> -----------------------------------------^ >>>> /home/svu/g0306332/codes/petsc-3.1-p0/include/finclude/petscvec.h(33): >>>> error #6418: This name has already been assigned a data type. >>>> [SCATTER_REVERSE] >>>> integer(kind=selected_int_kind(5)) SCATTER_REVERSE >>>> -----------------------------------------^ >>>> /home/svu/g0306332/codes/petsc-3.1-p0/include/finclude/petscvec.h(34): >>>> error #6418: This name has already been assigned a data type. >>>> [SCATTER_FORWARD_LOCAL] >>>> integer(kind=selected_int_kind(5)) SCATTER_FORWARD_LOCAL >>>> -----------------------------------------^ >>>> /home/svu/g0306332/codes/petsc-3.1-p0/include/finclude/petscvec.h(35): >>>> error #6418: This name has already been assigned a data type. >>>> [SCATTER_REVERSE_LOCAL] >>>> integer(kind=selected_int_kind(5)) SCATTER_REVERSE_LOCAL >>>> -----------------------------------------^ >>>> /home/svu/g0306332/codes/petsc-3.1-p0/include/finclude/petscvec.h(36): >>>> error #6418: This name has already been assigned a data type. >>>> [SCATTER_LOCAL] >>>> integer(kind=selected_int_kind(5)) SCATTER_LOCAL >>>> -----------------------------------------^ >>>> /home/svu/g0306332/codes/petsc-3.1-p0/include/finclude/petscvec.h(44): >>>> error #6418: This name has already been assigned a data type. >>>> [VEC_IGNORE_OFF_PROC_ENTRIES] >>>> integer(kind=selected_int_kind(5)) VEC_IGNORE_OFF_PROC_ENTRIES >>>> -----------------------------------------^ >>>> /home/svu/g0306332/codes/petsc-3.1-p0/include/finclude/petscvec.h(45): >>>> error #6418: This name has already been assigned a data type. 
>>>> [VEC_IGNORE_NEGATIVE_INDICES] >>>> integer(kind=selected_int_kind(5)) VEC_IGNORE_NEGATIVE_INDICES >>>> -----------------------------------------^ >>>> /home/svu/g0306332/codes/petsc-3.1-p0/include/finclude/petscvec.h(53): >>>> error #6418: This name has already been assigned a data type. [VECOP_VIEW] >>>> integer(kind=selected_int_kind(5)) VECOP_VIEW >>>> -----------------------------------------^ >>>> /home/svu/g0306332/codes/petsc-3.1-p0/include/finclude/petscvec.h(54): >>>> error #6418: This name has already been assigned a data type. >>>> [VECOP_LOADINTOVECTOR] >>>> integer(kind=selected_int_kind(5)) VECOP_LOADINTOVECTOR >>>> -----------------------------------------^ >>>> /home/svu/g0306332/codes/petsc-3.1-p0/include/finclude/petscmat.h(10): >>>> error #6418: This name has already been assigned a data type. >>>> [MAT_FLUSH_ASSEMBLY] >>>> integer(kind=selected_int_kind(5)) MAT_FLUSH_ASSEMBLY >>>> -----------------------------------------^ >>>> /home/svu/g0306332/codes/petsc-3.1-p0/include/finclude/petscmat.h(11): >>>> error #6418: This name has already been assigned a data type. >>>> [MAT_FINAL_ASSEMBLY] >>>> integer(kind=selected_int_kind(5)) MAT_FINAL_ASSEMBLY >>>> -----------------------------------------^ >>>> /home/svu/g0306332/codes/petsc-3.1-p0/include/finclude/petscmat.h(17): >>>> error #6418: This name has already been assigned a data type. [MAT_FACTO >>>> >>>> I don't remember having this error in prev version of PETSc. May I know >>>> what went wrong? >>>> >>>> The 1st few lines of my code are: >>>> >>>> module global_data >>>> >>>> implicit none >>>> >>>> save >>>> >>>> #include "finclude/petsc.h" >>>> #include "finclude/petscvec.h" >>>> #include "finclude/petscmat.h" >>>> #include "finclude/petscksp.h" >>>> #include "finclude/petscpc.h" >>>> #include "finclude/petscsys.h" >>>> >>>> >>>> >>>> integer :: >>>> size_x,size_y,Total_time_step,new_start,interval,gridgen,safe_int,OS,airfoil_no >>>> >>>> integer :: >>>> steady,quasi_steady,Total_k,time,mom_solver,poisson_solver,start_time,motion >>>> >>>> !size_x must be in multiples of 37/32/36/40/55/41, !size_y must be >>>> in multiples of 26/16/36 >>>> >>>> !gridgen1 - 32x20, gridgen4 - 30x22, gridgen5 - 30x24 >>>> >>>> real(8) :: CFL, Re, scheme, B, AA, BB,AB, >>>> ld,air_centy,Pi,hy0,k0,freq,phase_ang,theta0,loc_rot >>>> >>>> real(8) :: time_sta,act_time,vel_h,vel_hn,inv_Re >>>> >>>> Thanks alot! >>>> >>> >>> >>> >>> -- >>> What most experimenters take for granted before they begin their >>> experiments is infinitely more interesting than any results to which their >>> experiments lead. >>> -- Norbert Wiener >>> >> >> > > > -- > What most experimenters take for granted before they begin their > experiments is infinitely more interesting than any results to which their > experiments lead. > -- Norbert Wiener > -------------- next part -------------- An HTML attachment was scrubbed... URL: From balay at mcs.anl.gov Fri Apr 16 01:35:49 2010 From: balay at mcs.anl.gov (Satish Balay) Date: Fri, 16 Apr 2010 01:35:49 -0500 (CDT) Subject: [petsc-users] Error during compiling my own code In-Reply-To: References: Message-ID: > >> /app1/mvapich2/current/include/mpif.h(9): error #6401: The attributes of > >> this name conflict with those made accessible by a USE statement. > >> [MPI_SOURCE] > >> INTEGER MPI_SOURCE, MPI_TAG, MPI_ERROR You are using some module - based on the 'use' statement refered by the error above. 
And this module must be including mpif.h [or using mpi.mod] directly or indirectly - so you'll have to debug this fortran code issue. Or - send us a sample code we can use to reporduces this problem - Without that we are just shooting in the dark.. Satish On Fri, 16 Apr 2010, Wee-Beng Tay wrote: > Hi Matt, > > Nope, not in the file I'm trying to compile. Everything seems fine before. > > On Fri, Apr 16, 2010 at 12:09 PM, Matthew Knepley wrote: > > > It looks like you have conflicting definitions. Did you include "use mpi"? > > > > Matt > > > > > > On Thu, Apr 15, 2010 at 10:53 PM, Wee-Beng Tay wrote: > > > >> Hi Matt, > >> > >> I'm using petsc-3.1-p0. But it's now working. I only use 1 #include > >> "finclude/petsc.h" now. > >> > >> However when another of my f90 file has mpi command inside, I got the > >> error: > >> > >> /app1/mvapich2/current/include/mpif.h(9): error #6401: The attributes of > >> this name conflict with those made accessible by a USE statement. > >> [MPI_SOURCE] > >> INTEGER MPI_SOURCE, MPI_TAG, MPI_ERROR > >> ---------------^ > >> /app1/mvapich2/current/include/mpif.h(9): error #6401: The attributes of > >> this name conflict with those made accessible by a USE statement. > >> [MPI_TAG] > >> INTEGER MPI_SOURCE, MPI_TAG, MPI_ERROR > >> ---------------------------^ > >> /app1/mvapich2/current/include/mpif.h(9): error #6401: The attributes of > >> this name conflict with those made accessible by a USE statement. > >> [MPI_ERROR] > >> INTEGER MPI_SOURCE, MPI_TAG, MPI_ERROR > >> ------------------------------------^ > >> > >> What's happening now again? > >> > >> Thanks! > >> > >> > >> > >> On Fri, Apr 16, 2010 at 11:38 AM, Matthew Knepley wrote: > >> > >>> If you are using petsc-dev, you only need petsc.h > >>> > >>> Matt > >>> > >>> > >>> On Thu, Apr 15, 2010 at 10:27 PM, Wee-Beng Tay wrote: > >>> > >>>> Hi, > >>>> > >>>> I have successfully built the PETSc libraries no my linux system. > >>>> > >>>> make ex1f also works. > >>>> > >>>> However, when compiling my own code, I got the error: > >>>> > >>>> [atlas5-c01]$ /app1/mvapich2/current/bin/mpif90 -c -O3 > >>>> -I/home/svu/g0306332/codes/petsc-3.1-p0/atlas5-mpi-nodebug/include > >>>> -I/home/svu/g0306332/codes/petsc-3.1-p0/include > >>>> -I/home/svu/g0306332/lib/hypre-2.6.0b_atlas5/include > >>>> -I/app1/mvapich2/1.4/include -I/app1/mvapich2/current/include > >>>> -I/home/svu/g0306332/codes/petsc-3.1-p0/atlas5-mpi-nodebug/include > >>>> -I/home/svu/g0306332/lib/hypre-2.6.0b_atlas5/include > >>>> -I/app1/mvapich2/1.4/include -I/app1/mvapich2/current/include -o global.o > >>>> global.F -132 > >>>> /home/svu/g0306332/codes/petsc-3.1-p0/include/finclude/petscvec.h(10): > >>>> error #6418: This name has already been assigned a data type. [NORM_1] > >>>> integer(kind=selected_int_kind(5)) NORM_1 > >>>> -----------------------------------------^ > >>>> /home/svu/g0306332/codes/petsc-3.1-p0/include/finclude/petscvec.h(11): > >>>> error #6418: This name has already been assigned a data type. [NORM_2] > >>>> integer(kind=selected_int_kind(5)) NORM_2 > >>>> -----------------------------------------^ > >>>> /home/svu/g0306332/codes/petsc-3.1-p0/include/finclude/petscvec.h(12): > >>>> error #6418: This name has already been assigned a data type. > >>>> [NORM_FROBENIUS] > >>>> integer(kind=selected_int_kind(5)) NORM_FROBENIUS > >>>> -----------------------------------------^ > >>>> /home/svu/g0306332/codes/petsc-3.1-p0/include/finclude/petscvec.h(13): > >>>> error #6418: This name has already been assigned a data type. 
> >>>> [NORM_INFINITY] > >>>> integer(kind=selected_int_kind(5)) NORM_INFINITY > >>>> -----------------------------------------^ > >>>> /home/svu/g0306332/codes/petsc-3.1-p0/include/finclude/petscvec.h(14): > >>>> error #6418: This name has already been assigned a data type. [NORM_MAX] > >>>> integer(kind=selected_int_kind(5)) NORM_MAX > >>>> -----------------------------------------^ > >>>> /home/svu/g0306332/codes/petsc-3.1-p0/include/finclude/petscvec.h(15): > >>>> error #6418: This name has already been assigned a data type. > >>>> [NORM_1_AND_2] > >>>> integer(kind=selected_int_kind(5)) NORM_1_AND_2 > >>>> -----------------------------------------^ > >>>> /home/svu/g0306332/codes/petsc-3.1-p0/include/finclude/petscvec.h(22): > >>>> error #6418: This name has already been assigned a data type. > >>>> [NOT_SET_VALUES] > >>>> integer(kind=selected_int_kind(5)) NOT_SET_VALUES > >>>> -----------------------------------------^ > >>>> /home/svu/g0306332/codes/petsc-3.1-p0/include/finclude/petscvec.h(23): > >>>> error #6418: This name has already been assigned a data type. > >>>> [INSERT_VALUES] > >>>> integer(kind=selected_int_kind(5)) INSERT_VALUES > >>>> -----------------------------------------^ > >>>> /home/svu/g0306332/codes/petsc-3.1-p0/include/finclude/petscvec.h(24): > >>>> error #6418: This name has already been assigned a data type. [ADD_VALUES] > >>>> integer(kind=selected_int_kind(5)) ADD_VALUES > >>>> -----------------------------------------^ > >>>> /home/svu/g0306332/codes/petsc-3.1-p0/include/finclude/petscvec.h(25): > >>>> error #6418: This name has already been assigned a data type. [MAX_VALUES] > >>>> integer(kind=selected_int_kind(5)) MAX_VALUES > >>>> -----------------------------------------^ > >>>> /home/svu/g0306332/codes/petsc-3.1-p0/include/finclude/petscvec.h(32): > >>>> error #6418: This name has already been assigned a data type. > >>>> [SCATTER_FORWARD] > >>>> integer(kind=selected_int_kind(5)) SCATTER_FORWARD > >>>> -----------------------------------------^ > >>>> /home/svu/g0306332/codes/petsc-3.1-p0/include/finclude/petscvec.h(33): > >>>> error #6418: This name has already been assigned a data type. > >>>> [SCATTER_REVERSE] > >>>> integer(kind=selected_int_kind(5)) SCATTER_REVERSE > >>>> -----------------------------------------^ > >>>> /home/svu/g0306332/codes/petsc-3.1-p0/include/finclude/petscvec.h(34): > >>>> error #6418: This name has already been assigned a data type. > >>>> [SCATTER_FORWARD_LOCAL] > >>>> integer(kind=selected_int_kind(5)) SCATTER_FORWARD_LOCAL > >>>> -----------------------------------------^ > >>>> /home/svu/g0306332/codes/petsc-3.1-p0/include/finclude/petscvec.h(35): > >>>> error #6418: This name has already been assigned a data type. > >>>> [SCATTER_REVERSE_LOCAL] > >>>> integer(kind=selected_int_kind(5)) SCATTER_REVERSE_LOCAL > >>>> -----------------------------------------^ > >>>> /home/svu/g0306332/codes/petsc-3.1-p0/include/finclude/petscvec.h(36): > >>>> error #6418: This name has already been assigned a data type. > >>>> [SCATTER_LOCAL] > >>>> integer(kind=selected_int_kind(5)) SCATTER_LOCAL > >>>> -----------------------------------------^ > >>>> /home/svu/g0306332/codes/petsc-3.1-p0/include/finclude/petscvec.h(44): > >>>> error #6418: This name has already been assigned a data type. 
> >>>> [VEC_IGNORE_OFF_PROC_ENTRIES] > >>>> integer(kind=selected_int_kind(5)) VEC_IGNORE_OFF_PROC_ENTRIES > >>>> -----------------------------------------^ > >>>> /home/svu/g0306332/codes/petsc-3.1-p0/include/finclude/petscvec.h(45): > >>>> error #6418: This name has already been assigned a data type. > >>>> [VEC_IGNORE_NEGATIVE_INDICES] > >>>> integer(kind=selected_int_kind(5)) VEC_IGNORE_NEGATIVE_INDICES > >>>> -----------------------------------------^ > >>>> /home/svu/g0306332/codes/petsc-3.1-p0/include/finclude/petscvec.h(53): > >>>> error #6418: This name has already been assigned a data type. [VECOP_VIEW] > >>>> integer(kind=selected_int_kind(5)) VECOP_VIEW > >>>> -----------------------------------------^ > >>>> /home/svu/g0306332/codes/petsc-3.1-p0/include/finclude/petscvec.h(54): > >>>> error #6418: This name has already been assigned a data type. > >>>> [VECOP_LOADINTOVECTOR] > >>>> integer(kind=selected_int_kind(5)) VECOP_LOADINTOVECTOR > >>>> -----------------------------------------^ > >>>> /home/svu/g0306332/codes/petsc-3.1-p0/include/finclude/petscmat.h(10): > >>>> error #6418: This name has already been assigned a data type. > >>>> [MAT_FLUSH_ASSEMBLY] > >>>> integer(kind=selected_int_kind(5)) MAT_FLUSH_ASSEMBLY > >>>> -----------------------------------------^ > >>>> /home/svu/g0306332/codes/petsc-3.1-p0/include/finclude/petscmat.h(11): > >>>> error #6418: This name has already been assigned a data type. > >>>> [MAT_FINAL_ASSEMBLY] > >>>> integer(kind=selected_int_kind(5)) MAT_FINAL_ASSEMBLY > >>>> -----------------------------------------^ > >>>> /home/svu/g0306332/codes/petsc-3.1-p0/include/finclude/petscmat.h(17): > >>>> error #6418: This name has already been assigned a data type. [MAT_FACTO > >>>> > >>>> I don't remember having this error in prev version of PETSc. May I know > >>>> what went wrong? > >>>> > >>>> The 1st few lines of my code are: > >>>> > >>>> module global_data > >>>> > >>>> implicit none > >>>> > >>>> save > >>>> > >>>> #include "finclude/petsc.h" > >>>> #include "finclude/petscvec.h" > >>>> #include "finclude/petscmat.h" > >>>> #include "finclude/petscksp.h" > >>>> #include "finclude/petscpc.h" > >>>> #include "finclude/petscsys.h" > >>>> > >>>> > >>>> > >>>> integer :: > >>>> size_x,size_y,Total_time_step,new_start,interval,gridgen,safe_int,OS,airfoil_no > >>>> > >>>> integer :: > >>>> steady,quasi_steady,Total_k,time,mom_solver,poisson_solver,start_time,motion > >>>> > >>>> !size_x must be in multiples of 37/32/36/40/55/41, !size_y must be > >>>> in multiples of 26/16/36 > >>>> > >>>> !gridgen1 - 32x20, gridgen4 - 30x22, gridgen5 - 30x24 > >>>> > >>>> real(8) :: CFL, Re, scheme, B, AA, BB,AB, > >>>> ld,air_centy,Pi,hy0,k0,freq,phase_ang,theta0,loc_rot > >>>> > >>>> real(8) :: time_sta,act_time,vel_h,vel_hn,inv_Re > >>>> > >>>> Thanks alot! > >>>> > >>> > >>> > >>> > >>> -- > >>> What most experimenters take for granted before they begin their > >>> experiments is infinitely more interesting than any results to which their > >>> experiments lead. > >>> -- Norbert Wiener > >>> > >> > >> > > > > > > -- > > What most experimenters take for granted before they begin their > > experiments is infinitely more interesting than any results to which their > > experiments lead. 
> > -- Norbert Wiener > > > From tribur at vision.ee.ethz.ch Fri Apr 16 06:51:13 2010 From: tribur at vision.ee.ethz.ch (tribur at vision.ee.ethz.ch) Date: Fri, 16 Apr 2010 13:51:13 +0200 Subject: [petsc-users] ML and -pc_factor_shift_nonzero Message-ID: <20100416135113.57976b8sa1v482lt@email.ee.ethz.ch> Dear Barry and Matt, thanks for your helpful response. ML works now using, e.g., -mg_coarse_redundant_pc_factor_shift_type POSITIVE_DEFINITE. However, it converges very slowly using the default REDUNDANT for the coarse solve. On 10 processors, e.g., even bjacobi plus -mg_coarse_ksp_max_it 10 works better. What solver do you recommend for the coarse solve? Maybe superlu? Best regards, Kathrin Quoting "Barry Smith" : > > -mg_coarse_pc_factor_shift_nonzero since it is the coarse level of > the multigrid that is producing the zero pivot. > > Barry > > On Apr 13, 2010, at 8:51 AM, Matthew Knepley wrote: > >> On Tue, Apr 13, 2010 at 2:49 PM, wrote: >> Hi, >> >> using ML I got the error >> >> "[0]PETSC ERROR: Detected zero pivot in LU factorization" >> >> As recommended at >> http://www.mcs.anl.gov/petsc/petsc-as/documentation/troubleshooting.html , >> I tried -pc_factor_shift_nonzero but it doesn't have the desired >> effect using ML. >> >> How do I have to formulate the command line option? What does - >> [level]_pc_factor_shift_nonzero mean? What other parallel >> preconditioner could I try besides Hypre/Boomeramg or ML? >> >> This means the MG level, like 2. You can see all available options >> using -help. >> >> Matt >> >> Thanks in advance for your precious help, >> >> Kathrin >> >> >> >> >> >> -- >> What most experimenters take for granted before they begin their >> experiments is infinitely more interesting than any results to >> which their experiments lead. >> -- Norbert Wiener > > From jed at 59A2.org Fri Apr 16 06:58:23 2010 From: jed at 59A2.org (Jed Brown) Date: Fri, 16 Apr 2010 13:58:23 +0200 Subject: [petsc-users] ML and -pc_factor_shift_nonzero In-Reply-To: <20100416135113.57976b8sa1v482lt@email.ee.ethz.ch> References: <20100416135113.57976b8sa1v482lt@email.ee.ethz.ch> Message-ID: <87aat3k3s0.fsf@59A2.org> On Fri, 16 Apr 2010 13:51:13 +0200, tribur at vision.ee.ethz.ch wrote: > ML works now using, e.g., -mg_coarse_redundant_pc_factor_shift_type > POSITIVE_DEFINITE. However, it converges very slowly using the default > REDUNDANT for the coarse solve. "Converges slowly" or "the coarse-level solve is expensive"? I suggest starting with -mg_coarse_pc_type lu -mg_coarse_pc_factor_mat_solver_package mumps or varying parameters in ML to see if you can make the coarse level problem smaller without hurting convergence rate. You can do semi-redundant solves if you scale processor counts beyond what MUMPS works well with. Depending on what problem you are solving, ML could be producing a (nearly) singular coarse level operator in which case you can expect very confusing and inconsistent behavior. Jed From chenleping at yahoo.cn Thu Apr 15 08:17:19 2010 From: chenleping at yahoo.cn (=?gb2312?B?s8LA1sa9?=) Date: Thu, 15 Apr 2010 21:17:19 +0800 Subject: [petsc-users] array and vec Message-ID: <201004152117168114636@yahoo.cn> petsc teams, for following definitions: double precision u(n) Vec v then, how can I actualize mutual updating between u(n) and v? for example, when u(i) has changed, v will be updated, vice versa. thanks very much, leping -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From jed at 59A2.org Fri Apr 16 08:21:35 2010 From: jed at 59A2.org (Jed Brown) Date: Fri, 16 Apr 2010 15:21:35 +0200 Subject: [petsc-users] array and vec In-Reply-To: <201004152117168114636@yahoo.cn> References: <201004152117168114636@yahoo.cn> Message-ID: <878w8njzxc.fsf@59A2.org> On Thu, 15 Apr 2010 21:17:19 +0800, "=?gb2312?B?s8LA1sa9?=" wrote: > petsc teams, > > for following definitions: > > double precision u(n) > Vec v > > then, how can I actualize mutual updating between u(n) and v? I'm not sure what you mean, if these are the same quantity then they shouldn't be stored in separate locations. See VecGetArray: http://www.mcs.anl.gov/petsc/petsc-as/snapshots/petsc-current/docs/manualpages/Vec/VecGetArray.html Jed From bsmith at mcs.anl.gov Fri Apr 16 08:25:24 2010 From: bsmith at mcs.anl.gov (Barry Smith) Date: Fri, 16 Apr 2010 08:25:24 -0500 Subject: [petsc-users] array and vec In-Reply-To: <878w8njzxc.fsf@59A2.org> References: <201004152117168114636@yahoo.cn> <878w8njzxc.fsf@59A2.org> Message-ID: <2CA1B75B-92D0-414A-AAA3-1E136E57F301@mcs.anl.gov> On Apr 16, 2010, at 8:21 AM, Jed Brown wrote: > On Thu, 15 Apr 2010 21:17:19 +0800, "=?gb2312?B?s8LA1sa9?=" > wrote: >> petsc teams, >> >> for following definitions: >> >> double precision u(n) >> Vec v >> >> then, how can I actualize mutual updating between u(n) and v? > > I'm not sure what you mean, if these are the same quantity then they > shouldn't be stored in separate locations. See VecGetArray: > > http://www.mcs.anl.gov/petsc/petsc-as/snapshots/petsc-current/docs/manualpages/Vec/VecGetArray.html and also http://www.ib.cnea.gov.ar/~ipc/ptpde/petsc-dir/docs/manualpages/Vec/VecCreateMPIWithArray.html is an alternative approach. Barry > > Jed From chenleping at yahoo.cn Thu Apr 15 09:14:57 2010 From: chenleping at yahoo.cn (=?gb2312?B?s8LA1sa9o6hMZXBpbmcgQ2hlbqOp?=) Date: Thu, 15 Apr 2010 22:14:57 +0800 Subject: [petsc-users] array and vec References: <201004152117168114636@yahoo.cn> Message-ID: <201004152209074955857@yahoo.cn> hello, petsc teams, when I use FormFuncion(), I need use a value of x; but x is defined by Vec, I don't know how to get a value of x, for example, the 3rd value. thanks, leping ???? Jed Brown ????? 2010-04-16 21:21:47 ???? chenleping; PETSc users list ??? ??? Re: [petsc-users] array and vec On Thu, 15 Apr 2010 21:17:19 +0800, "=?gb2312?B?s8LA1sa9?=" wrote: > petsc teams, > > for following definitions: > > double precision u(n) > Vec v > > then, how can I actualize mutual updating between u(n) and v? I'm not sure what you mean, if these are the same quantity then they shouldn't be stored in separate locations. See VecGetArray: http://www.mcs.anl.gov/petsc/petsc-as/snapshots/petsc-current/docs/manualpages/Vec/VecGetArray.html Jed -------------- next part -------------- An HTML attachment was scrubbed... URL: From knepley at gmail.com Fri Apr 16 09:48:17 2010 From: knepley at gmail.com (Matthew Knepley) Date: Fri, 16 Apr 2010 09:48:17 -0500 Subject: [petsc-users] array and vec In-Reply-To: <201004152209074955857@yahoo.cn> References: <201004152117168114636@yahoo.cn> <201004152209074955857@yahoo.cn> Message-ID: 2010/4/15 ????Leping Chen? > hello, petsc teams, > > when I use FormFuncion(), I need use a value of x; > > but x is defined by Vec, I don't know how to get a value of x, for > example, the 3rd value. > Exactly as Jed wrote, you can use VecGetArray() to access any local values of x. 
Matt > > thanks, > > leping > > ------------------------------ > *????* Jed Brown > *?????* 2010-04-16 21:21:47 > *????* chenleping; PETSc users list > *???* > *???* Re: [petsc-users] array and vec > On Thu, 15 Apr 2010 21:17:19 +0800, "=?gb2312?B?s8LA1sa9?=" < > chenleping at yahoo.cn> wrote: > > petsc teams, > > > > for following definitions: > > > > double precision u(n) > > Vec v > > > > then, how can I actualize mutual updating between u(n) and v? > I'm not sure what you mean, if these are the same quantity then they > shouldn't be stored in separate locations. See VecGetArray: > > http://www.mcs.anl.gov/petsc/petsc-as/snapshots/petsc-current/docs/manualpages/Vec/VecGetArray.html > Jed > -- What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. -- Norbert Wiener -------------- next part -------------- An HTML attachment was scrubbed... URL: From balay at mcs.anl.gov Fri Apr 16 12:00:22 2010 From: balay at mcs.anl.gov (Satish Balay) Date: Fri, 16 Apr 2010 12:00:22 -0500 (CDT) Subject: [petsc-users] array and vec In-Reply-To: References: <201004152117168114636@yahoo.cn> <201004152209074955857@yahoo.cn> Message-ID: On Fri, 16 Apr 2010, Matthew Knepley wrote: > 2010/4/15 ????Leping Chen? > > > hello, petsc teams, > > > > when I use FormFuncion(), I need use a value of x; > > > > but x is defined by Vec, I don't know how to get a value of x, for > > example, the 3rd value. > > > > Exactly as Jed wrote, you can use VecGetArray() to access any local values > of x. > > > for following definitions: > > > > > > double precision u(n) > > > Vec v Since you are uisng fortran check VecGetArrayF90(). for eg: check src/vec/vec/examples/tutorials/ex4f90.F Satish From chenleping at yahoo.cn Thu Apr 15 20:55:38 2010 From: chenleping at yahoo.cn (=?gb2312?B?s8LA1sa9o6hMZXBpbmcgQ2hlbqOp?=) Date: Fri, 16 Apr 2010 09:55:38 +0800 Subject: [petsc-users] array and vec References: <201004152117168114636@yahoo.cn>, <201004152209074955857@yahoo.cn> Message-ID: <201004160955360135681@yahoo.cn> thanks, but I don't understand VecGetArray(), that is to say, I don't know how to call VecGetArray() when I plan to get any local values of x, Maybe, double precision u(n) Vec x PetscOffset uu_i call VecGetArray(x,u,uu_i,ierr) is it right? it seems don't work. how can I call it? thanks, leping ???? Matthew Knepley ????? 2010-04-16 22:48:18 ???? chenleping; PETSc users list ??? ??? Re: [petsc-users] array and vec 2010/4/15 ????Leping Chen? hello, petsc teams, when I use FormFuncion(), I need use a value of x; but x is defined by Vec, I don't know how to get a value of x, for example, the 3rd value. Exactly as Jed wrote, you can use VecGetArray() to access any local values of x. Matt thanks, leping ???? Jed Brown ????? 2010-04-16 21:21:47 ???? chenleping; PETSc users list ??? ??? Re: [petsc-users] array and vec On Thu, 15 Apr 2010 21:17:19 +0800, "=?gb2312?B?s8LA1sa9?=" wrote: > petsc teams, > > for following definitions: > > double precision u(n) > Vec v > > then, how can I actualize mutual updating between u(n) and v? I'm not sure what you mean, if these are the same quantity then they shouldn't be stored in separate locations. See VecGetArray: http://www.mcs.anl.gov/petsc/petsc-as/snapshots/petsc-current/docs/manualpages/Vec/VecGetArray.html Jed -- What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. 
-- Norbert Wiener -------------- next part -------------- An HTML attachment was scrubbed... URL: From bsmith at mcs.anl.gov Fri Apr 16 21:16:54 2010 From: bsmith at mcs.anl.gov (Barry Smith) Date: Fri, 16 Apr 2010 21:16:54 -0500 Subject: [petsc-users] array and vec In-Reply-To: <201004160955360135681@yahoo.cn> References: <201004152117168114636@yahoo.cn>, <201004152209074955857@yahoo.cn> <201004160955360135681@yahoo.cn> Message-ID: The manual page that Jed pointed out to you has links to several examples that use VecGetArray() from Fortran. For example http://www.mcs.anl.gov/petsc/petsc-as/snapshots/petsc-current/src/vec/vec/examples/tutorials/ex4f.F.html In particular you don't declare u(n) you just declare u(1) > it seems don't work. This is the most useless type of email. If all you tell us is "it seems don't work", then we have NO idea of what happened, what didn't work, so cannot provide advice. Cut and paste all error messages and everything that that would help us see "what didn't work". Barry On Apr 15, 2010, at 8:55 PM, ????Leping Chen? wrote: > thanks, > > but I don't understand VecGetArray(), that is to say, I don't know > how to call VecGetArray() > > when I plan to get any local values of x, > > Maybe, > > double precision u(n) > > Vec x > > PetscOffset uu_i > > call VecGetArray(x,u,uu_i,ierr) > > is it right? > > it seems don't work. how can I call it? > > thanks, > > leping > ???? Matthew Knepley > ????? 2010-04-16 22:48:18 > ???? chenleping; PETSc users list > ??? > ??? Re: [petsc-users] array and vec > 2010/4/15 ????Leping Chen? > hello, petsc teams, > > when I use FormFuncion(), I need use a value of x; > > but x is defined by Vec, I don't know how to get a value of x, for > example, the 3rd value. > > Exactly as Jed wrote, you can use VecGetArray() to access any local > values > of x. > > Matt > > > thanks, > > leping > > ???? Jed Brown > ????? 2010-04-16 21:21:47 > ???? chenleping; PETSc users list > ??? > ??? Re: [petsc-users] array and vec > On Thu, 15 Apr 2010 21:17:19 +0800, "=?gb2312?B?s8LA1sa9?=" > wrote: > > petsc teams, > > > > for following definitions: > > > > double precision u(n) > > Vec v > > > > then, how can I actualize mutual updating between u(n) and v? > I'm not sure what you mean, if these are the same quantity then they > shouldn't be stored in separate locations. See VecGetArray: > http://www.mcs.anl.gov/petsc/petsc-as/snapshots/petsc-current/docs/manualpages/Vec/VecGetArray.html > Jed > > > > -- > What most experimenters take for granted before they begin their > experiments is infinitely more interesting than any results to which > their experiments lead. > -- Norbert Wiener -------------- next part -------------- An HTML attachment was scrubbed... URL: From xy2102 at columbia.edu Sat Apr 17 08:35:39 2010 From: xy2102 at columbia.edu ((Rebecca) Xuefei YUAN) Date: Sat, 17 Apr 2010 09:35:39 -0400 Subject: [petsc-users] glibc detected Message-ID: <20100417093539.7ma6xk7ez280kc84@cubmail.cc.columbia.edu> Hi, I got a message of glibc detected, and not sure where is it coming from, any ideas? What I have changed from a working copy to this one is to add an element in one of the structure which in the appCtx. Thanks very much! ----------------------------------------------------------------------- (gdb) c Continuing. 
*** glibc detected *** /home/rebecca/linux/code/twoway/twoway_new/workingspace/twgcqt2unffnictvscj.exe: free(): invalid next size (normal): 0x08b10750 *** ======= Backtrace: ========= /lib/tls/i686/cmov/libc.so.6[0xb7d9da85] /lib/tls/i686/cmov/libc.so.6(cfree+0x90)[0xb7da14f0] /home/rebecca/linux/code/twoway/twoway_new/workingspace/twgcqt2unffnictvscj.exe[0x873a938] /home/rebecca/linux/code/twoway/twoway_new/workingspace/twgcqt2unffnictvscj.exe[0x873c43b] /home/rebecca/linux/code/twoway/twoway_new/workingspace/twgcqt2unffnictvscj.exe[0x872e8f0] /home/rebecca/linux/code/twoway/twoway_new/workingspace/twgcqt2unffnictvscj.exe[0x866c211] /home/rebecca/linux/code/twoway/twoway_new/workingspace/twgcqt2unffnictvscj.exe[0x86a4768] /home/rebecca/linux/code/twoway/twoway_new/workingspace/twgcqt2unffnictvscj.exe[0x8276048] /home/rebecca/linux/code/twoway/twoway_new/workingspace/twgcqt2unffnictvscj.exe[0x80729e1] /home/rebecca/linux/code/twoway/twoway_new/workingspace/twgcqt2unffnictvscj.exe[0x80d8027] /home/rebecca/linux/code/twoway/twoway_new/workingspace/twgcqt2unffnictvscj.exe[0x8100ab3] /home/rebecca/linux/code/twoway/twoway_new/workingspace/twgcqt2unffnictvscj.exe[0x80e0d44] /home/rebecca/linux/code/twoway/twoway_new/workingspace/twgcqt2unffnictvscj.exe[0x80fa8c9] /home/rebecca/linux/code/twoway/twoway_new/workingspace/twgcqt2unffnictvscj.exe[0x80f3dc1] /home/rebecca/linux/code/twoway/twoway_new/workingspace/twgcqt2unffnictvscj.exe[0x8052ecc] /home/rebecca/linux/code/twoway/twoway_new/workingspace/twgcqt2unffnictvscj.exe[0x804def5] /lib/tls/i686/cmov/libc.so.6(__libc_start_main+0xe0)[0xb7d48450] /home/rebecca/linux/code/twoway/twoway_new/workingspace/twgcqt2unffnictvscj.exe[0x804b451] ======= Memory map: ======== 08048000-08918000 r-xp 00000000 08:06 3910764 /home/rebecca/linux/code/twoway/twoway_new/workingspace/twgcqt2unffnictvscj.exe 08918000-0891e000 rw-p 008cf000 08:06 3910764 /home/rebecca/linux/code/twoway/twoway_new/workingspace/twgcqt2unffnictvscj.exe 0891e000-08b1f000 rw-p 0891e000 00:00 0 [heap] b7b00000-b7b21000 rw-p b7b00000 00:00 0 b7b21000-b7c00000 ---p b7b21000 00:00 0 b7cef000-b7cf7000 r-xp 00000000 08:05 344321 /lib/tls/i686/cmov/libnss_nis-2.7.so b7cf7000-b7cf9000 rw-p 00007000 08:05 344321 /lib/tls/i686/cmov/libnss_nis-2.7.so b7cf9000-b7d00000 r-xp 00000000 08:05 344317 /lib/tls/i686/cmov/libnss_compat-2.7.so b7d00000-b7d02000 rw-p 00006000 08:05 344317 /lib/tls/i686/cmov/libnss_compat-2.7.so b7d02000-b7d0b000 r-xp 00000000 08:05 344319 /lib/tls/i686/cmov/libnss_files-2.7.so b7d0b000-b7d0d000 rw-p 00008000 08:05 344319 /lib/tls/i686/cmov/libnss_files-2.7.so b7d0d000-b7d0f000 rw-p b7d0d000 00:00 0 b7d0f000-b7d13000 r-xp 00000000 08:05 179791 /usr/lib/libXdmcp.so.6.0.0 b7d13000-b7d14000 rw-p 00003000 08:05 179791 /usr/lib/libXdmcp.so.6.0.0 b7d14000-b7d16000 r-xp 00000000 08:05 179780 /usr/lib/libXau.so.6.0.0 b7d16000-b7d17000 rw-p 00001000 08:05 179780 /usr/lib/libXau.so.6.0.0 b7d17000-b7d2e000 r-xp 00000000 08:05 180628 /usr/lib/libxcb.so.1.0.0 b7d2e000-b7d2f000 rw-p 00016000 08:05 180628 /usr/lib/libxcb.so.1.0.0 b7d2f000-b7d30000 rw-p b7d2f000 00:00 0 b7d30000-b7d31000 r-xp 00000000 08:05 180626 /usr/lib/libxcb-xlib.so.0.0.0 b7d31000-b7d32000 rw-p 00000000 08:05 180626 /usr/lib/libxcb-xlib.so.0.0.0 b7d32000-b7e7b000 r-xp 00000000 08:05 344310 /lib/tls/i686/cmov/libc-2.7.so b7e7b000-b7e7c000 r--p 00149000 08:05 344310 /lib/tls/i686/cmov/libc-2.7.so b7e7c000-b7e7e000 rw-p 0014a000 08:05 344310 /lib/tls/i686/cmov/libc-2.7.so b7e7e000-b7e81000 rw-p b7e7e000 00:00 0 
b7e81000-b7e95000 r-xp 00000000 08:05 344324 /lib/tls/i686/cmov/libpthread-2.7.so b7e95000-b7e97000 rw-p 00013000 08:05 344324 /lib/tls/i686/cmov/libpthread-2.7.so b7e97000-b7e99000 rw-p b7e97000 00:00 0 b7e99000-b7e9b000 r-xp 00000000 08:05 344313 /lib/tls/i686/cmov/libdl-2.7.so b7e9b000-b7e9d000 rw-p 00001000 08:05 344313 /lib/tls/i686/cmov/libdl-2.7.so b7e9d000-b7ea4000 r-xp 00000000 08:05 344326 /lib/tls/i686/cmov/librt-2.7.so b7ea4000-b7ea6000 rw-p 00006000 08:05 344326 /lib/tls/i686/cmov/librt-2.7.so b7ea6000-b7eba000 r-xp 00000000 08:05 344316 /lib/tls/i686/cmov/libnsl-2.7.so b7eba000-b7ebc000 rw-p 00013000 08:05 344316 /lib/tls/i686/cmov/libnsl-2.7.so b7ebc000-b7ebf000 rw-p b7ebc000 00:00 0 b7ebf000-b7ee2000 r-xp 00000000 08:05 344314 /lib/tls/i686/cmov/libm-2.7.so b7ee2000-b7ee4000 rw-p 00023000 08:05 344314 /lib/tls/i686/cmov/libm-2.7.so b7ee4000-b7fc8000 r-xp 00000000 08:05 179774 /usr/lib/libX11.so.6.2.0 b7fc8000-b7fcb000 rw-p 000e4000 08:05 179774 /usr/lib/libX11.so.6.2.0 b7fcf000-b7fd9000 r-xp 00000000 08:05 308438 /lib/libgcc_s.so.1 b7fd9000-b7fda000 rw-p 0000a000 08:05 308438 /lib/libgcc_s.so.1 b7fda000-b7fdd000 rw-p b7fda000 00:00 0 b7fdd000-b7fde000 r-xp b7fdd000 00:00 0 [vdso] b7fde000-b7ff8000 r-xp 00000000 08:05 309874 /lib/ld-2.7.so b7ff8000-b7ffa000 rw-p 00019000 08:05 309874 /lib/ld-2.7.so bfb4a000-bfb5f000 rw-p bffeb000 00:00 0 [stack] Program received signal SIGABRT, Aborted. 0xb7fdd410 in __kernel_vsyscall () -- (Rebecca) Xuefei YUAN Department of Applied Physics and Applied Mathematics Columbia University Tel:917-399-8032 www.columbia.edu/~xy2102 From knepley at gmail.com Sat Apr 17 08:37:47 2010 From: knepley at gmail.com (Matthew Knepley) Date: Sat, 17 Apr 2010 08:37:47 -0500 Subject: [petsc-users] glibc detected In-Reply-To: <20100417093539.7ma6xk7ez280kc84@cubmail.cc.columbia.edu> References: <20100417093539.7ma6xk7ez280kc84@cubmail.cc.columbia.edu> Message-ID: This means there is memory corruption. I would use valgrind to find it. Matt On Sat, Apr 17, 2010 at 8:35 AM, (Rebecca) Xuefei YUAN wrote: > Hi, > > I got a message of glibc detected, and not sure where is it coming from, > any ideas? > > What I have changed from a working copy to this one is to add an element in > one of the structure which in the appCtx. > > Thanks very much! > ----------------------------------------------------------------------- > (gdb) c > Continuing. 
> *** glibc detected *** > /home/rebecca/linux/code/twoway/twoway_new/workingspace/twgcqt2unffnictvscj.exe: > free(): invalid next size (normal): 0x08b10750 *** > ======= Backtrace: ========= > /lib/tls/i686/cmov/libc.so.6[0xb7d9da85] > /lib/tls/i686/cmov/libc.so.6(cfree+0x90)[0xb7da14f0] > > /home/rebecca/linux/code/twoway/twoway_new/workingspace/twgcqt2unffnictvscj.exe[0x873a938] > > /home/rebecca/linux/code/twoway/twoway_new/workingspace/twgcqt2unffnictvscj.exe[0x873c43b] > > /home/rebecca/linux/code/twoway/twoway_new/workingspace/twgcqt2unffnictvscj.exe[0x872e8f0] > > /home/rebecca/linux/code/twoway/twoway_new/workingspace/twgcqt2unffnictvscj.exe[0x866c211] > > /home/rebecca/linux/code/twoway/twoway_new/workingspace/twgcqt2unffnictvscj.exe[0x86a4768] > > /home/rebecca/linux/code/twoway/twoway_new/workingspace/twgcqt2unffnictvscj.exe[0x8276048] > > /home/rebecca/linux/code/twoway/twoway_new/workingspace/twgcqt2unffnictvscj.exe[0x80729e1] > > /home/rebecca/linux/code/twoway/twoway_new/workingspace/twgcqt2unffnictvscj.exe[0x80d8027] > > /home/rebecca/linux/code/twoway/twoway_new/workingspace/twgcqt2unffnictvscj.exe[0x8100ab3] > > /home/rebecca/linux/code/twoway/twoway_new/workingspace/twgcqt2unffnictvscj.exe[0x80e0d44] > > /home/rebecca/linux/code/twoway/twoway_new/workingspace/twgcqt2unffnictvscj.exe[0x80fa8c9] > > /home/rebecca/linux/code/twoway/twoway_new/workingspace/twgcqt2unffnictvscj.exe[0x80f3dc1] > > /home/rebecca/linux/code/twoway/twoway_new/workingspace/twgcqt2unffnictvscj.exe[0x8052ecc] > > /home/rebecca/linux/code/twoway/twoway_new/workingspace/twgcqt2unffnictvscj.exe[0x804def5] > /lib/tls/i686/cmov/libc.so.6(__libc_start_main+0xe0)[0xb7d48450] > > /home/rebecca/linux/code/twoway/twoway_new/workingspace/twgcqt2unffnictvscj.exe[0x804b451] > ======= Memory map: ======== > 08048000-08918000 r-xp 00000000 08:06 3910764 > /home/rebecca/linux/code/twoway/twoway_new/workingspace/twgcqt2unffnictvscj.exe > 08918000-0891e000 rw-p 008cf000 08:06 3910764 > /home/rebecca/linux/code/twoway/twoway_new/workingspace/twgcqt2unffnictvscj.exe > 0891e000-08b1f000 rw-p 0891e000 00:00 0 [heap] > b7b00000-b7b21000 rw-p b7b00000 00:00 0 > b7b21000-b7c00000 ---p b7b21000 00:00 0 > b7cef000-b7cf7000 r-xp 00000000 08:05 344321 /lib/tls/i686/cmov/ > libnss_nis-2.7.so > b7cf7000-b7cf9000 rw-p 00007000 08:05 344321 /lib/tls/i686/cmov/ > libnss_nis-2.7.so > b7cf9000-b7d00000 r-xp 00000000 08:05 344317 /lib/tls/i686/cmov/ > libnss_compat-2.7.so > b7d00000-b7d02000 rw-p 00006000 08:05 344317 /lib/tls/i686/cmov/ > libnss_compat-2.7.so > b7d02000-b7d0b000 r-xp 00000000 08:05 344319 /lib/tls/i686/cmov/ > libnss_files-2.7.so > b7d0b000-b7d0d000 rw-p 00008000 08:05 344319 /lib/tls/i686/cmov/ > libnss_files-2.7.so > b7d0d000-b7d0f000 rw-p b7d0d000 00:00 0 > b7d0f000-b7d13000 r-xp 00000000 08:05 179791 /usr/lib/libXdmcp.so.6.0.0 > b7d13000-b7d14000 rw-p 00003000 08:05 179791 /usr/lib/libXdmcp.so.6.0.0 > b7d14000-b7d16000 r-xp 00000000 08:05 179780 /usr/lib/libXau.so.6.0.0 > b7d16000-b7d17000 rw-p 00001000 08:05 179780 /usr/lib/libXau.so.6.0.0 > b7d17000-b7d2e000 r-xp 00000000 08:05 180628 /usr/lib/libxcb.so.1.0.0 > b7d2e000-b7d2f000 rw-p 00016000 08:05 180628 /usr/lib/libxcb.so.1.0.0 > b7d2f000-b7d30000 rw-p b7d2f000 00:00 0 > b7d30000-b7d31000 r-xp 00000000 08:05 180626 > /usr/lib/libxcb-xlib.so.0.0.0 > b7d31000-b7d32000 rw-p 00000000 08:05 180626 > /usr/lib/libxcb-xlib.so.0.0.0 > b7d32000-b7e7b000 r-xp 00000000 08:05 344310 /lib/tls/i686/cmov/ > libc-2.7.so > b7e7b000-b7e7c000 r--p 00149000 08:05 344310 
/lib/tls/i686/cmov/ > libc-2.7.so > b7e7c000-b7e7e000 rw-p 0014a000 08:05 344310 /lib/tls/i686/cmov/ > libc-2.7.so > b7e7e000-b7e81000 rw-p b7e7e000 00:00 0 > b7e81000-b7e95000 r-xp 00000000 08:05 344324 /lib/tls/i686/cmov/ > libpthread-2.7.so > b7e95000-b7e97000 rw-p 00013000 08:05 344324 /lib/tls/i686/cmov/ > libpthread-2.7.so > b7e97000-b7e99000 rw-p b7e97000 00:00 0 > b7e99000-b7e9b000 r-xp 00000000 08:05 344313 /lib/tls/i686/cmov/ > libdl-2.7.so > b7e9b000-b7e9d000 rw-p 00001000 08:05 344313 /lib/tls/i686/cmov/ > libdl-2.7.so > b7e9d000-b7ea4000 r-xp 00000000 08:05 344326 /lib/tls/i686/cmov/ > librt-2.7.so > b7ea4000-b7ea6000 rw-p 00006000 08:05 344326 /lib/tls/i686/cmov/ > librt-2.7.so > b7ea6000-b7eba000 r-xp 00000000 08:05 344316 /lib/tls/i686/cmov/ > libnsl-2.7.so > b7eba000-b7ebc000 rw-p 00013000 08:05 344316 /lib/tls/i686/cmov/ > libnsl-2.7.so > b7ebc000-b7ebf000 rw-p b7ebc000 00:00 0 > b7ebf000-b7ee2000 r-xp 00000000 08:05 344314 /lib/tls/i686/cmov/ > libm-2.7.so > b7ee2000-b7ee4000 rw-p 00023000 08:05 344314 /lib/tls/i686/cmov/ > libm-2.7.so > b7ee4000-b7fc8000 r-xp 00000000 08:05 179774 /usr/lib/libX11.so.6.2.0 > b7fc8000-b7fcb000 rw-p 000e4000 08:05 179774 /usr/lib/libX11.so.6.2.0 > b7fcf000-b7fd9000 r-xp 00000000 08:05 308438 /lib/libgcc_s.so.1 > b7fd9000-b7fda000 rw-p 0000a000 08:05 308438 /lib/libgcc_s.so.1 > b7fda000-b7fdd000 rw-p b7fda000 00:00 0 > b7fdd000-b7fde000 r-xp b7fdd000 00:00 0 [vdso] > b7fde000-b7ff8000 r-xp 00000000 08:05 309874 /lib/ld-2.7.so > b7ff8000-b7ffa000 rw-p 00019000 08:05 309874 /lib/ld-2.7.so > bfb4a000-bfb5f000 rw-p bffeb000 00:00 0 [stack] > > Program received signal SIGABRT, Aborted. > 0xb7fdd410 in __kernel_vsyscall () > > > -- > (Rebecca) Xuefei YUAN > Department of Applied Physics and Applied Mathematics > Columbia University > Tel:917-399-8032 > www.columbia.edu/~xy2102 > > -- What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. -- Norbert Wiener -------------- next part -------------- An HTML attachment was scrubbed... URL: From chenleping at yahoo.cn Fri Apr 16 09:07:05 2010 From: chenleping at yahoo.cn (=?gb2312?B?s8LA1sa9o6hMZXBpbmcgQ2hlbqOp?=) Date: Fri, 16 Apr 2010 22:07:05 +0800 Subject: [petsc-users] about Vecgetarray() Message-ID: <201004162206508299796@yahoo.cn> petsc teams, if I want create the relation between u() [array] and x [Vec] ,I can do it as follows, #define u(ib) xx_v(xx_i + (ib)) call VecGetArray(x,xx_v,xx_i,ierr) do 30 i=1,n u(i) = 1000.0*i 30 continue call VecRestoreArray(x,xx_v,xx_i,ierr) However, I don't understand why can not I do it as follows, double precision u(1) or u(6) call VecGetArray(x,u,xx_i,ierr) do 30 i=1,n u(i) = 1000.0*i 30 continue call VecRestoreArray(x,u,xx_i,ierr) Why u() must be created by #define, and u() cannot be defined again,for example "double precision u(1)" or u(5). By the way, if u() is a array of common blocks(fortran), how can I create the relation between u() and x? thanks, Leping 2010-04-16 -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From xy2102 at columbia.edu Sat Apr 17 09:10:00 2010 From: xy2102 at columbia.edu ((Rebecca) Xuefei YUAN) Date: Sat, 17 Apr 2010 10:10:00 -0400 Subject: [petsc-users] glibc detected In-Reply-To: References: <20100417093539.7ma6xk7ez280kc84@cubmail.cc.columbia.edu> Message-ID: <20100417101000.w2x1pxl58g48084w@cubmail.cc.columbia.edu> Dear Matt, I did run with valgrind, and the error message is for a DAGetLocalVector() in FormFunction() which is not related to this structure. Thanks very much! Here is the details from valgrind: ==4871== Invalid write of size 8 ==4871== at 0x8070D09: FormFunction (twgcqt2unffnictvscj.c:2654) ==4871== by 0x80D8026: SNESComputeFunction (snes.c:1093) ==4871== by 0x8100AB2: SNESSolve_LS (ls.c:159) ==4871== by 0x80E0D43: SNESSolve (snes.c:2242) ==4871== by 0x80FA8C8: DMMGSolveSNES (damgsnes.c:510) ==4871== by 0x80F3DC0: DMMGSolve (damg.c:313) ==4871== by 0x8052ECB: Solve (twgcqt2unffnictvscj.c:678) ==4871== by 0x804DEF4: main (twgcqt2unffnictvscj.c:302) ==4871== Address 0x464aea8 is 4 bytes after a block of size 17,092 alloc'd ==4871== at 0x4021E01: memalign (vg_replace_malloc.c:532) ==4871== by 0x873A8B4: PetscMallocAlign (mal.c:33) ==4871== by 0x873B9EB: PetscTrMallocDefault (mtr.c:194) ==4871== by 0x8686B80: VecCreate_Seq (bvec2.c:809) ==4871== by 0x866C373: VecSetType (vecreg.c:54) ==4871== by 0x86923DB: VecCreateSeq (vseqcr.c:40) ==4871== by 0x8271215: DACreateLocalVector (dalocal.c:82) ==4871== by 0x82B9E70: DMCreateLocalVector (dm.c:115) ==4871== by 0x82716E7: DMGetLocalVector (dalocal.c:139) ==4871== by 0x8070719: FormFunction (twgcqt2unffnictvscj.c:2629) ==4871== by 0x80D8026: SNESComputeFunction (snes.c:1093) ==4871== by 0x8100AB2: SNESSolve_LS (ls.c:159) ==4871== ==4871== Invalid write of size 8 ==4871== at 0x8070D46: FormFunction (twgcqt2unffnictvscj.c:2655) ==4871== by 0x80D8026: SNESComputeFunction (snes.c:1093) ==4871== by 0x8100AB2: SNESSolve_LS (ls.c:159) ==4871== by 0x80E0D43: SNESSolve (snes.c:2242) ==4871== by 0x80FA8C8: DMMGSolveSNES (damgsnes.c:510) ==4871== by 0x80F3DC0: DMMGSolve (damg.c:313) ==4871== by 0x8052ECB: Solve (twgcqt2unffnictvscj.c:678) ==4871== by 0x804DEF4: main (twgcqt2unffnictvscj.c:302) ==4871== Address 0x464aeb0 is 12 bytes after a block of size 17,092 alloc'd ==4871== at 0x4021E01: memalign (vg_replace_malloc.c:532) ==4871== by 0x873A8B4: PetscMallocAlign (mal.c:33) ==4871== by 0x873B9EB: PetscTrMallocDefault (mtr.c:194) ==4871== by 0x8686B80: VecCreate_Seq (bvec2.c:809) ==4871== by 0x866C373: VecSetType (vecreg.c:54) ==4871== by 0x86923DB: VecCreateSeq (vseqcr.c:40) ==4871== by 0x8271215: DACreateLocalVector (dalocal.c:82) ==4871== by 0x82B9E70: DMCreateLocalVector (dm.c:115) ==4871== by 0x82716E7: DMGetLocalVector (dalocal.c:139) ==4871== by 0x8070719: FormFunction (twgcqt2unffnictvscj.c:2629) ==4871== by 0x80D8026: SNESComputeFunction (snes.c:1093) ==4871== by 0x8100AB2: SNESSolve_LS (ls.c:159) ==4871== [0]PETSC ERROR: --------------------- Error Message ------------------------------------ [0]PETSC ERROR: Floating point exception! [0]PETSC ERROR: User provided compute function generated a Not-a-Number! [0]PETSC ERROR: ------------------------------------------------------------------------ [0]PETSC ERROR: Petsc Development HG revision: unknown HG Date: unknown [0]PETSC ERROR: See docs/changes/index.html for recent updates. [0]PETSC ERROR: See docs/faq.html for hints about trouble shooting. [0]PETSC ERROR: See docs/index.html for manual pages. 
[0]PETSC ERROR: ------------------------------------------------------------------------ [0]PETSC ERROR: ./twgcqt2unffnictvscj.exe on a linux-gnu named YuanWork by rebecca Sat Apr 17 10:09:32 2010 [0]PETSC ERROR: Libraries linked from /home/rebecca/soft/petsc-dev/linux-gnu-c-debug/lib [0]PETSC ERROR: Configure run at Tue Jan 26 13:06:44 2010 [0]PETSC ERROR: Configure options --with-cc=gcc --with-fc=0 --download-c-blas-lapack=/home/rebecca/soft/petsc-dev/f2cblaslapack-3.1.1.tar.gz --download-mpich=/home/rebecca/soft/petsc-dev/mpich2-1.2.1.tar.gz --download-sowing=/home/rebecca/soft/petsc-dev/sowing-1.1.15.tar.gz --download-c2html=/home/rebecca/soft/petsc-dev/c2html.tar.gz --with-shared=0 [0]PETSC ERROR: ------------------------------------------------------------------------ [0]PETSC ERROR: SNESSolve_LS() line 168 in src/snes/impls/ls/ls.c [0]PETSC ERROR: SNESSolve() line 2242 in src/snes/interface/snes.c [0]PETSC ERROR: DMMGSolveSNES() line 510 in src/snes/utils/damgsnes.c [0]PETSC ERROR: DMMGSolve() line 313 in src/snes/utils/damg.c [0]PETSC ERROR: Solve() line 678 in twgcqt2unffnictvscj.c [0]PETSC ERROR: main() line 302 in twgcqt2unffnictvscj.c application called MPI_Abort(MPI_COMM_WORLD, 72) - process 0 [unset]: aborting job: application called MPI_Abort(MPI_COMM_WORLD, 72) - process 0 ==4871== ==4871== HEAP SUMMARY: ==4871== in use at exit: 983,336 bytes in 1,927 blocks ==4871== total heap usage: 4,540 allocs, 2,583 frees, 3,421,236 bytes allocated ==4871== ==4871== LEAK SUMMARY: ==4871== definitely lost: 1,780 bytes in 13 blocks ==4871== indirectly lost: 120 bytes in 10 blocks ==4871== possibly lost: 963,492 bytes in 1,837 blocks ==4871== still reachable: 20,820 bytes in 97 blocks ==4871== suppressed: 0 bytes in 0 blocks ==4871== Rerun with --leak-check=full to see details of leaked memory ==4871== ==4871== For counts of detected and suppressed errors, rerun with: -v ==4871== ERROR SUMMARY: 2 errors from 2 contexts (suppressed: 40 from 10) Quoting Matthew Knepley : > This means there is memory corruption. I would use valgrind to find it. > > Matt > > On Sat, Apr 17, 2010 at 8:35 AM, (Rebecca) Xuefei YUAN > wrote: > >> Hi, >> >> I got a message of glibc detected, and not sure where is it coming from, >> any ideas? >> >> What I have changed from a working copy to this one is to add an element in >> one of the structure which in the appCtx. >> >> Thanks very much! >> ----------------------------------------------------------------------- >> (gdb) c >> Continuing. 
>> *** glibc detected *** >> /home/rebecca/linux/code/twoway/twoway_new/workingspace/twgcqt2unffnictvscj.exe: >> free(): invalid next size (normal): 0x08b10750 *** >> ======= Backtrace: ========= >> /lib/tls/i686/cmov/libc.so.6[0xb7d9da85] >> /lib/tls/i686/cmov/libc.so.6(cfree+0x90)[0xb7da14f0] >> >> /home/rebecca/linux/code/twoway/twoway_new/workingspace/twgcqt2unffnictvscj.exe[0x873a938] >> >> /home/rebecca/linux/code/twoway/twoway_new/workingspace/twgcqt2unffnictvscj.exe[0x873c43b] >> >> /home/rebecca/linux/code/twoway/twoway_new/workingspace/twgcqt2unffnictvscj.exe[0x872e8f0] >> >> /home/rebecca/linux/code/twoway/twoway_new/workingspace/twgcqt2unffnictvscj.exe[0x866c211] >> >> /home/rebecca/linux/code/twoway/twoway_new/workingspace/twgcqt2unffnictvscj.exe[0x86a4768] >> >> /home/rebecca/linux/code/twoway/twoway_new/workingspace/twgcqt2unffnictvscj.exe[0x8276048] >> >> /home/rebecca/linux/code/twoway/twoway_new/workingspace/twgcqt2unffnictvscj.exe[0x80729e1] >> >> /home/rebecca/linux/code/twoway/twoway_new/workingspace/twgcqt2unffnictvscj.exe[0x80d8027] >> >> /home/rebecca/linux/code/twoway/twoway_new/workingspace/twgcqt2unffnictvscj.exe[0x8100ab3] >> >> /home/rebecca/linux/code/twoway/twoway_new/workingspace/twgcqt2unffnictvscj.exe[0x80e0d44] >> >> /home/rebecca/linux/code/twoway/twoway_new/workingspace/twgcqt2unffnictvscj.exe[0x80fa8c9] >> >> /home/rebecca/linux/code/twoway/twoway_new/workingspace/twgcqt2unffnictvscj.exe[0x80f3dc1] >> >> /home/rebecca/linux/code/twoway/twoway_new/workingspace/twgcqt2unffnictvscj.exe[0x8052ecc] >> >> /home/rebecca/linux/code/twoway/twoway_new/workingspace/twgcqt2unffnictvscj.exe[0x804def5] >> /lib/tls/i686/cmov/libc.so.6(__libc_start_main+0xe0)[0xb7d48450] >> >> /home/rebecca/linux/code/twoway/twoway_new/workingspace/twgcqt2unffnictvscj.exe[0x804b451] >> ======= Memory map: ======== >> 08048000-08918000 r-xp 00000000 08:06 3910764 >> >> /home/rebecca/linux/code/twoway/twoway_new/workingspace/twgcqt2unffnictvscj.exe >> 08918000-0891e000 rw-p 008cf000 08:06 3910764 >> >> /home/rebecca/linux/code/twoway/twoway_new/workingspace/twgcqt2unffnictvscj.exe >> 0891e000-08b1f000 rw-p 0891e000 00:00 0 [heap] >> b7b00000-b7b21000 rw-p b7b00000 00:00 0 >> b7b21000-b7c00000 ---p b7b21000 00:00 0 >> b7cef000-b7cf7000 r-xp 00000000 08:05 344321 /lib/tls/i686/cmov/ >> libnss_nis-2.7.so >> b7cf7000-b7cf9000 rw-p 00007000 08:05 344321 /lib/tls/i686/cmov/ >> libnss_nis-2.7.so >> b7cf9000-b7d00000 r-xp 00000000 08:05 344317 /lib/tls/i686/cmov/ >> libnss_compat-2.7.so >> b7d00000-b7d02000 rw-p 00006000 08:05 344317 /lib/tls/i686/cmov/ >> libnss_compat-2.7.so >> b7d02000-b7d0b000 r-xp 00000000 08:05 344319 /lib/tls/i686/cmov/ >> libnss_files-2.7.so >> b7d0b000-b7d0d000 rw-p 00008000 08:05 344319 /lib/tls/i686/cmov/ >> libnss_files-2.7.so >> b7d0d000-b7d0f000 rw-p b7d0d000 00:00 0 >> b7d0f000-b7d13000 r-xp 00000000 08:05 179791 /usr/lib/libXdmcp.so.6.0.0 >> b7d13000-b7d14000 rw-p 00003000 08:05 179791 /usr/lib/libXdmcp.so.6.0.0 >> b7d14000-b7d16000 r-xp 00000000 08:05 179780 /usr/lib/libXau.so.6.0.0 >> b7d16000-b7d17000 rw-p 00001000 08:05 179780 /usr/lib/libXau.so.6.0.0 >> b7d17000-b7d2e000 r-xp 00000000 08:05 180628 /usr/lib/libxcb.so.1.0.0 >> b7d2e000-b7d2f000 rw-p 00016000 08:05 180628 /usr/lib/libxcb.so.1.0.0 >> b7d2f000-b7d30000 rw-p b7d2f000 00:00 0 >> b7d30000-b7d31000 r-xp 00000000 08:05 180626 >> /usr/lib/libxcb-xlib.so.0.0.0 >> b7d31000-b7d32000 rw-p 00000000 08:05 180626 >> /usr/lib/libxcb-xlib.so.0.0.0 >> b7d32000-b7e7b000 r-xp 00000000 08:05 344310 
/lib/tls/i686/cmov/ >> libc-2.7.so >> b7e7b000-b7e7c000 r--p 00149000 08:05 344310 /lib/tls/i686/cmov/ >> libc-2.7.so >> b7e7c000-b7e7e000 rw-p 0014a000 08:05 344310 /lib/tls/i686/cmov/ >> libc-2.7.so >> b7e7e000-b7e81000 rw-p b7e7e000 00:00 0 >> b7e81000-b7e95000 r-xp 00000000 08:05 344324 /lib/tls/i686/cmov/ >> libpthread-2.7.so >> b7e95000-b7e97000 rw-p 00013000 08:05 344324 /lib/tls/i686/cmov/ >> libpthread-2.7.so >> b7e97000-b7e99000 rw-p b7e97000 00:00 0 >> b7e99000-b7e9b000 r-xp 00000000 08:05 344313 /lib/tls/i686/cmov/ >> libdl-2.7.so >> b7e9b000-b7e9d000 rw-p 00001000 08:05 344313 /lib/tls/i686/cmov/ >> libdl-2.7.so >> b7e9d000-b7ea4000 r-xp 00000000 08:05 344326 /lib/tls/i686/cmov/ >> librt-2.7.so >> b7ea4000-b7ea6000 rw-p 00006000 08:05 344326 /lib/tls/i686/cmov/ >> librt-2.7.so >> b7ea6000-b7eba000 r-xp 00000000 08:05 344316 /lib/tls/i686/cmov/ >> libnsl-2.7.so >> b7eba000-b7ebc000 rw-p 00013000 08:05 344316 /lib/tls/i686/cmov/ >> libnsl-2.7.so >> b7ebc000-b7ebf000 rw-p b7ebc000 00:00 0 >> b7ebf000-b7ee2000 r-xp 00000000 08:05 344314 /lib/tls/i686/cmov/ >> libm-2.7.so >> b7ee2000-b7ee4000 rw-p 00023000 08:05 344314 /lib/tls/i686/cmov/ >> libm-2.7.so >> b7ee4000-b7fc8000 r-xp 00000000 08:05 179774 /usr/lib/libX11.so.6.2.0 >> b7fc8000-b7fcb000 rw-p 000e4000 08:05 179774 /usr/lib/libX11.so.6.2.0 >> b7fcf000-b7fd9000 r-xp 00000000 08:05 308438 /lib/libgcc_s.so.1 >> b7fd9000-b7fda000 rw-p 0000a000 08:05 308438 /lib/libgcc_s.so.1 >> b7fda000-b7fdd000 rw-p b7fda000 00:00 0 >> b7fdd000-b7fde000 r-xp b7fdd000 00:00 0 [vdso] >> b7fde000-b7ff8000 r-xp 00000000 08:05 309874 /lib/ld-2.7.so >> b7ff8000-b7ffa000 rw-p 00019000 08:05 309874 /lib/ld-2.7.so >> bfb4a000-bfb5f000 rw-p bffeb000 00:00 0 [stack] >> >> Program received signal SIGABRT, Aborted. >> 0xb7fdd410 in __kernel_vsyscall () >> >> >> -- >> (Rebecca) Xuefei YUAN >> Department of Applied Physics and Applied Mathematics >> Columbia University >> Tel:917-399-8032 >> www.columbia.edu/~xy2102 >> >> > > > -- > What most experimenters take for granted before they begin their experiments > is infinitely more interesting than any results to which their experiments > lead. > -- Norbert Wiener > -- (Rebecca) Xuefei YUAN Department of Applied Physics and Applied Mathematics Columbia University Tel:917-399-8032 www.columbia.edu/~xy2102 From knepley at gmail.com Sat Apr 17 09:15:22 2010 From: knepley at gmail.com (Matthew Knepley) Date: Sat, 17 Apr 2010 09:15:22 -0500 Subject: [petsc-users] about Vecgetarray() In-Reply-To: <201004162206508299796@yahoo.cn> References: <201004162206508299796@yahoo.cn> Message-ID: 2010/4/16 ????Leping Chen? > petsc teams, > > if I want create the relation between u() [array] and x [Vec] ,I can do it > as follows, > > #define u(ib) xx_v(xx_i + (ib)) > call VecGetArray(x,xx_v,xx_i,ierr) > do 30 i=1,n > u(i) = 1000.0*i > 30 continue > call VecRestoreArray(x,xx_v,xx_i,ierr) > > However, I don't understand why can not I do it as follows, > > double precision u(1) or u(6) > call VecGetArray(x,u,xx_i,ierr) > do 30 i=1,n > u(i) = 1000.0*i > 30 continue > call VecRestoreArray(x,u,xx_i,ierr) > > Why u() must be created by #define, and u() cannot be defined again,for > example "double precision u(1)" or u(5). > 1) It does not have to be #define. This is shown for convenience. 2) You must still declare xx_v in your first example, probably exactly as you declare u in your second example. These are not PETSc questions. They are basic Fortran programming questions. There are many excellent books on this. 
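For concreteness, a minimal (untested) sketch of your first form with the
declarations spelled out, assuming the usual PETSc Fortran include files;
the names are just the ones from your example:

      Vec            x
      PetscScalar    xx_v(1)
      PetscOffset    xx_i
      PetscErrorCode ierr
#define u(ib) xx_v(xx_i + (ib))

The #define is only shorthand: you can write xx_v(xx_i+i) directly instead
of u(i), but xx_v and xx_i must be declared either way.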
Matt > > By the way, if u() is a array of common blocks(fortran), how can I create > the relation between u() and x? > > thanks, > > Leping > ------------------------------ > 2010-04-16 > > -- What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. -- Norbert Wiener -------------- next part -------------- An HTML attachment was scrubbed... URL: From jed at 59A2.org Sat Apr 17 09:37:59 2010 From: jed at 59A2.org (Jed Brown) Date: Sat, 17 Apr 2010 16:37:59 +0200 Subject: [petsc-users] glibc detected In-Reply-To: <20100417101000.w2x1pxl58g48084w@cubmail.cc.columbia.edu> References: <20100417093539.7ma6xk7ez280kc84@cubmail.cc.columbia.edu> <20100417101000.w2x1pxl58g48084w@cubmail.cc.columbia.edu> Message-ID: <874ojajgag.fsf@59A2.org> On Sat, 17 Apr 2010 10:10:00 -0400, "(Rebecca) Xuefei YUAN" wrote: > Dear Matt, > > I did run with valgrind, and the error message is for a > DAGetLocalVector() in FormFunction() which is not related to this > structure. This first error is causing memory corruption which causes the invalid free() you see later. Run with valgrind and -malloc 0, then fix every item in the order they appear (many of the later items will be caused by earlier corruption, so rerun valgrind after each fix). Jed From chenleping at yahoo.cn Fri Apr 16 09:38:33 2010 From: chenleping at yahoo.cn (=?gb2312?B?s8LA1sa9o6hMZXBpbmcgQ2hlbqOp?=) Date: Fri, 16 Apr 2010 22:38:33 +0800 Subject: [petsc-users] about Vecgetarray() References: <201004162206508299796@yahoo.cn> Message-ID: <201004162238299863851@yahoo.cn> petsc teams, PetscOffset xx_i Vec x double precision u(6) call VecGetArray(x,u,xx_i,ierr) do 30 i=1,n u(i) = 1000.0*i 30 continue call VecRestoreArray(x,u,xx_i,ierr) I don't understand why x cannot be changed when u() has been changed? how can I do it? the output are as follows, x vector: 10 20 30 40 50 60 u() array 1000 2000 3000 4000 5000 6000 thanks, leping ???? Matthew Knepley ????? 2010-04-17 22:15:24 ???? chenleping; PETSc users list ??? ??? Re: [petsc-users] about Vecgetarray() 2010/4/16 ????Leping Chen? petsc teams, if I want create the relation between u() [array] and x [Vec] ,I can do it as follows, #define u(ib) xx_v(xx_i + (ib)) call VecGetArray(x,xx_v,xx_i,ierr) do 30 i=1,n u(i) = 1000.0*i 30 continue call VecRestoreArray(x,xx_v,xx_i,ierr) However, I don't understand why can not I do it as follows, double precision u(1) or u(6) call VecGetArray(x,u,xx_i,ierr) do 30 i=1,n u(i) = 1000.0*i 30 continue call VecRestoreArray(x,u,xx_i,ierr) Why u() must be created by #define, and u() cannot be defined again,for example "double precision u(1)" or u(5). 1) It does not have to be #define. This is shown for convenience. 2) You must still declare xx_v in your first example, probably exactly as you declare u in your second example. These are not PETSc questions. They are basic Fortran programming questions. There are many excellent books on this. Matt By the way, if u() is a array of common blocks(fortran), how can I create the relation between u() and x? thanks, Leping 2010-04-16 -- What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. -- Norbert Wiener -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From balay at mcs.anl.gov Sat Apr 17 09:42:16 2010 From: balay at mcs.anl.gov (Satish Balay) Date: Sat, 17 Apr 2010 09:42:16 -0500 (CDT) Subject: [petsc-users] about Vecgetarray() In-Reply-To: <201004162238299863851@yahoo.cn> References: <201004162206508299796@yahoo.cn> <201004162238299863851@yahoo.cn> Message-ID: As matt mentioned - if you are programming in fortran - you need to learn fortran. On Fri, 16 Apr 2010, ????Leping Chen? wrote: > petsc teams, > > PetscOffset xx_i > Vec x > double precision u(6) > > call VecGetArray(x,u,xx_i,ierr) > do 30 i=1,n > u(i) = 1000.0*i This is incorrect usage. Check the examples. It should be: u(i+xx_i) = 1000.0*i Or check VecGetArrayF90 as mentioned before. Satish > 30 continue > call VecRestoreArray(x,u,xx_i,ierr) > > I don't understand why x cannot be changed when u() has been changed? > how can I do it? > the output are as follows, > x vector: > 10 > 20 > 30 > 40 > 50 > 60 > u() array > 1000 > 2000 > 3000 > 4000 > 5000 > 6000 > thanks, > leping > > > > > ???? Matthew Knepley > ????? 2010-04-17 22:15:24 > ???? chenleping; PETSc users list > ??? > ??? Re: [petsc-users] about Vecgetarray() > 2010/4/16 ????Leping Chen? > > petsc teams, > > if I want create the relation between u() [array] and x [Vec] ,I can do it as follows, > > #define u(ib) xx_v(xx_i + (ib)) > call VecGetArray(x,xx_v,xx_i,ierr) > do 30 i=1,n > u(i) = 1000.0*i > 30 continue > call VecRestoreArray(x,xx_v,xx_i,ierr) > > However, I don't understand why can not I do it as follows, > > double precision u(1) or u(6) > call VecGetArray(x,u,xx_i,ierr) > do 30 i=1,n > u(i) = 1000.0*i > 30 continue > call VecRestoreArray(x,u,xx_i,ierr) > > Why u() must be created by #define, and u() cannot be defined again,for example "double precision u(1)" or u(5). > > > 1) It does not have to be #define. This is shown for convenience. > > > 2) You must still declare xx_v in your first example, probably exactly as you > declare u in your second example. > > > These are not PETSc questions. They are basic Fortran programming questions. > There are many excellent books on this. > > > Matt > > > By the way, if u() is a array of common blocks(fortran), how can I create the relation between u() and x? > > thanks, > > Leping > > > > 2010-04-16 > > > > > From balay at mcs.anl.gov Sat Apr 17 10:00:11 2010 From: balay at mcs.anl.gov (Satish Balay) Date: Sat, 17 Apr 2010 10:00:11 -0500 (CDT) Subject: [petsc-users] about Vecgetarray() In-Reply-To: References: <201004162206508299796@yahoo.cn> <201004162238299863851@yahoo.cn> Message-ID: > On Fri, 16 Apr 2010, ????Leping Chen? wrote: > > PetscOffset xx_i > > Vec x > > double precision u(6) > > > > call VecGetArray(x,u,xx_i,ierr) > > do 30 i=1,n > > u(i) = 1000.0*i > u(i+xx_i) = 1000.0*i > > 30 continue > > call VecRestoreArray(x,u,xx_i,ierr) Hopefully the following clears up the confusion. You are expecting the following to work: >>>> integer n=6 double precision u(6) call VecGetArray(x,u,ierr) do 30 i=1,n u(i) = 1000.0*i 30 continue call VecRestoreArray(x,u,ierr) <<< Here you are expecting VecGetArray to *copy* the values of 'x' into 'u'. And VecRestoreArray() to *copy* values from 'u' to 'x'. However these copies are inefficient - so we do not do that. http://www.mcs.anl.gov/petsc/petsc-as/snapshots/petsc-3.1/docs/manualpages/Vec/VecGetArray.html Here VecGetArray() tries to return the *pointer* to the array stored in the Vec. This is possible in C - but not fortran77. Hence the concept of using offset in the f77 interface. 
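For illustration, a minimal (untested) sketch of that f77 idiom, reusing the
variable names from above and assuming the usual PETSc Fortran includes:

      PetscScalar    xx_v(1)
      PetscOffset    xx_i
      PetscErrorCode ierr
      PetscInt       i,n

      call VecGetArray(x,xx_v,xx_i,ierr)
      do 30 i=1,n
!        no copy is made: the data stays inside the Vec, and xx_i shifts
!        the index so that xx_v(xx_i+i) lands on the Vec's own storage
         xx_v(xx_i+i) = 1000.0*i
 30   continue
      call VecRestoreArray(x,xx_v,xx_i,ierr)

Here xx_v(1) is only a place-holder declaration; no values are copied in or
out of it.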
> u(i+xx_i) = 1000.0*i Note: the above is out-of-bounds access of 'u' wrt F77 - and it is a workarround against the limitation of F77 language. [so its not pure F77 code]. So if you need pure language compliant code - check VecGetArrayF90() as indicated before. It returns a F90 pointer from a Vec. Satish From xy2102 at columbia.edu Sat Apr 17 10:07:55 2010 From: xy2102 at columbia.edu ((Rebecca) Xuefei YUAN) Date: Sat, 17 Apr 2010 11:07:55 -0400 Subject: [petsc-users] glibc detected In-Reply-To: <874ojajgag.fsf@59A2.org> References: <20100417093539.7ma6xk7ez280kc84@cubmail.cc.columbia.edu> <20100417101000.w2x1pxl58g48084w@cubmail.cc.columbia.edu> <874ojajgag.fsf@59A2.org> Message-ID: <20100417110755.0zokjzx4owcgo0c8@cubmail.cc.columbia.edu> Dear Jed, What did I do wrong on this? Nothing comes out from -malloc 0: rebecca at YuanWork:~/linux/code/twoway/twoway_new/workingspace$ valgrind -malloc 0 ./twgcqt2unffnictvscj.exe -options_file option_all valgrind: 0: command not found rebecca at YuanWork:~/linux/code/twoway/twoway_new/workingspace$ valgrind --malloc 0 ./twgcqt2unffnictvscj.exe -options_file option_all valgrind: 0: command not found Thanks a lot! Rebecca Quoting Jed Brown : > On Sat, 17 Apr 2010 10:10:00 -0400, "(Rebecca) Xuefei YUAN" > wrote: >> Dear Matt, >> >> I did run with valgrind, and the error message is for a >> DAGetLocalVector() in FormFunction() which is not related to this >> structure. > > This first error is causing memory corruption which causes the invalid > free() you see later. Run with valgrind and -malloc 0, then fix every > item in the order they appear (many of the later items will be caused by > earlier corruption, so rerun valgrind after each fix). > > Jed > > -- (Rebecca) Xuefei YUAN Department of Applied Physics and Applied Mathematics Columbia University Tel:917-399-8032 www.columbia.edu/~xy2102 From jed at 59A2.org Sat Apr 17 11:24:51 2010 From: jed at 59A2.org (Jed Brown) Date: Sat, 17 Apr 2010 18:24:51 +0200 Subject: [petsc-users] glibc detected In-Reply-To: <20100417110755.0zokjzx4owcgo0c8@cubmail.cc.columbia.edu> References: <20100417093539.7ma6xk7ez280kc84@cubmail.cc.columbia.edu> <20100417101000.w2x1pxl58g48084w@cubmail.cc.columbia.edu> <874ojajgag.fsf@59A2.org> <20100417110755.0zokjzx4owcgo0c8@cubmail.cc.columbia.edu> Message-ID: <8739yujbcc.fsf@59A2.org> On Sat, 17 Apr 2010 11:07:55 -0400, "(Rebecca) Xuefei YUAN" wrote: > Dear Jed, > > What did I do wrong on this? Nothing comes out from -malloc 0: $ valgrind ./twgcqt2unffnictvscj.exe -options_file option_all -malloc 0 Run it this way. The "-malloc 0" option turns off PETSc's sentinels (that debug PetscMalloc uses to detect memory corruption) so that valgrind can do it's job better. Note that you can have valgrind attach a debugger at the error points with $ valgrind --db-attach=yes ./twgcqt2unffnictvscj.exe -options_file option_all -malloc 0 Jed From xy2102 at columbia.edu Sat Apr 17 16:37:48 2010 From: xy2102 at columbia.edu ((Rebecca) Xuefei YUAN) Date: Sat, 17 Apr 2010 17:37:48 -0400 Subject: [petsc-users] glibc detected In-Reply-To: <8739yujbcc.fsf@59A2.org> References: <20100417093539.7ma6xk7ez280kc84@cubmail.cc.columbia.edu> <20100417101000.w2x1pxl58g48084w@cubmail.cc.columbia.edu> <874ojajgag.fsf@59A2.org> <20100417110755.0zokjzx4owcgo0c8@cubmail.cc.columbia.edu> <8739yujbcc.fsf@59A2.org> Message-ID: <20100417173748.can3ei3m8cwssswk@cubmail.cc.columbia.edu> Dear all, I found the bug of the code, it is a mistake at the declaration. I fixed it. Thanks a lot for your kind help! 
Best, Rebecca Quoting Jed Brown : > On Sat, 17 Apr 2010 11:07:55 -0400, "(Rebecca) Xuefei YUAN" > wrote: >> Dear Jed, >> >> What did I do wrong on this? Nothing comes out from -malloc 0: > > $ valgrind ./twgcqt2unffnictvscj.exe -options_file option_all -malloc 0 > > Run it this way. The "-malloc 0" option turns off PETSc's sentinels > (that debug PetscMalloc uses to detect memory corruption) so that > valgrind can do it's job better. Note that you can have valgrind attach > a debugger at the error points with > > $ valgrind --db-attach=yes ./twgcqt2unffnictvscj.exe -options_file > option_all -malloc 0 > > Jed > > -- (Rebecca) Xuefei YUAN Department of Applied Physics and Applied Mathematics Columbia University Tel:917-399-8032 www.columbia.edu/~xy2102 From torres.pedrozpk at gmail.com Sun Apr 18 16:15:12 2010 From: torres.pedrozpk at gmail.com (Pedro Torres) Date: Sun, 18 Apr 2010 18:15:12 -0300 Subject: [petsc-users] Measuring memory system performance Message-ID: Hello, I want to measure memory system performance in my node, obtained with the sparse matrix-vector kernel of PETSc. Please, could you recommend any simple tools to perform that?. Thanks a lot. Regards -- Pedro Torres GESAR/UERJ Rua Fonseca Teles 121, S?o Crist?v?o Rio de Janeiro - Brasil -------------- next part -------------- An HTML attachment was scrubbed... URL: From jed at 59A2.org Sun Apr 18 16:27:53 2010 From: jed at 59A2.org (Jed Brown) Date: Sun, 18 Apr 2010 23:27:53 +0200 Subject: [petsc-users] Measuring memory system performance In-Reply-To: References: Message-ID: <87eiictpra.fsf@59A2.org> On Sun, 18 Apr 2010 18:15:12 -0300, Pedro Torres wrote: > Hello, > > I want to measure memory system performance in my node, obtained with the > sparse matrix-vector kernel of PETSc. Please, could you recommend any simple > tools to perform that?. Thanks a lot. You can use hardware counters, but there are somewhat platform-specific. You can also take any of the PETSc examples and run with no preconditioner (-pc_type none), use -ksp_view to determine the number of nonzeros in the matrix, and -log_summary to find the amount of time spent in MatMult. A lower bound for the achieved bandwidth is number_of_MatMults * number_of_nonzeros * (sizeof(PetscScalar) + f*sizeof(PetscInt)) / time_in_MatMults where f=1 for AIJ without inodes and f=1/(bs*bs) for BAIJ with block size bs. This does not include the bandwidth for the vector (more significant if you have a poor ordering or the matrix sparsity is such that the vector does not reuse the cache well). For pure bandwidth measurements, see the STREAM benchmark (http://www.cs.virginia.edu/stream/). Jed From tribur at vision.ee.ethz.ch Mon Apr 19 06:29:40 2010 From: tribur at vision.ee.ethz.ch (tribur at vision.ee.ethz.ch) Date: Mon, 19 Apr 2010 13:29:40 +0200 Subject: [petsc-users] ML and -pc_factor_shift_nonzero Message-ID: <20100419132940.18513a8ybl3we8is@email.ee.ethz.ch> Hi Jed, >> ML works now using, e.g., -mg_coarse_redundant_pc_factor_shift_type >> POSITIVE_DEFINITE. However, it converges very slowly using the default >> REDUNDANT for the coarse solve. > > "Converges slowly" or "the coarse-level solve is expensive"? hm, rather "converges slowly". Using ML inside a preconditioner for the Schur complement system, the overall outer system preconditioned with the approximated Schur complement preconditioner converges slowly, if you understand what I mean. My particular problem is that the convergence rate depends strongly on the number of processors. 
In case of one processor, using ML for preconditioning the deeply inner system the outer system converges in, e.g., 39 iterations. In case of np=10, however, it needs 69 iterations. This number of iterations is independent on the number of processes using HYPRE (at least if np<80), but the latter is (applied to this inner system, not generally) slower and scales very badly. That's why I would like to use ML. Thinking about it, all this shouldn't have to do anything with the choice of the direct solver of the coarse system inside ML (mumps or petsc-own), should it? The direct solver solves completely, independently from the number of processes, and shouldn't have an influence on the effectiveness of ML, or am I wrong? > I suggest > starting with > > -mg_coarse_pc_type lu -mg_coarse_pc_factor_mat_solver_package mumps > > or varying parameters in ML to see if you can make the coarse level > problem smaller without hurting convergence rate. You can do > semi-redundant solves if you scale processor counts beyond what MUMPS > works well with. Thanks. Thus, MUMPS is supposed to be the usually fastest parallel direct solver? > Depending on what problem you are solving, ML could be producing a > (nearly) singular coarse level operator in which case you can expect > very confusing and inconsistent behavior. Could it also be the reason for the decreased convergence rate when increasing from 1 to 10 processors? Even if the equation system remains the same? Thanks a lot, Kathrin From knepley at gmail.com Mon Apr 19 06:34:08 2010 From: knepley at gmail.com (Matthew Knepley) Date: Mon, 19 Apr 2010 06:34:08 -0500 Subject: [petsc-users] ML and -pc_factor_shift_nonzero In-Reply-To: <20100419132940.18513a8ybl3we8is@email.ee.ethz.ch> References: <20100419132940.18513a8ybl3we8is@email.ee.ethz.ch> Message-ID: On Mon, Apr 19, 2010 at 6:29 AM, wrote: > Hi Jed, > > > ML works now using, e.g., -mg_coarse_redundant_pc_factor_shift_type >>> POSITIVE_DEFINITE. However, it converges very slowly using the default >>> REDUNDANT for the coarse solve. >>> >> >> "Converges slowly" or "the coarse-level solve is expensive"? >> > > hm, rather "converges slowly". Using ML inside a preconditioner for the > Schur complement system, the overall outer system preconditioned with the > approximated Schur complement preconditioner converges slowly, if you > understand what I mean. > > My particular problem is that the convergence rate depends strongly on the > number of processors. In case of one processor, using ML for preconditioning > the deeply inner system the outer system converges in, e.g., 39 iterations. > In case of np=10, however, it needs 69 iterations. > For Schur complement methods, the inner system usually has to be solved very accurately. Are you accelerating a Krylov method for A^{-1}, or just using ML itself? I would expect for the same linear system tolerance, you get identical convergence for the same system, independent of the number of processors. Matt > This number of iterations is independent on the number of processes using > HYPRE (at least if np<80), but the latter is (applied to this inner system, > not generally) slower and scales very badly. That's why I would like to use > ML. > > Thinking about it, all this shouldn't have to do anything with the choice > of the direct solver of the coarse system inside ML (mumps or petsc-own), > should it? The direct solver solves completely, independently from the > number of processes, and shouldn't have an influence on the effectiveness of > ML, or am I wrong? 
> > I suggest >> starting with >> >> -mg_coarse_pc_type lu -mg_coarse_pc_factor_mat_solver_package mumps >> >> or varying parameters in ML to see if you can make the coarse level >> problem smaller without hurting convergence rate. You can do >> semi-redundant solves if you scale processor counts beyond what MUMPS >> works well with. >> > > Thanks. Thus, MUMPS is supposed to be the usually fastest parallel direct > solver? > > Depending on what problem you are solving, ML could be producing a >> (nearly) singular coarse level operator in which case you can expect >> very confusing and inconsistent behavior. >> > > Could it also be the reason for the decreased convergence rate when > increasing from 1 to 10 processors? Even if the equation system remains the > same? > > > Thanks a lot, > > Kathrin > > > -- What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. -- Norbert Wiener -------------- next part -------------- An HTML attachment was scrubbed... URL: From jed at 59A2.org Mon Apr 19 06:52:19 2010 From: jed at 59A2.org (Jed Brown) Date: Mon, 19 Apr 2010 13:52:19 +0200 Subject: [petsc-users] ML and -pc_factor_shift_nonzero In-Reply-To: <20100419132940.18513a8ybl3we8is@email.ee.ethz.ch> References: <20100419132940.18513a8ybl3we8is@email.ee.ethz.ch> Message-ID: <87bpdfu0b0.fsf@59A2.org> On Mon, 19 Apr 2010 13:29:40 +0200, tribur at vision.ee.ethz.ch wrote: > >> ML works now using, e.g., -mg_coarse_redundant_pc_factor_shift_type > >> POSITIVE_DEFINITE. However, it converges very slowly using the default > >> REDUNDANT for the coarse solve. > > > > "Converges slowly" or "the coarse-level solve is expensive"? > > hm, rather "converges slowly". Using ML inside a preconditioner for > the Schur complement system, the overall outer system preconditioned > with the approximated Schur complement preconditioner converges > slowly, if you understand what I mean. Sure, but the redundant coarse solve is a direct solve. It may be that the shift (to make it nonsingular) makes it ineffective (and thus outer system converges slowly), but this is the same behavior you would get with a non-redundant solve. I.e. it is the shift that causes the problem, not the REDUNDANT. I don't know which flavor of Schur complement iteration you are currently using. It is true that pure Schur complement reduction requires high-accuracy inner solves, you may of course get away with inexact inner solves if it is part of a full-space iteration. It's worth comparing the number of iterations required to solve the inner (advection-diffusion) block to a given tolerance in parallel and serial. > My particular problem is that the convergence rate depends strongly on > the number of processors. In case of one processor, using ML for > preconditioning the deeply inner system the outer system converges in, > e.g., 39 iterations. In case of np=10, however, it needs 69 iterations. ML with defaults has a significant difference between serial and parallel. Usually the scalability is acceptable from 2 processors up, but the difference between one and two can be quite significant. You can make it stronger, e.g. with -mg_levels_ksp_type gmres -mg_levels_ksp_max_it 1 -mg_levels_pc_type asm -mg_levels_sub_pc_type ilu > This number of iterations is independent on the number of processes > using HYPRE (at least if np<80), but the latter is (applied to this > inner system, not generally) slower and scales very badly. 
That's why > I would like to use ML. > > Thinking about it, all this shouldn't have to do anything with the > choice of the direct solver of the coarse system inside ML (mumps or > petsc-own), should it? The direct solver solves completely, > independently from the number of processes, and shouldn't have an > influence on the effectiveness of ML, or am I wrong? A shift makes it solve a somewhat different system. How different that perturbed system is depends on the problem and the size of the shift. MUMPS has more sophisticated ordering/pivoting schemes so you should use it if the coarse system demands it (you can also try using different ordering schemes in PETSc, -mg_coarse_redundant_pc_factor_mat_ordering_type). > Thanks. Thus, MUMPS is supposed to be the usually fastest parallel > direct solver? Usually. > > Depending on what problem you are solving, ML could be producing a > > (nearly) singular coarse level operator in which case you can expect > > very confusing and inconsistent behavior. > > Could it also be the reason for the decreased convergence rate when > increasing from 1 to 10 processors? Even if the equation system > remains the same? ML's aggregates change somewhat in parallel (I don't know how much, I haven't investigated precisely what is different) and the smoothers are all different. With a "normal" discretization of an elliptic system, it would seem surprising for ML to produce nearly singular coarse-level operators, in parallel or otherwise. But snes/tutorials/examples/ex48 exhibits pretty bad ML behavior (the coarse-level isn't singular, but the parallel aggregates with default smoothers don't converge despite being an SPD system, ML is informed of translations but not rigid body modes, I haven't investigated ML's troublesome modes for this problem so I don't know if they are rigid body modes or something else). Jed From jed at 59a2.org Mon Apr 19 07:12:06 2010 From: jed at 59a2.org (Jed Brown) Date: Mon, 19 Apr 2010 14:12:06 +0200 Subject: [petsc-users] ML and -pc_factor_shift_nonzero In-Reply-To: References: <20100419132940.18513a8ybl3we8is@email.ee.ethz.ch> Message-ID: <87aasztze1.fsf@59A2.org> On Mon, 19 Apr 2010 06:34:08 -0500, Matthew Knepley wrote: > For Schur complement methods, the inner system usually has to be > solved very accurately. Are you accelerating a Krylov method for > A^{-1}, or just using ML itself? I would expect for the same linear > system tolerance, you get identical convergence for the same system, > independent of the number of processors. Matt, run ex48 with ML in parallel and serial, the aggregates are quite different and the parallel case doesn't converge with SOR. Also, from talking with Ray, Eric Cyr, and John Shadid two weeks ago, they are currently using ML on coupled Navier-Stokes systems and usually beating block factorization (i.e. full-space iterations with approximate-commutator Schur-complement preconditioners (PCD or LSC variants) which are beating full Schur-complement reduction). They are using Q1-Q1 with PSPG or Bochev stabilization and SUPG for advection. The trouble is that this method occasionally runs into problems where convergence completely falls apart, despite not having extreme parameter choices. ML has an option "energy minimization" which they are using (PETSc's interface doesn't currently support this, I'll add it if someone doesn't beat me to it) which is apparently crucial for generating reasonable coarse levels for these systems. 
They always coarsen all the degrees of freedom together, this is not possible with mixed finite element spaces, so you have to trade quality answers produced by a stable approximation along with necessity to make subdomain and coarse-level problems compatible with inf-sup against the wiggle-room you get with stabilized non-mixed discretizations but with possible artifacts and significant divergence error. Jed From knepley at gmail.com Mon Apr 19 07:23:01 2010 From: knepley at gmail.com (Matthew Knepley) Date: Mon, 19 Apr 2010 07:23:01 -0500 Subject: [petsc-users] ML and -pc_factor_shift_nonzero In-Reply-To: <87aasztze1.fsf@59A2.org> References: <20100419132940.18513a8ybl3we8is@email.ee.ethz.ch> <87aasztze1.fsf@59A2.org> Message-ID: On Mon, Apr 19, 2010 at 7:12 AM, Jed Brown wrote: > On Mon, 19 Apr 2010 06:34:08 -0500, Matthew Knepley > wrote: > > For Schur complement methods, the inner system usually has to be > > solved very accurately. Are you accelerating a Krylov method for > > A^{-1}, or just using ML itself? I would expect for the same linear > > system tolerance, you get identical convergence for the same system, > > independent of the number of processors. > > Matt, run ex48 with ML in parallel and serial, the aggregates are quite > different and the parallel case doesn't converge with SOR. Also, from > talking with Ray, Eric Cyr, and John Shadid two weeks ago, they are > currently using ML on coupled Navier-Stokes systems and usually beating > block factorization (i.e. full-space iterations with > approximate-commutator Schur-complement preconditioners (PCD or LSC > variants) which are beating full Schur-complement reduction). They are > using Q1-Q1 with PSPG or Bochev stabilization and SUPG for advection. > So, to see if I understand correctly. You are saying that you can get away with more approximate solves if you do not do full reduction? I know the theory for the case of Stokes, but can you prove this in a general sense? > The trouble is that this method occasionally runs into problems where > convergence completely falls apart, despite not having extreme parameter > choices. ML has an option "energy minimization" which they are using > (PETSc's interface doesn't currently support this, I'll add it if > someone doesn't beat me to it) which is apparently crucial for > generating reasonable coarse levels for these systems. > This sounds like the black magic I expect :) > They always coarsen all the degrees of freedom together, this is not > possible with mixed finite element spaces, so you have to trade quality > answers produced by a stable approximation along with necessity to make > subdomain and coarse-level problems compatible with inf-sup against the > wiggle-room you get with stabilized non-mixed discretizations but with > possible artifacts and significant divergence error. I still maintain that aggregation is a really crappy way to generate coarse systems, especially for mixed elements. We should be generating coarse systems geometrically, and then using a nice (maybe Black-Box) framework for calculating good projectors. Matt > > Jed > -- What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. -- Norbert Wiener -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From jed at 59a2.org Mon Apr 19 07:36:42 2010 From: jed at 59a2.org (Jed Brown) Date: Mon, 19 Apr 2010 14:36:42 +0200 Subject: [petsc-users] ML and -pc_factor_shift_nonzero In-Reply-To: References: <20100419132940.18513a8ybl3we8is@email.ee.ethz.ch> <87aasztze1.fsf@59A2.org> Message-ID: <878w8jty91.fsf@59A2.org> On Mon, 19 Apr 2010 07:23:01 -0500, Matthew Knepley wrote: > So, to see if I understand correctly. You are saying that you can get > away with more approximate solves if you do not do full reduction? I > know the theory for the case of Stokes, but can you prove this in a > general sense? The theory is relatively general (as much as preconditioned GMRES is) if you iterate in the full space with either block-diagonal or block-triangular preconditioners. Note that this formulation *never* involves explicit application of a Schur complement. Sometimes I get better convergence with one subcycle on the Schur complement with a very approximate inner solve (FGMRES outer). I'm not sure if Dave sees this, he seems to like doing a couple subcycles in multigrid smoothers. The folks doing Q1-Q1 with ML are not doing *anything* with a Schur complement (approxmate or otherwise). They just coarsen on the full indefinite system and use ASM (overlap 0 or 1) with ILU to precondition the coupled system. This makes a certain amount of sense because for those stabilized formulations, this is similar in spirit to a Vanka smoother (block SOR is a more precise analogue). > This sounds like the black magic I expect :) Yeah, this involves some sort of very local solve to produce the aggregates and interpolations that are not transposes of each other (if I understood Ray and Eric correctly). > I still maintain that aggregation is a really crappy way to generate > coarse systems, especially for mixed elements. We should be generating > coarse systems geometrically, and then using a nice (maybe Black-Box) > framework for calculating good projectors. This whole framework doesn't work for mixed discretizations. Jed From lizs at mail.uc.edu Mon Apr 19 12:28:15 2010 From: lizs at mail.uc.edu (Li, Zhisong (lizs)) Date: Mon, 19 Apr 2010 17:28:15 +0000 Subject: [petsc-users] Why am I not getting the parallel running? Message-ID: <88D7E3BB7E1960428303E76010037451A23C@BL2PRD0103MB060.prod.exchangelabs.com> Hi, I think this is a beginner's question, but still hope someone can help me out. I've tried to run some PETSc examples and my own simple PETSc codes on my office's cluster as well as on Ohio Supercomputer Center's machine. At first I found no speedup for parallel processing and later I noticed by checking the rank that actually each node is doing the same sequential processing. I'm sure the codes are parallel codes (such as ex19.c in SNES and ex7.c in TS tutorial codes). Is there anything I am missing in the installation or the run command such as"mpiexec -n 3 ex7"? How can I realize a parallel running? BTW, is it okay to ask questions here on a specific PETSc tutorial code? Thank you very much! Regards, Zhisong Li -------------- next part -------------- An HTML attachment was scrubbed... URL: From knepley at gmail.com Mon Apr 19 12:31:04 2010 From: knepley at gmail.com (Matthew Knepley) Date: Mon, 19 Apr 2010 12:31:04 -0500 Subject: [petsc-users] Why am I not getting the parallel running? 
In-Reply-To: <88D7E3BB7E1960428303E76010037451A23C@BL2PRD0103MB060.prod.exchangelabs.com> References: <88D7E3BB7E1960428303E76010037451A23C@BL2PRD0103MB060.prod.exchangelabs.com> Message-ID: On Mon, Apr 19, 2010 at 12:28 PM, Li, Zhisong (lizs) wrote: > Hi, > > I think this is a beginner's question, but still hope someone can help me > out. > > I've tried to run some PETSc examples and my own simple PETSc codes on my > office's cluster as well as on Ohio Supercomputer Center's machine. At first > I found no speedup for parallel processing and later I noticed by checking > the rank that actually each node is doing the same sequential processing. > I'm sure the codes are parallel codes (such as ex19.c in SNES and ex7.c in > TS tutorial codes). Is there anything I am missing in the installation or > the run command such as"mpiexec -n 3 ex7"? How can I realize a parallel > running? > The mpiexec in your path is not the one you configured with. What MPI is begin used? Matt > BTW, is it okay to ask questions here on a specific PETSc tutorial code? > > Thank you very much! > > Regards, > > > Zhisong Li > > -- What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. -- Norbert Wiener -------------- next part -------------- An HTML attachment was scrubbed... URL: From jed at 59A2.org Mon Apr 19 12:58:28 2010 From: jed at 59A2.org (Jed Brown) Date: Mon, 19 Apr 2010 19:58:28 +0200 Subject: [petsc-users] Why am I not getting the parallel running? In-Reply-To: References: <88D7E3BB7E1960428303E76010037451A23C@BL2PRD0103MB060.prod.exchangelabs.com> Message-ID: <87zl0zs4sb.fsf@59A2.org> On Mon, 19 Apr 2010 12:31:04 -0500, Matthew Knepley wrote: > On Mon, Apr 19, 2010 at 12:28 PM, Li, Zhisong (lizs) wrote: > > > Hi, > > > > I think this is a beginner's question, but still hope someone can help me > > out. > > > > I've tried to run some PETSc examples and my own simple PETSc codes on my > > office's cluster as well as on Ohio Supercomputer Center's machine. At first > > I found no speedup for parallel processing and later I noticed by checking > > the rank that actually each node is doing the same sequential processing. > > I'm sure the codes are parallel codes (such as ex19.c in SNES and ex7.c in > > TS tutorial codes). Is there anything I am missing in the installation or > > the run command such as"mpiexec -n 3 ex7"? How can I realize a parallel > > running? > > > > The mpiexec in your path is not the one you configured with. What > MPI is begin used? Does make test run successfully? If so, then $ cd src/snes/examples/tutorials && make -n runex5 will tell you which mpiexec PETSc is using. Jed From Chun.SUN at 3ds.com Mon Apr 19 13:44:09 2010 From: Chun.SUN at 3ds.com (SUN Chun) Date: Mon, 19 Apr 2010 14:44:09 -0400 Subject: [petsc-users] external binary reader from bin/matlab In-Reply-To: References: <88D7E3BB7E1960428303E76010037451A23C@BL2PRD0103MB060.prod.exchangelabs.com> Message-ID: <2545DC7A42DF804AAAB2ADA5043D57DA28E59B@CORP-CLT-EXB01.ds> Hi PETSc dev, It seems that bin/matlab/PetscBinaryRead.m does not read rectangular dense matrix binary output. I had success in sparse matrix though. Could you confirm this? Or I'm doing something wrong? Do you provide any other external binary reader than this set? Thanks a lot, Chun This email and any attachments are intended solely for the use of the individual or entity to whom it is addressed and may be confidential and/or privileged. 
If you are not one of the named recipients or have received this email in error, (i) you should not read, disclose, or copy it, (ii) please notify sender of your receipt by reply email and delete this email and all attachments, (iii) Dassault Systemes does not accept or assume any liability or responsibility for any use of or reliance on this email.For other languages, go to http://www.3ds.com/terms/email-disclaimer. -------------- next part -------------- An HTML attachment was scrubbed... URL: From bsmith at mcs.anl.gov Mon Apr 19 14:25:40 2010 From: bsmith at mcs.anl.gov (Barry Smith) Date: Mon, 19 Apr 2010 14:25:40 -0500 Subject: [petsc-users] external binary reader from bin/matlab In-Reply-To: <2545DC7A42DF804AAAB2ADA5043D57DA28E59B@CORP-CLT-EXB01.ds> References: <88D7E3BB7E1960428303E76010037451A23C@BL2PRD0103MB060.prod.exchangelabs.com> <2545DC7A42DF804AAAB2ADA5043D57DA28E59B@CORP-CLT-EXB01.ds> Message-ID: Chun, I assume you mean dense matrices written with PETSC_VIEWER_NATIVE option to the viewer? Please find attached a version of PetscBinaryRead.m that handles this. Please let me know if it doesn't work for you? Notes: the default viewing of dense matrices converts to sparse so that PetscBinaryRead() does actually work. The reading of native dense format doesn't matter if the matrix is rectangular or square it just was not supported (but is with my new code). Please let me know if I misunderstood your problem. Barry On Apr 19, 2010, at 1:44 PM, SUN Chun wrote: > Hi PETSc dev, > > It seems that bin/matlab/PetscBinaryRead.m does not read rectangular > dense matrix binary output. I had success in sparse matrix though. > > Could you confirm this? Or I'm doing something wrong? Do you provide > any other external binary reader than this set? > > Thanks a lot, > Chun > This email and any attachments are intended solely for the use of > the individual or entity to whom it is addressed and may be > confidential and/or privileged. > If you are not one of the named recipients or have received this > email in error, > (i) you should not read, disclose, or copy it, > (ii) please notify sender of your receipt by reply email and delete > this email and all attachments, > (iii) Dassault Systemes does not accept or assume any liability or > responsibility for any use of or reliance on this email. > For other languages, Click Here -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: PetscBinaryRead.m Type: application/octet-stream Size: 3648 bytes Desc: not available URL: From Chun.SUN at 3ds.com Mon Apr 19 14:52:18 2010 From: Chun.SUN at 3ds.com (SUN Chun) Date: Mon, 19 Apr 2010 15:52:18 -0400 Subject: [petsc-users] external binary reader from bin/matlab In-Reply-To: References: <88D7E3BB7E1960428303E76010037451A23C@BL2PRD0103MB060.prod.exchangelabs.com><2545DC7A42DF804AAAB2ADA5043D57DA28E59B@CORP-CLT-EXB01.ds> Message-ID: <2545DC7A42DF804AAAB2ADA5043D57DA28E59C@CORP-CLT-EXB01.ds> Barry, This works well. Thanks a lot. Exactly as you said, I was using PETSC_VIEWER_NATIVE for parallel rectangular dense matrix. As far as I can remember, I switched to that option because some cases didn't run in parallel otherwise or something like that.... Didn't raise a bug report because we are still in 2.3.3 for multiple tangled reasons. Thanks for your new version. Works both for sparse and dense. 
Chun From: petsc-users-bounces at mcs.anl.gov [mailto:petsc-users-bounces at mcs.anl.gov] On Behalf Of Barry Smith Sent: Monday, April 19, 2010 3:26 PM To: PETSc users list Subject: Re: [petsc-users] external binary reader from bin/matlab Chun, I assume you mean dense matrices written with PETSC_VIEWER_NATIVE option to the viewer? Please find attached a version of PetscBinaryRead.m that handles this. Please let me know if it doesn't work for you? Notes: the default viewing of dense matrices converts to sparse so that PetscBinaryRead() does actually work. The reading of native dense format doesn't matter if the matrix is rectangular or square it just was not supported (but is with my new code). Please let me know if I misunderstood your problem. Barry On Apr 19, 2010, at 1:44 PM, SUN Chun wrote: Hi PETSc dev, It seems that bin/matlab/PetscBinaryRead.m does not read rectangular dense matrix binary output. I had success in sparse matrix though. Could you confirm this? Or I'm doing something wrong? Do you provide any other external binary reader than this set? Thanks a lot, Chun This email and any attachments are intended solely for the use of the individual or entity to whom it is addressed and may be confidential and/or privileged. If you are not one of the named recipients or have received this email in error, (i) you should not read, disclose, or copy it, (ii) please notify sender of your receipt by reply email and delete this email and all attachments, (iii) Dassault Systemes does not accept or assume any liability or responsibility for any use of or reliance on this email. For other languages, Click Here This email and any attachments are intended solely for the use of the individual or entity to whom it is addressed and may be confidential and/or privileged. If you are not one of the named recipients or have received this email in error, (i) you should not read, disclose, or copy it, (ii) please notify sender of your receipt by reply email and delete this email and all attachments, (iii) Dassault Systemes does not accept or assume any liability or responsibility for any use of or reliance on this email.For other languages, go to http://www.3ds.com/terms/email-disclaimer. -------------- next part -------------- An HTML attachment was scrubbed... URL: From lizs at mail.uc.edu Mon Apr 19 15:05:06 2010 From: lizs at mail.uc.edu (Li, Zhisong (lizs)) Date: Mon, 19 Apr 2010 20:05:06 +0000 Subject: [petsc-users] petsc-users Digest, Vol 16, Issue 23 In-Reply-To: References: Message-ID: <88D7E3BB7E1960428303E76010037451A25E@BL2PRD0103MB060.prod.exchangelabs.com> Mat and Jed, Thank you for your reply. As far as I remembered, the make test was successful except the Fortran complier, but I only use C for my work. I tried " $ cd src/snes/examples/tutorials && make -n ex5" and it shows: /home/lizs/petsc/linux-gnu-c-debug/bin/mpicc ex5.c -o ex5 All the machines have mpi package installed before I got to touch PETSc. That might be the potential problem. But "Portability" is one of the main features of PETSc. Doesn't that mean the user can run the final executable anywhere as long as c and mpi libraries are available? Thanks, Zhisong Li ________________________________________ Subject: petsc-users Digest, Vol 16, Issue 23 The mpiexec in your path is not the one you configured with. What MPI is begin used? Matt > > I've tried to run some PETSc examples and my own simple PETSc codes on my > > office's cluster as well as on Ohio Supercomputer Center's machine. 
At first > > I found no speedup for parallel processing and later I noticed by checking > > the rank that actually each node is doing the same sequential processing. > > I'm sure the codes are parallel codes (such as ex19.c in SNES and ex7.c in > > TS tutorial codes). Is there anything I am missing in the installation or > > the run command such as"mpiexec -n 3 ex7"? How can I realize a parallel > > running? > > > > The mpiexec in your path is not the one you configured with. What > MPI is begin used? Does make test run successfully? If so, then $ cd src/snes/examples/tutorials && make -n runex5 will tell you which mpiexec PETSc is using. Jed From aron.ahmadia at kaust.edu.sa Mon Apr 19 15:13:02 2010 From: aron.ahmadia at kaust.edu.sa (Aron Ahmadia) Date: Mon, 19 Apr 2010 23:13:02 +0300 Subject: [petsc-users] petsc-users Digest, Vol 16, Issue 23 In-Reply-To: <88D7E3BB7E1960428303E76010037451A25E@BL2PRD0103MB060.prod.exchangelabs.com> References: <88D7E3BB7E1960428303E76010037451A25E@BL2PRD0103MB060.prod.exchangelabs.com> Message-ID: Dear Zhisong, Jed asked you to use the makefile to actually launch the test, as so: cd src/snes/examples/tutorials && make -n runex5 You'll notice in your reply that you only built ex5, and didn't use the make command to launch the job. This is important to ensure that you can execute your code in parallel using the MPI that PETSc detected or installed. "All the machines have mpi package installed before I got to touch PETSc. That might be the potential problem. But "Portability" is one of the main features of PETSc. Doesn't that mean the user can run the final executable anywhere as long as c and mpi libraries are available?" Traditionally, portability means that the code will compile and link anywhere, not that you can drag and drop executables as you wish. You are responsible for re-building PETSc applications or at least ensuring binary and MPI compatibility. Hope this helps... Warm Regards, Aron Ahmadia On Mon, Apr 19, 2010 at 11:05 PM, Li, Zhisong (lizs) wrote: > Mat and Jed, > > Thank you for your reply. > > As far as I remembered, the make test was successful except the Fortran > complier, but I only use C for my work. > > I tried " $ cd src/snes/examples/tutorials && make -n ex5" and it shows: > /home/lizs/petsc/linux-gnu-c-debug/bin/mpicc ex5.c -o ex5 > > All the machines have mpi package installed before I got to touch PETSc. > That might be the potential problem. But "Portability" is one of the main > features of PETSc. Doesn't that mean the user can run the final executable > anywhere as long as c and mpi libraries are available? > > Thanks, > > Zhisong Li > > > ________________________________________ > Subject: petsc-users Digest, Vol 16, Issue 23 > > > The mpiexec in your path is not the one you configured with. What > MPI is begin used? > > Matt > > > > > I've tried to run some PETSc examples and my own simple PETSc codes on > my > > > office's cluster as well as on Ohio Supercomputer Center's machine. At > first > > > I found no speedup for parallel processing and later I noticed by > checking > > > the rank that actually each node is doing the same sequential > processing. > > > I'm sure the codes are parallel codes (such as ex19.c in SNES and ex7.c > in > > > TS tutorial codes). Is there anything I am missing in the installation > or > > > the run command such as"mpiexec -n 3 ex7"? How can I realize a parallel > > > running? > > > > > > > The mpiexec in your path is not the one you configured with. What > > MPI is begin used? 
> > Does make test run successfully? If so, then > > $ cd src/snes/examples/tutorials && make -n runex5 > > will tell you which mpiexec PETSc is using. > > Jed > > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From jed at 59A2.org Mon Apr 19 15:12:29 2010 From: jed at 59A2.org (Jed Brown) Date: Mon, 19 Apr 2010 22:12:29 +0200 Subject: [petsc-users] petsc-users Digest, Vol 16, Issue 23 In-Reply-To: <88D7E3BB7E1960428303E76010037451A25E@BL2PRD0103MB060.prod.exchangelabs.com> References: <88D7E3BB7E1960428303E76010037451A25E@BL2PRD0103MB060.prod.exchangelabs.com> Message-ID: <87pr1vryky.fsf@59A2.org> On Mon, 19 Apr 2010 20:05:06 +0000, "Li, Zhisong (lizs)" wrote: > Mat and Jed, > > Thank you for your reply. > > As far as I remembered, the make test was successful except the Fortran complier, but I only use C for my work. > > I tried " $ cd src/snes/examples/tutorials && make -n ex5" and it shows: /home/lizs/petsc/linux-gnu-c-debug/bin/mpicc ex5.c -o ex5 Looks like configure has built an MPI for you. Did you use --download-mpich ? > All the machines have mpi package installed before I got to touch > PETSc. Maybe you wanted to use --with-mpi-dir=/path/to/system/mpi ? > But "Portability" is one of the main features of PETSc. Doesn't that > mean the user can run the final executable anywhere as long as c and > mpi libraries are available? PETSc can only be as "portable" as MPI. MPI does not define an ABI (which would allow you to compile with one MPI and run with a different one). So you have to compile the code with the same MPI you are running under. You can use static linking if you want to send binaries around (they will still need a compatible MPI, but won't need all the other libraries your program may use) but this will make your executables *much* larger. Jed From hxie at umn.edu Mon Apr 19 16:04:21 2010 From: hxie at umn.edu (hxie at umn.edu) Date: 19 Apr 2010 16:04:21 -0500 Subject: [petsc-users] zero diagonals In-Reply-To: References: Message-ID: Hi, I have a matrix with some zero diagonals. Now I create a new matrix for preconditioner and use Matshift to avoid the zero diagonals in the preconditioner matrix. It works fine. Is that possible to just use the command line options? If I just want to shift the zero diagonals, how can I do that? Thanks. Bests, Hui From bsmith at mcs.anl.gov Mon Apr 19 16:08:34 2010 From: bsmith at mcs.anl.gov (Barry Smith) Date: Mon, 19 Apr 2010 16:08:34 -0500 Subject: [petsc-users] zero diagonals In-Reply-To: References: Message-ID: <8CCA772D-DD35-4396-9951-DF23AC833863@mcs.anl.gov> -pc_factor_mat_shift_type nonzero or -sub_pc_factor_shift_type nonzero or -mg_coarse_pc_factor_shift_type nonzero depending on where it is used. You can always run with -help and grep for shift_type to find out which one is needed Barry On Apr 19, 2010, at 4:04 PM, hxie at umn.edu wrote: > Hi, > I have a matrix with some zero diagonals. Now I create a new matrix > for preconditioner and use Matshift to avoid the zero diagonals in > the preconditioner matrix. It works fine. Is that possible to just > use the command line options? If I just want to shift the zero > diagonals, how can I do that? Thanks. 
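To make the two routes concrete, here is a small self-contained C sketch; the 2 by 2 system, the 1e-8 shift and the petsc-3.0/3.1-style calls below are illustrative assumptions, not taken from this thread. It builds a matrix with an explicit zero on the diagonal, preconditions with a shifted copy as described in the question above, and notes where the option-only shift would take effect instead.

  #include "petscksp.h"

  int main(int argc,char **argv)
  {
    Mat A,P;
    Vec x,b;
    KSP ksp;

    PetscInitialize(&argc,&argv,PETSC_NULL,PETSC_NULL);

    MatCreateSeqAIJ(PETSC_COMM_SELF,2,2,2,PETSC_NULL,&A);
    MatSetValue(A,0,0,0.0,INSERT_VALUES);    /* explicit zero diagonal entry */
    MatSetValue(A,0,1,1.0,INSERT_VALUES);
    MatSetValue(A,1,0,1.0,INSERT_VALUES);
    MatSetValue(A,1,1,1.0,INSERT_VALUES);
    MatAssemblyBegin(A,MAT_FINAL_ASSEMBLY);
    MatAssemblyEnd(A,MAT_FINAL_ASSEMBLY);

    /* a separate preconditioning matrix whose diagonal is shifted up front
       (MatShift adds the given value to every diagonal entry) */
    MatDuplicate(A,MAT_COPY_VALUES,&P);
    MatShift(P,1.0e-8);

    VecCreateSeq(PETSC_COMM_SELF,2,&b);
    VecDuplicate(b,&x);
    VecSet(b,1.0);

    KSPCreate(PETSC_COMM_SELF,&ksp);
    KSPSetOperators(ksp,A,P,SAME_NONZERO_PATTERN);  /* factor the shifted copy */
    /* the option-only alternative needs no extra code: with KSPSetFromOptions
       the factorization-time shift is picked up from the command line,
       e.g. -pc_factor_shift_type nonzero */
    KSPSetFromOptions(ksp);
    KSPSolve(ksp,b,x);

    KSPDestroy(ksp); MatDestroy(A); MatDestroy(P);   /* newer releases take addresses */
    VecDestroy(x); VecDestroy(b);
    PetscFinalize();
    return 0;
  }

The difference between the two is worth keeping in mind: MatShift perturbs the whole diagonal of the preconditioning matrix before the solve, while the -pc_factor_*shift* options only intervene when the factorization itself encounters a zero or very small pivot.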
> > Bests, > Hui > > From lizs at mail.uc.edu Mon Apr 19 16:28:47 2010 From: lizs at mail.uc.edu (Li, Zhisong (lizs)) Date: Mon, 19 Apr 2010 21:28:47 +0000 Subject: [petsc-users] petsc-users Digest, Vol 16, Issue 23 In-Reply-To: <87pr1vryky.fsf@59A2.org> References: <88D7E3BB7E1960428303E76010037451A25E@BL2PRD0103MB060.prod.exchangelabs.com>, <87pr1vryky.fsf@59A2.org> Message-ID: <88D7E3BB7E1960428303E76010037451A272@BL2PRD0103MB060.prod.exchangelabs.com> Hi, Jed Yes, I did have "--download-mpich=1" when I installed PETSc. To use "--with-mpi-dir=/path/to/system/mpi ?", Do you mean I should reinstall PETSc with this option or add this as a run command option? It seems it's not recognized as a run command option. Anyway, this should be a fundamental issue running PETSc. Why doesn't it happen to many other users? On machines of Ohio supercomputer center, I could not easily install any software for myself. I can only upload my complied executable and run it there using their default MPI setup. Thanks, Zhisong Li ________________________________________ From: Jed Brown [five9a2 at gmail.com] on behalf of Jed Brown [jed at 59A2.org] Sent: Monday, April 19, 2010 4:12 PM To: Li, Zhisong (lizs); petsc-users at mcs.anl.gov Subject: Re: [petsc-users] petsc-users Digest, Vol 16, Issue 23 On Mon, 19 Apr 2010 20:05:06 +0000, "Li, Zhisong (lizs)" wrote: > Mat and Jed, > > Thank you for your reply. > > As far as I remembered, the make test was successful except the Fortran complier, but I only use C for my work. > > I tried " $ cd src/snes/examples/tutorials && make -n ex5" and it shows: /home/lizs/petsc/linux-gnu-c-debug/bin/mpicc ex5.c -o ex5 Looks like configure has built an MPI for you. Did you use --download-mpich ? > All the machines have mpi package installed before I got to touch > PETSc. Maybe you wanted to use --with-mpi-dir=/path/to/system/mpi ? Jed From bsmith at mcs.anl.gov Mon Apr 19 16:37:22 2010 From: bsmith at mcs.anl.gov (Barry Smith) Date: Mon, 19 Apr 2010 16:37:22 -0500 Subject: [petsc-users] petsc-users Digest, Vol 16, Issue 23 In-Reply-To: <88D7E3BB7E1960428303E76010037451A272@BL2PRD0103MB060.prod.exchangelabs.com> References: <88D7E3BB7E1960428303E76010037451A25E@BL2PRD0103MB060.prod.exchangelabs.com>, <87pr1vryky.fsf@59A2.org> <88D7E3BB7E1960428303E76010037451A272@BL2PRD0103MB060.prod.exchangelabs.com> Message-ID: <5F4A72A5-3CF9-4489-B2C3-079D8D8BA9F5@mcs.anl.gov> On Apr 19, 2010, at 4:28 PM, Li, Zhisong (lizs) wrote: > Hi, Jed > > Yes, I did have "--download-mpich=1" when I installed PETSc. To use > "--with-mpi-dir=/path/to/system/mpi ?", Do you mean I should > reinstall PETSc with this option or add this as a run command > option? It seems it's not recognized as a run command option. > > Anyway, this should be a fundamental issue running PETSc. Why > doesn't it happen to many other users? On machines of Ohio > supercomputer center, I could not easily install any software for > myself. I can only upload my complied executable and run it there > using their default MPI setup. Then you need to compile it using their "default MPI setup" and you cannot use --download-mpich Likely they provide a mpicc and mpicxx and mpif90 that are used to compile programs there. You should run ./configure with the options --with-cc=mpicc --with-cxx=mpicxx --with-fc=mpif90 making sure that the mpicc, mpicxx and mpif90 are in your path. 
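One quick way to see what is going on before rebuilding anything is a tiny rank-check program; this is an assumed example (the program name and message text are made up). Compile it with the same mpicc that PETSc was configured with, then launch it with the mpiexec you have been using. If "mpiexec -n 3 ./rankcheck" prints "rank 0 of 1" three times, every process is running a separate sequential job, which is exactly the mismatch being described in this thread.

  #include "petsc.h"

  int main(int argc,char **argv)
  {
    PetscMPIInt rank,size;

    PetscInitialize(&argc,&argv,PETSC_NULL,PETSC_NULL);
    MPI_Comm_rank(PETSC_COMM_WORLD,&rank);
    MPI_Comm_size(PETSC_COMM_WORLD,&size);
    /* with a matching mpiexec this prints rank 0 of 3, rank 1 of 3, ... */
    PetscSynchronizedPrintf(PETSC_COMM_WORLD,"rank %d of %d\n",rank,size);
    PetscSynchronizedFlush(PETSC_COMM_WORLD);   /* newer releases also take a FILE* */
    PetscFinalize();
    return 0;
  }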
Barry > > > Thanks, > > Zhisong Li > > ________________________________________ > From: Jed Brown [five9a2 at gmail.com] on behalf of Jed Brown [jed at 59A2.org > ] > Sent: Monday, April 19, 2010 4:12 PM > To: Li, Zhisong (lizs); petsc-users at mcs.anl.gov > Subject: Re: [petsc-users] petsc-users Digest, Vol 16, Issue 23 > > On Mon, 19 Apr 2010 20:05:06 +0000, "Li, Zhisong (lizs)" > wrote: >> Mat and Jed, >> >> Thank you for your reply. >> >> As far as I remembered, the make test was successful except the >> Fortran complier, but I only use C for my work. >> >> I tried " $ cd src/snes/examples/tutorials && make -n ex5" and it >> shows: /home/lizs/petsc/linux-gnu-c-debug/bin/mpicc ex5.c -o ex5 > > Looks like configure has built an MPI for you. Did you use > --download-mpich ? > >> All the machines have mpi package installed before I got to touch >> PETSc. > > Maybe you wanted to use --with-mpi-dir=/path/to/system/mpi ? > > > > Jed From henrik.nordborg at cadfem.ch Tue Apr 20 04:31:20 2010 From: henrik.nordborg at cadfem.ch (Henrik Nordborg) Date: Tue, 20 Apr 2010 11:31:20 +0200 Subject: [petsc-users] PETSc build for Windows 64 bit Message-ID: <2F57B039D39B6F499CA6C65330659BDD903885EF25@muc-dc-03.cadfem-gmbh.cadfem.de> Hello! I am trying to produce a good build of PETSc for 64 bit Windows (with MPICH2 64 bit, Intel C++ and Fortran Compilers and Intel MKL for BLAS). Does anyone have experience doing this? My first test with the Python scripts did not work and I was therefore considering making a Visual Studio project instead. Does anyone have a template? Best regards, Henrik -------------- next part -------------- An HTML attachment was scrubbed... URL: From tribur at vision.ee.ethz.ch Tue Apr 20 04:49:06 2010 From: tribur at vision.ee.ethz.ch (tribur at vision.ee.ethz.ch) Date: Tue, 20 Apr 2010 11:49:06 +0200 Subject: [petsc-users] ML and -pc_factor_shift_nonzero Message-ID: <20100420114906.8772342v8p5haahe@email.ee.ethz.ch> Hi Jed and Matt, thanks a lot for your help and the interesting discussion. Kathrin Quoting "Jed Brown" : > On Mon, 19 Apr 2010 07:23:01 -0500, Matthew Knepley > wrote: >> So, to see if I understand correctly. You are saying that you can get >> away with more approximate solves if you do not do full reduction? I >> know the theory for the case of Stokes, but can you prove this in a >> general sense? > > The theory is relatively general (as much as preconditioned GMRES is) if > you iterate in the full space with either block-diagonal or > block-triangular preconditioners. Note that this formulation *never* > involves explicit application of a Schur complement. Sometimes I get > better convergence with one subcycle on the Schur complement with a very > approximate inner solve (FGMRES outer). I'm not sure if Dave sees this, > he seems to like doing a couple subcycles in multigrid smoothers. > > The folks doing Q1-Q1 with ML are not doing *anything* with a Schur > complement (approxmate or otherwise). They just coarsen on the full > indefinite system and use ASM (overlap 0 or 1) with ILU to precondition > the coupled system. This makes a certain amount of sense because for > those stabilized formulations, this is similar in spirit to a Vanka > smoother (block SOR is a more precise analogue). > >> This sounds like the black magic I expect :) > > Yeah, this involves some sort of very local solve to produce the > aggregates and interpolations that are not transposes of each other (if > I understood Ray and Eric correctly). 
> >> I still maintain that aggregation is a really crappy way to generate >> coarse systems, especially for mixed elements. We should be generating >> coarse systems geometrically, and then using a nice (maybe Black-Box) >> framework for calculating good projectors. > > This whole framework doesn't work for mixed discretizations. > > Jed > From knepley at gmail.com Tue Apr 20 05:48:35 2010 From: knepley at gmail.com (Matthew Knepley) Date: Tue, 20 Apr 2010 05:48:35 -0500 Subject: [petsc-users] PETSc build for Windows 64 bit In-Reply-To: <2F57B039D39B6F499CA6C65330659BDD903885EF25@muc-dc-03.cadfem-gmbh.cadfem.de> References: <2F57B039D39B6F499CA6C65330659BDD903885EF25@muc-dc-03.cadfem-gmbh.cadfem.de> Message-ID: On Tue, Apr 20, 2010 at 4:31 AM, Henrik Nordborg wrote: > Hello! > > > > I am trying to produce a good build of PETSc for 64 bit Windows (with > MPICH2 64 bit, Intel C++ and Fortran Compilers and Intel MKL for BLAS). Does > anyone have experience doing this? My first test with the Python scripts did > not work and I was therefore > Python scripts for building? What was the error? Send it to petsc-maint. Matt > considering making a Visual Studio project instead. Does anyone have a > template? > > > > Best regards, > > Henrik > > > > > > > > > -- What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. -- Norbert Wiener -------------- next part -------------- An HTML attachment was scrubbed... URL: From xy2102 at columbia.edu Wed Apr 21 04:34:39 2010 From: xy2102 at columbia.edu ((Rebecca) Xuefei YUAN) Date: Wed, 21 Apr 2010 05:34:39 -0400 Subject: [petsc-users] If valgrind says no memory prolbem. Message-ID: <20100421053439.bgxxuy1ggk8kggk4@cubmail.cc.columbia.edu> Dear all, I checked the code with valgrind, and there is no memory problem, but when running parallelly, there is a message like [0]PETSC ERROR: ------------------------------------------------------------------------ [0]PETSC ERROR: Caught signal number 11 SEGV: Segmentation Violation, probably memory access out of range srun: error: task 0: Exited with exit code 59 [0]PETSC ERROR: Try option -start_in_debugger or -on_error_attach_debugger [0]PETSC ERROR: or see http://www.mcs.anl.gov/petsc/petsc-as/documentation/troubleshooting.html#Signal[0]PETSC ERROR: or try http://valgrind.org on linux or man libgmalloc on Apple to find memory corruption errors [0]PETSC ERROR: likely location of problem given in stack below [0]PETSC ERROR: --------------------- Stack Frames ------------------------------------ [0]PETSC ERROR: Note: The EXACT line numbers in the stack are not available, [0]PETSC ERROR: INSTEAD the line number of the start of the function [0]PETSC ERROR: is given. [0]PETSC ERROR: --------------------- Error Message ------------------------------------ [0]PETSC ERROR: Signal received! [0]PETSC ERROR: ------------------------------------------------------------------------ [0]PETSC ERROR: Petsc Release Version 3.0.0, Patch 7, Mon Jul 6 11:33:34 CDT 2009 [0]PETSC ERROR: See docs/changes/index.html for recent updates. [0]PETSC ERROR: See docs/faq.html for hints about trouble shooting. [0]PETSC ERROR: See docs/index.html for manual pages. 
[0]PETSC ERROR: ------------------------------------------------------------------------ [0]PETSC ERROR: /tmp/lustre/home/xy2102/April2110/die0/./twqt2ff.exe on a linux-c-g named sci-m0n0.scsystem by xy2102 Wed Apr 21 05:30:10 2010 [0]PETSC ERROR: Libraries linked from /home/xy2102/soft/petsc-3.0.0-p7/linux-c-gnu-debug/lib [0]PETSC ERROR: Configure run at Mon Jul 20 13:56:37 2009 [0]PETSC ERROR: Configure options --with-cc=mpicc --with-fc=mpif77 --with-mpiexec=srun --with-debugging=1 --with-fortran-kernels=generic --with-shared=0 --CFLAGS=-G0 --FFLAGS=-G0 [0]PETSC ERROR: ------------------------------------------------------------------------ [0]PETSC ERROR: User provided function() line 0 in unknown directory unknown file application called MPI_Abort(MPI_COMM_WORLD, 59) - process 0 In: PMI_Abort(59, application called MPI_Abort(MPI_COMM_WORLD, 59) - process 0) srun: error: task 2-3: Killed srun: error: task 1: Killed What is wrong? Cheers, Rebecca -- (Rebecca) Xuefei YUAN Department of Applied Physics and Applied Mathematics Columbia University Tel:917-399-8032 www.columbia.edu/~xy2102 From aron.ahmadia at kaust.edu.sa Wed Apr 21 05:20:10 2010 From: aron.ahmadia at kaust.edu.sa (Aron Ahmadia) Date: Wed, 21 Apr 2010 13:20:10 +0300 Subject: [petsc-users] If valgrind says no memory prolbem. In-Reply-To: <20100421053439.bgxxuy1ggk8kggk4@cubmail.cc.columbia.edu> References: <20100421053439.bgxxuy1ggk8kggk4@cubmail.cc.columbia.edu> Message-ID: A SEGV is definitely a memory access problem, as PETSc suggests, it is likely to be a memory access out of range. I don't recommend trying to debug this problem on amdahl, can you reproduce the problem just running with multiple processes on your workstation? Warm Regards, Aron On Wed, Apr 21, 2010 at 12:34 PM, (Rebecca) Xuefei YUAN wrote: > Dear all, > > I checked the code with valgrind, and there is no memory problem, but when > running parallelly, there is a message like > > [0]PETSC ERROR: > ------------------------------------------------------------------------ > [0]PETSC ERROR: Caught signal number 11 SEGV: Segmentation Violation, > probably memory access out of range > srun: error: task 0: Exited with exit code 59 > [0]PETSC ERROR: Try option -start_in_debugger or -on_error_attach_debugger > [0]PETSC ERROR: or see > http://www.mcs.anl.gov/petsc/petsc-as/documentation/troubleshooting.html#Signal[0]PETSCERROR: or try > http://valgrind.org on linux or man libgmalloc on Apple to find memory > corruption errors > [0]PETSC ERROR: likely location of problem given in stack below > [0]PETSC ERROR: --------------------- Stack Frames > ------------------------------------ > [0]PETSC ERROR: Note: The EXACT line numbers in the stack are not > available, > [0]PETSC ERROR: INSTEAD the line number of the start of the function > [0]PETSC ERROR: is given. > [0]PETSC ERROR: --------------------- Error Message > ------------------------------------ > [0]PETSC ERROR: Signal received! > [0]PETSC ERROR: > ------------------------------------------------------------------------ > [0]PETSC ERROR: Petsc Release Version 3.0.0, Patch 7, Mon Jul 6 11:33:34 > CDT 2009 > [0]PETSC ERROR: See docs/changes/index.html for recent updates. > [0]PETSC ERROR: See docs/faq.html for hints about trouble shooting. > [0]PETSC ERROR: See docs/index.html for manual pages. 
> [0]PETSC ERROR: > ------------------------------------------------------------------------ > [0]PETSC ERROR: /tmp/lustre/home/xy2102/April2110/die0/./twqt2ff.exe on a > linux-c-g named sci-m0n0.scsystem by xy2102 Wed Apr 21 05:30:10 2010 > [0]PETSC ERROR: Libraries linked from > /home/xy2102/soft/petsc-3.0.0-p7/linux-c-gnu-debug/lib > [0]PETSC ERROR: Configure run at Mon Jul 20 13:56:37 2009 > [0]PETSC ERROR: Configure options --with-cc=mpicc --with-fc=mpif77 > --with-mpiexec=srun --with-debugging=1 --with-fortran-kernels=generic > --with-shared=0 --CFLAGS=-G0 --FFLAGS=-G0 > [0]PETSC ERROR: > ------------------------------------------------------------------------ > [0]PETSC ERROR: User provided function() line 0 in unknown directory > unknown file > application called MPI_Abort(MPI_COMM_WORLD, 59) - process 0 > In: PMI_Abort(59, application called MPI_Abort(MPI_COMM_WORLD, 59) - > process 0) > srun: error: task 2-3: Killed > srun: error: task 1: Killed > > What is wrong? > > Cheers, > > Rebecca > > -- > (Rebecca) Xuefei YUAN > Department of Applied Physics and Applied Mathematics > Columbia University > Tel:917-399-8032 > www.columbia.edu/~xy2102 > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From xy2102 at columbia.edu Wed Apr 21 05:47:25 2010 From: xy2102 at columbia.edu ((Rebecca) Xuefei YUAN) Date: Wed, 21 Apr 2010 06:47:25 -0400 Subject: [petsc-users] If valgrind says no memory prolbem. In-Reply-To: References: <20100421053439.bgxxuy1ggk8kggk4@cubmail.cc.columbia.edu> Message-ID: <20100421064725.cg3wejgug4kgsc4g@cubmail.cc.columbia.edu> Dear Aron, Thanks for your reply. It is fine to run it in my machine with the same parameters and np. Here are the output files for the two: 1) running in my local machine: rebecca at YuanWork:~/linux/code/twoway/twoway_new/valgrind$ mpiexec -np 4 ./twqt2ff.exe -options_file option_all_twqt2ff ************************************************** number of processors = 4 viscosity = 1.0000000000000000e-03 resistivity = 1.0000000000000000e-03 skin depth = 1.0000000000000000e+00 hyper resistivity = 1.6384000000000001e-05 hyper viscosity = 6.5536000000000011e-02 problem size: 101 by 101 dx = 1.2673267326732673e-01 dy = 6.4000000000000001e-02 dt = 5.0000000000000003e-02 adaptive time step size (1:yes;0:no) = 0 ************************************************** 0 SNES Function norm 1.558736678272e-02 Linear solve converged due to CONVERGED_RTOL iterations 2 1 SNES Function norm 3.340317612139e-03 Linear solve converged due to CONVERGED_RTOL iterations 3 2 SNES Function norm 3.147655751158e-04 Linear solve converged due to CONVERGED_RTOL iterations 5 3 SNES Function norm 5.447758329758e-06 Linear solve converged due to CONVERGED_RTOL iterations 9 4 SNES Function norm 6.186506196319e-09 Linear solve converged due to CONVERGED_RTOL iterations 16 5 SNES Function norm 7.316295670455e-13 Nonlinear solve converged due to CONVERGED_FNORM_RELATIVE ************************************************** time step = 1 current time step size= 5.0000000000000003e-02 time = 5.0000000000000003e-02 number of nonlinear iterations = 5 number of linear iterations = 35 function norm = 7.3162956704552350e-13 ************************************************** total number of time steps = 1 total number of nonlinear iterations = 5 total number of linear iterations = 35 2) here is what I get from amdahl: ************************************************** number of processors = 4 viscosity = 1.0000000000000000e-02 resistivity = 5.0000000000000001e-03 
skin depth = 0.0000000000000000e+00 hyper resistivity = 8.1920000000000002e-05 hyper viscosity = 6.5535999999999997e-02 problem size: 101 by 101 dx = 1.2673267326732673e-01 dy = 6.4000000000000001e-02 dt = 5.0000000000000003e-02 adaptive time step size (1:yes;0:no) = 0 ************************************************** 0 SNES Function norm 1.121373952980e-02 STOPPED AND THE ERROR MESSAGE CAME OUT AS: [0]PETSC ERROR: ------------------------------------------------------------------------ [0]PETSC ERROR: Caught signal number 11 SEGV: Segmentation Violation, probably memory access out of range srun: error: task 0: Exited with exit code 59 [0]PETSC ERROR: Try option -start_in_debugger or -on_error_attach_debugger [0]PETSC ERROR: or see http://www.mcs.anl.gov/petsc/petsc-as/documentation/troubleshooting.html#Signal[0]PETSC ERROR: or try http://valgrind.org on linux or man libgmalloc on Apple to find memory corruption errors [0]PETSC ERROR: likely location of problem given in stack below [0]PETSC ERROR: --------------------- Stack Frames ------------------------------------ [0]PETSC ERROR: Note: The EXACT line numbers in the stack are not available, [0]PETSC ERROR: INSTEAD the line number of the start of the function [0]PETSC ERROR: is given. [0]PETSC ERROR: --------------------- Error Message ------------------------------------ [0]PETSC ERROR: Signal received! [0]PETSC ERROR: ------------------------------------------------------------------------ [0]PETSC ERROR: Petsc Release Version 3.0.0, Patch 7, Mon Jul 6 11:33:34 CDT 2009 [0]PETSC ERROR: See docs/changes/index.html for recent updates. [0]PETSC ERROR: See docs/faq.html for hints about trouble shooting. [0]PETSC ERROR: See docs/index.html for manual pages. [0]PETSC ERROR: ------------------------------------------------------------------------ [0]PETSC ERROR: /tmp/lustre/home/xy2102/April2110/die0/./twqt2ff.exe on a linux-c-g named sci-m0n0.scsystem by xy2102 Wed Apr 21 05:30:10 2010 [0]PETSC ERROR: Libraries linked from /home/xy2102/soft/petsc-3.0.0-p7/linux-c-gnu-debug/lib [0]PETSC ERROR: Configure run at Mon Jul 20 13:56:37 2009 [0]PETSC ERROR: Configure options --with-cc=mpicc --with-fc=mpif77 --with-mpiexec=srun --with-debugging=1 --with-fortran-kernels=generic --with-shared=0 --CFLAGS=-G0 --FFLAGS=-G0 [0]PETSC ERROR: ------------------------------------------------------------------------ [0]PETSC ERROR: User provided function() line 0 in unknown directory unknown file application called MPI_Abort(MPI_COMM_WORLD, 59) - process 0 In: PMI_Abort(59, application called MPI_Abort(MPI_COMM_WORLD, 59) - process 0) srun: error: task 2-3: Killed srun: error: task 1: Killed The makefile is 1) locally: ##### Rebecca local dev ################### PETSC_ARCH=linux-gnu-c-debug PETSC_DIR=/home/rebecca/soft/petsc-dev include ${PETSC_DIR}/conf/variables include ${PETSC_DIR}/conf/rules ####################################### twqt2ff: twqt2ff.o chkopts -${CLINKER} -g -O0 -o twqt2ff.exe twqt2ff.o ${PETSC_SNES_LIB} 2) amdahl: ##### Amdahl 3.0 ################## PETSC_ARCH=linux-c-gnu-debug PETSC_DIR=/home/xy2102/soft/petsc-3.0.0-p7 include ${PETSC_DIR}/conf/base ####################################### twqt2ff: twqt2ff.o chkopts -${CLINKER} -o twqt2ff.exe twqt2ff.o ${PETSC_SNES_LIB} Could it be the different PETSc version and make options? Thanks very much! Rebecca Quoting Aron Ahmadia : > A SEGV is definitely a memory access problem, as PETSc suggests, it is > likely to be a memory access out of range. 
> > I don't recommend trying to debug this problem on amdahl, can you reproduce > the problem just running with multiple processes on your workstation? > > Warm Regards, > Aron > > On Wed, Apr 21, 2010 at 12:34 PM, (Rebecca) Xuefei YUAN > wrote: > >> Dear all, >> >> I checked the code with valgrind, and there is no memory problem, but when >> running parallelly, there is a message like >> >> [0]PETSC ERROR: >> ------------------------------------------------------------------------ >> [0]PETSC ERROR: Caught signal number 11 SEGV: Segmentation Violation, >> probably memory access out of range >> srun: error: task 0: Exited with exit code 59 >> [0]PETSC ERROR: Try option -start_in_debugger or -on_error_attach_debugger >> [0]PETSC ERROR: or see >> http://www.mcs.anl.gov/petsc/petsc-as/documentation/troubleshooting.html#Signal[0]PETSCERROR: or >> try >> http://valgrind.org on linux or man libgmalloc on Apple to find memory >> corruption errors >> [0]PETSC ERROR: likely location of problem given in stack below >> [0]PETSC ERROR: --------------------- Stack Frames >> ------------------------------------ >> [0]PETSC ERROR: Note: The EXACT line numbers in the stack are not >> available, >> [0]PETSC ERROR: INSTEAD the line number of the start of the function >> [0]PETSC ERROR: is given. >> [0]PETSC ERROR: --------------------- Error Message >> ------------------------------------ >> [0]PETSC ERROR: Signal received! >> [0]PETSC ERROR: >> ------------------------------------------------------------------------ >> [0]PETSC ERROR: Petsc Release Version 3.0.0, Patch 7, Mon Jul 6 11:33:34 >> CDT 2009 >> [0]PETSC ERROR: See docs/changes/index.html for recent updates. >> [0]PETSC ERROR: See docs/faq.html for hints about trouble shooting. >> [0]PETSC ERROR: See docs/index.html for manual pages. >> [0]PETSC ERROR: >> ------------------------------------------------------------------------ >> [0]PETSC ERROR: /tmp/lustre/home/xy2102/April2110/die0/./twqt2ff.exe on a >> linux-c-g named sci-m0n0.scsystem by xy2102 Wed Apr 21 05:30:10 2010 >> [0]PETSC ERROR: Libraries linked from >> /home/xy2102/soft/petsc-3.0.0-p7/linux-c-gnu-debug/lib >> [0]PETSC ERROR: Configure run at Mon Jul 20 13:56:37 2009 >> [0]PETSC ERROR: Configure options --with-cc=mpicc --with-fc=mpif77 >> --with-mpiexec=srun --with-debugging=1 --with-fortran-kernels=generic >> --with-shared=0 --CFLAGS=-G0 --FFLAGS=-G0 >> [0]PETSC ERROR: >> ------------------------------------------------------------------------ >> [0]PETSC ERROR: User provided function() line 0 in unknown directory >> unknown file >> application called MPI_Abort(MPI_COMM_WORLD, 59) - process 0 >> In: PMI_Abort(59, application called MPI_Abort(MPI_COMM_WORLD, 59) - >> process 0) >> srun: error: task 2-3: Killed >> srun: error: task 1: Killed >> >> What is wrong? >> >> Cheers, >> >> Rebecca >> >> -- >> (Rebecca) Xuefei YUAN >> Department of Applied Physics and Applied Mathematics >> Columbia University >> Tel:917-399-8032 >> www.columbia.edu/~xy2102 >> >> > -- (Rebecca) Xuefei YUAN Department of Applied Physics and Applied Mathematics Columbia University Tel:917-399-8032 www.columbia.edu/~xy2102 From xy2102 at columbia.edu Wed Apr 21 06:14:31 2010 From: xy2102 at columbia.edu ((Rebecca) Xuefei YUAN) Date: Wed, 21 Apr 2010 07:14:31 -0400 Subject: [petsc-users] np>1 valgrind error comes out Message-ID: <20100421071431.9lfdddeog884c8k4@cubmail.cc.columbia.edu> Dear all, I was wrong at the previous message about the valgrind result. If np=1, valgrind says ok, but when np=2 it is not. 
It seems to me that the error comes from the initial set ups with wrong size : Address 0x46b6d78 is 368 bytes inside a [MPICH2 handle: objptr=0x46b6c08 handle=0xec00000c INDIRECT/REQUEST] of size 372 client-defined Could you please take a look at this error message: rebecca at YuanWork:~/linux/code/twoway/twoway_new/valgrind$ mpiexec -np 2 valgrind --leak-check=full ./twqt2ff.exe -options_file option_all_twqt2ff ==546== Memcheck, a memory error detector ==546== Copyright (C) 2002-2009, and GNU GPL'd, by Julian Seward et al. ==546== Using Valgrind-3.5.0 and LibVEX; rerun with -h for copyright info ==546== Command: ./twqt2ff.exe -options_file option_all_twqt2ff ==546== ==547== Memcheck, a memory error detector ==547== Copyright (C) 2002-2009, and GNU GPL'd, by Julian Seward et al. ==547== Using Valgrind-3.5.0 and LibVEX; rerun with -h for copyright info ==547== Command: ./twqt2ff.exe -options_file option_all_twqt2ff ==547== ************************************************** number of processors = 2 viscosity = 1.0000000000000000e-03 resistivity = 1.0000000000000000e-03 skin depth = 1.0000000000000000e+00 hyper resistivity = 1.8204444444444448e-04 hyper viscosity = 7.2817777777777792e-01 problem size: 31 by 31 dx = 4.1290322580645161e-01 dy = 2.1333333333333335e-01 dt = 5.0000000000000003e-02 adaptive time step size (1:yes;0:no) = 0 ************************************************** 0 SNES Function norm 4.632833297189e-02 ==546== Invalid read of size 4 ==546== at 0x8872994: MPIDI_CH3I_Progress_handle_sock_event (ch3_progress.c:738) ==546== by 0x8872EE7: MPIDI_CH3I_Progress (ch3_progress.c:212) ==546== by 0x8851E94: PMPI_Waitany (waitany.c:203) ==546== by 0x841A61C: MatGetSubMatrices_MPIAIJ_Local (mpiov.c:1195) ==546== by 0x8414E3A: MatGetSubMatrices_MPIAIJ (mpiov.c:781) ==546== by 0x832B67C: MatGetSubMatrices (matrix.c:5823) ==546== by 0x816580B: PCSetUp_ASM (asm.c:299) ==546== by 0x8131CD5: PCSetUp (precon.c:795) ==546== by 0x81D6B2D: KSPSetUp (itfunc.c:237) ==546== by 0x81D7C1F: KSPSolve (itfunc.c:353) ==546== by 0x80FEF93: SNES_KSPSolve (snes.c:2931) ==546== by 0x811A7C0: SNESSolve_LS (ls.c:191) ==546== Address 0x46b6d78 is 368 bytes inside a [MPICH2 handle: objptr=0x46b6c08 handle=0xec00000c INDIRECT/REQUEST] of size 372 client-defined ==546== at 0x8866BCA: MPIU_Handle_obj_alloc_unsafe (handlemem.c:219) ==546== by 0x88B3810: MPIDI_CH3U_Recvq_FDU_or_AEP (ch3u_recvq.c:342) ==546== by 0x887CF77: MPID_Irecv (mpid_irecv.c:46) ==546== by 0x8840290: MPIC_Sendrecv (helper_fns.c:153) ==546== by 0x8835810: MPIR_Barrier (barrier.c:75) ==546== by 0x8835B07: MPIR_Barrier_or_coll_fn (barrier.c:244) ==546== by 0x8835BC8: PMPI_Barrier (barrier.c:421) ==546== by 0x875B472: PetscCommDuplicate (tagm.c:190) ==546== by 0x875F272: PetscHeaderCreate_Private (inherit.c:44) ==546== by 0x81EA967: KSPCreate (itcreate.c:477) ==546== by 0x81530F4: PCMGCreate_Private (mg.c:95) ==546== by 0x8158F42: PCMGSetLevels (mg.c:623) ==546== ==547== Invalid read of size 4 ==547== at 0x8872994: MPIDI_CH3I_Progress_handle_sock_event (ch3_progress.c:738) ==547== by 0x8872EE7: MPIDI_CH3I_Progress (ch3_progress.c:212) ==547== by 0x8851E94: PMPI_Waitany (waitany.c:203) ==547== by 0x841B4B7: MatGetSubMatrices_MPIAIJ_Local (mpiov.c:1306) ==547== by 0x8414E3A: MatGetSubMatrices_MPIAIJ (mpiov.c:781) ==547== by 0x832B67C: MatGetSubMatrices (matrix.c:5823) ==547== by 0x816580B: PCSetUp_ASM (asm.c:299) ==547== by 0x8131CD5: PCSetUp (precon.c:795) ==547== by 0x81D6B2D: KSPSetUp (itfunc.c:237) ==547== by 0x81D7C1F: KSPSolve (itfunc.c:353) 
==547== by 0x80FEF93: SNES_KSPSolve (snes.c:2931) ==547== by 0x811A7C0: SNESSolve_LS (ls.c:191) ==547== Address 0x45ba490 is 368 bytes inside a [MPICH2 handle: objptr=0x45ba320 handle=0xec00000c INDIRECT/REQUEST] of size 372 client-defined ==547== at 0x8866BCA: MPIU_Handle_obj_alloc_unsafe (handlemem.c:219) ==547== by 0x88B3810: MPIDI_CH3U_Recvq_FDU_or_AEP (ch3u_recvq.c:342) ==547== by 0x887CF77: MPID_Irecv (mpid_irecv.c:46) ==547== by 0x8840290: MPIC_Sendrecv (helper_fns.c:153) ==547== by 0x8835810: MPIR_Barrier (barrier.c:75) ==547== by 0x8835B07: MPIR_Barrier_or_coll_fn (barrier.c:244) ==547== by 0x8835BC8: PMPI_Barrier (barrier.c:421) ==547== by 0x875B472: PetscCommDuplicate (tagm.c:190) ==547== by 0x875F272: PetscHeaderCreate_Private (inherit.c:44) ==547== by 0x81EA967: KSPCreate (itcreate.c:477) ==547== by 0x81530F4: PCMGCreate_Private (mg.c:95) ==547== by 0x8158F42: PCMGSetLevels (mg.c:623) ==547== Linear solve converged due to CONVERGED_RTOL iterations 2 1 SNES Function norm 7.444976906574e-04 Linear solve converged due to CONVERGED_RTOL iterations 2 2 SNES Function norm 9.698205312365e-06 Linear solve converged due to CONVERGED_RTOL iterations 3 3 SNES Function norm 6.808828603405e-09 Linear solve converged due to CONVERGED_RTOL iterations 6 4 SNES Function norm 1.757078396735e-13 Nonlinear solve converged due to CONVERGED_FNORM_RELATIVE ************************************************** time step = 1 current time step size= 5.0000000000000003e-02 time = 5.0000000000000003e-02 number of nonlinear iterations = 4 number of linear iterations = 13 function norm = 1.7570783967354192e-13 ************************************************** total number of time steps = 1 total number of nonlinear iterations = 4 total number of linear iterations = 13 WARNING! There are options you set that were not used! WARNING! could be spelling mistake, etc! Option left: name:-mxgrid value: 32 Option left: name:-mygrid value: 32 Option left: name:-time_to_generate_grid value: 0.0 ==546== ==547== ==547== HEAP SUMMARY: ==547== in use at exit: 156 bytes in 11 blocks ==547== total heap usage: 4,947 allocs, 3,551 frees, 29,251,365 bytes allocated ==547== ==546== HEAP SUMMARY: ==546== in use at exit: 156 bytes in 11 blocks ==546== total heap usage: 4,922 allocs, 3,851 frees, 31,388,554 bytes allocated ==546== ==546== 156 (36 direct, 120 indirect) bytes in 1 blocks are definitely lost in loss record 11 of 11 ==546== at 0x4022BF3: malloc (vg_replace_malloc.c:195) ==546== by 0x42693E2: ??? (in /lib/tls/i686/cmov/libc-2.7.so) ==546== by 0x4269C2D: __nss_database_lookup (in /lib/tls/i686/cmov/libc-2.7.so) ==546== by 0x4701FDB: ??? ==546== by 0x470313C: ??? 
==546== by 0x4215D15: getpwuid_r (in /lib/tls/i686/cmov/libc-2.7.so) ==546== by 0x421565D: getpwuid (in /lib/tls/i686/cmov/libc-2.7.so) ==546== by 0x8789545: PetscGetUserName (fuser.c:68) ==546== by 0x8744872: PetscErrorPrintfInitialize (errtrace.c:68) ==546== by 0x8779CEB: PetscInitialize (pinit.c:568) ==546== by 0x804B4E7: main (twqt2ff.c:88) ==546== ==546== LEAK SUMMARY: ==546== definitely lost: 36 bytes in 1 blocks ==546== indirectly lost: 120 bytes in 10 blocks ==546== possibly lost: 0 bytes in 0 blocks ==546== still reachable: 0 bytes in 0 blocks ==546== suppressed: 0 bytes in 0 blocks ==546== ==546== For counts of detected and suppressed errors, rerun with: -v ==546== ERROR SUMMARY: 2 errors from 2 contexts (suppressed: 40 from 10) ==547== 156 (36 direct, 120 indirect) bytes in 1 blocks are definitely lost in loss record 11 of 11 ==547== at 0x4022BF3: malloc (vg_replace_malloc.c:195) ==547== by 0x42693E2: ??? (in /lib/tls/i686/cmov/libc-2.7.so) ==547== by 0x4269C2D: __nss_database_lookup (in /lib/tls/i686/cmov/libc-2.7.so) ==547== by 0x4701FDB: ??? ==547== by 0x470313C: ??? ==547== by 0x4215D15: getpwuid_r (in /lib/tls/i686/cmov/libc-2.7.so) ==547== by 0x421565D: getpwuid (in /lib/tls/i686/cmov/libc-2.7.so) ==547== by 0x8789545: PetscGetUserName (fuser.c:68) ==547== by 0x8744872: PetscErrorPrintfInitialize (errtrace.c:68) ==547== by 0x8779CEB: PetscInitialize (pinit.c:568) ==547== by 0x804B4E7: main (twqt2ff.c:88) ==547== ==547== LEAK SUMMARY: ==547== definitely lost: 36 bytes in 1 blocks ==547== indirebecca at YuanWork:~/linux/code/twoway/twoway_new/valgrind$ Thanks very much! Rebecca -- (Rebecca) Xuefei YUAN Department of Applied Physics and Applied Mathematics Columbia University Tel:917-399-8032 www.columbia.edu/~xy2102 From bsmith at mcs.anl.gov Wed Apr 21 07:55:31 2010 From: bsmith at mcs.anl.gov (Barry Smith) Date: Wed, 21 Apr 2010 07:55:31 -0500 Subject: [petsc-users] If valgrind says no memory prolbem. In-Reply-To: <20100421064725.cg3wejgug4kgsc4g@cubmail.cc.columbia.edu> References: <20100421053439.bgxxuy1ggk8kggk4@cubmail.cc.columbia.edu> <20100421064725.cg3wejgug4kgsc4g@cubmail.cc.columbia.edu> Message-ID: You need to use a debugger to see where it is crashing. Run on your local machine, laptop/workstation with the option - start_in_debugger type cont into the debuggers when they come up and type where when it crashes. Barry On Apr 21, 2010, at 5:47 AM, (Rebecca) Xuefei YUAN wrote: > Dear Aron, > > Thanks for your reply. > > It is fine to run it in my machine with the same parameters and np. 
> > Here are the output files for the two: > > 1) running in my local machine: > rebecca at YuanWork:~/linux/code/twoway/twoway_new/valgrind$ mpiexec - > np 4 ./twqt2ff.exe -options_file option_all_twqt2ff > ************************************************** > number of processors = 4 > viscosity = 1.0000000000000000e-03 > resistivity = 1.0000000000000000e-03 > skin depth = 1.0000000000000000e+00 > hyper resistivity = 1.6384000000000001e-05 > hyper viscosity = 6.5536000000000011e-02 > problem size: 101 by 101 > dx = 1.2673267326732673e-01 > dy = 6.4000000000000001e-02 > dt = 5.0000000000000003e-02 > adaptive time step size (1:yes;0:no) = 0 > ************************************************** > 0 SNES Function norm 1.558736678272e-02 > Linear solve converged due to CONVERGED_RTOL iterations 2 > 1 SNES Function norm 3.340317612139e-03 > Linear solve converged due to CONVERGED_RTOL iterations 3 > 2 SNES Function norm 3.147655751158e-04 > Linear solve converged due to CONVERGED_RTOL iterations 5 > 3 SNES Function norm 5.447758329758e-06 > Linear solve converged due to CONVERGED_RTOL iterations 9 > 4 SNES Function norm 6.186506196319e-09 > Linear solve converged due to CONVERGED_RTOL iterations 16 > 5 SNES Function norm 7.316295670455e-13 > Nonlinear solve converged due to CONVERGED_FNORM_RELATIVE > ************************************************** > time step = 1 > current time step size= 5.0000000000000003e-02 > time = 5.0000000000000003e-02 > number of nonlinear iterations = 5 > number of linear iterations = 35 > function norm = 7.3162956704552350e-13 > ************************************************** > total number of time steps = 1 > total number of nonlinear iterations = 5 > total number of linear iterations = 35 > > 2) here is what I get from amdahl: > ************************************************** > number of processors = 4 > viscosity = 1.0000000000000000e-02 > resistivity = 5.0000000000000001e-03 > skin depth = 0.0000000000000000e+00 > hyper resistivity = 8.1920000000000002e-05 > hyper viscosity = 6.5535999999999997e-02 > problem size: 101 by 101 > dx = 1.2673267326732673e-01 > dy = 6.4000000000000001e-02 > dt = 5.0000000000000003e-02 > adaptive time step size (1:yes;0:no) = 0 > ************************************************** > 0 SNES Function norm 1.121373952980e-02 > > STOPPED AND THE ERROR MESSAGE CAME OUT AS: > [0]PETSC ERROR: > ------------------------------------------------------------------------ > [0]PETSC ERROR: Caught signal number 11 SEGV: Segmentation > Violation, probably memory access out of range > srun: error: task 0: Exited with exit code 59 > [0]PETSC ERROR: Try option -start_in_debugger or - > on_error_attach_debugger > [0]PETSC ERROR: or see http://www.mcs.anl.gov/petsc/petsc-as/documentation/troubleshooting.html#Signal > [0]PETSC ERROR: or try http://valgrind.org on linux or man > libgmalloc on Apple to find memory corruption errors > [0]PETSC ERROR: likely location of problem given in stack below > [0]PETSC ERROR: --------------------- Stack Frames > ------------------------------------ > [0]PETSC ERROR: Note: The EXACT line numbers in the stack are not > available, > [0]PETSC ERROR: INSTEAD the line number of the start of the > function > [0]PETSC ERROR: is given. > [0]PETSC ERROR: --------------------- Error Message > ------------------------------------ > [0]PETSC ERROR: Signal received! 
> [0]PETSC ERROR: > ------------------------------------------------------------------------ > [0]PETSC ERROR: Petsc Release Version 3.0.0, Patch 7, Mon Jul 6 > 11:33:34 CDT 2009 > [0]PETSC ERROR: See docs/changes/index.html for recent updates. > [0]PETSC ERROR: See docs/faq.html for hints about trouble shooting. > [0]PETSC ERROR: See docs/index.html for manual pages. > [0]PETSC ERROR: > ------------------------------------------------------------------------ > [0]PETSC ERROR: /tmp/lustre/home/xy2102/April2110/die0/./twqt2ff.exe > on a linux-c-g named sci-m0n0.scsystem by xy2102 Wed Apr 21 05:30:10 > 2010 > [0]PETSC ERROR: Libraries linked from /home/xy2102/soft/petsc-3.0.0- > p7/linux-c-gnu-debug/lib > [0]PETSC ERROR: Configure run at Mon Jul 20 13:56:37 2009 > [0]PETSC ERROR: Configure options --with-cc=mpicc --with-fc=mpif77 -- > with-mpiexec=srun --with-debugging=1 --with-fortran-kernels=generic > --with-shared=0 --CFLAGS=-G0 --FFLAGS=-G0 > [0]PETSC ERROR: > ------------------------------------------------------------------------ > [0]PETSC ERROR: User provided function() line 0 in unknown directory > unknown file > application called MPI_Abort(MPI_COMM_WORLD, 59) - process 0 > In: PMI_Abort(59, application called MPI_Abort(MPI_COMM_WORLD, 59) - > process 0) > srun: error: task 2-3: Killed > srun: error: task 1: Killed > > > The makefile is > > 1) locally: > > ##### Rebecca local dev ################### > PETSC_ARCH=linux-gnu-c-debug > PETSC_DIR=/home/rebecca/soft/petsc-dev > include ${PETSC_DIR}/conf/variables > include ${PETSC_DIR}/conf/rules > ####################################### > > twqt2ff: twqt2ff.o chkopts > -${CLINKER} -g -O0 -o twqt2ff.exe twqt2ff.o ${PETSC_SNES_LIB} > > 2) amdahl: > ##### Amdahl 3.0 ################## > PETSC_ARCH=linux-c-gnu-debug > PETSC_DIR=/home/xy2102/soft/petsc-3.0.0-p7 > include ${PETSC_DIR}/conf/base > ####################################### > > twqt2ff: twqt2ff.o chkopts > -${CLINKER} -o twqt2ff.exe twqt2ff.o ${PETSC_SNES_LIB} > > > Could it be the different PETSc version and make options? > > Thanks very much! > > Rebecca > > > > > Quoting Aron Ahmadia : > >> A SEGV is definitely a memory access problem, as PETSc suggests, it >> is >> likely to be a memory access out of range. >> >> I don't recommend trying to debug this problem on amdahl, can you >> reproduce >> the problem just running with multiple processes on your workstation? 
>> >> Warm Regards, >> Aron >> >> On Wed, Apr 21, 2010 at 12:34 PM, (Rebecca) Xuefei YUAN >> wrote: >> >>> Dear all, >>> >>> I checked the code with valgrind, and there is no memory problem, >>> but when >>> running parallelly, there is a message like >>> >>> [0]PETSC ERROR: >>> ------------------------------------------------------------------------ >>> [0]PETSC ERROR: Caught signal number 11 SEGV: Segmentation >>> Violation, >>> probably memory access out of range >>> srun: error: task 0: Exited with exit code 59 >>> [0]PETSC ERROR: Try option -start_in_debugger or - >>> on_error_attach_debugger >>> [0]PETSC ERROR: or see >>> http://www.mcs.anl.gov/petsc/petsc-as/documentation/troubleshooting.html#Signal >>> [0]PETSCERROR: or try >>> http://valgrind.org on linux or man libgmalloc on Apple to find >>> memory >>> corruption errors >>> [0]PETSC ERROR: likely location of problem given in stack below >>> [0]PETSC ERROR: --------------------- Stack Frames >>> ------------------------------------ >>> [0]PETSC ERROR: Note: The EXACT line numbers in the stack are not >>> available, >>> [0]PETSC ERROR: INSTEAD the line number of the start of the >>> function >>> [0]PETSC ERROR: is given. >>> [0]PETSC ERROR: --------------------- Error Message >>> ------------------------------------ >>> [0]PETSC ERROR: Signal received! >>> [0]PETSC ERROR: >>> ------------------------------------------------------------------------ >>> [0]PETSC ERROR: Petsc Release Version 3.0.0, Patch 7, Mon Jul 6 >>> 11:33:34 >>> CDT 2009 >>> [0]PETSC ERROR: See docs/changes/index.html for recent updates. >>> [0]PETSC ERROR: See docs/faq.html for hints about trouble shooting. >>> [0]PETSC ERROR: See docs/index.html for manual pages. >>> [0]PETSC ERROR: >>> ------------------------------------------------------------------------ >>> [0]PETSC ERROR: /tmp/lustre/home/xy2102/April2110/die0/./ >>> twqt2ff.exe on a >>> linux-c-g named sci-m0n0.scsystem by xy2102 Wed Apr 21 05:30:10 2010 >>> [0]PETSC ERROR: Libraries linked from >>> /home/xy2102/soft/petsc-3.0.0-p7/linux-c-gnu-debug/lib >>> [0]PETSC ERROR: Configure run at Mon Jul 20 13:56:37 2009 >>> [0]PETSC ERROR: Configure options --with-cc=mpicc --with-fc=mpif77 >>> --with-mpiexec=srun --with-debugging=1 --with-fortran- >>> kernels=generic >>> --with-shared=0 --CFLAGS=-G0 --FFLAGS=-G0 >>> [0]PETSC ERROR: >>> ------------------------------------------------------------------------ >>> [0]PETSC ERROR: User provided function() line 0 in unknown directory >>> unknown file >>> application called MPI_Abort(MPI_COMM_WORLD, 59) - process 0 >>> In: PMI_Abort(59, application called MPI_Abort(MPI_COMM_WORLD, 59) - >>> process 0) >>> srun: error: task 2-3: Killed >>> srun: error: task 1: Killed >>> >>> What is wrong? >>> >>> Cheers, >>> >>> Rebecca >>> >>> -- >>> (Rebecca) Xuefei YUAN >>> Department of Applied Physics and Applied Mathematics >>> Columbia University >>> Tel:917-399-8032 >>> www.columbia.edu/~xy2102 >>> >>> >> > > > > -- > (Rebecca) Xuefei YUAN > Department of Applied Physics and Applied Mathematics > Columbia University > Tel:917-399-8032 > www.columbia.edu/~xy2102 > From balay at mcs.anl.gov Wed Apr 21 10:23:40 2010 From: balay at mcs.anl.gov (Satish Balay) Date: Wed, 21 Apr 2010 10:23:40 -0500 (CDT) Subject: [petsc-users] If valgrind says no memory prolbem. 
In-Reply-To: <20100421053439.bgxxuy1ggk8kggk4@cubmail.cc.columbia.edu> References: <20100421053439.bgxxuy1ggk8kggk4@cubmail.cc.columbia.edu> Message-ID: On Wed, 21 Apr 2010, (Rebecca) Xuefei YUAN wrote: > Dear all, > > I checked the code with valgrind, and there is no memory problem, but when > running parallelly, there is a message like Does this mean you used valgrind on a sequential run - not the parallel run? http://www.mcs.anl.gov/petsc/petsc-as/documentation/faq.html#valgrind Satish From balay at mcs.anl.gov Wed Apr 21 10:27:48 2010 From: balay at mcs.anl.gov (Satish Balay) Date: Wed, 21 Apr 2010 10:27:48 -0500 (CDT) Subject: [petsc-users] np>1 valgrind error comes out In-Reply-To: <20100421071431.9lfdddeog884c8k4@cubmail.cc.columbia.edu> References: <20100421071431.9lfdddeog884c8k4@cubmail.cc.columbia.edu> Message-ID: Some of these MPICH messages could be spurious. Its best to use --download-mpich to avoid these messages. http://www.mcs.anl.gov/petsc/petsc-as/documentation/faq.html#valgrind Satish On Wed, 21 Apr 2010, (Rebecca) Xuefei YUAN wrote: > Dear all, > > I was wrong at the previous message about the valgrind result. If np=1, > valgrind says ok, but when np=2 it is not. > > It seems to me that the error comes from the initial set ups with wrong size : > > Address 0x46b6d78 is 368 bytes inside a [MPICH2 handle: objptr=0x46b6c08 > handle=0xec00000c INDIRECT/REQUEST] of size 372 client-defined > > Could you please take a look at this error message: > > rebecca at YuanWork:~/linux/code/twoway/twoway_new/valgrind$ mpiexec -np 2 > valgrind --leak-check=full ./twqt2ff.exe -options_file option_all_twqt2ff > ==546== Memcheck, a memory error detector > ==546== Copyright (C) 2002-2009, and GNU GPL'd, by Julian Seward et al. > ==546== Using Valgrind-3.5.0 and LibVEX; rerun with -h for copyright info > ==546== Command: ./twqt2ff.exe -options_file option_all_twqt2ff > ==546== > ==547== Memcheck, a memory error detector > ==547== Copyright (C) 2002-2009, and GNU GPL'd, by Julian Seward et al. 
> ==547== Using Valgrind-3.5.0 and LibVEX; rerun with -h for copyright info > ==547== Command: ./twqt2ff.exe -options_file option_all_twqt2ff > ==547== > ************************************************** > number of processors = 2 > viscosity = 1.0000000000000000e-03 > resistivity = 1.0000000000000000e-03 > skin depth = 1.0000000000000000e+00 > hyper resistivity = 1.8204444444444448e-04 > hyper viscosity = 7.2817777777777792e-01 > problem size: 31 by 31 > dx = 4.1290322580645161e-01 > dy = 2.1333333333333335e-01 > dt = 5.0000000000000003e-02 > adaptive time step size (1:yes;0:no) = 0 > ************************************************** > 0 SNES Function norm 4.632833297189e-02 > ==546== Invalid read of size 4 > ==546== at 0x8872994: MPIDI_CH3I_Progress_handle_sock_event > (ch3_progress.c:738) > ==546== by 0x8872EE7: MPIDI_CH3I_Progress (ch3_progress.c:212) > ==546== by 0x8851E94: PMPI_Waitany (waitany.c:203) > ==546== by 0x841A61C: MatGetSubMatrices_MPIAIJ_Local (mpiov.c:1195) > ==546== by 0x8414E3A: MatGetSubMatrices_MPIAIJ (mpiov.c:781) > ==546== by 0x832B67C: MatGetSubMatrices (matrix.c:5823) > ==546== by 0x816580B: PCSetUp_ASM (asm.c:299) > ==546== by 0x8131CD5: PCSetUp (precon.c:795) > ==546== by 0x81D6B2D: KSPSetUp (itfunc.c:237) > ==546== by 0x81D7C1F: KSPSolve (itfunc.c:353) > ==546== by 0x80FEF93: SNES_KSPSolve (snes.c:2931) > ==546== by 0x811A7C0: SNESSolve_LS (ls.c:191) > ==546== Address 0x46b6d78 is 368 bytes inside a [MPICH2 handle: > objptr=0x46b6c08 handle=0xec00000c INDIRECT/REQUEST] of size 372 > client-defined > ==546== at 0x8866BCA: MPIU_Handle_obj_alloc_unsafe (handlemem.c:219) > ==546== by 0x88B3810: MPIDI_CH3U_Recvq_FDU_or_AEP (ch3u_recvq.c:342) > ==546== by 0x887CF77: MPID_Irecv (mpid_irecv.c:46) > ==546== by 0x8840290: MPIC_Sendrecv (helper_fns.c:153) > ==546== by 0x8835810: MPIR_Barrier (barrier.c:75) > ==546== by 0x8835B07: MPIR_Barrier_or_coll_fn (barrier.c:244) > ==546== by 0x8835BC8: PMPI_Barrier (barrier.c:421) > ==546== by 0x875B472: PetscCommDuplicate (tagm.c:190) > ==546== by 0x875F272: PetscHeaderCreate_Private (inherit.c:44) > ==546== by 0x81EA967: KSPCreate (itcreate.c:477) > ==546== by 0x81530F4: PCMGCreate_Private (mg.c:95) > ==546== by 0x8158F42: PCMGSetLevels (mg.c:623) > ==546== > ==547== Invalid read of size 4 > ==547== at 0x8872994: MPIDI_CH3I_Progress_handle_sock_event > (ch3_progress.c:738) > ==547== by 0x8872EE7: MPIDI_CH3I_Progress (ch3_progress.c:212) > ==547== by 0x8851E94: PMPI_Waitany (waitany.c:203) > ==547== by 0x841B4B7: MatGetSubMatrices_MPIAIJ_Local (mpiov.c:1306) > ==547== by 0x8414E3A: MatGetSubMatrices_MPIAIJ (mpiov.c:781) > ==547== by 0x832B67C: MatGetSubMatrices (matrix.c:5823) > ==547== by 0x816580B: PCSetUp_ASM (asm.c:299) > ==547== by 0x8131CD5: PCSetUp (precon.c:795) > ==547== by 0x81D6B2D: KSPSetUp (itfunc.c:237) > ==547== by 0x81D7C1F: KSPSolve (itfunc.c:353) > ==547== by 0x80FEF93: SNES_KSPSolve (snes.c:2931) > ==547== by 0x811A7C0: SNESSolve_LS (ls.c:191) > ==547== Address 0x45ba490 is 368 bytes inside a [MPICH2 handle: > objptr=0x45ba320 handle=0xec00000c INDIRECT/REQUEST] of size 372 > client-defined > ==547== at 0x8866BCA: MPIU_Handle_obj_alloc_unsafe (handlemem.c:219) > ==547== by 0x88B3810: MPIDI_CH3U_Recvq_FDU_or_AEP (ch3u_recvq.c:342) > ==547== by 0x887CF77: MPID_Irecv (mpid_irecv.c:46) > ==547== by 0x8840290: MPIC_Sendrecv (helper_fns.c:153) > ==547== by 0x8835810: MPIR_Barrier (barrier.c:75) > ==547== by 0x8835B07: MPIR_Barrier_or_coll_fn (barrier.c:244) > ==547== by 0x8835BC8: PMPI_Barrier (barrier.c:421) > 
==547== by 0x875B472: PetscCommDuplicate (tagm.c:190) > ==547== by 0x875F272: PetscHeaderCreate_Private (inherit.c:44) > ==547== by 0x81EA967: KSPCreate (itcreate.c:477) > ==547== by 0x81530F4: PCMGCreate_Private (mg.c:95) > ==547== by 0x8158F42: PCMGSetLevels (mg.c:623) > ==547== > Linear solve converged due to CONVERGED_RTOL iterations 2 > 1 SNES Function norm 7.444976906574e-04 > Linear solve converged due to CONVERGED_RTOL iterations 2 > 2 SNES Function norm 9.698205312365e-06 > Linear solve converged due to CONVERGED_RTOL iterations 3 > 3 SNES Function norm 6.808828603405e-09 > Linear solve converged due to CONVERGED_RTOL iterations 6 > 4 SNES Function norm 1.757078396735e-13 > Nonlinear solve converged due to CONVERGED_FNORM_RELATIVE > ************************************************** > time step = 1 > current time step size= 5.0000000000000003e-02 > time = 5.0000000000000003e-02 > number of nonlinear iterations = 4 > number of linear iterations = 13 > function norm = 1.7570783967354192e-13 > ************************************************** > total number of time steps = 1 > total number of nonlinear iterations = 4 > total number of linear iterations = 13 > WARNING! There are options you set that were not used! > WARNING! could be spelling mistake, etc! > Option left: name:-mxgrid value: 32 > Option left: name:-mygrid value: 32 > Option left: name:-time_to_generate_grid value: 0.0 > ==546== > ==547== > ==547== HEAP SUMMARY: > ==547== in use at exit: 156 bytes in 11 blocks > ==547== total heap usage: 4,947 allocs, 3,551 frees, 29,251,365 bytes > allocated > ==547== > ==546== HEAP SUMMARY: > ==546== in use at exit: 156 bytes in 11 blocks > ==546== total heap usage: 4,922 allocs, 3,851 frees, 31,388,554 bytes > allocated > ==546== > ==546== 156 (36 direct, 120 indirect) bytes in 1 blocks are definitely lost in > loss record 11 of 11 > ==546== at 0x4022BF3: malloc (vg_replace_malloc.c:195) > ==546== by 0x42693E2: ??? (in /lib/tls/i686/cmov/libc-2.7.so) > ==546== by 0x4269C2D: __nss_database_lookup (in > /lib/tls/i686/cmov/libc-2.7.so) > ==546== by 0x4701FDB: ??? > ==546== by 0x470313C: ??? > ==546== by 0x4215D15: getpwuid_r (in /lib/tls/i686/cmov/libc-2.7.so) > ==546== by 0x421565D: getpwuid (in /lib/tls/i686/cmov/libc-2.7.so) > ==546== by 0x8789545: PetscGetUserName (fuser.c:68) > ==546== by 0x8744872: PetscErrorPrintfInitialize (errtrace.c:68) > ==546== by 0x8779CEB: PetscInitialize (pinit.c:568) > ==546== by 0x804B4E7: main (twqt2ff.c:88) > ==546== > ==546== LEAK SUMMARY: > ==546== definitely lost: 36 bytes in 1 blocks > ==546== indirectly lost: 120 bytes in 10 blocks > ==546== possibly lost: 0 bytes in 0 blocks > ==546== still reachable: 0 bytes in 0 blocks > ==546== suppressed: 0 bytes in 0 blocks > ==546== > ==546== For counts of detected and suppressed errors, rerun with: -v > ==546== ERROR SUMMARY: 2 errors from 2 contexts (suppressed: 40 from 10) > ==547== 156 (36 direct, 120 indirect) bytes in 1 blocks are definitely lost in > loss record 11 of 11 > ==547== at 0x4022BF3: malloc (vg_replace_malloc.c:195) > ==547== by 0x42693E2: ??? (in /lib/tls/i686/cmov/libc-2.7.so) > ==547== by 0x4269C2D: __nss_database_lookup (in > /lib/tls/i686/cmov/libc-2.7.so) > ==547== by 0x4701FDB: ??? > ==547== by 0x470313C: ??? 
> ==547== by 0x4215D15: getpwuid_r (in /lib/tls/i686/cmov/libc-2.7.so) > ==547== by 0x421565D: getpwuid (in /lib/tls/i686/cmov/libc-2.7.so) > ==547== by 0x8789545: PetscGetUserName (fuser.c:68) > ==547== by 0x8744872: PetscErrorPrintfInitialize (errtrace.c:68) > ==547== by 0x8779CEB: PetscInitialize (pinit.c:568) > ==547== by 0x804B4E7: main (twqt2ff.c:88) > ==547== > ==547== LEAK SUMMARY: > ==547== definitely lost: 36 bytes in 1 blocks > ==547== indirebecca at YuanWork:~/linux/code/twoway/twoway_new/valgrind$ > > Thanks very much! > > Rebecca > > > From Andrew.Parker2 at baesystems.com Thu Apr 22 07:33:42 2010 From: Andrew.Parker2 at baesystems.com (Parker, Andrew (UK Filton)) Date: Thu, 22 Apr 2010 13:33:42 +0100 Subject: [petsc-users] Newbie Question - Really slow - PetscMemCpy Message-ID: Hi, I'm new to these postings. On very large cases, regardless of where I stop the debugger it is always copying data. I've put it into a debugger because on smaller cases it runs fine, on larger it takes a while (very long time) to get going. The stack trace always gives something like: memcpy PetscMemCpy MatSetValues_SeqAIJ MatSetValues my own wrapper to add values to a location within the matrix. I'm using Seq_BAIJ. My bets are that I've got the sparsity wrong, or the preallocation wrong, but I'm not sure why. I know this could be anything, but has anybody got any thoughts, remember stopping the debugger at random, regardless of the frequency always gives the above.... I set up the matrix like this MatCreate(PETSC_COMM_SELF,&_storage); MatSetSizes(_storage,PETSC_DECIDE,PETSC_DECIDE,numVars*numLocs,numVars*n umLocs); MatSetFromOptions(_storage); MatSeqBAIJSetPreallocation(_storage,numVars,PETSC_NULL,sparsityStart); However, using this makes zero difference to the speed MatCreateSeqBAIJ(PETSC_COMM_SELF, numVars, numVars*numLocs, numVars*numLocs, 0, sparsityStart, &_storage); It is so slow that it has not even completed one cycle of the solver.... Cheers again, Andy ******************************************************************** This email and any attachments are confidential to the intended recipient and may also be privileged. If you are not the intended recipient please delete it from your system and notify the sender. You should not copy it or use it for any purpose nor disclose or distribute its contents to any other person. ******************************************************************** -------------- next part -------------- An HTML attachment was scrubbed... URL: From jed at 59A2.org Thu Apr 22 07:56:21 2010 From: jed at 59A2.org (Jed Brown) Date: Thu, 22 Apr 2010 14:56:21 +0200 Subject: [petsc-users] Newbie Question - Really slow - PetscMemCpy In-Reply-To: References: Message-ID: <871ve7sl1m.fsf@59A2.org> On Thu, 22 Apr 2010 13:33:42 +0100, "Parker, Andrew (UK Filton)" wrote: > I'm using Seq_BAIJ. My bets are that I've got the sparsity wrong, or > the preallocation wrong, but I'm not sure why. You are almost certainly correct. You can MatSetOption(A,MAT_NEW_NONZERO_ALLOCATION_ERROR,PETSC_TRUE); to get the debugger to break on the first entry that you have not preallocated. Then you can trace the index back through the stack to the application. Barry et al, why isn't this available as a runtime option? 
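For illustration, the intended usage is roughly the following sketch (not code from this thread; ncells, bs, and nneigh[] are hypothetical placeholders for the number of cells, the block size, and the per-cell neighbour counts):

#include "petscmat.h"

/* Hypothetical helper: build a preallocated SeqBAIJ matrix from per-cell neighbour counts. */
PetscErrorCode CreatePreallocatedBAIJ(PetscInt ncells, PetscInt bs, const PetscInt nneigh[], Mat *A)
{
  PetscInt       i, *nnz;
  PetscErrorCode ierr;

  PetscFunctionBegin;
  ierr = PetscMalloc(ncells*sizeof(PetscInt), &nnz); CHKERRQ(ierr);
  for (i = 0; i < ncells; i++) nnz[i] = 1 + nneigh[i];   /* diagonal block plus one block per neighbour */

  ierr = MatCreate(PETSC_COMM_SELF, A); CHKERRQ(ierr);
  ierr = MatSetSizes(*A, PETSC_DECIDE, PETSC_DECIDE, ncells*bs, ncells*bs); CHKERRQ(ierr);
  ierr = MatSetType(*A, MATSEQBAIJ); CHKERRQ(ierr);                                   /* pick the implementation before setting options */
  ierr = MatSetOption(*A, MAT_NEW_NONZERO_ALLOCATION_ERR, PETSC_TRUE); CHKERRQ(ierr); /* error out on any unpreallocated entry */
  ierr = MatSeqBAIJSetPreallocation(*A, bs, 0, nnz); CHKERRQ(ierr);                   /* nnz[] counts nonzero blocks per block row */
  ierr = PetscFree(nnz); CHKERRQ(ierr);
  PetscFunctionReturn(0);
}

With the option set after the type and sizes are known, the first MatSetValues/MatSetValuesBlocked call that falls outside the preallocated pattern stops with an error that can be traced back to the application, instead of silently triggering repeated reallocation and copying.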
> I set up the matrix like this > > MatCreate(PETSC_COMM_SELF,&_storage); > > MatSetSizes(_storage,PETSC_DECIDE,PETSC_DECIDE,numVars*numLocs,numVars*n > umLocs); > MatSetFromOptions(_storage); > MatSeqBAIJSetPreallocation(_storage,numVars,PETSC_NULL,sparsityStart); > > However, using this makes zero difference to the speed > > MatCreateSeqBAIJ(PETSC_COMM_SELF, > numVars, > numVars*numLocs, > numVars*numLocs, > 0, > sparsityStart, > &_storage); These two calls do the same thing underneath, the former is more flexible and takes options from the command line. > It is so slow that it has not even completed one cycle of the solver.... That will change when the preallocation is fixed. Jed From bsmith at mcs.anl.gov Thu Apr 22 07:59:17 2010 From: bsmith at mcs.anl.gov (Barry Smith) Date: Thu, 22 Apr 2010 07:59:17 -0500 Subject: [petsc-users] Newbie Question - Really slow - PetscMemCpy In-Reply-To: References: Message-ID: <967D96F7-34B7-4988-BF66-8C1101F6A910@mcs.anl.gov> On Apr 22, 2010, at 7:33 AM, Parker, Andrew (UK Filton) wrote: > Hi, > > I'm new to these postings. On very large cases, regardless of where > I stop the debugger it is always copying data. I've put it into a > debugger because on smaller cases it runs fine, on larger it takes a > while (very long time) to get going. The stack trace always gives > something like: > > memcpy > PetscMemCpy > MatSetValues_SeqAIJ It is using SeqAIJ matrices > MatSetValues > my own wrapper to add values to a location within the matrix. > > I'm using Seq_BAIJ. My bets are that I've got the sparsity wrong, > or the preallocation wrong, but I'm not sure why. I know this could > be anything, but has anybody got any thoughts, remember stopping the > debugger at random, regardless of the frequency always gives the > above.... > > I set up the matrix like this > > MatCreate(PETSC_COMM_SELF,&_storage); > > MatSetSizes > (_storage,PETSC_DECIDE,PETSC_DECIDE,numVars*numLocs,numVars*numLocs); > MatSetFromOptions(_storage); > > MatSeqBAIJSetPreallocation(_storage,numVars,PETSC_NULL,sparsityStart); Unless you use the argument -mat_type seqbaij this will use SeqAIJ while you preallocate SeqBAIJ. > > However, using this makes zero difference to the speed > > MatCreateSeqBAIJ(PETSC_COMM_SELF, > numVars, > numVars*numLocs, > numVars*numLocs, > 0, > sparsityStart, > &_storage); > > It is so slow that it has not even completed one cycle of the > solver.... Your sparseityStart is wrong. Run with -info and grep the results for malloc and you'll see it is mallocing many times to get enough space. Barry > > Cheers again, > Andy > > ******************************************************************** > This email and any attachments are confidential to the intended > recipient and may also be privileged. If you are not the intended > recipient please delete it from your system and notify the sender. > You should not copy it or use it for any purpose nor disclose or > distribute its contents to any other person. > ******************************************************************** > -------------- next part -------------- An HTML attachment was scrubbed... URL: From Andrew.Parker2 at baesystems.com Thu Apr 22 08:06:23 2010 From: Andrew.Parker2 at baesystems.com (Parker, Andrew (UK Filton)) Date: Thu, 22 Apr 2010 14:06:23 +0100 Subject: [petsc-users] Newbie Question - Really slow - PetscMemCpy In-Reply-To: <967D96F7-34B7-4988-BF66-8C1101F6A910@mcs.anl.gov> References: <967D96F7-34B7-4988-BF66-8C1101F6A910@mcs.anl.gov> Message-ID: Ok, Cheers guys. 
I did find that strange, that it was using AIJ, can I set that in the code rather from an options file, what is the code equivalent to -mat_type seqbaij, is it MatSetType or something like that? I'm going to try Jed's suggestion about error on on first non-preallocated access. Could my error actually be that the sparsity is correct, but as you note below, the matrix type allocated with MatSeqBAIJSetPreallocation is of type AIJ, and in which case, for that matrix it has not be preallocated with the correct sparsity, or at all, and hence the speed? Or something like that?? Could the solution possibly just be setting -mat_type seqbaij? In which case I need to find out how to do that. Cheers again, Andy ________________________________ From: petsc-users-bounces at mcs.anl.gov [mailto:petsc-users-bounces at mcs.anl.gov] On Behalf Of Barry Smith Sent: 22 April 2010 13:59 To: PETSc users list Subject: Re: [petsc-users] Newbie Question - Really slow - PetscMemCpy *** WARNING *** This message has originated outside your organisation, either from an external partner or the Global Internet. Keep this in mind if you answer this message. On Apr 22, 2010, at 7:33 AM, Parker, Andrew (UK Filton) wrote: Hi, I'm new to these postings. On very large cases, regardless of where I stop the debugger it is always copying data. I've put it into a debugger because on smaller cases it runs fine, on larger it takes a while (very long time) to get going. The stack trace always gives something like: memcpy PetscMemCpy MatSetValues_SeqAIJ It is using SeqAIJ matrices MatSetValues my own wrapper to add values to a location within the matrix. I'm using Seq_BAIJ. My bets are that I've got the sparsity wrong, or the preallocation wrong, but I'm not sure why. I know this could be anything, but has anybody got any thoughts, remember stopping the debugger at random, regardless of the frequency always gives the above.... I set up the matrix like this MatCreate(PETSC_COMM_SELF,&_storage); MatSetSizes(_storage,PETSC_DECIDE,PETSC_DECIDE,numVars*numLocs,numVars*n umLocs); MatSetFromOptions(_storage); MatSeqBAIJSetPreallocation(_storage,numVars,PETSC_NULL,sparsityStart); Unless you use the argument -mat_type seqbaij this will use SeqAIJ while you preallocate SeqBAIJ. However, using this makes zero difference to the speed MatCreateSeqBAIJ(PETSC_COMM_SELF, numVars, numVars*numLocs, numVars*numLocs, 0, sparsityStart, &_storage); It is so slow that it has not even completed one cycle of the solver.... Your sparseityStart is wrong. Run with -info and grep the results for malloc and you'll see it is mallocing many times to get enough space. Barry Cheers again, Andy ******************************************************************** This email and any attachments are confidential to the intended recipient and may also be privileged. If you are not the intended recipient please delete it from your system and notify the sender. You should not copy it or use it for any purpose nor disclose or distribute its contents to any other person. ******************************************************************** -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From jed at 59A2.org Thu Apr 22 08:11:46 2010 From: jed at 59A2.org (Jed Brown) Date: Thu, 22 Apr 2010 15:11:46 +0200 Subject: [petsc-users] Newbie Question - Really slow - PetscMemCpy In-Reply-To: References: <967D96F7-34B7-4988-BF66-8C1101F6A910@mcs.anl.gov> Message-ID: <87zl0vr5rh.fsf@59A2.org> On Thu, 22 Apr 2010 14:06:23 +0100, "Parker, Andrew (UK Filton)" wrote: > Ok, > > Cheers guys. I did find that strange, that it was using AIJ, can I set > that in the code rather from an options file, what is the code > equivalent to -mat_type seqbaij, is it MatSetType or something like > that? MatSetType(A,MATBAIJ), put it before MatSetFromOptions(). > Could my error actually be that the sparsity is correct, but as you > note below, the matrix type allocated with MatSeqBAIJSetPreallocation > is of type AIJ, and in which case, for that matrix it has not be > preallocated with the correct sparsity, or at all, and hence the > speed? Yes, if you call MatSeqBAIJSetPreallocation() with a MATSEQAIJ, then you have not preallocated anything. (This is behavior that I would like to change eventually since this call has enough information to preallocate AIJ correctly.) Jed From knepley at gmail.com Thu Apr 22 08:13:29 2010 From: knepley at gmail.com (Matthew Knepley) Date: Thu, 22 Apr 2010 08:13:29 -0500 Subject: [petsc-users] Newbie Question - Really slow - PetscMemCpy In-Reply-To: References: <967D96F7-34B7-4988-BF66-8C1101F6A910@mcs.anl.gov> Message-ID: On Thu, Apr 22, 2010 at 8:06 AM, Parker, Andrew (UK Filton) < Andrew.Parker2 at baesystems.com> wrote: > Ok, > > Cheers guys. I did find that strange, that it was using AIJ, can I set > that in the code rather from an options file, what is the code equivalent to > -mat_type seqbaij, is it MatSetType or something like that? I'm going to > try Jed's suggestion about error on on first non-preallocated access. Could > my error actually be that the sparsity is correct, but as you note below, > the matrix type allocated with MatSeqBAIJSetPreallocation is of type AIJ, > and in which case, for that matrix it has not be preallocated with the > correct sparsity, or at all, and hence the speed? Or something like that?? > Could the solution possibly just be setting -mat_type seqbaij? In which > case I need to find out how to do that. > 1) -mat_type seqbaij is equivalent to MatSetType(A, MATSEQBAIJ), but why hardcode it? 2) Yes, preallocation only takes place for the given type. You can call more than one preallocation function. 3) You have two errors: Wrong matrix type, and then for the second call the matrix type is correct but the sparsity is wrong. Matt > > Cheers again, > Andy > > ------------------------------ > *From:* petsc-users-bounces at mcs.anl.gov [mailto: > petsc-users-bounces at mcs.anl.gov] *On Behalf Of *Barry Smith > *Sent:* 22 April 2010 13:59 > *To:* PETSc users list > *Subject:* Re: [petsc-users] Newbie Question - Really slow - PetscMemCpy > > *** WARNING *** > > This message has originated outside your organisation, > either from an external partner or the Global Internet. > Keep this in mind if you answer this message. > > > > On Apr 22, 2010, at 7:33 AM, Parker, Andrew (UK Filton) wrote: > > Hi, > > I'm new to these postings. On very large cases, regardless of where I stop > the debugger it is always copying data. I've put it into a debugger because > on smaller cases it runs fine, on larger it takes a while (very long time) > to get going. 
The stack trace always gives something like: > > memcpy > PetscMemCpy > MatSetValues_SeqAIJ > > > It is using SeqAIJ matrices > > MatSetValues > my own wrapper to add values to a location within the matrix. > > I'm using Seq_BAIJ. My bets are that I've got the sparsity wrong, or the > preallocation wrong, but I'm not sure why. I know this could be anything, > but has anybody got any thoughts, remember stopping the debugger at random, > regardless of the frequency always gives the above.... > > I set up the matrix like this > > MatCreate(PETSC_COMM_SELF,&_storage); > > MatSetSizes(_storage,PETSC_DECIDE,PETSC_DECIDE,numVars*numLocs,numVars*numLocs); > MatSetFromOptions(_storage); > MatSeqBAIJSetPreallocation(_storage,numVars,PETSC_NULL,sparsityStart); > > > Unless you use the argument -mat_type seqbaij this will use SeqAIJ > while you preallocate SeqBAIJ. > > > However, using this makes zero difference to the speed > > MatCreateSeqBAIJ(PETSC_COMM_SELF, > numVars, > numVars*numLocs, > numVars*numLocs, > 0, > sparsityStart, > &_storage); > > It is so slow that it has not even completed one cycle of the solver.... > > > Your sparseityStart is wrong. Run with -info and grep the results for > malloc and you'll see it is mallocing many times to get enough space. > > Barry > > > Cheers again, > Andy > > ******************************************************************** > This email and any attachments are confidential to the intended > recipient and may also be privileged. If you are not the intended > recipient please delete it from your system and notify the sender. > You should not copy it or use it for any purpose nor disclose or > distribute its contents to any other person. > ******************************************************************** > > > -- What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. -- Norbert Wiener -------------- next part -------------- An HTML attachment was scrubbed... URL: From jed at 59A2.org Thu Apr 22 08:14:40 2010 From: jed at 59A2.org (Jed Brown) Date: Thu, 22 Apr 2010 15:14:40 +0200 Subject: [petsc-users] Newbie Question - Really slow - PetscMemCpy In-Reply-To: <87zl0vr5rh.fsf@59A2.org> References: <967D96F7-34B7-4988-BF66-8C1101F6A910@mcs.anl.gov> <87zl0vr5rh.fsf@59A2.org> Message-ID: <87y6gfr5mn.fsf@59A2.org> On Thu, 22 Apr 2010 15:11:46 +0200, Jed Brown wrote: > Yes, if you call MatSeqBAIJSetPreallocation() with a MATSEQAIJ, then you > have not preallocated anything. But note that when you called MatCreateSeqBAIJ(), your sparsity was actually used (and interpreted as BAIJ). Since this still had poor performance, there is still something wrong with the sparsity. Jed From bsmith at mcs.anl.gov Thu Apr 22 08:48:08 2010 From: bsmith at mcs.anl.gov (Barry Smith) Date: Thu, 22 Apr 2010 08:48:08 -0500 Subject: [petsc-users] Newbie Question - Really slow - PetscMemCpy In-Reply-To: <871ve7sl1m.fsf@59A2.org> References: <871ve7sl1m.fsf@59A2.org> Message-ID: <5134B1F6-F238-4738-85F5-E8FBECD33B45@mcs.anl.gov> On Apr 22, 2010, at 7:56 AM, Jed Brown wrote: > On Thu, 22 Apr 2010 13:33:42 +0100, "Parker, Andrew (UK Filton)" > wrote: >> I'm using Seq_BAIJ. My bets are that I've got the sparsity wrong, or >> the preallocation wrong, but I'm not sure why. > > You are almost certainly correct. You can > > MatSetOption(A,MAT_NEW_NONZERO_ALLOCATION_ERROR,PETSC_TRUE); > > to get the debugger to break on the first entry that you have not > preallocated. 
Then you can trace the index back through the stack to > the application. > > Barry et al, why isn't this available as a runtime option? Good suggestion. Now the question is do we put it in the "right" place which is MatSetFromOptions(), but most people never call MatSetFromOptions() so should we put in the "ideological wrong place" MatCreate() since that means it will always be available? Barry > >> I set up the matrix like this >> >> MatCreate(PETSC_COMM_SELF,&_storage); >> >> MatSetSizes >> (_storage,PETSC_DECIDE,PETSC_DECIDE,numVars*numLocs,numVars*n >> umLocs); >> MatSetFromOptions(_storage); >> >> MatSeqBAIJSetPreallocation >> (_storage,numVars,PETSC_NULL,sparsityStart); >> >> However, using this makes zero difference to the speed >> >> MatCreateSeqBAIJ(PETSC_COMM_SELF, >> numVars, >> numVars*numLocs, >> numVars*numLocs, >> 0, >> sparsityStart, >> &_storage); > > These two calls do the same thing underneath, the former is more > flexible and takes options from the command line. > >> It is so slow that it has not even completed one cycle of the >> solver.... > > That will change when the preallocation is fixed. > > Jed From Andrew.Parker2 at baesystems.com Thu Apr 22 09:15:12 2010 From: Andrew.Parker2 at baesystems.com (Parker, Andrew (UK Filton)) Date: Thu, 22 Apr 2010 15:15:12 +0100 Subject: [petsc-users] Newbie Question - Really slow - PetscMemCpy In-Reply-To: <5134B1F6-F238-4738-85F5-E8FBECD33B45@mcs.anl.gov> References: <871ve7sl1m.fsf@59A2.org> <5134B1F6-F238-4738-85F5-E8FBECD33B45@mcs.anl.gov> Message-ID: Well that's a worry, no error from: MatSetOption(_storage,MAT_NEW_NONZERO_ALLOCATION_ERR,PETSC_TRUE); Still slow as hell, same behaviour as before. I'm now using: MatCreate(PETSC_COMM_SELF,&_storage); MatSetType(_storage, MATSEQBAIJ); MatSetOption(_storage,MAT_NEW_NONZERO_ALLOCATION_ERR,PETSC_TRUE); MatSetSizes(_storage,PETSC_DECIDE,PETSC_DECIDE,numVars*numLocs,numVars*n umLocs); MatSetFromOptions(_storage); MatSeqBAIJSetPreallocation(_storage,numVars,PETSC_NULL,sparsityStart); So it has to be the sparsity, but then wouldn't that have thrown and error? And I can't have really messed that up, just number of neighbour cells plus myself for each row? Might try adding other error codes, would it only throw when attached to a debugger or will it throw in optimised or standard debug? Watching it in Top, it just allocates and deallocates like mad. Cheers, Andy -----Original Message----- From: petsc-users-bounces at mcs.anl.gov [mailto:petsc-users-bounces at mcs.anl.gov] On Behalf Of Barry Smith Sent: 22 April 2010 14:48 To: PETSc users list Subject: Re: [petsc-users] Newbie Question - Really slow - PetscMemCpy *** WARNING *** This message has originated outside your organisation, either from an external partner or the Global Internet. Keep this in mind if you answer this message. On Apr 22, 2010, at 7:56 AM, Jed Brown wrote: > On Thu, 22 Apr 2010 13:33:42 +0100, "Parker, Andrew (UK Filton)" > > wrote: >> I'm using Seq_BAIJ. My bets are that I've got the sparsity wrong, or >> the preallocation wrong, but I'm not sure why. > > You are almost certainly correct. You can > > MatSetOption(A,MAT_NEW_NONZERO_ALLOCATION_ERROR,PETSC_TRUE); > > to get the debugger to break on the first entry that you have not > preallocated. Then you can trace the index back through the stack to > the application. > > Barry et al, why isn't this available as a runtime option? Good suggestion. 
Now the question is do we put it in the "right" place which is MatSetFromOptions(), but most people never call MatSetFromOptions() so should we put in the "ideological wrong place" MatCreate() since that means it will always be available? Barry > >> I set up the matrix like this >> >> MatCreate(PETSC_COMM_SELF,&_storage); >> >> MatSetSizes >> (_storage,PETSC_DECIDE,PETSC_DECIDE,numVars*numLocs,numVars*n >> umLocs); >> MatSetFromOptions(_storage); >> >> MatSeqBAIJSetPreallocation >> (_storage,numVars,PETSC_NULL,sparsityStart); >> >> However, using this makes zero difference to the speed >> >> MatCreateSeqBAIJ(PETSC_COMM_SELF, >> numVars, >> numVars*numLocs, >> numVars*numLocs, >> 0, >> sparsityStart, >> &_storage); > > These two calls do the same thing underneath, the former is more > flexible and takes options from the command line. > >> It is so slow that it has not even completed one cycle of the >> solver.... > > That will change when the preallocation is fixed. > > Jed ******************************************************************** This email and any attachments are confidential to the intended recipient and may also be privileged. If you are not the intended recipient please delete it from your system and notify the sender. You should not copy it or use it for any purpose nor disclose or distribute its contents to any other person. ******************************************************************** From jed at 59A2.org Thu Apr 22 09:22:41 2010 From: jed at 59A2.org (Jed Brown) Date: Thu, 22 Apr 2010 16:22:41 +0200 Subject: [petsc-users] Newbie Question - Really slow - PetscMemCpy In-Reply-To: References: <871ve7sl1m.fsf@59A2.org> <5134B1F6-F238-4738-85F5-E8FBECD33B45@mcs.anl.gov> Message-ID: <87tyr3r2ha.fsf@59A2.org> On Thu, 22 Apr 2010 15:15:12 +0100, "Parker, Andrew (UK Filton)" wrote: > Well that's a worry, no error from: > MatSetOption(_storage,MAT_NEW_NONZERO_ALLOCATION_ERR,PETSC_TRUE); > > Still slow as hell, same behaviour as before. I'm now using: > > MatCreate(PETSC_COMM_SELF,&_storage); > MatSetType(_storage, MATSEQBAIJ); > MatSetOption(_storage,MAT_NEW_NONZERO_ALLOCATION_ERR,PETSC_TRUE); > > MatSetSizes(_storage,PETSC_DECIDE,PETSC_DECIDE,numVars*numLocs,numVars*n > umLocs); > MatSetFromOptions(_storage); > MatSeqBAIJSetPreallocation(_storage,numVars,PETSC_NULL,sparsityStart); This should work if you call both MatSetSizes and MatSetType before MatSetOption. There are technical reasons for this, mostly that it's difficult to cache all the options and get them handled in a consistent and debuggable manner once the implementation is chosen. Unless someone disagrees, I'll make it an error to call MatSetOptions() before the implementation is available (so you would get a useful error with the usage above, instead of the option just being ignored). Jed From knepley at gmail.com Thu Apr 22 10:11:52 2010 From: knepley at gmail.com (Matthew Knepley) Date: Thu, 22 Apr 2010 10:11:52 -0500 Subject: [petsc-users] Newbie Question - Really slow - PetscMemCpy In-Reply-To: <87tyr3r2ha.fsf@59A2.org> References: <871ve7sl1m.fsf@59A2.org> <5134B1F6-F238-4738-85F5-E8FBECD33B45@mcs.anl.gov> <87tyr3r2ha.fsf@59A2.org> Message-ID: On Thu, Apr 22, 2010 at 9:22 AM, Jed Brown wrote: > On Thu, 22 Apr 2010 15:15:12 +0100, "Parker, Andrew (UK Filton)" < > Andrew.Parker2 at baesystems.com> wrote: > > Well that's a worry, no error from: > > MatSetOption(_storage,MAT_NEW_NONZERO_ALLOCATION_ERR,PETSC_TRUE); > > > > Still slow as hell, same behaviour as before. 
I'm now using: > > > > MatCreate(PETSC_COMM_SELF,&_storage); > > MatSetType(_storage, MATSEQBAIJ); > > MatSetOption(_storage,MAT_NEW_NONZERO_ALLOCATION_ERR,PETSC_TRUE); > > > > MatSetSizes(_storage,PETSC_DECIDE,PETSC_DECIDE,numVars*numLocs,numVars*n > > umLocs); > > MatSetFromOptions(_storage); > > MatSeqBAIJSetPreallocation(_storage,numVars,PETSC_NULL,sparsityStart); > Then there is something wrong in your code at large that we cannot see from this small snippet. Can you change a PETSc example into something that exhibits this behavior, like KSP ex2? Also, you can go in with the debugger and see what is happening. I am guessing you switch the matrix you are using (so options do not take effect), or recreate it, etc. Matt > This should work if you call both MatSetSizes and MatSetType before > MatSetOption. There are technical reasons for this, mostly that it's > difficult to cache all the options and get them handled in a consistent > and debuggable manner once the implementation is chosen. > > Unless someone disagrees, I'll make it an error to call MatSetOptions() > before the implementation is available (so you would get a useful error > with the usage above, instead of the option just being ignored). > > Jed > -- What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. -- Norbert Wiener -------------- next part -------------- An HTML attachment was scrubbed... URL: From jed at 59A2.org Thu Apr 22 10:16:11 2010 From: jed at 59A2.org (Jed Brown) Date: Thu, 22 Apr 2010 17:16:11 +0200 Subject: [petsc-users] Newbie Question - Really slow - PetscMemCpy In-Reply-To: References: <871ve7sl1m.fsf@59A2.org> <5134B1F6-F238-4738-85F5-E8FBECD33B45@mcs.anl.gov> <87tyr3r2ha.fsf@59A2.org> Message-ID: <87ochbr004.fsf@59A2.org> On Thu, 22 Apr 2010 10:11:52 -0500, Matthew Knepley wrote: > Also, you can go in with the debugger and see what is happening. I am > guessing you switch the matrix you are using (so options do not take > effect), or recreate it, etc. Matt, the type is not actually set until *both* MatSetSizes() and MatSetType() are called (note the caching of mat->ops->create). This is a case of being flexible about initialization order coming back to bite. Rather than trying to cache options, I'd like to make it an error to call MatSetOption() before the type is actually set. Jed From Andrew.Parker2 at baesystems.com Thu Apr 22 10:32:06 2010 From: Andrew.Parker2 at baesystems.com (Parker, Andrew (UK Filton)) Date: Thu, 22 Apr 2010 16:32:06 +0100 Subject: [petsc-users] Newbie Question - Really slow - PetscMemCpy In-Reply-To: <87tyr3r2ha.fsf@59A2.org> References: <871ve7sl1m.fsf@59A2.org> <5134B1F6-F238-4738-85F5-E8FBECD33B45@mcs.anl.gov> <87tyr3r2ha.fsf@59A2.org> Message-ID: Well that worked. Now it errors like nobodies business. To be clear though, the sparsity pattern is just an int, indicating the number of non-zero blocks in a row, including the diagonal? In my case the non-zero entries are due to entries from neighbouring cells, and myself, so can't see how I messed that up. I know the answer is that I have, but just can't see why? My concern is that I've not understood what the sparsity is doing. In my case I have a fully unstructured graph, so the neighbour cell numbers are in no way anywhere near in the number range to myself, the cell of interest. Therefore, telling it that I have 10 neighbours, does not let it know the actual indices of those ten cells, which are not contiguous. 
Unless there's some funky mapping going on down in the guts. Anyway, if the only number I need to give it is the number of non-zero blocks in a row, then I've done that and I think I need to rule that out and look for something else. Any ideas? Andy -----Original Message----- From: Jed Brown [mailto:five9a2 at gmail.com] On Behalf Of Jed Brown Sent: 22 April 2010 15:23 To: Parker, Andrew (UK Filton); PETSc users list Subject: Re: [petsc-users] Newbie Question - Really slow - PetscMemCpy *** WARNING *** This message has originated outside your organisation, either from an external partner or the Global Internet. Keep this in mind if you answer this message. On Thu, 22 Apr 2010 15:15:12 +0100, "Parker, Andrew (UK Filton)" wrote: > Well that's a worry, no error from: > MatSetOption(_storage,MAT_NEW_NONZERO_ALLOCATION_ERR,PETSC_TRUE); > > Still slow as hell, same behaviour as before. I'm now using: > > MatCreate(PETSC_COMM_SELF,&_storage); > MatSetType(_storage, MATSEQBAIJ); > MatSetOption(_storage,MAT_NEW_NONZERO_ALLOCATION_ERR,PETSC_TRUE); > > MatSetSizes(_storage,PETSC_DECIDE,PETSC_DECIDE,numVars*numLocs,numVars > *n > umLocs); > MatSetFromOptions(_storage); > > MatSeqBAIJSetPreallocation(_storage,numVars,PETSC_NULL,sparsityStart); This should work if you call both MatSetSizes and MatSetType before MatSetOption. There are technical reasons for this, mostly that it's difficult to cache all the options and get them handled in a consistent and debuggable manner once the implementation is chosen. Unless someone disagrees, I'll make it an error to call MatSetOptions() before the implementation is available (so you would get a useful error with the usage above, instead of the option just being ignored). Jed ******************************************************************** This email and any attachments are confidential to the intended recipient and may also be privileged. If you are not the intended recipient please delete it from your system and notify the sender. You should not copy it or use it for any purpose nor disclose or distribute its contents to any other person. ******************************************************************** From jed at 59A2.org Thu Apr 22 11:51:25 2010 From: jed at 59A2.org (Jed Brown) Date: Thu, 22 Apr 2010 18:51:25 +0200 Subject: [petsc-users] Newbie Question - Really slow - PetscMemCpy In-Reply-To: References: <871ve7sl1m.fsf@59A2.org> <5134B1F6-F238-4738-85F5-E8FBECD33B45@mcs.anl.gov> <87tyr3r2ha.fsf@59A2.org> Message-ID: <87ljcfqvle.fsf@59A2.org> On Thu, 22 Apr 2010 16:32:06 +0100, "Parker, Andrew (UK Filton)" wrote: > Well that worked. Now it errors like nobodies business. To be clear > though, the sparsity pattern is just an int, indicating the number of > non-zero blocks in a row, including the diagonal? In my case the > non-zero entries are due to entries from neighbouring cells, and myself, > so can't see how I messed that up. I know the answer is that I have, > but just can't see why? The sparsity is an array with one int per row, indicating the number of nonzero blocks in that row (including the diagonal). You can provide column indices by calling MatSetValues(Blocked) on each row, but that is not necessary since it is cheap for the matrix implementation to shuffle entries around within a row as you set them, the expensive part is if a row overflows the space that has been allocated in which case it needs to reallocate the entire thing (usually hundreds of MB) and copy over. > My concern is that I've not understood what the sparsity is doing. 
In > my case I have a fully unstructured graph, so the neighbour cell numbers > are in no way anywhere near in the number range to myself, the cell of > interest. Therefore, telling it that I have 10 neighbours, does not let > it know the actual indices of those ten cells, which are not > contiguous. It's sparse storage, not banded storage, so this doesn't matter. In parallel, you have to specify how many entries are in the "diagonal block" (the column index corresponds to a node that is owned) and how many are in the "off-diagonal block" (corresponding to a node that is owned by a remote process). > Unless there's some funky mapping going on down in the guts. Anyway, if > the only number I need to give it is the number of non-zero blocks in a > row, then I've done that and I think I need to rule that out and look > for something else. Can you make the problem size trivially small so that you can look at the matrix to see what is different between the matrix you preallocated for and the thing that actually gets assembled? Note that if you preallocate for M nonzeros in a row, then assemble the matrix after inserting only N < M entries in that row, then the matrix will only keep space for N entries. So if your first assembly is not going to put something in all the slots that later assemblies will use, you should assemble once with all the nonzeros you might need (setting all the column indices, use MatSeqBAIJSetColumnIndices() if you have them in an array). Jed From bsmith at mcs.anl.gov Thu Apr 22 12:27:58 2010 From: bsmith at mcs.anl.gov (Barry Smith) Date: Thu, 22 Apr 2010 12:27:58 -0500 Subject: [petsc-users] Newbie Question - Really slow - PetscMemCpy In-Reply-To: <87tyr3r2ha.fsf@59A2.org> References: <871ve7sl1m.fsf@59A2.org> <5134B1F6-F238-4738-85F5-E8FBECD33B45@mcs.anl.gov> <87tyr3r2ha.fsf@59A2.org> Message-ID: On Apr 22, 2010, at 9:22 AM, Jed Brown wrote: > On Thu, 22 Apr 2010 15:15:12 +0100, "Parker, Andrew (UK Filton)" > wrote: >> Well that's a worry, no error from: >> MatSetOption(_storage,MAT_NEW_NONZERO_ALLOCATION_ERR,PETSC_TRUE); >> >> Still slow as hell, same behaviour as before. I'm now using: >> >> MatCreate(PETSC_COMM_SELF,&_storage); >> MatSetType(_storage, MATSEQBAIJ); >> MatSetOption(_storage,MAT_NEW_NONZERO_ALLOCATION_ERR,PETSC_TRUE); >> >> MatSetSizes >> (_storage,PETSC_DECIDE,PETSC_DECIDE,numVars*numLocs,numVars*n >> umLocs); >> MatSetFromOptions(_storage); >> >> MatSeqBAIJSetPreallocation >> (_storage,numVars,PETSC_NULL,sparsityStart); > > This should work if you call both MatSetSizes and MatSetType before > MatSetOption. There are technical reasons for this, mostly that it's > difficult to cache all the options and get them handled in a > consistent > and debuggable manner once the implementation is chosen. > > Unless someone disagrees, I'll make it an error to call > MatSetOptions() > before the implementation is available (so you would get a useful > error > with the usage above, instead of the option just being ignored). > Do you mean if (!mat->ops->setoption) SETERRQ..... ? Or something else. 
I agree that it is impractical at this point to cache such values and better to generate the "useful error message" Barry > Jed From jed at 59A2.org Thu Apr 22 12:35:47 2010 From: jed at 59A2.org (Jed Brown) Date: Thu, 22 Apr 2010 19:35:47 +0200 Subject: [petsc-users] Newbie Question - Really slow - PetscMemCpy In-Reply-To: References: <871ve7sl1m.fsf@59A2.org> <5134B1F6-F238-4738-85F5-E8FBECD33B45@mcs.anl.gov> <87tyr3r2ha.fsf@59A2.org> Message-ID: <87k4rzqtjg.fsf@59A2.org> On Thu, 22 Apr 2010 12:27:58 -0500, Barry Smith wrote: > Do you mean if (!mat->ops->setoption) SETERRQ..... ? I think that is too strong because it will break everyone's existing matrix type (including MatShell) that didn't bother to implement MatSetOption. The primary implementations explicitly ignore certain options and generate an error for unknown options. I think this is the right behavior *if* the implementation chooses to define a MatSetOption_XXX, but I think that ignoring options is the correct behavior if they do not implement this function (for example, most options don't even apply to MatShell, I definitely don't want users to have to query whether an option is supported before setting it). What I had in mind was just if (!((PetscObject)mat)->type_name) SETERRQ(PETSC_ERR_ARG_TYPENOTSET,"Cannot set options until type and size have been set, see MatSetType() and MatSetSizes()"); Jed From bsmith at mcs.anl.gov Thu Apr 22 12:58:32 2010 From: bsmith at mcs.anl.gov (Barry Smith) Date: Thu, 22 Apr 2010 12:58:32 -0500 Subject: [petsc-users] Newbie Question - Really slow - PetscMemCpy In-Reply-To: <87k4rzqtjg.fsf@59A2.org> References: <871ve7sl1m.fsf@59A2.org> <5134B1F6-F238-4738-85F5-E8FBECD33B45@mcs.anl.gov> <87tyr3r2ha.fsf@59A2.org> <87k4rzqtjg.fsf@59A2.org> Message-ID: <6F83B40E-5D74-409A-9978-52635EF9BED4@mcs.anl.gov> On Apr 22, 2010, at 12:35 PM, Jed Brown wrote: > On Thu, 22 Apr 2010 12:27:58 -0500, Barry Smith > wrote: >> Do you mean if (!mat->ops->setoption) SETERRQ..... ? > > I think that is too strong because it will break everyone's existing > matrix type (including MatShell) that didn't bother to implement > MatSetOption. The primary implementations explicitly ignore certain > options and generate an error for unknown options. I think this is > the > right behavior *if* the implementation chooses to define a > MatSetOption_XXX, but I think that ignoring options is the correct > behavior if they do not implement this function (for example, most > options don't even apply to MatShell, I definitely don't want users to > have to query whether an option is supported before setting it). > > What I had in mind was just > > if (!((PetscObject)mat)->type_name) > SETERRQ(PETSC_ERR_ARG_TYPENOTSET,"Cannot set options until type and > size have been set, see MatSetType() and MatSetSizes()"); > Ok, I guess this is probably ok. Barry > Jed From lizs at mail.uc.edu Thu Apr 22 19:59:49 2010 From: lizs at mail.uc.edu (Li, Zhisong (lizs)) Date: Fri, 23 Apr 2010 00:59:49 +0000 Subject: [petsc-users] want some explanations on BC setup Message-ID: <88D7E3BB7E1960428303E76010037451A36A@BL2PRD0103MB060.prod.exchangelabs.com> Hi, My objective is to write a primitive variable CFD solver based on SNES and TS. As a new user, I plan to start with the tutorial code snes/ex19.c by removing the temperature term. Then I can implement its equations and boundary conditions (BC) into the tutorial code ts/ex7.c for pseudo-transient. Finally I can switch to primitive variables. 
But I could not fully understand the BC setup in the tutorial codes and could not get a correct result by simply imitating the tutorial codes or my own attempts. PETSc usually defines F as the function vector and X as solution vector. In snes/ex19.c, the BC are setup by manipulating arrays pointing to F and X, such as: f[j][i].u = x[j][i].u - lid; (line 297) in which x[j][i].u is initialized as zero by other function. Why it's here " - lid" rather than "+ lid"? How about if we use " f[j][i].u = - lid " or " x[j][i].u = - lid "? And in my test runs with a modified ts/ex7.c with equations and BC from snes/ex19.c , only BC with " x[j][i] = some numerical value " can give a convergent result. I also found that in TS codes an initial condition at the boundary will overwrite the BC setup if they are of different values. Is this a bug? I think TS is something like a structure, but I cannot find descriptions of its members like "step","ptime" and "v" in any document. To run a TS simulation successfully, does it require a NORM_2 of "v" converge? Thank you very much. Best regards, Zhisong Li -------------- next part -------------- An HTML attachment was scrubbed... URL: From jed at 59A2.org Fri Apr 23 05:26:20 2010 From: jed at 59A2.org (Jed Brown) Date: Fri, 23 Apr 2010 12:26:20 +0200 Subject: [petsc-users] want some explanations on BC setup In-Reply-To: <88D7E3BB7E1960428303E76010037451A36A@BL2PRD0103MB060.prod.exchangelabs.com> References: <88D7E3BB7E1960428303E76010037451A36A@BL2PRD0103MB060.prod.exchangelabs.com> Message-ID: <877hnyqxbn.fsf@59A2.org> On Fri, 23 Apr 2010 00:59:49 +0000, "Li, Zhisong (lizs)" wrote: > Hi, > > My objective is to write a primitive variable CFD solver based on SNES > and TS. Compressible or incompressible flow? Is a 2D (topologically) structured grid sufficient? > Finally I can switch to primitive variables. As you may know, discretization is usually significantly different for shocks or incompressibility, so this won't necessarily be an easy step. The main reason velocity-vorticity is popular for 2D incompressible flow is that simple (non-staggered/mixed) discretizations just work. Also note that the ex19 discretization is excessively diffusive at high Reynolds number. I'm just warning that this transition to primitive variables involves a lot of choices and may involve complications like staggered grids and/or TVD spatial discretizations. > PETSc usually defines F as the function vector and X as solution > vector. In snes/ex19.c, the BC are setup by manipulating arrays > pointing to F and X, such as: > > f[j][i].u = x[j][i].u - lid; (line 297) This is the residual, at convergence, you will see f[j][i].u = 0, or in other words, x[j][i].u = lid. > in which x[j][i].u is initialized as zero by other function. This is an "impulsive start" to the steady-state problem, the initial condition does not satisfy the boundary conditions. > Why it's here " - lid" rather than "+ lid"? How about if we use " > f[j][i].u = - lid " or " x[j][i].u = - lid "? Hopefully the explanation above explains this. You aren't normally supposed to modify the solution vector X in residual evaluation, but this condition will usually be satisfied identically after the first iteration. 
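Written out schematically, the residual form of these conditions inside a FormFunctionLocal-style routine looks like the sketch below (this is not the actual ex19.c source; interior_residual() is a hypothetical placeholder, and mx, my, xs, xm, ys, ym stand for the usual grid sizes and local corner information):

for (j = ys; j < ys+ym; j++) {
  for (i = xs; i < xs+xm; i++) {
    if (j == my-1) {
      f[j][i].u = x[j][i].u - lid;            /* lid: residual vanishes exactly when u == lid */
    } else if (i == 0 || i == mx-1 || j == 0) {
      f[j][i].u = x[j][i].u;                  /* stationary walls: residual vanishes when u == 0 */
    } else {
      f[j][i].u = interior_residual(x, i, j); /* discretized equation for u at interior nodes */
    }
  }
}

Newton then drives every entry of f to zero, so the converged solution has u = lid on the lid and u = 0 on the other walls without the residual routine ever writing into X.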
I think that, at least in some cases, it is okay to project the solution vector during residual evaluation, but this is more relevant for variational inequalities and causes additional complications about what the Jacobian should look like (it won't work correctly with colored or matrix-free finite difference Jacobians), so I don't recommend it. > I also found that in TS codes an initial condition at the boundary > will overwrite the BC setup if they are of different values. Is this a > bug? In the appropriate function-space setting, Dirichlet boundary values are not even part of the solution space (they are known and thus not something you need to solve for). There are a variety of ways to enforce Dirichlet conditions without removing them from the function space, I don't know what specifically you are asking about TS examples, but if you use an initial condition that is incompatible with Dirichlet boundary conditions, it will normally be made consistent at the end of the first step (we don't have support for automatically determining consistent initial conditions, this is more of an issue for DAE and something I plan to address eventually). > I think TS is something like a structure, but I cannot find > descriptions of its members like "step","ptime" and "v" in any > document. To run a TS simulation successfully, does it require a > NORM_2 of "v" converge? TS is a class much like SNES and KSP. It sounds like you are asking about ex7's MyTSMonitor(), to understand the meaning of it's arguments, you can refer to the TSMonitorSet() documentation http://mcs.anl.gov/petsc/petsc-as/snapshots/petsc-current/docs/manualpages/TS/TSMonitorSet.html In addition to these arguments, you can query other properties through the TS. Jed From stefan.kurzbach at tuhh.de Fri Apr 23 05:36:29 2010 From: stefan.kurzbach at tuhh.de (Stefan Kurzbach) Date: Fri, 23 Apr 2010 12:36:29 +0200 Subject: [petsc-users] Building Sieve examples Message-ID: <4BD1782D.50004@tuhh.de> Hello everybody, when looking for examples using Sieve for unstructured meshes I found these 3 (in http://www.mcs.anl.gov/petsc/petsc-as/snapshots/petsc-current/docs/manualpages/Mesh/MeshCreate.html): src/ksp/ksp/examples/tutorials/ex33.c.html src/ksp/ksp/examples/tutorials/ex35.c.html src/ksp/ksp/examples/tutorials/ex37.c.html I have built PETSc configuring --with-sieve successfully. Unfortunately trying to build these examples ("make ex35") fails with error messages similar to the one below (and many more): /home/software/mpi/intel/openmpi-1.2.4/bin/mpicxx -o ex35.o -c -O -I/home/uhgd0053/petsc-3.1-p0//src/dm/mesh/sieve -I/home/uhgd0053/petsc-3.1-p0/externalpacka ges/Boost/ -I/opt/software/mpi/intel/openmpi-1.2.4/lib64 -I/home/uhgd0053/petsc-3.1-p0//withsieve/include -I/home/uhgd0053/petsc-3.1-p0//include -I/usr/X11R6/ include -I/home/uhgd0053/petsc-3.1-p0/withsieve/include -I/opt/software/mpi/intel/openmpi-1.2.4/lib64 -I/home/uhgd0053/petsc-3.1-p0/include/sieve -I/home/uhgd 0053/petsc-3.1-p0/externalpackages/Boost/ -I/home/software/mpi/intel/openmpi-1.2.4/include -D__INSDIR__=src/ksp/ksp/examples/tutorials/ ex35.c ex35.c(21): error: name followed by "::" must be a class or namespace name PetscErrorCode MeshView_Sieve_Newer(ALE::Obj, PetscViewer); Without digging deeper into the code, are these examples still working with the current version of PETSc? What am I missing or where can I find up-to-date information? Thanks a lot, Stefan -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From s.kramer at imperial.ac.uk Fri Apr 23 08:32:54 2010 From: s.kramer at imperial.ac.uk (Stephan Kramer) Date: Fri, 23 Apr 2010 14:32:54 +0100 Subject: [petsc-users] configure script for petsc Message-ID: <4BD1A186.1080000@imperial.ac.uk> Hi all, I'm trying to write a portable configure script for our software that uses petsc, and needs to deal with petsc installations on a number of different platforms. In petsc 3.0 I could do make -f $(PETSC_DIR)/conf/base getlinklibs and make -f $(PETSC_DIR)/conf/base getincludedirs that would work for both "installed" petsc and petsc in its own build directory (provided PETSC_ARCH is set correctly of course). In petsc 3.1 conf/base seems to have disappeared, is the best way to deal with this to make my own little makefile: include ${PETSC_DIR}/conf/variables include ${PETSC_DIR}/conf/rules and use "make -f my_own_little_makefile getlinklibs" instead? Cheers Stephan From balay at mcs.anl.gov Fri Apr 23 08:36:31 2010 From: balay at mcs.anl.gov (Satish Balay) Date: Fri, 23 Apr 2010 08:36:31 -0500 (CDT) Subject: [petsc-users] configure script for petsc In-Reply-To: <4BD1A186.1080000@imperial.ac.uk> References: <4BD1A186.1080000@imperial.ac.uk> Message-ID: On Fri, 23 Apr 2010, Stephan Kramer wrote: > Hi all, > > I'm trying to write a portable configure script for our software that uses > petsc, and needs to deal with petsc installations on a number of different > platforms. In petsc 3.0 I could do > > make -f $(PETSC_DIR)/conf/base getlinklibs > > and > > make -f $(PETSC_DIR)/conf/base getincludedirs > > that would work for both "installed" petsc and petsc in its own build > directory (provided PETSC_ARCH is set correctly of course). In petsc 3.1 > conf/base seems to have disappeared, is the best > way to deal with this to make my own little makefile: > > include ${PETSC_DIR}/conf/variables > include ${PETSC_DIR}/conf/rules > > and use "make -f my_own_little_makefile getlinklibs" instead? Yes - this way - you can add more stuff into this makefile - if needed. If you just do getlinklibs and getincludedirs - then you can probably just do: make -f $(PETSC_DIR)/makefile getincludedirs etc.. Satish From s.kramer at imperial.ac.uk Fri Apr 23 08:39:33 2010 From: s.kramer at imperial.ac.uk (Stephan Kramer) Date: Fri, 23 Apr 2010 14:39:33 +0100 Subject: [petsc-users] configure script for petsc In-Reply-To: References: <4BD1A186.1080000@imperial.ac.uk> Message-ID: <4BD1A315.9090206@imperial.ac.uk> Satish Balay wrote: > On Fri, 23 Apr 2010, Stephan Kramer wrote: > >> Hi all, >> >> I'm trying to write a portable configure script for our software that uses >> petsc, and needs to deal with petsc installations on a number of different >> platforms. In petsc 3.0 I could do >> >> make -f $(PETSC_DIR)/conf/base getlinklibs >> >> and >> >> make -f $(PETSC_DIR)/conf/base getincludedirs >> >> that would work for both "installed" petsc and petsc in its own build >> directory (provided PETSC_ARCH is set correctly of course). In petsc 3.1 >> conf/base seems to have disappeared, is the best >> way to deal with this to make my own little makefile: >> >> include ${PETSC_DIR}/conf/variables >> include ${PETSC_DIR}/conf/rules >> >> and use "make -f my_own_little_makefile getlinklibs" instead? > > Yes - this way - you can add more stuff into this makefile - if needed. 
> > If you just do getlinklibs and getincludedirs - then you can probably > just do: > > make -f $(PETSC_DIR)/makefile getincludedirs True, except the makefile doesn't make it into a prefix-installed installation of petsc. Having my own makefile do it works fine - was just checking if this is indeed the recommended way Thanks a lot Cheers Stephan > > etc.. > > Satish > -- Stephan Kramer Applied Modelling and Computation Group, Department of Earth Science and Engineering, Imperial College London From balay at mcs.anl.gov Fri Apr 23 08:47:22 2010 From: balay at mcs.anl.gov (Satish Balay) Date: Fri, 23 Apr 2010 08:47:22 -0500 (CDT) Subject: [petsc-users] configure script for petsc In-Reply-To: <4BD1A315.9090206@imperial.ac.uk> References: <4BD1A186.1080000@imperial.ac.uk> <4BD1A315.9090206@imperial.ac.uk> Message-ID: On Fri, 23 Apr 2010, Stephan Kramer wrote: > Satish Balay wrote: > > On Fri, 23 Apr 2010, Stephan Kramer wrote: > > > > > Hi all, > > > > > > I'm trying to write a portable configure script for our software that uses > > > petsc, and needs to deal with petsc installations on a number of different > > > platforms. In petsc 3.0 I could do > > > > > > make -f $(PETSC_DIR)/conf/base getlinklibs > > > > > > and > > > > > > make -f $(PETSC_DIR)/conf/base getincludedirs > > > > > > that would work for both "installed" petsc and petsc in its own build > > > directory (provided PETSC_ARCH is set correctly of course). In petsc 3.1 > > > conf/base seems to have disappeared, is the best > > > way to deal with this to make my own little makefile: > > > > > > include ${PETSC_DIR}/conf/variables > > > include ${PETSC_DIR}/conf/rules > > > > > > and use "make -f my_own_little_makefile getlinklibs" instead? > > > > Yes - this way - you can add more stuff into this makefile - if needed. > > > > If you just do getlinklibs and getincludedirs - then you can probably > > just do: > > > > make -f $(PETSC_DIR)/makefile getincludedirs > > True, except the makefile doesn't make it into a prefix-installed > installation of petsc. Having my own makefile do it works fine - > was just checking if this is indeed the recommended way Ah yes. There is no makefie that you can use in the prefix install. In this case creating a temporary makefile is the way to go.. Satish From lizs at mail.uc.edu Fri Apr 23 11:01:21 2010 From: lizs at mail.uc.edu (Li, Zhisong (lizs)) Date: Fri, 23 Apr 2010 16:01:21 +0000 Subject: [petsc-users] want some explanations on BC setup In-Reply-To: <877hnyqxbn.fsf@59A2.org> References: <88D7E3BB7E1960428303E76010037451A36A@BL2PRD0103MB060.prod.exchangelabs.com>, <877hnyqxbn.fsf@59A2.org> Message-ID: <88D7E3BB7E1960428303E76010037451A3A0@BL2PRD0103MB060.prod.exchangelabs.com> Hi, Jed, Thank you so much for these words. Actually I am working towards a 3D incompressible flow solver with finite volume method. So a driven cavity flow problem of finite difference will be a good starting point. ________________________________________ >This is an "impulsive start" to the steady-state problem, the initial >condition does not satisfy the boundary conditions. >> Why it's here " - lid" rather than "+ lid"? How about if we use " >> f[j][i].u = - lid " or " x[j][i].u = - lid "? > Hopefully the explanation above explains this. You aren't normally > supposed to modify the solution vector X in residual evaluation, but > this condition will usually be satisfied identically after the first > iteration. 
> I think that, at least in some cases, it is okay to project the solution > vector during residual evaluation, but this is more relevant for > variational inequalities and causes additional complications about what > the Jacobian should look like (it won't work correctly with colored or > matrix-free finite difference Jacobians), so I don't recommend it. The order of scheme is not a big issue to me. The big troubles I can't work out are: 1. Direct removing temperature terms in snes/ex19.c doesn't give a sensible result except applying option "-snes_mf" or "-snes_mf_operator". 2. Combining snes/ex19.c and ts/ex7.c (no temperature), I could not even get a converging SNES function norm or bounded result if I strictly follow your BC rules. But I can make it converged or bounded by setting BC like " f[j][i] = - lid " or "x[j][i] = - lid " with/without -snes_mf. I can send you my modified code (simple and straightforward) if you want. Thank you again. Regards, Zhisong Li From jed at 59A2.org Fri Apr 23 11:19:56 2010 From: jed at 59A2.org (Jed Brown) Date: Fri, 23 Apr 2010 18:19:56 +0200 Subject: [petsc-users] want some explanations on BC setup In-Reply-To: <88D7E3BB7E1960428303E76010037451A3A0@BL2PRD0103MB060.prod.exchangelabs.com> References: <88D7E3BB7E1960428303E76010037451A36A@BL2PRD0103MB060.prod.exchangelabs.com>, <877hnyqxbn.fsf@59A2.org> <88D7E3BB7E1960428303E76010037451A3A0@BL2PRD0103MB060.prod.exchangelabs.com> Message-ID: <87vdbip2dv.fsf@59A2.org> On Fri, 23 Apr 2010 16:01:21 +0000, "Li, Zhisong (lizs)" wrote: > 1. Direct removing temperature terms in snes/ex19.c doesn't give a > sensible result except applying option "-snes_mf" or > "-snes_mf_operator". DMMG normally computes the operator using coloring which is essentially the same operation as used for -snes_mf/-snes_mf_operator. You haven't given enough information to explain what's wrong. > 2. Combining snes/ex19.c and ts/ex7.c (no temperature), I could not > even get a converging SNES function norm or bounded result if I > strictly follow your BC rules. But I can make it converged or bounded > by setting BC like " f[j][i] = - lid " or "x[j][i] = - lid " > with/without -snes_mf. I don't believe you are actually doing this since it will not converge: f[j][i] will never go to zero so the norm of the residual will not either. Jed From lizs at mail.uc.edu Fri Apr 23 13:45:58 2010 From: lizs at mail.uc.edu (Li, Zhisong (lizs)) Date: Fri, 23 Apr 2010 18:45:58 +0000 Subject: [petsc-users] want some explanations on BC setup In-Reply-To: <87vdbip2dv.fsf@59A2.org> References: <88D7E3BB7E1960428303E76010037451A36A@BL2PRD0103MB060.prod.exchangelabs.com>, <877hnyqxbn.fsf@59A2.org> <88D7E3BB7E1960428303E76010037451A3A0@BL2PRD0103MB060.prod.exchangelabs.com>, <87vdbip2dv.fsf@59A2.org> Message-ID: <88D7E3BB7E1960428303E76010037451A3CD@BL2PRD0103MB060.prod.exchangelabs.com> >I don't believe you are actually doing this since it will not converge: >f[j][i] will never go to zero so the norm of the residual will not >either. >Jed Actually here I mean putting only the equations and BC of snes/ex19.c into ts/ex7.c , not the exact program and no DMMG issue. I use x[j][i].u=0.0 or -lid as BC and it converges well. If I use f[j][i].u=x[j][i].u - lid ,x[j][i].u value at the boundary will grow up uncontrollably as time marches. 
> There are a variety of ways to enforce Dirichlet conditions without > removing them from the function space, I don't know what specifically > you are asking about TS examples, but if you use an initial condition that is > incompatible with Dirichlet boundary conditions, it will normally be made > consistent at the end of the first step. And for the driven cavity problem (snes/ex19.c), it's not a fully Dirichlet boundary, as omega on the solid wall is not fixed and needs to be determined. So we can never make the IC consistent with the BC for this case. You said "it will normally be made consistent at the end of the first step", but I think this is not correct. Here is my test on ts/ex7.c: Change the IC value 0.0 at the boundary (line 249) to a nonzero value, say 1.5. Leave the BC as f[j][i] = x[j][i] unchanged, or fix it as f[j][i] = 0.0 (line 189). I also add "VecView(x,PETSC_VIEWER_DRAW_WORLD);" to view the result. You will find the steady-state result is completely different because of the IC. I really want to get past this IC overwriting problem. Do you have any suggestion? Thank you again. Regards, Zhisong Li

From jed at 59A2.org Sun Apr 25 14:49:07 2010 From: jed at 59A2.org (Jed Brown) Date: Sun, 25 Apr 2010 21:49:07 +0200 Subject: [petsc-users] want some explanations on BC setup In-Reply-To: <88D7E3BB7E1960428303E76010037451A3CD@BL2PRD0103MB060.prod.exchangelabs.com> References: <88D7E3BB7E1960428303E76010037451A36A@BL2PRD0103MB060.prod.exchangelabs.com>, <877hnyqxbn.fsf@59A2.org> <88D7E3BB7E1960428303E76010037451A3A0@BL2PRD0103MB060.prod.exchangelabs.com>, <87vdbip2dv.fsf@59A2.org> <88D7E3BB7E1960428303E76010037451A3CD@BL2PRD0103MB060.prod.exchangelabs.com> Message-ID: <87fx2jpb2k.fsf@59A2.org> On Fri, 23 Apr 2010 18:45:58 +0000, "Li, Zhisong (lizs)" wrote: > And for the driven cavity problem (snes/ex19.c), it's not a fully > Dirichlet boundary, as omega on the solid wall is not fixed and needs to be > determined. So we can never make the IC consistent with the BC for this case. There are Dirichlet conditions for u and v. The boundary condition for vorticity can be thought of as a Dirichlet condition (equal to its definition as the curl of velocity). This should be implemented in the same way as the non-transient examples (see also snes/examples/tutorials/ex27.c and the associated paper [1]). Residual evaluation should not modify the state vector to enforce boundary conditions. > Change the IC value 0.0 at the boundary (line 249) to a nonzero value, say 1.5. > > Leave the BC as f[j][i] = x[j][i] unchanged, or fix it as f[j][i] = 0.0 (line 189). The latter of these choices produces a singular operator. > I also add "VecView(x,PETSC_VIEWER_DRAW_WORLD);" to view the result. > > You will find the steady-state result is completely different because of the IC. I don't know what physics ex7 is solving, but I suspect the associated steady-state problem is ill posed. However, the problem you describe is real and is caused by the fact that Dirichlet boundary conditions are algebraic constraints (assuming those boundary values are explicitly represented in your system, rather than being removed). When you use TS's "RHS" interface (TSSetRHSFunction, etc), you are solving a system X_t = F(t,X) where ALL variables, INCLUDING explicitly represented boundary values, are in the vector X. So consider the equation for a certain boundary value x which should be equal to c. If we write the perfectly reasonable algebraic constraint f(x) = x-c, the transient term becomes x_t = x - c, which is very bad (x=c isn't even a stable solution).
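To spell out why: under the RHS interface that boundary row is integrated as the scalar ODE

    x_t = x - c,   whose solution is   x(t) = c + (x(0) - c) e^t,

so any mismatch between the initial value and the boundary data grows exponentially in time instead of decaying; this is exactly the uncontrolled growth of the boundary value reported above.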
We could write this as f(x) = lambda(c-x), lambda>0 which will produce exponential decay to the correct boundary value, but this is a stiff term for large lambda (and relaxes slowly for small lamda) and will produce oscillations with methods that are not L-stable (like Crank-Nicholson) when lambda*(Delta t) is large. If the boundary data is sufficiently smooth in time, you may be able to choose a lambda to make this an acceptable solution. It is certainly not okay if the boundary data is non-smooth in time. There are at least two solutions to this problem: 1. Remove the Dirichlet degrees of freedom from the system. The remaining system is actually an ODE and everything will work fine. This is a hassle on a structured grid, if different quantities have boundary conditions of different type, or if the boundary conditions change during the simulation. As far as I can tell, CVODE (from Sundials) doesn't have any particular way to handle explicitly represented boundary values. 2. Write the system as a differential algebraic system. This mostly amounts to using the implicit interface (see TSSetIFunction and TSSetIJacobian). This allows explicit representation of boundary values, the conditions will be enforced exactly from the first stage onward. I recommend this one. Jed [1] Todd S. Coffey and C. T. Kelley and David E. Keyes, Pseudotransient Continuation and Differential-Algebraic Equations, 2003. From xy2102 at columbia.edu Mon Apr 26 09:01:37 2010 From: xy2102 at columbia.edu ((Rebecca) Xuefei YUAN) Date: Mon, 26 Apr 2010 10:01:37 -0400 Subject: [petsc-users] pc_type asm Message-ID: <20100426100137.81zky3r2ck8o4wo4@cubmail.cc.columbia.edu> Dear all, If I add the option with -pc_type asm, I will get some memory errors from valgrind for np>1. I think it is for setup of the pc, but I am not sure how to fix this problem. Here is the valgrind error: rebecca at YuanWork:~/linux/code/twoway/twoway_new/workingspace$ mpiexec -np 2 valgrind ./twqt2ff.exe -options_file option_all ==15113== Memcheck, a memory error detector ==15113== Copyright (C) 2002-2009, and GNU GPL'd, by Julian Seward et al. ==15113== Using Valgrind-3.5.0 and LibVEX; rerun with -h for copyright info ==15113== Command: ./twqt2ff.exe -options_file option_all ==15113== ==15114== Memcheck, a memory error detector ==15114== Copyright (C) 2002-2009, and GNU GPL'd, by Julian Seward et al. 
==15114== Using Valgrind-3.5.0 and LibVEX; rerun with -h for copyright info ==15114== Command: ./twqt2ff.exe -options_file option_all ==15114== ************************************************** number of processors = 2 viscosity = 5.0000000000000003e-02 resistivity = 5.0000000000000001e-03 skin depth = 0.0000000000000000e+00 hyper resistivity = 3.2768000000000001e-04 hyper viscosity = 2.6214399999999999e-01 problem size: 51 by 51 dx = 2.5098039215686274e-01 dy = 1.2800000000000000e-01 dt = 1.0000000000000001e-01 adaptive time step size (1:yes;0:no) = 0 ************************************************** 0 SNES Function norm 2.189505265331e-02 ==15114== Invalid read of size 4 ==15114== at 0x88A09C4: MPIDI_CH3I_Progress_handle_sock_event (ch3_progress.c:738) ==15114== by 0x88A0F17: MPIDI_CH3I_Progress (ch3_progress.c:212) ==15114== by 0x88802B4: PMPI_Waitany (waitany.c:203) ==15114== by 0x85A15A4: MatGetSubMatrices_MPIAIJ_Local (mpiov.c:1195) ==15114== by 0x859BDC2: MatGetSubMatrices_MPIAIJ (mpiov.c:781) ==15114== by 0x820BD51: MatGetSubMatrices (matrix.c:5691) ==15114== by 0x8285317: PCSetUp_ASM (asm.c:299) ==15114== by 0x8250FD3: PCSetUp (precon.c:796) ==15114== by 0x82F0134: KSPSetUp (itfunc.c:272) ==15114== by 0x82F125C: KSPSolve (itfunc.c:390) ==15114== by 0x838A9DA: SNES_KSPSolve (snes.c:2961) ==15114== by 0x87C2C0C: SNESSolve_LS (ls.c:191) ==15114== Address 0x46411d0 is 368 bytes inside a [MPICH2 handle: objptr=0x4641060 handle=0xec00000c INDIRECT/REQUEST] of size 372 client-defined ==15114== at 0x889528A: MPIU_Handle_obj_alloc_unsafe (handlemem.c:217) ==15114== by 0x88E1920: MPIDI_CH3U_Recvq_FDU_or_AEP (ch3u_recvq.c:342) ==15114== by 0x88AAFA7: MPID_Irecv (mpid_irecv.c:46) ==15114== by 0x886D080: MPIC_Sendrecv (helper_fns.c:153) ==15114== by 0x88625E0: MPIR_Barrier (barrier.c:75) ==15114== by 0x88628D7: MPIR_Barrier_or_coll_fn (barrier.c:244) ==15114== by 0x8862998: PMPI_Barrier (barrier.c:421) ==15114== by 0x81174EA: PetscCommDuplicate (tagm.c:190) ==15114== by 0x811AE1E: PetscHeaderCreate_Private (inherit.c:43) ==15114== by 0x8304017: KSPCreate (itcreate.c:476) ==15114== by 0x8273121: PCMGSetLevels (mg.c:173) ==15114== by 0x839A21E: DMMGSetUpLevel (damg.c:379) ==15114== ==15113== Invalid read of size 4 ==15113== at 0x88A09C4: MPIDI_CH3I_Progress_handle_sock_event (ch3_progress.c:738) ==15113== by 0x88A0F17: MPIDI_CH3I_Progress (ch3_progress.c:212) ==15113== by 0x88802B4: PMPI_Waitany (waitany.c:203) ==15113== by 0x85A15A4: MatGetSubMatrices_MPIAIJ_Local (mpiov.c:1195) ==15113== by 0x859BDC2: MatGetSubMatrices_MPIAIJ (mpiov.c:781) ==15113== by 0x820BD51: MatGetSubMatrices (matrix.c:5691) ==15113== by 0x8285317: PCSetUp_ASM (asm.c:299) ==15113== by 0x8250FD3: PCSetUp (precon.c:796) ==15113== by 0x82F0134: KSPSetUp (itfunc.c:272) ==15113== by 0x82F125C: KSPSolve (itfunc.c:390) ==15113== by 0x838A9DA: SNES_KSPSolve (snes.c:2961) ==15113== by 0x87C2C0C: SNESSolve_LS (ls.c:191) ==15113== Address 0x472d4d8 is 368 bytes inside a [MPICH2 handle: objptr=0x472d368 handle=0xec00000c INDIRECT/REQUEST] of size 372 client-defined ==15113== at 0x889528A: MPIU_Handle_obj_alloc_unsafe (handlemem.c:217) ==15113== by 0x88E1920: MPIDI_CH3U_Recvq_FDU_or_AEP (ch3u_recvq.c:342) ==15113== by 0x88AAFA7: MPID_Irecv (mpid_irecv.c:46) ==15113== by 0x886D080: MPIC_Sendrecv (helper_fns.c:153) ==15113== by 0x88625E0: MPIR_Barrier (barrier.c:75) ==15113== by 0x88628D7: MPIR_Barrier_or_coll_fn (barrier.c:244) ==15113== by 0x8862998: PMPI_Barrier (barrier.c:421) ==15113== by 0x81174EA: PetscCommDuplicate 
(tagm.c:190) ==15113== by 0x811AE1E: PetscHeaderCreate_Private (inherit.c:43) ==15113== by 0x8304017: KSPCreate (itcreate.c:476) ==15113== by 0x8273121: PCMGSetLevels (mg.c:173) ==15113== by 0x839A21E: DMMGSetUpLevel (damg.c:379) ==15113== Linear solve converged due to CONVERGED_RTOL iterations 1 1 SNES Function norm 6.250178106647e-03 Linear solve converged due to CONVERGED_RTOL iterations 3 2 SNES Function norm 8.139700042246e-04 Linear solve converged due to CONVERGED_RTOL iterations 5 3 SNES Function norm 1.228451086501e-05 Linear solve converged due to CONVERGED_RTOL iterations 10 4 SNES Function norm 1.375513293330e-08 Linear solve converged due to CONVERGED_RTOL iterations 16 5 SNES Function norm 3.634919075810e-13 Nonlinear solve converged due to CONVERGED_FNORM_RELATIVE ************************************************** time step = 1 current time step size= 1.0000000000000001e-01 time = 1.0000000000000001e-01 number of nonlinear iterations = 5 number of linear iterations = 35 function norm = 3.6349190758097565e-13 ************************************************** total number of time steps = 1 total number of nonlinear iterations = 5 total number of linear iterations = 35 -- (Rebecca) Xuefei YUAN Department of Applied Physics and Applied Mathematics Columbia University Tel:917-399-8032 www.columbia.edu/~xy2102 From bsmith at mcs.anl.gov Mon Apr 26 10:50:38 2010 From: bsmith at mcs.anl.gov (Barry Smith) Date: Mon, 26 Apr 2010 10:50:38 -0500 Subject: [petsc-users] pc_type asm In-Reply-To: <20100426100137.81zky3r2ck8o4wo4@cubmail.cc.columbia.edu> References: <20100426100137.81zky3r2ck8o4wo4@cubmail.cc.columbia.edu> Message-ID: <2FD449A2-7134-4088-B193-3B1FCEB2F3F4@mcs.anl.gov> Those are not real problems. You can ignore them; MPICH is sloppy about initializing some of its memory. Barry On Apr 26, 2010, at 9:01 AM, (Rebecca) Xuefei YUAN wrote: > Dear all, > > If I add the option with -pc_type asm, I will get some memory errors > from valgrind for np>1. > > I think it is for setup of the pc, but I am not sure how to fix this > problem. > > Here is the valgrind error: > rebecca at YuanWork:~/linux/code/twoway/twoway_new/workingspace$ > mpiexec -np 2 valgrind ./twqt2ff.exe -options_file option_all > ==15113== Memcheck, a memory error detector > ==15113== Copyright (C) 2002-2009, and GNU GPL'd, by Julian Seward > et al. > ==15113== Using Valgrind-3.5.0 and LibVEX; rerun with -h for > copyright info > ==15113== Command: ./twqt2ff.exe -options_file option_all > ==15113== > ==15114== Memcheck, a memory error detector > ==15114== Copyright (C) 2002-2009, and GNU GPL'd, by Julian Seward > et al. 
> ==15114== Using Valgrind-3.5.0 and LibVEX; rerun with -h for > copyright info > ==15114== Command: ./twqt2ff.exe -options_file option_all > ==15114== > ************************************************** > number of processors = 2 > viscosity = 5.0000000000000003e-02 > resistivity = 5.0000000000000001e-03 > skin depth = 0.0000000000000000e+00 > hyper resistivity = 3.2768000000000001e-04 > hyper viscosity = 2.6214399999999999e-01 > problem size: 51 by 51 > dx = 2.5098039215686274e-01 > dy = 1.2800000000000000e-01 > dt = 1.0000000000000001e-01 > adaptive time step size (1:yes;0:no) = 0 > ************************************************** > 0 SNES Function norm 2.189505265331e-02 > ==15114== Invalid read of size 4 > ==15114== at 0x88A09C4: MPIDI_CH3I_Progress_handle_sock_event > (ch3_progress.c:738) > ==15114== by 0x88A0F17: MPIDI_CH3I_Progress (ch3_progress.c:212) > ==15114== by 0x88802B4: PMPI_Waitany (waitany.c:203) > ==15114== by 0x85A15A4: MatGetSubMatrices_MPIAIJ_Local (mpiov.c: > 1195) > ==15114== by 0x859BDC2: MatGetSubMatrices_MPIAIJ (mpiov.c:781) > ==15114== by 0x820BD51: MatGetSubMatrices (matrix.c:5691) > ==15114== by 0x8285317: PCSetUp_ASM (asm.c:299) > ==15114== by 0x8250FD3: PCSetUp (precon.c:796) > ==15114== by 0x82F0134: KSPSetUp (itfunc.c:272) > ==15114== by 0x82F125C: KSPSolve (itfunc.c:390) > ==15114== by 0x838A9DA: SNES_KSPSolve (snes.c:2961) > ==15114== by 0x87C2C0C: SNESSolve_LS (ls.c:191) > ==15114== Address 0x46411d0 is 368 bytes inside a [MPICH2 handle: > objptr=0x4641060 handle=0xec00000c INDIRECT/REQUEST] of size 372 > client-defined > ==15114== at 0x889528A: MPIU_Handle_obj_alloc_unsafe (handlemem.c: > 217) > ==15114== by 0x88E1920: MPIDI_CH3U_Recvq_FDU_or_AEP (ch3u_recvq.c: > 342) > ==15114== by 0x88AAFA7: MPID_Irecv (mpid_irecv.c:46) > ==15114== by 0x886D080: MPIC_Sendrecv (helper_fns.c:153) > ==15114== by 0x88625E0: MPIR_Barrier (barrier.c:75) > ==15114== by 0x88628D7: MPIR_Barrier_or_coll_fn (barrier.c:244) > ==15114== by 0x8862998: PMPI_Barrier (barrier.c:421) > ==15114== by 0x81174EA: PetscCommDuplicate (tagm.c:190) > ==15114== by 0x811AE1E: PetscHeaderCreate_Private (inherit.c:43) > ==15114== by 0x8304017: KSPCreate (itcreate.c:476) > ==15114== by 0x8273121: PCMGSetLevels (mg.c:173) > ==15114== by 0x839A21E: DMMGSetUpLevel (damg.c:379) > ==15114== > ==15113== Invalid read of size 4 > ==15113== at 0x88A09C4: MPIDI_CH3I_Progress_handle_sock_event > (ch3_progress.c:738) > ==15113== by 0x88A0F17: MPIDI_CH3I_Progress (ch3_progress.c:212) > ==15113== by 0x88802B4: PMPI_Waitany (waitany.c:203) > ==15113== by 0x85A15A4: MatGetSubMatrices_MPIAIJ_Local (mpiov.c: > 1195) > ==15113== by 0x859BDC2: MatGetSubMatrices_MPIAIJ (mpiov.c:781) > ==15113== by 0x820BD51: MatGetSubMatrices (matrix.c:5691) > ==15113== by 0x8285317: PCSetUp_ASM (asm.c:299) > ==15113== by 0x8250FD3: PCSetUp (precon.c:796) > ==15113== by 0x82F0134: KSPSetUp (itfunc.c:272) > ==15113== by 0x82F125C: KSPSolve (itfunc.c:390) > ==15113== by 0x838A9DA: SNES_KSPSolve (snes.c:2961) > ==15113== by 0x87C2C0C: SNESSolve_LS (ls.c:191) > ==15113== Address 0x472d4d8 is 368 bytes inside a [MPICH2 handle: > objptr=0x472d368 handle=0xec00000c INDIRECT/REQUEST] of size 372 > client-defined > ==15113== at 0x889528A: MPIU_Handle_obj_alloc_unsafe (handlemem.c: > 217) > ==15113== by 0x88E1920: MPIDI_CH3U_Recvq_FDU_or_AEP (ch3u_recvq.c: > 342) > ==15113== by 0x88AAFA7: MPID_Irecv (mpid_irecv.c:46) > ==15113== by 0x886D080: MPIC_Sendrecv (helper_fns.c:153) > ==15113== by 0x88625E0: MPIR_Barrier (barrier.c:75) > ==15113== 
by 0x88628D7: MPIR_Barrier_or_coll_fn (barrier.c:244) > ==15113== by 0x8862998: PMPI_Barrier (barrier.c:421) > ==15113== by 0x81174EA: PetscCommDuplicate (tagm.c:190) > ==15113== by 0x811AE1E: PetscHeaderCreate_Private (inherit.c:43) > ==15113== by 0x8304017: KSPCreate (itcreate.c:476) > ==15113== by 0x8273121: PCMGSetLevels (mg.c:173) > ==15113== by 0x839A21E: DMMGSetUpLevel (damg.c:379) > ==15113== > Linear solve converged due to CONVERGED_RTOL iterations 1 > 1 SNES Function norm 6.250178106647e-03 > Linear solve converged due to CONVERGED_RTOL iterations 3 > 2 SNES Function norm 8.139700042246e-04 > Linear solve converged due to CONVERGED_RTOL iterations 5 > 3 SNES Function norm 1.228451086501e-05 > Linear solve converged due to CONVERGED_RTOL iterations 10 > 4 SNES Function norm 1.375513293330e-08 > Linear solve converged due to CONVERGED_RTOL iterations 16 > 5 SNES Function norm 3.634919075810e-13 > Nonlinear solve converged due to CONVERGED_FNORM_RELATIVE > ************************************************** > time step = 1 > current time step size= 1.0000000000000001e-01 > time = 1.0000000000000001e-01 > number of nonlinear iterations = 5 > number of linear iterations = 35 > function norm = 3.6349190758097565e-13 > ************************************************** > total number of time steps = 1 > total number of nonlinear iterations = 5 > total number of linear iterations = 35 > > > > > > > -- > (Rebecca) Xuefei YUAN > Department of Applied Physics and Applied Mathematics > Columbia University > Tel:917-399-8032 > www.columbia.edu/~xy2102 > From xy2102 at columbia.edu Tue Apr 27 04:20:51 2010 From: xy2102 at columbia.edu ((Rebecca) Xuefei YUAN) Date: Tue, 27 Apr 2010 05:20:51 -0400 Subject: [petsc-users] VecView() error in BlueGene Message-ID: <20100427052051.g07i5m8wsg0gkksw@cubmail.cc.columbia.edu> Hi, I tried the example code /petsc-dev/src/snes/examples/tutorials/ex5.c on bluegene, but get the following message about the VecView(). [0]PETSC ERROR: ------------------------------------------------------------------------ [0]PETSC ERROR: Caught signal number 7 BUS: Bus Error, possibly illegal memory access [0]PETSC ERROR: Try option -start_in_debugger or -on_error_attach_debugger [0]PETSC ERROR: or see http://www.mcs.anl.gov/petsc/petsc-as/documentation/faq.html#valgrind[0]PETSC ERROR: or try http://valgrind.org on GNU/linux and Apple Mac OS X to find memory corruption errors [0]PETSC ERROR: likely location of problem given in stack below [0]PETSC ERROR: --------------------- Stack Frames ------------------------------------ [0]PETSC ERROR: Note: The EXACT line numbers in the stack are not available, [0]PETSC ERROR: INSTEAD the line number of the start of the function [0]PETSC ERROR: is given. [0]PETSC ERROR: [0] VecView_MPI line 801 src/vec/vec/impls/mpi/pdvec.c [0]PETSC ERROR: [0] VecView line 690 src/vec/vec/interface/vector.c [0]PETSC ERROR: [0] DAView_VTK line 129 src/dm/da/src/daview.c [0]PETSC ERROR: [0] DAView line 227 src/dm/da/src/daview.c [0]PETSC ERROR: --------------------- Error Message ------------------------------------ [0]PETSC ERROR: Signal received! [0]PETSC ERROR: ------------------------------------------------------------------------ [0]PETSC ERROR: Petsc Development HG revision: unknown HG Date: unknown [0]PETSC ERROR: See docs/changes/index.html for recent updates. [0]PETSC ERROR: See docs/faq.html for hints about trouble shooting. [0]PETSC ERROR: See docs/index.html for manual pages. 
[0]PETSC ERROR: ------------------------------------------------------------------------ [0]PETSC ERROR: ./ex5.exe on a arch-bgp- named ionode37 by Unknown Thu Jan 1 03:03:12 1970 [0]PETSC ERROR: Libraries linked from /scratch/rebeccaxyf/petsc-dev/arch-bgp-ibm-dbg/lib [0]PETSC ERROR: Configure run at Mon Apr 26 15:37:29 2010 [0]PETSC ERROR: Configure options --with-cc=mpixlc_r --with-cxx=mpixlcxx_r --with-fc=mpixlf90_r --with-clanguage=cxx --with-blas-lapack-lib="-L/opt/share/math_libraries/lapack/ppc64/IBM/ -llapack -lblas" --with-x=0 --with-is-color-value-type=short --with-shared=0 -CFLAGS="-qmaxmem=-1 -g" -CXXFLAGS="-qmaxmem=-1 -g" -FFLAGS="-qmaxmem=-1 -g" --with-debugging=1 --with-fortran-kernels=1 --with-batch=1 --known-mpi-shared=0 --known-memcmp-ok --known-sizeof-char=1 --known-sizeof-void-p=4 --known-sizeof-short=2 --known-sizeof-int=4 --known-sizeof-long=4 --known-sizeof-size_t=4 --known-sizeof-long-long=8 --known-sizeof-float=4 --known-sizeof-double=8 --known-bits-per-byte=8 --known-sizeof-MPI_Comm=4 --known-sizeof-MPI_Fint=4 --known-mpi-long-double=1 --known-level1-dcache-assoc=0 --known-level1-dcache-linesize=32 --known-level1-dcache-size=32768 --petsc-arch=bgp-dbg PETSC_ARCH=arch-bgp-ibm-dbg [0]PETSC ERROR: ------------------------------------------------------------------------ [0]PETSC ERROR: User provided function() line 0 in unknown directory unknown file Abort(59) on node 0 (rank 0 in comm 1140850688): application called MPI_Abort(MPI_COMM_WORLD, 59) - process 0 Also, my code is not working for calling VecView() with error message: [0]PETSC ERROR: ------------------------------------------------------------------------ [0]PETSC ERROR: Caught signal number 7 BUS: Bus Error, possibly illegal memory access [0]PETSC ERROR: Try option -start_in_debugger or -on_error_attach_debugger [0]PETSC ERROR: or see http://www.mcs.anl.gov/petsc/petsc-as/documentation/faq.html#valgrind[0]PETSC ERROR: or try http://valgrind.org on GNU/linux and Apple Mac OS X to find memory corruption errors [0]PETSC ERROR: likely location of problem given in stack below [0]PETSC ERROR: --------------------- Stack Frames ------------------------------------ [0]PETSC ERROR: Note: The EXACT line numbers in the stack are not available, [0]PETSC ERROR: INSTEAD the line number of the start of the function [0]PETSC ERROR: is given. [0]PETSC ERROR: [0] VecView_MPI_DA line 508 src/dm/da/src/gr2.c [0]PETSC ERROR: [0] VecView line 690 src/vec/vec/interface/vector.c [0]PETSC ERROR: [0] DumpSolutionToMatlab line 1052 twqt2ff.c [0]PETSC ERROR: [0] Solve line 235 twqt2ff.c [0]PETSC ERROR: --------------------- Error Message ------------------------------------ [0]PETSC ERROR: Signal received! [0]PETSC ERROR: ------------------------------------------------------------------------ [0]PETSC ERROR: Petsc Development HG revision: unknown HG Date: unknown [0]PETSC ERROR: See docs/changes/index.html for recent updates. [0]PETSC ERROR: See docs/faq.html for hints about trouble shooting. [0]PETSC ERROR: See docs/index.html for manual pages. 
[0]PETSC ERROR: ------------------------------------------------------------------------ [0]PETSC ERROR: ./twqt2ff.exe on a arch-bgp- named ionode132 by Unknown Fri Jan 2 00:44:26 1970 [0]PETSC ERROR: Libraries linked from /scratch/rebeccaxyf/petsc-dev/arch-bgp-ibm-dbg/lib [0]PETSC ERROR: Configure run at Mon Apr 26 15:37:29 2010 [0]PETSC ERROR: Configure options --with-cc=mpixlc_r --with-cxx=mpixlcxx_r --with-fc=mpixlf90_r --with-clanguage=cxx --with-blas-lapack-lib="-L/opt/share/math_libraries/lapack/ppc64/IBM/ -llapack -lblas" --with-x=0 --with-is-color-value-type=short --with-shared=0 -CFLAGS="-qmaxmem=-1 -g" -CXXFLAGS="-qmaxmem=-1 -g" -FFLAGS="-qmaxmem=-1 -g" --with-debugging=1 --with-fortran-kernels=1 --with-batch=1 --known-mpi-shared=0 --known-memcmp-ok --known-sizeof-char=1 --known-sizeof-void-p=4 --known-sizeof-short=2 --known-sizeof-int=4 --known-sizeof-long=4 --known-sizeof-size_t=4 --known-sizeof-long-long=8 --known-sizeof-float=4 --known-sizeof-double=8 --known-bits-per-byte=8 --known-sizeof-MPI_Comm=4 --known-sizeof-MPI_Fint=4 --known-mpi-long-double=1 --known-level1-dcache-assoc=0 --known-level1-dcache-linesize=32 --known-level1-dcache-size=32768 --petsc-arch=bgp-dbg PETSC_ARCH=arch-bgp-ibm-dbg [0]PETSC ERROR: ------------------------------------------------------------------------ [0]PETSC ERROR: User provided function() line 0 in unknown directory unknown file Abort(59) on node 0 (rank 0 in comm 1140850688): application called MPI_Abort(MPI_COMM_WORLD, 59) - process 0 Am I wrong at configure? Thanks a lot! Rebecca -- (Rebecca) Xuefei YUAN Department of Applied Physics and Applied Mathematics Columbia University Tel:917-399-8032 www.columbia.edu/~xy2102 From xy2102 at columbia.edu Tue Apr 27 04:44:40 2010 From: xy2102 at columbia.edu ((Rebecca) Xuefei YUAN) Date: Tue, 27 Apr 2010 05:44:40 -0400 Subject: [petsc-users] A good way of dealing data Message-ID: <20100427054440.kenru3syu8kcoggo@cubmail.cc.columbia.edu> Dear all, If I save my data file as a binary file from Petsc,i.e., ierr = PetscViewerBinaryOpen(PETSC_COMM_WORLD,fileName,FILE_MODE_WRITE,&viewer);CHKERRQ(ierr); How could I deal(get access to) with my data with a c/c++ code? Any examples on it? Thanks very much! Rebecca -- (Rebecca) Xuefei YUAN Department of Applied Physics and Applied Mathematics Columbia University Tel:917-399-8032 www.columbia.edu/~xy2102 From jed at 59A2.org Tue Apr 27 05:31:31 2010 From: jed at 59A2.org (Jed Brown) Date: Tue, 27 Apr 2010 12:31:31 +0200 Subject: [petsc-users] A good way of dealing data In-Reply-To: <20100427054440.kenru3syu8kcoggo@cubmail.cc.columbia.edu> References: <20100427054440.kenru3syu8kcoggo@cubmail.cc.columbia.edu> Message-ID: <87bpd5xk3g.fsf@59A2.org> On Tue, 27 Apr 2010 05:44:40 -0400, "(Rebecca) Xuefei YUAN" wrote: > Dear all, > > If I save my data file as a binary file from Petsc,i.e., > > ierr = > PetscViewerBinaryOpen(PETSC_COMM_WORLD,fileName,FILE_MODE_WRITE,&viewer);CHKERRQ(ierr); > > How could I deal(get access to) with my data with a c/c++ code? In what context? Normally you would open a binary viewer for reading and load the vectors/matrices/etc that were written above. Jed From aron.ahmadia at kaust.edu.sa Tue Apr 27 05:55:51 2010 From: aron.ahmadia at kaust.edu.sa (Aron Ahmadia) Date: Tue, 27 Apr 2010 06:55:51 -0400 Subject: [petsc-users] [Supercomputing Lab #847] VecView() error in BlueGene In-Reply-To: References: <20100427052051.g07i5m8wsg0gkksw@cubmail.cc.columbia.edu> Message-ID: I'll take a look at this and report back Rebecca. 
I was seeing similar bus errors on some PETSc example code calling VecView and we haven't tracked it down yet. A On Tue, Apr 27, 2010 at 5:20 AM, Xuefei YUAN via RT wrote: > > Tue Apr 27 12:20:11 2010: Request 847 was acted upon. > ?Transaction: Ticket created by xy2102 at columbia.edu > ? ? ? Queue: Shaheen > ? ? Subject: VecView() error in BlueGene > ? ? ? Owner: Nobody > ?Requestors: xy2102 at columbia.edu > ? ? ?Status: new > ?Ticket > > > Hi, > > I tried the example code /petsc-dev/src/snes/examples/tutorials/ex5.c > on bluegene, but get the following message about the VecView(). > > [0]PETSC ERROR: > ------------------------------------------------------------------------ > [0]PETSC ERROR: Caught signal number 7 BUS: Bus Error, possibly > illegal memory access > [0]PETSC ERROR: Try option -start_in_debugger or -on_error_attach_debugger > [0]PETSC ERROR: or see > http://www.mcs.anl.gov/petsc/petsc-as/documentation/faq.html#valgrind[0]PETSC > ERROR: or try http://valgrind.org on GNU/linux and Apple Mac OS X to > find memory corruption errors > [0]PETSC ERROR: likely location of problem given in stack below > [0]PETSC ERROR: --------------------- ?Stack Frames > ------------------------------------ > [0]PETSC ERROR: Note: The EXACT line numbers in the stack are not available, > [0]PETSC ERROR: ? ? ? INSTEAD the line number of the start of the function > [0]PETSC ERROR: ? ? ? is given. > [0]PETSC ERROR: [0] VecView_MPI line 801 src/vec/vec/impls/mpi/pdvec.c > [0]PETSC ERROR: [0] VecView line 690 src/vec/vec/interface/vector.c > [0]PETSC ERROR: [0] DAView_VTK line 129 src/dm/da/src/daview.c > [0]PETSC ERROR: [0] DAView line 227 src/dm/da/src/daview.c > [0]PETSC ERROR: --------------------- Error Message > ------------------------------------ > [0]PETSC ERROR: Signal received! > [0]PETSC ERROR: > ------------------------------------------------------------------------ > [0]PETSC ERROR: Petsc Development HG revision: unknown HG Date: unknown > [0]PETSC ERROR: See docs/changes/index.html for recent updates. > [0]PETSC ERROR: See docs/faq.html for hints about trouble shooting. > [0]PETSC ERROR: See docs/index.html for manual pages. 
> [0]PETSC ERROR: > ------------------------------------------------------------------------ > [0]PETSC ERROR: ./ex5.exe on a arch-bgp- named ionode37 by Unknown Thu > Jan ?1 03:03:12 1970 > [0]PETSC ERROR: Libraries linked from > /scratch/rebeccaxyf/petsc-dev/arch-bgp-ibm-dbg/lib > [0]PETSC ERROR: Configure run at Mon Apr 26 15:37:29 2010 > [0]PETSC ERROR: Configure options --with-cc=mpixlc_r > --with-cxx=mpixlcxx_r --with-fc=mpixlf90_r --with-clanguage=cxx > --with-blas-lapack-lib="-L/opt/share/math_libraries/lapack/ppc64/IBM/ > -llapack -lblas" --with-x=0 --with-is-color-value-type=short > --with-shared=0 -CFLAGS="-qmaxmem=-1 -g" -CXXFLAGS="-qmaxmem=-1 -g" > -FFLAGS="-qmaxmem=-1 -g" --with-debugging=1 --with-fortran-kernels=1 > --with-batch=1 --known-mpi-shared=0 --known-memcmp-ok > --known-sizeof-char=1 --known-sizeof-void-p=4 --known-sizeof-short=2 > --known-sizeof-int=4 --known-sizeof-long=4 --known-sizeof-size_t=4 > --known-sizeof-long-long=8 --known-sizeof-float=4 > --known-sizeof-double=8 --known-bits-per-byte=8 > --known-sizeof-MPI_Comm=4 --known-sizeof-MPI_Fint=4 > --known-mpi-long-double=1 --known-level1-dcache-assoc=0 > --known-level1-dcache-linesize=32 --known-level1-dcache-size=32768 > --petsc-arch=bgp-dbg PETSC_ARCH=arch-bgp-ibm-dbg > [0]PETSC ERROR: > ------------------------------------------------------------------------ > [0]PETSC ERROR: User provided function() line 0 in unknown directory > unknown file > Abort(59) on node 0 (rank 0 in comm 1140850688): application called > MPI_Abort(MPI_COMM_WORLD, 59) - process 0 > > > Also, my code is not working for calling VecView() with error message: > > [0]PETSC ERROR: > ------------------------------------------------------------------------ > [0]PETSC ERROR: Caught signal number 7 BUS: Bus Error, possibly > illegal memory access > [0]PETSC ERROR: Try option -start_in_debugger or -on_error_attach_debugger > [0]PETSC ERROR: or see > http://www.mcs.anl.gov/petsc/petsc-as/documentation/faq.html#valgrind[0]PETSC > ERROR: or try http://valgrind.org on GNU/linux and Apple Mac OS X to > find memory corruption errors > [0]PETSC ERROR: likely location of problem given in stack below > [0]PETSC ERROR: --------------------- ?Stack Frames > ------------------------------------ > [0]PETSC ERROR: Note: The EXACT line numbers in the stack are not available, > [0]PETSC ERROR: ? ? ? INSTEAD the line number of the start of the function > [0]PETSC ERROR: ? ? ? is given. > [0]PETSC ERROR: [0] VecView_MPI_DA line 508 src/dm/da/src/gr2.c > [0]PETSC ERROR: [0] VecView line 690 src/vec/vec/interface/vector.c > [0]PETSC ERROR: [0] DumpSolutionToMatlab line 1052 twqt2ff.c > [0]PETSC ERROR: [0] Solve line 235 twqt2ff.c > [0]PETSC ERROR: --------------------- Error Message > ------------------------------------ > [0]PETSC ERROR: Signal received! > [0]PETSC ERROR: > ------------------------------------------------------------------------ > [0]PETSC ERROR: Petsc Development HG revision: unknown HG Date: unknown > [0]PETSC ERROR: See docs/changes/index.html for recent updates. > [0]PETSC ERROR: See docs/faq.html for hints about trouble shooting. > [0]PETSC ERROR: See docs/index.html for manual pages. 
> [0]PETSC ERROR: > ------------------------------------------------------------------------ > [0]PETSC ERROR: ./twqt2ff.exe on a arch-bgp- named ionode132 by > Unknown Fri Jan ?2 00:44:26 1970 > [0]PETSC ERROR: Libraries linked from > /scratch/rebeccaxyf/petsc-dev/arch-bgp-ibm-dbg/lib > [0]PETSC ERROR: Configure run at Mon Apr 26 15:37:29 2010 > [0]PETSC ERROR: Configure options --with-cc=mpixlc_r > --with-cxx=mpixlcxx_r --with-fc=mpixlf90_r --with-clanguage=cxx > --with-blas-lapack-lib="-L/opt/share/math_libraries/lapack/ppc64/IBM/ > -llapack -lblas" --with-x=0 --with-is-color-value-type=short > --with-shared=0 -CFLAGS="-qmaxmem=-1 -g" -CXXFLAGS="-qmaxmem=-1 -g" > -FFLAGS="-qmaxmem=-1 -g" --with-debugging=1 --with-fortran-kernels=1 > --with-batch=1 --known-mpi-shared=0 --known-memcmp-ok > --known-sizeof-char=1 --known-sizeof-void-p=4 --known-sizeof-short=2 > --known-sizeof-int=4 --known-sizeof-long=4 --known-sizeof-size_t=4 > --known-sizeof-long-long=8 --known-sizeof-float=4 > --known-sizeof-double=8 --known-bits-per-byte=8 > --known-sizeof-MPI_Comm=4 --known-sizeof-MPI_Fint=4 > --known-mpi-long-double=1 --known-level1-dcache-assoc=0 > --known-level1-dcache-linesize=32 --known-level1-dcache-size=32768 > --petsc-arch=bgp-dbg PETSC_ARCH=arch-bgp-ibm-dbg > [0]PETSC ERROR: > ------------------------------------------------------------------------ > [0]PETSC ERROR: User provided function() line 0 in unknown directory > unknown file > Abort(59) on node 0 (rank 0 in comm 1140850688): application called > MPI_Abort(MPI_COMM_WORLD, 59) - process 0 > > > Am I wrong at configure? > > Thanks a lot! > > Rebecca > > > -- > (Rebecca) Xuefei YUAN > Department of Applied Physics and Applied Mathematics > Columbia University > Tel:917-399-8032 > www.columbia.edu/~xy2102 > > > > From balay at mcs.anl.gov Tue Apr 27 09:24:42 2010 From: balay at mcs.anl.gov (Satish Balay) Date: Tue, 27 Apr 2010 09:24:42 -0500 (CDT) Subject: [petsc-users] [Supercomputing Lab #847] VecView() error in BlueGene In-Reply-To: References: <20100427052051.g07i5m8wsg0gkksw@cubmail.cc.columbia.edu> Message-ID: Is this latest petsc-dev? Can you try using the following env variable and see if the error message goes away? 'BG_MAXALIGNEXP=-1' We still have to find the source of the problem.. Satish On Tue, 27 Apr 2010, Aron Ahmadia wrote: > I'll take a look at this and report back Rebecca. I was seeing > similar bus errors on some PETSc example code calling VecView and we > haven't tracked it down yet. > > A > > On Tue, Apr 27, 2010 at 5:20 AM, Xuefei YUAN via RT > wrote: > > > > Tue Apr 27 12:20:11 2010: Request 847 was acted upon. > > ?Transaction: Ticket created by xy2102 at columbia.edu > > ? ? ? Queue: Shaheen > > ? ? Subject: VecView() error in BlueGene > > ? ? ? Owner: Nobody > > ?Requestors: xy2102 at columbia.edu > > ? ? ?Status: new > > ?Ticket > > > > > > Hi, > > > > I tried the example code /petsc-dev/src/snes/examples/tutorials/ex5.c > > on bluegene, but get the following message about the VecView(). 
> > > > [0]PETSC ERROR: > > ------------------------------------------------------------------------ > > [0]PETSC ERROR: Caught signal number 7 BUS: Bus Error, possibly > > illegal memory access > > [0]PETSC ERROR: Try option -start_in_debugger or -on_error_attach_debugger > > [0]PETSC ERROR: or see > > http://www.mcs.anl.gov/petsc/petsc-as/documentation/faq.html#valgrind[0]PETSC > > ERROR: or try http://valgrind.org on GNU/linux and Apple Mac OS X to > > find memory corruption errors > > [0]PETSC ERROR: likely location of problem given in stack below > > [0]PETSC ERROR: --------------------- ?Stack Frames > > ------------------------------------ > > [0]PETSC ERROR: Note: The EXACT line numbers in the stack are not available, > > [0]PETSC ERROR: ? ? ? INSTEAD the line number of the start of the function > > [0]PETSC ERROR: ? ? ? is given. > > [0]PETSC ERROR: [0] VecView_MPI line 801 src/vec/vec/impls/mpi/pdvec.c > > [0]PETSC ERROR: [0] VecView line 690 src/vec/vec/interface/vector.c > > [0]PETSC ERROR: [0] DAView_VTK line 129 src/dm/da/src/daview.c > > [0]PETSC ERROR: [0] DAView line 227 src/dm/da/src/daview.c > > [0]PETSC ERROR: --------------------- Error Message > > ------------------------------------ > > [0]PETSC ERROR: Signal received! > > [0]PETSC ERROR: > > ------------------------------------------------------------------------ > > [0]PETSC ERROR: Petsc Development HG revision: unknown HG Date: unknown > > [0]PETSC ERROR: See docs/changes/index.html for recent updates. > > [0]PETSC ERROR: See docs/faq.html for hints about trouble shooting. > > [0]PETSC ERROR: See docs/index.html for manual pages. > > [0]PETSC ERROR: > > ------------------------------------------------------------------------ > > [0]PETSC ERROR: ./ex5.exe on a arch-bgp- named ionode37 by Unknown Thu > > Jan ?1 03:03:12 1970 > > [0]PETSC ERROR: Libraries linked from > > /scratch/rebeccaxyf/petsc-dev/arch-bgp-ibm-dbg/lib > > [0]PETSC ERROR: Configure run at Mon Apr 26 15:37:29 2010 > > [0]PETSC ERROR: Configure options --with-cc=mpixlc_r > > --with-cxx=mpixlcxx_r --with-fc=mpixlf90_r --with-clanguage=cxx > > --with-blas-lapack-lib="-L/opt/share/math_libraries/lapack/ppc64/IBM/ > > -llapack -lblas" --with-x=0 --with-is-color-value-type=short > > --with-shared=0 -CFLAGS="-qmaxmem=-1 -g" -CXXFLAGS="-qmaxmem=-1 -g" > > -FFLAGS="-qmaxmem=-1 -g" --with-debugging=1 --with-fortran-kernels=1 > > --with-batch=1 --known-mpi-shared=0 --known-memcmp-ok > > --known-sizeof-char=1 --known-sizeof-void-p=4 --known-sizeof-short=2 > > --known-sizeof-int=4 --known-sizeof-long=4 --known-sizeof-size_t=4 > > --known-sizeof-long-long=8 --known-sizeof-float=4 > > --known-sizeof-double=8 --known-bits-per-byte=8 > > --known-sizeof-MPI_Comm=4 --known-sizeof-MPI_Fint=4 > > --known-mpi-long-double=1 --known-level1-dcache-assoc=0 > > --known-level1-dcache-linesize=32 --known-level1-dcache-size=32768 > > --petsc-arch=bgp-dbg PETSC_ARCH=arch-bgp-ibm-dbg > > [0]PETSC ERROR: > > ------------------------------------------------------------------------ > > [0]PETSC ERROR: User provided function() line 0 in unknown directory > > unknown file > > Abort(59) on node 0 (rank 0 in comm 1140850688): application called > > MPI_Abort(MPI_COMM_WORLD, 59) - process 0 > > > > > > Also, my code is not working for calling VecView() with error message: > > > > [0]PETSC ERROR: > > ------------------------------------------------------------------------ > > [0]PETSC ERROR: Caught signal number 7 BUS: Bus Error, possibly > > illegal memory access > > [0]PETSC ERROR: Try 
option -start_in_debugger or -on_error_attach_debugger > > [0]PETSC ERROR: or see > > http://www.mcs.anl.gov/petsc/petsc-as/documentation/faq.html#valgrind[0]PETSC > > ERROR: or try http://valgrind.org on GNU/linux and Apple Mac OS X to > > find memory corruption errors > > [0]PETSC ERROR: likely location of problem given in stack below > > [0]PETSC ERROR: --------------------- ?Stack Frames > > ------------------------------------ > > [0]PETSC ERROR: Note: The EXACT line numbers in the stack are not available, > > [0]PETSC ERROR: ? ? ? INSTEAD the line number of the start of the function > > [0]PETSC ERROR: ? ? ? is given. > > [0]PETSC ERROR: [0] VecView_MPI_DA line 508 src/dm/da/src/gr2.c > > [0]PETSC ERROR: [0] VecView line 690 src/vec/vec/interface/vector.c > > [0]PETSC ERROR: [0] DumpSolutionToMatlab line 1052 twqt2ff.c > > [0]PETSC ERROR: [0] Solve line 235 twqt2ff.c > > [0]PETSC ERROR: --------------------- Error Message > > ------------------------------------ > > [0]PETSC ERROR: Signal received! > > [0]PETSC ERROR: > > ------------------------------------------------------------------------ > > [0]PETSC ERROR: Petsc Development HG revision: unknown HG Date: unknown > > [0]PETSC ERROR: See docs/changes/index.html for recent updates. > > [0]PETSC ERROR: See docs/faq.html for hints about trouble shooting. > > [0]PETSC ERROR: See docs/index.html for manual pages. > > [0]PETSC ERROR: > > ------------------------------------------------------------------------ > > [0]PETSC ERROR: ./twqt2ff.exe on a arch-bgp- named ionode132 by > > Unknown Fri Jan ?2 00:44:26 1970 > > [0]PETSC ERROR: Libraries linked from > > /scratch/rebeccaxyf/petsc-dev/arch-bgp-ibm-dbg/lib > > [0]PETSC ERROR: Configure run at Mon Apr 26 15:37:29 2010 > > [0]PETSC ERROR: Configure options --with-cc=mpixlc_r > > --with-cxx=mpixlcxx_r --with-fc=mpixlf90_r --with-clanguage=cxx > > --with-blas-lapack-lib="-L/opt/share/math_libraries/lapack/ppc64/IBM/ > > -llapack -lblas" --with-x=0 --with-is-color-value-type=short > > --with-shared=0 -CFLAGS="-qmaxmem=-1 -g" -CXXFLAGS="-qmaxmem=-1 -g" > > -FFLAGS="-qmaxmem=-1 -g" --with-debugging=1 --with-fortran-kernels=1 > > --with-batch=1 --known-mpi-shared=0 --known-memcmp-ok > > --known-sizeof-char=1 --known-sizeof-void-p=4 --known-sizeof-short=2 > > --known-sizeof-int=4 --known-sizeof-long=4 --known-sizeof-size_t=4 > > --known-sizeof-long-long=8 --known-sizeof-float=4 > > --known-sizeof-double=8 --known-bits-per-byte=8 > > --known-sizeof-MPI_Comm=4 --known-sizeof-MPI_Fint=4 > > --known-mpi-long-double=1 --known-level1-dcache-assoc=0 > > --known-level1-dcache-linesize=32 --known-level1-dcache-size=32768 > > --petsc-arch=bgp-dbg PETSC_ARCH=arch-bgp-ibm-dbg > > [0]PETSC ERROR: > > ------------------------------------------------------------------------ > > [0]PETSC ERROR: User provided function() line 0 in unknown directory > > unknown file > > Abort(59) on node 0 (rank 0 in comm 1140850688): application called > > MPI_Abort(MPI_COMM_WORLD, 59) - process 0 > > > > > > Am I wrong at configure? > > > > Thanks a lot! 
> > > > Rebecca > > > > > > -- > > (Rebecca) Xuefei YUAN > > Department of Applied Physics and Applied Mathematics > > Columbia University > > Tel:917-399-8032 > > www.columbia.edu/~xy2102 > > > > > > > > > From bsmith at mcs.anl.gov Tue Apr 27 15:01:21 2010 From: bsmith at mcs.anl.gov (Barry Smith) Date: Tue, 27 Apr 2010 15:01:21 -0500 Subject: [petsc-users] A good way of dealing data In-Reply-To: <20100427054440.kenru3syu8kcoggo@cubmail.cc.columbia.edu> References: <20100427054440.kenru3syu8kcoggo@cubmail.cc.columbia.edu> Message-ID: On Apr 27, 2010, at 4:44 AM, (Rebecca) Xuefei YUAN wrote: > Dear all, > > If I save my data file as a binary file from Petsc,i.e., > > ierr = > PetscViewerBinaryOpen > (PETSC_COMM_WORLD,fileName,FILE_MODE_WRITE,&viewer);CHKERRQ(ierr); > > How could I deal(get access to) with my data with a c/c++ code? It you want to access it directly from C/C++ without a PETSc code then you can read the manual page for VecLoad() and MatLoad() for exact details of the file format. Barry > > Any examples on it? > > Thanks very much! > > Rebecca > > > > > -- > (Rebecca) Xuefei YUAN > Department of Applied Physics and Applied Mathematics > Columbia University > Tel:917-399-8032 > www.columbia.edu/~xy2102 > From xy2102 at columbia.edu Tue Apr 27 15:15:01 2010 From: xy2102 at columbia.edu ((Rebecca) Xuefei YUAN) Date: Tue, 27 Apr 2010 16:15:01 -0400 Subject: [petsc-users] A good way of dealing data In-Reply-To: References: <20100427054440.kenru3syu8kcoggo@cubmail.cc.columbia.edu> Message-ID: <20100427161501.ak1s8k2pxwo8484k@cubmail.cc.columbia.edu> Dear Barry, Thanks very much for your reply. I solved the problem this afternoon, but forgot to update in the email. Have a great day! Rebecca Quoting Barry Smith : > > On Apr 27, 2010, at 4:44 AM, (Rebecca) Xuefei YUAN wrote: > >> Dear all, >> >> If I save my data file as a binary file from Petsc,i.e., >> >> ierr = >> PetscViewerBinaryOpen(PETSC_COMM_WORLD,fileName,FILE_MODE_WRITE,&viewer);CHKERRQ(ierr); >> >> How could I deal(get access to) with my data with a c/c++ code? > > It you want to access it directly from C/C++ without a PETSc code > then you can read the manual page for VecLoad() and MatLoad() for exact > details of the file format. > > Barry > >> >> Any examples on it? >> >> Thanks very much! >> >> Rebecca >> >> >> >> >> -- >> (Rebecca) Xuefei YUAN >> Department of Applied Physics and Applied Mathematics >> Columbia University >> Tel:917-399-8032 >> www.columbia.edu/~xy2102 >> -- (Rebecca) Xuefei YUAN Department of Applied Physics and Applied Mathematics Columbia University Tel:917-399-8032 www.columbia.edu/~xy2102 From xy2102 at columbia.edu Wed Apr 28 01:37:33 2010 From: xy2102 at columbia.edu ((Rebecca) Xuefei YUAN) Date: Wed, 28 Apr 2010 02:37:33 -0400 Subject: [petsc-users] [Supercomputing Lab #847] VecView() error in BlueGene In-Reply-To: References: <20100427052051.g07i5m8wsg0gkksw@cubmail.cc.columbia.edu> <20100427102822.fe8e3n63w4gc004w@cubmail.cc.columbia.edu> Message-ID: <20100428023733.86pipx52joks080s@cubmail.cc.columbia.edu> Dear Satish, After I use this env set, it is right and no error comes out. Thanks very much! Rebecca Quoting Satish Balay : > It should be used at runtime. For ex on our bgl - we use qsub as: > > qsub --env BG_MAXALIGNEXP=-1 -t 00:15:00 -n 4 ./ex2 > > Satish > > > On Tue, 27 Apr 2010, (Rebecca) Xuefei YUAN wrote: > >> Dear Satish, >> >> I think it is the latest petsc-dev, but I can check. >> >> How could I use the env variable? 
>> >> Do I need to put this >> >> 'BG_MAXALIGNEXP=-1' >> >> in the configure.py? >> >> Thanks a lot! >> >> Rebecca >> >> >> Quoting Satish Balay : >> >> > Is this latest petsc-dev? >> > >> > Can you try using the following env variable and see if the error message >> > goes away? >> > >> > 'BG_MAXALIGNEXP=-1' >> > >> > >> > We still have to find the source of the problem.. >> > >> > Satish >> > >> > On Tue, 27 Apr 2010, Aron Ahmadia wrote: >> > >> > > I'll take a look at this and report back Rebecca. I was seeing >> > > similar bus errors on some PETSc example code calling VecView and we >> > > haven't tracked it down yet. >> > > >> > > A >> > > >> > > On Tue, Apr 27, 2010 at 5:20 AM, Xuefei YUAN via RT >> > > wrote: >> > > > >> > > > Tue Apr 27 12:20:11 2010: Request 847 was acted upon. >> > > > ?Transaction: Ticket created by xy2102 at columbia.edu >> > > > ? ? ? Queue: Shaheen >> > > > ? ? Subject: VecView() error in BlueGene >> > > > ? ? ? Owner: Nobody >> > > > ?Requestors: xy2102 at columbia.edu >> > > > ? ? ?Status: new >> > > > ?Ticket > http://www.hpc.kaust.edu.sa/rt/Ticket/Display.html?id=847 >> > > > > >> > > > >> > > > >> > > > Hi, >> > > > >> > > > I tried the example code /petsc-dev/src/snes/examples/tutorials/ex5.c >> > > > on bluegene, but get the following message about the VecView(). >> > > > >> > > > [0]PETSC ERROR: >> > > > >> ------------------------------------------------------------------------ >> > > > [0]PETSC ERROR: Caught signal number 7 BUS: Bus Error, possibly >> > > > illegal memory access >> > > > [0]PETSC ERROR: Try option -start_in_debugger or >> > > > -on_error_attach_debugger >> > > > [0]PETSC ERROR: or see >> > > > >> http://www.mcs.anl.gov/petsc/petsc-as/documentation/faq.html#valgrind[0]PETSC >> > > > ERROR: or try http://valgrind.org on GNU/linux and Apple Mac OS X to >> > > > find memory corruption errors >> > > > [0]PETSC ERROR: likely location of problem given in stack below >> > > > [0]PETSC ERROR: --------------------- ?Stack Frames >> > > > ------------------------------------ >> > > > [0]PETSC ERROR: Note: The EXACT line numbers in the stack are not >> > > > available, >> > > > [0]PETSC ERROR: ? ? ? INSTEAD the line number of the start of the >> > > > function >> > > > [0]PETSC ERROR: ? ? ? is given. >> > > > [0]PETSC ERROR: [0] VecView_MPI line 801 src/vec/vec/impls/mpi/pdvec.c >> > > > [0]PETSC ERROR: [0] VecView line 690 src/vec/vec/interface/vector.c >> > > > [0]PETSC ERROR: [0] DAView_VTK line 129 src/dm/da/src/daview.c >> > > > [0]PETSC ERROR: [0] DAView line 227 src/dm/da/src/daview.c >> > > > [0]PETSC ERROR: --------------------- Error Message >> > > > ------------------------------------ >> > > > [0]PETSC ERROR: Signal received! >> > > > [0]PETSC ERROR: >> > > > >> ------------------------------------------------------------------------ >> > > > [0]PETSC ERROR: Petsc Development HG revision: unknown HG >> Date: unknown >> > > > [0]PETSC ERROR: See docs/changes/index.html for recent updates. >> > > > [0]PETSC ERROR: See docs/faq.html for hints about trouble shooting. >> > > > [0]PETSC ERROR: See docs/index.html for manual pages. 
>> > > > [0]PETSC ERROR: >> > > > >> ------------------------------------------------------------------------ >> > > > [0]PETSC ERROR: ./ex5.exe on a arch-bgp- named ionode37 by Unknown Thu >> > > > Jan ?1 03:03:12 1970 >> > > > [0]PETSC ERROR: Libraries linked from >> > > > /scratch/rebeccaxyf/petsc-dev/arch-bgp-ibm-dbg/lib >> > > > [0]PETSC ERROR: Configure run at Mon Apr 26 15:37:29 2010 >> > > > [0]PETSC ERROR: Configure options --with-cc=mpixlc_r >> > > > --with-cxx=mpixlcxx_r --with-fc=mpixlf90_r --with-clanguage=cxx >> > > > --with-blas-lapack-lib="-L/opt/share/math_libraries/lapack/ppc64/IBM/ >> > > > -llapack -lblas" --with-x=0 --with-is-color-value-type=short >> > > > --with-shared=0 -CFLAGS="-qmaxmem=-1 -g" -CXXFLAGS="-qmaxmem=-1 -g" >> > > > -FFLAGS="-qmaxmem=-1 -g" --with-debugging=1 --with-fortran-kernels=1 >> > > > --with-batch=1 --known-mpi-shared=0 --known-memcmp-ok >> > > > --known-sizeof-char=1 --known-sizeof-void-p=4 --known-sizeof-short=2 >> > > > --known-sizeof-int=4 --known-sizeof-long=4 --known-sizeof-size_t=4 >> > > > --known-sizeof-long-long=8 --known-sizeof-float=4 >> > > > --known-sizeof-double=8 --known-bits-per-byte=8 >> > > > --known-sizeof-MPI_Comm=4 --known-sizeof-MPI_Fint=4 >> > > > --known-mpi-long-double=1 --known-level1-dcache-assoc=0 >> > > > --known-level1-dcache-linesize=32 --known-level1-dcache-size=32768 >> > > > --petsc-arch=bgp-dbg PETSC_ARCH=arch-bgp-ibm-dbg >> > > > [0]PETSC ERROR: >> > > > >> ------------------------------------------------------------------------ >> > > > [0]PETSC ERROR: User provided function() line 0 in unknown directory >> > > > unknown file >> > > > Abort(59) on node 0 (rank 0 in comm 1140850688): application called >> > > > MPI_Abort(MPI_COMM_WORLD, 59) - process 0 >> > > > >> > > > >> > > > Also, my code is not working for calling VecView() with error message: >> > > > >> > > > [0]PETSC ERROR: >> > > > >> ------------------------------------------------------------------------ >> > > > [0]PETSC ERROR: Caught signal number 7 BUS: Bus Error, possibly >> > > > illegal memory access >> > > > [0]PETSC ERROR: Try option -start_in_debugger or >> > > > -on_error_attach_debugger >> > > > [0]PETSC ERROR: or see >> > > > >> http://www.mcs.anl.gov/petsc/petsc-as/documentation/faq.html#valgrind[0]PETSC >> > > > ERROR: or try http://valgrind.org on GNU/linux and Apple Mac OS X to >> > > > find memory corruption errors >> > > > [0]PETSC ERROR: likely location of problem given in stack below >> > > > [0]PETSC ERROR: --------------------- ?Stack Frames >> > > > ------------------------------------ >> > > > [0]PETSC ERROR: Note: The EXACT line numbers in the stack are not >> > > > available, >> > > > [0]PETSC ERROR: ? ? ? INSTEAD the line number of the start of the >> > > > function >> > > > [0]PETSC ERROR: ? ? ? is given. >> > > > [0]PETSC ERROR: [0] VecView_MPI_DA line 508 src/dm/da/src/gr2.c >> > > > [0]PETSC ERROR: [0] VecView line 690 src/vec/vec/interface/vector.c >> > > > [0]PETSC ERROR: [0] DumpSolutionToMatlab line 1052 twqt2ff.c >> > > > [0]PETSC ERROR: [0] Solve line 235 twqt2ff.c >> > > > [0]PETSC ERROR: --------------------- Error Message >> > > > ------------------------------------ >> > > > [0]PETSC ERROR: Signal received! >> > > > [0]PETSC ERROR: >> > > > >> ------------------------------------------------------------------------ >> > > > [0]PETSC ERROR: Petsc Development HG revision: unknown HG >> Date: unknown >> > > > [0]PETSC ERROR: See docs/changes/index.html for recent updates. 
>> > > > [0]PETSC ERROR: See docs/faq.html for hints about trouble shooting. >> > > > [0]PETSC ERROR: See docs/index.html for manual pages. >> > > > [0]PETSC ERROR: >> > > > >> ------------------------------------------------------------------------ >> > > > [0]PETSC ERROR: ./twqt2ff.exe on a arch-bgp- named ionode132 by >> > > > Unknown Fri Jan ?2 00:44:26 1970 >> > > > [0]PETSC ERROR: Libraries linked from >> > > > /scratch/rebeccaxyf/petsc-dev/arch-bgp-ibm-dbg/lib >> > > > [0]PETSC ERROR: Configure run at Mon Apr 26 15:37:29 2010 >> > > > [0]PETSC ERROR: Configure options --with-cc=mpixlc_r >> > > > --with-cxx=mpixlcxx_r --with-fc=mpixlf90_r --with-clanguage=cxx >> > > > --with-blas-lapack-lib="-L/opt/share/math_libraries/lapack/ppc64/IBM/ >> > > > -llapack -lblas" --with-x=0 --with-is-color-value-type=short >> > > > --with-shared=0 -CFLAGS="-qmaxmem=-1 -g" -CXXFLAGS="-qmaxmem=-1 -g" >> > > > -FFLAGS="-qmaxmem=-1 -g" --with-debugging=1 --with-fortran-kernels=1 >> > > > --with-batch=1 --known-mpi-shared=0 --known-memcmp-ok >> > > > --known-sizeof-char=1 --known-sizeof-void-p=4 --known-sizeof-short=2 >> > > > --known-sizeof-int=4 --known-sizeof-long=4 --known-sizeof-size_t=4 >> > > > --known-sizeof-long-long=8 --known-sizeof-float=4 >> > > > --known-sizeof-double=8 --known-bits-per-byte=8 >> > > > --known-sizeof-MPI_Comm=4 --known-sizeof-MPI_Fint=4 >> > > > --known-mpi-long-double=1 --known-level1-dcache-assoc=0 >> > > > --known-level1-dcache-linesize=32 --known-level1-dcache-size=32768 >> > > > --petsc-arch=bgp-dbg PETSC_ARCH=arch-bgp-ibm-dbg >> > > > [0]PETSC ERROR: >> > > > >> ------------------------------------------------------------------------ >> > > > [0]PETSC ERROR: User provided function() line 0 in unknown directory >> > > > unknown file >> > > > Abort(59) on node 0 (rank 0 in comm 1140850688): application called >> > > > MPI_Abort(MPI_COMM_WORLD, 59) - process 0 >> > > > >> > > > >> > > > Am I wrong at configure? >> > > > >> > > > Thanks a lot! >> > > > >> > > > Rebecca >> > > > >> > > > >> > > > -- >> > > > (Rebecca) Xuefei YUAN >> > > > Department of Applied Physics and Applied Mathematics >> > > > Columbia University >> > > > Tel:917-399-8032 >> > > > www.columbia.edu/~xy2102 >> > > > >> > > > >> > > > >> > > > >> > > >> > >> >> >> >> > -- (Rebecca) Xuefei YUAN Department of Applied Physics and Applied Mathematics Columbia University Tel:917-399-8032 www.columbia.edu/~xy2102 From balay at mcs.anl.gov Wed Apr 28 07:39:20 2010 From: balay at mcs.anl.gov (Satish Balay) Date: Wed, 28 Apr 2010 07:39:20 -0500 (CDT) Subject: [petsc-users] [Supercomputing Lab #847] VecView() error in BlueGene In-Reply-To: <20100428023733.86pipx52joks080s@cubmail.cc.columbia.edu> References: <20100427052051.g07i5m8wsg0gkksw@cubmail.cc.columbia.edu> <20100427102822.fe8e3n63w4gc004w@cubmail.cc.columbia.edu> <20100428023733.86pipx52joks080s@cubmail.cc.columbia.edu> Message-ID: This indicates the errros you got earlier were alignment errors. BG_MAXALIGNEXP=-1 get the code running - but a bit inefficiently. We have to track them down in a debugger to determine the location. If BG_MAXALIGNEXP=0 is used - the code will give BUS error at the first mis-aligned access. Then we have to use the debugger with the core file and determine the location of the crash - and try to figure out which memory is misaligned. Satish On Wed, 28 Apr 2010, (Rebecca) Xuefei YUAN wrote: > Dear Satish, > > After I use this env set, it is right and no error comes out. > > Thanks very much! 
> > Rebecca > > > Quoting Satish Balay : > > > It should be used at runtime. For ex on our bgl - we use qsub as: > > > > qsub --env BG_MAXALIGNEXP=-1 -t 00:15:00 -n 4 ./ex2 > > > > Satish > > > > > > On Tue, 27 Apr 2010, (Rebecca) Xuefei YUAN wrote: > > > > > Dear Satish, > > > > > > I think it is the latest petsc-dev, but I can check. > > > > > > How could I use the env variable? > > > > > > Do I need to put this > > > > > > 'BG_MAXALIGNEXP=-1' > > > > > > in the configure.py? > > > > > > Thanks a lot! > > > > > > Rebecca > > > > > > > > > Quoting Satish Balay : > > > > > > > Is this latest petsc-dev? > > > > > > > > Can you try using the following env variable and see if the error > > > > message > > > > goes away? > > > > > > > > 'BG_MAXALIGNEXP=-1' > > > > > > > > > > > > We still have to find the source of the problem.. > > > > > > > > Satish > > > > > > > > On Tue, 27 Apr 2010, Aron Ahmadia wrote: > > > > > > > > > I'll take a look at this and report back Rebecca. I was seeing > > > > > similar bus errors on some PETSc example code calling VecView and we > > > > > haven't tracked it down yet. > > > > > > > > > > A > > > > > > > > > > On Tue, Apr 27, 2010 at 5:20 AM, Xuefei YUAN via RT > > > > > wrote: > > > > > > > > > > > > Tue Apr 27 12:20:11 2010: Request 847 was acted upon. > > > > > > ?Transaction: Ticket created by xy2102 at columbia.edu > > > > > > ? ? ? Queue: Shaheen > > > > > > ? ? Subject: VecView() error in BlueGene > > > > > > ? ? ? Owner: Nobody > > > > > > ?Requestors: xy2102 at columbia.edu > > > > > > ? ? ?Status: new > > > > > > ?Ticket > > > http://www.hpc.kaust.edu.sa/rt/Ticket/Display.html?id=847 > > > > > > > > > > > > > > > > > > > > > > > > > Hi, > > > > > > > > > > > > I tried the example code > > > > /petsc-dev/src/snes/examples/tutorials/ex5.c > > > > > > on bluegene, but get the following message about the VecView(). > > > > > > > > > > > > [0]PETSC ERROR: > > > > > > > > > > ------------------------------------------------------------------------ > > > > > > [0]PETSC ERROR: Caught signal number 7 BUS: Bus Error, possibly > > > > > > illegal memory access > > > > > > [0]PETSC ERROR: Try option -start_in_debugger or > > > > > > -on_error_attach_debugger > > > > > > [0]PETSC ERROR: or see > > > > > > > > > > http://www.mcs.anl.gov/petsc/petsc-as/documentation/faq.html#valgrind[0]PETSC > > > > > > ERROR: or try http://valgrind.org on GNU/linux and Apple Mac OS X to > > > > > > find memory corruption errors > > > > > > [0]PETSC ERROR: likely location of problem given in stack below > > > > > > [0]PETSC ERROR: --------------------- ?Stack Frames > > > > > > ------------------------------------ > > > > > > [0]PETSC ERROR: Note: The EXACT line numbers in the stack are not > > > > > > available, > > > > > > [0]PETSC ERROR: ? ? ? INSTEAD the line number of the start of the > > > > > > function > > > > > > [0]PETSC ERROR: ? ? ? is given. > > > > > > [0]PETSC ERROR: [0] VecView_MPI line 801 > > > > src/vec/vec/impls/mpi/pdvec.c > > > > > > [0]PETSC ERROR: [0] VecView line 690 src/vec/vec/interface/vector.c > > > > > > [0]PETSC ERROR: [0] DAView_VTK line 129 src/dm/da/src/daview.c > > > > > > [0]PETSC ERROR: [0] DAView line 227 src/dm/da/src/daview.c > > > > > > [0]PETSC ERROR: --------------------- Error Message > > > > > > ------------------------------------ > > > > > > [0]PETSC ERROR: Signal received! 
> > > > > > [0]PETSC ERROR: > > > > > > > > > > ------------------------------------------------------------------------ > > > > > > [0]PETSC ERROR: Petsc Development HG revision: unknown HG Date: > > > > unknown > > > > > > [0]PETSC ERROR: See docs/changes/index.html for recent updates. > > > > > > [0]PETSC ERROR: See docs/faq.html for hints about trouble shooting. > > > > > > [0]PETSC ERROR: See docs/index.html for manual pages. > > > > > > [0]PETSC ERROR: > > > > > > > > > > ------------------------------------------------------------------------ > > > > > > [0]PETSC ERROR: ./ex5.exe on a arch-bgp- named ionode37 by Unknown > > > > Thu > > > > > > Jan ?1 03:03:12 1970 > > > > > > [0]PETSC ERROR: Libraries linked from > > > > > > /scratch/rebeccaxyf/petsc-dev/arch-bgp-ibm-dbg/lib > > > > > > [0]PETSC ERROR: Configure run at Mon Apr 26 15:37:29 2010 > > > > > > [0]PETSC ERROR: Configure options --with-cc=mpixlc_r > > > > > > --with-cxx=mpixlcxx_r --with-fc=mpixlf90_r --with-clanguage=cxx > > > > > > > > > > --with-blas-lapack-lib="-L/opt/share/math_libraries/lapack/ppc64/IBM/ > > > > > > -llapack -lblas" --with-x=0 --with-is-color-value-type=short > > > > > > --with-shared=0 -CFLAGS="-qmaxmem=-1 -g" -CXXFLAGS="-qmaxmem=-1 -g" > > > > > > -FFLAGS="-qmaxmem=-1 -g" --with-debugging=1 --with-fortran-kernels=1 > > > > > > --with-batch=1 --known-mpi-shared=0 --known-memcmp-ok > > > > > > --known-sizeof-char=1 --known-sizeof-void-p=4 --known-sizeof-short=2 > > > > > > --known-sizeof-int=4 --known-sizeof-long=4 --known-sizeof-size_t=4 > > > > > > --known-sizeof-long-long=8 --known-sizeof-float=4 > > > > > > --known-sizeof-double=8 --known-bits-per-byte=8 > > > > > > --known-sizeof-MPI_Comm=4 --known-sizeof-MPI_Fint=4 > > > > > > --known-mpi-long-double=1 --known-level1-dcache-assoc=0 > > > > > > --known-level1-dcache-linesize=32 --known-level1-dcache-size=32768 > > > > > > --petsc-arch=bgp-dbg PETSC_ARCH=arch-bgp-ibm-dbg > > > > > > [0]PETSC ERROR: > > > > > > > > > > ------------------------------------------------------------------------ > > > > > > [0]PETSC ERROR: User provided function() line 0 in unknown directory > > > > > > unknown file > > > > > > Abort(59) on node 0 (rank 0 in comm 1140850688): application called > > > > > > MPI_Abort(MPI_COMM_WORLD, 59) - process 0 > > > > > > > > > > > > > > > > > > Also, my code is not working for calling VecView() with error > > > > message: > > > > > > > > > > > > [0]PETSC ERROR: > > > > > > > > > > ------------------------------------------------------------------------ > > > > > > [0]PETSC ERROR: Caught signal number 7 BUS: Bus Error, possibly > > > > > > illegal memory access > > > > > > [0]PETSC ERROR: Try option -start_in_debugger or > > > > > > -on_error_attach_debugger > > > > > > [0]PETSC ERROR: or see > > > > > > > > > > http://www.mcs.anl.gov/petsc/petsc-as/documentation/faq.html#valgrind[0]PETSC > > > > > > ERROR: or try http://valgrind.org on GNU/linux and Apple Mac OS X to > > > > > > find memory corruption errors > > > > > > [0]PETSC ERROR: likely location of problem given in stack below > > > > > > [0]PETSC ERROR: --------------------- ?Stack Frames > > > > > > ------------------------------------ > > > > > > [0]PETSC ERROR: Note: The EXACT line numbers in the stack are not > > > > > > available, > > > > > > [0]PETSC ERROR: ? ? ? INSTEAD the line number of the start of the > > > > > > function > > > > > > [0]PETSC ERROR: ? ? ? is given. 
> > > > > > [0]PETSC ERROR: [0] VecView_MPI_DA line 508 src/dm/da/src/gr2.c > > > > > > [0]PETSC ERROR: [0] VecView line 690 src/vec/vec/interface/vector.c > > > > > > [0]PETSC ERROR: [0] DumpSolutionToMatlab line 1052 twqt2ff.c > > > > > > [0]PETSC ERROR: [0] Solve line 235 twqt2ff.c > > > > > > [0]PETSC ERROR: --------------------- Error Message > > > > > > ------------------------------------ > > > > > > [0]PETSC ERROR: Signal received! > > > > > > [0]PETSC ERROR: > > > > > > > > > > ------------------------------------------------------------------------ > > > > > > [0]PETSC ERROR: Petsc Development HG revision: unknown HG Date: > > > > unknown > > > > > > [0]PETSC ERROR: See docs/changes/index.html for recent updates. > > > > > > [0]PETSC ERROR: See docs/faq.html for hints about trouble shooting. > > > > > > [0]PETSC ERROR: See docs/index.html for manual pages. > > > > > > [0]PETSC ERROR: > > > > > > > > > > ------------------------------------------------------------------------ > > > > > > [0]PETSC ERROR: ./twqt2ff.exe on a arch-bgp- named ionode132 by > > > > > > Unknown Fri Jan ?2 00:44:26 1970 > > > > > > [0]PETSC ERROR: Libraries linked from > > > > > > /scratch/rebeccaxyf/petsc-dev/arch-bgp-ibm-dbg/lib > > > > > > [0]PETSC ERROR: Configure run at Mon Apr 26 15:37:29 2010 > > > > > > [0]PETSC ERROR: Configure options --with-cc=mpixlc_r > > > > > > --with-cxx=mpixlcxx_r --with-fc=mpixlf90_r --with-clanguage=cxx > > > > > > > > > > --with-blas-lapack-lib="-L/opt/share/math_libraries/lapack/ppc64/IBM/ > > > > > > -llapack -lblas" --with-x=0 --with-is-color-value-type=short > > > > > > --with-shared=0 -CFLAGS="-qmaxmem=-1 -g" -CXXFLAGS="-qmaxmem=-1 -g" > > > > > > -FFLAGS="-qmaxmem=-1 -g" --with-debugging=1 --with-fortran-kernels=1 > > > > > > --with-batch=1 --known-mpi-shared=0 --known-memcmp-ok > > > > > > --known-sizeof-char=1 --known-sizeof-void-p=4 --known-sizeof-short=2 > > > > > > --known-sizeof-int=4 --known-sizeof-long=4 --known-sizeof-size_t=4 > > > > > > --known-sizeof-long-long=8 --known-sizeof-float=4 > > > > > > --known-sizeof-double=8 --known-bits-per-byte=8 > > > > > > --known-sizeof-MPI_Comm=4 --known-sizeof-MPI_Fint=4 > > > > > > --known-mpi-long-double=1 --known-level1-dcache-assoc=0 > > > > > > --known-level1-dcache-linesize=32 --known-level1-dcache-size=32768 > > > > > > --petsc-arch=bgp-dbg PETSC_ARCH=arch-bgp-ibm-dbg > > > > > > [0]PETSC ERROR: > > > > > > > > > > ------------------------------------------------------------------------ > > > > > > [0]PETSC ERROR: User provided function() line 0 in unknown directory > > > > > > unknown file > > > > > > Abort(59) on node 0 (rank 0 in comm 1140850688): application called > > > > > > MPI_Abort(MPI_COMM_WORLD, 59) - process 0 > > > > > > > > > > > > > > > > > > Am I wrong at configure? > > > > > > > > > > > > Thanks a lot! 
> > > > > > 
> > > > > > Rebecca
> > > > > > 
> > > > > > --
> > > > > > (Rebecca) Xuefei YUAN
> > > > > > Department of Applied Physics and Applied Mathematics
> > > > > > Columbia University
> > > > > > Tel:917-399-8032
> > > > > > www.columbia.edu/~xy2102
> > > > > > 
> > > > > > 
> > > > > > 
> > > > > > 
> > > > > 
> > > > 
> > > 
> > 
> 

From lizs at mail.uc.edu  Thu Apr 29 16:08:38 2010
From: lizs at mail.uc.edu (Li, Zhisong (lizs))
Date: Thu, 29 Apr 2010 21:08:38 +0000
Subject: [petsc-users] about FD Jacobian for the new DAE solver
Message-ID: <88D7E3BB7E1960428303E76010037451A502@BL2PRD0103MB060.prod.exchangelabs.com>

Hi,

Last time Jed suggested that I use the DAE solver to build my nonlinear CFD
code based on TS. For incompressible N-S equations, I think it's difficult
to derive the analytical Jacobian formulation, and we need a finite
difference approximation or a matrix-free method.

What I previously did was integrate the incompressible N-S equations into
the tutorial code ts/ex7.c. It simply uses the argument
"TSDefaultComputeJacobianColor" to handle the Jacobian. When I switch to
TSSetIFunction() and TSSetIJacobian(), this argument doesn't work anymore.
And if I remove all Jacobian-related lines and use the -snes_mf command
option, it gives the error message "Must Set Jacobian!". Do you know how to
resolve this for both methods?

Thank you.

Regards,

Zhisong Li

From jed at 59A2.org  Thu Apr 29 16:37:48 2010
From: jed at 59A2.org (Jed Brown)
Date: Thu, 29 Apr 2010 23:37:48 +0200
Subject: [petsc-users] about FD Jacobian for the new DAE solver
In-Reply-To: <88D7E3BB7E1960428303E76010037451A502@BL2PRD0103MB060.prod.exchangelabs.com>
References: <88D7E3BB7E1960428303E76010037451A502@BL2PRD0103MB060.prod.exchangelabs.com>
Message-ID: <87eihydjo3.fsf@59A2.org>

On Thu, 29 Apr 2010 21:08:38 +0000, "Li, Zhisong (lizs)" wrote:
> Hi,
>
> Last time Jed suggested that I use the DAE solver to build my
> nonlinear CFD code based on TS. For incompressible N-S equations, I
> think it's difficult to derive the analytical Jacobian formulation,
> and we need a finite difference approximation or a matrix-free method.

It's pretty easy to code an analytic Jacobian for incompressible
Navier-Stokes since it's only a quadratic nonlinearity.  But

> What I previously did was integrate the incompressible N-S equations
> into the tutorial code ts/ex7.c. It simply uses the argument
> "TSDefaultComputeJacobianColor" to handle the Jacobian. When I switch
> to TSSetIFunction() and TSSetIJacobian(), this argument doesn't work
> anymore. And if I remove all Jacobian-related lines and use the
> -snes_mf command option, it gives the error message "Must Set
> Jacobian!". Do you know how to resolve this for both methods?

If you use -snes_mf or -snes_fd, then you can call TSSetIJacobian() and
not provide a function.  If you are using petsc-dev, then
ts/examples/tutorials/ex10.c has the following method (which is also
more efficient than TSDefaultComputeJacobianColor).
  SNES snes;
  ISColoring iscoloring;

  ierr = TSGetSNES(ts,&snes);
  ierr = DAGetColoring(rd->da,IS_COLORING_GLOBAL,MATAIJ,&iscoloring);CHKERRQ(ierr);
  ierr = MatFDColoringCreate(B,iscoloring,&matfdcoloring);CHKERRQ(ierr);
  ierr = ISColoringDestroy(iscoloring);CHKERRQ(ierr);
  ierr = MatFDColoringSetFunction(matfdcoloring,(PetscErrorCode(*)(void))SNESTSFormFunction,ts);CHKERRQ(ierr);
  ierr = MatFDColoringSetFromOptions(matfdcoloring);CHKERRQ(ierr);
  ierr = SNESSetJacobian(snes,A,B,SNESDefaultComputeJacobianColor,matfdcoloring);CHKERRQ(ierr);

But it looks like TSPSEUDO will currently overwrite this (the other
integrators deal with this correctly).  I'll push a fix shortly.

Jed

From lizs at mail.uc.edu  Thu Apr 29 17:28:39 2010
From: lizs at mail.uc.edu (Li, Zhisong (lizs))
Date: Thu, 29 Apr 2010 22:28:39 +0000
Subject: [petsc-users] about FD Jacobian for the new DAE solver
In-Reply-To: <87eihydjo3.fsf@59A2.org>
References: <88D7E3BB7E1960428303E76010037451A502@BL2PRD0103MB060.prod.exchangelabs.com>,
	<87eihydjo3.fsf@59A2.org>
Message-ID: <88D7E3BB7E1960428303E76010037451A522@BL2PRD0103MB060.prod.exchangelabs.com>

Hi, Jed,

Thank you for your quick response. But I don't understand why you said it's
easy here. My professor got stuck on this problem:

>>It's pretty easy to code an analytic Jacobian for incompressible
>>Navier-Stokes since it's only a quadratic nonlinearity.  But

I wonder if you mean the finite element method here. I am only planning FDM
or FVM for my work. Actually we don't have any polynomial in incompressible
N-S equations.

From my understanding from ts/ex8, for example, the continuity equation
with pressure term: F = d(p)/dt + d(u)/dx + d(v)/dy, we need to compute
J[0][0] = d(F)/d(p), J[1][0] = d(F)/d(u) and J[2][0] = d(F)/d(v). I
speculate they are J[0][0] = 1/delta_t, J[1][0] = 1/delta_x and J[2][0] =
1/delta_y. Is this correct? And d(Lap(u))/d(u) might be more difficult.

This is not much about PETSc, but I hope you can still give me some help or
suggest a book/paper on this.

Thank you very much.


Zhisong Li

From knepley at gmail.com  Thu Apr 29 17:58:44 2010
From: knepley at gmail.com (Matthew Knepley)
Date: Thu, 29 Apr 2010 17:58:44 -0500
Subject: [petsc-users] about FD Jacobian for the new DAE solver
In-Reply-To: <88D7E3BB7E1960428303E76010037451A522@BL2PRD0103MB060.prod.exchangelabs.com>
References: <88D7E3BB7E1960428303E76010037451A502@BL2PRD0103MB060.prod.exchangelabs.com>
	<87eihydjo3.fsf@59A2.org>
	<88D7E3BB7E1960428303E76010037451A522@BL2PRD0103MB060.prod.exchangelabs.com>
Message-ID: 

On Thu, Apr 29, 2010 at 5:28 PM, Li, Zhisong (lizs) wrote:

> Hi, Jed,
>
> Thank you for your quick response.
>
> But I don't understand why you said it's easy here. My professor got stuck
> on this problem:
>
> >>It's pretty easy to code an analytic Jacobian for incompressible
> >>Navier-Stokes since it's only a quadratic nonlinearity.  But
>
> I wonder if you mean the finite element method here. I am only planning FDM
> or FVM for my work. Actually we don't have any polynomial in incompressible
> N-S equations.
>

The nonlinearity is u \cdot \nabla u, which is quadratic in u.


> From my understanding from ts/ex8, for example, the continuity equation
> with pressure term: F = d(p)/dt + d(u)/dx + d(v)/dy, we need to compute
> J[0][0] = d(F)/d(p), J[1][0] = d(F)/d(u) and J[2][0] = d(F)/d(v). I
> speculate they are J[0][0] = 1/delta_t, J[1][0] = 1/delta_x and J[2][0] =
> 1/delta_y. Is this correct? And d(Lap(u))/d(u) might be more difficult.
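As a sketch of the linearization being described above (written for the
standard incompressible momentum residual, which is an assumption about the
exact equations in use), the only state-dependent nonlinearity is the
convection term:

\[
F(u,p) = u_t + (u\cdot\nabla)u + \nabla p - \nu\,\Delta u,
\qquad
DF(u,p)[v,q] = v_t + (v\cdot\nabla)u + (u\cdot\nabla)v + \nabla q - \nu\,\Delta v .
\]

The time-derivative, pressure-gradient, and Laplacian terms simply act on the
perturbation unchanged (that is the sense in which the derivative of Lap is
just Lap), the continuity residual \nabla\cdot u becomes \nabla\cdot v, and
only the quadratic convection term contributes the two extra terms.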
> This is not much about PETSc, but I hope you can still give me some help
> or suggest a book/paper on this.
>

The derivative of Lap is just Lap (this is a Frechet derivative). It is
easiest to think of the residual F as being a function of the coefficients,
and then J_{ij} is just the derivative of F_i with respect to coefficient j.

   Matt


> Thank you very much.
>
>
> Zhisong Li


-- 
What most experimenters take for granted before they begin their
experiments is infinitely more interesting than any results to which their
experiments lead.
-- Norbert Wiener
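A minimal C sketch of the two-pass loop Barry describes below: the helper
name ShiftZeroDiagonals, the newdiag argument, and the assumption that the
diagonal slot can be inserted into the matrix's nonzero pattern are all
illustrative, not anything PETSc itself provides.

    #include "petscmat.h"

    /* Hypothetical helper following the two-pass recipe: pass 1 does all the
       MatGetRow() calls and records rows whose diagonal is missing or zero,
       pass 2 does all the MatSetValues() calls. */
    PetscErrorCode ShiftZeroDiagonals(Mat A, PetscScalar newdiag)
    {
      PetscErrorCode    ierr;
      PetscInt          rstart, rend, i, j, ncols, nzero = 0, *zrows;
      const PetscInt    *cols;
      const PetscScalar *vals;

      PetscFunctionBegin;
      ierr = MatGetOwnershipRange(A, &rstart, &rend);CHKERRQ(ierr);
      ierr = PetscMalloc((rend - rstart) * sizeof(PetscInt), &zrows);CHKERRQ(ierr);

      /* Pass 1: only MatGetRow()/MatRestoreRow(), no setting of values. */
      for (i = rstart; i < rend; i++) {
        PetscTruth zerodiag = PETSC_TRUE;   /* PetscTruth is the petsc-3.0/3.1 name */
        ierr = MatGetRow(A, i, &ncols, &cols, &vals);CHKERRQ(ierr);
        for (j = 0; j < ncols; j++) {
          if (cols[j] == i && vals[j] != 0.0) { zerodiag = PETSC_FALSE; break; }
        }
        ierr = MatRestoreRow(A, i, &ncols, &cols, &vals);CHKERRQ(ierr);
        if (zerodiag) zrows[nzero++] = i;
      }

      /* Pass 2: only MatSetValues(); assumes the diagonal entry exists in the
         nonzero pattern or that inserting a new nonzero is permitted. */
      for (i = 0; i < nzero; i++) {
        ierr = MatSetValues(A, 1, &zrows[i], 1, &zrows[i], &newdiag, INSERT_VALUES);CHKERRQ(ierr);
      }
      ierr = MatAssemblyBegin(A, MAT_FINAL_ASSEMBLY);CHKERRQ(ierr);
      ierr = MatAssemblyEnd(A, MAT_FINAL_ASSEMBLY);CHKERRQ(ierr);
      ierr = PetscFree(zrows);CHKERRQ(ierr);
      PetscFunctionReturn(0);
    }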
>>
>> Bests,
>> Hui
>>
>>
>
>
>

From bsmith at mcs.anl.gov  Fri Apr 30 13:03:48 2010
From: bsmith at mcs.anl.gov (Barry Smith)
Date: Fri, 30 Apr 2010 13:03:48 -0500
Subject: Re: [petsc-users] zero diagonals
In-Reply-To: 
References: 
Message-ID: <29EB28A4-9A42-471F-8197-7CFE67C11A54@mcs.anl.gov>

   No, nor is there likely a reason you would want to do that. It is
practically mathematically meaningless.

   You can write a routine that loops over each local row, checks if the
row has a zero diagonal, keeps a list of these rows, and then calls
MatSetValues() for the diagonals of those rows to put whatever value you
want in there. Note: you cannot mix calls to MatGetRow() and MatSetValues(),
so call ALL the MatGetRows() and then call all the MatSetValues().

   Barry

On Apr 30, 2010, at 12:42 PM, hxie at umn.edu wrote:

>
> Hi,
>
> The MatShift will shift all the diagonals. Is there a function to
> shift only the zero diagonals?
>
> Bests,
> Hui
>
>
>
>
>> Date: Mon, 19 Apr 2010 16:08:34 -0500
>> From: Barry Smith 
>> Subject: Re: [petsc-users] zero diagonals
>> To: PETSc users list 
>> Message-ID: <8CCA772D-DD35-4396-9951-DF23AC833863 at mcs.anl.gov>
>> Content-Type: text/plain; charset=US-ASCII; format=flowed; delsp=yes
>>
>>
>>   -pc_factor_mat_shift_type nonzero or -sub_pc_factor_shift_type
>> nonzero or -mg_coarse_pc_factor_shift_type nonzero depending on
>> where it is used.
>>
>>   You can always run with -help and grep for shift_type to find
>> out which one is needed
>>
>>   Barry
>>
>>
>> On Apr 19, 2010, at 4:04 PM, hxie at umn.edu wrote:
>>
>>> Hi,
>>> I have a matrix with some zero diagonals. Now I create a new
>>> matrix for preconditioner and use MatShift to avoid the zero
>>> diagonals in the preconditioner matrix. It works fine. Is that
>>> possible to just use the command line options? If I just want to
>>> shift the zero diagonals, how can I do that? Thanks.
>>>
>>> Bests,
>>> Hui
>>>
>>>
>>
>>
>>
>

From chenleping at yahoo.cn  Fri Apr 30 09:17:09 2010
From: chenleping at yahoo.cn (=?gb2312?B?s8LA1sa9o6hMZXBpbmcgQ2hlbqOp?=)
Date: Fri, 30 Apr 2010 22:17:09 +0800
Subject: [petsc-users] about snes and finite difference jacobian approximations
References: <201004301512433337340@yahoo.cn>
Message-ID: <201004302217070315525@yahoo.cn>

Dear PETSc team,

When I use SNES, I find that the Jacobian matrix computation is an
impossible mission; however, I don't know how to use "Finite Difference
Jacobian Approximations". For example, I modified the example "ex1f.F",
leaving the function FormJacobian() unchanged, as follows:

        call FormJacobian(snes,x,J,J,flag,dummy,ierr)
        call MatGetColoring(J,MATCOLORING_SL,iscoloring,ierr)
        call MatFDColoringCreate(J,iscoloring,fdcoloring,ierr)
        call ISColoringDestroy(iscoloring,ierr)
        call MatFDColoringSetFromOptions(fdcoloring,ierr)
        call SNESSetJacobian(snes,J,J,SNESDefaultComputeJacobianColor,fdcoloring,ierr)

but,

[0]PETSC ERROR: --------------------- Error Message ------------------------------------
[0]PETSC ERROR: Object is in wrong state!
[0]PETSC ERROR: Must call MatFDColoringSetFunction()!
[0]PETSC ERROR: ------------------------------------------------------------------------
[0]PETSC ERROR: Petsc Release Version 3.0.0, Patch 4, Fri Mar 6 14:46:08 CST 2009
[0]PETSC ERROR: See docs/changes/index.html for recent updates.
[0]PETSC ERROR: See docs/faq.html for hints about trouble shooting.
[0]PETSC ERROR: See docs/index.html for manual pages.
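The error above points at the one call missing from that sequence: the
MatFDColoring object has to be told which residual routine to difference,
exactly as in the ex10.c snippet quoted earlier in this digest. A hedged C
sketch of the whole setup follows; SetupFDColoring, FormFunction, and user
are stand-ins for the application's own names, a Fortran code would need the
corresponding Fortran binding, and the initial FormJacobian()-style call is
only there to give J the nonzero structure that MatGetColoring() reads.

    #include "petscsnes.h"

    /* Assumed application residual routine with the usual SNES signature. */
    extern PetscErrorCode FormFunction(SNES,Vec,Vec,void*);

    /* Hypothetical helper: wire up finite-difference Jacobians via coloring.
       J must already have its nonzero structure assembled. */
    PetscErrorCode SetupFDColoring(SNES snes, Mat J, void *user)
    {
      PetscErrorCode ierr;
      ISColoring     iscoloring;
      MatFDColoring  fdcoloring;

      PetscFunctionBegin;
      ierr = MatGetColoring(J,MATCOLORING_SL,&iscoloring);CHKERRQ(ierr);
      ierr = MatFDColoringCreate(J,iscoloring,&fdcoloring);CHKERRQ(ierr);
      ierr = ISColoringDestroy(iscoloring);CHKERRQ(ierr);
      /* The call the error message asks for: give the coloring the residual. */
      ierr = MatFDColoringSetFunction(fdcoloring,(PetscErrorCode(*)(void))FormFunction,user);CHKERRQ(ierr);
      ierr = MatFDColoringSetFromOptions(fdcoloring);CHKERRQ(ierr);
      ierr = SNESSetJacobian(snes,J,J,SNESDefaultComputeJacobianColor,fdcoloring);CHKERRQ(ierr);
      /* fdcoloring must stay alive while the SNES uses it; destroy it only
         after the solves are done. */
      PetscFunctionReturn(0);
    }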
[0]PETSC ERROR: ------------------------------------------------------------------------
[0]PETSC ERROR: /public/user/chenleping/femxr-2dof-nonlinear/ex1f on a linux-gnu named node1 by chenleping Sat May 1 14:30:19 2010
[0]PETSC ERROR: Libraries linked from /public/user/chenleping/soft/petsc-3.0.0-p4/linux-gnu-c-debug/lib
[0]PETSC ERROR: Configure run at Fri Mar 27 13:09:33 2009
[0]PETSC ERROR: Configure options --with-mpi-dir=/public/user/chenleping/soft/mpich-install --with-x=0 --download-hypre=/public/user/chenleping/soft/petsc-3.0.0-p4/hypre-2.0.0.tar.gz --with-shared=0
[0]PETSC ERROR: ------------------------------------------------------------------------
[0]PETSC ERROR: MatFDColoringApply() line 522 in src/mat/matfd/fdmatrix.c
[0]PETSC ERROR: SNESDefaultComputeJacobianColor() line 49 in src/snes/interface/snesj2.c
[0]PETSC ERROR: SNESComputeJacobian() line 1111 in src/snes/interface/snes.c
[0]PETSC ERROR: SNESSolve_LS() line 189 in src/snes/impls/ls/ls.c
[0]PETSC ERROR: SNESSolve() line 2221 in src/snes/interface/snes.c
Number of Newton iterations = 0

and I don't understand this comment:

/*
   This initializes the nonzero structure of the Jacobian. This is artificial
   because clearly if we had a routine to compute the Jacobian we wouldn't
   need to use finite differences.
*/

so I don't know how to define the function FormJacobian().

thanks,

leping

2010-04-30

From pavel_chuvahov at mail.ru  Thu Apr 29 04:33:49 2010
From: pavel_chuvahov at mail.ru (=?koi8-r?Q?=F0=C1=D7=C5=CC_=FE=D5=D7=C1=C8=CF=D7?=)
Date: Thu, 29 Apr 2010 13:33:49 +0400
Subject: [petsc-users] DAVecGetArray, ghost access
Message-ID: 

Dear PETSc team, all!

I'm involved in developing PETSc-assisted code and recently came across the
following problem.

As described in the user manual (rev. 3.1, p. 48), elements of a vector that
is managed by a DA object can be directly accessed with DAVecGetArray(...)
and DAVecRestoreArray(...), where the vector is either local or global.
Besides, it is written that GHOSTED values can be accessed in this way. It
seems this is not applicable to global vectors, but only to local ones.

As far as I understand, the proposed way to get ghosts is to create a new
local vector that contains the full local part of the global vector plus the
ghosts. Therefore the memory used is twice what we really need, not to
mention the excessive data copying.

So the question arises: in what way can a user get access to ghosted values
without creating an additional DA local vector?

RSVP

Best Wishes,

Pavel V. Chouvakhov.
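For reference, a minimal sketch (petsc-3.1 names) of the local-vector pattern
the question refers to; ReadWithGhosts and the two-dimensional, single-dof DA
are illustrative assumptions. DAGetLocalVector() checks a work vector out of
a small pool kept by the DA, so repeated calls reuse the same storage rather
than allocating a fresh vector each time, although the copy into the ghosted
vector is inherent to the scatter.

    #include "petscda.h"

    /* Minimal sketch of reading ghost values through a DA-managed local
       vector; "da" and the global vector "X" are assumed to exist already. */
    PetscErrorCode ReadWithGhosts(DA da, Vec X)
    {
      PetscErrorCode ierr;
      Vec            Xlocal;
      PetscScalar    **x;              /* 2d DA with dof = 1 assumed */
      PetscInt       i, j, xs, ys, xm, ym;

      PetscFunctionBegin;
      /* Checked out from the DA's pool of work vectors, not freshly allocated. */
      ierr = DAGetLocalVector(da,&Xlocal);CHKERRQ(ierr);
      ierr = DAGlobalToLocalBegin(da,X,INSERT_VALUES,Xlocal);CHKERRQ(ierr);
      ierr = DAGlobalToLocalEnd(da,X,INSERT_VALUES,Xlocal);CHKERRQ(ierr);

      ierr = DAVecGetArray(da,Xlocal,&x);CHKERRQ(ierr);
      ierr = DAGetCorners(da,&xs,&ys,PETSC_NULL,&xm,&ym,PETSC_NULL);CHKERRQ(ierr);
      for (j = ys; j < ys + ym; j++) {
        for (i = xs; i < xs + xm; i++) {
          /* x[j][i-1], x[j][i+1], ... may now refer to ghost points that are
             owned by a neighboring process. */
        }
      }
      ierr = DAVecRestoreArray(da,Xlocal,&x);CHKERRQ(ierr);
      ierr = DARestoreLocalVector(da,&Xlocal);CHKERRQ(ierr);
      PetscFunctionReturn(0);
    }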