From enjoywm at cs.wm.edu Sun Mar 1 10:07:41 2009 From: enjoywm at cs.wm.edu (Yixun Liu) Date: Sun, 01 Mar 2009 11:07:41 -0500 Subject: VecRestoreArray Message-ID: <49AAB2CD.8000106@cs.wm.edu> Hi, Is VecRestoreArray used so that the vector can be freed correctly? If I use VecGetArray to get a pointer and then change some values, are they changed in the vector? Thanks. Yixun From knepley at gmail.com Sun Mar 1 10:35:59 2009 From: knepley at gmail.com (Matthew Knepley) Date: Sun, 1 Mar 2009 10:35:59 -0600 Subject: VecRestoreArray In-Reply-To: <49AAB2CD.8000106@cs.wm.edu> References: <49AAB2CD.8000106@cs.wm.edu> Message-ID: On Sun, Mar 1, 2009 at 10:07 AM, Yixun Liu wrote: > Hi, > Is VecRestoreArray used so that the vector can be freed correctly? This is for safety, so someone does not change values in the middle of a VecDot() for instance. > > If I use VecGetArray to get a pointer and then change some values, are > they changed in the vector? Yes. Matt > > Thanks. > > Yixun > -- What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. -- Norbert Wiener -------------- next part -------------- An HTML attachment was scrubbed... URL: From dave at vpac.org Mon Mar 2 20:43:14 2009 From: dave at vpac.org (Dave Lee) Date: Tue, 3 Mar 2009 13:43:14 +1100 (EST) Subject: Picard Solver In-Reply-To: <1505080339.2332271236048045495.JavaMail.root@mail.vpac.org> Message-ID: <1752911175.2332361236048194865.JavaMail.root@mail.vpac.org> Does PETSc v. 3.0.0 include a fully implemented Picard non-linear solver? I'm just wondering because there seems to be some functionality in the src/snes/impls/picard directory, but there's very little documentation on any of this in the online index pages, and nothing in the version 3.0.0 manual... Cheers, Dave. From knepley at gmail.com Mon Mar 2 23:31:30 2009 From: knepley at gmail.com (Matthew Knepley) Date: Mon, 2 Mar 2009 23:31:30 -0600 Subject: Picard Solver In-Reply-To: <1752911175.2332361236048194865.JavaMail.root@mail.vpac.org> References: <1505080339.2332271236048045495.JavaMail.root@mail.vpac.org> <1752911175.2332361236048194865.JavaMail.root@mail.vpac.org> Message-ID: I am responsible for the Picard implementation, and also for the lack of documentation. It has only quadratic line search now. I can give an example of custom line search if you want. Matt On Mon, Mar 2, 2009 at 8:43 PM, Dave Lee wrote: > Does PETSc v. 3.0.0 include a fully implemented Picard non-linear solver? > I'm just wondering because there seems to be some functionality in the > src/snes/impls/picard directory, but there's very little documentation on > any of this in the online index pages, and nothing in the version 3.0.0 > manual... > > Cheers, Dave. > -- What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. -- Norbert Wiener -------------- next part -------------- An HTML attachment was scrubbed... URL: From dave at vpac.org Mon Mar 2 23:42:57 2009 From: dave at vpac.org (Dave Lee) Date: Tue, 3 Mar 2009 16:42:57 +1100 (EST) Subject: Picard Solver In-Reply-To: Message-ID: <1678652473.2335281236058977061.JavaMail.root@mail.vpac.org> That's ok, thanks Matt.
Just so i'm clear - if i call SNESCreate_Picard(), that will set all the other SNES functions to their _Picard equivalents, then i can just use the regular SNES function calls (without having to set the build F or build J functions) to solve my problem. This is right yeah? Dave. ----- Original Message ----- From: "Matthew Knepley" To: "PETSc users list" Sent: Tuesday, March 3, 2009 4:31:30 PM GMT +10:00 Canberra / Melbourne / Sydney Subject: Re: Picard Solver I am responsible for the Picard implementation, and also for the lack of documentation. It has only quadratic line search now. I can give an example of custom line search if yo want. Matt On Mon, Mar 2, 2009 at 8:43 PM, Dave Lee < dave at vpac.org > wrote: Does PETSc v. 3.0.0 include a fully implemented Picard non-linear solver? I'm just wondering because there seems to be some functionality in the src/snes/impls/picard directory, but there's very little documentation on any of this in the online index pages, and nothing in the version 3.0.0 manual... Cheers, Dave. -- What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. -- Norbert Wiener From knepley at gmail.com Mon Mar 2 23:52:59 2009 From: knepley at gmail.com (Matthew Knepley) Date: Mon, 2 Mar 2009 23:52:59 -0600 Subject: Picard Solver In-Reply-To: <1678652473.2335281236058977061.JavaMail.root@mail.vpac.org> References: <1678652473.2335281236058977061.JavaMail.root@mail.vpac.org> Message-ID: On Mon, Mar 2, 2009 at 11:42 PM, Dave Lee wrote: > Thats ok, thanks Matt. Just so i'm clear - if i call SNESCreate_Picard(), > that will set all the other SNES functions to their _Picard equivalents, > then i can just use the regular SNES function calls (without having to set > the build F or build J functions) to solve my problem. This is right yeah? Just do everything the same way you always use SNES, but pass PETSC_NULL for the Jacobian function, and give -snes_type picard. Matt > > Dave. > - Show quoted text - > > > ----- Original Message ----- > From: "Matthew Knepley" > To: "PETSc users list" > Sent: Tuesday, March 3, 2009 4:31:30 PM GMT +10:00 Canberra / Melbourne / > Sydney > Subject: Re: Picard Solver > > I am responsible for the Picard implementation, and also for the lack of > documentation. It has only quadratic line search now. I can give an > example of custom line search if yo want. > > Matt > > > On Mon, Mar 2, 2009 at 8:43 PM, Dave Lee < dave at vpac.org > wrote: > > > Does PETSc v. 3.0.0 include a fully implemented Picard non-linear solver? > I'm just wondering because there seems to be some functionality in the > src/snes/impls/picard directory, but there's very little documentation on > any of this in the online index pages, and nothing in the version 3.0.0 > manual... > > Cheers, Dave. > > > > -- > What most experimenters take for granted before they begin their > experiments is infinitely more interesting than any results to which their > experiments lead. > -- Norbert Wiener > -- What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. -- Norbert Wiener -------------- next part -------------- An HTML attachment was scrubbed... 
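A minimal sketch of the usage Matt describes above, against the PETSc 3.0.0 C interface. FormFunction, the vectors r and x, and the error-checking variable ierr are placeholders standing in for the application's own setup code, so this illustrates the call sequence rather than a tested example:

  static PetscErrorCode FormFunction(SNES snes, Vec x, Vec f, void *ctx)
  {
    /* assemble the nonlinear residual f(x) into f here */
    return 0;
  }

  SNES           snes;
  PetscErrorCode ierr;

  ierr = SNESCreate(PETSC_COMM_WORLD, &snes);CHKERRQ(ierr);
  ierr = SNESSetFunction(snes, r, FormFunction, PETSC_NULL);CHKERRQ(ierr);
  /* note: no Jacobian routine is provided -- per Matt's note, PETSC_NULL is
     passed for the Jacobian function */
  ierr = SNESSetFromOptions(snes);CHKERRQ(ierr);  /* picks up -snes_type picard */
  ierr = SNESSolve(snes, PETSC_NULL, x);CHKERRQ(ierr);
  ierr = SNESDestroy(snes);CHKERRQ(ierr);

Running the program with the command-line option -snes_type picard then selects the Picard solver discussed in this thread.
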
URL: From Andreas.Grassl at student.uibk.ac.at Tue Mar 3 02:33:37 2009 From: Andreas.Grassl at student.uibk.ac.at (Andreas Grassl) Date: Tue, 03 Mar 2009 09:33:37 +0100 Subject: PCNN preconditioner In-Reply-To: <49A7B040.9050309@student.uibk.ac.at> References: <49A6BF7C.1030503@student.uibk.ac.at> <49A7B040.9050309@student.uibk.ac.at> Message-ID: <49ACEB61.7040906@student.uibk.ac.at> any suggestions? cheers ando Andreas Grassl schrieb: > Barry Smith schrieb: >> Use MatCreateIS() to create the matrix. Use MatSetValuesLocal() to put >> the values in the matrix >> then use PCSetType(pc,PCNN); to set the preconditioner to NN. >> > > I followed your advice, but still run into problems. > > my sourcecode: > > ierr = KSPCreate(comm,&solver);CHKERRQ(ierr); > ierr = KSPSetOperators(solver,A,A,DIFFERENT_NONZERO_PATTERN);CHKERRQ(ierr); > ierr = KSPSetInitialGuessNonzero(solver,PETSC_TRUE);CHKERRQ(ierr); > ierr = KSPGetPC(solver,&prec);CHKERRQ(ierr); > ierr = PCSetType(prec,PCNN);CHKERRQ(ierr); > //ierr = PCFactorSetShiftPd(prec,PETSC_TRUE);CHKERRQ(ierr); > ierr = KSPSetUp(solver);CHKERRQ(ierr); > ierr = KSPSolve(solver,B,X);CHKERRQ(ierr); > > and the error message: > > [0]PETSC ERROR: --------------------- Error Message > ------------------------------------ > [0]PETSC ERROR: Detected zero pivot in LU factorization > see > http://www.mcs.anl.gov/petsc/petsc-as/documentation/troubleshooting.html#ZeroPivot! > [0]PETSC ERROR: Zero pivot row 801 value 2.78624e-13 tolerance > 4.28598e-12 * rowsum 4.28598! > [0]PETSC ERROR: > ------------------------------------------------------------------------ > [0]PETSC ERROR: Petsc Release Version 3.0.0, Patch 3, Fri Jan 30 > 17:55:56 CST 2009 > [0]PETSC ERROR: See docs/changes/index.html for recent updates. > [0]PETSC ERROR: See docs/faq.html for hints about trouble shooting. > [0]PETSC ERROR: See docs/index.html for manual pages. > [0]PETSC ERROR: > ------------------------------------------------------------------------ > [0]PETSC ERROR: Unknown Name on a linux64-g named mat1.uibk.ac.at by > csae1801 Fri Feb 27 10:12:34 2009 > [0]PETSC ERROR: Libraries linked from > /home/lux/csae1801/petsc/petsc-3.0.0-p3/linux64-gnu-c-debug/lib > [0]PETSC ERROR: Configure run at Wed Feb 18 10:30:58 2009 > [0]PETSC ERROR: Configure options --with-64-bit-indices > --with-scalar-type=real --with-precision=double --with-cc=icc > --with-fc=ifort --with-cxx=icpc --with-shared=0 --with-mpi=1 > --download-mpich=ifneeded --with-scalapack=1 > --download-scalapack=ifneeded --download-f-blas-lapack=yes > --with-blacs=1 --download-blacs=yes PETSC_ARCH=linux64-gnu-c-debug > [0]PETSC ERROR: > ------------------------------------------------------------------------ > [0]PETSC ERROR: MatLUFactorNumeric_Inode() line 1335 in > src/mat/impls/aij/seq/inode.c > [0]PETSC ERROR: MatLUFactorNumeric() line 2338 in src/mat/interface/matrix.c > [0]PETSC ERROR: PCSetUp_LU() line 222 in src/ksp/pc/impls/factor/lu/lu.c > [0]PETSC ERROR: PCSetUp() line 794 in src/ksp/pc/interface/precon.c > [0]PETSC ERROR: KSPSetUp() line 237 in src/ksp/ksp/interface/itfunc.c > [0]PETSC ERROR: PCISSetUp() line 137 in src/ksp/pc/impls/is/pcis.c > [0]PETSC ERROR: PCSetUp_NN() line 28 in src/ksp/pc/impls/is/nn/nn.c > [0]PETSC ERROR: PCSetUp() line 794 in src/ksp/pc/interface/precon.c > [0]PETSC ERROR: KSPSetUp() line 237 in src/ksp/ksp/interface/itfunc.c > [0]PETSC ERROR: User provided function() line 1274 in petscsolver.c > > > Running PCFactorSetShift doesn't affect the output. > > any ideas? 
> > cheers > > ando > -- /"\ \ / ASCII Ribbon X against HTML email / \ -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 315 bytes Desc: OpenPGP digital signature URL: From C.Klaij at marin.nl Tue Mar 3 07:50:38 2009 From: C.Klaij at marin.nl (Klaij, Christiaan) Date: Tue, 3 Mar 2009 14:50:38 +0100 Subject: intel compilers and ml fails Message-ID: <5D9143EF9FADE942BEF6F2A636A861170800F671@MAR150CV1.marin.local> Hello, I'm trying to build petsc-3.0.0-p3 using the intel compilers and math kernel on a linux pc. Everything's fine as long as I don't use ml. When I do use ml, I get an "Error running configure on ML". The ml configure scripts says "configure: error: linking to Fortran libraries from C fails". This is my configure command: $ config/configure.py --with-cxx=icpc --with-cc=icc --with-fc=ifort --with-blas-lapack-dir=/opt/intel/mkl/10.1.1.019 --download-ml=1 --download-mpich=1 There's no problem when using gcc and gfortran. The intel compilers and mkl are version 10.1. Any ideas on how to fix this problem? Chris dr. ir. Christiaan Klaij CFD Researcher Research & Development mailto:C.Klaij at marin.nl T +31 317 49 33 44 MARIN 2, Haagsteeg, P.O. Box 28, 6700 AA Wageningen, The Netherlands T +31 317 49 39 11, F +31 317 49 32 45, I http://www.marin.nl/ MARIN webnews: MARIN?s Ships Hydrodynamics Seminar in Brazil This e-mail may be confidential, privileged and/or protected by copyright. If you are not the intended recipient, you should return it to the sender immediately and delete your copy from your system. -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: not available Type: image/jpeg Size: 1069 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: not available Type: image/jpeg Size: 1622 bytes Desc: not available URL: From tchouanm at msn.com Tue Mar 3 07:54:25 2009 From: tchouanm at msn.com (STEPHANE TCHOUANMO) Date: Tue, 3 Mar 2009 14:54:25 +0100 Subject: petsc-users Digest, Vol 2, Issue 33 In-Reply-To: References: Message-ID: Hi all, thank you Barry for the indication you gave me. As a matter of fact, i verified my jacobian and function evaluation again and again but i really dont see anything wrong in it. So i came back to the basic Laplacian problem (- \Delta u = f ) in the unit cube discretized in regular hexes. The numerical scheme i use is a vertex-centred finite volume scheme. The solution i get is correct compared to the exact solution (of second order) and i know my jacobian and residual evalutions are correct. But here is the log out i get. 
Event Count Time (sec) Flops/sec --- Global --- --- Stage --- Total Max Ratio Max Ratio Max Ratio Mess Avg len Reduct %T %F %M %L %R %T %F %M %L %R Mflop/s ------------------------------------------------------------------------------------------------------------------------ --- Event Stage 0: Main Stage VecMDot 71 1.0 2.9587e-02 1.0 6.23e+08 1.0 0.0e+00 0.0e+00 0.0e+00 0 25 0 0 0 0 25 0 0 0 623 VecNorm 77 1.0 3.3638e-02 1.0 4.24e+07 1.0 0.0e+00 0.0e+00 0.0e+00 0 2 0 0 0 0 2 0 0 0 42 VecScale 74 1.0 2.1052e-03 1.0 3.26e+08 1.0 0.0e+00 0.0e+00 0.0e+00 0 1 0 0 0 0 1 0 0 0 326 VecCopy 80 1.0 3.4863e-03 1.0 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00 0 0 0 0 0 0 0 0 0 0 0 VecSet 9 1.0 2.0776e-04 1.0 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00 0 0 0 0 0 0 0 0 0 0 0 VecAXPY 5 1.0 2.3208e-04 1.0 3.99e+08 1.0 0.0e+00 0.0e+00 0.0e+00 0 0 0 0 0 0 0 0 0 0 399 VecWAXPY 1 1.0 6.6995e-05 1.0 1.38e+08 1.0 0.0e+00 0.0e+00 0.0e+00 0 0 0 0 0 0 0 0 0 0 138 VecMAXPY 74 1.0 3.8138e-02 1.0 5.18e+08 1.0 0.0e+00 0.0e+00 0.0e+00 0 27 0 0 0 0 27 0 0 0 518 VecAssemblyBegin 4 1.0 9.8636e-06 1.0 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00 0 0 0 0 0 0 0 0 0 0 0 VecAssemblyEnd 4 1.0 6.9494e-06 1.0 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00 0 0 0 0 0 0 0 0 0 0 0 VecScatterBegin 3 1.0 3.0706e-04 1.0 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00 0 0 0 0 0 0 0 0 0 0 0 VecNormalize 74 1.0 3.4648e-02 1.0 5.88e+07 1.0 0.0e+00 0.0e+00 0.0e+00 0 3 0 0 0 0 3 0 0 0 59 MatMult 73 1.0 1.4618e-01 1.0 2.22e+08 1.0 0.0e+00 0.0e+00 0.0e+00 0 45 0 0 0 0 45 0 0 0 222 MatAssemblyBegin 2 1.0 6.9899e-06 1.0 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00 0 0 0 0 0 0 0 0 0 0 0 MatAssemblyEnd 2 1.0 6.1999e-03 1.0 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00 0 0 0 0 0 0 0 0 0 0 0 SNESSolve 1 1.0 6.7333e+01 1.0 1.08e+06 1.0 0.0e+00 0.0e+00 3.0e+00 99100 0 0100 99100 0 0100 1 SNESLineSearch 1 1.0 5.1989e-01 1.0 8.91e+04 1.0 0.0e+00 0.0e+00 1.0e+00 1 0 0 0 33 1 0 0 0 33 0 SNESFunctionEval 2 1.0 1.0441e+00 1.0 0.00e+00 0.0 0.0e+00 0.0e+00 2.0e+00 2 0 0 0 67 2 0 0 0 67 0 SNESJacobianEval 1 1.0 6.6026e+01 1.0 0.00e+00 0.0 0.0e+00 0.0e+00 1.0e+00 97 0 0 0 33 97 0 0 0 33 0 KSPGMRESOrthog 71 1.0 6.5884e-02 1.0 5.60e+08 1.0 0.0e+00 0.0e+00 0.0e+00 0 51 0 0 0 0 51 0 0 0 560 KSPSetup 1 1.0 2.2203e-04 1.0 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00 0 0 0 0 0 0 0 0 0 0 0 KSPSolve 1 1.0 2.6036e-01 1.0 2.80e+08 1.0 0.0e+00 0.0e+00 0.0e+00 0100 0 0 0 0100 0 0 0 280 PCSetUp 1 1.0 7.9495e-06 1.0 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00 0 0 0 0 0 0 0 0 0 0 0 PCApply 74 1.0 3.6445e-03 1.0 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00 0 0 0 0 0 0 0 0 0 0 0 ------------------------------------------------------------------------------------------------------------------------ Memory usage is given in bytes: Object Type Creations Destructions Memory Descendants' Mem. --- Event Stage 0: Main Stage Index Set 3 3 111792 0 Vec 44 3 223596 0 Vec Scatter 3 3 0 0 Matrix 1 0 0 0 SNES 1 0 0 0 Krylov Solver 1 0 0 0 Preconditioner 1 0 0 0 Viewer 2 0 0 0 Draw 1 0 0 0 ======================================================================================================================== Average time to get PetscTime(): 1.60268e-06 This shows that the Jacobian evaluation takes 97% of time and the residual just 2% in the SNESSolve. But if you look at the total MFlops, you can see that its null(i guess very low) for these phases. What seems to be long is the part in red concerning Vector manips. You can even see at the end that the most memory use is in Index set and Vec. 
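(As a quick check on the numbers above: all the Vec events together add up to roughly 0.15 s, whereas SNESJacobianEval alone is 6.6e+01 s of the 6.7e+01 s spent in SNESSolve, so about 98% of the solve is inside the user Jacobian routine. The low Mflop/s reported for SNESJacobianEval only means no flops are logged for the user routine, not that it is cheap; Matt makes the same point in his reply below.)
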
Then i did another test solving this time the heat equation (unsteady) with a given initial condition and a compatible homogeneous Dirichlet boundary condition. Once again i get the right solution with the log out bellow. Event Count Time (sec) Flops/sec --- Global --- --- Stage --- Total Max Ratio Max Ratio Max Ratio Mess Avg len Reduct %T %F %M %L %R %T %F %M %L %R Mflop/s ------------------------------------------------------------------------------------------------------------------------ --- Event Stage 0: Main Stage VecMDot 22 1.0 5.0474e-04 1.0 5.48e+08 1.0 0.0e+00 0.0e+00 0.0e+00 0 6 0 0 0 0 6 0 0 0 548 VecNorm 62 1.0 8.8694e-03 1.0 4.72e+07 1.0 0.0e+00 0.0e+00 0.0e+00 0 9 0 0 0 0 9 0 0 0 47 VecScale 32 1.0 3.8212e-04 1.0 2.83e+08 1.0 0.0e+00 0.0e+00 0.0e+00 0 2 0 0 0 0 2 0 0 0 283 VecCopy 81 1.0 1.1948e-03 1.0 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00 0 0 0 0 0 0 0 0 0 0 0 VecSet 88 1.0 8.4816e-04 1.0 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00 0 0 0 0 0 0 0 0 0 0 0 VecAXPY 10 1.0 1.8910e-04 1.0 3.57e+08 1.0 0.0e+00 0.0e+00 0.0e+00 0 1 0 0 0 0 1 0 0 0 357 VecWAXPY 10 1.0 2.6472e-04 1.0 1.27e+08 1.0 0.0e+00 0.0e+00 0.0e+00 0 1 0 0 0 0 1 0 0 0 127 VecMAXPY 32 1.0 1.0271e-03 1.0 4.14e+08 1.0 0.0e+00 0.0e+00 0.0e+00 0 9 0 0 0 0 9 0 0 0 414 VecAssemblyBegin 40 1.0 8.7160e-05 1.0 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00 0 0 0 0 0 0 0 0 0 0 0 VecAssemblyEnd 40 1.0 7.5617e-05 1.0 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00 0 0 0 0 0 0 0 0 0 0 0 VecScatterBegin 39 1.0 1.5163e-03 1.0 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00 0 0 0 0 0 0 0 0 0 0 0 VecNormalize 32 1.0 3.8553e-03 1.0 6.65e+07 1.0 0.0e+00 0.0e+00 0.0e+00 0 5 0 0 0 0 5 0 0 0 67 MatMult 22 1.0 1.5831e-02 1.0 2.16e+08 1.0 0.0e+00 0.0e+00 0.0e+00 0 72 0 0 0 0 72 0 0 0 216 MatAssemblyBegin 30 1.0 6.5176e-05 1.0 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00 0 0 0 0 0 0 0 0 0 0 0 MatAssemblyEnd 30 1.0 1.2829e-02 1.0 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00 0 0 0 0 0 0 0 0 0 0 0 MatZeroEntries 9 1.0 1.8313e-03 1.0 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00 0 0 0 0 0 0 0 0 0 0 0 SNESSolve 10 1.0 1.7674e+01 1.0 2.69e+05 1.0 0.0e+00 0.0e+00 3.0e+01 93100 0 0 0 94100 0 0 75 0 SNESLineSearch 10 1.0 3.7443e+00 1.0 4.51e+04 1.0 0.0e+00 0.0e+00 1.0e+01 20 4 0 0 0 20 4 0 0 25 0 SNESFunctionEval 20 1.0 7.2693e+00 1.0 0.00e+00 0.0 0.0e+00 0.0e+00 2.0e+01 38 0 0 0 0 39 0 0 0 50 0 SNESJacobianEval 10 1.0 1.0367e+01 1.0 0.00e+00 0.0 0.0e+00 0.0e+00 1.0e+01 55 0 0 0 0 55 0 0 0 25 0 KSPGMRESOrthog 22 1.0 1.4277e-03 1.0 3.88e+08 1.0 0.0e+00 0.0e+00 0.0e+00 0 12 0 0 0 0 12 0 0 0 388 KSPSetup 10 1.0 1.3128e-03 1.0 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00 0 0 0 0 0 0 0 0 0 0 0 KSPSolve 10 1.0 2.8431e-02 1.0 1.57e+08 1.0 0.0e+00 0.0e+00 0.0e+00 0 94 0 0 0 0 94 0 0 0 157 PCSetUp 10 1.0 2.5831e-05 1.0 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00 0 0 0 0 0 0 0 0 0 0 0 PCApply 32 1.0 5.7973e-04 1.0 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00 0 0 0 0 0 0 0 0 0 0 0 ------------------------------------------------------------------------------------------------------------------------ Memory usage is given in bytes: Object Type Creations Destructions Memory Descendants' Mem. 
--- Event Stage 0: Main Stage Index Set 40 40 548800 0 Vec 167 153 4198932 0 Vec Scatter 40 40 0 0 Matrix 1 0 0 0 SNES 10 9 1116 0 Krylov Solver 10 9 151920 0 Preconditioner 10 9 0 0 Viewer 1 0 0 0 ======================================================================================================================== Average time to get PetscTime(): 1.60486e-06 Now the Jacobian evaluation takes 55% of time and the residual 38% and the more time steps i make the more these two time percentages equilibrate and then change ie. the residual eval spends more. A quick overview shows the same behavior than the previous test. So may be im wrong but i doubt there is a problem in my Jacobian and Residual evaluation. My Real problem is that in a 30min nonlinear resolution with many time steps (say 100) i have 10min just for the first Newton iteration at the first time step and this happens even for the Basic Laplacian test. Therefore, I thought that may be the nonlinear solver context and vectors initializations are heavy and so last a lot. But I dont know if there is a way to improve that. Or possibly there is a problem with the interfacing between LibMesh and PETSc (actually i dont use PETSc directly, i call it via the code LibMesh). What do you think? Thanks a lot. Stephane > ------------------------------ > > Message: 2 > Date: Tue, 24 Feb 2009 12:25:44 -0600 > From: Barry Smith > Subject: Re: petsc-users Digest, Vol 2, Issue 32 > To: PETSc users list > Message-ID: <014621DE-584C-4574-8F95-9E491F8858D9 at mcs.anl.gov> > Content-Type: text/plain; charset=US-ASCII; format=flowed; delsp=yes > > > SNESSolve 100 1.0 2.9339e+03 1.0 9.17e+05 1.0 0.0e+00 0.0e > +00 1.0e+03100100 0 0 1 100 > SNESLineSearch 202 1.0 7.9707e+02 1.0 4.35e+05 1.0 0.0e+00 0.0e > +00 4.0e+02 27 13 0 0 0 27 > SNESFunctionEval 302 1.0 1.1836e+03 1.0 0.00e+00 0.0 0.0e+00 0.0e > +00 3.0e+02 40 0 0 0 0 40 > SNESJacobianEval 202 1.0 1.7238e+03 1.0 0.00e+00 0.0 0.0e+00 0.0e > +00 2.0e+02 59 0 0 0 0 59 > > The final column above (I have removed the later ones for clarity) is > the problem. Your function evaluation is > taking 40% of the time and the Jacobian 59% > > The PETSc linear solver is taking 1% so something is seriously bad > about your function evaluation and Jacobian evaluation > code. > > Barry > > > Today's Topics: > > 1. RE: petsc-users Digest, Vol 2, Issue 32 (STEPHANE TCHOUANMO) > 2. Re: petsc-users Digest, Vol 2, Issue 32 (Barry Smith) > > > ---------------------------------------------------------------------- > > Message: 1 > Date: Tue, 24 Feb 2009 19:17:27 +0100 > From: STEPHANE TCHOUANMO > Subject: RE: petsc-users Digest, Vol 2, Issue 32 > To: > Message-ID: > Content-Type: text/plain; charset="iso-8859-1" > > > Here is my -log_summary: > Something looks a bit strange to me; its the MPI Reductions below. > Other than that, i dont see anything relevant. > What do you think? 
> Thanks > > > > ---------------------------------------------- PETSc Performance Summary: ---------------------------------------------- > > ./diff-conv-opt on a linux-gnu named linux-stchouan with 1 processor, by stephane Tue Feb 24 13:54:35 2009 > Using Petsc Release Version 2.3.3, Patch 13, Thu May 15 17:29:26 CDT 2008 HG revision: 4466c6289a0922df26e20626fd4a0b4dd03c8124 > > Max Max/Min Avg Total > Time (sec): 2.937e+03 1.00000 2.937e+03 > Objects: 3.420e+03 1.00000 3.420e+03 > Flops: 2.690e+09 1.00000 2.690e+09 2.690e+09 > Flops/sec: 9.161e+05 1.00000 9.161e+05 9.161e+05 > MPI Messages: 0.000e+00 0.00000 0.000e+00 0.000e+00 > MPI Message Lengths: 0.000e+00 0.00000 0.000e+00 0.000e+00 > MPI Reductions: 1.189e+05 1.00000 > > Flop counting convention: 1 flop = 1 real number operation of type (multiply/divide/add/subtract) > e.g., VecAXPY() for real vectors of length N --> 2N flops > and VecAXPY() for complex vectors of length N --> 8N flops > > Summary of Stages: ----- Time ------ ----- Flops ----- --- Messages --- -- Message Lengths -- -- Reductions -- > Avg %Total Avg %Total counts %Total Avg %Total counts %Total > 0: Main Stage: 2.9367e+03 100.0% 2.6905e+09 100.0% 0.000e+00 0.0% 0.000e+00 0.0% 1.106e+03 0.9% > > ------------------------------------------------------------------------------------------------------------------------ > See the 'Profiling' chapter of the users' manual for details on interpreting output. > Phase summary info: > Count: number of times phase was executed > Time and Flops/sec: Max - maximum over all processors > Ratio - ratio of maximum to minimum over all processors > Mess: number of messages sent > Avg. len: average message length > Reduct: number of global reductions > Global: entire computation > Stage: stages of a computation. Set stages with PetscLogStagePush() and PetscLogStagePop(). 
> %T - percent time in this phase %F - percent flops in this phase > %M - percent messages in this phase %L - percent message lengths in this phase > %R - percent reductions in this phase > Total Mflop/s: 10e-6 * (sum of flops over all processors)/(max time over all processors) > ------------------------------------------------------------------------------------------------------------------------ > > Event Count Time (sec) Flops/sec --- Global --- --- Stage --- Total > Max Ratio Max Ratio Max Ratio Mess Avg len Reduct %T %F %M %L %R %T %F %M %L %R Mflop/s > ------------------------------------------------------------------------------------------------------------------------ > > --- Event Stage 0: Main Stage > > VecDot 202 1.0 3.0360e-02 1.0 3.96e+08 1.0 0.0e+00 0.0e+00 0.0e+00 0 0 0 0 0 0 0 0 0 0 396 > VecMDot 202 1.0 3.0552e-02 1.0 3.94e+08 1.0 0.0e+00 0.0e+00 0.0e+00 0 0 0 0 0 0 0 0 0 0 394 > VecNorm 1110 1.0 1.2257e+00 1.0 5.40e+07 1.0 0.0e+00 0.0e+00 0.0e+00 0 2 0 0 0 0 2 0 0 0 54 > VecScale 404 1.0 3.5342e-02 1.0 3.41e+08 1.0 0.0e+00 0.0e+00 0.0e+00 0 0 0 0 0 0 0 0 0 0 341 > VecCopy 507 1.0 8.4626e-02 1.0 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00 0 0 0 0 0 0 0 0 0 0 0 > VecSet 1408 1.0 1.1664e-01 1.0 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00 0 0 0 0 0 0 0 0 0 0 0 > VecAXPY 202 1.0 2.6221e-02 1.0 4.59e+08 1.0 0.0e+00 0.0e+00 0.0e+00 0 0 0 0 0 0 0 0 0 0 459 > VecWAXPY 202 1.0 4.4239e-02 1.0 1.36e+08 1.0 0.0e+00 0.0e+00 0.0e+00 0 0 0 0 0 0 0 0 0 0 136 > VecMAXPY 404 1.0 7.3515e-02 1.0 3.27e+08 1.0 0.0e+00 0.0e+00 0.0e+00 0 1 0 0 0 0 1 0 0 0 327 > VecAssemblyBegin 302 1.0 9.2960e-04 1.0 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00 0 0 0 0 0 0 0 0 0 0 0 > VecAssemblyEnd 302 1.0 5.5790e-04 1.0 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00 0 0 0 0 0 0 0 0 0 0 0 > VecScatterBegin 603 1.0 1.9933e-01 1.0 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00 0 0 0 0 0 0 0 0 0 0 0 > VecNormalize 404 1.0 5.5408e-01 1.0 6.52e+07 1.0 0.0e+00 0.0e+00 0.0e+00 0 1 0 0 0 0 1 0 0 0 65 > MatMult 404 1.0 2.6457e+00 1.0 2.26e+08 1.0 0.0e+00 0.0e+00 0.0e+00 0 22 0 0 0 0 22 0 0 0 226 > MatSolve 404 1.0 4.6454e+00 1.0 1.28e+08 1.0 0.0e+00 0.0e+00 0.0e+00 0 22 0 0 0 0 22 0 0 0 128 > MatLUFactorNum 202 1.0 1.5211e+01 1.0 8.85e+07 1.0 0.0e+00 0.0e+00 0.0e+00 1 50 0 0 0 1 50 0 0 0 89 > MatILUFactorSym 100 1.0 1.9993e+00 1.0 0.00e+00 0.0 0.0e+00 0.0e+00 1.0e+02 0 0 0 0 0 0 0 0 0 9 0 > MatAssemblyBegin 404 1.0 9.6217e-04 1.0 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00 0 0 0 0 0 0 0 0 0 0 0 > MatAssemblyEnd 404 1.0 1.4601e+00 1.0 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00 0 0 0 0 0 0 0 0 0 0 0 > MatGetRowIJ 100 1.0 2.4641e-04 1.0 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00 0 0 0 0 0 0 0 0 0 0 0 > MatGetOrdering 100 1.0 7.6755e-02 1.0 0.00e+00 0.0 0.0e+00 0.0e+00 2.0e+02 0 0 0 0 0 0 0 0 0 18 0 > MatZeroEntries 99 1.0 3.6160e-01 1.0 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00 0 0 0 0 0 0 0 0 0 0 0 > SNESSolve 100 1.0 2.9339e+03 1.0 9.17e+05 1.0 0.0e+00 0.0e+00 1.0e+03100100 0 0 1 100100 0 0 91 1 > SNESLineSearch 202 1.0 7.9707e+02 1.0 4.35e+05 1.0 0.0e+00 0.0e+00 4.0e+02 27 13 0 0 0 27 13 0 0 37 0 > SNESFunctionEval 302 1.0 1.1836e+03 1.0 0.00e+00 0.0 0.0e+00 0.0e+00 3.0e+02 40 0 0 0 0 40 0 0 0 27 0 > SNESJacobianEval 202 1.0 1.7238e+03 1.0 0.00e+00 0.0 0.0e+00 0.0e+00 2.0e+02 59 0 0 0 0 59 0 0 0 18 0 > KSPGMRESOrthog 202 1.0 7.0303e-02 1.0 3.42e+08 1.0 0.0e+00 0.0e+00 0.0e+00 0 1 0 0 0 0 1 0 0 0 342 > KSPSetup 202 1.0 4.6391e-02 1.0 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00 0 0 0 0 0 0 0 0 0 0 0 > KSPSolve 202 1.0 2.4101e+01 1.0 9.65e+07 1.0 0.0e+00 0.0e+00 3.0e+02 1 86 0 0 0 1 86 0 0 27 
97 > PCSetUp 202 1.0 1.7296e+01 1.0 7.78e+07 1.0 0.0e+00 0.0e+00 3.0e+02 1 50 0 0 0 1 50 0 0 27 78 > PCApply 404 1.0 4.6487e+00 1.0 1.28e+08 1.0 0.0e+00 0.0e+00 0.0e+00 0 22 0 0 0 0 22 0 0 0 128 > ------------------------------------------------------------------------------------------------------------------------ > > Memory usage is given in bytes: > > Object Type Creations Destructions Memory Descendants' Mem. > > --- Event Stage 0: Main Stage > > Index Set 904 901 107564984 0 > Vec 1511 1497 357441684 0 > Vec Scatter 604 604 0 0 > Matrix 101 99 942432084 0 > SNES 100 99 12276 0 > Krylov Solver 100 99 1671120 0 > Preconditioner 100 99 14256 0 > ======================================================================================================================== > Average time to get PetscTime(): 1.49164e-06 > OptionTable: -snes_converged_reason > OptionTable: -snes_max_it 20 > OptionTable: -snes_rtol 0.0000001 > OptionTable: -snes_stol 0.001 > Compiled without FORTRAN kernels > Compiled with full precision matrices (default) > sizeof(short) 2 sizeof(int) 4 sizeof(long) 4 sizeof(void*) 4 sizeof(PetscScalar) 8 > Configure run at: Mon Feb 23 23:01:43 2009 > Configure options: --with-debugging=no -with-shared --download-mpich=1 > ----------------------------------------- > > > > > > We can't say anything without seeing the entire output of -log_summary. > > > > Matt > > > > _________________________________________________________________ More than messages?check out the rest of the Windows Live?. http://www.microsoft.com/windows/windowslive/ -------------- next part -------------- An HTML attachment was scrubbed... URL: From knepley at gmail.com Tue Mar 3 08:15:15 2009 From: knepley at gmail.com (Matthew Knepley) Date: Tue, 3 Mar 2009 08:15:15 -0600 Subject: petsc-users Digest, Vol 2, Issue 33 In-Reply-To: References: Message-ID: On Tue, Mar 3, 2009 at 7:54 AM, STEPHANE TCHOUANMO wrote: > Hi all, > > thank you Barry for the indication you gave me. > > As a matter of fact, i verified my jacobian and function evaluation again > and again but i really dont see anything wrong in it. > So i came back to the basic Laplacian problem (- \Delta u = f ) in the unit > cube discretized in regular hexes. The numerical scheme i use is a > vertex-centred finite volume scheme. > The solution i get is correct compared to the exact solution (of second > order) and i know my jacobian and residual evalutions are correct. But here > is the log out i get. 
> > > Event Count Time (sec) > Flops/sec --- Global --- --- Stage --- Total > Max Ratio Max Ratio Max Ratio Mess Avg len > Reduct %T %F %M %L %R %T %F %M %L %R Mflop/s > > ------------------------------------------------------------------------------------------------------------------------ > > --- Event Stage 0: Main Stage > > VecMDot 71 1.0 2.9587e-02 1.0 6.23e+08 1.0 0.0e+00 0.0e+00 > 0.0e+00 0 25 0 0 0 0 25 0 0 0 623 > VecNorm 77 1.0 3.3638e-02 1.0 4.24e+07 1.0 0.0e+00 0.0e+00 > 0.0e+00 0 2 0 0 0 0 2 0 0 0 42 > VecScale 74 1.0 2.1052e-03 1.0 3.26e+08 1.0 0.0e+00 0.0e+00 > 0.0e+00 0 1 0 0 0 0 1 0 0 0 326 > VecCopy 80 1.0 3.4863e-03 1.0 0.00e+00 0.0 0.0e+00 0.0e+00 > 0.0e+00 0 0 0 0 0 0 0 0 0 0 0 > VecSet 9 1.0 2.0776e-04 1.0 0.00e+00 0.0 0.0e+00 0.0e+00 > 0.0e+00 0 0 0 0 0 0 0 0 0 0 0 > VecAXPY 5 1.0 2.3208e-04 1.0 3.99e+08 1.0 0.0e+00 0.0e+00 > 0.0e+00 0 0 0 0 0 0 0 0 0 0 399 > VecWAXPY 1 1.0 6.6995e-05 1.0 1.38e+08 1.0 0.0e+00 0.0e+00 > 0.0e+00 0 0 0 0 0 0 0 0 0 0 138 > VecMAXPY 74 1.0 3.8138e-02 1.0 5.18e+08 1.0 0.0e+00 0.0e+00 > 0.0e+00 0 27 0 0 0 0 27 0 0 0 518 > VecAssemblyBegin 4 1.0 9.8636e-06 1.0 0.00e+00 0.0 0.0e+00 0.0e+00 > 0.0e+00 0 0 0 0 0 0 0 0 0 0 0 > VecAssemblyEnd 4 1.0 6.9494e-06 1.0 0.00e+00 0.0 0.0e+00 0.0e+00 > 0.0e+00 0 0 0 0 0 0 0 0 0 0 0 > VecScatterBegin 3 1.0 3.0706e-04 1.0 0.00e+00 0.0 0.0e+00 0.0e+00 > 0.0e+00 0 0 0 0 0 0 0 0 0 0 0 > VecNormalize 74 1.0 3.4648e-02 1.0 5.88e+07 1.0 0.0e+00 0.0e+00 > 0.0e+00 0 3 0 0 0 0 3 0 0 0 59 > MatMult 73 1.0 1.4618e-01 1.0 2.22e+08 1.0 0.0e+00 0.0e+00 > 0.0e+00 0 45 0 0 0 0 45 0 0 0 222 > MatAssemblyBegin 2 1.0 6.9899e-06 1.0 0.00e+00 0.0 0.0e+00 0.0e+00 > 0.0e+00 0 0 0 0 0 0 0 0 0 0 0 > MatAssemblyEnd 2 1.0 6.1999e-03 1.0 0.00e+00 0.0 0.0e+00 0.0e+00 > 0.0e+00 0 0 0 0 0 0 0 0 0 0 0 > SNESSolve 1 1.0 6.7333e+01 1.0 1.08e+06 1.0 0.0e+00 0.0e+00 > 3.0e+00 99100 0 0100 99100 0 0100 1 > SNESLineSearch 1 1.0 5.1989e-01 1.0 8.91e+04 1.0 0.0e+00 0.0e+00 > 1.0e+00 1 0 0 0 33 1 0 0 0 33 0 > SNESFunctionEval 2 1.0 1.0441e+00 1.0 0.00e+00 0.0 0.0e+00 0.0e+00 > 2.0e+00 2 0 0 0 67 2 0 0 0 67 0 > SNESJacobianEval 1 1.0 6.6026e+01 1.0 0.00e+00 0.0 0.0e+00 0.0e+00 > 1.0e+00 97 0 0 0 33 97 0 0 0 33 0 > KSPGMRESOrthog 71 1.0 6.5884e-02 1.0 5.60e+08 1.0 0.0e+00 0.0e+00 > 0.0e+00 0 51 0 0 0 0 51 0 0 0 560 > KSPSetup 1 1.0 2.2203e-04 1.0 0.00e+00 0.0 0.0e+00 0.0e+00 > 0.0e+00 0 0 0 0 0 0 0 0 0 0 0 > KSPSolve 1 1.0 2.6036e-01 1.0 2.80e+08 1.0 0.0e+00 0.0e+00 > 0.0e+00 0100 0 0 0 0100 0 0 0 280 > PCSetUp 1 1.0 7.9495e-06 1.0 0.00e+00 0.0 0.0e+00 0.0e+00 > 0.0e+00 0 0 0 0 0 0 0 0 0 0 0 > PCApply 74 1.0 3.6445e-03 1.0 0.00e+00 0.0 0.0e+00 0.0e+00 > 0.0e+00 0 0 0 0 0 0 0 0 0 0 0 > > ------------------------------------------------------------------------------------------------------------------------ > > Memory usage is given in bytes: > > Object Type Creations Destructions Memory Descendants' Mem. > > --- Event Stage 0: Main Stage > > Index Set 3 3 111792 0 > Vec 44 3 223596 0 > Vec Scatter 3 3 0 0 > Matrix 1 0 0 0 > SNES 1 0 0 0 > Krylov Solver 1 0 0 0 > Preconditioner 1 0 0 0 > Viewer 2 0 0 0 > Draw 1 0 0 0 > > ======================================================================================================================== > Average time to get PetscTime(): 1.60268e-06 > > > This shows that the Jacobian evaluation takes 97% of time and the residual > just 2% in the SNESSolve. But if you look at the total MFlops, you can see > that its null(i guess very low) for these phases. 
What seems to be long is > the part in red concerning Vector manips. You can even see at the end that > the most memory use is in Index set and Vec. > This analysis does not make sense. If you add all the time spent in the Vec operations (in red), it is less than 1/100 of the time in the SNES Solve. There is obviously a problem in that routine, if there is indeed a problem. Do you have a model of the computation that says that this time is too long? Matt -- What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. -- Norbert Wiener -------------- next part -------------- An HTML attachment was scrubbed... URL: From knepley at gmail.com Tue Mar 3 08:17:39 2009 From: knepley at gmail.com (Matthew Knepley) Date: Tue, 3 Mar 2009 08:17:39 -0600 Subject: intel compilers and ml fails In-Reply-To: <5D9143EF9FADE942BEF6F2A636A861170800F671@MAR150CV1.marin.local> References: <5D9143EF9FADE942BEF6F2A636A861170800F671@MAR150CV1.marin.local> Message-ID: On Tue, Mar 3, 2009 at 7:50 AM, Klaij, Christiaan wrote: > Hello, > > I'm trying to build petsc-3.0.0-p3 using the intel compilers and math > kernel on a linux pc. Everything's fine as long as I don't use ml. When I do > use ml, I get an "Error running configure on ML". The ml configure scripts > says "configure: error: linking to Fortran libraries from C fails". > > This is my configure command: > > $ config/configure.py --with-cxx=icpc --with-cc=icc --with-fc=ifort > --with-blas-lapack-dir=/opt/intel/mkl/10.1.1.019 --download-ml=1 > --download-mpich=1 > > There's no problem when using gcc and gfortran. The intel compilers and mkl > are version 10.1. Any ideas on how to fix this problem? > I can't tell anything without configure.log. Since this is large, please continue on petsc-maint at mcs.anl.gov Matt > > Chris > > > dr. ir. Christiaan Klaij CFD Researcher Research & Development *MARIN > * 2, Haagsteeg C.Klaij at marin.nl P.O. Box 28 T +31 317 49 39 11 6700 AA > Wageningen F +31 317 49 32 45 T +31 317 49 33 44 The Netherlands I > www.marin.nl > MARIN webnews: MARIN?s > Ships Hydrodynamics Seminar in Brazil > > This e-mail may be confidential, privileged and/or protected by copyright. > If you are not the intended recipient, you should return it to the sender > immediately and delete your copy from your system. > > -- What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. -- Norbert Wiener -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: not available Type: image/jpeg Size: 1069 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: not available Type: image/jpeg Size: 1622 bytes Desc: not available URL: From tchouanm at msn.com Tue Mar 3 09:17:30 2009 From: tchouanm at msn.com (STEPHANE TCHOUANMO) Date: Tue, 3 Mar 2009 16:17:30 +0100 Subject: petsc-users Digest, Vol 2, Issue 33 In-Reply-To: References: Message-ID: Ok Matt you're right. The SNES Solve is definitely at fault. But still there's something i dont understand in the log summary i get. 
Take for example the one for the unsteady heat equation right after: Event Count Time (sec) Flops/sec --- Global --- --- Stage --- Total Max Ratio Max Ratio Max Ratio Mess Avg len Reduct %T %F %M %L %R %T %F %M %L %R Mflop/s ------------------------------------------------------------------------------------------------------------------------ --- Event Stage 0: Main Stage VecMDot 22 1.0 5.0474e-04 1.0 5.48e+08 1.0 0.0e+00 0.0e+00 0.0e+00 0 6 0 0 0 0 6 0 0 0 548 VecNorm 62 1.0 8.8694e-03 1.0 4.72e+07 1.0 0.0e+00 0.0e+00 0.0e+00 0 9 0 0 0 0 9 0 0 0 47 VecScale 32 1.0 3.8212e-04 1.0 2.83e+08 1.0 0.0e+00 0.0e+00 0.0e+00 0 2 0 0 0 0 2 0 0 0 283 VecCopy 81 1.0 1.1948e-03 1.0 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00 0 0 0 0 0 0 0 0 0 0 0 VecSet 88 1.0 8.4816e-04 1.0 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00 0 0 0 0 0 0 0 0 0 0 0 VecAXPY 10 1.0 1.8910e-04 1.0 3.57e+08 1.0 0.0e+00 0.0e+00 0.0e+00 0 1 0 0 0 0 1 0 0 0 357 VecWAXPY 10 1.0 2.6472e-04 1.0 1.27e+08 1.0 0.0e+00 0.0e+00 0.0e+00 0 1 0 0 0 0 1 0 0 0 127 VecMAXPY 32 1.0 1.0271e-03 1.0 4.14e+08 1.0 0.0e+00 0.0e+00 0.0e+00 0 9 0 0 0 0 9 0 0 0 414 VecAssemblyBegin 40 1.0 8.7160e-05 1.0 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00 0 0 0 0 0 0 0 0 0 0 0 VecAssemblyEnd 40 1.0 7.5617e-05 1.0 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00 0 0 0 0 0 0 0 0 0 0 0 VecScatterBegin 39 1.0 1.5163e-03 1.0 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00 0 0 0 0 0 0 0 0 0 0 0 VecNormalize 32 1.0 3.8553e-03 1.0 6.65e+07 1.0 0.0e+00 0.0e+00 0.0e+00 0 5 0 0 0 0 5 0 0 0 67 MatMult 22 1.0 1.5831e-02 1.0 2.16e+08 1.0 0.0e+00 0.0e+00 0.0e+00 0 72 0 0 0 0 72 0 0 0 216 MatAssemblyBegin 30 1.0 6.5176e-05 1.0 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00 0 0 0 0 0 0 0 0 0 0 0 MatAssemblyEnd 30 1.0 1.2829e-02 1.0 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00 0 0 0 0 0 0 0 0 0 0 0 MatZeroEntries 9 1.0 1.8313e-03 1.0 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00 0 0 0 0 0 0 0 0 0 0 0 SNESSolve 10 1.0 1.7674e+01 1.0 2.69e+05 1.0 0.0e+00 0.0e+00 3.0e+01 93100 0 0 0 94100 0 0 75 0 SNESLineSearch 10 1.0 3.7443e+00 1.0 4.51e+04 1.0 0.0e+00 0.0e+00 1.0e+01 20 4 0 0 0 20 4 0 0 25 0 SNESFunctionEval 20 1.0 7.2693e+00 1.0 0.00e+00 0.0 0.0e+00 0.0e+00 2.0e+01 38 0 0 0 0 39 0 0 0 50 0 SNESJacobianEval 10 1.0 1.0367e+01 1.0 0.00e+00 0.0 0.0e+00 0.0e+00 1.0e+01 55 0 0 0 0 55 0 0 0 25 0 KSPGMRESOrthog 22 1.0 1.4277e-03 1.0 3.88e+08 1.0 0.0e+00 0.0e+00 0.0e+00 0 12 0 0 0 0 12 0 0 0 388 KSPSetup 10 1.0 1.3128e-03 1.0 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00 0 0 0 0 0 0 0 0 0 0 0 KSPSolve 10 1.0 2.8431e-02 1.0 1.57e+08 1.0 0.0e+00 0.0e+00 0.0e+00 0 94 0 0 0 0 94 0 0 0 157 PCSetUp 10 1.0 2.5831e-05 1.0 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00 0 0 0 0 0 0 0 0 0 0 0 PCApply 32 1.0 5.7973e-04 1.0 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00 0 0 0 0 0 0 0 0 0 0 0 ------------------------------------------------------------------------------------------------------------------------ Memory usage is given in bytes: Object Type Creations Destructions Memory Descendants' Mem. --- Event Stage 0: Main Stage Index Set 40 40 548800 0 Vec 167 153 4198932 0 Vec Scatter 40 40 0 0 Matrix 1 0 0 0 SNES 10 9 1116 0 Krylov Solver 10 9 151920 0 Preconditioner 10 9 0 0 Viewer 1 0 0 0 ======================================================================================================================== Now it says SNESSolve takes 93% of the main stage, right? In that case what does is mean the 20% for SNESLinesearch, 38% for SNESFunctionEval and 55% for SNESJacobianEval? It cant be percentages of the main stage or of the SNESSolve. Do you have an idea? 
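(A quick reading of the numbers above, consistent with Matt's reply further down: the percentages are of total run time and the events nest inside each other, so the 20 function evaluations (7.27 s) and 10 Jacobian evaluations (10.37 s) are both counted inside the 17.67 s of SNESSolve, and 38% + 55% comes out to roughly the 93% reported for SNESSolve.)
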
Actually to answer your question, what is long is the first Newton iteration in the first time step and a debugging in DDD shows it too. So with the log summary i get, its obviously due to SNESSolve with Residual and Jacobian evaluations. Here is a part of my Jacobian and Residual computation routine in LibMesh for the basic Laplacian (- \Delta u = f ). Its called 'compute_jacobian' and 'compute_residual' respectively. Could you please look at it quickly and tell me if you see at first look something strange? void compute_jacobian (const NumericVector& soln, SparseMatrix& jacobian) { EquationSystems &es = *_equation_system; const MeshBase& mesh = es.get_mesh(); NonlinearImplicitSystem& system = es.get_system("dc"); const DofMap& dof_map = system.get_dof_map(); // Define the finite volume FV fv; MeshBase::const_element_iterator el = mesh.active_local_elements_begin(); const MeshBase::const_element_iterator end_el = mesh.active_local_elements_end(); // The loop on every simplex for ( ; el != end_el; ++el) { const Elem* elem = *el; dof_map.dof_indices (elem, dof_indices); fv.reinit(elem); n_dofs = dof_indices.size(); // = 4 for a hex Ke.resize (n_dofs, n_dofs); // Assemble the elementary matrix for the Laplacian problem (size 4*4) Ke=fv.elmmat(perm); dof_map.constrain_element_matrix (Ke, dof_indices); // Adds the small matrix Ke to the Jacobian jacobian.add_matrix (Ke, dof_indices); } } void compute_residual (const NumericVector& soln, NumericVector& residual) { EquationSystems &es = *_equation_system; const MeshBase& mesh = es.get_mesh(); NonlinearImplicitSystem& system = es.get_system("dc"); const DofMap& dof_map = system.get_dof_map(); // Define the finite volume FV fv; residual.zero(); MeshBase::const_element_iterator el = mesh.active_local_elements_begin(); const MeshBase::const_element_iterator end_el = mesh.active_local_elements_end(); // The loop on every simplex for ( ; el != end_el; ++el) { const Elem* elem = *el; dof_map.dof_indices (elem, dof_indices); fv.reinit(elem); n_dofs = dof_indices.size(); // = 4 for a hex Se.resize (n_dofs); // Compute the solution from the previous Newton iterate for (unsigned int l=0; lpoint(i); Re(i) = vol*(xyz(0)-0.5) ; for (unsigned int j=0; j wrote: Hi all, thank you Barry for the indication you gave me. As a matter of fact, i verified my jacobian and function evaluation again and again but i really dont see anything wrong in it. So i came back to the basic Laplacian problem (- \Delta u = f ) in the unit cube discretized in regular hexes. The numerical scheme i use is a vertex-centred finite volume scheme. The solution i get is correct compared to the exact solution (of second order) and i know my jacobian and residual evalutions are correct. But here is the log out i get. 
Event Count Time (sec) Flops/sec --- Global --- --- Stage --- Total Max Ratio Max Ratio Max Ratio Mess Avg len Reduct %T %F %M %L %R %T %F %M %L %R Mflop/s ------------------------------------------------------------------------------------------------------------------------ --- Event Stage 0: Main Stage VecMDot 71 1.0 2.9587e-02 1.0 6.23e+08 1.0 0.0e+00 0.0e+00 0.0e+00 0 25 0 0 0 0 25 0 0 0 623 VecNorm 77 1.0 3.3638e-02 1.0 4.24e+07 1.0 0.0e+00 0.0e+00 0.0e+00 0 2 0 0 0 0 2 0 0 0 42 VecScale 74 1.0 2.1052e-03 1.0 3.26e+08 1.0 0.0e+00 0.0e+00 0.0e+00 0 1 0 0 0 0 1 0 0 0 326 VecCopy 80 1.0 3.4863e-03 1.0 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00 0 0 0 0 0 0 0 0 0 0 0 VecSet 9 1.0 2.0776e-04 1.0 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00 0 0 0 0 0 0 0 0 0 0 0 VecAXPY 5 1.0 2.3208e-04 1.0 3.99e+08 1.0 0.0e+00 0.0e+00 0.0e+00 0 0 0 0 0 0 0 0 0 0 399 VecWAXPY 1 1.0 6.6995e-05 1.0 1.38e+08 1.0 0.0e+00 0.0e+00 0.0e+00 0 0 0 0 0 0 0 0 0 0 138 VecMAXPY 74 1.0 3.8138e-02 1.0 5.18e+08 1.0 0.0e+00 0.0e+00 0.0e+00 0 27 0 0 0 0 27 0 0 0 518 VecAssemblyBegin 4 1.0 9.8636e-06 1.0 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00 0 0 0 0 0 0 0 0 0 0 0 VecAssemblyEnd 4 1.0 6.9494e-06 1.0 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00 0 0 0 0 0 0 0 0 0 0 0 VecScatterBegin 3 1.0 3.0706e-04 1.0 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00 0 0 0 0 0 0 0 0 0 0 0 VecNormalize 74 1.0 3.4648e-02 1.0 5.88e+07 1.0 0.0e+00 0.0e+00 0.0e+00 0 3 0 0 0 0 3 0 0 0 59 MatMult 73 1.0 1.4618e-01 1.0 2.22e+08 1.0 0.0e+00 0.0e+00 0.0e+00 0 45 0 0 0 0 45 0 0 0 222 MatAssemblyBegin 2 1.0 6.9899e-06 1.0 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00 0 0 0 0 0 0 0 0 0 0 0 MatAssemblyEnd 2 1.0 6.1999e-03 1.0 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00 0 0 0 0 0 0 0 0 0 0 0 SNESSolve 1 1.0 6.7333e+01 1.0 1.08e+06 1.0 0.0e+00 0.0e+00 3.0e+00 99100 0 0100 99100 0 0100 1 SNESLineSearch 1 1.0 5.1989e-01 1.0 8.91e+04 1.0 0.0e+00 0.0e+00 1.0e+00 1 0 0 0 33 1 0 0 0 33 0 SNESFunctionEval 2 1.0 1.0441e+00 1.0 0.00e+00 0.0 0.0e+00 0.0e+00 2.0e+00 2 0 0 0 67 2 0 0 0 67 0 SNESJacobianEval 1 1.0 6.6026e+01 1.0 0.00e+00 0.0 0.0e+00 0.0e+00 1.0e+00 97 0 0 0 33 97 0 0 0 33 0 KSPGMRESOrthog 71 1.0 6.5884e-02 1.0 5.60e+08 1.0 0.0e+00 0.0e+00 0.0e+00 0 51 0 0 0 0 51 0 0 0 560 KSPSetup 1 1.0 2.2203e-04 1.0 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00 0 0 0 0 0 0 0 0 0 0 0 KSPSolve 1 1.0 2.6036e-01 1.0 2.80e+08 1.0 0.0e+00 0.0e+00 0.0e+00 0100 0 0 0 0100 0 0 0 280 PCSetUp 1 1.0 7.9495e-06 1.0 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00 0 0 0 0 0 0 0 0 0 0 0 PCApply 74 1.0 3.6445e-03 1.0 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00 0 0 0 0 0 0 0 0 0 0 0 ------------------------------------------------------------------------------------------------------------------------ Memory usage is given in bytes: Object Type Creations Destructions Memory Descendants' Mem. --- Event Stage 0: Main Stage Index Set 3 3 111792 0 Vec 44 3 223596 0 Vec Scatter 3 3 0 0 Matrix 1 0 0 0 SNES 1 0 0 0 Krylov Solver 1 0 0 0 Preconditioner 1 0 0 0 Viewer 2 0 0 0 Draw 1 0 0 0 ======================================================================================================================== Average time to get PetscTime(): 1.60268e-06 This shows that the Jacobian evaluation takes 97% of time and the residual just 2% in the SNESSolve. But if you look at the total MFlops, you can see that its null(i guess very low) for these phases. What seems to be long is the part in red concerning Vector manips. You can even see at the end that the most memory use is in Index set and Vec. This analysis does not make sense. 
If you add all the time spent in the Vec operations (in red), it is less than 1/100 of the time in the SNES Solve. There is obviously a problem in that routine, if there is indeed a problem. Do you have a model of the computation that says that this time is too long? Matt -- What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. -- Norbert Wiener _________________________________________________________________ News, entertainment and everything you care about at Live.com. Get it now! http://www.live.com/getstarted.aspx -------------- next part -------------- An HTML attachment was scrubbed... URL: From knepley at gmail.com Tue Mar 3 12:37:40 2009 From: knepley at gmail.com (Matthew Knepley) Date: Tue, 3 Mar 2009 12:37:40 -0600 Subject: petsc-users Digest, Vol 2, Issue 33 In-Reply-To: References: Message-ID: On Tue, Mar 3, 2009 at 9:17 AM, STEPHANE TCHOUANMO wrote: > Ok Matt you're right. The SNES Solve is definitely at fault. > But still there's something i dont understand in the log summary i get. > Take for example the one for the unsteady heat equation right after: > > Event Count Time (sec) > Flops/sec --- Global --- --- Stage --- Total > Max Ratio Max Ratio Max Ratio Mess Avg len > Reduct %T %F %M %L %R %T %F %M %L %R Mflop/s > > ------------------------------------------------------------------------------------------------------------------------ > > --- Event Stage 0: Main Stage > > VecMDot 22 1.0 5.0474e-04 1.0 5.48e+08 1.0 0.0e+00 0.0e+00 > 0.0e+00 0 6 0 0 0 0 6 0 0 0 548 > VecNorm 62 1.0 8.8694e-03 1.0 4.72e+07 1.0 0.0e+00 0.0e+00 > 0.0e+00 0 9 0 0 0 0 9 0 0 0 47 > VecScale 32 1.0 3.8212e-04 1.0 2.83e+08 1.0 0.0e+00 0.0e+00 > 0.0e+00 0 2 0 0 0 0 2 0 0 0 283 > VecCopy 81 1.0 1.1948e-03 1.0 0.00e+00 0.0 0.0e+00 0.0e+00 > 0.0e+00 0 0 0 0 0 0 0 0 0 0 0 > VecSet 88 1.0 8.4816e-04 1.0 0.00e+00 0.0 0.0e+00 0.0e+00 > 0.0e+00 0 0 0 0 0 0 0 0 0 0 0 > VecAXPY 10 1.0 1.8910e-04 1.0 3.57e+08 1.0 0.0e+00 0.0e+00 > 0.0e+00 0 1 0 0 0 0 1 0 0 0 357 > VecWAXPY 10 1.0 2.6472e-04 1.0 1.27e+08 1.0 0.0e+00 0.0e+00 > 0.0e+00 0 1 0 0 0 0 1 0 0 0 127 > VecMAXPY 32 1.0 1.0271e-03 1.0 4.14e+08 1.0 0.0e+00 0.0e+00 > 0.0e+00 0 9 0 0 0 0 9 0 0 0 414 > VecAssemblyBegin 40 1.0 8.7160e-05 1.0 0.00e+00 0.0 0.0e+00 0.0e+00 > 0.0e+00 0 0 0 0 0 0 0 0 0 0 0 > VecAssemblyEnd 40 1.0 7.5617e-05 1.0 0.00e+00 0.0 0.0e+00 0.0e+00 > 0.0e+00 0 0 0 0 0 0 0 0 0 0 0 > VecScatterBegin 39 1.0 1.5163e-03 1.0 0.00e+00 0.0 0.0e+00 0.0e+00 > 0.0e+00 0 0 0 0 0 0 0 0 0 0 0 > VecNormalize 32 1.0 3.8553e-03 1.0 6.65e+07 1.0 0.0e+00 0.0e+00 > 0.0e+00 0 5 0 0 0 0 5 0 0 0 67 > MatMult 22 1.0 1.5831e-02 1.0 2.16e+08 1.0 0.0e+00 0.0e+00 > 0.0e+00 0 72 0 0 0 0 72 0 0 0 216 > MatAssemblyBegin 30 1.0 6.5176e-05 1.0 0.00e+00 0.0 0.0e+00 0.0e+00 > 0.0e+00 0 0 0 0 0 0 0 0 0 0 0 > MatAssemblyEnd 30 1.0 1.2829e-02 1.0 0.00e+00 0.0 0.0e+00 0.0e+00 > 0.0e+00 0 0 0 0 0 0 0 0 0 0 0 > MatZeroEntries 9 1.0 1.8313e-03 1.0 0.00e+00 0.0 0.0e+00 0.0e+00 > 0.0e+00 0 0 0 0 0 0 0 0 0 0 0 > SNESSolve 10 1.0 1.7674e+01 1.0 2.69e+05 1.0 0.0e+00 0.0e+00 > 3.0e+01 93100 0 0 0 94100 0 0 75 0 > SNESLineSearch 10 1.0 3.7443e+00 1.0 4.51e+04 1.0 0.0e+00 0.0e+00 > 1.0e+01 20 4 0 0 0 20 4 0 0 25 0 > SNESFunctionEval 20 1.0 7.2693e+00 1.0 0.00e+00 0.0 0.0e+00 0.0e+00 > 2.0e+01 38 0 0 0 0 39 0 0 0 50 0 > SNESJacobianEval 10 1.0 1.0367e+01 1.0 0.00e+00 0.0 0.0e+00 0.0e+00 > 1.0e+01 55 0 0 0 0 55 0 0 0 25 0 > KSPGMRESOrthog 22 1.0 1.4277e-03 1.0 3.88e+08 1.0 0.0e+00 0.0e+00 
> 0.0e+00 0 12 0 0 0 0 12 0 0 0 388 > KSPSetup 10 1.0 1.3128e-03 1.0 0.00e+00 0.0 0.0e+00 0.0e+00 > 0.0e+00 0 0 0 0 0 0 0 0 0 0 0 > KSPSolve 10 1.0 2.8431e-02 1.0 1.57e+08 1.0 0.0e+00 0.0e+00 > 0.0e+00 0 94 0 0 0 0 94 0 0 0 157 > PCSetUp 10 1.0 2.5831e-05 1.0 0.00e+00 0.0 0.0e+00 0.0e+00 > 0.0e+00 0 0 0 0 0 0 0 0 0 0 0 > PCApply 32 1.0 5.7973e-04 1.0 0.00e+00 0.0 0.0e+00 0.0e+00 > 0.0e+00 0 0 0 0 0 0 0 0 0 0 0 > > ------------------------------------------------------------------------------------------------------------------------ > > Memory usage is given in bytes: > > Object Type Creations Destructions Memory Descendants' Mem. > > --- Event Stage 0: Main Stage > > Index Set 40 40 548800 0 > Vec 167 153 4198932 0 > Vec Scatter 40 40 0 0 > Matrix 1 0 0 0 > SNES 10 9 1116 0 > Krylov Solver 10 9 151920 0 > Preconditioner 10 9 0 0 > Viewer 1 0 0 0 > > ======================================================================================================================== > > Now it says SNESSolve takes 93% of the main stage, right? > In that case what does is mean the 20% for SNESLinesearch, 38% for > SNESFunctionEval and 55% for SNESJacobianEval? It cant be percentages of the > main stage or of the SNESSolve. Do you have an idea? > 1) There is not strict separation or nesting for events. For instance, line search, function eval, and Jacobian eval all happen inside SNESolve. In addition, function evaluation happens inside line search. 2) However, clearly jac eval + func eval = solve roughly. Thus, nothing else is taking any time. This is confirmed by looking at KSPSolve, which takes no time. 3) If you say that the first step (of 10) takes most of the time, the conclusion for me is inescapable. LibMesh is making a whole bunch of allocation calls during the first time step, which is very very slow. After that, it has the memory and everything runs fine. I suggest talking to the LibMesh developers. Matt > > Actually to answer your question, what is long is the first Newton > iteration in the first time step and a debugging in DDD shows it too. So > with the log summary i get, its obviously due to SNESSolve with Residual and > Jacobian evaluations. > Here is a part of my Jacobian and Residual computation routine in LibMesh > for the basic Laplacian (- \Delta u = f ). Its called 'compute_jacobian' > and 'compute_residual' respectively. Could you please look at it quickly and > tell me if you see at first look something strange? 
> > > * void compute_jacobian (const NumericVector& soln, > SparseMatrix& jacobian) > { > EquationSystems &es = *_equation_system; > > const MeshBase& mesh = es.get_mesh(); > > NonlinearImplicitSystem& system = > es.get_system("dc"); > > const DofMap& dof_map = system.get_dof_map(); > > // Define the finite volume > FV fv; > > MeshBase::const_element_iterator el = > mesh.active_local_elements_begin(); > const MeshBase::const_element_iterator end_el = > mesh.active_local_elements_end(); > > // The loop on every simplex > for ( ; el != end_el; ++el) > { > const Elem* elem = *el; > > dof_map.dof_indices (elem, dof_indices); > > fv.reinit(elem); > > n_dofs = dof_indices.size(); // = 4 for a hex > > Ke.resize (n_dofs, n_dofs); > > // Assemble the elementary matrix for the Laplacian problem (size > 4*4) > Ke=fv.elmmat(perm); > > dof_map.constrain_element_matrix (Ke, dof_indices); > > // Adds the small matrix Ke to the Jacobian > jacobian.add_matrix (Ke, dof_indices); > } > } > > void compute_residual (const NumericVector& soln, > NumericVector& residual) > { > EquationSystems &es = *_equation_system; > > const MeshBase& mesh = es.get_mesh(); > > NonlinearImplicitSystem& system = > es.get_system("dc"); > > const DofMap& dof_map = system.get_dof_map(); > > // Define the finite volume > FV fv; > > residual.zero(); > > MeshBase::const_element_iterator el = > mesh.active_local_elements_begin(); > const MeshBase::const_element_iterator end_el = > mesh.active_local_elements_end(); > > ** // The loop on every simplex** > for ( ; el != end_el; ++el) > { > const Elem* elem = *el; > dof_map.dof_indices (elem, dof_indices); > > fv.reinit(elem); > n_dofs = dof_indices.size(); **// = 4 for a hex** > Se.resize (n_dofs); > > // Compute the solution from the previous Newton iterate > for (unsigned int l=0; l Se(l) = soln(dof_indices[l]); > > Re.resize (n_dofs); > > elmMat=fv.elmmat(perm); > > for (unsigned int i=0; i { > vol=fv.elmvolume(i); > xyz=elem->point(i); > > Re(i) = vol*(xyz(0)-0.5) ; > > for (unsigned int j=0; j Re(i) += elmMat(i,j)*Se(j) ; > } > dof_map.constrain_element_vector (Re, dof_indices); > residual.add_vector (Re, dof_indices); > } > } > > * > and thats it! > > What amazes me is that i always get the right solution after resolution. > > Thanks a lot. > > Stephane > > > > > > > ------------------------------ > Date: Tue, 3 Mar 2009 08:15:15 -0600 > Subject: Re: petsc-users Digest, Vol 2, Issue 33 > From: knepley at gmail.com > To: petsc-users at mcs.anl.gov > CC: tchouanm at msn.com > > > On Tue, Mar 3, 2009 at 7:54 AM, STEPHANE TCHOUANMO wrote: > > Hi all, > > thank you Barry for the indication you gave me. > > As a matter of fact, i verified my jacobian and function evaluation again > and again but i really dont see anything wrong in it. > So i came back to the basic Laplacian problem (- \Delta u = f ) in the unit > cube discretized in regular hexes. The numerical scheme i use is a > vertex-centred finite volume scheme. > The solution i get is correct compared to the exact solution (of second > order) and i know my jacobian and residual evalutions are correct. But here > is the log out i get. 
> > > Event Count Time (sec) > Flops/sec --- Global --- --- Stage --- Total > Max Ratio Max Ratio Max Ratio Mess Avg len > Reduct %T %F %M %L %R %T %F %M %L %R Mflop/s > > ------------------------------------------------------------------------------------------------------------------------ > > --- Event Stage 0: Main Stage > > VecMDot 71 1.0 2.9587e-02 1.0 6.23e+08 1.0 0.0e+00 0.0e+00 > 0.0e+00 0 25 0 0 0 0 25 0 0 0 623 > VecNorm 77 1.0 3.3638e-02 1.0 4.24e+07 1.0 0.0e+00 0.0e+00 > 0.0e+00 0 2 0 0 0 0 2 0 0 0 42 > VecScale 74 1.0 2.1052e-03 1.0 3.26e+08 1.0 0.0e+00 0.0e+00 > 0.0e+00 0 1 0 0 0 0 1 0 0 0 326 > VecCopy 80 1.0 3.4863e-03 1.0 0.00e+00 0.0 0.0e+00 0.0e+00 > 0.0e+00 0 0 0 0 0 0 0 0 0 0 0 > VecSet 9 1.0 2.0776e-04 1.0 0.00e+00 0.0 0.0e+00 0.0e+00 > 0.0e+00 0 0 0 0 0 0 0 0 0 0 0 > VecAXPY 5 1.0 2.3208e-04 1.0 3.99e+08 1.0 0.0e+00 0.0e+00 > 0.0e+00 0 0 0 0 0 0 0 0 0 0 399 > VecWAXPY 1 1.0 6.6995e-05 1.0 1.38e+08 1.0 0.0e+00 0.0e+00 > 0.0e+00 0 0 0 0 0 0 0 0 0 0 138 > VecMAXPY 74 1.0 3.8138e-02 1.0 5.18e+08 1.0 0.0e+00 0.0e+00 > 0.0e+00 0 27 0 0 0 0 27 0 0 0 518 > VecAssemblyBegin 4 1.0 9.8636e-06 1.0 0.00e+00 0.0 0.0e+00 0.0e+00 > 0.0e+00 0 0 0 0 0 0 0 0 0 0 0 > VecAssemblyEnd 4 1.0 6.9494e-06 1.0 0.00e+00 0.0 0.0e+00 0.0e+00 > 0.0e+00 0 0 0 0 0 0 0 0 0 0 0 > VecScatterBegin 3 1.0 3.0706e-04 1.0 0.00e+00 0.0 0.0e+00 0.0e+00 > 0.0e+00 0 0 0 0 0 0 0 0 0 0 0 > VecNormalize 74 1.0 3.4648e-02 1.0 5.88e+07 1.0 0.0e+00 0.0e+00 > 0.0e+00 0 3 0 0 0 0 3 0 0 0 59 > MatMult 73 1.0 1.4618e-01 1.0 2.22e+08 1.0 0.0e+00 0.0e+00 > 0.0e+00 0 45 0 0 0 0 45 0 0 0 222 > MatAssemblyBegin 2 1.0 6.9899e-06 1.0 0.00e+00 0.0 0.0e+00 0.0e+00 > 0.0e+00 0 0 0 0 0 0 0 0 0 0 0 > MatAssemblyEnd 2 1.0 6.1999e-03 1.0 0.00e+00 0.0 0.0e+00 0.0e+00 > 0.0e+00 0 0 0 0 0 0 0 0 0 0 0 > SNESSolve 1 1.0 6.7333e+01 1.0 1.08e+06 1.0 0.0e+00 0.0e+00 > 3.0e+00 99100 0 0100 99100 0 0100 1 > SNESLineSearch 1 1.0 5.1989e-01 1.0 8.91e+04 1.0 0.0e+00 0.0e+00 > 1.0e+00 1 0 0 0 33 1 0 0 0 33 0 > SNESFunctionEval 2 1.0 1.0441e+00 1.0 0.00e+00 0.0 0.0e+00 0.0e+00 > 2.0e+00 2 0 0 0 67 2 0 0 0 67 0 > SNESJacobianEval 1 1.0 6.6026e+01 1.0 0.00e+00 0.0 0.0e+00 0.0e+00 > 1.0e+00 97 0 0 0 33 97 0 0 0 33 0 > KSPGMRESOrthog 71 1.0 6.5884e-02 1.0 5.60e+08 1.0 0.0e+00 0.0e+00 > 0.0e+00 0 51 0 0 0 0 51 0 0 0 560 > KSPSetup 1 1.0 2.2203e-04 1.0 0.00e+00 0.0 0.0e+00 0.0e+00 > 0.0e+00 0 0 0 0 0 0 0 0 0 0 0 > KSPSolve 1 1.0 2.6036e-01 1.0 2.80e+08 1.0 0.0e+00 0.0e+00 > 0.0e+00 0100 0 0 0 0100 0 0 0 280 > PCSetUp 1 1.0 7.9495e-06 1.0 0.00e+00 0.0 0.0e+00 0.0e+00 > 0.0e+00 0 0 0 0 0 0 0 0 0 0 0 > PCApply 74 1.0 3.6445e-03 1.0 0.00e+00 0.0 0.0e+00 0.0e+00 > 0.0e+00 0 0 0 0 0 0 0 0 0 0 0 > > ------------------------------------------------------------------------------------------------------------------------ > > Memory usage is given in bytes: > > Object Type Creations Destructions Memory Descendants' Mem. > > --- Event Stage 0: Main Stage > > Index Set 3 3 111792 0 > Vec 44 3 223596 0 > Vec Scatter 3 3 0 0 > Matrix 1 0 0 0 > SNES 1 0 0 0 > Krylov Solver 1 0 0 0 > Preconditioner 1 0 0 0 > Viewer 2 0 0 0 > Draw 1 0 0 0 > > ======================================================================================================================== > Average time to get PetscTime(): 1.60268e-06 > > > This shows that the Jacobian evaluation takes 97% of time and the residual > just 2% in the SNESSolve. But if you look at the total MFlops, you can see > that its null(i guess very low) for these phases. 
What seems to be long is > the part in red concerning Vector manips. You can even see at the end that > the most memory use is in Index set and Vec. > > > This analysis does not make sense. If you add all the time spent in the Vec > operations (in red), it is less than 1/100 of the time in the > SNES Solve. There is obviously a problem in that routine, if there is > indeed a problem. Do you have a model of the computation that > says that this time is too long? > > Matt > > -- > What most experimenters take for granted before they begin their > experiments is infinitely more interesting than any results to which their > experiments lead. > -- Norbert Wiener > > ------------------------------ > Get news, entertainment and everything you care about at Live.com. Check > it out! > -- What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. -- Norbert Wiener -------------- next part -------------- An HTML attachment was scrubbed... URL: From bsmith at mcs.anl.gov Tue Mar 3 13:56:19 2009 From: bsmith at mcs.anl.gov (Barry Smith) Date: Tue, 3 Mar 2009 13:56:19 -0600 Subject: PCNN preconditioner In-Reply-To: <49ACEB61.7040906@student.uibk.ac.at> References: <49A6BF7C.1030503@student.uibk.ac.at> <49A7B040.9050309@student.uibk.ac.at> <49ACEB61.7040906@student.uibk.ac.at> Message-ID: Sorry, deleted the original email by mistake. >> err = PCSetType(prec,PCNN);CHKERRQ(ierr); >> //ierr = PCFactorSetShiftPd(prec,PETSC_TRUE);CHKERRQ(ierr); You need to set the option for the pc that is doing the factorization; this is a PC that is inside the prec. The easiest way to find these things is by running with -help and then looking for the prefix For the subdomain solves the prefix are is_localD_ and is_localN_ so you should use the options - is_localD_pc_factor_shift_positive_definite and -is_localN_pc_factor_shift_positive_definite There is currently no subroutine that "pulls out" the inner KSP's for the Neuman and Dirichlet problems for use in the code; though there should be PCISGetDPC() and PCISGetNPC() that would get the pointer to ksp_N and ksp_D objects inside the PC_IS data structured defined in src/ksp/pc/impls/ is/pcis.h You can easily add these routines. Then use them to get the inner PC and set the shift option (and anything else you want to set). All the code for the NN is in src/ksp/pc/is and src/ksp/pc/is/nn you'll have to dig around in there to figure things out. This piece of code was written a long time ago and is hardly ever used. Barry On Mar 3, 2009, at 2:33 AM, Andreas Grassl wrote: > any suggestions? > > cheers > > ando > > Andreas Grassl schrieb: >> Barry Smith schrieb: >>> Use MatCreateIS() to create the matrix. Use MatSetValuesLocal() to >>> put >>> the values in the matrix >>> then use PCSetType(pc,PCNN); to set the preconditioner to NN. >>> >> >> I followed your advice, but still run into problems. 
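A rough tally of the -log_summary numbers in the message above backs up Matt's point about the Vec operations (these figures are read straight off the log):

   VecMDot + VecNorm + VecMAXPY + VecNormalize  ~  0.030 + 0.034 + 0.038 + 0.035 s  ~  0.14 s
   SNESSolve           67.3  s
   SNESJacobianEval    66.0  s   (about 98% of SNESSolve)
   SNESFunctionEval     1.04 s
   KSPSolve             0.26 s

So the vector manipulations account for a fraction of a percent of the run; essentially all of the time is spent inside the user-supplied Jacobian routine, not in the PETSc vector or solver code.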
>> >> my sourcecode: >> >> ierr = KSPCreate(comm,&solver);CHKERRQ(ierr); >> ierr = >> KSPSetOperators(solver,A,A,DIFFERENT_NONZERO_PATTERN);CHKERRQ(ierr); >> ierr = KSPSetInitialGuessNonzero(solver,PETSC_TRUE);CHKERRQ(ierr); >> ierr = KSPGetPC(solver,&prec);CHKERRQ(ierr); >> ierr = PCSetType(prec,PCNN);CHKERRQ(ierr); >> //ierr = PCFactorSetShiftPd(prec,PETSC_TRUE);CHKERRQ(ierr); >> ierr = KSPSetUp(solver);CHKERRQ(ierr); >> ierr = KSPSolve(solver,B,X);CHKERRQ(ierr); >> >> and the error message: >> >> [0]PETSC ERROR: --------------------- Error Message >> ------------------------------------ >> [0]PETSC ERROR: Detected zero pivot in LU factorization >> see >> http://www.mcs.anl.gov/petsc/petsc-as/documentation/troubleshooting.html#ZeroPivot >> ! >> [0]PETSC ERROR: Zero pivot row 801 value 2.78624e-13 tolerance >> 4.28598e-12 * rowsum 4.28598! >> [0]PETSC ERROR: >> ------------------------------------------------------------------------ >> [0]PETSC ERROR: Petsc Release Version 3.0.0, Patch 3, Fri Jan 30 >> 17:55:56 CST 2009 >> [0]PETSC ERROR: See docs/changes/index.html for recent updates. >> [0]PETSC ERROR: See docs/faq.html for hints about trouble shooting. >> [0]PETSC ERROR: See docs/index.html for manual pages. >> [0]PETSC ERROR: >> ------------------------------------------------------------------------ >> [0]PETSC ERROR: Unknown Name on a linux64-g named mat1.uibk.ac.at by >> csae1801 Fri Feb 27 10:12:34 2009 >> [0]PETSC ERROR: Libraries linked from >> /home/lux/csae1801/petsc/petsc-3.0.0-p3/linux64-gnu-c-debug/lib >> [0]PETSC ERROR: Configure run at Wed Feb 18 10:30:58 2009 >> [0]PETSC ERROR: Configure options --with-64-bit-indices >> --with-scalar-type=real --with-precision=double --with-cc=icc >> --with-fc=ifort --with-cxx=icpc --with-shared=0 --with-mpi=1 >> --download-mpich=ifneeded --with-scalapack=1 >> --download-scalapack=ifneeded --download-f-blas-lapack=yes >> --with-blacs=1 --download-blacs=yes PETSC_ARCH=linux64-gnu-c-debug >> [0]PETSC ERROR: >> ------------------------------------------------------------------------ >> [0]PETSC ERROR: MatLUFactorNumeric_Inode() line 1335 in >> src/mat/impls/aij/seq/inode.c >> [0]PETSC ERROR: MatLUFactorNumeric() line 2338 in src/mat/interface/ >> matrix.c >> [0]PETSC ERROR: PCSetUp_LU() line 222 in src/ksp/pc/impls/factor/lu/ >> lu.c >> [0]PETSC ERROR: PCSetUp() line 794 in src/ksp/pc/interface/precon.c >> [0]PETSC ERROR: KSPSetUp() line 237 in src/ksp/ksp/interface/itfunc.c >> [0]PETSC ERROR: PCISSetUp() line 137 in src/ksp/pc/impls/is/pcis.c >> [0]PETSC ERROR: PCSetUp_NN() line 28 in src/ksp/pc/impls/is/nn/nn.c >> [0]PETSC ERROR: PCSetUp() line 794 in src/ksp/pc/interface/precon.c >> [0]PETSC ERROR: KSPSetUp() line 237 in src/ksp/ksp/interface/itfunc.c >> [0]PETSC ERROR: User provided function() line 1274 in petscsolver.c >> >> >> Running PCFactorSetShift doesn't affect the output. >> >> any ideas? >> >> cheers >> >> ando >> > > -- > /"\ > \ / ASCII Ribbon > X against HTML email > / \ > > From damian.kaleta at mail.utexas.edu Tue Mar 3 14:51:52 2009 From: damian.kaleta at mail.utexas.edu (Damian Kaleta) Date: Tue, 3 Mar 2009 14:51:52 -0600 Subject: fortran basic code Message-ID: <84A8EF35-3790-45BC-9E4E-4C622C39B73B@mail.utexas.edu> Hi I have a hard time implementing very simple program using fortran. The same code works perfectly for me in C. 
Here is a source code: program main implicit none #include "/home/damian/petsc-2.3.3-p15/include/finclude/petsc.h" #include "/home/damian/petsc-2.3.3-p15/include/finclude/petscpc.h" #include "/home/damian/petsc-2.3.3-p15/include/finclude/petscsys.h" #include "/home/damian/petsc-2.3.3-p15/include/finclude/petscvec.h" #include "/home/damian/petsc-2.3.3-p15/include/finclude/petscmat.h" PetscErrorCode::pierr Mat::zpastp PetscInt::s s=3 call PetscInitialize(PETSC_NULL_CHARACTER,pierr) call MatCreateSeqAIJ(PETSC_COMM_SELF, 10, 10, 1, PETSC_NULL_INTEGER, zpastp, pierr) call MatSetValue(zpastp, 2, 2, s, INSERT_VALUES,pierr) call MatAssemblyBegin(zpastp,MAT_FINAL_ASSEMBLY,pierr) call MatAssemblyEnd(zpastp,MAT_FINAL_ASSEMBLY,pierr) call MatView(zpastp, PETSC_VIEWER_STDOUT_SELF, pierr) call MatDestroy(zpastp,pierr) call PetscFinalize(pierr) end and the result: [damian at utcem001 pet]$ ./ex4f row 0: row 1: row 2: (2, 1.4822e-323) row 3: row 4: row 5: row 6: row 7: row 8: row 9: why don't I get 3 ? But some number close to zero ? I build PETSc to use integers. Thanks, Damian From bsmith at mcs.anl.gov Tue Mar 3 15:11:17 2009 From: bsmith at mcs.anl.gov (Barry Smith) Date: Tue, 3 Mar 2009 15:11:17 -0600 Subject: fortran basic code In-Reply-To: <84A8EF35-3790-45BC-9E4E-4C622C39B73B@mail.utexas.edu> References: <84A8EF35-3790-45BC-9E4E-4C622C39B73B@mail.utexas.edu> Message-ID: <1796DF71-D275-4795-B35E-532A9673BDD7@mcs.anl.gov> On Mar 3, 2009, at 2:51 PM, Damian Kaleta wrote: > Hi > > I have a hard time implementing very simple program using fortran. > The same code works perfectly for me in C. > > Here is a source code: > > program main > implicit none > > > #include "/home/damian/petsc-2.3.3-p15/include/finclude/petsc.h" > #include "/home/damian/petsc-2.3.3-p15/include/finclude/petscpc.h" > #include "/home/damian/petsc-2.3.3-p15/include/finclude/petscsys.h" > #include "/home/damian/petsc-2.3.3-p15/include/finclude/petscvec.h" > #include "/home/damian/petsc-2.3.3-p15/include/finclude/petscmat.h" > > > PetscErrorCode::pierr > Mat::zpastp > PetscInt::s ^^^^^^^^^^ > > > s=3 > > call PetscInitialize(PETSC_NULL_CHARACTER,pierr) > call MatCreateSeqAIJ(PETSC_COMM_SELF, 10, 10, 1, PETSC_NULL_INTEGER, > zpastp, pierr) > > call MatSetValue(zpastp, 2, 2, s, INSERT_VALUES,pierr) ^^^^^^^^^^^^ You are passing an integer in where a double is expected. Change PetscInt::s to PetscScalar::s and it will work Or, do the smart thing and stick to C :-) Barry > > > call MatAssemblyBegin(zpastp,MAT_FINAL_ASSEMBLY,pierr) > call MatAssemblyEnd(zpastp,MAT_FINAL_ASSEMBLY,pierr) > > call MatView(zpastp, PETSC_VIEWER_STDOUT_SELF, pierr) > call MatDestroy(zpastp,pierr) > call PetscFinalize(pierr) > > end > > > and the result: > [damian at utcem001 pet]$ ./ex4f > row 0: > row 1: > row 2: (2, 1.4822e-323) > row 3: > row 4: > row 5: > row 6: > row 7: > row 8: > row 9: > > why don't I get 3 ? But some number close to zero ? I build PETSc to > use integers. > > > Thanks, > Damian From knepley at gmail.com Tue Mar 3 15:16:46 2009 From: knepley at gmail.com (Matthew Knepley) Date: Tue, 3 Mar 2009 15:16:46 -0600 Subject: fortran basic code In-Reply-To: <84A8EF35-3790-45BC-9E4E-4C622C39B73B@mail.utexas.edu> References: <84A8EF35-3790-45BC-9E4E-4C622C39B73B@mail.utexas.edu> Message-ID: MatSetValue() takes a PetscScalar, but you declared s as PetscInt. Matt On Tue, Mar 3, 2009 at 2:51 PM, Damian Kaleta wrote: > Hi > > I have a hard time implementing very simple program using fortran. The same > code works perfectly for me in C. 
> > Here is a source code: > > program main > implicit none > > > #include "/home/damian/petsc-2.3.3-p15/include/finclude/petsc.h" > #include "/home/damian/petsc-2.3.3-p15/include/finclude/petscpc.h" > #include "/home/damian/petsc-2.3.3-p15/include/finclude/petscsys.h" > #include "/home/damian/petsc-2.3.3-p15/include/finclude/petscvec.h" > #include "/home/damian/petsc-2.3.3-p15/include/finclude/petscmat.h" > > > PetscErrorCode::pierr > Mat::zpastp > PetscInt::s > > s=3 > > call PetscInitialize(PETSC_NULL_CHARACTER,pierr) > call MatCreateSeqAIJ(PETSC_COMM_SELF, 10, 10, 1, PETSC_NULL_INTEGER, > zpastp, pierr) > > call MatSetValue(zpastp, 2, 2, s, INSERT_VALUES,pierr) > > call MatAssemblyBegin(zpastp,MAT_FINAL_ASSEMBLY,pierr) > call MatAssemblyEnd(zpastp,MAT_FINAL_ASSEMBLY,pierr) > > call MatView(zpastp, PETSC_VIEWER_STDOUT_SELF, pierr) > call MatDestroy(zpastp,pierr) > call PetscFinalize(pierr) > > end > > > and the result: > [damian at utcem001 pet]$ ./ex4f > row 0: > row 1: > row 2: (2, 1.4822e-323) > row 3: > row 4: > row 5: > row 6: > row 7: > row 8: > row 9: > > why don't I get 3 ? But some number close to zero ? I build PETSc to use > integers. > > > Thanks, > Damian > -- What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. -- Norbert Wiener -------------- next part -------------- An HTML attachment was scrubbed... URL: From xy2102 at columbia.edu Tue Mar 3 15:42:21 2009 From: xy2102 at columbia.edu ((Rebecca) Xuefei YUAN) Date: Tue, 03 Mar 2009 16:42:21 -0500 Subject: How could I see the solution in the FGMREScycle()? Message-ID: <20090303164221.i963xq0c0oosgss8@cubmail.cc.columbia.edu> Hi, I am going to track the updated solution, but I was not able to see the solution in FGMREScycle() where linear system is solved. Any other options provided to take a look at those values? Thanks, -- (Rebecca) Xuefei YUAN Department of Applied Physics and Applied Mathematics Columbia University Tel:917-399-8032 www.columbia.edu/~xy2102 From knepley at gmail.com Tue Mar 3 15:47:33 2009 From: knepley at gmail.com (Matthew Knepley) Date: Tue, 3 Mar 2009 15:47:33 -0600 Subject: How could I see the solution in the FGMREScycle()? In-Reply-To: <20090303164221.i963xq0c0oosgss8@cubmail.cc.columbia.edu> References: <20090303164221.i963xq0c0oosgss8@cubmail.cc.columbia.edu> Message-ID: The solution is expensive to build in GMRES. You have to do this explicitly if you want it by using a custom monitor and calling KSPBuildSolution(). Matt On Tue, Mar 3, 2009 at 3:42 PM, (Rebecca) Xuefei YUAN wrote: > > Hi, > > I am going to track the updated solution, but I was not able to see the > solution in FGMREScycle() where linear system is solved. Any other options > provided to take a look at those values? > > Thanks, > > -- > (Rebecca) Xuefei YUAN > Department of Applied Physics and Applied Mathematics > Columbia University > Tel:917-399-8032 > www.columbia.edu/~xy2102 > > -- What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. -- Norbert Wiener -------------- next part -------------- An HTML attachment was scrubbed... URL: From bsmith at mcs.anl.gov Tue Mar 3 15:53:46 2009 From: bsmith at mcs.anl.gov (Barry Smith) Date: Tue, 3 Mar 2009 15:53:46 -0600 Subject: How could I see the solution in the FGMREScycle()? 
In-Reply-To: <20090303164221.i963xq0c0oosgss8@cubmail.cc.columbia.edu> References: <20090303164221.i963xq0c0oosgss8@cubmail.cc.columbia.edu> Message-ID: <89D03822-9DA4-4BCD-891E-AEB6F4D9D875@mcs.anl.gov> The GMRES family of solvers does not compute the solution explicitly during the iterations. It is only at the end that it builds the solution from the Krylov space it has generated. Computing the solution at each iteration is expensive and best avoided. You can use KSPBuildSolution() called from inside a monitor routine you provide with KSPMonitorSet() to compute the solution and then display it anyway you like with VecView() for example. Barry On Mar 3, 2009, at 3:42 PM, (Rebecca) Xuefei YUAN wrote: > > Hi, > > I am going to track the updated solution, but I was not able to see > the solution in FGMREScycle() where linear system is solved. Any > other options provided to take a look at those values? > > Thanks, > > -- > (Rebecca) Xuefei YUAN > Department of Applied Physics and Applied Mathematics > Columbia University > Tel:917-399-8032 > www.columbia.edu/~xy2102 > From Andreas.Grassl at student.uibk.ac.at Fri Mar 6 02:20:27 2009 From: Andreas.Grassl at student.uibk.ac.at (Andreas Grassl) Date: Fri, 06 Mar 2009 09:20:27 +0100 Subject: PCNN preconditioner In-Reply-To: References: <49A6BF7C.1030503@student.uibk.ac.at> <49A7B040.9050309@student.uibk.ac.at> <49ACEB61.7040906@student.uibk.ac.at> Message-ID: <49B0DCCB.1090100@student.uibk.ac.at> Barry Smith schrieb: > > For the subdomain solves the prefix are is_localD_ and is_localN_ > so you should use the options > -is_localD_pc_factor_shift_positive_definite and > -is_localN_pc_factor_shift_positive_definite With both options it is working now. > > There is currently no subroutine that "pulls out" the inner KSP's for > the > Neuman and Dirichlet problems for use in the code; though there should be > PCISGetDPC() and PCISGetNPC() that would get the pointer to ksp_N and ksp_D > objects inside the PC_IS data structured defined in > src/ksp/pc/impls/is/pcis.h > You can easily add these routines. Then use them to get the inner PC and > set > the shift option (and anything else you want to set). I'll try later, for now I'm happy with the options Thank you for helping, I'll give some feedback cheers ando -- /"\ Grassl Andreas \ / ASCII Ribbon Campaign Uni Innsbruck Institut f. Mathematik X against HTML email Technikerstr. 13 Zi 709 / \ +43 (0)512 507 6091 From C.Klaij at marin.nl Fri Mar 6 03:12:45 2009 From: C.Klaij at marin.nl (Klaij, Christiaan) Date: Fri, 6 Mar 2009 10:12:45 +0100 Subject: intel compilers and ml fails References: <5D9143EF9FADE942BEF6F2A636A861170800F671@MAR150CV1.marin.local> Message-ID: <5D9143EF9FADE942BEF6F2A636A861170800F67E@MAR150CV1.marin.local> I found a workaround for this problem: When PETSc configures ML it uses a flag called -lPEPCF90 which causes the error. I configured ML manually using the exact same command that PETSc would use (taken from the log), except for the -lPEPCF90 flag. Then I configured PETSc with --with-ml-include=/path/to/ml/include --with-ml-lib=/path/to/ml/liblibml.a and now -pc_type ml is working as expected. Chris -----Original Message----- From: Klaij, Christiaan Sent: Tue 3/3/2009 2:50 PM To: petsc-users at mcs.anl.gov Subject: intel compilers and ml fails Hello, I'm trying to build petsc-3.0.0-p3 using the intel compilers and math kernel on a linux pc. Everything's fine as long as I don't use ml. When I do use ml, I get an "Error running configure on ML". 
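The custom-monitor route Matt and Barry describe above might look roughly like the following sketch. MyMonitor is an invented name used only for illustration; KSPMonitorSet(), KSPBuildSolution() and VecView() are the actual PETSc calls (3.0-era signatures):

   #include "petscksp.h"

   /* Called once per KSP iteration: build the current iterate and view it. */
   PetscErrorCode MyMonitor(KSP ksp, PetscInt it, PetscReal rnorm, void *ctx)
   {
     Vec            x;
     PetscErrorCode ierr;

     /* For (F)GMRES this assembles the solution from the Krylov basis, so it adds cost. */
     ierr = KSPBuildSolution(ksp, PETSC_NULL, &x); CHKERRQ(ierr);
     ierr = VecView(x, PETSC_VIEWER_STDOUT_WORLD); CHKERRQ(ierr);
     return 0;
   }

   /* in the setup code, after KSPCreate()/KSPSetOperators(): */
   ierr = KSPMonitorSet(ksp, MyMonitor, PETSC_NULL, PETSC_NULL); CHKERRQ(ierr);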
The ML configure script says "configure: error: linking to Fortran libraries from C fails".

This is my configure command:

$ config/configure.py --with-cxx=icpc --with-cc=icc --with-fc=ifort --with-blas-lapack-dir=/opt/intel/mkl/10.1.1.019 --download-ml=1 --download-mpich=1

There's no problem when using gcc and gfortran. The intel compilers and mkl are version 10.1.

Any ideas on how to fix this problem?

Chris

-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From enjoywm at cs.wm.edu  Fri Mar  6 12:05:59 2009
From: enjoywm at cs.wm.edu (Yixun Liu)
Date: Fri, 06 Mar 2009 13:05:59 -0500
Subject: memory allocation
Message-ID: <49B16607.8060908@cs.wm.edu>

Hi,
To allocate space for a matrix I can specify the exact number of non-zero
elements for the diagonal and off-diagonal parts, or just specify a maximal
number for each row. Do they have the same performance?

Thanks.

Yixun

From enjoywm at cs.wm.edu  Fri Mar  6 12:50:25 2009
From: enjoywm at cs.wm.edu (Yixun Liu)
Date: Fri, 06 Mar 2009 13:50:25 -0500
Subject: matrix assembling time
Message-ID: <49B17071.1030104@cs.wm.edu>

Hi,
Using PETSc, the assembling time for a mesh with 6000 vertices is about
14 seconds parallelized on 4 processors, but another sequential program
based on the gmm lib takes about 0.6 seconds. PETSc's solver is much faster than
gmm, but I don't know why its assembling is so slow, although I have
preallocated enough space for the matrix.

MatMPIAIJSetPreallocation(sparseMeshMechanicalStiffnessMatrix, 1000,
PETSC_NULL, 1000, PETSC_NULL);

Yixun

From bsmith at mcs.anl.gov  Fri Mar  6 12:58:50 2009
From: bsmith at mcs.anl.gov (Barry Smith)
Date: Fri, 6 Mar 2009 12:58:50 -0600
Subject: memory allocation
In-Reply-To: <49B16607.8060908@cs.wm.edu>
References: <49B16607.8060908@cs.wm.edu>
Message-ID: 

On Mar 6, 2009, at 12:05 PM, Yixun Liu wrote:

> Hi,
> To allocate space for a matrix I can specify the exact number of non-zero
> elements for the diagonal and off-diagonal parts, or just specify a maximal
> number for each row. Do they have the same performance?

If you specify the exact counts for each row then there will be no
additional memory allocations and no wasted space. If you provide an
upper bound then there will be no additional memory allocations, and some
space (depending on the structure of your matrix) will be unused.
Aside from memory usage, both approaches will take the same amount of time.

   Barry

>
> Thanks.
>
> Yixun

From bsmith at mcs.anl.gov  Fri Mar  6 13:04:15 2009
From: bsmith at mcs.anl.gov (Barry Smith)
Date: Fri, 6 Mar 2009 13:04:15 -0600
Subject: matrix assembling time
In-Reply-To: <49B17071.1030104@cs.wm.edu>
References: <49B17071.1030104@cs.wm.edu>
Message-ID: <9C54965B-34D0-4E00-BA02-E1AF20361447@mcs.anl.gov>

Run the same job with -info and send the results to petsc-maint at mcs.anl.gov

On Mar 6, 2009, at 12:50 PM, Yixun Liu wrote:

> Hi,
> Using PETSc, the assembling time for a mesh with 6000 vertices is about
> 14 seconds parallelized on 4 processors, but another sequential program
> based on the gmm lib takes about 0.6 seconds. PETSc's solver is much faster than
> gmm, but I don't know why its assembling is so slow, although I have
> preallocated enough space for the matrix.
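For comparison with the blanket 1000-per-row call quoted just below, exact per-row preallocation along the lines Barry describes might look roughly like this sketch. The matrix A, the local row count and the counts 5 and 2 are placeholders; in a real code they come from the mesh connectivity:

   PetscInt       i, m_local = 1500;   /* number of rows owned by this process (placeholder) */
   PetscInt       *d_nnz, *o_nnz;
   PetscErrorCode ierr;

   ierr = PetscMalloc(m_local*sizeof(PetscInt), &d_nnz); CHKERRQ(ierr);
   ierr = PetscMalloc(m_local*sizeof(PetscInt), &o_nnz); CHKERRQ(ierr);
   for (i = 0; i < m_local; i++) {
     d_nnz[i] = 5;   /* nonzeros of local row i inside the diagonal block   */
     o_nnz[i] = 2;   /* nonzeros of local row i in the off-diagonal blocks  */
   }
   /* with per-row arrays given, the scalar d_nz/o_nz arguments are ignored */
   ierr = MatMPIAIJSetPreallocation(A, 0, d_nnz, 0, o_nnz); CHKERRQ(ierr);
   ierr = PetscFree(d_nnz); CHKERRQ(ierr);
   ierr = PetscFree(o_nnz); CHKERRQ(ierr);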
>
> MatMPIAIJSetPreallocation(sparseMeshMechanicalStiffnessMatrix, 1000,
> PETSC_NULL, 1000, PETSC_NULL);
>
> Yixun
>

From irfan.khan at gatech.edu  Mon Mar  9 01:24:53 2009
From: irfan.khan at gatech.edu (Khan, Irfan)
Date: Mon, 9 Mar 2009 02:24:53 -0400 (EDT)
Subject: passing data between different communicators using PETSc
In-Reply-To: <2015382425.944001236579856784.JavaMail.root@mail4.gatech.edu>
Message-ID: <1013617597.944021236579893111.JavaMail.root@mail4.gatech.edu>

Hi
I have divided my processes into fluid compute ranks (n) and solid compute ranks (m), where n > m. For the most part, the fluid compute ranks communicate among themselves and so do the solid compute nodes. However, twice during each timestep some data is transferred between the fluid and solid compute nodes.

For instance, the fluid compute nodes generate the fluid force on the solid body that needs to be transmitted every time step, and the solid compute nodes calculate the resulting displacement from the forces and transmit the displacement information back to the fluid nodes. The force and displacement vectors for the total nodes are distributed randomly on the fluid and solid nodes. I have attached a pdf that basically shows that the distribution of vectors is random.

I am currently trying to use MPI_Allgatherv to accumulate all the data from the send ranks on the receive ranks, but I am sure this will be very costly for large data distributed over many ranks. Is there an efficient way to do this using PETSc?

Please let me know if you need more information or a better explanation.

Thanks
Irfan
-------------- next part --------------
A non-text attachment was scrubbed...
Name: LBM-FEM-communicationNodes.pdf
Type: application/pdf
Size: 186076 bytes
Desc: not available
URL: 

From knepley at gmail.com  Mon Mar  9 11:41:20 2009
From: knepley at gmail.com (Matthew Knepley)
Date: Mon, 9 Mar 2009 11:41:20 -0500
Subject: passing data between different communicators using PETSc
In-Reply-To: <1013617597.944021236579893111.JavaMail.root@mail4.gatech.edu>
References: <2015382425.944001236579856784.JavaMail.root@mail4.gatech.edu> <1013617597.944021236579893111.JavaMail.root@mail4.gatech.edu>
Message-ID: 

You can use a VecScatter, and PETSc will try to use the best MPI implementation.

  Matt

On Mon, Mar 9, 2009 at 1:24 AM, Khan, Irfan wrote:
> Hi
> I have divided my processes into fluid compute ranks (n) and solid compute
> ranks (m), where n > m. For the most part, the fluid compute ranks
> communicate among themselves and so do the solid compute nodes. However,
> twice during each timestep some data is transferred between the fluid and
> solid compute nodes.
>
> For instance, the fluid compute nodes generate the fluid force on the solid
> body that needs to be transmitted every time step, and the solid compute nodes
> calculate the resulting displacement from the forces and transmit the
> displacement information back to the fluid nodes. The force and displacement
> vectors for the total nodes are distributed randomly on the fluid and solid
> nodes. I have attached a pdf that basically shows that the distribution of
> vectors is random.
>
> I am currently trying to use MPI_Allgatherv to accumulate all the data from
> the send ranks on the receive ranks, but I am sure this will be very costly for
> large data distributed over many ranks. Is there an efficient way to do this
> using PETSc?
>
> Please let me know if you need more information or a better explanation.
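A rough sketch of the VecScatter approach Matt suggests: gather a chosen set of globally numbered entries of an existing parallel vector (called force here) into a small sequential vector on the rank that needs them. The vector force, the index list needed and the sizes are invented placeholders, and note that ISCreateGeneral() gained an extra copy-mode argument in later PETSc releases:

   PetscInt       n_local   = 4;
   PetscInt       needed[4] = {0, 7, 42, 99};   /* global indices wanted on this rank (placeholders) */
   IS             is_from;
   Vec            local;
   VecScatter     ctx;
   PetscErrorCode ierr;

   ierr = ISCreateGeneral(PETSC_COMM_SELF, n_local, needed, &is_from); CHKERRQ(ierr);
   ierr = VecCreateSeq(PETSC_COMM_SELF, n_local, &local); CHKERRQ(ierr);
   ierr = VecScatterCreate(force, is_from, local, PETSC_NULL, &ctx); CHKERRQ(ierr);
   ierr = VecScatterBegin(ctx, force, local, INSERT_VALUES, SCATTER_FORWARD); CHKERRQ(ierr);
   ierr = VecScatterEnd(ctx, force, local, INSERT_VALUES, SCATTER_FORWARD); CHKERRQ(ierr);
   /* use 'local' here; the scatter context can be kept and reused every timestep,
      or cleaned up as below once it is no longer needed */
   ierr = ISDestroy(is_from); CHKERRQ(ierr);
   ierr = VecScatterDestroy(ctx); CHKERRQ(ierr);
   ierr = VecDestroy(local); CHKERRQ(ierr);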
> > Thanks > Irfan > -- What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. -- Norbert Wiener -------------- next part -------------- An HTML attachment was scrubbed... URL: From weidong.lian at gmail.com Tue Mar 10 10:59:42 2009 From: weidong.lian at gmail.com (Wei-Dong Lian) Date: Tue, 10 Mar 2009 16:59:42 +0100 Subject: Hello for compiled problem Message-ID: <5ee7644a0903100859o6ef19098k97e5626ab23eb5ce@mail.gmail.com> Hello everyone, I compiled the petsc-3.0.0-p3 with GCC 3.4.3 under linux 64. But the compiler gave me the following information, see result_1. I also ran the "make test" and it worked successfully, but when I linked petsc to compile my programme, it told me that information see results_2. Any suggestion will be appreciated. Thank you in advance. Weidong $$$$$$$$$$$$$$$$$$$$$Result_1$$$$$$$$$$$$$$$$$$$$$$$$$$$$$ /ftn-custom g++: option ? -PIC ? non reconnue libfast in: /usr/local/temp/lian/Develop/latest/Solver/petscSeq/src/ts/impls libfast in: /usr/local/temp/lian/Develop/latest/Solver/petscSeq/src/ts/impls/explicit libfast in: /usr/local/temp/lian/Develop/latest/Solver/petscSeq/src/ts/impls/explicit/euler g++: option ? -PIC ? non reconnue libfast in: /usr/local/temp/lian/Develop/latest/Solver/petscSeq/src/ts/impls/explicit/rk g++: option ? -PIC ? non reconnue libfast in: /usr/local/temp/lian/Develop/latest/Solver/petscSeq/src/ts/impls/explicit/rk/ftn-auto g++: option ? -PIC ? non reconnue libfast in: /usr/local/temp/lian/Develop/latest/Solver/petscSeq/src/ts/impls/implicit libfast in: /usr/local/temp/lian/Develop/latest/Solver/petscSeq/src/ts/impls/implicit/beuler g++: option ? -PIC ? non reconnue libfast in: /usr/local/temp/lian/Develop/latest/Solver/petscSeq/src/ts/impls/implicit/cn #####################################results_2################################ /usr/local/temp/lian/Develop/Solver/petscSeq/x86_64_linux/lib/libpetscdm.a(daint.o)(.text+0x2aaf): In function `DAGetWireBasketInterpolation(_p_DA*, _p_Mat*, MatReuse, _p_Mat**)': /usr/local/temp/lian/Develop/latest/Solver/petscSeq/src/dm/da/utils/daint.c:201: undefined reference to `PetscTableCreate(int, _n_PetscTable**)' /usr/local/temp/lian/Develop/Solver/petscSeq/x86_64_linux/lib/libpetscdm.a(daint.o)(.text+0x2b3d):/usr/local/temp/lian/Develop/latest/Solver/petscSeq/src/dm/da/utils/daint.c:203: undefined reference to `PetscTableAddCount(_n_PetscTable*, int)' /usr/local/temp/lian/Develop/Solver/petscSeq/x86_64_linux/lib/libpetscdm.a(daint.o)(.text+0x2ba1):/usr/local/temp/lian/Develop/latest/Solver/petscSeq/src/dm/da/utils/daint.c:205: undefined reference to `PetscTableGetCount(_n_PetscTable*, int*)' /usr/local/temp/lian/Develop/Solver/petscSeq/x86_64_linux/lib/libpetscdm.a(daint.o)(.text+0x2d24):/usr/local/temp/lian/Develop/latest/Solver/petscSeq/src/dm/da/utils/daint.c:209: undefined reference to `PetscTableFind(_n_PetscTable*, int, int*)' /usr/local/temp/lian/Develop/Solver/petscSeq/x86_64_linux/lib/libpetscdm.a(daint.o)(.text+0x2d8d):/usr/local/temp/lian/Develop/latest/Solver/petscSeq/src/dm/da/utils/daint.c:212: undefined reference to `PetscTableDestroy(_n_PetscTable*)' /usr/local/temp/lian/Develop/Solver/petscSeq/x86_64_linux/lib/libpetscdm.a(daint.o)(.text+0x5adf): In function `DAGetFaceInterpolation(_p_DA*, _p_Mat*, MatReuse, _p_Mat**)': ##################################################################################### My configuration of petsc: 
&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&& Pushing language C Popping language C Pushing language Cxx Popping language Cxx Pushing language FC Popping language FC sh: /bin/sh /usr/local/temp/lian/Develop/latest/Solver/OpenSource/petsc-3.0.0-p3/config/BuildSystem/config/packages/config.gue ss Executing: /bin/sh /usr/local/temp/lian/Develop/latest/Solver/OpenSource/petsc-3.0.0-p3/config/BuildSystem/config/packages/con fig.guess sh: x86_64-unknown-linux-gnu sh: /bin/sh /usr/local/temp/lian/Develop/latest/Solver/OpenSource/petsc-3.0.0-p3/config/BuildSystem/config/packages/config.sub x86_64-unknown-linux-gnu Executing: /bin/sh /usr/local/temp/lian/Develop/latest/Solver/OpenSource/petsc-3.0.0-p3/config/BuildSystem/config/packages/con fig.sub x86_64-unknown-linux-gnu sh: x86_64-unknown-linux-gnu ================================================================================ ================================================================================ Starting Configure Run at Tue Mar 10 16:23:30 2009 Configure Options: --configModules=PETSc.Configure --optionsModule=PETSc.compilerOptions --with-cc=gcc --with-fc=g77 --with-cx x=g++ --with-mpi=0 --with-x=0 --with-clanguage=cxx --with-shared=1 --with-dynamic=1 Working directory: /usr/local/temp/lian/Develop/latest/Solver/OpenSource/petsc-3.0.0-p3 Machine uname: ('Linux', 'frioul', '2.6.9-22.ELsmp', '#1 SMP Sat Oct 8 21:32:36 BST 2005', 'x86_64') Python version: 2.3.4 (#1, Feb 17 2005, 21:01:10) [GCC 3.4.3 20041212 (Red Hat 3.4.3-9.EL4)] ================================================================================ Pushing language C Popping language C Pushing language Cxx Popping language Cxx Pushing language FC Popping language FC ================================================================================ TEST configureExternalPackagesDir from config.framework(/usr/local/temp/lian/Develop/latest/Solver/OpenSource/petsc-3.0.0-p3/c onfig/BuildSystem/config/framework.py:815) TESTING: configureExternalPackagesDir from config.framework(config/BuildSystem/config/framework.py:815) ================================================================================ TEST configureLibrary from PETSc.packages.NetCDF(/usr/local/temp/lian/Develop/latest/Solver/OpenSource/petsc-3.0.0-p3/config/P ETSc/packages/NetCDF.py:10) TESTING: configureLibrary from PETSc.packages.NetCDF(config/PETSc/packages/NetCDF.py:10) Find a NetCDF installation and check if it can work with PETSc ================================================================================ TEST configureLibrary from PETSc.packages.PVODE(/usr/local/temp/lian/Develop/latest/Solver/OpenSource/petsc-3.0.0-p3/config/PE TSc/packages/PVODE.py:10) TESTING: configureLibrary from PETSc.packages.PVODE(config/PETSc/packages/PVODE.py:10) Find a PVODE installation and check if it can work with PETSc ================================================================================ TEST configureDebuggers from PETSc.utilities.debuggers(/usr/local/temp/lian/Develop/latest/Solver/OpenSource/petsc-3.0.0-p3/co nfig/PETSc/utilities/debuggers.py:22) TESTING: configureDebuggers from PETSc.utilities.debuggers(config/PETSc/utilities/debuggers.py:22) Find a default debugger and determine its arguments -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From knepley at gmail.com Tue Mar 10 11:07:05 2009 From: knepley at gmail.com (Matthew Knepley) Date: Tue, 10 Mar 2009 11:07:05 -0500 Subject: Hello for compiled problem In-Reply-To: <5ee7644a0903100859o6ef19098k97e5626ab23eb5ce@mail.gmail.com> References: <5ee7644a0903100859o6ef19098k97e5626ab23eb5ce@mail.gmail.com> Message-ID: On Tue, Mar 10, 2009 at 10:59 AM, Wei-Dong Lian wrote: > Hello everyone, > > I compiled the petsc-3.0.0-p3 with GCC 3.4.3 under linux 64. But the > compiler gave me the following information, see result_1. > I also ran the "make test" and it worked successfully, but when I linked > petsc to compile my programme, it told me that information see results_2. > Any suggestion will be appreciated. > Thank you in advance. 1) Any report like this should be sent to petsc-maint at mcs.anl.gov because we need configure.log and make*.log. 2) The warning about -PIC comes about because we cannot parse the warning messages from your compiler. We can try to add this in when we get the log. 3) If 'make test' works and your link does not, then you have constructed your link line incorrectly. Are you using the PETSc makefiles? Matt > > Weidong > > $$$$$$$$$$$$$$$$$$$$$Result_1$$$$$$$$$$$$$$$$$$$$$$$$$$$$$ > /ftn-custom > g++: option ? -PIC ? non reconnue > libfast in: > /usr/local/temp/lian/Develop/latest/Solver/petscSeq/src/ts/impls > libfast in: > /usr/local/temp/lian/Develop/latest/Solver/petscSeq/src/ts/impls/explicit > libfast in: > /usr/local/temp/lian/Develop/latest/Solver/petscSeq/src/ts/impls/explicit/euler > g++: option ? -PIC ? non reconnue > libfast in: > /usr/local/temp/lian/Develop/latest/Solver/petscSeq/src/ts/impls/explicit/rk > g++: option ? -PIC ? non reconnue > libfast in: > /usr/local/temp/lian/Develop/latest/Solver/petscSeq/src/ts/impls/explicit/rk/ftn-auto > g++: option ? -PIC ? non reconnue > libfast in: > /usr/local/temp/lian/Develop/latest/Solver/petscSeq/src/ts/impls/implicit > libfast in: > /usr/local/temp/lian/Develop/latest/Solver/petscSeq/src/ts/impls/implicit/beuler > g++: option ? -PIC ? 
non reconnue > libfast in: > /usr/local/temp/lian/Develop/latest/Solver/petscSeq/src/ts/impls/implicit/cn > > > #####################################results_2################################ > /usr/local/temp/lian/Develop/Solver/petscSeq/x86_64_linux/lib/libpetscdm.a(daint.o)(.text+0x2aaf): > In function `DAGetWireBasketInterpolation(_p_DA*, _p_Mat*, MatReuse, > _p_Mat**)': > /usr/local/temp/lian/Develop/latest/Solver/petscSeq/src/dm/da/utils/daint.c:201: > undefined reference to `PetscTableCreate(int, _n_PetscTable**)' > /usr/local/temp/lian/Develop/Solver/petscSeq/x86_64_linux/lib/libpetscdm.a(daint.o)(.text+0x2b3d):/usr/local/temp/lian/Develop/latest/Solver/petscSeq/src/dm/da/utils/daint.c:203: > undefined reference to `PetscTableAddCount(_n_PetscTable*, int)' > /usr/local/temp/lian/Develop/Solver/petscSeq/x86_64_linux/lib/libpetscdm.a(daint.o)(.text+0x2ba1):/usr/local/temp/lian/Develop/latest/Solver/petscSeq/src/dm/da/utils/daint.c:205: > undefined reference to `PetscTableGetCount(_n_PetscTable*, int*)' > /usr/local/temp/lian/Develop/Solver/petscSeq/x86_64_linux/lib/libpetscdm.a(daint.o)(.text+0x2d24):/usr/local/temp/lian/Develop/latest/Solver/petscSeq/src/dm/da/utils/daint.c:209: > undefined reference to `PetscTableFind(_n_PetscTable*, int, int*)' > /usr/local/temp/lian/Develop/Solver/petscSeq/x86_64_linux/lib/libpetscdm.a(daint.o)(.text+0x2d8d):/usr/local/temp/lian/Develop/latest/Solver/petscSeq/src/dm/da/utils/daint.c:212: > undefined reference to `PetscTableDestroy(_n_PetscTable*)' > /usr/local/temp/lian/Develop/Solver/petscSeq/x86_64_linux/lib/libpetscdm.a(daint.o)(.text+0x5adf): > In function `DAGetFaceInterpolation(_p_DA*, _p_Mat*, MatReuse, _p_Mat**)': > > ##################################################################################### > > > My configuration of petsc: > &&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&& > > Pushing language C > Popping language C > Pushing language Cxx > Popping language Cxx > Pushing language FC > Popping language FC > sh: /bin/sh > /usr/local/temp/lian/Develop/latest/Solver/OpenSource/petsc-3.0.0-p3/config/BuildSystem/config/packages/config.gue > ss > Executing: /bin/sh > /usr/local/temp/lian/Develop/latest/Solver/OpenSource/petsc-3.0.0-p3/config/BuildSystem/config/packages/con > fig.guess > sh: x86_64-unknown-linux-gnu > > sh: /bin/sh > /usr/local/temp/lian/Develop/latest/Solver/OpenSource/petsc-3.0.0-p3/config/BuildSystem/config/packages/config.sub > x86_64-unknown-linux-gnu > > Executing: /bin/sh > /usr/local/temp/lian/Develop/latest/Solver/OpenSource/petsc-3.0.0-p3/config/BuildSystem/config/packages/con > fig.sub x86_64-unknown-linux-gnu > > sh: x86_64-unknown-linux-gnu > > > > ================================================================================ > > ================================================================================ > Starting Configure Run at Tue Mar 10 16:23:30 2009 > Configure Options: --configModules=PETSc.Configure > --optionsModule=PETSc.compilerOptions --with-cc=gcc --with-fc=g77 --with-cx > x=g++ --with-mpi=0 --with-x=0 --with-clanguage=cxx --with-shared=1 > --with-dynamic=1 > Working directory: > /usr/local/temp/lian/Develop/latest/Solver/OpenSource/petsc-3.0.0-p3 > Machine uname: > ('Linux', 'frioul', '2.6.9-22.ELsmp', '#1 SMP Sat Oct 8 21:32:36 BST 2005', > 'x86_64') > Python version: > 2.3.4 (#1, Feb 17 2005, 21:01:10) > [GCC 3.4.3 20041212 (Red Hat 3.4.3-9.EL4)] > > ================================================================================ > Pushing language C > Popping language 
C > Pushing language Cxx > Popping language Cxx > Pushing language FC > Popping language FC > > ================================================================================ > TEST configureExternalPackagesDir from > config.framework(/usr/local/temp/lian/Develop/latest/Solver/OpenSource/petsc-3.0.0-p3/c > onfig/BuildSystem/config/framework.py:815) > TESTING: configureExternalPackagesDir from > config.framework(config/BuildSystem/config/framework.py:815) > > ================================================================================ > TEST configureLibrary from > PETSc.packages.NetCDF(/usr/local/temp/lian/Develop/latest/Solver/OpenSource/petsc-3.0.0-p3/config/P > ETSc/packages/NetCDF.py:10) > TESTING: configureLibrary from > PETSc.packages.NetCDF(config/PETSc/packages/NetCDF.py:10) > Find a NetCDF installation and check if it can work with PETSc > > ================================================================================ > TEST configureLibrary from > PETSc.packages.PVODE(/usr/local/temp/lian/Develop/latest/Solver/OpenSource/petsc-3.0.0-p3/config/PE > TSc/packages/PVODE.py:10) > TESTING: configureLibrary from > PETSc.packages.PVODE(config/PETSc/packages/PVODE.py:10) > Find a PVODE installation and check if it can work with PETSc > > ================================================================================ > TEST configureDebuggers from > PETSc.utilities.debuggers(/usr/local/temp/lian/Develop/latest/Solver/OpenSource/petsc-3.0.0-p3/co > nfig/PETSc/utilities/debuggers.py:22) > TESTING: configureDebuggers from > PETSc.utilities.debuggers(config/PETSc/utilities/debuggers.py:22) > Find a default debugger and determine its arguments > > -- What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. -- Norbert Wiener -------------- next part -------------- An HTML attachment was scrubbed... URL: From weidong.lian at gmail.com Tue Mar 10 11:34:28 2009 From: weidong.lian at gmail.com (Wei-Dong Lian) Date: Tue, 10 Mar 2009 17:34:28 +0100 Subject: Hello for compiled problem In-Reply-To: References: <5ee7644a0903100859o6ef19098k97e5626ab23eb5ce@mail.gmail.com> Message-ID: <5ee7644a0903100934x24c69477i9234ff21b85618af@mail.gmail.com> Hi, I did not use the PETSc makefiles. First I do not know where it located. Second, I have configured my makefile to link my programme, before it worked, that's in another computer, the difference lies in the fact that this time in the lib directory, there were not *.so library, but *.lib library. Maybe I think it is the problem. Thanks. Weidong On Tue, Mar 10, 2009 at 5:07 PM, Matthew Knepley wrote: > On Tue, Mar 10, 2009 at 10:59 AM, Wei-Dong Lian wrote: > >> Hello everyone, >> >> I compiled the petsc-3.0.0-p3 with GCC 3.4.3 under linux 64. But the >> compiler gave me the following information, see result_1. >> I also ran the "make test" and it worked successfully, but when I linked >> petsc to compile my programme, it told me that information see results_2. >> Any suggestion will be appreciated. >> Thank you in advance. > > > 1) Any report like this should be sent to petsc-maint at mcs.anl.gov because > we need configure.log and make*.log. > > 2) The warning about -PIC comes about because we cannot parse the warning > messages from your compiler. We can try to > add this in when we get the log. > > 3) If 'make test' works and your link does not, then you have constructed > your link line incorrectly. Are you using the PETSc makefiles? 
> > Matt > > >> >> Weidong >> >> $$$$$$$$$$$$$$$$$$$$$Result_1$$$$$$$$$$$$$$$$$$$$$$$$$$$$$ >> /ftn-custom >> g++: option ? -PIC ? non reconnue >> libfast in: >> /usr/local/temp/lian/Develop/latest/Solver/petscSeq/src/ts/impls >> libfast in: >> /usr/local/temp/lian/Develop/latest/Solver/petscSeq/src/ts/impls/explicit >> libfast in: >> /usr/local/temp/lian/Develop/latest/Solver/petscSeq/src/ts/impls/explicit/euler >> g++: option ? -PIC ? non reconnue >> libfast in: >> /usr/local/temp/lian/Develop/latest/Solver/petscSeq/src/ts/impls/explicit/rk >> g++: option ? -PIC ? non reconnue >> libfast in: >> /usr/local/temp/lian/Develop/latest/Solver/petscSeq/src/ts/impls/explicit/rk/ftn-auto >> g++: option ? -PIC ? non reconnue >> libfast in: >> /usr/local/temp/lian/Develop/latest/Solver/petscSeq/src/ts/impls/implicit >> libfast in: >> /usr/local/temp/lian/Develop/latest/Solver/petscSeq/src/ts/impls/implicit/beuler >> g++: option ? -PIC ? non reconnue >> libfast in: >> /usr/local/temp/lian/Develop/latest/Solver/petscSeq/src/ts/impls/implicit/cn >> >> >> #####################################results_2################################ >> /usr/local/temp/lian/Develop/Solver/petscSeq/x86_64_linux/lib/libpetscdm.a(daint.o)(.text+0x2aaf): >> In function `DAGetWireBasketInterpolation(_p_DA*, _p_Mat*, MatReuse, >> _p_Mat**)': >> /usr/local/temp/lian/Develop/latest/Solver/petscSeq/src/dm/da/utils/daint.c:201: >> undefined reference to `PetscTableCreate(int, _n_PetscTable**)' >> /usr/local/temp/lian/Develop/Solver/petscSeq/x86_64_linux/lib/libpetscdm.a(daint.o)(.text+0x2b3d):/usr/local/temp/lian/Develop/latest/Solver/petscSeq/src/dm/da/utils/daint.c:203: >> undefined reference to `PetscTableAddCount(_n_PetscTable*, int)' >> /usr/local/temp/lian/Develop/Solver/petscSeq/x86_64_linux/lib/libpetscdm.a(daint.o)(.text+0x2ba1):/usr/local/temp/lian/Develop/latest/Solver/petscSeq/src/dm/da/utils/daint.c:205: >> undefined reference to `PetscTableGetCount(_n_PetscTable*, int*)' >> /usr/local/temp/lian/Develop/Solver/petscSeq/x86_64_linux/lib/libpetscdm.a(daint.o)(.text+0x2d24):/usr/local/temp/lian/Develop/latest/Solver/petscSeq/src/dm/da/utils/daint.c:209: >> undefined reference to `PetscTableFind(_n_PetscTable*, int, int*)' >> /usr/local/temp/lian/Develop/Solver/petscSeq/x86_64_linux/lib/libpetscdm.a(daint.o)(.text+0x2d8d):/usr/local/temp/lian/Develop/latest/Solver/petscSeq/src/dm/da/utils/daint.c:212: >> undefined reference to `PetscTableDestroy(_n_PetscTable*)' >> /usr/local/temp/lian/Develop/Solver/petscSeq/x86_64_linux/lib/libpetscdm.a(daint.o)(.text+0x5adf): >> In function `DAGetFaceInterpolation(_p_DA*, _p_Mat*, MatReuse, _p_Mat**)': >> >> ##################################################################################### >> >> >> My configuration of petsc: >> &&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&& >> >> Pushing language C >> Popping language C >> Pushing language Cxx >> Popping language Cxx >> Pushing language FC >> Popping language FC >> sh: /bin/sh >> /usr/local/temp/lian/Develop/latest/Solver/OpenSource/petsc-3.0.0-p3/config/BuildSystem/config/packages/config.gue >> ss >> Executing: /bin/sh >> /usr/local/temp/lian/Develop/latest/Solver/OpenSource/petsc-3.0.0-p3/config/BuildSystem/config/packages/con >> fig.guess >> sh: x86_64-unknown-linux-gnu >> >> sh: /bin/sh >> /usr/local/temp/lian/Develop/latest/Solver/OpenSource/petsc-3.0.0-p3/config/BuildSystem/config/packages/config.sub >> x86_64-unknown-linux-gnu >> >> Executing: /bin/sh >> 
/usr/local/temp/lian/Develop/latest/Solver/OpenSource/petsc-3.0.0-p3/config/BuildSystem/config/packages/con >> fig.sub x86_64-unknown-linux-gnu >> >> sh: x86_64-unknown-linux-gnu >> >> >> >> ================================================================================ >> >> ================================================================================ >> Starting Configure Run at Tue Mar 10 16:23:30 2009 >> Configure Options: --configModules=PETSc.Configure >> --optionsModule=PETSc.compilerOptions --with-cc=gcc --with-fc=g77 --with-cx >> x=g++ --with-mpi=0 --with-x=0 --with-clanguage=cxx --with-shared=1 >> --with-dynamic=1 >> Working directory: >> /usr/local/temp/lian/Develop/latest/Solver/OpenSource/petsc-3.0.0-p3 >> Machine uname: >> ('Linux', 'frioul', '2.6.9-22.ELsmp', '#1 SMP Sat Oct 8 21:32:36 BST >> 2005', 'x86_64') >> Python version: >> 2.3.4 (#1, Feb 17 2005, 21:01:10) >> [GCC 3.4.3 20041212 (Red Hat 3.4.3-9.EL4)] >> >> ================================================================================ >> Pushing language C >> Popping language C >> Pushing language Cxx >> Popping language Cxx >> Pushing language FC >> Popping language FC >> >> ================================================================================ >> TEST configureExternalPackagesDir from >> config.framework(/usr/local/temp/lian/Develop/latest/Solver/OpenSource/petsc-3.0.0-p3/c >> onfig/BuildSystem/config/framework.py:815) >> TESTING: configureExternalPackagesDir from >> config.framework(config/BuildSystem/config/framework.py:815) >> >> ================================================================================ >> TEST configureLibrary from >> PETSc.packages.NetCDF(/usr/local/temp/lian/Develop/latest/Solver/OpenSource/petsc-3.0.0-p3/config/P >> ETSc/packages/NetCDF.py:10) >> TESTING: configureLibrary from >> PETSc.packages.NetCDF(config/PETSc/packages/NetCDF.py:10) >> Find a NetCDF installation and check if it can work with PETSc >> >> ================================================================================ >> TEST configureLibrary from >> PETSc.packages.PVODE(/usr/local/temp/lian/Develop/latest/Solver/OpenSource/petsc-3.0.0-p3/config/PE >> TSc/packages/PVODE.py:10) >> TESTING: configureLibrary from >> PETSc.packages.PVODE(config/PETSc/packages/PVODE.py:10) >> Find a PVODE installation and check if it can work with PETSc >> >> ================================================================================ >> TEST configureDebuggers from >> PETSc.utilities.debuggers(/usr/local/temp/lian/Develop/latest/Solver/OpenSource/petsc-3.0.0-p3/co >> nfig/PETSc/utilities/debuggers.py:22) >> TESTING: configureDebuggers from >> PETSc.utilities.debuggers(config/PETSc/utilities/debuggers.py:22) >> Find a default debugger and determine its arguments >> >> > > > -- > What most experimenters take for granted before they begin their > experiments is infinitely more interesting than any results to which their > experiments lead. > -- Norbert Wiener > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From weidong.lian at gmail.com Tue Mar 10 11:56:37 2009 From: weidong.lian at gmail.com (Wei-Dong Lian) Date: Tue, 10 Mar 2009 17:56:37 +0100 Subject: Hello for compiled problem In-Reply-To: <5ee7644a0903100934x24c69477i9234ff21b85618af@mail.gmail.com> References: <5ee7644a0903100859o6ef19098k97e5626ab23eb5ce@mail.gmail.com> <5ee7644a0903100934x24c69477i9234ff21b85618af@mail.gmail.com> Message-ID: <5ee7644a0903100956x74bba433r9bd4cc0f8df9d9d@mail.gmail.com> Hello, Following are my makefile for link petsc. Before I configure with option fortrun compiler with gfortran, it generates the shared library, now in my computer with g77, so there is no shared library, I use the static library as below. Thanks for your information. Weidong **************************************************** #FOR PETSC ifdef HAVE_PETSC DEFS := $(DEFS) -DHAVE_PETSC PETSC_DIR=$(DEVROOT)/Solver/petscSeq PETSC_ARCH=$(ARCHOS) PETSC_LIBDIR := $(PETSC_DIR)/$(PETSC_ARCH)/lib INCLUDES := $(INCLUDES) -I$(PETSC_DIR)/include -I$(PETSC_DIR)/include/mpiuni -I$(PETSC_DIR)/include/adic -I$(PETSC_DIR)/$(PETSC_ARCH)/include ADDLIB := $(ADDLIB) -llapack #ADDLIB := $(ADDLIB) -L$(PETSC_LIBDIR) -Wl,-rpath,$(PETSC_LIBDIR) -lpetsccontrib -lpetscksp -lpetscsnes -lpetscvec -lpetsc -lpetscdm -lpetscmat -lpetscts ADDLIBSTATIC := $(ADDLIBSTATIC) $(PETSC_LIBDIR)/libmpiuni.a $(PETSC_LIBDIR)/libpetsccontrib.a $(PETSC_LIBDIR)/libpetscksp.a $(PETSC_LIBDIR)/libpetscsnes.a $(PETSC_LIBDIR)/libpetscvec.a $(PETSC_LIBDIR)/libpetsc.a $(PETSC_LIBDIR)/libpetscdm.a $(PETSC_LIBDIR)/libpetscmat.a $(PETSC_LIBDIR)/libpetscts.a endif #End For PETSC ***************************************************************************************** On Tue, Mar 10, 2009 at 5:34 PM, Wei-Dong Lian wrote: > Hi, > I did not use the PETSc makefiles. First I do not know where it located. > Second, I have configured my makefile to link my programme, before it > worked, that's in another computer, the difference lies in the fact that > this time in the lib directory, there were not *.so library, but *.lib > library. Maybe I think it is the problem. > Thanks. > Weidong > > > On Tue, Mar 10, 2009 at 5:07 PM, Matthew Knepley wrote: > >> On Tue, Mar 10, 2009 at 10:59 AM, Wei-Dong Lian wrote: >> >>> Hello everyone, >>> >>> I compiled the petsc-3.0.0-p3 with GCC 3.4.3 under linux 64. But the >>> compiler gave me the following information, see result_1. >>> I also ran the "make test" and it worked successfully, but when I linked >>> petsc to compile my programme, it told me that information see results_2. >>> Any suggestion will be appreciated. >>> Thank you in advance. >> >> >> 1) Any report like this should be sent to petsc-maint at mcs.anl.gov because >> we need configure.log and make*.log. >> >> 2) The warning about -PIC comes about because we cannot parse the warning >> messages from your compiler. We can try to >> add this in when we get the log. >> >> 3) If 'make test' works and your link does not, then you have constructed >> your link line incorrectly. Are you using the PETSc makefiles? >> >> Matt >> >> >>> >>> Weidong >>> >>> $$$$$$$$$$$$$$$$$$$$$Result_1$$$$$$$$$$$$$$$$$$$$$$$$$$$$$ >>> /ftn-custom >>> g++: option ? -PIC ? non reconnue >>> libfast in: >>> /usr/local/temp/lian/Develop/latest/Solver/petscSeq/src/ts/impls >>> libfast in: >>> /usr/local/temp/lian/Develop/latest/Solver/petscSeq/src/ts/impls/explicit >>> libfast in: >>> /usr/local/temp/lian/Develop/latest/Solver/petscSeq/src/ts/impls/explicit/euler >>> g++: option ? -PIC ? 
non reconnue >>> libfast in: >>> /usr/local/temp/lian/Develop/latest/Solver/petscSeq/src/ts/impls/explicit/rk >>> g++: option ? -PIC ? non reconnue >>> libfast in: >>> /usr/local/temp/lian/Develop/latest/Solver/petscSeq/src/ts/impls/explicit/rk/ftn-auto >>> g++: option ? -PIC ? non reconnue >>> libfast in: >>> /usr/local/temp/lian/Develop/latest/Solver/petscSeq/src/ts/impls/implicit >>> libfast in: >>> /usr/local/temp/lian/Develop/latest/Solver/petscSeq/src/ts/impls/implicit/beuler >>> g++: option ? -PIC ? non reconnue >>> libfast in: >>> /usr/local/temp/lian/Develop/latest/Solver/petscSeq/src/ts/impls/implicit/cn >>> >>> >>> #####################################results_2################################ >>> /usr/local/temp/lian/Develop/Solver/petscSeq/x86_64_linux/lib/libpetscdm.a(daint.o)(.text+0x2aaf): >>> In function `DAGetWireBasketInterpolation(_p_DA*, _p_Mat*, MatReuse, >>> _p_Mat**)': >>> /usr/local/temp/lian/Develop/latest/Solver/petscSeq/src/dm/da/utils/daint.c:201: >>> undefined reference to `PetscTableCreate(int, _n_PetscTable**)' >>> /usr/local/temp/lian/Develop/Solver/petscSeq/x86_64_linux/lib/libpetscdm.a(daint.o)(.text+0x2b3d):/usr/local/temp/lian/Develop/latest/Solver/petscSeq/src/dm/da/utils/daint.c:203: >>> undefined reference to `PetscTableAddCount(_n_PetscTable*, int)' >>> /usr/local/temp/lian/Develop/Solver/petscSeq/x86_64_linux/lib/libpetscdm.a(daint.o)(.text+0x2ba1):/usr/local/temp/lian/Develop/latest/Solver/petscSeq/src/dm/da/utils/daint.c:205: >>> undefined reference to `PetscTableGetCount(_n_PetscTable*, int*)' >>> /usr/local/temp/lian/Develop/Solver/petscSeq/x86_64_linux/lib/libpetscdm.a(daint.o)(.text+0x2d24):/usr/local/temp/lian/Develop/latest/Solver/petscSeq/src/dm/da/utils/daint.c:209: >>> undefined reference to `PetscTableFind(_n_PetscTable*, int, int*)' >>> /usr/local/temp/lian/Develop/Solver/petscSeq/x86_64_linux/lib/libpetscdm.a(daint.o)(.text+0x2d8d):/usr/local/temp/lian/Develop/latest/Solver/petscSeq/src/dm/da/utils/daint.c:212: >>> undefined reference to `PetscTableDestroy(_n_PetscTable*)' >>> /usr/local/temp/lian/Develop/Solver/petscSeq/x86_64_linux/lib/libpetscdm.a(daint.o)(.text+0x5adf): >>> In function `DAGetFaceInterpolation(_p_DA*, _p_Mat*, MatReuse, _p_Mat**)': >>> >>> ##################################################################################### >>> >>> >>> My configuration of petsc: >>> &&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&& >>> >>> Pushing language C >>> Popping language C >>> Pushing language Cxx >>> Popping language Cxx >>> Pushing language FC >>> Popping language FC >>> sh: /bin/sh >>> /usr/local/temp/lian/Develop/latest/Solver/OpenSource/petsc-3.0.0-p3/config/BuildSystem/config/packages/config.gue >>> ss >>> Executing: /bin/sh >>> /usr/local/temp/lian/Develop/latest/Solver/OpenSource/petsc-3.0.0-p3/config/BuildSystem/config/packages/con >>> fig.guess >>> sh: x86_64-unknown-linux-gnu >>> >>> sh: /bin/sh >>> /usr/local/temp/lian/Develop/latest/Solver/OpenSource/petsc-3.0.0-p3/config/BuildSystem/config/packages/config.sub >>> x86_64-unknown-linux-gnu >>> >>> Executing: /bin/sh >>> /usr/local/temp/lian/Develop/latest/Solver/OpenSource/petsc-3.0.0-p3/config/BuildSystem/config/packages/con >>> fig.sub x86_64-unknown-linux-gnu >>> >>> sh: x86_64-unknown-linux-gnu >>> >>> >>> >>> ================================================================================ >>> >>> ================================================================================ >>> Starting Configure Run at Tue Mar 10 16:23:30 2009 >>> Configure 
Options: --configModules=PETSc.Configure >>> --optionsModule=PETSc.compilerOptions --with-cc=gcc --with-fc=g77 --with-cx >>> x=g++ --with-mpi=0 --with-x=0 --with-clanguage=cxx --with-shared=1 >>> --with-dynamic=1 >>> Working directory: >>> /usr/local/temp/lian/Develop/latest/Solver/OpenSource/petsc-3.0.0-p3 >>> Machine uname: >>> ('Linux', 'frioul', '2.6.9-22.ELsmp', '#1 SMP Sat Oct 8 21:32:36 BST >>> 2005', 'x86_64') >>> Python version: >>> 2.3.4 (#1, Feb 17 2005, 21:01:10) >>> [GCC 3.4.3 20041212 (Red Hat 3.4.3-9.EL4)] >>> >>> ================================================================================ >>> Pushing language C >>> Popping language C >>> Pushing language Cxx >>> Popping language Cxx >>> Pushing language FC >>> Popping language FC >>> >>> ================================================================================ >>> TEST configureExternalPackagesDir from >>> config.framework(/usr/local/temp/lian/Develop/latest/Solver/OpenSource/petsc-3.0.0-p3/c >>> onfig/BuildSystem/config/framework.py:815) >>> TESTING: configureExternalPackagesDir from >>> config.framework(config/BuildSystem/config/framework.py:815) >>> >>> ================================================================================ >>> TEST configureLibrary from >>> PETSc.packages.NetCDF(/usr/local/temp/lian/Develop/latest/Solver/OpenSource/petsc-3.0.0-p3/config/P >>> ETSc/packages/NetCDF.py:10) >>> TESTING: configureLibrary from >>> PETSc.packages.NetCDF(config/PETSc/packages/NetCDF.py:10) >>> Find a NetCDF installation and check if it can work with PETSc >>> >>> ================================================================================ >>> TEST configureLibrary from >>> PETSc.packages.PVODE(/usr/local/temp/lian/Develop/latest/Solver/OpenSource/petsc-3.0.0-p3/config/PE >>> TSc/packages/PVODE.py:10) >>> TESTING: configureLibrary from >>> PETSc.packages.PVODE(config/PETSc/packages/PVODE.py:10) >>> Find a PVODE installation and check if it can work with PETSc >>> >>> ================================================================================ >>> TEST configureDebuggers from >>> PETSc.utilities.debuggers(/usr/local/temp/lian/Develop/latest/Solver/OpenSource/petsc-3.0.0-p3/co >>> nfig/PETSc/utilities/debuggers.py:22) >>> TESTING: configureDebuggers from >>> PETSc.utilities.debuggers(config/PETSc/utilities/debuggers.py:22) >>> Find a default debugger and determine its arguments >>> >>> >> >> >> -- >> What most experimenters take for granted before they begin their >> experiments is infinitely more interesting than any results to which their >> experiments lead. >> -- Norbert Wiener >> > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From knepley at gmail.com Tue Mar 10 12:22:26 2009 From: knepley at gmail.com (Matthew Knepley) Date: Tue, 10 Mar 2009 12:22:26 -0500 Subject: Hello for compiled problem In-Reply-To: <5ee7644a0903100934x24c69477i9234ff21b85618af@mail.gmail.com> References: <5ee7644a0903100859o6ef19098k97e5626ab23eb5ce@mail.gmail.com> <5ee7644a0903100934x24c69477i9234ff21b85618af@mail.gmail.com> Message-ID: On Tue, Mar 10, 2009 at 11:34 AM, Wei-Dong Lian wrote: > Hi, > I did not use the PETSc makefiles. First I do not know where it located. > Second, I have configured my makefile to link my programme, before it > worked, that's in another computer, the difference lies in the fact that > this time in the lib directory, there were not *.so library, but *.lib > library. Maybe I think it is the problem. 
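One general observation about the undefined PetscTable* references earlier in this thread (a linker point, not something stated in the thread itself): with static .a archives the link order matters, because the linker resolves symbols from left to right. PetscTableCreate() and friends live in the base libpetsc library, so libpetsc.a has to appear after the libraries that depend on it, libpetscdm.a in particular. The conventional ordering is roughly

   -lpetscts -lpetscsnes -lpetscksp -lpetscdm -lpetscmat -lpetscvec -lpetsc

(leaving aside libpetsccontrib and libmpiuni), which matches the order the PETSc makefiles use and would explain why 'make test' links while a hand-written link line that lists libpetsc.a before libpetscdm.a does not.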
We CANNOT help you unless you send the logs I asked for in my last message. Send them to petsc-maint at mcs.anl.gov. Matt > > Thanks. > Weidong > > > On Tue, Mar 10, 2009 at 5:07 PM, Matthew Knepley wrote: > >> On Tue, Mar 10, 2009 at 10:59 AM, Wei-Dong Lian wrote: >> >>> Hello everyone, >>> >>> I compiled the petsc-3.0.0-p3 with GCC 3.4.3 under linux 64. But the >>> compiler gave me the following information, see result_1. >>> I also ran the "make test" and it worked successfully, but when I linked >>> petsc to compile my programme, it told me that information see results_2. >>> Any suggestion will be appreciated. >>> Thank you in advance. >> >> >> 1) Any report like this should be sent to petsc-maint at mcs.anl.gov because >> we need configure.log and make*.log. >> >> 2) The warning about -PIC comes about because we cannot parse the warning >> messages from your compiler. We can try to >> add this in when we get the log. >> >> 3) If 'make test' works and your link does not, then you have constructed >> your link line incorrectly. Are you using the PETSc makefiles? >> >> Matt >> >> >>> >>> Weidong >>> >>> $$$$$$$$$$$$$$$$$$$$$Result_1$$$$$$$$$$$$$$$$$$$$$$$$$$$$$ >>> /ftn-custom >>> g++: option ? -PIC ? non reconnue >>> libfast in: >>> /usr/local/temp/lian/Develop/latest/Solver/petscSeq/src/ts/impls >>> libfast in: >>> /usr/local/temp/lian/Develop/latest/Solver/petscSeq/src/ts/impls/explicit >>> libfast in: >>> /usr/local/temp/lian/Develop/latest/Solver/petscSeq/src/ts/impls/explicit/euler >>> g++: option ? -PIC ? non reconnue >>> libfast in: >>> /usr/local/temp/lian/Develop/latest/Solver/petscSeq/src/ts/impls/explicit/rk >>> g++: option ? -PIC ? non reconnue >>> libfast in: >>> /usr/local/temp/lian/Develop/latest/Solver/petscSeq/src/ts/impls/explicit/rk/ftn-auto >>> g++: option ? -PIC ? non reconnue >>> libfast in: >>> /usr/local/temp/lian/Develop/latest/Solver/petscSeq/src/ts/impls/implicit >>> libfast in: >>> /usr/local/temp/lian/Develop/latest/Solver/petscSeq/src/ts/impls/implicit/beuler >>> g++: option ? -PIC ? 
non reconnue >>> libfast in: >>> /usr/local/temp/lian/Develop/latest/Solver/petscSeq/src/ts/impls/implicit/cn >>> >>> >>> #####################################results_2################################ >>> /usr/local/temp/lian/Develop/Solver/petscSeq/x86_64_linux/lib/libpetscdm.a(daint.o)(.text+0x2aaf): >>> In function `DAGetWireBasketInterpolation(_p_DA*, _p_Mat*, MatReuse, >>> _p_Mat**)': >>> /usr/local/temp/lian/Develop/latest/Solver/petscSeq/src/dm/da/utils/daint.c:201: >>> undefined reference to `PetscTableCreate(int, _n_PetscTable**)' >>> /usr/local/temp/lian/Develop/Solver/petscSeq/x86_64_linux/lib/libpetscdm.a(daint.o)(.text+0x2b3d):/usr/local/temp/lian/Develop/latest/Solver/petscSeq/src/dm/da/utils/daint.c:203: >>> undefined reference to `PetscTableAddCount(_n_PetscTable*, int)' >>> /usr/local/temp/lian/Develop/Solver/petscSeq/x86_64_linux/lib/libpetscdm.a(daint.o)(.text+0x2ba1):/usr/local/temp/lian/Develop/latest/Solver/petscSeq/src/dm/da/utils/daint.c:205: >>> undefined reference to `PetscTableGetCount(_n_PetscTable*, int*)' >>> /usr/local/temp/lian/Develop/Solver/petscSeq/x86_64_linux/lib/libpetscdm.a(daint.o)(.text+0x2d24):/usr/local/temp/lian/Develop/latest/Solver/petscSeq/src/dm/da/utils/daint.c:209: >>> undefined reference to `PetscTableFind(_n_PetscTable*, int, int*)' >>> /usr/local/temp/lian/Develop/Solver/petscSeq/x86_64_linux/lib/libpetscdm.a(daint.o)(.text+0x2d8d):/usr/local/temp/lian/Develop/latest/Solver/petscSeq/src/dm/da/utils/daint.c:212: >>> undefined reference to `PetscTableDestroy(_n_PetscTable*)' >>> /usr/local/temp/lian/Develop/Solver/petscSeq/x86_64_linux/lib/libpetscdm.a(daint.o)(.text+0x5adf): >>> In function `DAGetFaceInterpolation(_p_DA*, _p_Mat*, MatReuse, _p_Mat**)': >>> >>> ##################################################################################### >>> >>> >>> My configuration of petsc: >>> &&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&& >>> >>> Pushing language C >>> Popping language C >>> Pushing language Cxx >>> Popping language Cxx >>> Pushing language FC >>> Popping language FC >>> sh: /bin/sh >>> /usr/local/temp/lian/Develop/latest/Solver/OpenSource/petsc-3.0.0-p3/config/BuildSystem/config/packages/config.gue >>> ss >>> Executing: /bin/sh >>> /usr/local/temp/lian/Develop/latest/Solver/OpenSource/petsc-3.0.0-p3/config/BuildSystem/config/packages/con >>> fig.guess >>> sh: x86_64-unknown-linux-gnu >>> >>> sh: /bin/sh >>> /usr/local/temp/lian/Develop/latest/Solver/OpenSource/petsc-3.0.0-p3/config/BuildSystem/config/packages/config.sub >>> x86_64-unknown-linux-gnu >>> >>> Executing: /bin/sh >>> /usr/local/temp/lian/Develop/latest/Solver/OpenSource/petsc-3.0.0-p3/config/BuildSystem/config/packages/con >>> fig.sub x86_64-unknown-linux-gnu >>> >>> sh: x86_64-unknown-linux-gnu >>> >>> >>> >>> ================================================================================ >>> >>> ================================================================================ >>> Starting Configure Run at Tue Mar 10 16:23:30 2009 >>> Configure Options: --configModules=PETSc.Configure >>> --optionsModule=PETSc.compilerOptions --with-cc=gcc --with-fc=g77 --with-cx >>> x=g++ --with-mpi=0 --with-x=0 --with-clanguage=cxx --with-shared=1 >>> --with-dynamic=1 >>> Working directory: >>> /usr/local/temp/lian/Develop/latest/Solver/OpenSource/petsc-3.0.0-p3 >>> Machine uname: >>> ('Linux', 'frioul', '2.6.9-22.ELsmp', '#1 SMP Sat Oct 8 21:32:36 BST >>> 2005', 'x86_64') >>> Python version: >>> 2.3.4 (#1, Feb 17 2005, 21:01:10) >>> [GCC 3.4.3 20041212 (Red Hat 
3.4.3-9.EL4)] >>> >>> ================================================================================ >>> Pushing language C >>> Popping language C >>> Pushing language Cxx >>> Popping language Cxx >>> Pushing language FC >>> Popping language FC >>> >>> ================================================================================ >>> TEST configureExternalPackagesDir from >>> config.framework(/usr/local/temp/lian/Develop/latest/Solver/OpenSource/petsc-3.0.0-p3/c >>> onfig/BuildSystem/config/framework.py:815) >>> TESTING: configureExternalPackagesDir from >>> config.framework(config/BuildSystem/config/framework.py:815) >>> >>> ================================================================================ >>> TEST configureLibrary from >>> PETSc.packages.NetCDF(/usr/local/temp/lian/Develop/latest/Solver/OpenSource/petsc-3.0.0-p3/config/P >>> ETSc/packages/NetCDF.py:10) >>> TESTING: configureLibrary from >>> PETSc.packages.NetCDF(config/PETSc/packages/NetCDF.py:10) >>> Find a NetCDF installation and check if it can work with PETSc >>> >>> ================================================================================ >>> TEST configureLibrary from >>> PETSc.packages.PVODE(/usr/local/temp/lian/Develop/latest/Solver/OpenSource/petsc-3.0.0-p3/config/PE >>> TSc/packages/PVODE.py:10) >>> TESTING: configureLibrary from >>> PETSc.packages.PVODE(config/PETSc/packages/PVODE.py:10) >>> Find a PVODE installation and check if it can work with PETSc >>> >>> ================================================================================ >>> TEST configureDebuggers from >>> PETSc.utilities.debuggers(/usr/local/temp/lian/Develop/latest/Solver/OpenSource/petsc-3.0.0-p3/co >>> nfig/PETSc/utilities/debuggers.py:22) >>> TESTING: configureDebuggers from >>> PETSc.utilities.debuggers(config/PETSc/utilities/debuggers.py:22) >>> Find a default debugger and determine its arguments >>> >>> >> >> >> -- >> What most experimenters take for granted before they begin their >> experiments is infinitely more interesting than any results to which their >> experiments lead. >> -- Norbert Wiener >> > > -- What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. -- Norbert Wiener -------------- next part -------------- An HTML attachment was scrubbed... URL: From bsmith at mcs.anl.gov Tue Mar 10 12:43:57 2009 From: bsmith at mcs.anl.gov (Barry Smith) Date: Tue, 10 Mar 2009 12:43:57 -0500 Subject: Hello for compiled problem In-Reply-To: <5ee7644a0903100956x74bba433r9bd4cc0f8df9d9d@mail.gmail.com> References: <5ee7644a0903100859o6ef19098k97e5626ab23eb5ce@mail.gmail.com> <5ee7644a0903100934x24c69477i9234ff21b85618af@mail.gmail.com> <5ee7644a0903100956x74bba433r9bd4cc0f8df9d9d@mail.gmail.com> Message-ID: <1E96E6EA-1F25-4C1D-83C7-3BF5C186DA58@mcs.anl.gov> The order that libraries are listed in a makefile IS IMPORTANT. DDLIBSTATIC := $(ADDLIBSTATIC) $(PETSC_LIBDIR)/libmpiuni.a $ (PETSC_LIBDIR)/libpetsccontrib.a $(PETSC_LIBDIR)/libpetscksp.a $ (PETSC_LIBDIR)/libpetscsnes.a $(PETSC_LIBDIR)/libpetscvec.a $ (PETSC_LIBDIR)/libpetsc.a $(PETSC_LIBDIR)/libpetscdm.a $(PETSC_LIBDIR)/ libpetscmat.a $(PETSC_LIBDIR)/libpetscts.a Here you have some crazy, nonsense ordering of the libraries. Whoever made this "makefile" doesn't have a clue about unix; you cannot just dump random strings of characters into files and expect to develop software! It requires some basic understanding of what you are doing. 
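For illustration, a reordering of that variable which does respect the dependencies could look like the sketch below. It mirrors the -l order suggested elsewhere in this thread (contrib, ts, snes, ksp, dm, mat, vec, petsc) and moves the MPI stub archive and LAPACK to the end, so their symbols are still unresolved when the linker reaches them. The variable names are the ones from the makefile quoted below, not anything from PETSc's own makefiles; this is a sketch, not a line from the original message.

# Sketch only: each archive is listed before the archives that it depends on;
# libmpiuni.a and -llapack go last because the PETSc archives reference them.
ADDLIBSTATIC := $(ADDLIBSTATIC) \
        $(PETSC_LIBDIR)/libpetsccontrib.a \
        $(PETSC_LIBDIR)/libpetscts.a \
        $(PETSC_LIBDIR)/libpetscsnes.a \
        $(PETSC_LIBDIR)/libpetscksp.a \
        $(PETSC_LIBDIR)/libpetscdm.a \
        $(PETSC_LIBDIR)/libpetscmat.a \
        $(PETSC_LIBDIR)/libpetscvec.a \
        $(PETSC_LIBDIR)/libpetsc.a \
        $(PETSC_LIBDIR)/libmpiuni.a
ADDLIB := $(ADDLIB) -llapack    # plus -lblas if your LAPACK does not already provide it
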
Barry On Mar 10, 2009, at 11:56 AM, Wei-Dong Lian wrote: > Hello, > Following are my makefile for link petsc. Before I configure with > option fortrun compiler with gfortran, it generates the shared > library, now in my computer with g77, so there is no shared library, > I use the static library as below. > Thanks for your information. > Weidong > > **************************************************** > #FOR PETSC > ifdef HAVE_PETSC > DEFS := $(DEFS) -DHAVE_PETSC > PETSC_DIR=$(DEVROOT)/Solver/petscSeq > PETSC_ARCH=$(ARCHOS) > PETSC_LIBDIR := $(PETSC_DIR)/$(PETSC_ARCH)/lib > INCLUDES := $(INCLUDES) -I$(PETSC_DIR)/include -I$(PETSC_DIR)/ > include/mpiuni -I$(PETSC_DIR)/include/adic -I$(PETSC_DIR)/$ > (PETSC_ARCH)/include > > ADDLIB := $(ADDLIB) -llapack > #ADDLIB := $(ADDLIB) -L$(PETSC_LIBDIR) -Wl,-rpath,$(PETSC_LIBDIR) - > lpetsccontrib -lpetscksp -lpetscsnes -lpetscvec -lpetsc -lpetscdm - > lpetscmat -lpetscts > ADDLIBSTATIC := $(ADDLIBSTATIC) $(PETSC_LIBDIR)/libmpiuni.a $ > (PETSC_LIBDIR)/libpetsccontrib.a $(PETSC_LIBDIR)/libpetscksp.a $ > (PETSC_LIBDIR)/libpetscsnes.a $(PETSC_LIBDIR)/libpetscvec.a $ > (PETSC_LIBDIR)/libpetsc.a $(PETSC_LIBDIR)/libpetscdm.a $ > (PETSC_LIBDIR)/libpetscmat.a $(PETSC_LIBDIR)/libpetscts.a > > endif > #End For PETSC > ***************************************************************************************** > On Tue, Mar 10, 2009 at 5:34 PM, Wei-Dong Lian > wrote: > Hi, > I did not use the PETSc makefiles. First I do not know where it > located. Second, I have configured my makefile to link my programme, > before it worked, that's in another computer, the difference lies in > the fact that this time in the lib directory, there were not *.so > library, but *.lib library. Maybe I think it is the problem. > Thanks. > Weidong > > > On Tue, Mar 10, 2009 at 5:07 PM, Matthew Knepley > wrote: > On Tue, Mar 10, 2009 at 10:59 AM, Wei-Dong Lian > wrote: > Hello everyone, > > I compiled the petsc-3.0.0-p3 with GCC 3.4.3 under linux 64. But the > compiler gave me the following information, see result_1. > I also ran the "make test" and it worked successfully, but when I > linked petsc to compile my programme, it told me that information > see results_2. > Any suggestion will be appreciated. > Thank you in advance. > > 1) Any report like this should be sent to petsc-maint at mcs.anl.gov > because we need configure.log and make*.log. > > 2) The warning about -PIC comes about because we cannot parse the > warning messages from your compiler. We can try to > add this in when we get the log. > > 3) If 'make test' works and your link does not, then you have > constructed your link line incorrectly. Are you using the PETSc > makefiles? > > Matt > > > Weidong > > $$$$$$$$$$$$$$$$$$$$$Result_1$$$$$$$$$$$$$$$$$$$$$$$$$$$$$ > /ftn-custom > g++: option ? -PIC ? non reconnue > libfast in: /usr/local/temp/lian/Develop/latest/Solver/petscSeq/src/ > ts/impls > libfast in: /usr/local/temp/lian/Develop/latest/Solver/petscSeq/src/ > ts/impls/explicit > libfast in: /usr/local/temp/lian/Develop/latest/Solver/petscSeq/src/ > ts/impls/explicit/euler > g++: option ? -PIC ? non reconnue > libfast in: /usr/local/temp/lian/Develop/latest/Solver/petscSeq/src/ > ts/impls/explicit/rk > g++: option ? -PIC ? non reconnue > libfast in: /usr/local/temp/lian/Develop/latest/Solver/petscSeq/src/ > ts/impls/explicit/rk/ftn-auto > g++: option ? -PIC ? 
non reconnue > libfast in: /usr/local/temp/lian/Develop/latest/Solver/petscSeq/src/ > ts/impls/implicit > libfast in: /usr/local/temp/lian/Develop/latest/Solver/petscSeq/src/ > ts/impls/implicit/beuler > g++: option ? -PIC ? non reconnue > libfast in: /usr/local/temp/lian/Develop/latest/Solver/petscSeq/src/ > ts/impls/implicit/cn > > #####################################results_2 > ################################ > /usr/local/temp/lian/Develop/Solver/petscSeq/x86_64_linux/lib/ > libpetscdm.a(daint.o)(.text+0x2aaf): In function > `DAGetWireBasketInterpolation(_p_DA*, _p_Mat*, MatReuse, _p_Mat**)': > /usr/local/temp/lian/Develop/latest/Solver/petscSeq/src/dm/da/utils/ > daint.c:201: undefined reference to `PetscTableCreate(int, > _n_PetscTable**)' > /usr/local/temp/lian/Develop/Solver/petscSeq/x86_64_linux/lib/ > libpetscdm.a(daint.o)(.text+0x2b3d):/usr/local/temp/lian/Develop/ > latest/Solver/petscSeq/src/dm/da/utils/daint.c:203: undefined > reference to `PetscTableAddCount(_n_PetscTable*, int)' > /usr/local/temp/lian/Develop/Solver/petscSeq/x86_64_linux/lib/ > libpetscdm.a(daint.o)(.text+0x2ba1):/usr/local/temp/lian/Develop/ > latest/Solver/petscSeq/src/dm/da/utils/daint.c:205: undefined > reference to `PetscTableGetCount(_n_PetscTable*, int*)' > /usr/local/temp/lian/Develop/Solver/petscSeq/x86_64_linux/lib/ > libpetscdm.a(daint.o)(.text+0x2d24):/usr/local/temp/lian/Develop/ > latest/Solver/petscSeq/src/dm/da/utils/daint.c:209: undefined > reference to `PetscTableFind(_n_PetscTable*, int, int*)' > /usr/local/temp/lian/Develop/Solver/petscSeq/x86_64_linux/lib/ > libpetscdm.a(daint.o)(.text+0x2d8d):/usr/local/temp/lian/Develop/ > latest/Solver/petscSeq/src/dm/da/utils/daint.c:212: undefined > reference to `PetscTableDestroy(_n_PetscTable*)' > /usr/local/temp/lian/Develop/Solver/petscSeq/x86_64_linux/lib/ > libpetscdm.a(daint.o)(.text+0x5adf): In function > `DAGetFaceInterpolation(_p_DA*, _p_Mat*, MatReuse, _p_Mat**)': > ##################################################################################### > > > My configuration of petsc: > &&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&& > > Pushing language C > Popping language C > Pushing language Cxx > Popping language Cxx > Pushing language FC > Popping language FC > sh: /bin/sh /usr/local/temp/lian/Develop/latest/Solver/OpenSource/ > petsc-3.0.0-p3/config/BuildSystem/config/packages/config.gue > ss > Executing: /bin/sh /usr/local/temp/lian/Develop/latest/Solver/ > OpenSource/petsc-3.0.0-p3/config/BuildSystem/config/packages/con > fig.guess > sh: x86_64-unknown-linux-gnu > > sh: /bin/sh /usr/local/temp/lian/Develop/latest/Solver/OpenSource/ > petsc-3.0.0-p3/config/BuildSystem/config/packages/config.sub > x86_64-unknown-linux-gnu > > Executing: /bin/sh /usr/local/temp/lian/Develop/latest/Solver/ > OpenSource/petsc-3.0.0-p3/config/BuildSystem/config/packages/con > fig.sub x86_64-unknown-linux-gnu > > sh: x86_64-unknown-linux-gnu > > > = > = > = > = > = > = > = > = > = > = > ====================================================================== > = > = > = > = > = > = > = > = > = > = > ====================================================================== > Starting Configure Run at Tue Mar 10 16:23:30 2009 > Configure Options: --configModules=PETSc.Configure -- > optionsModule=PETSc.compilerOptions --with-cc=gcc --with-fc=g77 -- > with-cx > x=g++ --with-mpi=0 --with-x=0 --with-clanguage=cxx --with-shared=1 -- > with-dynamic=1 > Working directory: /usr/local/temp/lian/Develop/latest/Solver/ > OpenSource/petsc-3.0.0-p3 > 
Machine uname: > ('Linux', 'frioul', '2.6.9-22.ELsmp', '#1 SMP Sat Oct 8 21:32:36 BST > 2005', 'x86_64') > Python version: > 2.3.4 (#1, Feb 17 2005, 21:01:10) > [GCC 3.4.3 20041212 (Red Hat 3.4.3-9.EL4)] > = > = > = > = > = > = > = > = > = > = > ====================================================================== > Pushing language C > Popping language C > Pushing language Cxx > Popping language Cxx > Pushing language FC > Popping language FC > = > = > = > = > = > = > = > = > = > = > ====================================================================== > TEST configureExternalPackagesDir from config.framework(/usr/local/ > temp/lian/Develop/latest/Solver/OpenSource/petsc-3.0.0-p3/c > onfig/BuildSystem/config/framework.py:815) > TESTING: configureExternalPackagesDir from config.framework(config/ > BuildSystem/config/framework.py:815) > = > = > = > = > = > = > = > = > = > = > ====================================================================== > TEST configureLibrary from PETSc.packages.NetCDF(/usr/local/temp/ > lian/Develop/latest/Solver/OpenSource/petsc-3.0.0-p3/config/P > ETSc/packages/NetCDF.py:10) > TESTING: configureLibrary from PETSc.packages.NetCDF(config/PETSc/ > packages/NetCDF.py:10) > Find a NetCDF installation and check if it can work with PETSc > = > = > = > = > = > = > = > = > = > = > ====================================================================== > TEST configureLibrary from PETSc.packages.PVODE(/usr/local/temp/lian/ > Develop/latest/Solver/OpenSource/petsc-3.0.0-p3/config/PE > TSc/packages/PVODE.py:10) > TESTING: configureLibrary from PETSc.packages.PVODE(config/PETSc/ > packages/PVODE.py:10) > Find a PVODE installation and check if it can work with PETSc > = > = > = > = > = > = > = > = > = > = > ====================================================================== > TEST configureDebuggers from PETSc.utilities.debuggers(/usr/local/ > temp/lian/Develop/latest/Solver/OpenSource/petsc-3.0.0-p3/co > nfig/PETSc/utilities/debuggers.py:22) > TESTING: configureDebuggers from PETSc.utilities.debuggers(config/ > PETSc/utilities/debuggers.py:22) > Find a default debugger and determine its arguments > > > > > -- > What most experimenters take for granted before they begin their > experiments is infinitely more interesting than any results to which > their experiments lead. > -- Norbert Wiener > > From weidong.lian at gmail.com Tue Mar 10 12:52:32 2009 From: weidong.lian at gmail.com (Wei-Dong Lian) Date: Tue, 10 Mar 2009 18:52:32 +0100 Subject: Hello for compiled problem In-Reply-To: <1E96E6EA-1F25-4C1D-83C7-3BF5C186DA58@mcs.anl.gov> References: <5ee7644a0903100859o6ef19098k97e5626ab23eb5ce@mail.gmail.com> <5ee7644a0903100934x24c69477i9234ff21b85618af@mail.gmail.com> <5ee7644a0903100956x74bba433r9bd4cc0f8df9d9d@mail.gmail.com> <1E96E6EA-1F25-4C1D-83C7-3BF5C186DA58@mcs.anl.gov> Message-ID: <5ee7644a0903101052o1bd4cfd5n1b20a73447cbecc5@mail.gmail.com> Hello, I am so sorry for that, but you are right. Thanks for your advice and I will pay more attention from now on. Thank you very much. Weidong On Tue, Mar 10, 2009 at 6:43 PM, Barry Smith wrote: > > The order that libraries are listed in a makefile IS IMPORTANT. 
> > DDLIBSTATIC := $(ADDLIBSTATIC) $(PETSC_LIBDIR)/libmpiuni.a > $(PETSC_LIBDIR)/libpetsccontrib.a $(PETSC_LIBDIR)/libpetscksp.a > $(PETSC_LIBDIR)/libpetscsnes.a $(PETSC_LIBDIR)/libpetscvec.a > $(PETSC_LIBDIR)/libpetsc.a $(PETSC_LIBDIR)/libpetscdm.a > $(PETSC_LIBDIR)/libpetscmat.a $(PETSC_LIBDIR)/libpetscts.a > > Here you have some crazy, nonsense ordering of the libraries. Whoever made > this "makefile" doesn't have a clue about unix; you cannot just dump random > strings of characters into files and expect to develop software! It requires > some basic understanding of what you are doing. > > > Barry > > > > On Mar 10, 2009, at 11:56 AM, Wei-Dong Lian wrote: > > Hello, >> Following are my makefile for link petsc. Before I configure with option >> fortrun compiler with gfortran, it generates the shared library, now in my >> computer with g77, so there is no shared library, I use the static library >> as below. >> Thanks for your information. >> Weidong >> >> **************************************************** >> #FOR PETSC >> ifdef HAVE_PETSC >> DEFS := $(DEFS) -DHAVE_PETSC >> PETSC_DIR=$(DEVROOT)/Solver/petscSeq >> PETSC_ARCH=$(ARCHOS) >> PETSC_LIBDIR := $(PETSC_DIR)/$(PETSC_ARCH)/lib >> INCLUDES := $(INCLUDES) -I$(PETSC_DIR)/include >> -I$(PETSC_DIR)/include/mpiuni -I$(PETSC_DIR)/include/adic >> -I$(PETSC_DIR)/$(PETSC_ARCH)/include >> >> ADDLIB := $(ADDLIB) -llapack >> #ADDLIB := $(ADDLIB) -L$(PETSC_LIBDIR) -Wl,-rpath,$(PETSC_LIBDIR) >> -lpetsccontrib -lpetscksp -lpetscsnes -lpetscvec -lpetsc -lpetscdm >> -lpetscmat -lpetscts >> ADDLIBSTATIC := $(ADDLIBSTATIC) $(PETSC_LIBDIR)/libmpiuni.a >> $(PETSC_LIBDIR)/libpetsccontrib.a $(PETSC_LIBDIR)/libpetscksp.a >> $(PETSC_LIBDIR)/libpetscsnes.a $(PETSC_LIBDIR)/libpetscvec.a >> $(PETSC_LIBDIR)/libpetsc.a $(PETSC_LIBDIR)/libpetscdm.a >> $(PETSC_LIBDIR)/libpetscmat.a $(PETSC_LIBDIR)/libpetscts.a >> >> endif >> #End For PETSC >> >> ***************************************************************************************** >> On Tue, Mar 10, 2009 at 5:34 PM, Wei-Dong Lian >> wrote: >> Hi, >> I did not use the PETSc makefiles. First I do not know where it located. >> Second, I have configured my makefile to link my programme, before it >> worked, that's in another computer, the difference lies in the fact that >> this time in the lib directory, there were not *.so library, but *.lib >> library. Maybe I think it is the problem. >> Thanks. >> Weidong >> >> >> On Tue, Mar 10, 2009 at 5:07 PM, Matthew Knepley >> wrote: >> On Tue, Mar 10, 2009 at 10:59 AM, Wei-Dong Lian >> wrote: >> Hello everyone, >> >> I compiled the petsc-3.0.0-p3 with GCC 3.4.3 under linux 64. But the >> compiler gave me the following information, see result_1. >> I also ran the "make test" and it worked successfully, but when I linked >> petsc to compile my programme, it told me that information see results_2. >> Any suggestion will be appreciated. >> Thank you in advance. >> >> 1) Any report like this should be sent to petsc-maint at mcs.anl.gov because >> we need configure.log and make*.log. >> >> 2) The warning about -PIC comes about because we cannot parse the warning >> messages from your compiler. We can try to >> add this in when we get the log. >> >> 3) If 'make test' works and your link does not, then you have constructed >> your link line incorrectly. Are you using the PETSc makefiles? >> >> Matt >> >> >> Weidong >> >> $$$$$$$$$$$$$$$$$$$$$Result_1$$$$$$$$$$$$$$$$$$$$$$$$$$$$$ >> /ftn-custom >> g++: option ? -PIC ? 
non reconnue >> libfast in: >> /usr/local/temp/lian/Develop/latest/Solver/petscSeq/src/ts/impls >> libfast in: >> /usr/local/temp/lian/Develop/latest/Solver/petscSeq/src/ts/impls/explicit >> libfast in: >> /usr/local/temp/lian/Develop/latest/Solver/petscSeq/src/ts/impls/explicit/euler >> g++: option ? -PIC ? non reconnue >> libfast in: >> /usr/local/temp/lian/Develop/latest/Solver/petscSeq/src/ts/impls/explicit/rk >> g++: option ? -PIC ? non reconnue >> libfast in: >> /usr/local/temp/lian/Develop/latest/Solver/petscSeq/src/ts/impls/explicit/rk/ftn-auto >> g++: option ? -PIC ? non reconnue >> libfast in: >> /usr/local/temp/lian/Develop/latest/Solver/petscSeq/src/ts/impls/implicit >> libfast in: >> /usr/local/temp/lian/Develop/latest/Solver/petscSeq/src/ts/impls/implicit/beuler >> g++: option ? -PIC ? non reconnue >> libfast in: >> /usr/local/temp/lian/Develop/latest/Solver/petscSeq/src/ts/impls/implicit/cn >> >> >> #####################################results_2################################ >> /usr/local/temp/lian/Develop/Solver/petscSeq/x86_64_linux/lib/libpetscdm.a(daint.o)(.text+0x2aaf): >> In function `DAGetWireBasketInterpolation(_p_DA*, _p_Mat*, MatReuse, >> _p_Mat**)': >> /usr/local/temp/lian/Develop/latest/Solver/petscSeq/src/dm/da/utils/daint.c:201: >> undefined reference to `PetscTableCreate(int, _n_PetscTable**)' >> /usr/local/temp/lian/Develop/Solver/petscSeq/x86_64_linux/lib/libpetscdm.a(daint.o)(.text+0x2b3d):/usr/local/temp/lian/Develop/latest/Solver/petscSeq/src/dm/da/utils/daint.c:203: >> undefined reference to `PetscTableAddCount(_n_PetscTable*, int)' >> /usr/local/temp/lian/Develop/Solver/petscSeq/x86_64_linux/lib/libpetscdm.a(daint.o)(.text+0x2ba1):/usr/local/temp/lian/Develop/latest/Solver/petscSeq/src/dm/da/utils/daint.c:205: >> undefined reference to `PetscTableGetCount(_n_PetscTable*, int*)' >> /usr/local/temp/lian/Develop/Solver/petscSeq/x86_64_linux/lib/libpetscdm.a(daint.o)(.text+0x2d24):/usr/local/temp/lian/Develop/latest/Solver/petscSeq/src/dm/da/utils/daint.c:209: >> undefined reference to `PetscTableFind(_n_PetscTable*, int, int*)' >> /usr/local/temp/lian/Develop/Solver/petscSeq/x86_64_linux/lib/libpetscdm.a(daint.o)(.text+0x2d8d):/usr/local/temp/lian/Develop/latest/Solver/petscSeq/src/dm/da/utils/daint.c:212: >> undefined reference to `PetscTableDestroy(_n_PetscTable*)' >> /usr/local/temp/lian/Develop/Solver/petscSeq/x86_64_linux/lib/libpetscdm.a(daint.o)(.text+0x5adf): >> In function `DAGetFaceInterpolation(_p_DA*, _p_Mat*, MatReuse, _p_Mat**)': >> >> ##################################################################################### >> >> >> My configuration of petsc: >> &&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&& >> >> Pushing language C >> Popping language C >> Pushing language Cxx >> Popping language Cxx >> Pushing language FC >> Popping language FC >> sh: /bin/sh >> /usr/local/temp/lian/Develop/latest/Solver/OpenSource/petsc-3.0.0-p3/config/BuildSystem/config/packages/config.gue >> ss >> Executing: /bin/sh >> /usr/local/temp/lian/Develop/latest/Solver/OpenSource/petsc-3.0.0-p3/config/BuildSystem/config/packages/con >> fig.guess >> sh: x86_64-unknown-linux-gnu >> >> sh: /bin/sh >> /usr/local/temp/lian/Develop/latest/Solver/OpenSource/petsc-3.0.0-p3/config/BuildSystem/config/packages/config.sub >> x86_64-unknown-linux-gnu >> >> Executing: /bin/sh >> /usr/local/temp/lian/Develop/latest/Solver/OpenSource/petsc-3.0.0-p3/config/BuildSystem/config/packages/con >> fig.sub x86_64-unknown-linux-gnu >> >> sh: x86_64-unknown-linux-gnu >> >> 
>> >> ================================================================================ >> >> ================================================================================ >> Starting Configure Run at Tue Mar 10 16:23:30 2009 >> Configure Options: --configModules=PETSc.Configure >> --optionsModule=PETSc.compilerOptions --with-cc=gcc --with-fc=g77 --with-cx >> x=g++ --with-mpi=0 --with-x=0 --with-clanguage=cxx --with-shared=1 >> --with-dynamic=1 >> Working directory: >> /usr/local/temp/lian/Develop/latest/Solver/OpenSource/petsc-3.0.0-p3 >> Machine uname: >> ('Linux', 'frioul', '2.6.9-22.ELsmp', '#1 SMP Sat Oct 8 21:32:36 BST >> 2005', 'x86_64') >> Python version: >> 2.3.4 (#1, Feb 17 2005, 21:01:10) >> [GCC 3.4.3 20041212 (Red Hat 3.4.3-9.EL4)] >> >> ================================================================================ >> Pushing language C >> Popping language C >> Pushing language Cxx >> Popping language Cxx >> Pushing language FC >> Popping language FC >> >> ================================================================================ >> TEST configureExternalPackagesDir from >> config.framework(/usr/local/temp/lian/Develop/latest/Solver/OpenSource/petsc-3.0.0-p3/c >> onfig/BuildSystem/config/framework.py:815) >> TESTING: configureExternalPackagesDir from >> config.framework(config/BuildSystem/config/framework.py:815) >> >> ================================================================================ >> TEST configureLibrary from >> PETSc.packages.NetCDF(/usr/local/temp/lian/Develop/latest/Solver/OpenSource/petsc-3.0.0-p3/config/P >> ETSc/packages/NetCDF.py:10) >> TESTING: configureLibrary from >> PETSc.packages.NetCDF(config/PETSc/packages/NetCDF.py:10) >> Find a NetCDF installation and check if it can work with PETSc >> >> ================================================================================ >> TEST configureLibrary from >> PETSc.packages.PVODE(/usr/local/temp/lian/Develop/latest/Solver/OpenSource/petsc-3.0.0-p3/config/PE >> TSc/packages/PVODE.py:10) >> TESTING: configureLibrary from >> PETSc.packages.PVODE(config/PETSc/packages/PVODE.py:10) >> Find a PVODE installation and check if it can work with PETSc >> >> ================================================================================ >> TEST configureDebuggers from >> PETSc.utilities.debuggers(/usr/local/temp/lian/Develop/latest/Solver/OpenSource/petsc-3.0.0-p3/co >> nfig/PETSc/utilities/debuggers.py:22) >> TESTING: configureDebuggers from >> PETSc.utilities.debuggers(config/PETSc/utilities/debuggers.py:22) >> Find a default debugger and determine its arguments >> >> >> >> >> -- >> What most experimenters take for granted before they begin their >> experiments is infinitely more interesting than any results to which their >> experiments lead. >> -- Norbert Wiener >> >> >> > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From balay at mcs.anl.gov Tue Mar 10 12:58:52 2009 From: balay at mcs.anl.gov (Satish Balay) Date: Tue, 10 Mar 2009 12:58:52 -0500 (CDT) Subject: Hello for compiled problem In-Reply-To: <5ee7644a0903101052o1bd4cfd5n1b20a73447cbecc5@mail.gmail.com> References: <5ee7644a0903100859o6ef19098k97e5626ab23eb5ce@mail.gmail.com> <5ee7644a0903100934x24c69477i9234ff21b85618af@mail.gmail.com> <5ee7644a0903100956x74bba433r9bd4cc0f8df9d9d@mail.gmail.com> <1E96E6EA-1F25-4C1D-83C7-3BF5C186DA58@mcs.anl.gov> <5ee7644a0903101052o1bd4cfd5n1b20a73447cbecc5@mail.gmail.com> Message-ID: Sugest doing the following: cd src/ksp/ksp/examples/tutorials make ex2 And now make sure *all* compile and include options are also used in your makefile. [-PIC is an error - configure should have picked up -fPIC - not -PIC] One easy way to do this is to use PETSc makefiles [check src/ksp/ksp/examples/tutorials/makefile]. This shows how to use PETSc 'make targets and variables. But since you want to use your own targets - you can pickup atleat the variables by Perhaps doing the following: include ${PETSC_DIR}/conf/variables < now use $(PETSC_LIB) > Satish On Tue, 10 Mar 2009, Wei-Dong Lian wrote: > Hello, > I am so sorry for that, but you are right. Thanks for your advice and I will > pay more attention from now on. Thank you very much. > Weidong > > On Tue, Mar 10, 2009 at 6:43 PM, Barry Smith wrote: > > > > > The order that libraries are listed in a makefile IS IMPORTANT. > > > > DDLIBSTATIC := $(ADDLIBSTATIC) $(PETSC_LIBDIR)/libmpiuni.a > > $(PETSC_LIBDIR)/libpetsccontrib.a $(PETSC_LIBDIR)/libpetscksp.a > > $(PETSC_LIBDIR)/libpetscsnes.a $(PETSC_LIBDIR)/libpetscvec.a > > $(PETSC_LIBDIR)/libpetsc.a $(PETSC_LIBDIR)/libpetscdm.a > > $(PETSC_LIBDIR)/libpetscmat.a $(PETSC_LIBDIR)/libpetscts.a > > > > Here you have some crazy, nonsense ordering of the libraries. Whoever made > > this "makefile" doesn't have a clue about unix; you cannot just dump random > > strings of characters into files and expect to develop software! It requires > > some basic understanding of what you are doing. > > > > > > Barry > > > > > > > > On Mar 10, 2009, at 11:56 AM, Wei-Dong Lian wrote: > > > > Hello, > >> Following are my makefile for link petsc. Before I configure with option > >> fortrun compiler with gfortran, it generates the shared library, now in my > >> computer with g77, so there is no shared library, I use the static library > >> as below. > >> Thanks for your information. 
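As a concrete sketch of the "include ${PETSC_DIR}/conf/variables" suggestion above: a custom makefile can pull in PETSc's variables and let $(PETSC_LIB) supply all the libraries, already in the right order and with BLAS/LAPACK appended. The target name and object list here are placeholders, $(CXX) stands for the same g++ used to build PETSc, and the exact variable names should be checked against the conf/variables file of the installed release.

# Sketch: reuse PETSc's own link information instead of listing archives by hand.
# PETSC_DIR and PETSC_ARCH must be set before this include is read.
include ${PETSC_DIR}/conf/variables

myprog: main.o                  # placeholder target and objects
	$(CXX) -o myprog main.o $(PETSC_LIB)

# Keep the existing INCLUDES/-I settings for compiling, or take those from
# conf/variables as well.
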
> >> Weidong > >> > >> **************************************************** > >> #FOR PETSC > >> ifdef HAVE_PETSC > >> DEFS := $(DEFS) -DHAVE_PETSC > >> PETSC_DIR=$(DEVROOT)/Solver/petscSeq > >> PETSC_ARCH=$(ARCHOS) > >> PETSC_LIBDIR := $(PETSC_DIR)/$(PETSC_ARCH)/lib > >> INCLUDES := $(INCLUDES) -I$(PETSC_DIR)/include > >> -I$(PETSC_DIR)/include/mpiuni -I$(PETSC_DIR)/include/adic > >> -I$(PETSC_DIR)/$(PETSC_ARCH)/include > >> > >> ADDLIB := $(ADDLIB) -llapack > >> #ADDLIB := $(ADDLIB) -L$(PETSC_LIBDIR) -Wl,-rpath,$(PETSC_LIBDIR) > >> -lpetsccontrib -lpetscksp -lpetscsnes -lpetscvec -lpetsc -lpetscdm > >> -lpetscmat -lpetscts > >> ADDLIBSTATIC := $(ADDLIBSTATIC) $(PETSC_LIBDIR)/libmpiuni.a > >> $(PETSC_LIBDIR)/libpetsccontrib.a $(PETSC_LIBDIR)/libpetscksp.a > >> $(PETSC_LIBDIR)/libpetscsnes.a $(PETSC_LIBDIR)/libpetscvec.a > >> $(PETSC_LIBDIR)/libpetsc.a $(PETSC_LIBDIR)/libpetscdm.a > >> $(PETSC_LIBDIR)/libpetscmat.a $(PETSC_LIBDIR)/libpetscts.a > >> > >> endif > >> #End For PETSC > >> > >> ***************************************************************************************** > >> On Tue, Mar 10, 2009 at 5:34 PM, Wei-Dong Lian > >> wrote: > >> Hi, > >> I did not use the PETSc makefiles. First I do not know where it located. > >> Second, I have configured my makefile to link my programme, before it > >> worked, that's in another computer, the difference lies in the fact that > >> this time in the lib directory, there were not *.so library, but *.lib > >> library. Maybe I think it is the problem. > >> Thanks. > >> Weidong > >> > >> > >> On Tue, Mar 10, 2009 at 5:07 PM, Matthew Knepley > >> wrote: > >> On Tue, Mar 10, 2009 at 10:59 AM, Wei-Dong Lian > >> wrote: > >> Hello everyone, > >> > >> I compiled the petsc-3.0.0-p3 with GCC 3.4.3 under linux 64. But the > >> compiler gave me the following information, see result_1. > >> I also ran the "make test" and it worked successfully, but when I linked > >> petsc to compile my programme, it told me that information see results_2. > >> Any suggestion will be appreciated. > >> Thank you in advance. > >> > >> 1) Any report like this should be sent to petsc-maint at mcs.anl.gov because > >> we need configure.log and make*.log. > >> > >> 2) The warning about -PIC comes about because we cannot parse the warning > >> messages from your compiler. We can try to > >> add this in when we get the log. > >> > >> 3) If 'make test' works and your link does not, then you have constructed > >> your link line incorrectly. Are you using the PETSc makefiles? > >> > >> Matt > >> > >> > >> Weidong > >> > >> $$$$$$$$$$$$$$$$$$$$$Result_1$$$$$$$$$$$$$$$$$$$$$$$$$$$$$ > >> /ftn-custom > >> g++: option ? -PIC ? non reconnue > >> libfast in: > >> /usr/local/temp/lian/Develop/latest/Solver/petscSeq/src/ts/impls > >> libfast in: > >> /usr/local/temp/lian/Develop/latest/Solver/petscSeq/src/ts/impls/explicit > >> libfast in: > >> /usr/local/temp/lian/Develop/latest/Solver/petscSeq/src/ts/impls/explicit/euler > >> g++: option ? -PIC ? non reconnue > >> libfast in: > >> /usr/local/temp/lian/Develop/latest/Solver/petscSeq/src/ts/impls/explicit/rk > >> g++: option ? -PIC ? non reconnue > >> libfast in: > >> /usr/local/temp/lian/Develop/latest/Solver/petscSeq/src/ts/impls/explicit/rk/ftn-auto > >> g++: option ? -PIC ? non reconnue > >> libfast in: > >> /usr/local/temp/lian/Develop/latest/Solver/petscSeq/src/ts/impls/implicit > >> libfast in: > >> /usr/local/temp/lian/Develop/latest/Solver/petscSeq/src/ts/impls/implicit/beuler > >> g++: option ? -PIC ? 
non reconnue > >> libfast in: > >> /usr/local/temp/lian/Develop/latest/Solver/petscSeq/src/ts/impls/implicit/cn > >> > >> > >> #####################################results_2################################ > >> /usr/local/temp/lian/Develop/Solver/petscSeq/x86_64_linux/lib/libpetscdm.a(daint.o)(.text+0x2aaf): > >> In function `DAGetWireBasketInterpolation(_p_DA*, _p_Mat*, MatReuse, > >> _p_Mat**)': > >> /usr/local/temp/lian/Develop/latest/Solver/petscSeq/src/dm/da/utils/daint.c:201: > >> undefined reference to `PetscTableCreate(int, _n_PetscTable**)' > >> /usr/local/temp/lian/Develop/Solver/petscSeq/x86_64_linux/lib/libpetscdm.a(daint.o)(.text+0x2b3d):/usr/local/temp/lian/Develop/latest/Solver/petscSeq/src/dm/da/utils/daint.c:203: > >> undefined reference to `PetscTableAddCount(_n_PetscTable*, int)' > >> /usr/local/temp/lian/Develop/Solver/petscSeq/x86_64_linux/lib/libpetscdm.a(daint.o)(.text+0x2ba1):/usr/local/temp/lian/Develop/latest/Solver/petscSeq/src/dm/da/utils/daint.c:205: > >> undefined reference to `PetscTableGetCount(_n_PetscTable*, int*)' > >> /usr/local/temp/lian/Develop/Solver/petscSeq/x86_64_linux/lib/libpetscdm.a(daint.o)(.text+0x2d24):/usr/local/temp/lian/Develop/latest/Solver/petscSeq/src/dm/da/utils/daint.c:209: > >> undefined reference to `PetscTableFind(_n_PetscTable*, int, int*)' > >> /usr/local/temp/lian/Develop/Solver/petscSeq/x86_64_linux/lib/libpetscdm.a(daint.o)(.text+0x2d8d):/usr/local/temp/lian/Develop/latest/Solver/petscSeq/src/dm/da/utils/daint.c:212: > >> undefined reference to `PetscTableDestroy(_n_PetscTable*)' > >> /usr/local/temp/lian/Develop/Solver/petscSeq/x86_64_linux/lib/libpetscdm.a(daint.o)(.text+0x5adf): > >> In function `DAGetFaceInterpolation(_p_DA*, _p_Mat*, MatReuse, _p_Mat**)': > >> > >> ##################################################################################### > >> > >> > >> My configuration of petsc: > >> &&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&& > >> > >> Pushing language C > >> Popping language C > >> Pushing language Cxx > >> Popping language Cxx > >> Pushing language FC > >> Popping language FC > >> sh: /bin/sh > >> /usr/local/temp/lian/Develop/latest/Solver/OpenSource/petsc-3.0.0-p3/config/BuildSystem/config/packages/config.gue > >> ss > >> Executing: /bin/sh > >> /usr/local/temp/lian/Develop/latest/Solver/OpenSource/petsc-3.0.0-p3/config/BuildSystem/config/packages/con > >> fig.guess > >> sh: x86_64-unknown-linux-gnu > >> > >> sh: /bin/sh > >> /usr/local/temp/lian/Develop/latest/Solver/OpenSource/petsc-3.0.0-p3/config/BuildSystem/config/packages/config.sub > >> x86_64-unknown-linux-gnu > >> > >> Executing: /bin/sh > >> /usr/local/temp/lian/Develop/latest/Solver/OpenSource/petsc-3.0.0-p3/config/BuildSystem/config/packages/con > >> fig.sub x86_64-unknown-linux-gnu > >> > >> sh: x86_64-unknown-linux-gnu > >> > >> > >> > >> ================================================================================ > >> > >> ================================================================================ > >> Starting Configure Run at Tue Mar 10 16:23:30 2009 > >> Configure Options: --configModules=PETSc.Configure > >> --optionsModule=PETSc.compilerOptions --with-cc=gcc --with-fc=g77 --with-cx > >> x=g++ --with-mpi=0 --with-x=0 --with-clanguage=cxx --with-shared=1 > >> --with-dynamic=1 > >> Working directory: > >> /usr/local/temp/lian/Develop/latest/Solver/OpenSource/petsc-3.0.0-p3 > >> Machine uname: > >> ('Linux', 'frioul', '2.6.9-22.ELsmp', '#1 SMP Sat Oct 8 21:32:36 BST > >> 2005', 'x86_64') > >> Python version: > >> 
2.3.4 (#1, Feb 17 2005, 21:01:10) > >> [GCC 3.4.3 20041212 (Red Hat 3.4.3-9.EL4)] > >> > >> ================================================================================ > >> Pushing language C > >> Popping language C > >> Pushing language Cxx > >> Popping language Cxx > >> Pushing language FC > >> Popping language FC > >> > >> ================================================================================ > >> TEST configureExternalPackagesDir from > >> config.framework(/usr/local/temp/lian/Develop/latest/Solver/OpenSource/petsc-3.0.0-p3/c > >> onfig/BuildSystem/config/framework.py:815) > >> TESTING: configureExternalPackagesDir from > >> config.framework(config/BuildSystem/config/framework.py:815) > >> > >> ================================================================================ > >> TEST configureLibrary from > >> PETSc.packages.NetCDF(/usr/local/temp/lian/Develop/latest/Solver/OpenSource/petsc-3.0.0-p3/config/P > >> ETSc/packages/NetCDF.py:10) > >> TESTING: configureLibrary from > >> PETSc.packages.NetCDF(config/PETSc/packages/NetCDF.py:10) > >> Find a NetCDF installation and check if it can work with PETSc > >> > >> ================================================================================ > >> TEST configureLibrary from > >> PETSc.packages.PVODE(/usr/local/temp/lian/Develop/latest/Solver/OpenSource/petsc-3.0.0-p3/config/PE > >> TSc/packages/PVODE.py:10) > >> TESTING: configureLibrary from > >> PETSc.packages.PVODE(config/PETSc/packages/PVODE.py:10) > >> Find a PVODE installation and check if it can work with PETSc > >> > >> ================================================================================ > >> TEST configureDebuggers from > >> PETSc.utilities.debuggers(/usr/local/temp/lian/Develop/latest/Solver/OpenSource/petsc-3.0.0-p3/co > >> nfig/PETSc/utilities/debuggers.py:22) > >> TESTING: configureDebuggers from > >> PETSc.utilities.debuggers(config/PETSc/utilities/debuggers.py:22) > >> Find a default debugger and determine its arguments > >> > >> > >> > >> > >> -- > >> What most experimenters take for granted before they begin their > >> experiments is infinitely more interesting than any results to which their > >> experiments lead. > >> -- Norbert Wiener > >> > >> > >> > > > From weidong.lian at gmail.com Tue Mar 10 14:33:35 2009 From: weidong.lian at gmail.com (Wei-Dong Lian) Date: Tue, 10 Mar 2009 20:33:35 +0100 Subject: Hello for compiled problem In-Reply-To: References: <5ee7644a0903100859o6ef19098k97e5626ab23eb5ce@mail.gmail.com> <5ee7644a0903100934x24c69477i9234ff21b85618af@mail.gmail.com> <5ee7644a0903100956x74bba433r9bd4cc0f8df9d9d@mail.gmail.com> <1E96E6EA-1F25-4C1D-83C7-3BF5C186DA58@mcs.anl.gov> <5ee7644a0903101052o1bd4cfd5n1b20a73447cbecc5@mail.gmail.com> Message-ID: <5ee7644a0903101233o5b4d5e1ao3641819d479e4530@mail.gmail.com> Hello, I did not configure the flag -PIC, can you tell me how to change the -PIC to -fPIC. By the way, why there are two types lib *.so and *.lib in the library path. But when I change option to FC=g77, it only left *.lib in the library path. Thank you very much. Weidong On Tue, Mar 10, 2009 at 6:58 PM, Satish Balay wrote: > Sugest doing the following: > > cd src/ksp/ksp/examples/tutorials > make ex2 > > And now make sure *all* compile and include options are also used in > your makefile. [-PIC is an error - configure should have picked up > -fPIC - not -PIC] > > One easy way to do this is to use PETSc makefiles [check > src/ksp/ksp/examples/tutorials/makefile]. This shows how to use PETSc > 'make targets and variables. 
> > But since you want to use your own targets - you can pickup atleat the > variables by Perhaps doing the following: > > include ${PETSC_DIR}/conf/variables > < now use $(PETSC_LIB) > > > Satish > > > > > On Tue, 10 Mar 2009, Wei-Dong Lian wrote: > > > Hello, > > I am so sorry for that, but you are right. Thanks for your advice and I > will > > pay more attention from now on. Thank you very much. > > Weidong > > > > On Tue, Mar 10, 2009 at 6:43 PM, Barry Smith wrote: > > > > > > > > The order that libraries are listed in a makefile IS IMPORTANT. > > > > > > DDLIBSTATIC := $(ADDLIBSTATIC) $(PETSC_LIBDIR)/libmpiuni.a > > > $(PETSC_LIBDIR)/libpetsccontrib.a $(PETSC_LIBDIR)/libpetscksp.a > > > $(PETSC_LIBDIR)/libpetscsnes.a $(PETSC_LIBDIR)/libpetscvec.a > > > $(PETSC_LIBDIR)/libpetsc.a $(PETSC_LIBDIR)/libpetscdm.a > > > $(PETSC_LIBDIR)/libpetscmat.a $(PETSC_LIBDIR)/libpetscts.a > > > > > > Here you have some crazy, nonsense ordering of the libraries. Whoever > made > > > this "makefile" doesn't have a clue about unix; you cannot just dump > random > > > strings of characters into files and expect to develop software! It > requires > > > some basic understanding of what you are doing. > > > > > > > > > Barry > > > > > > > > > > > > On Mar 10, 2009, at 11:56 AM, Wei-Dong Lian wrote: > > > > > > Hello, > > >> Following are my makefile for link petsc. Before I configure with > option > > >> fortrun compiler with gfortran, it generates the shared library, now > in my > > >> computer with g77, so there is no shared library, I use the static > library > > >> as below. > > >> Thanks for your information. > > >> Weidong > > >> > > >> **************************************************** > > >> #FOR PETSC > > >> ifdef HAVE_PETSC > > >> DEFS := $(DEFS) -DHAVE_PETSC > > >> PETSC_DIR=$(DEVROOT)/Solver/petscSeq > > >> PETSC_ARCH=$(ARCHOS) > > >> PETSC_LIBDIR := $(PETSC_DIR)/$(PETSC_ARCH)/lib > > >> INCLUDES := $(INCLUDES) -I$(PETSC_DIR)/include > > >> -I$(PETSC_DIR)/include/mpiuni -I$(PETSC_DIR)/include/adic > > >> -I$(PETSC_DIR)/$(PETSC_ARCH)/include > > >> > > >> ADDLIB := $(ADDLIB) -llapack > > >> #ADDLIB := $(ADDLIB) -L$(PETSC_LIBDIR) -Wl,-rpath,$(PETSC_LIBDIR) > > >> -lpetsccontrib -lpetscksp -lpetscsnes -lpetscvec -lpetsc -lpetscdm > > >> -lpetscmat -lpetscts > > >> ADDLIBSTATIC := $(ADDLIBSTATIC) $(PETSC_LIBDIR)/libmpiuni.a > > >> $(PETSC_LIBDIR)/libpetsccontrib.a $(PETSC_LIBDIR)/libpetscksp.a > > >> $(PETSC_LIBDIR)/libpetscsnes.a $(PETSC_LIBDIR)/libpetscvec.a > > >> $(PETSC_LIBDIR)/libpetsc.a $(PETSC_LIBDIR)/libpetscdm.a > > >> $(PETSC_LIBDIR)/libpetscmat.a $(PETSC_LIBDIR)/libpetscts.a > > >> > > >> endif > > >> #End For PETSC > > >> > > >> > ***************************************************************************************** > > >> On Tue, Mar 10, 2009 at 5:34 PM, Wei-Dong Lian < > weidong.lian at gmail.com> > > >> wrote: > > >> Hi, > > >> I did not use the PETSc makefiles. First I do not know where it > located. > > >> Second, I have configured my makefile to link my programme, before it > > >> worked, that's in another computer, the difference lies in the fact > that > > >> this time in the lib directory, there were not *.so library, but *.lib > > >> library. Maybe I think it is the problem. > > >> Thanks. 
> > >> Weidong > > >> > > >> > > >> On Tue, Mar 10, 2009 at 5:07 PM, Matthew Knepley > > >> wrote: > > >> On Tue, Mar 10, 2009 at 10:59 AM, Wei-Dong Lian < > weidong.lian at gmail.com> > > >> wrote: > > >> Hello everyone, > > >> > > >> I compiled the petsc-3.0.0-p3 with GCC 3.4.3 under linux 64. But the > > >> compiler gave me the following information, see result_1. > > >> I also ran the "make test" and it worked successfully, but when I > linked > > >> petsc to compile my programme, it told me that information see > results_2. > > >> Any suggestion will be appreciated. > > >> Thank you in advance. > > >> > > >> 1) Any report like this should be sent to petsc-maint at mcs.anl.govbecause > > >> we need configure.log and make*.log. > > >> > > >> 2) The warning about -PIC comes about because we cannot parse the > warning > > >> messages from your compiler. We can try to > > >> add this in when we get the log. > > >> > > >> 3) If 'make test' works and your link does not, then you have > constructed > > >> your link line incorrectly. Are you using the PETSc makefiles? > > >> > > >> Matt > > >> > > >> > > >> Weidong > > >> > > >> $$$$$$$$$$$$$$$$$$$$$Result_1$$$$$$$$$$$$$$$$$$$$$$$$$$$$$ > > >> /ftn-custom > > >> g++: option ? -PIC ? non reconnue > > >> libfast in: > > >> /usr/local/temp/lian/Develop/latest/Solver/petscSeq/src/ts/impls > > >> libfast in: > > >> > /usr/local/temp/lian/Develop/latest/Solver/petscSeq/src/ts/impls/explicit > > >> libfast in: > > >> > /usr/local/temp/lian/Develop/latest/Solver/petscSeq/src/ts/impls/explicit/euler > > >> g++: option ? -PIC ? non reconnue > > >> libfast in: > > >> > /usr/local/temp/lian/Develop/latest/Solver/petscSeq/src/ts/impls/explicit/rk > > >> g++: option ? -PIC ? non reconnue > > >> libfast in: > > >> > /usr/local/temp/lian/Develop/latest/Solver/petscSeq/src/ts/impls/explicit/rk/ftn-auto > > >> g++: option ? -PIC ? non reconnue > > >> libfast in: > > >> > /usr/local/temp/lian/Develop/latest/Solver/petscSeq/src/ts/impls/implicit > > >> libfast in: > > >> > /usr/local/temp/lian/Develop/latest/Solver/petscSeq/src/ts/impls/implicit/beuler > > >> g++: option ? -PIC ? 
non reconnue > > >> libfast in: > > >> > /usr/local/temp/lian/Develop/latest/Solver/petscSeq/src/ts/impls/implicit/cn > > >> > > >> > > >> > #####################################results_2################################ > > >> > /usr/local/temp/lian/Develop/Solver/petscSeq/x86_64_linux/lib/libpetscdm.a(daint.o)(.text+0x2aaf): > > >> In function `DAGetWireBasketInterpolation(_p_DA*, _p_Mat*, MatReuse, > > >> _p_Mat**)': > > >> > /usr/local/temp/lian/Develop/latest/Solver/petscSeq/src/dm/da/utils/daint.c:201: > > >> undefined reference to `PetscTableCreate(int, _n_PetscTable**)' > > >> > /usr/local/temp/lian/Develop/Solver/petscSeq/x86_64_linux/lib/libpetscdm.a(daint.o)(.text+0x2b3d):/usr/local/temp/lian/Develop/latest/Solver/petscSeq/src/dm/da/utils/daint.c:203: > > >> undefined reference to `PetscTableAddCount(_n_PetscTable*, int)' > > >> > /usr/local/temp/lian/Develop/Solver/petscSeq/x86_64_linux/lib/libpetscdm.a(daint.o)(.text+0x2ba1):/usr/local/temp/lian/Develop/latest/Solver/petscSeq/src/dm/da/utils/daint.c:205: > > >> undefined reference to `PetscTableGetCount(_n_PetscTable*, int*)' > > >> > /usr/local/temp/lian/Develop/Solver/petscSeq/x86_64_linux/lib/libpetscdm.a(daint.o)(.text+0x2d24):/usr/local/temp/lian/Develop/latest/Solver/petscSeq/src/dm/da/utils/daint.c:209: > > >> undefined reference to `PetscTableFind(_n_PetscTable*, int, int*)' > > >> > /usr/local/temp/lian/Develop/Solver/petscSeq/x86_64_linux/lib/libpetscdm.a(daint.o)(.text+0x2d8d):/usr/local/temp/lian/Develop/latest/Solver/petscSeq/src/dm/da/utils/daint.c:212: > > >> undefined reference to `PetscTableDestroy(_n_PetscTable*)' > > >> > /usr/local/temp/lian/Develop/Solver/petscSeq/x86_64_linux/lib/libpetscdm.a(daint.o)(.text+0x5adf): > > >> In function `DAGetFaceInterpolation(_p_DA*, _p_Mat*, MatReuse, > _p_Mat**)': > > >> > > >> > ##################################################################################### > > >> > > >> > > >> My configuration of petsc: > > >> &&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&& > > >> > > >> Pushing language C > > >> Popping language C > > >> Pushing language Cxx > > >> Popping language Cxx > > >> Pushing language FC > > >> Popping language FC > > >> sh: /bin/sh > > >> > /usr/local/temp/lian/Develop/latest/Solver/OpenSource/petsc-3.0.0-p3/config/BuildSystem/config/packages/config.gue > > >> ss > > >> Executing: /bin/sh > > >> > /usr/local/temp/lian/Develop/latest/Solver/OpenSource/petsc-3.0.0-p3/config/BuildSystem/config/packages/con > > >> fig.guess > > >> sh: x86_64-unknown-linux-gnu > > >> > > >> sh: /bin/sh > > >> > /usr/local/temp/lian/Develop/latest/Solver/OpenSource/petsc-3.0.0-p3/config/BuildSystem/config/packages/config.sub > > >> x86_64-unknown-linux-gnu > > >> > > >> Executing: /bin/sh > > >> > /usr/local/temp/lian/Develop/latest/Solver/OpenSource/petsc-3.0.0-p3/config/BuildSystem/config/packages/con > > >> fig.sub x86_64-unknown-linux-gnu > > >> > > >> sh: x86_64-unknown-linux-gnu > > >> > > >> > > >> > > >> > ================================================================================ > > >> > > >> > ================================================================================ > > >> Starting Configure Run at Tue Mar 10 16:23:30 2009 > > >> Configure Options: --configModules=PETSc.Configure > > >> --optionsModule=PETSc.compilerOptions --with-cc=gcc --with-fc=g77 > --with-cx > > >> x=g++ --with-mpi=0 --with-x=0 --with-clanguage=cxx --with-shared=1 > > >> --with-dynamic=1 > > >> Working directory: > > >> 
/usr/local/temp/lian/Develop/latest/Solver/OpenSource/petsc-3.0.0-p3 > > >> Machine uname: > > >> ('Linux', 'frioul', '2.6.9-22.ELsmp', '#1 SMP Sat Oct 8 21:32:36 BST > > >> 2005', 'x86_64') > > >> Python version: > > >> 2.3.4 (#1, Feb 17 2005, 21:01:10) > > >> [GCC 3.4.3 20041212 (Red Hat 3.4.3-9.EL4)] > > >> > > >> > ================================================================================ > > >> Pushing language C > > >> Popping language C > > >> Pushing language Cxx > > >> Popping language Cxx > > >> Pushing language FC > > >> Popping language FC > > >> > > >> > ================================================================================ > > >> TEST configureExternalPackagesDir from > > >> > config.framework(/usr/local/temp/lian/Develop/latest/Solver/OpenSource/petsc-3.0.0-p3/c > > >> onfig/BuildSystem/config/framework.py:815) > > >> TESTING: configureExternalPackagesDir from > > >> config.framework(config/BuildSystem/config/framework.py:815) > > >> > > >> > ================================================================================ > > >> TEST configureLibrary from > > >> > PETSc.packages.NetCDF(/usr/local/temp/lian/Develop/latest/Solver/OpenSource/petsc-3.0.0-p3/config/P > > >> ETSc/packages/NetCDF.py:10) > > >> TESTING: configureLibrary from > > >> PETSc.packages.NetCDF(config/PETSc/packages/NetCDF.py:10) > > >> Find a NetCDF installation and check if it can work with PETSc > > >> > > >> > ================================================================================ > > >> TEST configureLibrary from > > >> > PETSc.packages.PVODE(/usr/local/temp/lian/Develop/latest/Solver/OpenSource/petsc-3.0.0-p3/config/PE > > >> TSc/packages/PVODE.py:10) > > >> TESTING: configureLibrary from > > >> PETSc.packages.PVODE(config/PETSc/packages/PVODE.py:10) > > >> Find a PVODE installation and check if it can work with PETSc > > >> > > >> > ================================================================================ > > >> TEST configureDebuggers from > > >> > PETSc.utilities.debuggers(/usr/local/temp/lian/Develop/latest/Solver/OpenSource/petsc-3.0.0-p3/co > > >> nfig/PETSc/utilities/debuggers.py:22) > > >> TESTING: configureDebuggers from > > >> PETSc.utilities.debuggers(config/PETSc/utilities/debuggers.py:22) > > >> Find a default debugger and determine its arguments > > >> > > >> > > >> > > >> > > >> -- > > >> What most experimenters take for granted before they begin their > > >> experiments is infinitely more interesting than any results to which > their > > >> experiments lead. > > >> -- Norbert Wiener > > >> > > >> > > >> > > > > > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From balay at mcs.anl.gov Tue Mar 10 14:45:14 2009 From: balay at mcs.anl.gov (Satish Balay) Date: Tue, 10 Mar 2009 14:45:14 -0500 (CDT) Subject: Hello for compiled problem In-Reply-To: <5ee7644a0903101233o5b4d5e1ao3641819d479e4530@mail.gmail.com> References: <5ee7644a0903100859o6ef19098k97e5626ab23eb5ce@mail.gmail.com> <5ee7644a0903100934x24c69477i9234ff21b85618af@mail.gmail.com> <5ee7644a0903100956x74bba433r9bd4cc0f8df9d9d@mail.gmail.com> <1E96E6EA-1F25-4C1D-83C7-3BF5C186DA58@mcs.anl.gov> <5ee7644a0903101052o1bd4cfd5n1b20a73447cbecc5@mail.gmail.com> <5ee7644a0903101233o5b4d5e1ao3641819d479e4530@mail.gmail.com> Message-ID: On Tue, 10 Mar 2009, Wei-Dong Lian wrote: > Hello, > > I did not configure the flag -PIC, can you tell me how to change the -PIC to > -fPIC. Its a problem with configure - and you default language of choice. 
Try changing your lang to english - when building PETSc. Perhaps the following might work. export LANG en_US.UTF-8 ./config/configure.py ... make > By the way, why there are two types lib *.so and *.lib in the library path. Perhaps you mean .a - not .lib If you use --with-shared=1 or --with-dynamic=1 - you get .so files aswell. > But when I change option to FC=g77, it only left *.lib in the library path. Due to the -PIC issue - perhaps .so files didn't get created.. Satish From bsmith at mcs.anl.gov Tue Mar 10 15:19:31 2009 From: bsmith at mcs.anl.gov (Barry Smith) Date: Tue, 10 Mar 2009 15:19:31 -0500 Subject: Hello for compiled problem In-Reply-To: References: <5ee7644a0903100859o6ef19098k97e5626ab23eb5ce@mail.gmail.com> <5ee7644a0903100934x24c69477i9234ff21b85618af@mail.gmail.com> <5ee7644a0903100956x74bba433r9bd4cc0f8df9d9d@mail.gmail.com> <1E96E6EA-1F25-4C1D-83C7-3BF5C186DA58@mcs.anl.gov> <5ee7644a0903101052o1bd4cfd5n1b20a73447cbecc5@mail.gmail.com> <5ee7644a0903101233o5b4d5e1ao3641819d479e4530@mail.gmail.com> Message-ID: <768CD1F3-A65A-4023-82B4-1CF677DCF121@mcs.anl.gov> Satish, Why does config/configure.py NOT automatically turn on LANG english when it starts up? Then we would not have to parse warning/error messages in all languages? Barry On Mar 10, 2009, at 2:45 PM, Satish Balay wrote: > On Tue, 10 Mar 2009, Wei-Dong Lian wrote: > >> Hello, >> >> I did not configure the flag -PIC, can you tell me how to change >> the -PIC to >> -fPIC. > > Its a problem with configure - and you default language of choice. Try > changing your lang to english - when building PETSc. > > Perhaps the following might work. > > export LANG en_US.UTF-8 > ./config/configure.py ... > make > >> By the way, why there are two types lib *.so and *.lib in the >> library path. > > Perhaps you mean .a - not .lib > > If you use --with-shared=1 or --with-dynamic=1 - you get .so files > aswell. > >> But when I change option to FC=g77, it only left *.lib in the >> library path. > > Due to the -PIC issue - perhaps .so files didn't get created.. > > Satish > From balay at mcs.anl.gov Tue Mar 10 15:24:52 2009 From: balay at mcs.anl.gov (Satish Balay) Date: Tue, 10 Mar 2009 15:24:52 -0500 (CDT) Subject: Hello for compiled problem In-Reply-To: <768CD1F3-A65A-4023-82B4-1CF677DCF121@mcs.anl.gov> References: <5ee7644a0903100859o6ef19098k97e5626ab23eb5ce@mail.gmail.com> <5ee7644a0903100934x24c69477i9234ff21b85618af@mail.gmail.com> <5ee7644a0903100956x74bba433r9bd4cc0f8df9d9d@mail.gmail.com> <1E96E6EA-1F25-4C1D-83C7-3BF5C186DA58@mcs.anl.gov> <5ee7644a0903101052o1bd4cfd5n1b20a73447cbecc5@mail.gmail.com> <5ee7644a0903101233o5b4d5e1ao3641819d479e4530@mail.gmail.com> <768CD1F3-A65A-4023-82B4-1CF677DCF121@mcs.anl.gov> Message-ID: I'll check if this can be done.. For one - we'll need this change for both configure - and make. Satish On Tue, 10 Mar 2009, Barry Smith wrote: > > Satish, > > Why does config/configure.py NOT automatically turn on LANG english when it > starts up? > Then we would not have to parse warning/error messages in all languages? > > Barry > > On Mar 10, 2009, at 2:45 PM, Satish Balay wrote: > > > On Tue, 10 Mar 2009, Wei-Dong Lian wrote: > > > > > Hello, > > > > > > I did not configure the flag -PIC, can you tell me how to change the -PIC > > > to > > > -fPIC. > > > > Its a problem with configure - and you default language of choice. Try > > changing your lang to english - when building PETSc. > > > > Perhaps the following might work. > > > > export LANG en_US.UTF-8 > > ./config/configure.py ... 
> > make > > > > > By the way, why there are two types lib *.so and *.lib in the library > > > path. > > > > Perhaps you mean .a - not .lib > > > > If you use --with-shared=1 or --with-dynamic=1 - you get .so files aswell. > > > > > But when I change option to FC=g77, it only left *.lib in the library > > > path. > > > > Due to the -PIC issue - perhaps .so files didn't get created.. > > > > Satish > > > From balay at mcs.anl.gov Tue Mar 10 15:37:46 2009 From: balay at mcs.anl.gov (Satish Balay) Date: Tue, 10 Mar 2009 15:37:46 -0500 (CDT) Subject: Hello for compiled problem In-Reply-To: References: <5ee7644a0903100859o6ef19098k97e5626ab23eb5ce@mail.gmail.com> <5ee7644a0903100934x24c69477i9234ff21b85618af@mail.gmail.com> <5ee7644a0903100956x74bba433r9bd4cc0f8df9d9d@mail.gmail.com> <1E96E6EA-1F25-4C1D-83C7-3BF5C186DA58@mcs.anl.gov> <5ee7644a0903101052o1bd4cfd5n1b20a73447cbecc5@mail.gmail.com> <5ee7644a0903101233o5b4d5e1ao3641819d479e4530@mail.gmail.com> <768CD1F3-A65A-4023-82B4-1CF677DCF121@mcs.anl.gov> Message-ID: Weidong, What do you have for the following? env |grep LANG cd src/benchmarks gcc sizeof.c gcc -PIC sizeof.c export LANG=en_US.UTF-8 gcc -PIC sizeof.c thanks, Satish From weidong.lian at gmail.com Tue Mar 10 16:35:20 2009 From: weidong.lian at gmail.com (Wei-Dong Lian) Date: Tue, 10 Mar 2009 22:35:20 +0100 Subject: Hello for compiled problem In-Reply-To: References: <5ee7644a0903100859o6ef19098k97e5626ab23eb5ce@mail.gmail.com> <5ee7644a0903100956x74bba433r9bd4cc0f8df9d9d@mail.gmail.com> <1E96E6EA-1F25-4C1D-83C7-3BF5C186DA58@mcs.anl.gov> <5ee7644a0903101052o1bd4cfd5n1b20a73447cbecc5@mail.gmail.com> <5ee7644a0903101233o5b4d5e1ao3641819d479e4530@mail.gmail.com> <768CD1F3-A65A-4023-82B4-1CF677DCF121@mcs.anl.gov> Message-ID: <5ee7644a0903101435g54171f43j6872191b6a477888@mail.gmail.com> Satish, I am very grateful that you are so kind for my problem. Thanks for all of your team workers. Petsc is great. Now I am in France, so it is 22:10 and I am at home, my computer is Ubuntu 64 English, so it works very well in my computer. Tomorrow morning I will try that. First I am so sorry for my poor programming level giving you so much trouble, I am a new user under linux. My major is Mechanics, so my makefile seems unprofessional. But I am interested in programming. I will take more time for that later. Now I also found a problem for my computer about using petsc. In my computer, petsc can be compiled successfully without any problem. So I use the *.so library to link into my programme, it worked very well. But today I just have a try to link *.a library into my programme and it can not be compiled successfully. The problem is the same as in my office's computer. Now I am sure it is my problem linking the static lib *.a, can you tell me how to use your petsc with linking the *.a, I have read your makefile in the examples as you told me, but it is a little complicated. And you also told me that the order of the static lib is very important. What should I do for changing my makefile? I really want to use your makefile to link my programme, but I word in many computer in the network under Linux, So different computer have differect petsc, that's why I want to use my makefile to link petsc. Thanks very much. It is really very kind of you. 
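For reference, the locale workaround Satish is pointing at amounts to rebuilding under an English locale so that configure can parse gcc's diagnostics and record -fPIC instead of -PIC. A sketch of the whole sequence, reusing the path and options from the configure log earlier in the thread (the LC_ALL line is an assumption for systems where LANG alone is not enough, and --with-dynamic=1 from the original log is left out here, in line with the shared/static advice later in the thread):

# Rebuild with English compiler messages so the PIC test is parsed correctly.
export LANG=en_US.UTF-8
export LC_ALL=en_US.UTF-8       # assumption: some systems also need LC_ALL
cd /usr/local/temp/lian/Develop/latest/Solver/OpenSource/petsc-3.0.0-p3
./config/configure.py --with-cc=gcc --with-fc=g77 --with-cxx=g++ \
    --with-mpi=0 --with-x=0 --with-clanguage=cxx --with-shared=1
make
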
Yours truly, Weidong ***********************My makefile to linking the petsc into my programme*********************************************** #FOR PETSC ifdef HAVE_PETSC DEFS := $(DEFS) -DHAVE_PETSC DEPS := $(DEPS) SolverInterfaces/PetscSeq #PETSC_DIR=$(DEVROOT)/Solver/petscSeq PETSC_ARCH=$(ARCHOS) PETSC_LIBDIR := $(PETSC_DIR)/$(PETSC_ARCH)/lib INCLUDES := $(INCLUDES) -I$(PETSC_DIR)/include -I$(PETSC_DIR)/include/mpiuni -I$(PETSC_DIR)/include/adic -I$(PETSC_DIR)/$(PETSC_ARCH)/include ADDLIB := $(ADDLIB) -llapack ADDLIBSTATIC := $(ADDLIBSTATIC) $(PETSC_LIBDIR)/libmpiuni.a # ######for shared library it worked well #ADDLIB := $(ADDLIB) -L$(PETSC_LIBDIR) -Wl,-rpath,$(PETSC_LIBDIR) -lpetsccontrib -lpetscksp -lpetscsnes -lpetscvec -lpetsc -lpetscdm -lpetscmat -lpetscts #####for static library it told me that many unreferenced sysbol as blow ADDLIBSTATIC := $(ADDLIBSTATIC) $(PETSC_LIBDIR)/libpetsccontrib.a $(PETSC_LIBDIR)/libpetscksp.a $(PETSC_LIBDIR)/libpetscsnes.a $(PETSC_LIBDIR)/libpetscvec.a $(PETSC_LIBDIR)/libpetsc.a $(PETSC_LIBDIR)/libpetscdm.a $(PETSC_LIBDIR)/libpetscmat.a $(PETSC_LIBDIR)/libpetscts.a ifndef PARALLEL LIBS:= $(LIBS) PetscSeq endif endif #End For PETSC ************************************************************************************************* ##################error information############################### /home/lian/Develop/Solver/petscSeq/src/mat/impls/aij/mpi/mpiaij.c:797: undefined reference to `MPIUNI_TMP' /home/lian/Develop/Solver/petscSeq/x86_64_linux/lib/libpetscmat.a(mpiaij.o): In function `MatZeroRows_MPIAIJ(_p_Mat*, int, int const*, double)': /home/lian/Develop/Solver/petscSeq/src/mat/impls/aij/mpi/mpiaij.c:612: undefined reference to `MPIUNI_TMP' /home/lian/Develop/Solver/petscSeq/src/mat/impls/aij/mpi/mpiaij.c:612: undefined reference to `MPIUNI_TMP' /home/lian/Develop/Solver/petscSeq/src/mat/impls/aij/mpi/mpiaij.c:612: undefined reference to `MPIUNI_TMP' /home/lian/Develop/Solver/petscSeq/src/mat/impls/aij/mpi/mpiaij.c:612: undefined reference to `MPIUNI_TMP' ################################################## On Tue, Mar 10, 2009 at 9:37 PM, Satish Balay wrote: > Weidong, > > What do you have for the following? > > env |grep LANG > cd src/benchmarks > gcc sizeof.c > gcc -PIC sizeof.c > export LANG=en_US.UTF-8 > gcc -PIC sizeof.c > > > thanks, > Satish > -------------- next part -------------- An HTML attachment was scrubbed... URL: From balay at mcs.anl.gov Tue Mar 10 16:53:46 2009 From: balay at mcs.anl.gov (Satish Balay) Date: Tue, 10 Mar 2009 16:53:46 -0500 (CDT) Subject: Hello for compiled problem In-Reply-To: <5ee7644a0903101435g54171f43j6872191b6a477888@mail.gmail.com> References: <5ee7644a0903100859o6ef19098k97e5626ab23eb5ce@mail.gmail.com> <5ee7644a0903100956x74bba433r9bd4cc0f8df9d9d@mail.gmail.com> <1E96E6EA-1F25-4C1D-83C7-3BF5C186DA58@mcs.anl.gov> <5ee7644a0903101052o1bd4cfd5n1b20a73447cbecc5@mail.gmail.com> <5ee7644a0903101233o5b4d5e1ao3641819d479e4530@mail.gmail.com> <768CD1F3-A65A-4023-82B4-1CF677DCF121@mcs.anl.gov> <5ee7644a0903101435g54171f43j6872191b6a477888@mail.gmail.com> Message-ID: On Tue, 10 Mar 2009, Wei-Dong Lian wrote: > Now I also found a problem for my computer about using petsc. > In my computer, petsc can be compiled successfully without any problem. So I > use the *.so library to link into my programme, it worked very well. But > today I just have a try to link *.a library into my programme and it can not > be compiled successfully. Why do you want to do this? 
[When shared libraries exist - the compiler prefers then - instead of static. So you should just stick with the compiler default behavior. And if build PETSc with --with-dynamic - then the .a files are useless anyway.] So the shared vs static usage should be chosen at PETSc configure step. [--with-shared=1/0, and do not use --with-dyanmic] So you should just use: ADDLIB := $(ADDLIB) -L$(PETSC_LIBDIR) -Wl,-rpath,$(PETSC_LIBDIR) -lpetsccontrib -lpetscts -lpetscsnes -lpetscksp -lpetscdm -lpetscmat -lpetscvec -lpetsc Satish From weidong.lian at gmail.com Tue Mar 10 18:38:54 2009 From: weidong.lian at gmail.com (Wei-Dong Lian) Date: Wed, 11 Mar 2009 00:38:54 +0100 Subject: Hello for compiled problem In-Reply-To: References: <5ee7644a0903100859o6ef19098k97e5626ab23eb5ce@mail.gmail.com> <5ee7644a0903101052o1bd4cfd5n1b20a73447cbecc5@mail.gmail.com> <5ee7644a0903101233o5b4d5e1ao3641819d479e4530@mail.gmail.com> <768CD1F3-A65A-4023-82B4-1CF677DCF121@mcs.anl.gov> <5ee7644a0903101435g54171f43j6872191b6a477888@mail.gmail.com> Message-ID: <5ee7644a0903101638m7a169001xf62a593e1163e222@mail.gmail.com> Hi, [--with-shared=0, and do not use --with-dyanmic] I compiled petsc and link the static lib with my makefile "ADDLIBSTATIC := $(ADDLIBSTATIC) $(PETSC_LIBDIR)/ libpetsccontrib.a $(PETSC_LIBDIR)/libpetscksp.a $(PETSC_LIBDIR)/libpetscsnes.a $(PETSC_LIBDIR)/libpetscvec.a $(PETSC_LIBDIR)/libpetsc.a $(PETSC_LIBDIR)/libpetscdm.a $(PETSC_LIBDIR)/libpetscmat.a $(PETSC_LIBDIR)/libpetscts.a" The result was the same as before. So I really wonder how to link petsc with static library? This is useful when I could not generate shared library. Tomorrow morning I will try the *setCompilers.py *that you sent me for avoiding the error of -PIC. it is 00:37, I will go to sleep. See you tomorrow. Good night. yours sincerely Weidong On Tue, Mar 10, 2009 at 10:53 PM, Satish Balay wrote: > > > On Tue, 10 Mar 2009, Wei-Dong Lian wrote: > > > Now I also found a problem for my computer about using petsc. > > In my computer, petsc can be compiled successfully without any problem. > So I > > use the *.so library to link into my programme, it worked very well. But > > today I just have a try to link *.a library into my programme and it can > not > > be compiled successfully. > > Why do you want to do this? [When shared libraries exist - the > compiler prefers then - instead of static. So you should just stick > with the compiler default behavior. And if build PETSc with > --with-dynamic - then the .a files are useless anyway.] > > So the shared vs static usage should be chosen at PETSc configure step. > [--with-shared=1/0, and do not use --with-dyanmic] > > > So you should just use: > > ADDLIB := $(ADDLIB) -L$(PETSC_LIBDIR) -Wl,-rpath,$(PETSC_LIBDIR) > -lpetsccontrib -lpetscts -lpetscsnes -lpetscksp -lpetscdm > -lpetscmat -lpetscvec -lpetsc > > > Satish > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From balay at mcs.anl.gov Tue Mar 10 18:42:04 2009 From: balay at mcs.anl.gov (Satish Balay) Date: Tue, 10 Mar 2009 18:42:04 -0500 (CDT) Subject: Hello for compiled problem In-Reply-To: <5ee7644a0903101638m7a169001xf62a593e1163e222@mail.gmail.com> References: <5ee7644a0903100859o6ef19098k97e5626ab23eb5ce@mail.gmail.com> <5ee7644a0903101052o1bd4cfd5n1b20a73447cbecc5@mail.gmail.com> <5ee7644a0903101233o5b4d5e1ao3641819d479e4530@mail.gmail.com> <768CD1F3-A65A-4023-82B4-1CF677DCF121@mcs.anl.gov> <5ee7644a0903101435g54171f43j6872191b6a477888@mail.gmail.com> <5ee7644a0903101638m7a169001xf62a593e1163e222@mail.gmail.com> Message-ID: On Wed, 11 Mar 2009, Wei-Dong Lian wrote: > Hi, > > [--with-shared=0, and do not use --with-dyanmic] > I compiled petsc and link the static lib with my makefile > "ADDLIBSTATIC := $(ADDLIBSTATIC) $(PETSC_LIBDIR)/ libpetsccontrib.a > $(PETSC_LIBDIR)/libpetscksp.a $(PETSC_LIBDIR)/libpetscsnes.a > $(PETSC_LIBDIR)/libpetscvec.a $(PETSC_LIBDIR)/libpetsc.a > $(PETSC_LIBDIR)/libpetscdm.a $(PETSC_LIBDIR)/libpetscmat.a > $(PETSC_LIBDIR)/libpetscts.a" > The result was the same as before. Because the order is same as before. Use the library order mentioned in my previous e-mail. > > -lpetsccontrib -lpetscts -lpetscsnes -lpetscksp -lpetscdm > > -lpetscmat -lpetscvec -lpetsc This might get it compiled - but it won't run. As mentioned before you'll have to rebuild PETSc without '--with-dyanmic' Satish So I really wonder how to link petsc with > static library? This is useful when I could not generate shared library. > Tomorrow morning I will try the *setCompilers.py *that you sent me for > avoiding the error of -PIC. it is 00:37, I will go to sleep. > See you tomorrow. Good night. > yours sincerely > Weidong > From knepley at gmail.com Tue Mar 10 18:46:16 2009 From: knepley at gmail.com (Matthew Knepley) Date: Tue, 10 Mar 2009 18:46:16 -0500 Subject: Hello for compiled problem In-Reply-To: <5ee7644a0903101638m7a169001xf62a593e1163e222@mail.gmail.com> References: <5ee7644a0903100859o6ef19098k97e5626ab23eb5ce@mail.gmail.com> <5ee7644a0903101233o5b4d5e1ao3641819d479e4530@mail.gmail.com> <768CD1F3-A65A-4023-82B4-1CF677DCF121@mcs.anl.gov> <5ee7644a0903101435g54171f43j6872191b6a477888@mail.gmail.com> <5ee7644a0903101638m7a169001xf62a593e1163e222@mail.gmail.com> Message-ID: On Tue, Mar 10, 2009 at 6:38 PM, Wei-Dong Lian wrote: > Hi, > > [--with-shared=0, and do not use --with-dyanmic] > I compiled petsc and link the static lib with my makefile > "ADDLIBSTATIC := $(ADDLIBSTATIC) $(PETSC_LIBDIR)/ libpetsccontrib.a > $(PETSC_LIBDIR)/libpetscksp.a $(PETSC_LIBDIR)/libpetscsnes.a > $(PETSC_LIBDIR)/libpetscvec.a $(PETSC_LIBDIR)/libpetsc.a > $(PETSC_LIBDIR)/libpetscdm.a $(PETSC_LIBDIR)/libpetscmat.a > $(PETSC_LIBDIR)/libpetscts.a" > 1) You should be using the form Satish suggested at the bottom of this mail 2) ALWAYS always always mail the full error message. Matt > The result was the same as before. So I really wonder how to link petsc > with static library? This is useful when I could not generate shared > library. > Tomorrow morning I will try the *setCompilers.py *that you sent me for > avoiding the error of -PIC. it is 00:37, I will go to sleep. > See you tomorrow. Good night. > yours sincerely > Weidong > > > On Tue, Mar 10, 2009 at 10:53 PM, Satish Balay wrote: > >> >> >> On Tue, 10 Mar 2009, Wei-Dong Lian wrote: >> >> > Now I also found a problem for my computer about using petsc. >> > In my computer, petsc can be compiled successfully without any problem. 
>> So I >> > use the *.so library to link into my programme, it worked very well. But >> > today I just have a try to link *.a library into my programme and it can >> not >> > be compiled successfully. >> >> Why do you want to do this? [When shared libraries exist - the >> compiler prefers then - instead of static. So you should just stick >> with the compiler default behavior. And if build PETSc with >> --with-dynamic - then the .a files are useless anyway.] >> >> So the shared vs static usage should be chosen at PETSc configure step. >> [--with-shared=1/0, and do not use --with-dyanmic] >> >> >> So you should just use: >> >> ADDLIB := $(ADDLIB) -L$(PETSC_LIBDIR) -Wl,-rpath,$(PETSC_LIBDIR) >> -lpetsccontrib -lpetscts -lpetscsnes -lpetscksp -lpetscdm >> -lpetscmat -lpetscvec -lpetsc >> >> >> Satish >> > > -- What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. -- Norbert Wiener -------------- next part -------------- An HTML attachment was scrubbed... URL: From ahuramazda10 at gmail.com Thu Mar 12 08:20:46 2009 From: ahuramazda10 at gmail.com (Santolo Felaco) Date: Thu, 12 Mar 2009 14:20:46 +0100 Subject: Problems with Socket Message-ID: <5f76eef60903120620j576e457by803ae4af7edd3e9d@mail.gmail.com> Hi I use Petsc 2.3.1 (I don't use the last version because I work on a software developed with this version). I tried to used the Petsc socket, but the client and server programs but they don't connect (the execution is blocked). I am using two examples of Petsc: http://www.mcs.anl.gov/petsc/petsc-as/snapshots/petsc-current/src/vec/vec/examples/tutorials/ex42a.c.htmland http://www.mcs.anl.gov/petsc/petsc-as/snapshots/petsc-current/src/vec/vec/examples/tutorials/ex42.c.html I tried to execute ex42a on machine and ex42 on other machine, and I execute both programs on same machine. Do you help me? Thanks. Bye. -------------- next part -------------- An HTML attachment was scrubbed... URL: From fuentesdt at gmail.com Wed Mar 11 20:43:56 2009 From: fuentesdt at gmail.com (David Fuentes) Date: Wed, 11 Mar 2009 20:43:56 -0500 (CDT) Subject: multiple rhs Message-ID: Hello, I have a sparse matrix, A, with which I want to solve multiple right hand sides with a direct solver. Is this the correct call sequence ? MatGetFactor(A,MAT_SOLVER_PETSC,MAT_FACTOR_LU,&Afact); IS isrow,iscol; MatGetOrdering(A,MATORDERING_ND,&isrow,&iscol); MatLUFactorSymbolic(Afact,A,isrow,iscol,&info); MatLUFactorNumeric(Afact,A,&info); MatMatSolve(Afact,B,X); my solve keeps running out of memory "[0]PETSC ERROR: Memory requested xxx!" is this in bytes? I can't tell if the problem I'm trying to solve is too large form my machine or if I just have bug in the call sequence. thank you, David Fuentes From knepley at gmail.com Thu Mar 12 08:36:56 2009 From: knepley at gmail.com (Matthew Knepley) Date: Thu, 12 Mar 2009 08:36:56 -0500 Subject: Problems with Socket In-Reply-To: <5f76eef60903120620j576e457by803ae4af7edd3e9d@mail.gmail.com> References: <5f76eef60903120620j576e457by803ae4af7edd3e9d@mail.gmail.com> Message-ID: On Thu, Mar 12, 2009 at 8:20 AM, Santolo Felaco wrote: > Hi I use Petsc 2.3.1 (I don't use the last version because I work on a > software developed with this version). > I tried to used the Petsc socket, but the client and server programs but > they don't connect (the execution is blocked). 
> I am using two examples of Petsc: > http://www.mcs.anl.gov/petsc/petsc-as/snapshots/petsc-current/src/vec/vec/examples/tutorials/ex42a.c.htmland > > http://www.mcs.anl.gov/petsc/petsc-as/snapshots/petsc-current/src/vec/vec/examples/tutorials/ex42.c.html > > I tried to execute ex42a on machine and ex42 on other machine, and I > execute both programs on same machine. 1) It is extremely hard to debug old code. We have made several socket fixes for new OS/architectures. 2) Make sure you start the server (ex42a) and then the client (ex42) 3) Run using the debugger, -start_in_debugger, and give us a stack trace of the hang Matt > > Do you help me? Thanks. > Bye. > -- What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. -- Norbert Wiener -------------- next part -------------- An HTML attachment was scrubbed... URL: From hzhang at mcs.anl.gov Thu Mar 12 08:41:47 2009 From: hzhang at mcs.anl.gov (Hong Zhang) Date: Thu, 12 Mar 2009 08:41:47 -0500 (CDT) Subject: multiple rhs In-Reply-To: References: Message-ID: David, I do not see any problem with the calling sequence. The memory is determined in MatLUFactorSymbolic(). Does your code crashes within MatLUFactorSymbolic()? Please send us complete error message. Hong On Wed, 11 Mar 2009, David Fuentes wrote: > > Hello, > > I have a sparse matrix, A, with which I want to solve multiple right hand > sides > with a direct solver. Is this the correct call sequence ? > > > MatGetFactor(A,MAT_SOLVER_PETSC,MAT_FACTOR_LU,&Afact); > IS isrow,iscol; > MatGetOrdering(A,MATORDERING_ND,&isrow,&iscol); > MatLUFactorSymbolic(Afact,A,isrow,iscol,&info); > MatLUFactorNumeric(Afact,A,&info); > MatMatSolve(Afact,B,X); > > > my solve keeps running out of memory > > "[0]PETSC ERROR: Memory requested xxx!" > > > is this in bytes? I can't tell if the problem I'm trying to solve > is too large form my machine or if I just have bug in the call sequence. > > > > > thank you, > David Fuentes > From fuentesdt at gmail.com Thu Mar 12 09:17:46 2009 From: fuentesdt at gmail.com (David Fuentes) Date: Thu, 12 Mar 2009 09:17:46 -0500 (CDT) Subject: multiple rhs In-Reply-To: References: Message-ID: Thanks Hong, The complete error message is attached. I think I just had too big of a matrix. The matrix i'm trying to factor is 327680 x 327680 [0]PETSC ERROR: --------------------- Error Message ------------------------------------ [0]PETSC ERROR: Out of memory. This could be due to allocating [0]PETSC ERROR: too large an object or bleeding by not properly [0]PETSC ERROR: destroying unneeded objects. [0]PETSC ERROR: Memory allocated 2047323584 Memory used by process 2074058752 [0]PETSC ERROR: Try running with -malloc_dump or -malloc_log for info. [0]PETSC ERROR: Memory requested 1258466480! [0]PETSC ERROR: ------------------------------------------------------------------------ [0]PETSC ERROR: Petsc Release Version 3.0.0, Patch 2, Wed Jan 14 22:57:05 CST 2009 [0]PETSC ERROR: See docs/changes/index.html for recent updates. [0]PETSC ERROR: See docs/faq.html for hints about trouble shooting. [0]PETSC ERROR: See docs/index.html for manual pages. 
[0]PETSC ERROR: ------------------------------------------------------------------------ [0]PETSC ERROR: ./RealTimeImaging on a gcc-4.1.2 named DIPWS019 by dfuentes Wed Mar 11 20:30:37 2009 [0]PETSC ERROR: Libraries linked from /usr/local/petsc/petsc-3.0.0-p2/gcc-4.1.2-mpich2-1.0.7-dbg/lib [0]PETSC ERROR: Configure run at Sat Jan 31 06:53:09 2009 [0]PETSC ERROR: Configure options --download-f-blas-lapack=ifneeded --with-mpi-dir=/usr/local --with-matlab=1 --with-matlab-engine=1 --with-matlab-dir=/usr/local/matlab2007a --CFLAGS=-fPIC --with-shared=0 [0]PETSC ERROR: ------------------------------------------------------------------------ [0]PETSC ERROR: PetscMallocAlign() line 61 in src/sys/memory/mal.c [0]PETSC ERROR: PetscTrMallocDefault() line 194 in src/sys/memory/mtr.c [0]PETSC ERROR: PetscFreeSpaceGet() line 14 in src/mat/utils/freespace.c [0]PETSC ERROR: MatLUFactorSymbolic_SeqAIJ() line 381 in src/mat/impls/aij/seq/aijfact.c [0]PETSC ERROR: MatLUFactorSymbolic() line 2289 in src/mat/interface/matrix.c [0]PETSC ERROR: KalmanFilter::DirectStateUpdate() line 456 in unknowndirectory/src/KalmanFilter.cxx [0]PETSC ERROR: GeneratePRFTmap() line 182 in unknowndirectory/src/MainDriver.cxx [0]PETSC ERROR: main() line 90 in unknowndirectory/src/MainDriver.cxx application called MPI_Abort(MPI_COMM_WORLD, 55) - process 0[unset]: aborting job: application called MPI_Abort(MPI_COMM_WORLD, 55) - process 0 On Thu, 12 Mar 2009, Hong Zhang wrote: > > David, > > I do not see any problem with the calling sequence. > > The memory is determined in MatLUFactorSymbolic(). > Does your code crashes within MatLUFactorSymbolic()? > Please send us complete error message. > > Hong > > On Wed, 11 Mar 2009, David Fuentes wrote: > >> >> Hello, >> >> I have a sparse matrix, A, with which I want to solve multiple right hand >> sides >> with a direct solver. Is this the correct call sequence ? >> >> >> MatGetFactor(A,MAT_SOLVER_PETSC,MAT_FACTOR_LU,&Afact); >> IS isrow,iscol; >> MatGetOrdering(A,MATORDERING_ND,&isrow,&iscol); >> MatLUFactorSymbolic(Afact,A,isrow,iscol,&info); >> MatLUFactorNumeric(Afact,A,&info); >> MatMatSolve(Afact,B,X); >> >> >> my solve keeps running out of memory >> >> "[0]PETSC ERROR: Memory requested xxx!" >> >> >> is this in bytes? I can't tell if the problem I'm trying to solve >> is too large form my machine or if I just have bug in the call sequence. >> >> >> >> >> thank you, >> David Fuentes >> > From knepley at gmail.com Thu Mar 12 09:19:19 2009 From: knepley at gmail.com (Matthew Knepley) Date: Thu, 12 Mar 2009 09:19:19 -0500 Subject: multiple rhs In-Reply-To: References: Message-ID: You can try using a sparse direct solver like MUMPS instead of PETSc LU. Matt On Thu, Mar 12, 2009 at 9:17 AM, David Fuentes wrote: > Thanks Hong, > > The complete error message is attached. I think I just had too big > of a matrix. The matrix i'm trying to factor is 327680 x 327680 > > > [0]PETSC ERROR: --------------------- Error Message > ------------------------------------ > [0]PETSC ERROR: Out of memory. This could be due to allocating > [0]PETSC ERROR: too large an object or bleeding by not properly > [0]PETSC ERROR: destroying unneeded objects. > [0]PETSC ERROR: Memory allocated 2047323584 Memory used by process > 2074058752 > [0]PETSC ERROR: Try running with -malloc_dump or -malloc_log for info. > [0]PETSC ERROR: Memory requested 1258466480! 
> [0]PETSC ERROR: > ------------------------------------------------------------------------ > [0]PETSC ERROR: Petsc Release Version 3.0.0, Patch 2, Wed Jan 14 22:57:05 > CST 2009 > [0]PETSC ERROR: See docs/changes/index.html for recent updates. > [0]PETSC ERROR: See docs/faq.html for hints about trouble shooting. > [0]PETSC ERROR: See docs/index.html for manual pages. > [0]PETSC ERROR: > ------------------------------------------------------------------------ > [0]PETSC ERROR: ./RealTimeImaging on a gcc-4.1.2 named DIPWS019 by dfuentes > Wed Mar 11 20:30:37 2009 > [0]PETSC ERROR: Libraries linked from > /usr/local/petsc/petsc-3.0.0-p2/gcc-4.1.2-mpich2-1.0.7-dbg/lib > [0]PETSC ERROR: Configure run at Sat Jan 31 06:53:09 2009 > [0]PETSC ERROR: Configure options --download-f-blas-lapack=ifneeded > --with-mpi-dir=/usr/local --with-matlab=1 --with-matlab-engine=1 > --with-matlab-dir=/usr/local/matlab2007a --CFLAGS=-fPIC --with-shared=0 > [0]PETSC ERROR: > ------------------------------------------------------------------------ > [0]PETSC ERROR: PetscMallocAlign() line 61 in src/sys/memory/mal.c > [0]PETSC ERROR: PetscTrMallocDefault() line 194 in src/sys/memory/mtr.c > [0]PETSC ERROR: PetscFreeSpaceGet() line 14 in src/mat/utils/freespace.c > [0]PETSC ERROR: MatLUFactorSymbolic_SeqAIJ() line 381 in > src/mat/impls/aij/seq/aijfact.c > [0]PETSC ERROR: MatLUFactorSymbolic() line 2289 in > src/mat/interface/matrix.c > [0]PETSC ERROR: KalmanFilter::DirectStateUpdate() line 456 in > unknowndirectory/src/KalmanFilter.cxx > [0]PETSC ERROR: GeneratePRFTmap() line 182 in > unknowndirectory/src/MainDriver.cxx > [0]PETSC ERROR: main() line 90 in unknowndirectory/src/MainDriver.cxx > application called MPI_Abort(MPI_COMM_WORLD, 55) - process 0[unset]: > aborting job: > application called MPI_Abort(MPI_COMM_WORLD, 55) - process 0 > > > > > > > > > > > On Thu, 12 Mar 2009, Hong Zhang wrote: > > >> David, >> >> I do not see any problem with the calling sequence. >> >> The memory is determined in MatLUFactorSymbolic(). >> Does your code crashes within MatLUFactorSymbolic()? >> Please send us complete error message. >> >> Hong >> >> On Wed, 11 Mar 2009, David Fuentes wrote: >> >> >>> Hello, >>> >>> I have a sparse matrix, A, with which I want to solve multiple right hand >>> sides >>> with a direct solver. Is this the correct call sequence ? >>> >>> >>> MatGetFactor(A,MAT_SOLVER_PETSC,MAT_FACTOR_LU,&Afact); >>> IS isrow,iscol; >>> MatGetOrdering(A,MATORDERING_ND,&isrow,&iscol); >>> MatLUFactorSymbolic(Afact,A,isrow,iscol,&info); >>> MatLUFactorNumeric(Afact,A,&info); >>> MatMatSolve(Afact,B,X); >>> >>> >>> my solve keeps running out of memory >>> >>> "[0]PETSC ERROR: Memory requested xxx!" >>> >>> >>> is this in bytes? I can't tell if the problem I'm trying to solve >>> is too large form my machine or if I just have bug in the call sequence. >>> >>> >>> >>> >>> thank you, >>> David Fuentes >>> >>> >> -- What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. -- Norbert Wiener -------------- next part -------------- An HTML attachment was scrubbed... 
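
For concreteness, a minimal sketch of the factor/solve sequence from this thread with MUMPS substituted, along the lines Matt suggests. It assumes PETSc 3.0.0 was configured with MUMPS support (e.g. --download-mumps --download-scalapack --download-blacs); A is the MPIAIJ matrix and B, X are the dense right-hand-side and solution matrices from the original post. Whether MatMatSolve() is available for a particular external factor type can vary, so treat this as a sketch rather than a guaranteed drop-in:

  MatFactorInfo info;
  Mat           Afact;
  IS            isrow, iscol;

  MatFactorInfoInitialize(&info);
  /* ask for the MUMPS sparse direct LU instead of MAT_SOLVER_PETSC */
  MatGetFactor(A, MAT_SOLVER_MUMPS, MAT_FACTOR_LU, &Afact);
  MatGetOrdering(A, MATORDERING_ND, &isrow, &iscol);
  MatLUFactorSymbolic(Afact, A, isrow, iscol, &info);
  MatLUFactorNumeric(Afact, A, &info);
  MatMatSolve(Afact, B, X);  /* if unsupported for this factor, loop over the columns with MatSolve() */

The same factorization can also be selected at run time through KSP with something like -ksp_type preonly -pc_type lu -pc_factor_mat_solver_package mumps.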
URL: 

From ahuramazda10 at gmail.com  Thu Mar 12 09:19:14 2009
From: ahuramazda10 at gmail.com (Santolo Felaco)
Date: Thu, 12 Mar 2009 15:19:14 +0100
Subject: Problems with Socket
In-Reply-To: 
References: <5f76eef60903120620j576e457by803ae4af7edd3e9d@mail.gmail.com>
Message-ID: <5f76eef60903120719u47948190s688a6bf194bb1264@mail.gmail.com>

Hi,
I execute ex42a and then ex42.

Execution:

mpirun -np 1 ./ex42a -start_in_debugger

(gdb) bt
#0  0xb7f9a410 in __kernel_vsyscall ()
#1  0xb7dd8c90 in nanosleep () from /lib/../lib/tls/i686/cmov/libc.so.6
#2  0xb7dd8ac7 in sleep () from /lib/../lib/tls/i686/cmov/libc.so.6
#3  0x08195837 in PetscSleep (s=10) at psleep.c:40
#4  0x081c277e in PetscAttachDebugger () at adebug.c:411
#5  0x081836a2 in PetscOptionsCheckInitial_Private () at init.c:371
#6  0x08187dfe in PetscInitialize (argc=0xbfb5edb0, args=0xbfb5edb4, file=0x0,
    help=0x8224da0 "Sends a PETSc vector to a socket connection, receives it back, within a loop. Works with ex42.c.\n") at pinit.c:488
#7  0x0805073b in main (argc=2, args=0xbfb5ee34) at ex42a.c:17

mpirun -np 1 ./ex42 -start_in_debugger

(gdb) bt
#0  0xb7f8a410 in __kernel_vsyscall ()
#1  0xb7dc8c90 in nanosleep () from /lib/../lib/tls/i686/cmov/libc.so.6
#2  0xb7dc8ac7 in sleep () from /lib/../lib/tls/i686/cmov/libc.so.6
#3  0x081953d7 in PetscSleep (s=10) at psleep.c:40
#4  0x081c231e in PetscAttachDebugger () at adebug.c:411
#5  0x08183242 in PetscOptionsCheckInitial_Private () at init.c:371
#6  0x0818799e in PetscInitialize (argc=0xbfaded30, args=0xbfaded34, file=0x0,
    help=0x8223c00 "Reads a PETSc vector from a socket connection, then sends it back within a loop. Works with ex42.m or ex42a.c\n") at pinit.c:488
#7  0x0805073b in main (argc=2, args=0xbfadedb4) at ex42.c:18


2009/3/12 Matthew Knepley 

> On Thu, Mar 12, 2009 at 8:20 AM, Santolo Felaco wrote:
>
>> Hi I use Petsc 2.3.1 (I don't use the last version because I work on a
>> software developed with this version).
>> I tried to used the Petsc socket, but the client and server programs but
>> they don't connect (the execution is blocked).
>> I am using two examples of Petsc:
>> http://www.mcs.anl.gov/petsc/petsc-as/snapshots/petsc-current/src/vec/vec/examples/tutorials/ex42a.c.htmland
>>
>> http://www.mcs.anl.gov/petsc/petsc-as/snapshots/petsc-current/src/vec/vec/examples/tutorials/ex42.c.html
>>
>> I tried to execute ex42a on machine and ex42 on other machine, and I
>> execute both programs on same machine.
>
>
> 1) It is extremely hard to debug old code. We have made several socket
> fixes for new OS/architectures.
>
> 2) Make sure you start the server (ex42a) and then the client (ex42)
>
> 3) Run using the debugger, -start_in_debugger, and give us a stack trace of
> the hang
>
> Matt
>
>
>>
>> Do you help me? Thanks.
>> Bye.
>>
> --
> What most experimenters take for granted before they begin their
> experiments is infinitely more interesting than any results to which their
> experiments lead.
> -- Norbert Wiener
>
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From knepley at gmail.com  Thu Mar 12 10:33:25 2009
From: knepley at gmail.com (Matthew Knepley)
Date: Thu, 12 Mar 2009 10:33:25 -0500
Subject: Problems with Socket
In-Reply-To: <5f76eef60903120719u47948190s688a6bf194bb1264@mail.gmail.com>
References: <5f76eef60903120620j576e457by803ae4af7edd3e9d@mail.gmail.com>
	<5f76eef60903120719u47948190s688a6bf194bb1264@mail.gmail.com>
Message-ID: 

You have to type 'cont' in the debugger.
Matt On Thu, Mar 12, 2009 at 9:19 AM, Santolo Felaco wrote: > Hi, > I execute ex42a and then ex42. > > Execution: > > mpirun -np 1 ./ex42a -start_in_debugger > > (gdb) bt > #0 0xb7f9a410 in __kernel_vsyscall () > #1 0xb7dd8c90 in nanosleep () from /lib/../lib/tls/i686/cmov/libc.so.6 > #2 0xb7dd8ac7 in sleep () from /lib/../lib/tls/i686/cmov/libc.so.6 > #3 0x08195837 in PetscSleep (s=10) at psleep.c:40 > #4 0x081c277e in PetscAttachDebugger () at adebug.c:411 > #5 0x081836a2 in PetscOptionsCheckInitial_Private () at init.c:371 > #6 0x08187dfe in PetscInitialize (argc=0xbfb5edb0, args=0xbfb5edb4, > file=0x0, > help=0x8224da0 "Sends a PETSc vector to a socket connection, receives > it back, within a loop. Works with ex42.c.\n") at pinit.c:488 > #7 0x0805073b in main (argc=2, args=0xbfb5ee34) at ex42a.c:17 > > mpirun -np 1 ./ex42 -start_in_debugger > > (gdb) bt > #0 0xb7f8a410 in __kernel_vsyscall () > #1 0xb7dc8c90 in nanosleep () from /lib/../lib/tls/i686/cmov/libc.so.6 > #2 0xb7dc8ac7 in sleep () from /lib/../lib/tls/i686/cmov/libc.so.6 > #3 0x081953d7 in PetscSleep (s=10) at psleep.c:40 > #4 0x081c231e in PetscAttachDebugger () at adebug.c:411 > #5 0x08183242 in PetscOptionsCheckInitial_Private () at init.c:371 > #6 0x0818799e in PetscInitialize (argc=0xbfaded30, args=0xbfaded34, > file=0x0, > help=0x8223c00 "Reads a PETSc vector from a socket connection, then > sends it back within a loop. Works with ex42.m or ex42a.c\n") at pinit.c:488 > #7 0x0805073b in main (argc=2, args=0xbfadedb4) at ex42.c:18 > > > 2009/3/12 Matthew Knepley > > On Thu, Mar 12, 2009 at 8:20 AM, Santolo Felaco wrote: >> >>> Hi I use Petsc 2.3.1 (I don't use the last version because I work on a >>> software developed with this version). >>> I tried to used the Petsc socket, but the client and server programs but >>> they don't connect (the execution is blocked). >>> I am using two examples of Petsc: >>> http://www.mcs.anl.gov/petsc/petsc-as/snapshots/petsc-current/src/vec/vec/examples/tutorials/ex42a.c.htmland >>> >>> http://www.mcs.anl.gov/petsc/petsc-as/snapshots/petsc-current/src/vec/vec/examples/tutorials/ex42.c.html >>> >>> I tried to execute ex42a on machine and ex42 on other machine, and I >>> execute both programs on same machine. >> >> >> 1) It is extremely hard to debug old code. We have made several socket >> fixes for new OS/architectures. >> >> 2) Make sure you start the server (ex42a) and then the client (ex42) >> >> 3) Run using the debugger, -start_in_debugger, and give us a stack trace >> of the hang >> >> Matt >> >> >>> >>> Do you help me? Thanks. >>> Bye. >>> >> -- >> What most experimenters take for granted before they begin their >> experiments is infinitely more interesting than any results to which their >> experiments lead. >> -- Norbert Wiener >> > > -- What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. -- Norbert Wiener -------------- next part -------------- An HTML attachment was scrubbed... URL: From bsmith at mcs.anl.gov Thu Mar 12 12:59:16 2009 From: bsmith at mcs.anl.gov (Barry Smith) Date: Thu, 12 Mar 2009 12:59:16 -0500 Subject: Problems with Socket In-Reply-To: <5f76eef60903120620j576e457by803ae4af7edd3e9d@mail.gmail.com> References: <5f76eef60903120620j576e457by803ae4af7edd3e9d@mail.gmail.com> Message-ID: <54CC3E9F-E9CD-4F93-A00A-77EF941A6280@mcs.anl.gov> You have to upgrade to 3.0.0 to expect these to work. 
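
As a point of reference for this thread, a rough sketch of the sending side of the socket exchange against the 3.0.0 viewer API that Barry recommends upgrading to; ex42a.c/ex42.c in the PETSc source remain the authoritative versions. The vector size here is arbitrary and the machine/port are left at their defaults, so this is only an illustration of the calling sequence:

  Vec         x;
  PetscViewer sock;

  VecCreateMPI(PETSC_COMM_WORLD, PETSC_DECIDE, 10, &x);
  VecSet(x, 1.0);
  /* PETSC_NULL machine and PETSC_DEFAULT port fall back to the defaults
     (or to the -viewer_socket_machine / -viewer_socket_port options) */
  PetscViewerSocketOpen(PETSC_COMM_WORLD, PETSC_NULL, PETSC_DEFAULT, &sock);
  VecView(x, sock);          /* ships the vector over the socket */
  PetscViewerDestroy(sock);  /* 3.0.0 calling convention; later releases take a pointer */
  VecDestroy(x);

The receiving side would pick the vector up with VecLoad() from a matching socket viewer, which is what ex42.c does in a loop.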
Barry On Mar 12, 2009, at 8:20 AM, Santolo Felaco wrote: > Hi I use Petsc 2.3.1 (I don't use the last version because I work on > a software developed with this version). > I tried to used the Petsc socket, but the client and server programs > but they don't connect (the execution is blocked). > I am using two examples of Petsc: http://www.mcs.anl.gov/petsc/petsc-as/snapshots/petsc-current/src/vec/vec/examples/tutorials/ex42a.c.html > and > http://www.mcs.anl.gov/petsc/petsc-as/snapshots/petsc-current/src/vec/vec/examples/tutorials/ex42.c.html > > I tried to execute ex42a on machine and ex42 on other machine, and I > execute both programs on same machine. > > Do you help me? Thanks. > Bye. From fuentesdt at gmail.com Thu Mar 12 13:50:13 2009 From: fuentesdt at gmail.com (David Fuentes) Date: Thu, 12 Mar 2009 13:50:13 -0500 (CDT) Subject: multiple rhs In-Reply-To: References: Message-ID: Thanks Matt, Is MatCreateMPIDense the recommended matrix type to interface w/ mumps ? Does it use a sparse direct storage or allocate the full n x n matrix? df On Thu, 12 Mar 2009, Matthew Knepley wrote: > You can try using a sparse direct solver like MUMPS instead of PETSc LU. > > Matt > > On Thu, Mar 12, 2009 at 9:17 AM, David Fuentes wrote: > >> Thanks Hong, >> >> The complete error message is attached. I think I just had too big >> of a matrix. The matrix i'm trying to factor is 327680 x 327680 >> >> >> [0]PETSC ERROR: --------------------- Error Message >> ------------------------------------ >> [0]PETSC ERROR: Out of memory. This could be due to allocating >> [0]PETSC ERROR: too large an object or bleeding by not properly >> [0]PETSC ERROR: destroying unneeded objects. >> [0]PETSC ERROR: Memory allocated 2047323584 Memory used by process >> 2074058752 >> [0]PETSC ERROR: Try running with -malloc_dump or -malloc_log for info. >> [0]PETSC ERROR: Memory requested 1258466480! >> [0]PETSC ERROR: >> ------------------------------------------------------------------------ >> [0]PETSC ERROR: Petsc Release Version 3.0.0, Patch 2, Wed Jan 14 22:57:05 >> CST 2009 >> [0]PETSC ERROR: See docs/changes/index.html for recent updates. >> [0]PETSC ERROR: See docs/faq.html for hints about trouble shooting. >> [0]PETSC ERROR: See docs/index.html for manual pages. 
>> [0]PETSC ERROR: >> ------------------------------------------------------------------------ >> [0]PETSC ERROR: ./RealTimeImaging on a gcc-4.1.2 named DIPWS019 by dfuentes >> Wed Mar 11 20:30:37 2009 >> [0]PETSC ERROR: Libraries linked from >> /usr/local/petsc/petsc-3.0.0-p2/gcc-4.1.2-mpich2-1.0.7-dbg/lib >> [0]PETSC ERROR: Configure run at Sat Jan 31 06:53:09 2009 >> [0]PETSC ERROR: Configure options --download-f-blas-lapack=ifneeded >> --with-mpi-dir=/usr/local --with-matlab=1 --with-matlab-engine=1 >> --with-matlab-dir=/usr/local/matlab2007a --CFLAGS=-fPIC --with-shared=0 >> [0]PETSC ERROR: >> ------------------------------------------------------------------------ >> [0]PETSC ERROR: PetscMallocAlign() line 61 in src/sys/memory/mal.c >> [0]PETSC ERROR: PetscTrMallocDefault() line 194 in src/sys/memory/mtr.c >> [0]PETSC ERROR: PetscFreeSpaceGet() line 14 in src/mat/utils/freespace.c >> [0]PETSC ERROR: MatLUFactorSymbolic_SeqAIJ() line 381 in >> src/mat/impls/aij/seq/aijfact.c >> [0]PETSC ERROR: MatLUFactorSymbolic() line 2289 in >> src/mat/interface/matrix.c >> [0]PETSC ERROR: KalmanFilter::DirectStateUpdate() line 456 in >> unknowndirectory/src/KalmanFilter.cxx >> [0]PETSC ERROR: GeneratePRFTmap() line 182 in >> unknowndirectory/src/MainDriver.cxx >> [0]PETSC ERROR: main() line 90 in unknowndirectory/src/MainDriver.cxx >> application called MPI_Abort(MPI_COMM_WORLD, 55) - process 0[unset]: >> aborting job: >> application called MPI_Abort(MPI_COMM_WORLD, 55) - process 0 >> >> >> >> >> >> >> >> >> >> >> On Thu, 12 Mar 2009, Hong Zhang wrote: >> >> >>> David, >>> >>> I do not see any problem with the calling sequence. >>> >>> The memory is determined in MatLUFactorSymbolic(). >>> Does your code crashes within MatLUFactorSymbolic()? >>> Please send us complete error message. >>> >>> Hong >>> >>> On Wed, 11 Mar 2009, David Fuentes wrote: >>> >>> >>>> Hello, >>>> >>>> I have a sparse matrix, A, with which I want to solve multiple right hand >>>> sides >>>> with a direct solver. Is this the correct call sequence ? >>>> >>>> >>>> MatGetFactor(A,MAT_SOLVER_PETSC,MAT_FACTOR_LU,&Afact); >>>> IS isrow,iscol; >>>> MatGetOrdering(A,MATORDERING_ND,&isrow,&iscol); >>>> MatLUFactorSymbolic(Afact,A,isrow,iscol,&info); >>>> MatLUFactorNumeric(Afact,A,&info); >>>> MatMatSolve(Afact,B,X); >>>> >>>> >>>> my solve keeps running out of memory >>>> >>>> "[0]PETSC ERROR: Memory requested xxx!" >>>> >>>> >>>> is this in bytes? I can't tell if the problem I'm trying to solve >>>> is too large form my machine or if I just have bug in the call sequence. >>>> >>>> >>>> >>>> >>>> thank you, >>>> David Fuentes >>>> >>>> >>> > > > -- > What most experimenters take for granted before they begin their experiments > is infinitely more interesting than any results to which their experiments > lead. > -- Norbert Wiener > From knepley at gmail.com Thu Mar 12 14:12:21 2009 From: knepley at gmail.com (Matthew Knepley) Date: Thu, 12 Mar 2009 14:12:21 -0500 Subject: multiple rhs In-Reply-To: References: Message-ID: On Thu, Mar 12, 2009 at 1:50 PM, David Fuentes wrote: > Thanks Matt, > > Is MatCreateMPIDense the recommended matrix type to interface w/ mumps ? > Does it use a sparse direct storage or allocate the full n x n matrix? No, MUMPS is "sparse direct" so it uses MPIAIJ. Matt > > df > > On Thu, 12 Mar 2009, Matthew Knepley wrote: > > You can try using a sparse direct solver like MUMPS instead of PETSc LU. 
>> >> Matt >> >> On Thu, Mar 12, 2009 at 9:17 AM, David Fuentes >> wrote: >> >> Thanks Hong, >>> >>> The complete error message is attached. I think I just had too big >>> of a matrix. The matrix i'm trying to factor is 327680 x 327680 >>> >>> >>> [0]PETSC ERROR: --------------------- Error Message >>> ------------------------------------ >>> [0]PETSC ERROR: Out of memory. This could be due to allocating >>> [0]PETSC ERROR: too large an object or bleeding by not properly >>> [0]PETSC ERROR: destroying unneeded objects. >>> [0]PETSC ERROR: Memory allocated 2047323584 Memory used by process >>> 2074058752 >>> [0]PETSC ERROR: Try running with -malloc_dump or -malloc_log for info. >>> [0]PETSC ERROR: Memory requested 1258466480! >>> [0]PETSC ERROR: >>> ------------------------------------------------------------------------ >>> [0]PETSC ERROR: Petsc Release Version 3.0.0, Patch 2, Wed Jan 14 22:57:05 >>> CST 2009 >>> [0]PETSC ERROR: See docs/changes/index.html for recent updates. >>> [0]PETSC ERROR: See docs/faq.html for hints about trouble shooting. >>> [0]PETSC ERROR: See docs/index.html for manual pages. >>> [0]PETSC ERROR: >>> ------------------------------------------------------------------------ >>> [0]PETSC ERROR: ./RealTimeImaging on a gcc-4.1.2 named DIPWS019 by >>> dfuentes >>> Wed Mar 11 20:30:37 2009 >>> [0]PETSC ERROR: Libraries linked from >>> /usr/local/petsc/petsc-3.0.0-p2/gcc-4.1.2-mpich2-1.0.7-dbg/lib >>> [0]PETSC ERROR: Configure run at Sat Jan 31 06:53:09 2009 >>> [0]PETSC ERROR: Configure options --download-f-blas-lapack=ifneeded >>> --with-mpi-dir=/usr/local --with-matlab=1 --with-matlab-engine=1 >>> --with-matlab-dir=/usr/local/matlab2007a --CFLAGS=-fPIC --with-shared=0 >>> [0]PETSC ERROR: >>> ------------------------------------------------------------------------ >>> [0]PETSC ERROR: PetscMallocAlign() line 61 in src/sys/memory/mal.c >>> [0]PETSC ERROR: PetscTrMallocDefault() line 194 in src/sys/memory/mtr.c >>> [0]PETSC ERROR: PetscFreeSpaceGet() line 14 in src/mat/utils/freespace.c >>> [0]PETSC ERROR: MatLUFactorSymbolic_SeqAIJ() line 381 in >>> src/mat/impls/aij/seq/aijfact.c >>> [0]PETSC ERROR: MatLUFactorSymbolic() line 2289 in >>> src/mat/interface/matrix.c >>> [0]PETSC ERROR: KalmanFilter::DirectStateUpdate() line 456 in >>> unknowndirectory/src/KalmanFilter.cxx >>> [0]PETSC ERROR: GeneratePRFTmap() line 182 in >>> unknowndirectory/src/MainDriver.cxx >>> [0]PETSC ERROR: main() line 90 in unknowndirectory/src/MainDriver.cxx >>> application called MPI_Abort(MPI_COMM_WORLD, 55) - process 0[unset]: >>> aborting job: >>> application called MPI_Abort(MPI_COMM_WORLD, 55) - process 0 >>> >>> >>> >>> >>> >>> >>> >>> >>> >>> >>> On Thu, 12 Mar 2009, Hong Zhang wrote: >>> >>> >>> David, >>>> >>>> I do not see any problem with the calling sequence. >>>> >>>> The memory is determined in MatLUFactorSymbolic(). >>>> Does your code crashes within MatLUFactorSymbolic()? >>>> Please send us complete error message. >>>> >>>> Hong >>>> >>>> On Wed, 11 Mar 2009, David Fuentes wrote: >>>> >>>> >>>> Hello, >>>>> >>>>> I have a sparse matrix, A, with which I want to solve multiple right >>>>> hand >>>>> sides >>>>> with a direct solver. Is this the correct call sequence ? 
>>>>> >>>>> >>>>> MatGetFactor(A,MAT_SOLVER_PETSC,MAT_FACTOR_LU,&Afact); >>>>> IS isrow,iscol; >>>>> MatGetOrdering(A,MATORDERING_ND,&isrow,&iscol); >>>>> MatLUFactorSymbolic(Afact,A,isrow,iscol,&info); >>>>> MatLUFactorNumeric(Afact,A,&info); >>>>> MatMatSolve(Afact,B,X); >>>>> >>>>> >>>>> my solve keeps running out of memory >>>>> >>>>> "[0]PETSC ERROR: Memory requested xxx!" >>>>> >>>>> >>>>> is this in bytes? I can't tell if the problem I'm trying to solve >>>>> is too large form my machine or if I just have bug in the call >>>>> sequence. >>>>> >>>>> >>>>> >>>>> >>>>> thank you, >>>>> David Fuentes >>>>> >>>>> >>>>> >>>> >> >> -- >> What most experimenters take for granted before they begin their >> experiments >> is infinitely more interesting than any results to which their experiments >> lead. >> -- Norbert Wiener >> >> -- What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. -- Norbert Wiener -------------- next part -------------- An HTML attachment was scrubbed... URL: From hzhang at mcs.anl.gov Thu Mar 12 15:09:32 2009 From: hzhang at mcs.anl.gov (Hong Zhang) Date: Thu, 12 Mar 2009 15:09:32 -0500 (CDT) Subject: multiple rhs In-Reply-To: References: Message-ID: >> >> Is MatCreateMPIDense the recommended matrix type to interface w/ mumps ? >> Does it use a sparse direct storage or allocate the full n x n matrix? > > > No, MUMPS is "sparse direct" so it uses MPIAIJ. For mpi dense matrix, you can use plapack Hong > > >> >> df >> >> On Thu, 12 Mar 2009, Matthew Knepley wrote: >> >> You can try using a sparse direct solver like MUMPS instead of PETSc LU. >>> >>> Matt >>> >>> On Thu, Mar 12, 2009 at 9:17 AM, David Fuentes >>> wrote: >>> >>> Thanks Hong, >>>> >>>> The complete error message is attached. I think I just had too big >>>> of a matrix. The matrix i'm trying to factor is 327680 x 327680 >>>> >>>> >>>> [0]PETSC ERROR: --------------------- Error Message >>>> ------------------------------------ >>>> [0]PETSC ERROR: Out of memory. This could be due to allocating >>>> [0]PETSC ERROR: too large an object or bleeding by not properly >>>> [0]PETSC ERROR: destroying unneeded objects. >>>> [0]PETSC ERROR: Memory allocated 2047323584 Memory used by process >>>> 2074058752 >>>> [0]PETSC ERROR: Try running with -malloc_dump or -malloc_log for info. >>>> [0]PETSC ERROR: Memory requested 1258466480! >>>> [0]PETSC ERROR: >>>> ------------------------------------------------------------------------ >>>> [0]PETSC ERROR: Petsc Release Version 3.0.0, Patch 2, Wed Jan 14 22:57:05 >>>> CST 2009 >>>> [0]PETSC ERROR: See docs/changes/index.html for recent updates. >>>> [0]PETSC ERROR: See docs/faq.html for hints about trouble shooting. >>>> [0]PETSC ERROR: See docs/index.html for manual pages. 
>>>> [0]PETSC ERROR: >>>> ------------------------------------------------------------------------ >>>> [0]PETSC ERROR: ./RealTimeImaging on a gcc-4.1.2 named DIPWS019 by >>>> dfuentes >>>> Wed Mar 11 20:30:37 2009 >>>> [0]PETSC ERROR: Libraries linked from >>>> /usr/local/petsc/petsc-3.0.0-p2/gcc-4.1.2-mpich2-1.0.7-dbg/lib >>>> [0]PETSC ERROR: Configure run at Sat Jan 31 06:53:09 2009 >>>> [0]PETSC ERROR: Configure options --download-f-blas-lapack=ifneeded >>>> --with-mpi-dir=/usr/local --with-matlab=1 --with-matlab-engine=1 >>>> --with-matlab-dir=/usr/local/matlab2007a --CFLAGS=-fPIC --with-shared=0 >>>> [0]PETSC ERROR: >>>> ------------------------------------------------------------------------ >>>> [0]PETSC ERROR: PetscMallocAlign() line 61 in src/sys/memory/mal.c >>>> [0]PETSC ERROR: PetscTrMallocDefault() line 194 in src/sys/memory/mtr.c >>>> [0]PETSC ERROR: PetscFreeSpaceGet() line 14 in src/mat/utils/freespace.c >>>> [0]PETSC ERROR: MatLUFactorSymbolic_SeqAIJ() line 381 in >>>> src/mat/impls/aij/seq/aijfact.c >>>> [0]PETSC ERROR: MatLUFactorSymbolic() line 2289 in >>>> src/mat/interface/matrix.c >>>> [0]PETSC ERROR: KalmanFilter::DirectStateUpdate() line 456 in >>>> unknowndirectory/src/KalmanFilter.cxx >>>> [0]PETSC ERROR: GeneratePRFTmap() line 182 in >>>> unknowndirectory/src/MainDriver.cxx >>>> [0]PETSC ERROR: main() line 90 in unknowndirectory/src/MainDriver.cxx >>>> application called MPI_Abort(MPI_COMM_WORLD, 55) - process 0[unset]: >>>> aborting job: >>>> application called MPI_Abort(MPI_COMM_WORLD, 55) - process 0 >>>> >>>> >>>> >>>> >>>> >>>> >>>> >>>> >>>> >>>> >>>> On Thu, 12 Mar 2009, Hong Zhang wrote: >>>> >>>> >>>> David, >>>>> >>>>> I do not see any problem with the calling sequence. >>>>> >>>>> The memory is determined in MatLUFactorSymbolic(). >>>>> Does your code crashes within MatLUFactorSymbolic()? >>>>> Please send us complete error message. >>>>> >>>>> Hong >>>>> >>>>> On Wed, 11 Mar 2009, David Fuentes wrote: >>>>> >>>>> >>>>> Hello, >>>>>> >>>>>> I have a sparse matrix, A, with which I want to solve multiple right >>>>>> hand >>>>>> sides >>>>>> with a direct solver. Is this the correct call sequence ? >>>>>> >>>>>> >>>>>> MatGetFactor(A,MAT_SOLVER_PETSC,MAT_FACTOR_LU,&Afact); >>>>>> IS isrow,iscol; >>>>>> MatGetOrdering(A,MATORDERING_ND,&isrow,&iscol); >>>>>> MatLUFactorSymbolic(Afact,A,isrow,iscol,&info); >>>>>> MatLUFactorNumeric(Afact,A,&info); >>>>>> MatMatSolve(Afact,B,X); >>>>>> >>>>>> >>>>>> my solve keeps running out of memory >>>>>> >>>>>> "[0]PETSC ERROR: Memory requested xxx!" >>>>>> >>>>>> >>>>>> is this in bytes? I can't tell if the problem I'm trying to solve >>>>>> is too large form my machine or if I just have bug in the call >>>>>> sequence. >>>>>> >>>>>> >>>>>> >>>>>> >>>>>> thank you, >>>>>> David Fuentes >>>>>> >>>>>> >>>>>> >>>>> >>> >>> -- >>> What most experimenters take for granted before they begin their >>> experiments >>> is infinitely more interesting than any results to which their experiments >>> lead. >>> -- Norbert Wiener >>> >>> > > > -- > What most experimenters take for granted before they begin their experiments > is infinitely more interesting than any results to which their experiments > lead. 
> -- Norbert Wiener > From fuentesdt at gmail.com Thu Mar 12 18:44:32 2009 From: fuentesdt at gmail.com (David Fuentes) Date: Thu, 12 Mar 2009 18:44:32 -0500 (CDT) Subject: parallel MatMatMult In-Reply-To: References: Message-ID: MatMatMult(X,Y,...,...,Z) where X is MPIDENSE and Y is MPIAIJ seems to work but when X is MPIAIJ and Y is MPIDENSE doesn't ? says its not supported. there seems to be all permuations in the source MatMatMult_MPIAIJ_MPIDense MatMatMult_MPIDense_MPIAIJ MatMatMult_MPIAIJ_MPIAIJ MatMatMult_MPIDense_MPIDense ? From fuentesdt at gmail.com Thu Mar 12 20:03:39 2009 From: fuentesdt at gmail.com (David Fuentes) Date: Thu, 12 Mar 2009 20:03:39 -0500 (CDT) Subject: multiple rhs In-Reply-To: References: Message-ID: Hi Hong, What solver would I use to do a factorization of a dense parallel matrix w/ plapack? I don't see a MPI_SOLVER_PLAPACK ? On Thu, 12 Mar 2009, Hong Zhang wrote: > >>> >>> Is MatCreateMPIDense the recommended matrix type to interface w/ mumps ? >>> Does it use a sparse direct storage or allocate the full n x n matrix? >> >> >> No, MUMPS is "sparse direct" so it uses MPIAIJ. > > For mpi dense matrix, you can use plapack > > Hong >> >> >>> >>> df >>> >>> On Thu, 12 Mar 2009, Matthew Knepley wrote: >>> >>> You can try using a sparse direct solver like MUMPS instead of PETSc LU. >>>> >>>> Matt >>>> >>>> On Thu, Mar 12, 2009 at 9:17 AM, David Fuentes >>>> wrote: >>>> >>>> Thanks Hong, >>>>> >>>>> The complete error message is attached. I think I just had too big >>>>> of a matrix. The matrix i'm trying to factor is 327680 x 327680 >>>>> >>>>> >>>>> [0]PETSC ERROR: --------------------- Error Message >>>>> ------------------------------------ >>>>> [0]PETSC ERROR: Out of memory. This could be due to allocating >>>>> [0]PETSC ERROR: too large an object or bleeding by not properly >>>>> [0]PETSC ERROR: destroying unneeded objects. >>>>> [0]PETSC ERROR: Memory allocated 2047323584 Memory used by process >>>>> 2074058752 >>>>> [0]PETSC ERROR: Try running with -malloc_dump or -malloc_log for info. >>>>> [0]PETSC ERROR: Memory requested 1258466480! >>>>> [0]PETSC ERROR: >>>>> ------------------------------------------------------------------------ >>>>> [0]PETSC ERROR: Petsc Release Version 3.0.0, Patch 2, Wed Jan 14 >>>>> 22:57:05 >>>>> CST 2009 >>>>> [0]PETSC ERROR: See docs/changes/index.html for recent updates. >>>>> [0]PETSC ERROR: See docs/faq.html for hints about trouble shooting. >>>>> [0]PETSC ERROR: See docs/index.html for manual pages. 
>>>>> [0]PETSC ERROR: >>>>> ------------------------------------------------------------------------ >>>>> [0]PETSC ERROR: ./RealTimeImaging on a gcc-4.1.2 named DIPWS019 by >>>>> dfuentes >>>>> Wed Mar 11 20:30:37 2009 >>>>> [0]PETSC ERROR: Libraries linked from >>>>> /usr/local/petsc/petsc-3.0.0-p2/gcc-4.1.2-mpich2-1.0.7-dbg/lib >>>>> [0]PETSC ERROR: Configure run at Sat Jan 31 06:53:09 2009 >>>>> [0]PETSC ERROR: Configure options --download-f-blas-lapack=ifneeded >>>>> --with-mpi-dir=/usr/local --with-matlab=1 --with-matlab-engine=1 >>>>> --with-matlab-dir=/usr/local/matlab2007a --CFLAGS=-fPIC --with-shared=0 >>>>> [0]PETSC ERROR: >>>>> ------------------------------------------------------------------------ >>>>> [0]PETSC ERROR: PetscMallocAlign() line 61 in src/sys/memory/mal.c >>>>> [0]PETSC ERROR: PetscTrMallocDefault() line 194 in src/sys/memory/mtr.c >>>>> [0]PETSC ERROR: PetscFreeSpaceGet() line 14 in src/mat/utils/freespace.c >>>>> [0]PETSC ERROR: MatLUFactorSymbolic_SeqAIJ() line 381 in >>>>> src/mat/impls/aij/seq/aijfact.c >>>>> [0]PETSC ERROR: MatLUFactorSymbolic() line 2289 in >>>>> src/mat/interface/matrix.c >>>>> [0]PETSC ERROR: KalmanFilter::DirectStateUpdate() line 456 in >>>>> unknowndirectory/src/KalmanFilter.cxx >>>>> [0]PETSC ERROR: GeneratePRFTmap() line 182 in >>>>> unknowndirectory/src/MainDriver.cxx >>>>> [0]PETSC ERROR: main() line 90 in unknowndirectory/src/MainDriver.cxx >>>>> application called MPI_Abort(MPI_COMM_WORLD, 55) - process 0[unset]: >>>>> aborting job: >>>>> application called MPI_Abort(MPI_COMM_WORLD, 55) - process 0 >>>>> >>>>> >>>>> >>>>> >>>>> >>>>> >>>>> >>>>> >>>>> >>>>> >>>>> On Thu, 12 Mar 2009, Hong Zhang wrote: >>>>> >>>>> >>>>> David, >>>>>> >>>>>> I do not see any problem with the calling sequence. >>>>>> >>>>>> The memory is determined in MatLUFactorSymbolic(). >>>>>> Does your code crashes within MatLUFactorSymbolic()? >>>>>> Please send us complete error message. >>>>>> >>>>>> Hong >>>>>> >>>>>> On Wed, 11 Mar 2009, David Fuentes wrote: >>>>>> >>>>>> >>>>>> Hello, >>>>>>> >>>>>>> I have a sparse matrix, A, with which I want to solve multiple right >>>>>>> hand >>>>>>> sides >>>>>>> with a direct solver. Is this the correct call sequence ? >>>>>>> >>>>>>> >>>>>>> MatGetFactor(A,MAT_SOLVER_PETSC,MAT_FACTOR_LU,&Afact); >>>>>>> IS isrow,iscol; >>>>>>> MatGetOrdering(A,MATORDERING_ND,&isrow,&iscol); >>>>>>> MatLUFactorSymbolic(Afact,A,isrow,iscol,&info); >>>>>>> MatLUFactorNumeric(Afact,A,&info); >>>>>>> MatMatSolve(Afact,B,X); >>>>>>> >>>>>>> >>>>>>> my solve keeps running out of memory >>>>>>> >>>>>>> "[0]PETSC ERROR: Memory requested xxx!" >>>>>>> >>>>>>> >>>>>>> is this in bytes? I can't tell if the problem I'm trying to solve >>>>>>> is too large form my machine or if I just have bug in the call >>>>>>> sequence. >>>>>>> >>>>>>> >>>>>>> >>>>>>> >>>>>>> thank you, >>>>>>> David Fuentes >>>>>>> >>>>>>> >>>>>>> >>>>>> >>>> >>>> -- >>>> What most experimenters take for granted before they begin their >>>> experiments >>>> is infinitely more interesting than any results to which their >>>> experiments >>>> lead. >>>> -- Norbert Wiener >>>> >>>> >> >> >> -- >> What most experimenters take for granted before they begin their >> experiments >> is infinitely more interesting than any results to which their experiments >> lead. 
>> -- Norbert Wiener >> > From fuentesdt at gmail.com Thu Mar 12 20:11:24 2009 From: fuentesdt at gmail.com (David Fuentes) Date: Thu, 12 Mar 2009 20:11:24 -0500 (CDT) Subject: multiple rhs In-Reply-To: References: Message-ID: I'm getting plapack errors in "external library" with MatMatMult_MPIDense_MPIDense with plapack? How is memory handled for a matrix of type MATMPIDENSE? Are all NxN entries allocated and ready for use at time of creation? or do I have to MatInsertValues then Assemble to be ready to use a matrix? [0]PETSC ERROR: --------------------- Error Message ---------------------------- -------- [0]PETSC ERROR: Error in external library! [1]PETSC ERROR: [0]PETSC ERROR: --------------------- Error Message ------------ ------------------------ [1]PETSC ERROR: Error in external library! Due to aparent bugs in PLAPACK,this is not currently supported! [1]PETSC ERROR: Due to aparent bugs in PLAPACK,this is not currently supported! [1]PETSC ERROR: ---------------------------------------------------------------- -------- [1]PETSC ERROR: Petsc Release Version 3.0.0, Patch 4, Fri Mar 6 14:46:08 CST 20 09 [1]PETSC ERROR: See docs/changes/index.html for recent updates. [0]PETSC ERROR: [1]PETSC ERROR: See docs/faq.html for hints about trouble shooting. On Thu, 12 Mar 2009, Hong Zhang wrote: > >>> >>> Is MatCreateMPIDense the recommended matrix type to interface w/ mumps ? >>> Does it use a sparse direct storage or allocate the full n x n matrix? >> >> >> No, MUMPS is "sparse direct" so it uses MPIAIJ. > > For mpi dense matrix, you can use plapack > > Hong >> >> >>> >>> df >>> >>> On Thu, 12 Mar 2009, Matthew Knepley wrote: >>> >>> You can try using a sparse direct solver like MUMPS instead of PETSc LU. >>>> >>>> Matt >>>> >>>> On Thu, Mar 12, 2009 at 9:17 AM, David Fuentes >>>> wrote: >>>> >>>> Thanks Hong, >>>>> >>>>> The complete error message is attached. I think I just had too big >>>>> of a matrix. The matrix i'm trying to factor is 327680 x 327680 >>>>> >>>>> >>>>> [0]PETSC ERROR: --------------------- Error Message >>>>> ------------------------------------ >>>>> [0]PETSC ERROR: Out of memory. This could be due to allocating >>>>> [0]PETSC ERROR: too large an object or bleeding by not properly >>>>> [0]PETSC ERROR: destroying unneeded objects. >>>>> [0]PETSC ERROR: Memory allocated 2047323584 Memory used by process >>>>> 2074058752 >>>>> [0]PETSC ERROR: Try running with -malloc_dump or -malloc_log for info. >>>>> [0]PETSC ERROR: Memory requested 1258466480! >>>>> [0]PETSC ERROR: >>>>> ------------------------------------------------------------------------ >>>>> [0]PETSC ERROR: Petsc Release Version 3.0.0, Patch 2, Wed Jan 14 >>>>> 22:57:05 >>>>> CST 2009 >>>>> [0]PETSC ERROR: See docs/changes/index.html for recent updates. >>>>> [0]PETSC ERROR: See docs/faq.html for hints about trouble shooting. >>>>> [0]PETSC ERROR: See docs/index.html for manual pages. 
>>>>> [0]PETSC ERROR: >>>>> ------------------------------------------------------------------------ >>>>> [0]PETSC ERROR: ./RealTimeImaging on a gcc-4.1.2 named DIPWS019 by >>>>> dfuentes >>>>> Wed Mar 11 20:30:37 2009 >>>>> [0]PETSC ERROR: Libraries linked from >>>>> /usr/local/petsc/petsc-3.0.0-p2/gcc-4.1.2-mpich2-1.0.7-dbg/lib >>>>> [0]PETSC ERROR: Configure run at Sat Jan 31 06:53:09 2009 >>>>> [0]PETSC ERROR: Configure options --download-f-blas-lapack=ifneeded >>>>> --with-mpi-dir=/usr/local --with-matlab=1 --with-matlab-engine=1 >>>>> --with-matlab-dir=/usr/local/matlab2007a --CFLAGS=-fPIC --with-shared=0 >>>>> [0]PETSC ERROR: >>>>> ------------------------------------------------------------------------ >>>>> [0]PETSC ERROR: PetscMallocAlign() line 61 in src/sys/memory/mal.c >>>>> [0]PETSC ERROR: PetscTrMallocDefault() line 194 in src/sys/memory/mtr.c >>>>> [0]PETSC ERROR: PetscFreeSpaceGet() line 14 in src/mat/utils/freespace.c >>>>> [0]PETSC ERROR: MatLUFactorSymbolic_SeqAIJ() line 381 in >>>>> src/mat/impls/aij/seq/aijfact.c >>>>> [0]PETSC ERROR: MatLUFactorSymbolic() line 2289 in >>>>> src/mat/interface/matrix.c >>>>> [0]PETSC ERROR: KalmanFilter::DirectStateUpdate() line 456 in >>>>> unknowndirectory/src/KalmanFilter.cxx >>>>> [0]PETSC ERROR: GeneratePRFTmap() line 182 in >>>>> unknowndirectory/src/MainDriver.cxx >>>>> [0]PETSC ERROR: main() line 90 in unknowndirectory/src/MainDriver.cxx >>>>> application called MPI_Abort(MPI_COMM_WORLD, 55) - process 0[unset]: >>>>> aborting job: >>>>> application called MPI_Abort(MPI_COMM_WORLD, 55) - process 0 >>>>> >>>>> >>>>> >>>>> >>>>> >>>>> >>>>> >>>>> >>>>> >>>>> >>>>> On Thu, 12 Mar 2009, Hong Zhang wrote: >>>>> >>>>> >>>>> David, >>>>>> >>>>>> I do not see any problem with the calling sequence. >>>>>> >>>>>> The memory is determined in MatLUFactorSymbolic(). >>>>>> Does your code crashes within MatLUFactorSymbolic()? >>>>>> Please send us complete error message. >>>>>> >>>>>> Hong >>>>>> >>>>>> On Wed, 11 Mar 2009, David Fuentes wrote: >>>>>> >>>>>> >>>>>> Hello, >>>>>>> >>>>>>> I have a sparse matrix, A, with which I want to solve multiple right >>>>>>> hand >>>>>>> sides >>>>>>> with a direct solver. Is this the correct call sequence ? >>>>>>> >>>>>>> >>>>>>> MatGetFactor(A,MAT_SOLVER_PETSC,MAT_FACTOR_LU,&Afact); >>>>>>> IS isrow,iscol; >>>>>>> MatGetOrdering(A,MATORDERING_ND,&isrow,&iscol); >>>>>>> MatLUFactorSymbolic(Afact,A,isrow,iscol,&info); >>>>>>> MatLUFactorNumeric(Afact,A,&info); >>>>>>> MatMatSolve(Afact,B,X); >>>>>>> >>>>>>> >>>>>>> my solve keeps running out of memory >>>>>>> >>>>>>> "[0]PETSC ERROR: Memory requested xxx!" >>>>>>> >>>>>>> >>>>>>> is this in bytes? I can't tell if the problem I'm trying to solve >>>>>>> is too large form my machine or if I just have bug in the call >>>>>>> sequence. >>>>>>> >>>>>>> >>>>>>> >>>>>>> >>>>>>> thank you, >>>>>>> David Fuentes >>>>>>> >>>>>>> >>>>>>> >>>>>> >>>> >>>> -- >>>> What most experimenters take for granted before they begin their >>>> experiments >>>> is infinitely more interesting than any results to which their >>>> experiments >>>> lead. >>>> -- Norbert Wiener >>>> >>>> >> >> >> -- >> What most experimenters take for granted before they begin their >> experiments >> is infinitely more interesting than any results to which their experiments >> lead. 
>> -- Norbert Wiener >> > From hzhang at mcs.anl.gov Thu Mar 12 20:24:36 2009 From: hzhang at mcs.anl.gov (Hong Zhang) Date: Thu, 12 Mar 2009 20:24:36 -0500 (CDT) Subject: multiple rhs In-Reply-To: References: Message-ID: > > What solver would I use to do a factorization of a dense parallel matrix w/ > plapack? MAT_SOLVER_PLAPACK. See ~petsc-3.0.0/src/mat/examples/tests/ex103.c Hong > > I don't see a > > MPI_SOLVER_PLAPACK > > ? > > > > > > On Thu, 12 Mar 2009, Hong Zhang wrote: > >> >>>> >>>> Is MatCreateMPIDense the recommended matrix type to interface w/ mumps ? >>>> Does it use a sparse direct storage or allocate the full n x n matrix? >>> >>> >>> No, MUMPS is "sparse direct" so it uses MPIAIJ. >> >> For mpi dense matrix, you can use plapack >> >> Hong >>> >>> >>>> >>>> df >>>> >>>> On Thu, 12 Mar 2009, Matthew Knepley wrote: >>>> >>>> You can try using a sparse direct solver like MUMPS instead of PETSc LU. >>>>> >>>>> Matt >>>>> >>>>> On Thu, Mar 12, 2009 at 9:17 AM, David Fuentes >>>>> wrote: >>>>> >>>>> Thanks Hong, >>>>>> >>>>>> The complete error message is attached. I think I just had too big >>>>>> of a matrix. The matrix i'm trying to factor is 327680 x 327680 >>>>>> >>>>>> >>>>>> [0]PETSC ERROR: --------------------- Error Message >>>>>> ------------------------------------ >>>>>> [0]PETSC ERROR: Out of memory. This could be due to allocating >>>>>> [0]PETSC ERROR: too large an object or bleeding by not properly >>>>>> [0]PETSC ERROR: destroying unneeded objects. >>>>>> [0]PETSC ERROR: Memory allocated 2047323584 Memory used by process >>>>>> 2074058752 >>>>>> [0]PETSC ERROR: Try running with -malloc_dump or -malloc_log for info. >>>>>> [0]PETSC ERROR: Memory requested 1258466480! >>>>>> [0]PETSC ERROR: >>>>>> >>>>>> ------------------------------------------------------------------------ >>>>>> [0]PETSC ERROR: Petsc Release Version 3.0.0, Patch 2, Wed Jan 14 >>>>>> 22:57:05 >>>>>> CST 2009 >>>>>> [0]PETSC ERROR: See docs/changes/index.html for recent updates. >>>>>> [0]PETSC ERROR: See docs/faq.html for hints about trouble shooting. >>>>>> [0]PETSC ERROR: See docs/index.html for manual pages. 
>>>>>> [0]PETSC ERROR: >>>>>> >>>>>> ------------------------------------------------------------------------ >>>>>> [0]PETSC ERROR: ./RealTimeImaging on a gcc-4.1.2 named DIPWS019 by >>>>>> dfuentes >>>>>> Wed Mar 11 20:30:37 2009 >>>>>> [0]PETSC ERROR: Libraries linked from >>>>>> /usr/local/petsc/petsc-3.0.0-p2/gcc-4.1.2-mpich2-1.0.7-dbg/lib >>>>>> [0]PETSC ERROR: Configure run at Sat Jan 31 06:53:09 2009 >>>>>> [0]PETSC ERROR: Configure options --download-f-blas-lapack=ifneeded >>>>>> --with-mpi-dir=/usr/local --with-matlab=1 --with-matlab-engine=1 >>>>>> --with-matlab-dir=/usr/local/matlab2007a --CFLAGS=-fPIC --with-shared=0 >>>>>> [0]PETSC ERROR: >>>>>> >>>>>> ------------------------------------------------------------------------ >>>>>> [0]PETSC ERROR: PetscMallocAlign() line 61 in src/sys/memory/mal.c >>>>>> [0]PETSC ERROR: PetscTrMallocDefault() line 194 in src/sys/memory/mtr.c >>>>>> [0]PETSC ERROR: PetscFreeSpaceGet() line 14 in >>>>>> src/mat/utils/freespace.c >>>>>> [0]PETSC ERROR: MatLUFactorSymbolic_SeqAIJ() line 381 in >>>>>> src/mat/impls/aij/seq/aijfact.c >>>>>> [0]PETSC ERROR: MatLUFactorSymbolic() line 2289 in >>>>>> src/mat/interface/matrix.c >>>>>> [0]PETSC ERROR: KalmanFilter::DirectStateUpdate() line 456 in >>>>>> unknowndirectory/src/KalmanFilter.cxx >>>>>> [0]PETSC ERROR: GeneratePRFTmap() line 182 in >>>>>> unknowndirectory/src/MainDriver.cxx >>>>>> [0]PETSC ERROR: main() line 90 in unknowndirectory/src/MainDriver.cxx >>>>>> application called MPI_Abort(MPI_COMM_WORLD, 55) - process 0[unset]: >>>>>> aborting job: >>>>>> application called MPI_Abort(MPI_COMM_WORLD, 55) - process 0 >>>>>> >>>>>> >>>>>> >>>>>> >>>>>> >>>>>> >>>>>> >>>>>> >>>>>> >>>>>> >>>>>> On Thu, 12 Mar 2009, Hong Zhang wrote: >>>>>> >>>>>> >>>>>> David, >>>>>>> >>>>>>> I do not see any problem with the calling sequence. >>>>>>> >>>>>>> The memory is determined in MatLUFactorSymbolic(). >>>>>>> Does your code crashes within MatLUFactorSymbolic()? >>>>>>> Please send us complete error message. >>>>>>> >>>>>>> Hong >>>>>>> >>>>>>> On Wed, 11 Mar 2009, David Fuentes wrote: >>>>>>> >>>>>>> >>>>>>> Hello, >>>>>>>> >>>>>>>> I have a sparse matrix, A, with which I want to solve multiple right >>>>>>>> hand >>>>>>>> sides >>>>>>>> with a direct solver. Is this the correct call sequence ? >>>>>>>> >>>>>>>> >>>>>>>> MatGetFactor(A,MAT_SOLVER_PETSC,MAT_FACTOR_LU,&Afact); >>>>>>>> IS isrow,iscol; >>>>>>>> MatGetOrdering(A,MATORDERING_ND,&isrow,&iscol); >>>>>>>> MatLUFactorSymbolic(Afact,A,isrow,iscol,&info); >>>>>>>> MatLUFactorNumeric(Afact,A,&info); >>>>>>>> MatMatSolve(Afact,B,X); >>>>>>>> >>>>>>>> >>>>>>>> my solve keeps running out of memory >>>>>>>> >>>>>>>> "[0]PETSC ERROR: Memory requested xxx!" >>>>>>>> >>>>>>>> >>>>>>>> is this in bytes? I can't tell if the problem I'm trying to solve >>>>>>>> is too large form my machine or if I just have bug in the call >>>>>>>> sequence. >>>>>>>> >>>>>>>> >>>>>>>> >>>>>>>> >>>>>>>> thank you, >>>>>>>> David Fuentes >>>>>>>> >>>>>>>> >>>>>>>> >>>>>>> >>>>> >>>>> -- >>>>> What most experimenters take for granted before they begin their >>>>> experiments >>>>> is infinitely more interesting than any results to which their >>>>> experiments >>>>> lead. >>>>> -- Norbert Wiener >>>>> >>>>> >>> >>> >>> -- >>> What most experimenters take for granted before they begin their >>> experiments >>> is infinitely more interesting than any results to which their experiments >>> lead. 
>>> -- Norbert Wiener >>> >> > From renzhengyong at gmail.com Fri Mar 13 06:06:38 2009 From: renzhengyong at gmail.com (RenZhengYong) Date: Fri, 13 Mar 2009 12:06:38 +0100 Subject: Mumps, BoomerAMG, or Pestc? In-Reply-To: <4913f8f50903130403l5f80fabcrf1dfb591e81ad8d6@mail.gmail.com> References: <4913f8f50903130403l5f80fabcrf1dfb591e81ad8d6@mail.gmail.com> Message-ID: <4913f8f50903130406x1385c9b4t9afff995f04fde1@mail.gmail.com> Sorry for mistyping petsc to pestc. On Fri, Mar 13, 2009 at 12:03 PM, RenZhengYong wrote: > Dear petsc team, > > I known your excellent codes several months ago. I want to use Mumps as > direct solver for multi-sources problem and BoomerAMG for real-value based > iterative solver. I went though the doc of the petsc. It showed that petsc > offered a easy and top interface to these two packages. So, I think if I > chose the petsc as my tools, the coding work should be easier for me as only > data structure of petsc should be leaned, not one for Mumps and one for > Hypre. > > Am I right? can pestc do the work Mumps and BloomAMG do? I want to get > your confirm on my decision. As you said, sometimes it is important to make > a correct decision and also to learn petsc should not be a short way. If > petsc really do the fast direct solving and fast algebra multigrid > algorithms by Mumps and BloomAMG, respectively, I think PESTC should > definitely be the first choice for my following PhD project. > > Best regards, > > Zhengyong > > > -- > Zhengyong Ren > AUG Group, Institute of Geophysics > Department of Geoscience > NO H 47 Sonneggstrasse 5 > CH-8092, Z?rich, Switzerland > Tel: +41 44 633 37561 > e-mail: renzh at ethz.ch > Gmail: renzhengyong at gmail.com > -- Zhengyong Ren AUG Group, Institute of Geophysics Department of Geoscience NO H 47 Sonneggstrasse 5 CH-8092, Z?rich, Switzerland Tel: +41 44 633 37561 e-mail: renzh at ethz.ch Gmail: renzhengyong at gmail.com -------------- next part -------------- An HTML attachment was scrubbed... URL: From renzhengyong at gmail.com Fri Mar 13 06:03:16 2009 From: renzhengyong at gmail.com (RenZhengYong) Date: Fri, 13 Mar 2009 12:03:16 +0100 Subject: Mumps, BoomerAMG, or Pestc? Message-ID: <4913f8f50903130403l5f80fabcrf1dfb591e81ad8d6@mail.gmail.com> Dear petsc team, I known your excellent codes several months ago. I want to use Mumps as direct solver for multi-sources problem and BoomerAMG for real-value based iterative solver. I went though the doc of the petsc. It showed that petsc offered a easy and top interface to these two packages. So, I think if I chose the petsc as my tools, the coding work should be easier for me as only data structure of petsc should be leaned, not one for Mumps and one for Hypre. Am I right? can pestc do the work Mumps and BloomAMG do? I want to get your confirm on my decision. As you said, sometimes it is important to make a correct decision and also to learn petsc should not be a short way. If petsc really do the fast direct solving and fast algebra multigrid algorithms by Mumps and BloomAMG, respectively, I think PESTC should definitely be the first choice for my following PhD project. Best regards, Zhengyong -- Zhengyong Ren AUG Group, Institute of Geophysics Department of Geoscience NO H 47 Sonneggstrasse 5 CH-8092, Z?rich, Switzerland Tel: +41 44 633 37561 e-mail: renzh at ethz.ch Gmail: renzhengyong at gmail.com -------------- next part -------------- An HTML attachment was scrubbed... 
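To make the question concrete: the application code stays the same whichever backend is chosen; only the runtime options change. A minimal sketch, assuming A, b, x are an already assembled parallel AIJ matrix and Vecs (error checking omitted):

    KSP ksp;
    KSPCreate(PETSC_COMM_WORLD,&ksp);
    KSPSetOperators(ksp,A,A,DIFFERENT_NONZERO_PATTERN);
    KSPSetFromOptions(ksp);   /* reads -ksp_type, -pc_type, ... from the command line */
    KSPSolve(ksp,b,x);
    KSPDestroy(ksp);

The same executable then runs as a MUMPS direct solve with

    -ksp_type preonly -pc_type lu -pc_factor_mat_solver_package mumps

or as a BoomerAMG-preconditioned Krylov solve with

    -pc_type hypre -pc_hypre_type boomeramg

which are exactly the options spelled out in the replies below.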
URL: From knepley at gmail.com Fri Mar 13 08:32:04 2009 From: knepley at gmail.com (Matthew Knepley) Date: Fri, 13 Mar 2009 08:32:04 -0500 Subject: Mumps, BoomerAMG, or Pestc? In-Reply-To: <4913f8f50903130403l5f80fabcrf1dfb591e81ad8d6@mail.gmail.com> References: <4913f8f50903130403l5f80fabcrf1dfb591e81ad8d6@mail.gmail.com> Message-ID: On Fri, Mar 13, 2009 at 6:03 AM, RenZhengYong wrote: > Dear petsc team, > > I known your excellent codes several months ago. I want to use Mumps as > direct solver for multi-sources problem and BoomerAMG for real-value based > iterative solver. I went though the doc of the petsc. It showed that petsc > offered a easy and top interface to these two packages. So, I think if I > chose the petsc as my tools, the coding work should be easier for me as only > data structure of petsc should be leaned, not one for Mumps and one for > Hypre. > > Am I right? can pestc do the work Mumps and BloomAMG do? I want to get > your confirm on my decision. As you said, sometimes it is important to make > a correct decision and also to learn petsc should not be a short way. If > petsc really do the fast direct solving and fast algebra multigrid > algorithms by Mumps and BloomAMG, respectively, I think PESTC should > definitely be the first choice for my following PhD project. Yes, PETSc provides a uniform interface to both BoomerAMG and MUMPS. You must indicate during configure that you want both packages (--download-mumps --download-hypre) and then they are available as a mat_solver_package and a PC if you build an AIJ matrix. Thanks, Matt > > Best regards, > > Zhengyong > > > -- > Zhengyong Ren > AUG Group, Institute of Geophysics > Department of Geoscience > NO H 47 Sonneggstrasse 5 > CH-8092, Z?rich, Switzerland > Tel: +41 44 633 37561 > e-mail: renzh at ethz.ch > Gmail: renzhengyong at gmail.com > -- What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. -- Norbert Wiener -------------- next part -------------- An HTML attachment was scrubbed... URL: From hzhang at mcs.anl.gov Fri Mar 13 08:46:01 2009 From: hzhang at mcs.anl.gov (Hong Zhang) Date: Fri, 13 Mar 2009 08:46:01 -0500 (CDT) Subject: Mumps, BoomerAMG, or Pestc? In-Reply-To: <4913f8f50903130403l5f80fabcrf1dfb591e81ad8d6@mail.gmail.com> References: <4913f8f50903130403l5f80fabcrf1dfb591e81ad8d6@mail.gmail.com> Message-ID: Zhengyong, > I known your excellent codes several months ago. I want to use Mumps as > direct solver for multi-sources problem and BoomerAMG for real-value based > iterative solver. I went though the doc of the petsc. It showed that petsc > offered a easy and top interface to these two packages. So, I think if I > chose the petsc as my tools, the coding work should be easier for me as only > data structure of petsc should be leaned, not one for Mumps and one for > Hypre. > > Am I right? can pestc do the work Mumps and BloomAMG do? I want to get your Yes, you can use Mumps, BloomAMG and other packages (see http://www.mcs.anl.gov/petsc/petsc-as/miscellaneous/external.html) without changing your application code. > confirm on my decision. As you said, sometimes it is important to make a > correct decision and also to learn petsc should not be a short way. If petsc > really do the fast direct solving and fast algebra multigrid algorithms by > Mumps and BloomAMG, respectively, I think PESTC should definitely be the > first choice for my following PhD project. Here is how to get started: 1. 
install petsc with configure options '--download-scalapack --download-superlu --download-superlu_dist --download-mumps --download-blacs --download-hypre' (you need F90 compiler for mumps. Mumps requires scalapack and blacs. I also suggest you install superlu and superlu_dist - commonly used sparse direct sovlers) 2. build petsc and test the installation 3. test mumps and hypre, e.g., cd ~petsc/src/ksp/ksp/examples/tutorials make ex2 ./ex2 -ksp_monitor -ksp_type preonly -pc_type lu -pc_factor_mat_solver_package mumps Norm of error < 1.e-12 iterations 1 ./ex2 -ksp_monitor -pc_type hypre -pc_hypre_type boomeramg 0 KSP Residual norm 7.372609060992e+00 1 KSP Residual norm 5.824128070306e-02 2 KSP Residual norm 8.370364383637e-05 Norm of error 8.39541e-05 iterations 2 Let us know if you encounter difficulty, Hong > > > -- > Zhengyong Ren > AUG Group, Institute of Geophysics > Department of Geoscience > NO H 47 Sonneggstrasse 5 > CH-8092, Z?rich, Switzerland > Tel: +41 44 633 37561 > e-mail: renzh at ethz.ch > Gmail: renzhengyong at gmail.com > From renzhengyong at gmail.com Fri Mar 13 09:02:46 2009 From: renzhengyong at gmail.com (REN) Date: Fri, 13 Mar 2009 15:02:46 +0100 Subject: Mumps, BoomerAMG, or Pestc? In-Reply-To: References: <4913f8f50903130403l5f80fabcrf1dfb591e81ad8d6@mail.gmail.com> Message-ID: <1236952966.5298.1.camel@geop-106.ethz.ch> Hi, Matthew and Hong, I will use PETSc for my 4 years project. :) Thanks Zhengyong Ren On Fri, 2009-03-13 at 08:46 -0500, Hong Zhang wrote: > Zhengyong, > > > I known your excellent codes several months ago. I want to use Mumps as > > direct solver for multi-sources problem and BoomerAMG for real-value based > > iterative solver. I went though the doc of the petsc. It showed that petsc > > offered a easy and top interface to these two packages. So, I think if I > > chose the petsc as my tools, the coding work should be easier for me as only > > data structure of petsc should be leaned, not one for Mumps and one for > > Hypre. > > > > Am I right? can pestc do the work Mumps and BloomAMG do? I want to get your > > Yes, you can use Mumps, BloomAMG and other packages > (see http://www.mcs.anl.gov/petsc/petsc-as/miscellaneous/external.html) > without changing your application code. > > > confirm on my decision. As you said, sometimes it is important to make a > > correct decision and also to learn petsc should not be a short way. If petsc > > really do the fast direct solving and fast algebra multigrid algorithms by > > Mumps and BloomAMG, respectively, I think PESTC should definitely be the > > first choice for my following PhD project. > > Here is how to get started: > > 1. install petsc with configure options > '--download-scalapack --download-superlu --download-superlu_dist > --download-mumps --download-blacs --download-hypre' > > (you need F90 compiler for mumps. Mumps requires > scalapack and blacs. I also suggest you install > superlu and superlu_dist - commonly used sparse direct sovlers) > > 2. build petsc and test the installation > > 3. 
test mumps and hypre, e.g., > cd ~petsc/src/ksp/ksp/examples/tutorials > make ex2 > ./ex2 -ksp_monitor -ksp_type preonly -pc_type lu > -pc_factor_mat_solver_package mumps > > Norm of error < 1.e-12 iterations 1 > > ./ex2 -ksp_monitor -pc_type hypre -pc_hypre_type boomeramg > > 0 KSP Residual norm 7.372609060992e+00 > 1 KSP Residual norm 5.824128070306e-02 > 2 KSP Residual norm 8.370364383637e-05 > Norm of error 8.39541e-05 iterations 2 > > Let us know if you encounter difficulty, > > Hong > > > > > > -- > > Zhengyong Ren > > AUG Group, Institute of Geophysics > > Department of Geoscience > > NO H 47 Sonneggstrasse 5 > > CH-8092, Zrich, Switzerland > > Tel: +41 44 633 37561 > > e-mail: renzh at ethz.ch > > Gmail: renzhengyong at gmail.com > > From rxk at cfdrc.com Fri Mar 13 12:48:39 2009 From: rxk at cfdrc.com (Ravi Kannan) Date: Fri, 13 Mar 2009 11:48:39 -0600 Subject: matrix assembling time In-Reply-To: <49B17071.1030104@cs.wm.edu> Message-ID: Hi, This is Ravi Kannan from CFD Research Corporation. One basic question on the ordering of linear solvers in PETSc: If my A matrix (in AX=B) is a sparse matrix and the bandwidth of A (i.e. the distance between non zero elements) is high, does PETSc reorder the matrix/matrix-equations so as to solve more efficiently. If yes, is there any specific command to do the above? Thanks Ravi -----Original Message----- From: petsc-users-bounces at mcs.anl.gov [mailto:petsc-users-bounces at mcs.anl.gov]On Behalf Of Yixun Liu Sent: Friday, March 06, 2009 12:50 PM To: PETSC Subject: matrix assembling time Hi, Using PETSc the assembling time for a mesh with 6000 vertices is about 14 second parallelized on 4 processors, but another sequential program based on gmm lib is about 0.6 second. PETSc's solver is much faster than gmm, but I don't know why its assembling is so slow although I have preallocate an enough space for the matrix. MatMPIAIJSetPreallocation(sparseMeshMechanicalStiffnessMatrix, 1000, PETSC_NULL, 1000, PETSC_NULL); Yixun From knepley at gmail.com Fri Mar 13 12:34:01 2009 From: knepley at gmail.com (Matthew Knepley) Date: Fri, 13 Mar 2009 12:34:01 -0500 Subject: matrix assembling time In-Reply-To: References: <49B17071.1030104@cs.wm.edu> Message-ID: On Fri, Mar 13, 2009 at 12:48 PM, Ravi Kannan wrote: > Hi, > This is Ravi Kannan from CFD Research Corporation. One basic question on > the ordering of linear solvers in PETSc: If my A matrix (in AX=B) is a > sparse matrix and the bandwidth of A (i.e. the distance between non zero > elements) is high, does PETSc reorder the matrix/matrix-equations so as to > solve more efficiently. If yes, is there any specific command to do the > above? You can reorder the matrix using the MatOrdering class. Matt > > Thanks > Ravi > > > > -----Original Message----- > From: petsc-users-bounces at mcs.anl.gov > [mailto:petsc-users-bounces at mcs.anl.gov]On Behalf Of Yixun Liu > Sent: Friday, March 06, 2009 12:50 PM > To: PETSC > Subject: matrix assembling time > > > Hi, > Using PETSc the assembling time for a mesh with 6000 vertices is about > 14 second parallelized on 4 processors, but another sequential program > based on gmm lib is about 0.6 second. PETSc's solver is much faster than > gmm, but I don't know why its assembling is so slow although I have > preallocate an enough space for the matrix. 
> > MatMPIAIJSetPreallocation(sparseMeshMechanicalStiffnessMatrix, 1000, > PETSC_NULL, 1000, PETSC_NULL); > > Yixun > > > -- What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. -- Norbert Wiener -------------- next part -------------- An HTML attachment was scrubbed... URL: From fuentesdt at gmail.com Fri Mar 13 15:26:37 2009 From: fuentesdt at gmail.com (David Fuentes) Date: Fri, 13 Mar 2009 15:26:37 -0500 (CDT) Subject: Performance of MatMatSolve Message-ID: The majority of time in my code is spent in the MatMatSolve. I'm running MatMatSolve in parallel using Mumps as the factored matrix. Using top, I've noticed that during the MatMatSolve the majority of the load seems to be on the root process. Is this expected? Or do I most likely have a problem with the matrices that I'm passing in? thank you, David Fuentes From rxk at cfdrc.com Fri Mar 13 16:39:25 2009 From: rxk at cfdrc.com (Ravi Kannan) Date: Fri, 13 Mar 2009 15:39:25 -0600 Subject: matrix assembling time In-Reply-To: Message-ID: Hi Matt Are you suggesting to use MatGetOrdering()? Will it work for parallel matrix? Thanks. Ravi -----Original Message----- From: petsc-users-bounces at mcs.anl.gov [mailto:petsc-users-bounces at mcs.anl.gov]On Behalf Of Matthew Knepley Sent: Friday, March 13, 2009 11:34 AM To: PETSc users list Subject: Re: matrix assembling time On Fri, Mar 13, 2009 at 12:48 PM, Ravi Kannan wrote: Hi, This is Ravi Kannan from CFD Research Corporation. One basic question on the ordering of linear solvers in PETSc: If my A matrix (in AX=B) is a sparse matrix and the bandwidth of A (i.e. the distance between non zero elements) is high, does PETSc reorder the matrix/matrix-equations so as to solve more efficiently. If yes, is there any specific command to do the above? You can reorder the matrix using the MatOrdering class. Matt Thanks Ravi -----Original Message----- From: petsc-users-bounces at mcs.anl.gov [mailto:petsc-users-bounces at mcs.anl.gov]On Behalf Of Yixun Liu Sent: Friday, March 06, 2009 12:50 PM To: PETSC Subject: matrix assembling time Hi, Using PETSc the assembling time for a mesh with 6000 vertices is about 14 second parallelized on 4 processors, but another sequential program based on gmm lib is about 0.6 second. PETSc's solver is much faster than gmm, but I don't know why its assembling is so slow although I have preallocate an enough space for the matrix. MatMPIAIJSetPreallocation(sparseMeshMechanicalStiffnessMatrix, 1000, PETSC_NULL, 1000, PETSC_NULL); Yixun -- What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. -- Norbert Wiener -------------- next part -------------- An HTML attachment was scrubbed... URL: From knepley at gmail.com Fri Mar 13 15:36:53 2009 From: knepley at gmail.com (Matthew Knepley) Date: Fri, 13 Mar 2009 15:36:53 -0500 Subject: matrix assembling time In-Reply-To: References: Message-ID: On Fri, Mar 13, 2009 at 4:39 PM, Ravi Kannan wrote: > Hi Matt > > Are you suggesting to use MatGetOrdering()? > That is one way. > Will it work for parallel matrix? > It depends on the particular ordering, but I think most do. Matt > Thanks. 
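For reference, the explicit reordering Matt mentions looks roughly like the sketch below; A is assumed to be an assembled (MPI)AIJ matrix, and, per Matt's caveat above, parallel support depends on the particular ordering chosen:

    IS  rowperm, colperm;
    Mat Aperm;
    MatGetOrdering(A, MATORDERING_RCM, &rowperm, &colperm);  /* e.g. reverse Cuthill-McKee to shrink bandwidth */
    MatPermute(A, rowperm, colperm, &Aperm);                 /* explicitly reordered copy of A */
    /* solve with Aperm, permuting b and x consistently */
    ISDestroy(rowperm);
    ISDestroy(colperm);

As Barry notes further down in this thread, the direct solvers already apply a fill-reducing ordering internally, so an explicit MatPermute() mainly matters outside that path.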
> > Ravi > > -----Original Message----- > *From:* petsc-users-bounces at mcs.anl.gov [mailto: > petsc-users-bounces at mcs.anl.gov]*On Behalf Of *Matthew Knepley > *Sent:* Friday, March 13, 2009 11:34 AM > *To:* PETSc users list > *Subject:* Re: matrix assembling time > > On Fri, Mar 13, 2009 at 12:48 PM, Ravi Kannan wrote: > >> Hi, >> This is Ravi Kannan from CFD Research Corporation. One basic question >> on >> the ordering of linear solvers in PETSc: If my A matrix (in AX=B) is a >> sparse matrix and the bandwidth of A (i.e. the distance between non zero >> elements) is high, does PETSc reorder the matrix/matrix-equations so as to >> solve more efficiently. If yes, is there any specific command to do the >> above? > > > You can reorder the matrix using the MatOrdering class. > > Matt > > >> >> Thanks >> Ravi >> >> >> >> -----Original Message----- >> From: petsc-users-bounces at mcs.anl.gov >> [mailto:petsc-users-bounces at mcs.anl.gov]On Behalf Of Yixun Liu >> Sent: Friday, March 06, 2009 12:50 PM >> To: PETSC >> Subject: matrix assembling time >> >> >> Hi, >> Using PETSc the assembling time for a mesh with 6000 vertices is about >> 14 second parallelized on 4 processors, but another sequential program >> based on gmm lib is about 0.6 second. PETSc's solver is much faster than >> gmm, but I don't know why its assembling is so slow although I have >> preallocate an enough space for the matrix. >> >> MatMPIAIJSetPreallocation(sparseMeshMechanicalStiffnessMatrix, 1000, >> PETSC_NULL, 1000, PETSC_NULL); >> >> Yixun >> >> >> > > > -- > What most experimenters take for granted before they begin their > experiments is infinitely more interesting than any results to which their > experiments lead. > -- Norbert Wiener > > -- What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. -- Norbert Wiener -------------- next part -------------- An HTML attachment was scrubbed... URL: From bsmith at mcs.anl.gov Fri Mar 13 19:55:00 2009 From: bsmith at mcs.anl.gov (Barry Smith) Date: Fri, 13 Mar 2009 19:55:00 -0500 Subject: matrix assembling time In-Reply-To: References: Message-ID: <9E851A30-A825-457A-87E6-BDB3E7B33334@mcs.anl.gov> On Mar 13, 2009, at 12:48 PM, Ravi Kannan wrote: > Hi, > This is Ravi Kannan from CFD Research Corporation. One basic > question on > the ordering of linear solvers in PETSc: If my A matrix (in AX=B) is a > sparse matrix and the bandwidth of A (i.e. the distance between non > zero > elements) is high, does PETSc reorder the matrix/matrix-equations so > as to > solve more efficiently. Depends on what you mean. All the direct solvers use reorderings automatically to reduce fill and hence limit memory and flop usage. The iterative solvers do not. There is much less to gain by reordering for iterative solvers (no memory gain and only a relatively smallish improved cache gain). The "PETSc approach" is that one does the following 1) partitions the grid across processors (using a mesh partitioner) and then 2) numbers the grid on each process in a reasonable ordering BEFORE generating the linear system. Thus the sparse matrix automatically gets a good layout from the layout of the grid. So if you do 1) and 2) then no additional reordering is needed. Barry > If yes, is there any specific command to do the > above? 
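For step 1) of this recipe, the mesh partitioning is usually driven through PETSc's MatPartitioning object. A rough sketch, assuming Adj is a MATMPIADJ matrix holding the parallel adjacency graph of the mesh (the redistribution and local renumbering of step 2) remain application code):

    MatPartitioning part;
    IS              newproc;   /* target process for each locally owned vertex */
    MatPartitioningCreate(PETSC_COMM_WORLD, &part);
    MatPartitioningSetAdjacency(part, Adj);
    MatPartitioningSetFromOptions(part);   /* e.g. -mat_partitioning_type parmetis, if ParMETIS is installed */
    MatPartitioningApply(part, &newproc);
    /* migrate mesh entities according to newproc, renumber locally, then assemble the matrix */
    MatPartitioningDestroy(part);
    ISDestroy(newproc);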
> > Thanks > Ravi > > > > -----Original Message----- > From: petsc-users-bounces at mcs.anl.gov > [mailto:petsc-users-bounces at mcs.anl.gov]On Behalf Of Yixun Liu > Sent: Friday, March 06, 2009 12:50 PM > To: PETSC > Subject: matrix assembling time > > > Hi, > Using PETSc the assembling time for a mesh with 6000 vertices is > about > 14 second parallelized on 4 processors, but another sequential program > based on gmm lib is about 0.6 second. PETSc's solver is much faster > than > gmm, but I don't know why its assembling is so slow although I have > preallocate an enough space for the matrix. > > MatMPIAIJSetPreallocation(sparseMeshMechanicalStiffnessMatrix, 1000, > PETSC_NULL, 1000, PETSC_NULL); > > Yixun > > From bsmith at mcs.anl.gov Fri Mar 13 20:00:43 2009 From: bsmith at mcs.anl.gov (Barry Smith) Date: Fri, 13 Mar 2009 20:00:43 -0500 Subject: multiple rhs In-Reply-To: References: Message-ID: On Mar 12, 2009, at 8:11 PM, David Fuentes wrote: > > I'm getting plapack errors in "external library" with > > MatMatMult_MPIDense_MPIDense > > with plapack? How is memory handled for a matrix > of type MATMPIDENSE? Are all NxN entries allocated and ready for > use at time of creation? Yes, it has all zeros in it. > or do I have to MatInsertValues > then Assemble to be ready to use a matrix? > > > > > > [0]PETSC ERROR: --------------------- Error Message > ---------------------------- > -------- > [0]PETSC ERROR: Error in external library! > [1]PETSC ERROR: [0]PETSC ERROR: --------------------- Error Message > ------------ > ------------------------ > [1]PETSC ERROR: Error in external library! > Due to aparent bugs in PLAPACK,this is not currently supported! ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ I could not get plapack to properly handle nonsquare matrices properly for these operations. I tried hard to debug but plapack is a mess of complexity, got very frustrated. If you take out the generation of this error message, so the code runs, then you can try to debug plapack (I am pretty sure the problem is in plapack, not in the PETSc interface). Barry > > [1]PETSC ERROR: Due to aparent bugs in PLAPACK,this is not currently > supported! > [1]PETSC ERROR: > ---------------------------------------------------------------- > -------- > [1]PETSC ERROR: Petsc Release Version 3.0.0, Patch 4, Fri Mar 6 > 14:46:08 CST 20 > 09 > [1]PETSC ERROR: See docs/changes/index.html for recent updates. > [0]PETSC ERROR: [1]PETSC ERROR: See docs/faq.html for hints about > trouble shooting. > > > > > > On Thu, 12 Mar 2009, Hong Zhang wrote: > >> >>>> Is MatCreateMPIDense the recommended matrix type to interface w/ >>>> mumps ? >>>> Does it use a sparse direct storage or allocate the full n x n >>>> matrix? >>> No, MUMPS is "sparse direct" so it uses MPIAIJ. >> >> For mpi dense matrix, you can use plapack >> >> Hong >>>> df >>>> On Thu, 12 Mar 2009, Matthew Knepley wrote: >>>> >>>> You can try using a sparse direct solver like MUMPS instead of >>>> PETSc LU. >>>>> >>>>> Matt >>>>> On Thu, Mar 12, 2009 at 9:17 AM, David Fuentes >>>> > >>>>> wrote: >>>>> >>>>> Thanks Hong, >>>>>> The complete error message is attached. I think I just had too >>>>>> big >>>>>> of a matrix. The matrix i'm trying to factor is 327680 x 327680 >>>>>> [0]PETSC ERROR: --------------------- Error Message >>>>>> ------------------------------------ >>>>>> [0]PETSC ERROR: Out of memory. This could be due to allocating >>>>>> [0]PETSC ERROR: too large an object or bleeding by not properly >>>>>> [0]PETSC ERROR: destroying unneeded objects. 
>>>>>> [0]PETSC ERROR: Memory allocated 2047323584 Memory used by >>>>>> process >>>>>> 2074058752 >>>>>> [0]PETSC ERROR: Try running with -malloc_dump or -malloc_log >>>>>> for info. >>>>>> [0]PETSC ERROR: Memory requested 1258466480! >>>>>> [0]PETSC ERROR: >>>>>> ------------------------------------------------------------------------ >>>>>> [0]PETSC ERROR: Petsc Release Version 3.0.0, Patch 2, Wed Jan >>>>>> 14 22:57:05 >>>>>> CST 2009 >>>>>> [0]PETSC ERROR: See docs/changes/index.html for recent updates. >>>>>> [0]PETSC ERROR: See docs/faq.html for hints about trouble >>>>>> shooting. >>>>>> [0]PETSC ERROR: See docs/index.html for manual pages. >>>>>> [0]PETSC ERROR: >>>>>> ------------------------------------------------------------------------ >>>>>> [0]PETSC ERROR: ./RealTimeImaging on a gcc-4.1.2 named DIPWS019 >>>>>> by >>>>>> dfuentes >>>>>> Wed Mar 11 20:30:37 2009 >>>>>> [0]PETSC ERROR: Libraries linked from >>>>>> /usr/local/petsc/petsc-3.0.0-p2/gcc-4.1.2-mpich2-1.0.7-dbg/lib >>>>>> [0]PETSC ERROR: Configure run at Sat Jan 31 06:53:09 2009 >>>>>> [0]PETSC ERROR: Configure options --download-f-blas- >>>>>> lapack=ifneeded >>>>>> --with-mpi-dir=/usr/local --with-matlab=1 --with-matlab-engine=1 >>>>>> --with-matlab-dir=/usr/local/matlab2007a --CFLAGS=-fPIC --with- >>>>>> shared=0 >>>>>> [0]PETSC ERROR: >>>>>> ------------------------------------------------------------------------ >>>>>> [0]PETSC ERROR: PetscMallocAlign() line 61 in src/sys/memory/ >>>>>> mal.c >>>>>> [0]PETSC ERROR: PetscTrMallocDefault() line 194 in src/sys/ >>>>>> memory/mtr.c >>>>>> [0]PETSC ERROR: PetscFreeSpaceGet() line 14 in src/mat/utils/ >>>>>> freespace.c >>>>>> [0]PETSC ERROR: MatLUFactorSymbolic_SeqAIJ() line 381 in >>>>>> src/mat/impls/aij/seq/aijfact.c >>>>>> [0]PETSC ERROR: MatLUFactorSymbolic() line 2289 in >>>>>> src/mat/interface/matrix.c >>>>>> [0]PETSC ERROR: KalmanFilter::DirectStateUpdate() line 456 in >>>>>> unknowndirectory/src/KalmanFilter.cxx >>>>>> [0]PETSC ERROR: GeneratePRFTmap() line 182 in >>>>>> unknowndirectory/src/MainDriver.cxx >>>>>> [0]PETSC ERROR: main() line 90 in unknowndirectory/src/ >>>>>> MainDriver.cxx >>>>>> application called MPI_Abort(MPI_COMM_WORLD, 55) - process >>>>>> 0[unset]: >>>>>> aborting job: >>>>>> application called MPI_Abort(MPI_COMM_WORLD, 55) - process 0 >>>>>> On Thu, 12 Mar 2009, Hong Zhang wrote: >>>>>> >>>>>> David, >>>>>>> I do not see any problem with the calling sequence. >>>>>>> The memory is determined in MatLUFactorSymbolic(). >>>>>>> Does your code crashes within MatLUFactorSymbolic()? >>>>>>> Please send us complete error message. >>>>>>> Hong >>>>>>> On Wed, 11 Mar 2009, David Fuentes wrote: >>>>>>> >>>>>>> Hello, >>>>>>>> I have a sparse matrix, A, with which I want to solve >>>>>>>> multiple right >>>>>>>> hand >>>>>>>> sides >>>>>>>> with a direct solver. Is this the correct call sequence ? >>>>>>>> >>>>>>>> MatGetFactor(A,MAT_SOLVER_PETSC,MAT_FACTOR_LU,&Afact); >>>>>>>> IS isrow,iscol; >>>>>>>> MatGetOrdering(A,MATORDERING_ND,&isrow,&iscol); >>>>>>>> MatLUFactorSymbolic(Afact,A,isrow,iscol,&info); >>>>>>>> MatLUFactorNumeric(Afact,A,&info); >>>>>>>> MatMatSolve(Afact,B,X); >>>>>>>> my solve keeps running out of memory >>>>>>>> "[0]PETSC ERROR: Memory requested xxx!" >>>>>>>> is this in bytes? I can't tell if the problem I'm trying to >>>>>>>> solve >>>>>>>> is too large form my machine or if I just have bug in the call >>>>>>>> sequence. 
>>>>>>>> thank you, >>>>>>>> David Fuentes >>>>> -- >>>>> What most experimenters take for granted before they begin their >>>>> experiments >>>>> is infinitely more interesting than any results to which their >>>>> experiments >>>>> lead. >>>>> -- Norbert Wiener >>> -- >>> What most experimenters take for granted before they begin their >>> experiments >>> is infinitely more interesting than any results to which their >>> experiments >>> lead. >>> -- Norbert Wiener >> From hzhang at mcs.anl.gov Fri Mar 13 21:10:55 2009 From: hzhang at mcs.anl.gov (Hong Zhang) Date: Fri, 13 Mar 2009 21:10:55 -0500 (CDT) Subject: Performance of MatMatSolve In-Reply-To: References: Message-ID: David, You may run with option '-log_summary ' and check which function dominates the time. I suspect the symbolic factorization, because it is implemented sequentially in mumps. If this is the case, you may swich to superlu_dist which supports parallel symbolic factorization in the latest release. Let us know what you get, Hong On Fri, 13 Mar 2009, David Fuentes wrote: > > The majority of time in my code is spent in the MatMatSolve. I'm running > MatMatSolve in parallel using Mumps as the factored matrix. > Using top, I've noticed that during the MatMatSolve > the majority of the load seems to be on the root process. > Is this expected? Or do I most likely have a problem with the matrices that > I'm passing in? > > > > thank you, > David Fuentes > > From fuentesdt at gmail.com Sat Mar 14 15:41:15 2009 From: fuentesdt at gmail.com (David Fuentes) Date: Sat, 14 Mar 2009 15:41:15 -0500 (CDT) Subject: multiple rhs In-Reply-To: References: Message-ID: Very Many Thanks for your efforts on this Barry. The PLAPACK website looks like it hasn't been updated since 2007. Maybe PLAPACK is in need of some maintenance? You said "nonsquare", is plapack working for you for square matrices ? thanks again, df >> >> >> >> [0]PETSC ERROR: --------------------- Error Message >> ---------------------------- >> -------- >> [0]PETSC ERROR: Error in external library! >> [1]PETSC ERROR: [0]PETSC ERROR: --------------------- Error Message >> ------------ >> ------------------------ >> [1]PETSC ERROR: Error in external library! >> Due to aparent bugs in PLAPACK,this is not currently supported! > ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ > I could not get plapack to properly handle nonsquare matrices properly for > these operations. > I tried hard to debug but plapack is a mess of complexity, got very > frustrated. If you take out the > generation of this error message, so the code runs, then you can try to debug > plapack > (I am pretty sure the problem is in plapack, not in the PETSc interface). > > Barry > From hzhang at mcs.anl.gov Sat Mar 14 16:18:02 2009 From: hzhang at mcs.anl.gov (Hong Zhang) Date: Sat, 14 Mar 2009 16:18:02 -0500 (CDT) Subject: multiple rhs In-Reply-To: References: Message-ID: > Very Many Thanks for your efforts on this Barry. The PLAPACK > website looks like it hasn't been updated since 2007. Maybe PLAPACK > is in need of some maintenance? You said "nonsquare", is plapack working for > you for square matrices ? Yes, it works for square matrices. See ~petsc/src/mat/examples/tests/ex103.c and ex107.c Hong > > >>> >>> >>> >>> [0]PETSC ERROR: --------------------- Error Message >>> ---------------------------- >>> -------- >>> [0]PETSC ERROR: Error in external library! >>> [1]PETSC ERROR: [0]PETSC ERROR: --------------------- Error Message >>> ------------ >>> ------------------------ >>> [1]PETSC ERROR: Error in external library! 
>>> Due to aparent bugs in PLAPACK,this is not currently supported! >> ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ >> I could not get plapack to properly handle nonsquare matrices properly >> for these operations. >> I tried hard to debug but plapack is a mess of complexity, got very >> frustrated. If you take out the >> generation of this error message, so the code runs, then you can try to >> debug plapack >> (I am pretty sure the problem is in plapack, not in the PETSc interface). >> >> Barry >> > From fuentesdt at gmail.com Sat Mar 14 16:23:41 2009 From: fuentesdt at gmail.com (David Fuentes) Date: Sat, 14 Mar 2009 16:23:41 -0500 (CDT) Subject: Performance of MatMatSolve In-Reply-To: References: Message-ID: Thanks a lot Hong, The switch definitely seemed to balance the load during the SuperLU matmatsolve. Although I'm not completely sure what I'm seeing. Changing the #dof also seemed to affect the load balance of the Mumps MatMatSolve. I need to investigate a bit more. Looking in the profile. The majority of the time is spent in the MatSolve called by the MatMatSolve. ------------------------------------------------------------------------------------------------------------------------ Event Count Time (sec) Flops --- Global --- --- Stage --- Total Max Ratio Max Ratio Max Ratio Mess Avg len Reduct %T %F %M %L %R %T %F %M %L %R Mflop/s ------------------------------------------------------------------------------------------------------------------------ VecCopy 135030 1.0 6.3319e-01 1.6 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00 0 0 0 0 0 0 0 0 0 0 0 VecWAXPY 30 1.0 1.6069e-04 1.9 4.32e+03 1.7 0.0e+00 0.0e+00 0.0e+00 0 0 0 0 0 0 0 0 0 0 840 VecScatterBegin 30 1.0 7.6072e-03 1.5 0.00e+00 0.0 4.7e+04 9.0e+02 0.0e+00 0 0 15 0 0 0 0 50 0 0 0 VecScatterEnd 30 1.0 9.1272e-02 6.8 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00 0 0 0 0 0 0 0 0 0 0 0 MatMultAdd 30 1.0 3.3028e-01 1.4 3.89e+07 1.7 4.7e+04 9.0e+02 0.0e+00 0 0 15 0 0 0 0 50 0 0 3679 MatSolve 135030 1.0 3.0340e+03 1.0 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00 78 0 0 0 0 81 0 0 0 0 0 MatLUFactorSym 30 1.0 2.2563e-02 1.0 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00 0 0 0 0 0 0 0 0 0 0 0 MatLUFactorNum 30 1.0 2.7990e+02 1.0 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00 7 0 0 0 0 7 0 0 0 0 0 MatConvert 150 1.0 2.9276e+00 1.3 0.00e+00 0.0 0.0e+00 0.0e+00 1.8e+02 0 0 0 0 4 0 0 0 0 30 0 MatScale 60 1.0 2.7492e-01 1.9 1.94e+07 1.7 0.0e+00 0.0e+00 0.0e+00 0 0 0 0 0 0 0 0 0 0 2210 MatAssemblyBegin 180 1.0 1.1748e+02236.9 0.00e+00 0.0 0.0e+00 0.0e+00 2.4e+02 2 0 0 0 5 2 0 0 0 40 0 MatAssemblyEnd 180 1.0 1.9992e-02 1.0 0.00e+00 0.0 0.0e+00 0.0e+00 2.4e+02 0 0 0 0 5 0 0 0 0 40 0 MatGetRow 4320 1.7 2.2634e-01 1.5 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00 0 0 0 0 0 0 0 0 0 0 0 MatMatMult 30 1.0 4.2578e+02 1.0 1.75e+11 1.7 4.7e+04 4.0e+06 2.4e+02 11 100 15 97 5 11100 50100 40 12841 MatMatSolve 30 1.0 3.0256e+03 1.0 0.00e+00 0.0 0.0e+00 0.0e+00 6.0e+01 77 0 0 0 1 81 0 0 0 10 0 df On Fri, 13 Mar 2009, Hong Zhang wrote: > David, > > You may run with option '-log_summary ' and > check which function dominates the time. > I suspect the symbolic factorization, because it is > implemented sequentially in mumps. > > If this is the case, you may swich to superlu_dist > which supports parallel symbolic factorization > in the latest release. > > Let us know what you get, > > Hong > > On Fri, 13 Mar 2009, David Fuentes wrote: > >> >> The majority of time in my code is spent in the MatMatSolve. I'm running >> MatMatSolve in parallel using Mumps as the factored matrix. 
>> Using top, I've noticed that during the MatMatSolve >> the majority of the load seems to be on the root process. >> Is this expected? Or do I most likely have a problem with the matrices that >> I'm passing in? >> >> >> >> thank you, >> David Fuentes >> >> > From hzhang at mcs.anl.gov Sat Mar 14 17:00:18 2009 From: hzhang at mcs.anl.gov (Hong Zhang) Date: Sat, 14 Mar 2009 17:00:18 -0500 (CDT) Subject: Performance of MatMatSolve In-Reply-To: References: Message-ID: David, Yes, MatMatSolve dominates. Can you also send us the output of '-log_summary' from superlu_dist? MUMPS only suppports centralized rhs vector b. Thus, we must scatter petsc distributed b into a seqential rhs vector (stored in root proc) in the petsc interface, which explains why the root proc takes longer time. I see that the numerical factorization and MatMatSolve are called 30 times. Do you iterate with the sequence similar to for i=0,1, ... B_i = X_(i-1) Solve A_i * X_i = B_i i.e., the rhs B is based on previously computed X? If this is the case, we should take sequential output X (mumps has this option) and feed it into next iteration without mpi scattering. Hong On Sat, 14 Mar 2009, David Fuentes wrote: > Thanks a lot Hong, > > The switch definitely seemed to balance the load during the SuperLU > matmatsolve. > Although I'm not completely sure what I'm seeing. Changing the #dof > also seemed to affect the load balance of the Mumps MatMatSolve. > I need to investigate a bit more. > > Looking in the profile. The majority of the time is spent in the > MatSolve called by the MatMatSolve. > > > ------------------------------------------------------------------------------------------------------------------------ > Event Count Time (sec) Flops --- Global --- --- > Stage --- Total > Max Ratio Max Ratio Max Ratio Mess Avg len > Reduct %T %F %M %L %R %T %F %M %L %R Mflop/s > ------------------------------------------------------------------------------------------------------------------------ > > VecCopy 135030 1.0 6.3319e-01 1.6 0.00e+00 0.0 0.0e+00 0.0e+00 > 0.0e+00 0 0 0 0 0 0 0 0 0 0 0 > VecWAXPY 30 1.0 1.6069e-04 1.9 4.32e+03 1.7 0.0e+00 0.0e+00 > 0.0e+00 0 0 0 0 0 0 0 0 0 0 840 > VecScatterBegin 30 1.0 7.6072e-03 1.5 0.00e+00 0.0 4.7e+04 9.0e+02 > 0.0e+00 0 0 15 0 0 0 0 50 0 0 0 > VecScatterEnd 30 1.0 9.1272e-02 6.8 0.00e+00 0.0 0.0e+00 0.0e+00 > 0.0e+00 0 0 0 0 0 0 0 0 0 0 0 > MatMultAdd 30 1.0 3.3028e-01 1.4 3.89e+07 1.7 4.7e+04 9.0e+02 > 0.0e+00 0 0 15 0 0 0 0 50 0 0 3679 > MatSolve 135030 1.0 3.0340e+03 1.0 0.00e+00 0.0 0.0e+00 0.0e+00 > 0.0e+00 78 0 0 0 0 81 0 0 0 0 0 > MatLUFactorSym 30 1.0 2.2563e-02 1.0 0.00e+00 0.0 0.0e+00 0.0e+00 > 0.0e+00 0 0 0 0 0 0 0 0 0 0 0 > MatLUFactorNum 30 1.0 2.7990e+02 1.0 0.00e+00 0.0 0.0e+00 0.0e+00 > 0.0e+00 7 0 0 0 0 7 0 0 0 0 0 > MatConvert 150 1.0 2.9276e+00 1.3 0.00e+00 0.0 0.0e+00 0.0e+00 > 1.8e+02 0 0 0 0 4 0 0 0 0 30 0 > MatScale 60 1.0 2.7492e-01 1.9 1.94e+07 1.7 0.0e+00 0.0e+00 > 0.0e+00 0 0 0 0 0 0 0 0 0 0 2210 > MatAssemblyBegin 180 1.0 1.1748e+02236.9 0.00e+00 0.0 0.0e+00 0.0e+00 > 2.4e+02 2 0 0 0 5 2 0 0 0 40 0 > MatAssemblyEnd 180 1.0 1.9992e-02 1.0 0.00e+00 0.0 0.0e+00 0.0e+00 > 2.4e+02 0 0 0 0 5 0 0 0 0 40 0 > MatGetRow 4320 1.7 2.2634e-01 1.5 0.00e+00 0.0 0.0e+00 0.0e+00 > 0.0e+00 0 0 0 0 0 0 0 0 0 0 0 > MatMatMult 30 1.0 4.2578e+02 1.0 1.75e+11 1.7 4.7e+04 4.0e+06 > 2.4e+02 11 100 15 97 5 11100 50100 40 12841 > MatMatSolve 30 1.0 3.0256e+03 1.0 0.00e+00 0.0 0.0e+00 0.0e+00 > 6.0e+01 77 0 0 0 1 81 0 0 0 10 0 > > > > df > > > > On Fri, 13 Mar 2009, Hong Zhang 
wrote: > >> David, >> >> You may run with option '-log_summary ' and >> check which function dominates the time. >> I suspect the symbolic factorization, because it is >> implemented sequentially in mumps. >> >> If this is the case, you may swich to superlu_dist >> which supports parallel symbolic factorization >> in the latest release. >> >> Let us know what you get, >> >> Hong >> >> On Fri, 13 Mar 2009, David Fuentes wrote: >> >>> >>> The majority of time in my code is spent in the MatMatSolve. I'm running >>> MatMatSolve in parallel using Mumps as the factored matrix. >>> Using top, I've noticed that during the MatMatSolve >>> the majority of the load seems to be on the root process. >>> Is this expected? Or do I most likely have a problem with the matrices >>> that I'm passing in? >>> >>> >>> >>> thank you, >>> David Fuentes >>> >>> >> > From xy2102 at columbia.edu Sat Mar 14 17:47:14 2009 From: xy2102 at columbia.edu ((Rebecca) Xuefei YUAN) Date: Sat, 14 Mar 2009 18:47:14 -0400 Subject: -pc_type mg In-Reply-To: References: <20090303164221.i963xq0c0oosgss8@cubmail.cc.columbia.edu> Message-ID: <20090314184714.64u7yivgo400s8cg@cubmail.cc.columbia.edu> Dear all, Is there any examples that use mg as a preconditioner? How could I add my own restriction/prolongation matrix? Thanks, Rebecca From bsmith at mcs.anl.gov Sun Mar 15 13:00:29 2009 From: bsmith at mcs.anl.gov (Barry Smith) Date: Sun, 15 Mar 2009 13:00:29 -0500 Subject: -pc_type mg In-Reply-To: <20090314184714.64u7yivgo400s8cg@cubmail.cc.columbia.edu> References: <20090303164221.i963xq0c0oosgss8@cubmail.cc.columbia.edu> <20090314184714.64u7yivgo400s8cg@cubmail.cc.columbia.edu> Message-ID: <514390EA-2DF5-4D72-B008-E616647C8B38@mcs.anl.gov> src/ksp/ksp/examples/tests/ex19.c On Mar 14, 2009, at 5:47 PM, (Rebecca) Xuefei YUAN wrote: > Dear all, > > Is there any examples that use mg as a preconditioner? How could I > add my own restriction/prolongation matrix? > > Thanks, > > Rebecca From fuentesdt at gmail.com Sun Mar 15 13:27:27 2009 From: fuentesdt at gmail.com (David Fuentes) Date: Sun, 15 Mar 2009 13:27:27 -0500 (CDT) Subject: Performance of MatMatSolve In-Reply-To: References: Message-ID: On Sat, 14 Mar 2009, Hong Zhang wrote: > > David, > > Yes, MatMatSolve dominates. Can you also send us the output of > '-log_summary' from superlu_dist? > > MUMPS only suppports centralized rhs vector b. > Thus, we must scatter petsc distributed b into a seqential rhs vector (stored > in root proc) in the petsc interface, which explains why the root proc takes > longer time. > I see that the numerical factorization and MatMatSolve are called > 30 times. > Do you iterate with the sequence similar to > for i=0,1, ... > B_i = X_(i-1) > Solve A_i * X_i = B_i > > i.e., the rhs B is based on previously computed X? Hong, Yes my sequence is similiar to the algorithm above. The numbers I sent were from superlu. I'm seeing pretty similiar performance profiles between the two. Sorry, I tried to get a good apples to apples comparison but getting seg faults as I increase the # of processors w/ mumps which is why it is ran w/ only 24 procs and super lu is w/ 40 procs. 
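For orientation, the loop being profiled in the tables below follows the calling sequence from the earlier "multiple rhs" thread; one pass might look roughly like this sketch (F, B, X, isrow, iscol, info are placeholders, A is rebuilt each pass, error checking omitted):

    MatGetFactor(A, MAT_SOLVER_SUPERLU_DIST, MAT_FACTOR_LU, &F);   /* or MAT_SOLVER_MUMPS */
    MatGetOrdering(A, MATORDERING_ND, &isrow, &iscol);
    MatLUFactorSymbolic(F, A, isrow, iscol, &info);
    MatLUFactorNumeric(F, A, &info);
    MatMatSolve(F, B, X);                    /* B, X dense; PETSc loops MatSolve() over the columns */
    MatCopy(X, B, SAME_NONZERO_PATTERN);     /* X_i becomes the right-hand side B_(i+1) */
    MatDestroy(F);                           /* before the next pass */

With 30 passes and B holding 135030/30 = 4501 columns, the 135030 MatSolve() calls in the tables are where the time goes.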
------------------------------------------------------------------------------------------------------------------------ Event Count Time (sec) Flops --- Global --- --- Stage --- Total Max Ratio Max Ratio Max Ratio Mess Avg len Reduct %T %F %M %L %R %T %F %M %L %R Mflop/s ------------------------------------------------------------------------------------------------------------------------ --- Event Stage 3: State Update (superlu 40 processors) VecCopy 135030 1.0 6.3319e-01 1.6 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00 0 0 0 0 0 0 0 0 0 0 0 VecWAXPY 30 1.0 1.6069e-04 1.9 4.32e+03 1.7 0.0e+00 0.0e+00 0.0e+00 0 0 0 0 0 0 0 0 0 0 840 VecScatterBegin 30 1.0 7.6072e-03 1.5 0.00e+00 0.0 4.7e+04 9.0e+02 0.0e+00 0 0 15 0 0 0 0 50 0 0 0 VecScatterEnd 30 1.0 9.1272e-02 6.8 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00 0 0 0 0 0 0 0 0 0 0 0 MatMultAdd 30 1.0 3.3028e-01 1.4 3.89e+07 1.7 4.7e+04 9.0e+02 0.0e+00 0 0 15 0 0 0 0 50 0 0 3679 MatSolve 135030 1.0 3.0340e+03 1.0 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00 78 0 0 0 0 81 0 0 0 0 0 MatLUFactorSym 30 1.0 2.2563e-02 1.0 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00 0 0 0 0 0 0 0 0 0 0 0 MatLUFactorNum 30 1.0 2.7990e+02 1.0 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00 7 0 0 0 0 7 0 0 0 0 0 MatConvert 150 1.0 2.9276e+00 1.3 0.00e+00 0.0 0.0e+00 0.0e+00 1.8e+02 0 0 0 0 2 0 0 0 0 30 0 MatScale 60 1.0 2.7492e-01 1.9 1.94e+07 1.7 0.0e+00 0.0e+00 0.0e+00 0 0 0 0 0 0 0 0 0 0 2210 MatAssemblyBegin 180 1.0 1.1748e+02236.9 0.00e+00 0.0 0.0e+00 0.0e+00 2.4e+02 2 0 0 0 2 2 0 0 0 40 0 MatAssemblyEnd 180 1.0 1.9992e-02 1.0 0.00e+00 0.0 0.0e+00 0.0e+00 2.4e+02 0 0 0 0 2 0 0 0 0 40 0 MatGetRow 4320 1.7 2.2634e-01 1.5 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00 0 0 0 0 0 0 0 0 0 0 0 MatMatMult 30 1.0 4.2578e+02 1.0 1.75e+11 1.7 4.7e+04 4.0e+06 2.4e+02 11100 15 97 2 11100 50100 40 12841 MatMatSolve 30 1.0 3.0256e+03 1.0 0.00e+00 0.0 0.0e+00 0.0e+00 6.0e+01 77 0 0 0 1 81 0 0 0 10 0 --- Event Stage 3: State Update (mumps 24 processors) VecWAXPY 30 1.0 3.5802e-04 2.0 6.00e+03 1.1 0.0e+00 0.0e+00 0.0e+00 0 0 0 0 0 0 0 0 0 0 377 VecScatterBegin 270090 1.0 2.6040e+0121.1 0.00e+00 0.0 3.1e+06 2.3e+03 0.0e+00 0 0 97 6 0 0 0 99 6 0 0 VecScatterEnd 135060 1.0 3.7928e+0164.2 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00 0 0 0 0 0 0 0 0 0 0 0 MatMultAdd 30 1.0 4.5802e-01 2.3 5.40e+07 1.1 1.7e+04 1.5e+03 0.0e+00 0 0 1 0 0 0 0 1 0 0 2653 MatSolve 135030 1.0 6.4960e+03 1.0 0.00e+00 0.0 3.1e+06 2.3e+03 1.5e+02 81 0 96 6 0 86 0 99 6 7 0 MatLUFactorSym 30 1.0 1.0538e-04 1.7 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00 0 0 0 0 0 0 0 0 0 0 0 MatLUFactorNum 30 1.0 4.4708e+02 1.0 0.00e+00 0.0 0.0e+00 0.0e+00 1.8e+02 6 0 0 0 0 6 0 0 0 9 0 MatConvert 150 1.0 4.7433e+00 1.3 0.00e+00 0.0 0.0e+00 0.0e+00 6.3e+02 0 0 0 0 0 0 0 0 0 30 0 MatScale 60 1.0 4.3342e-01 6.7 2.70e+07 1.1 0.0e+00 0.0e+00 0.0e+00 0 0 0 0 0 0 0 0 0 0 1402 MatAssemblyBegin 180 1.0 8.4294e+01 5.9 0.00e+00 0.0 0.0e+00 0.0e+00 2.4e+02 1 0 0 0 0 1 0 0 0 12 0 MatAssemblyEnd 180 1.0 1.3100e-01 2.5 0.00e+00 0.0 0.0e+00 0.0e+00 4.2e+02 0 0 0 0 0 0 0 0 0 20 0 MatGetRow 6000 1.1 3.6813e-01 1.1 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00 0 0 0 0 0 0 0 0 0 0 0 MatMatMult 30 1.0 6.1625e+02 1.0 2.43e+11 1.1 1.7e+04 6.8e+06 5.1e+02 8100 1 91 0 8100 1 94 25 8872 MatMatSolve 30 1.0 6.4946e+03 1.0 0.00e+00 0.0 3.1e+06 2.3e+03 1.2e+02 81 0 96 6 0 86 0 99 6 6 0 ------------------------------------------------------------------------------------------------------------------------ > On Sat, 14 Mar 2009, David Fuentes wrote: > >> Thanks a lot Hong, >> >> The switch definitely seemed to balance the load during 
the SuperLU >> matmatsolve. >> Although I'm not completely sure what I'm seeing. Changing the #dof >> also seemed to affect the load balance of the Mumps MatMatSolve. >> I need to investigate a bit more. >> >> Looking in the profile. The majority of the time is spent in the >> MatSolve called by the MatMatSolve. >> >> >> >> ------------------------------------------------------------------------------------------------------------------------ >> Event Count Time (sec) Flops --- Global --- --- >> Stage --- Total >> Max Ratio Max Ratio Max Ratio Mess Avg len >> Reduct %T %F %M %L %R %T %F %M %L %R Mflop/s >> >> ------------------------------------------------------------------------------------------------------------------------ >> >> VecCopy 135030 1.0 6.3319e-01 1.6 0.00e+00 0.0 0.0e+00 0.0e+00 >> 0.0e+00 0 0 0 0 0 0 0 0 0 0 0 >> VecWAXPY 30 1.0 1.6069e-04 1.9 4.32e+03 1.7 0.0e+00 0.0e+00 >> 0.0e+00 0 0 0 0 0 0 0 0 0 0 840 >> VecScatterBegin 30 1.0 7.6072e-03 1.5 0.00e+00 0.0 4.7e+04 9.0e+02 >> 0.0e+00 0 0 15 0 0 0 0 50 0 0 0 >> VecScatterEnd 30 1.0 9.1272e-02 6.8 0.00e+00 0.0 0.0e+00 0.0e+00 >> 0.0e+00 0 0 0 0 0 0 0 0 0 0 0 >> MatMultAdd 30 1.0 3.3028e-01 1.4 3.89e+07 1.7 4.7e+04 9.0e+02 >> 0.0e+00 0 0 15 0 0 0 0 50 0 0 3679 >> MatSolve 135030 1.0 3.0340e+03 1.0 0.00e+00 0.0 0.0e+00 0.0e+00 >> 0.0e+00 78 0 0 0 0 81 0 0 0 0 0 >> MatLUFactorSym 30 1.0 2.2563e-02 1.0 0.00e+00 0.0 0.0e+00 0.0e+00 >> 0.0e+00 0 0 0 0 0 0 0 0 0 0 0 >> MatLUFactorNum 30 1.0 2.7990e+02 1.0 0.00e+00 0.0 0.0e+00 0.0e+00 >> 0.0e+00 7 0 0 0 0 7 0 0 0 0 0 >> MatConvert 150 1.0 2.9276e+00 1.3 0.00e+00 0.0 0.0e+00 0.0e+00 >> 1.8e+02 0 0 0 0 4 0 0 0 0 30 0 >> MatScale 60 1.0 2.7492e-01 1.9 1.94e+07 1.7 0.0e+00 0.0e+00 >> 0.0e+00 0 0 0 0 0 0 0 0 0 0 2210 >> MatAssemblyBegin 180 1.0 1.1748e+02236.9 0.00e+00 0.0 0.0e+00 0.0e+00 >> 2.4e+02 2 0 0 0 5 2 0 0 0 40 0 >> MatAssemblyEnd 180 1.0 1.9992e-02 1.0 0.00e+00 0.0 0.0e+00 0.0e+00 >> 2.4e+02 0 0 0 0 5 0 0 0 0 40 0 >> MatGetRow 4320 1.7 2.2634e-01 1.5 0.00e+00 0.0 0.0e+00 0.0e+00 >> 0.0e+00 0 0 0 0 0 0 0 0 0 0 0 >> MatMatMult 30 1.0 4.2578e+02 1.0 1.75e+11 1.7 4.7e+04 4.0e+06 >> 2.4e+02 11 100 15 97 5 11100 50100 40 12841 >> MatMatSolve 30 1.0 3.0256e+03 1.0 0.00e+00 0.0 0.0e+00 0.0e+00 >> 6.0e+01 77 0 0 0 1 81 0 0 0 10 0 >> >> >> >> df >> >> >> >> On Fri, 13 Mar 2009, Hong Zhang wrote: >> >>> David, >>> >>> You may run with option '-log_summary ' and >>> check which function dominates the time. >>> I suspect the symbolic factorization, because it is >>> implemented sequentially in mumps. >>> >>> If this is the case, you may swich to superlu_dist >>> which supports parallel symbolic factorization >>> in the latest release. >>> >>> Let us know what you get, >>> >>> Hong >>> >>> On Fri, 13 Mar 2009, David Fuentes wrote: >>> >>>> >>>> The majority of time in my code is spent in the MatMatSolve. I'm running >>>> MatMatSolve in parallel using Mumps as the factored matrix. >>>> Using top, I've noticed that during the MatMatSolve >>>> the majority of the load seems to be on the root process. >>>> Is this expected? Or do I most likely have a problem with the matrices >>>> that I'm passing in? >>>> >>>> >>>> >>>> thank you, >>>> David Fuentes >>>> >>>> >>> >> > From fuentesdt at gmail.com Sun Mar 15 20:36:30 2009 From: fuentesdt at gmail.com (David Fuentes) Date: Sun, 15 Mar 2009 20:36:30 -0500 (CDT) Subject: multiple rhs In-Reply-To: References: Message-ID: On Sat, 14 Mar 2009, Hong Zhang wrote: > >> Very Many Thanks for your efforts on this Barry. 
The PLAPACK >> website looks like it hasn't been updated since 2007. Maybe PLAPACK >> is in need of some maintenance? You said "nonsquare", is plapack working >> for you for square matrices ? > > Yes, it works for square matrices. > See ~petsc/src/mat/examples/tests/ex103.c and ex107.c > > Hong Hi Hong, Does Petsc/Plapack work for you for MatMatMult operations on square dense parallel matrices? the tests above seem to have matmult operations but not matmatmult. df From hzhang at mcs.anl.gov Mon Mar 16 09:43:43 2009 From: hzhang at mcs.anl.gov (Hong Zhang) Date: Mon, 16 Mar 2009 09:43:43 -0500 (CDT) Subject: multiple rhs In-Reply-To: References: Message-ID: >>> website looks like it hasn't been updated since 2007. Maybe PLAPACK >>> is in need of some maintenance? You said "nonsquare", is plapack working >>> for you for square matrices ? >> >> Yes, it works for square matrices. >> See ~petsc/src/mat/examples/tests/ex103.c and ex107.c >> >> Hong > > > Hi Hong, > > Does Petsc/Plapack work for you for MatMatMult operations > on square dense parallel matrices? the tests above seem to have matmult > operations but not matmatmult. > Sorry, I thought you want use plapack for solving Ax=b for square matrix. No, MatMatMult() does not work for square parallel matrices either. ex123.c crashes on square matrix as well. I'll try to investigate it when time permits. Hong From hzhang at mcs.anl.gov Mon Mar 16 10:34:17 2009 From: hzhang at mcs.anl.gov (Hong Zhang) Date: Mon, 16 Mar 2009 10:34:17 -0500 (CDT) Subject: Performance of MatMatSolve In-Reply-To: References: Message-ID: David, Superlu_dist seems sligtly better. Does mumps crashes during numeric factorization due to memory limitation? You may try the option '-mat_mumps_icntl_14 ' with num>20 (ICNTL(14): percentage of estimated workspace increase, default=20). Run your code with '-help' to see all available options. >From your output > MatSolve 135030 1.0 3.0340e+03 i.e., you called MatMatSolve() 30 times, with num of rhs= 135030 (matrix B has 135030/30 columns). Although superlu_dist and mumps suppport multiple rhs operation, petsc interface actually calls MatSolve() in a loop, which can be accelarated if petsc interfaces superlu/mumps's MatMatSolve() directly. I'll try to add it into the interface and let you know after I'm done (it might take a while because I'm tied with other projects). May I have your calling sequence of using MatMatSolve()? To me, the performances of superlu_dist and mumps are reasonable under current version of petsc library. Thanks for providing us the data, Hong On Sun, 15 Mar 2009, David Fuentes wrote: > On Sat, 14 Mar 2009, Hong Zhang wrote: > >> >> David, >> >> Yes, MatMatSolve dominates. Can you also send us the output of >> '-log_summary' from superlu_dist? >> >> MUMPS only suppports centralized rhs vector b. >> Thus, we must scatter petsc distributed b into a seqential rhs vector >> (stored in root proc) in the petsc interface, which explains why the root >> proc takes longer time. >> I see that the numerical factorization and MatMatSolve are called >> 30 times. >> Do you iterate with the sequence similar to >> for i=0,1, ... >> B_i = X_(i-1) >> Solve A_i * X_i = B_i >> >> i.e., the rhs B is based on previously computed X? > > Hong, > > Yes my sequence is similiar to the algorithm above. > > > The numbers I sent were from superlu. I'm seeing pretty similiar > performance profiles between the two. 
Sorry, I tried to get a good > apples to apples comparison but getting seg faults as I increase > the # of processors w/ mumps which is why it is ran w/ only 24 procs and > super lu is w/ 40 procs. > > > ------------------------------------------------------------------------------------------------------------------------ > Event Count Time (sec) Flops --- Global --- --- > Stage --- Total > Max Ratio Max Ratio Max Ratio Mess Avg len > Reduct %T %F %M %L %R %T %F %M %L %R Mflop/s > ------------------------------------------------------------------------------------------------------------------------ > > --- Event Stage 3: State Update (superlu 40 processors) > > VecCopy 135030 1.0 6.3319e-01 1.6 0.00e+00 0.0 0.0e+00 0.0e+00 > 0.0e+00 0 0 0 0 0 0 0 0 0 0 0 > VecWAXPY 30 1.0 1.6069e-04 1.9 4.32e+03 1.7 0.0e+00 0.0e+00 > 0.0e+00 0 0 0 0 0 0 0 0 0 0 840 > VecScatterBegin 30 1.0 7.6072e-03 1.5 0.00e+00 0.0 4.7e+04 9.0e+02 > 0.0e+00 0 0 15 0 0 0 0 50 0 0 0 > VecScatterEnd 30 1.0 9.1272e-02 6.8 0.00e+00 0.0 0.0e+00 0.0e+00 > 0.0e+00 0 0 0 0 0 0 0 0 0 0 0 > MatMultAdd 30 1.0 3.3028e-01 1.4 3.89e+07 1.7 4.7e+04 9.0e+02 > 0.0e+00 0 0 15 0 0 0 0 50 0 0 3679 > MatSolve 135030 1.0 3.0340e+03 1.0 0.00e+00 0.0 0.0e+00 0.0e+00 > 0.0e+00 78 0 0 0 0 81 0 0 0 0 0 > MatLUFactorSym 30 1.0 2.2563e-02 1.0 0.00e+00 0.0 0.0e+00 0.0e+00 > 0.0e+00 0 0 0 0 0 0 0 0 0 0 0 > MatLUFactorNum 30 1.0 2.7990e+02 1.0 0.00e+00 0.0 0.0e+00 0.0e+00 > 0.0e+00 7 0 0 0 0 7 0 0 0 0 0 > MatConvert 150 1.0 2.9276e+00 1.3 0.00e+00 0.0 0.0e+00 0.0e+00 > 1.8e+02 0 0 0 0 2 0 0 0 0 30 0 > MatScale 60 1.0 2.7492e-01 1.9 1.94e+07 1.7 0.0e+00 0.0e+00 > 0.0e+00 0 0 0 0 0 0 0 0 0 0 2210 > MatAssemblyBegin 180 1.0 1.1748e+02236.9 0.00e+00 0.0 0.0e+00 0.0e+00 > 2.4e+02 2 0 0 0 2 2 0 0 0 40 0 > MatAssemblyEnd 180 1.0 1.9992e-02 1.0 0.00e+00 0.0 0.0e+00 0.0e+00 > 2.4e+02 0 0 0 0 2 0 0 0 0 40 0 > MatGetRow 4320 1.7 2.2634e-01 1.5 0.00e+00 0.0 0.0e+00 0.0e+00 > 0.0e+00 0 0 0 0 0 0 0 0 0 0 0 > MatMatMult 30 1.0 4.2578e+02 1.0 1.75e+11 1.7 4.7e+04 4.0e+06 > 2.4e+02 11100 15 97 2 11100 50100 40 12841 > MatMatSolve 30 1.0 3.0256e+03 1.0 0.00e+00 0.0 0.0e+00 0.0e+00 > 6.0e+01 77 0 0 0 1 81 0 0 0 10 0 > > --- Event Stage 3: State Update (mumps 24 processors) > > VecWAXPY 30 1.0 3.5802e-04 2.0 6.00e+03 1.1 0.0e+00 0.0e+00 > 0.0e+00 0 0 0 0 0 0 0 0 0 0 377 > VecScatterBegin 270090 1.0 2.6040e+0121.1 0.00e+00 0.0 3.1e+06 2.3e+03 > 0.0e+00 0 0 97 6 0 0 0 99 6 0 0 > VecScatterEnd 135060 1.0 3.7928e+0164.2 0.00e+00 0.0 0.0e+00 0.0e+00 > 0.0e+00 0 0 0 0 0 0 0 0 0 0 0 > MatMultAdd 30 1.0 4.5802e-01 2.3 5.40e+07 1.1 1.7e+04 1.5e+03 > 0.0e+00 0 0 1 0 0 0 0 1 0 0 2653 > MatSolve 135030 1.0 6.4960e+03 1.0 0.00e+00 0.0 3.1e+06 2.3e+03 > 1.5e+02 81 0 96 6 0 86 0 99 6 7 0 > MatLUFactorSym 30 1.0 1.0538e-04 1.7 0.00e+00 0.0 0.0e+00 0.0e+00 > 0.0e+00 0 0 0 0 0 0 0 0 0 0 0 > MatLUFactorNum 30 1.0 4.4708e+02 1.0 0.00e+00 0.0 0.0e+00 0.0e+00 > 1.8e+02 6 0 0 0 0 6 0 0 0 9 0 > MatConvert 150 1.0 4.7433e+00 1.3 0.00e+00 0.0 0.0e+00 0.0e+00 > 6.3e+02 0 0 0 0 0 0 0 0 0 30 0 > MatScale 60 1.0 4.3342e-01 6.7 2.70e+07 1.1 0.0e+00 0.0e+00 > 0.0e+00 0 0 0 0 0 0 0 0 0 0 1402 > MatAssemblyBegin 180 1.0 8.4294e+01 5.9 0.00e+00 0.0 0.0e+00 0.0e+00 > 2.4e+02 1 0 0 0 0 1 0 0 0 12 0 > MatAssemblyEnd 180 1.0 1.3100e-01 2.5 0.00e+00 0.0 0.0e+00 0.0e+00 > 4.2e+02 0 0 0 0 0 0 0 0 0 20 0 > MatGetRow 6000 1.1 3.6813e-01 1.1 0.00e+00 0.0 0.0e+00 0.0e+00 > 0.0e+00 0 0 0 0 0 0 0 0 0 0 0 > MatMatMult 30 1.0 6.1625e+02 1.0 2.43e+11 1.1 1.7e+04 6.8e+06 > 5.1e+02 8100 1 91 0 8100 1 94 25 8872 > 
MatMatSolve 30 1.0 6.4946e+03 1.0 0.00e+00 0.0 3.1e+06 2.3e+03 > 1.2e+02 81 0 96 6 0 86 0 99 6 6 0 > ------------------------------------------------------------------------------------------------------------------------ > > > > > > >> On Sat, 14 Mar 2009, David Fuentes wrote: >> >>> Thanks a lot Hong, >>> >>> The switch definitely seemed to balance the load during the SuperLU >>> matmatsolve. >>> Although I'm not completely sure what I'm seeing. Changing the #dof >>> also seemed to affect the load balance of the Mumps MatMatSolve. >>> I need to investigate a bit more. >>> >>> Looking in the profile. The majority of the time is spent in the >>> MatSolve called by the MatMatSolve. >>> >>> >>> >>> >>> ------------------------------------------------------------------------------------------------------------------------ >>> Event Count Time (sec) Flops --- Global --- --- >>> Stage --- Total >>> Max Ratio Max Ratio Max Ratio Mess Avg len >>> Reduct %T %F %M %L %R %T %F %M %L %R Mflop/s >>> >>> >>> ------------------------------------------------------------------------------------------------------------------------ >>> >>> VecCopy 135030 1.0 6.3319e-01 1.6 0.00e+00 0.0 0.0e+00 0.0e+00 >>> 0.0e+00 0 0 0 0 0 0 0 0 0 0 0 >>> VecWAXPY 30 1.0 1.6069e-04 1.9 4.32e+03 1.7 0.0e+00 0.0e+00 >>> 0.0e+00 0 0 0 0 0 0 0 0 0 0 840 >>> VecScatterBegin 30 1.0 7.6072e-03 1.5 0.00e+00 0.0 4.7e+04 9.0e+02 >>> 0.0e+00 0 0 15 0 0 0 0 50 0 0 0 >>> VecScatterEnd 30 1.0 9.1272e-02 6.8 0.00e+00 0.0 0.0e+00 0.0e+00 >>> 0.0e+00 0 0 0 0 0 0 0 0 0 0 0 >>> MatMultAdd 30 1.0 3.3028e-01 1.4 3.89e+07 1.7 4.7e+04 9.0e+02 >>> 0.0e+00 0 0 15 0 0 0 0 50 0 0 3679 >>> MatSolve 135030 1.0 3.0340e+03 1.0 0.00e+00 0.0 0.0e+00 0.0e+00 >>> 0.0e+00 78 0 0 0 0 81 0 0 0 0 0 >>> MatLUFactorSym 30 1.0 2.2563e-02 1.0 0.00e+00 0.0 0.0e+00 0.0e+00 >>> 0.0e+00 0 0 0 0 0 0 0 0 0 0 0 >>> MatLUFactorNum 30 1.0 2.7990e+02 1.0 0.00e+00 0.0 0.0e+00 0.0e+00 >>> 0.0e+00 7 0 0 0 0 7 0 0 0 0 0 >>> MatConvert 150 1.0 2.9276e+00 1.3 0.00e+00 0.0 0.0e+00 0.0e+00 >>> 1.8e+02 0 0 0 0 4 0 0 0 0 30 0 >>> MatScale 60 1.0 2.7492e-01 1.9 1.94e+07 1.7 0.0e+00 0.0e+00 >>> 0.0e+00 0 0 0 0 0 0 0 0 0 0 2210 >>> MatAssemblyBegin 180 1.0 1.1748e+02236.9 0.00e+00 0.0 0.0e+00 0.0e+00 >>> 2.4e+02 2 0 0 0 5 2 0 0 0 40 0 >>> MatAssemblyEnd 180 1.0 1.9992e-02 1.0 0.00e+00 0.0 0.0e+00 0.0e+00 >>> 2.4e+02 0 0 0 0 5 0 0 0 0 40 0 >>> MatGetRow 4320 1.7 2.2634e-01 1.5 0.00e+00 0.0 0.0e+00 0.0e+00 >>> 0.0e+00 0 0 0 0 0 0 0 0 0 0 0 >>> MatMatMult 30 1.0 4.2578e+02 1.0 1.75e+11 1.7 4.7e+04 4.0e+06 >>> 2.4e+02 11 100 15 97 5 11100 50100 40 12841 >>> MatMatSolve 30 1.0 3.0256e+03 1.0 0.00e+00 0.0 0.0e+00 0.0e+00 >>> 6.0e+01 77 0 0 0 1 81 0 0 0 10 0 >>> >>> >>> >>> df >>> >>> >>> >>> On Fri, 13 Mar 2009, Hong Zhang wrote: >>> >>>> David, >>>> >>>> You may run with option '-log_summary ' and >>>> check which function dominates the time. >>>> I suspect the symbolic factorization, because it is >>>> implemented sequentially in mumps. >>>> >>>> If this is the case, you may swich to superlu_dist >>>> which supports parallel symbolic factorization >>>> in the latest release. >>>> >>>> Let us know what you get, >>>> >>>> Hong >>>> >>>> On Fri, 13 Mar 2009, David Fuentes wrote: >>>> >>>>> >>>>> The majority of time in my code is spent in the MatMatSolve. I'm running >>>>> MatMatSolve in parallel using Mumps as the factored matrix. >>>>> Using top, I've noticed that during the MatMatSolve >>>>> the majority of the load seems to be on the root process. >>>>> Is this expected? 
Or do I most likely have a problem with the matrices >>>>> that I'm passing in? >>>>> >>>>> >>>>> >>>>> thank you, >>>>> David Fuentes >>>>> >>>>> >>>> >>> >> > From rxk at cfdrc.com Tue Mar 17 11:41:59 2009 From: rxk at cfdrc.com (Ravi Kannan) Date: Tue, 17 Mar 2009 10:41:59 -0600 Subject: matrix assembling time In-Reply-To: Message-ID: Hi Barry and others For the iterative solver, you mentioned there is much less to gain by reording. However, you also said we should have a reasonable ordering before generating the linear system. Suppose I already have already assembled a large system in parallel (with bad bandwidth), will reordering the system help to solve the system or not? Do we have to do this before the assembling to PETSs solver? In this case, I think we will need to renumbering all the nodes and/or cells, not only processor-wise but globally considering the ghost cells. Is there alternative way such as explicit asking PETSc to reordering the assembled linear system? Thank you. Ravi -----Original Message----- From: petsc-users-bounces at mcs.anl.gov [mailto:petsc-users-bounces at mcs.anl.gov]On Behalf Of Barry Smith Sent: Friday, March 13, 2009 6:55 PM To: PETSc users list Subject: Re: matrix assembling time On Mar 13, 2009, at 12:48 PM, Ravi Kannan wrote: > Hi, > This is Ravi Kannan from CFD Research Corporation. One basic > question on > the ordering of linear solvers in PETSc: If my A matrix (in AX=B) is a > sparse matrix and the bandwidth of A (i.e. the distance between non > zero > elements) is high, does PETSc reorder the matrix/matrix-equations so > as to > solve more efficiently. Depends on what you mean. All the direct solvers use reorderings automatically to reduce fill and hence limit memory and flop usage. The iterative solvers do not. There is much less to gain by reordering for iterative solvers (no memory gain and only a relatively smallish improved cache gain). The "PETSc approach" is that one does the following 1) partitions the grid across processors (using a mesh partitioner) and then 2) numbers the grid on each process in a reasonable ordering BEFORE generating the linear system. Thus the sparse matrix automatically gets a good layout from the layout of the grid. So if you do 1) and 2) then no additional reordering is needed. Barry From knepley at gmail.com Tue Mar 17 10:47:27 2009 From: knepley at gmail.com (Matthew Knepley) Date: Tue, 17 Mar 2009 10:47:27 -0500 Subject: matrix assembling time In-Reply-To: References: Message-ID: On Tue, Mar 17, 2009 at 11:41 AM, Ravi Kannan wrote: > Hi Barry and others > > For the iterative solver, you mentioned there is much less to gain by > reording. > However, you also said we should have a reasonable ordering before > generating the linear system. > > Suppose I already have already assembled a large system in parallel (with > bad bandwidth), > will reordering the system help to solve the system or not? Possibly. However, why would you do that? Do we have to do this before the assembling to PETSs solver? Not sure what you mean here. You can compute an ordering at any time. > > In this case, I think we will need to renumbering all the nodes and/or > cells, not only processor-wise but globally considering the ghost cells. > Is there alternative way such as explicit asking PETSc to reordering the > assembled linear system? I do not see what you are asking here. Matt > > Thank you. 
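(For completeness, since the question keeps coming back: if one really wants to reorder a system that has already been assembled, a sketch along the following lines is one way to do it. The RCM choice and the variable names are only an example, whether MatPermute supports your matrix format in parallel needs checking, and, as Barry says, the payoff for iterative solvers is usually modest.)

  IS             rperm, cperm;
  Mat            Aperm;
  PetscErrorCode ierr;

  ierr = MatGetOrdering(A, MATORDERING_RCM, &rperm, &cperm);CHKERRQ(ierr);
  ierr = MatPermute(A, rperm, cperm, &Aperm);CHKERRQ(ierr);   /* symmetrically permuted copy of A */
  ierr = VecPermute(b, rperm, PETSC_FALSE);CHKERRQ(ierr);     /* permute the rhs to match          */
  /* solve with Aperm, then VecPermute(x, rperm, PETSC_TRUE) to undo the permutation on x */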
> > Ravi > > > -----Original Message----- > From: petsc-users-bounces at mcs.anl.gov > [mailto:petsc-users-bounces at mcs.anl.gov]On Behalf Of Barry Smith > Sent: Friday, March 13, 2009 6:55 PM > To: PETSc users list > Subject: Re: matrix assembling time > > > > On Mar 13, 2009, at 12:48 PM, Ravi Kannan wrote: > > > Hi, > > This is Ravi Kannan from CFD Research Corporation. One basic > > question on > > the ordering of linear solvers in PETSc: If my A matrix (in AX=B) is a > > sparse matrix and the bandwidth of A (i.e. the distance between non > > zero > > elements) is high, does PETSc reorder the matrix/matrix-equations so > > as to > > solve more efficiently. > > Depends on what you mean. All the direct solvers use reorderings > automatically > to reduce fill and hence limit memory and flop usage. > > The iterative solvers do not. There is much less to gain by > reordering for iterative > solvers (no memory gain and only a relatively smallish improved cache > gain). > > The "PETSc approach" is that one does the following > 1) partitions the grid across processors (using a mesh partitioner) > and then > 2) numbers the grid on each process in a reasonable ordering > BEFORE generating the linear system. Thus the sparse matrix > automatically gets > a good layout from the layout of the grid. So if you do 1) and 2) then > no additional > reordering is needed. > > Barry > > > -- What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. -- Norbert Wiener -------------- next part -------------- An HTML attachment was scrubbed... URL: From rxk at cfdrc.com Tue Mar 17 13:15:36 2009 From: rxk at cfdrc.com (Ravi Kannan) Date: Tue, 17 Mar 2009 12:15:36 -0600 Subject: matrix assembling time In-Reply-To: Message-ID: -----Original Message----- From: petsc-users-bounces at mcs.anl.gov [mailto:petsc-users-bounces at mcs.anl.gov]On Behalf Of Matthew Knepley Sent: Tuesday, March 17, 2009 9:47 AM To: PETSc users list Subject: Re: matrix assembling time On Tue, Mar 17, 2009 at 11:41 AM, Ravi Kannan wrote: Hi Barry and others For the iterative solver, you mentioned there is much less to gain by reording. However, you also said we should have a reasonable ordering before generating the linear system. Suppose I already have already assembled a large system in parallel (with bad bandwidth), will reordering the system help to solve the system or not? Possibly. However, why would you do that? Do we have to do this before the assembling to PETSs solver? Not sure what you mean here. You can compute an ordering at any time. In this case, I think we will need to renumbering all the nodes and/or cells, not only processor-wise but globally considering the ghost cells. Is there alternative way such as explicit asking PETSc to reordering the assembled linear system? I do not see what you are asking here. Matt Thank you. Ravi -----Original Message----- From: petsc-users-bounces at mcs.anl.gov [mailto:petsc-users-bounces at mcs.anl.gov]On Behalf Of Barry Smith Sent: Friday, March 13, 2009 6:55 PM To: PETSc users list Subject: Re: matrix assembling time On Mar 13, 2009, at 12:48 PM, Ravi Kannan wrote: > Hi, > This is Ravi Kannan from CFD Research Corporation. One basic > question on > the ordering of linear solvers in PETSc: If my A matrix (in AX=B) is a > sparse matrix and the bandwidth of A (i.e. 
the distance between non > zero > elements) is high, does PETSc reorder the matrix/matrix-equations so > as to > solve more efficiently. Depends on what you mean. All the direct solvers use reorderings automatically to reduce fill and hence limit memory and flop usage. The iterative solvers do not. There is much less to gain by reordering for iterative solvers (no memory gain and only a relatively smallish improved cache gain). The "PETSc approach" is that one does the following 1) partitions the grid across processors (using a mesh partitioner) and then 2) numbers the grid on each process in a reasonable ordering BEFORE generating the linear system. Thus the sparse matrix automatically gets a good layout from the layout of the grid. So if you do 1) and 2) then no additional reordering is needed. Barry -- What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. -- Norbert Wiener -------------- next part -------------- An HTML attachment was scrubbed... URL: From w_subber at yahoo.com Wed Mar 18 11:32:53 2009 From: w_subber at yahoo.com (Waad Subber) Date: Wed, 18 Mar 2009 09:32:53 -0700 (PDT) Subject: LU in PETSc Message-ID: <482712.41864.qm@web38205.mail.mud.yahoo.com> Hi, I am trying to do LU factorization to an ill-conditioned system. PETSc gives me "Detected zero pivot in LU factorization". The problem is solved when I use PCFactorSetShiftNonzero. Just to be sure, I used Matlab and Lapack(DGETRF) to do the factorization. They did it without any complain and they gave me the same answer. However, the answer I got from PETSc is completely different. In PETSc I did the following: ????? call KSPCreate(PETSC_COMM_SELF,ksp1,ierr) ????? call KSPSetOperators(ksp1,PSASTF,PSASTF,SAME_NONZERO_PATTERN,ierr) ????? call KSPSetType(ksp1,KSPPREONLY,ierr) ????? call KSPGetPC(ksp1,prec1,ierr) ????? call PCSetType(prec1,PCLU,ierr) ????? call PCFactorSetShiftNonzero(prec1,PETSC_DECIDE,IERR) ????? call KSPSetFromOptions(ksp1,ierr) ????? call KSPSetUp(ksp1,ierr) Any idea why PETSc gives different answer than Matlab and Lapack routine (DGETRF). Thanks Waad -------------- next part -------------- An HTML attachment was scrubbed... URL: From bsmith at mcs.anl.gov Wed Mar 18 12:27:35 2009 From: bsmith at mcs.anl.gov (Barry Smith) Date: Wed, 18 Mar 2009 12:27:35 -0500 Subject: LU in PETSc In-Reply-To: <482712.41864.qm@web38205.mail.mud.yahoo.com> References: <482712.41864.qm@web38205.mail.mud.yahoo.com> Message-ID: <91BB4A21-A648-4287-A21F-7AB9066FE4CF@mcs.anl.gov> The shift nonzero causes a c*I to be added to the matrix, so rather than solving A x = b it solves (A + c*I) z = b so, of course, z is not generally going to be the same as x. If you use a shift you do not want to use preonly as the KSP method you should use something like GMRES. Also to get an accurate answer use something like -ksp_rtol 1.e-12 Matlab and Lapack are doing column pivoting in the factorization to avoid the zero pivot. PETSc's sparse factorization does not have this functionality which is why it generated a zero pivot. Barry On Mar 18, 2009, at 11:32 AM, Waad Subber wrote: > Hi, > I am trying to do LU factorization to an ill-conditioned system. > PETSc gives me "Detected zero pivot in LU factorization". The > problem is solved when I use PCFactorSetShiftNonzero. > > Just to be sure, I used Matlab and Lapack(DGETRF) to do the > factorization. They did it without any complain and they gave me the > same answer. 
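(A C sketch of the sequence Barry describes, keeping the shifted LU but letting GMRES with a tight tolerance correct for the added c*I term; the Fortran calling sequence earlier in the thread maps onto it directly, and ksp, pc, A, b, x are assumed to exist already.)

  ierr = KSPCreate(PETSC_COMM_SELF, &ksp);CHKERRQ(ierr);
  ierr = KSPSetOperators(ksp, A, A, SAME_NONZERO_PATTERN);CHKERRQ(ierr);
  ierr = KSPSetType(ksp, KSPGMRES);CHKERRQ(ierr);                  /* not KSPPREONLY                         */
  ierr = KSPGetPC(ksp, &pc);CHKERRQ(ierr);
  ierr = PCSetType(pc, PCLU);CHKERRQ(ierr);
  ierr = PCFactorSetShiftNonzero(pc, PETSC_DECIDE);CHKERRQ(ierr);  /* shifted LU becomes the preconditioner  */
  ierr = KSPSetTolerances(ksp, 1.e-12, PETSC_DEFAULT, PETSC_DEFAULT, PETSC_DEFAULT);CHKERRQ(ierr);
  ierr = KSPSetFromOptions(ksp);CHKERRQ(ierr);
  ierr = KSPSolve(ksp, b, x);CHKERRQ(ierr);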
However, the answer I got from PETSc is completely > different. > > In PETSc I did the following: > > call KSPCreate(PETSC_COMM_SELF,ksp1,ierr) > call > KSPSetOperators(ksp1,PSASTF,PSASTF,SAME_NONZERO_PATTERN,ierr) > call KSPSetType(ksp1,KSPPREONLY,ierr) > call KSPGetPC(ksp1,prec1,ierr) > call PCSetType(prec1,PCLU,ierr) > call PCFactorSetShiftNonzero(prec1,PETSC_DECIDE,IERR) > call KSPSetFromOptions(ksp1,ierr) > call KSPSetUp(ksp1,ierr) > > Any idea why PETSc gives different answer than Matlab and Lapack > routine (DGETRF). > > Thanks > > Waad > From zonexo at gmail.com Thu Mar 19 04:46:22 2009 From: zonexo at gmail.com (Wee-Beng TAY) Date: Thu, 19 Mar 2009 17:46:22 +0800 Subject: PETSc chooses to use mpif77 as fortran compiler In-Reply-To: <88D8C0A1-4302-4AE9-BFCA-1026E6309981@mcs.anl.gov> References: <956373f0807162016u5549463bveba3f8d2fb950233@mail.gmail.com> <88D8C0A1-4302-4AE9-BFCA-1026E6309981@mcs.anl.gov> Message-ID: <49C2146E.7090507@gmail.com> Hi, I tried to configure PETSc using : ./config/configure.py --with-vendor-compilers=intel --with-x=0 --with-hypre-dir=/home/taywb/lib/hypre-2.4.0b --with-debugging=0 --with-batch=1 --with-mpi-dir=/opt/mpi/mpich --with-mpi-shared=0 --with-blas-lapack-dir=/opt/intel/mkl/9.1.021/lib/em64t/ --with-shared=0 After running conftest and reconfigure.py, PETSc chooses mpif77 as the fortran compiler, instead of mpif90. I tried to use --with-fc=mpif90 but it was not allowed. I wonder why. I'm currently writing codes in F90. Will this configuration affect my F90 codes? Thank you very much. Regards, Wee-Beng TAY From knepley at gmail.com Thu Mar 19 07:58:26 2009 From: knepley at gmail.com (Matthew Knepley) Date: Thu, 19 Mar 2009 07:58:26 -0500 Subject: PETSc chooses to use mpif77 as fortran compiler In-Reply-To: <49C2146E.7090507@gmail.com> References: <956373f0807162016u5549463bveba3f8d2fb950233@mail.gmail.com> <88D8C0A1-4302-4AE9-BFCA-1026E6309981@mcs.anl.gov> <49C2146E.7090507@gmail.com> Message-ID: On Thu, Mar 19, 2009 at 4:46 AM, Wee-Beng TAY wrote: > Hi, > > I tried to configure PETSc using : > > ./config/configure.py --with-vendor-compilers=intel --with-x=0 > --with-hypre-dir=/home/taywb/lib/hypre-2.4.0b --with-debugging=0 > --with-batch=1 --with-mpi-dir=/opt/mpi/mpich --with-mpi-shared=0 > --with-blas-lapack-dir=/opt/intel/mkl/9.1.021/lib/em64t/ --with-shared=0 > > After running conftest and reconfigure.py, > > PETSc chooses mpif77 as the fortran compiler, instead of mpif90. > > I tried to use --with-fc=mpif90 but it was not allowed. I wonder why. I'm > currently writing codes in F90. Will this configuration affect my F90 codes? We cannot tell anything without the log. Please send configure.log to petsc-maint at mcs.anl.gov Matt > > Thank you very much. > > Regards, > > Wee-Beng TAY > -- What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. -- Norbert Wiener -------------- next part -------------- An HTML attachment was scrubbed... URL: From tyoung at ippt.gov.pl Thu Mar 19 09:31:04 2009 From: tyoung at ippt.gov.pl (Toby D. Young) Date: Thu, 19 Mar 2009 15:31:04 +0100 Subject: possible error error message on "make tests" Message-ID: <20090319153104.6d8aa394@rav.ippt.gov.pl> Hello, I configure and compile successfully petsc-3.0.0-p3 with: ./config/configure.py --with-blas-lib=libblas.so --with-lapack-lib=liblapack.so --with-dynamic=1 --with-shared=1 --with-mpi=0 (linux-gnu-c-debug x86_64). 
I get this worrying "possible error" message running the tests. Running test examples to verify correct installation Possible error running C/C++ src/snes/examples/tutorials/ex19 with 1 MPI process See http://www.mcs.anl.gov/petsc/petsc-as/documentation/troubleshooting.html ././ex19: symbol lookup error: ././ex19: undefined symbol: petsc_tmp_flops Possible error running Graphics examples src/snes/examples/tutorials/ex19 1 MPI process See http://www.mcs.anl.gov/petsc/petsc-as/documentation/troubleshooting.html ././ex19: symbol lookup error: ././ex19: undefined symbol: petsc_tmp_flops Error running Fortran example src/snes/examples/tutorials/ex5f with 1 MPI process See http://www.mcs.anl.gov/petsc/petsc-as/documentation/troubleshooting.html ././ex5f: symbol lookup error: ././ex5f: undefined symbol: dmgetlocalvector_ Completed test examples I have looked in the documentation and in the troubleshooting guide and found nothing. What does this "possible error" mean? ie; can I ignore it safely? I am not competent enough to understand why this happens. Thanks. Best, Toby -- Toby D. Young Adiunkt (Assistant Professor) Philosopher-Physicist Department of Computational Science Institute of Fundamental Technological Research Polish Academy of Sciences Room 206, ul. Swietokrzyska 21 00-049 Warszawa, Polska +48 22 826 12 81 ext. 184 http://rav.ippt.gov.pl/~tyoung From knepley at gmail.com Thu Mar 19 08:44:47 2009 From: knepley at gmail.com (Matthew Knepley) Date: Thu, 19 Mar 2009 08:44:47 -0500 Subject: possible error error message on "make tests" In-Reply-To: <20090319153104.6d8aa394@rav.ippt.gov.pl> References: <20090319153104.6d8aa394@rav.ippt.gov.pl> Message-ID: On Thu, Mar 19, 2009 at 9:31 AM, Toby D. Young wrote: > > > Hello, > > I configure and compile successfully petsc-3.0.0-p3 with: > ./config/configure.py --with-blas-lib=libblas.so > --with-lapack-lib=liblapack.so --with-dynamic=1 --with-shared=1 > --with-mpi=0 > (linux-gnu-c-debug x86_64). I get this worrying "possible error" message > running the tests. > > Running test examples to verify correct installation > Possible error running C/C++ src/snes/examples/tutorials/ex19 with 1 > MPI process See > http://www.mcs.anl.gov/petsc/petsc-as/documentation/troubleshooting.html././ex19: > symbol lookup error: ././ex19: undefined symbol: petsc_tmp_flops > Possible error running Graphics examples > src/snes/examples/tutorials/ex19 1 MPI process See > http://www.mcs.anl.gov/petsc/petsc-as/documentation/troubleshooting.html././ex19: > symbol lookup error: ././ex19: undefined symbol: petsc_tmp_flops Error > running Fortran example src/snes/examples/tutorials/ex5f with 1 MPI > process See > http://www.mcs.anl.gov/petsc/petsc-as/documentation/troubleshooting.html././ex5f: > symbol lookup error: ././ex5f: undefined symbol: dmgetlocalvector_ > Completed test examples > > I have looked in the documentation and in the troubleshooting guide and > found nothing. What does this "possible error" mean? ie; can I ignore it > safely? I am not competent enough to understand why this happens. Please send configure.log and make*.log to petsc-maint at mcs.anl.gov. It appears that you have a Fortran compiler that hates shared libraries. Matt > > Thanks. > > Best, > Toby > > -- > > Toby D. Young > Adiunkt (Assistant Professor) > Philosopher-Physicist > Department of Computational Science > Institute of Fundamental Technological Research > Polish Academy of Sciences > Room 206, ul. Swietokrzyska 21 > 00-049 Warszawa, Polska > > +48 22 826 12 81 ext. 
184 > http://rav.ippt.gov.pl/~tyoung > -- What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. -- Norbert Wiener -------------- next part -------------- An HTML attachment was scrubbed... URL: From rxk at cfdrc.com Thu Mar 19 11:24:07 2009 From: rxk at cfdrc.com (Ravi Kannan) Date: Thu, 19 Mar 2009 10:24:07 -0600 Subject: possible error error message on "make tests" In-Reply-To: <20090319153104.6d8aa394@rav.ippt.gov.pl> Message-ID: Hi all, I am having some problems in obtaining speedups for FEM problems; I am not sure whether this is a problem related to the way I use MPI or due to the slow interconnection. So I was wondering if there are any sample benchmark FEM problems, which are guaranteed to give good speedups. Thanks Ravi -----Original Message----- From: petsc-users-bounces at mcs.anl.gov [mailto:petsc-users-bounces at mcs.anl.gov]On Behalf Of Toby D. Young Sent: Thursday, March 19, 2009 8:31 AM To: PETSc users list Subject: possible error error message on "make tests" Hello, I configure and compile successfully petsc-3.0.0-p3 with: ./config/configure.py --with-blas-lib=libblas.so --with-lapack-lib=liblapack.so --with-dynamic=1 --with-shared=1 --with-mpi=0 (linux-gnu-c-debug x86_64). I get this worrying "possible error" message running the tests. Running test examples to verify correct installation Possible error running C/C++ src/snes/examples/tutorials/ex19 with 1 MPI process See http://www.mcs.anl.gov/petsc/petsc-as/documentation/troubleshooting.html ././ex19: symbol lookup error: ././ex19: undefined symbol: petsc_tmp_flops Possible error running Graphics examples src/snes/examples/tutorials/ex19 1 MPI process See http://www.mcs.anl.gov/petsc/petsc-as/documentation/troubleshooting.html ././ex19: symbol lookup error: ././ex19: undefined symbol: petsc_tmp_flops Error running Fortran example src/snes/examples/tutorials/ex5f with 1 MPI process See http://www.mcs.anl.gov/petsc/petsc-as/documentation/troubleshooting.html ././ex5f: symbol lookup error: ././ex5f: undefined symbol: dmgetlocalvector_ Completed test examples I have looked in the documentation and in the troubleshooting guide and found nothing. What does this "possible error" mean? ie; can I ignore it safely? I am not competent enough to understand why this happens. Thanks. Best, Toby -- Toby D. Young Adiunkt (Assistant Professor) Philosopher-Physicist Department of Computational Science Institute of Fundamental Technological Research Polish Academy of Sciences Room 206, ul. Swietokrzyska 21 00-049 Warszawa, Polska +48 22 826 12 81 ext. 184 http://rav.ippt.gov.pl/~tyoung From knepley at gmail.com Thu Mar 19 10:29:04 2009 From: knepley at gmail.com (Matthew Knepley) Date: Thu, 19 Mar 2009 10:29:04 -0500 Subject: possible error error message on "make tests" In-Reply-To: References: <20090319153104.6d8aa394@rav.ippt.gov.pl> Message-ID: On Thu, Mar 19, 2009 at 11:24 AM, Ravi Kannan wrote: > Hi all, > I am having some problems in obtaining speedups for FEM problems; I > am not sure whether this is a problem related to the way I use MPI or due > to > the slow interconnection. > > So I was wondering if there are any sample benchmark FEM problems, > which are guaranteed to give good speedups. Different architectures just do not work that way. That is why it is hard. The idea of a "good" code does not even make sense. You should run with -log_summary and then compare to the model you have for your performance. 
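(For instance, running the same binary twice with only the process count changed, as below, produces event tables like the ones earlier in this digest; comparing the MatMult, VecScatterBegin/End and KSPSolve lines between the runs usually shows quickly whether the partitioning or the interconnect is the bottleneck. The executable name is only a placeholder.)

  mpiexec -n 2 ./myfem -log_summary
  mpiexec -n 4 ./myfem -log_summary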
Matt > > Thanks > Ravi > > -----Original Message----- > From: petsc-users-bounces at mcs.anl.gov > [mailto:petsc-users-bounces at mcs.anl.gov]On Behalf Of Toby D. Young > Sent: Thursday, March 19, 2009 8:31 AM > To: PETSc users list > Subject: possible error error message on "make tests" > > > > > Hello, > > I configure and compile successfully petsc-3.0.0-p3 with: > ./config/configure.py --with-blas-lib=libblas.so > --with-lapack-lib=liblapack.so --with-dynamic=1 --with-shared=1 > --with-mpi=0 > (linux-gnu-c-debug x86_64). I get this worrying "possible error" message > running the tests. > > Running test examples to verify correct installation > Possible error running C/C++ src/snes/examples/tutorials/ex19 with 1 > MPI process See > http://www.mcs.anl.gov/petsc/petsc-as/documentation/troubleshooting.html > ././ex19: > symbol lookup error: ././ex19: undefined symbol: petsc_tmp_flops > Possible error running Graphics examples > src/snes/examples/tutorials/ex19 1 MPI process See > http://www.mcs.anl.gov/petsc/petsc-as/documentation/troubleshooting.html > ././ex19: > symbol lookup error: ././ex19: undefined symbol: petsc_tmp_flops Error > running Fortran example src/snes/examples/tutorials/ex5f with 1 MPI > process See > http://www.mcs.anl.gov/petsc/petsc-as/documentation/troubleshooting.html > ././ex5f: > symbol lookup error: ././ex5f: undefined symbol: dmgetlocalvector_ > Completed test examples > > I have looked in the documentation and in the troubleshooting guide and > found nothing. What does this "possible error" mean? ie; can I ignore it > safely? I am not competent enough to understand why this happens. > > Thanks. > > Best, > Toby > > -- > > Toby D. Young > Adiunkt (Assistant Professor) > Philosopher-Physicist > Department of Computational Science > Institute of Fundamental Technological Research > Polish Academy of Sciences > Room 206, ul. Swietokrzyska 21 > 00-049 Warszawa, Polska > > +48 22 826 12 81 ext. 184 > http://rav.ippt.gov.pl/~tyoung > > > -- What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. -- Norbert Wiener -------------- next part -------------- An HTML attachment was scrubbed... URL: From rafaelsantoscoelho at gmail.com Mon Mar 23 11:06:21 2009 From: rafaelsantoscoelho at gmail.com (Rafael Santos Coelho) Date: Mon, 23 Mar 2009 13:06:21 -0300 Subject: A different kind of lagged preconditioner Message-ID: <3b6f83d40903230906g2826a2ccp8433b73c768b6752@mail.gmail.com> Hello everyone, Lately I've been running some tests with a matrix-free LU-SGS-like preconditioner and I've noticed that although, broadly speaking, it has shown very good improvements on the convergence rate of my program, it does not decrease the runtime. Quite the opposite, the bigger the problem (mesh size), the more computational costlier it gets to be applied to the system, which is fairly natural to expect. So I've tried using the -snes_lag_preconditioner command-line option, and it did help in a way to alleviate the "numerical effort" of the preconditioner, but the overall runtime, in comparison with the matrix-free unpreconditioned tests, is still prohibitive. Given that, I was thinking of modifying the concept of "lagged preconditioning" in PETSc, I mean, instead of applying the preconditioner every "p" non-linear iterations, I want to apply it every "p" linear iterations within each non-linear iteration. How can I do that? 
Thanks in advance, Rafael -------------- next part -------------- An HTML attachment was scrubbed... URL: From knepley at gmail.com Mon Mar 23 11:13:43 2009 From: knepley at gmail.com (Matthew Knepley) Date: Mon, 23 Mar 2009 11:13:43 -0500 Subject: A different kind of lagged preconditioner In-Reply-To: <3b6f83d40903230906g2826a2ccp8433b73c768b6752@mail.gmail.com> References: <3b6f83d40903230906g2826a2ccp8433b73c768b6752@mail.gmail.com> Message-ID: On Mon, Mar 23, 2009 at 11:06 AM, Rafael Santos Coelho < rafaelsantoscoelho at gmail.com> wrote: > Hello everyone, > > Lately I've been running some tests with a matrix-free LU-SGS-like > preconditioner and I've noticed that although, broadly speaking, it has > shown very good improvements on the convergence rate of my program, it does > not decrease the runtime. Quite the opposite, the bigger the problem (mesh > size), the more computational costlier it gets to be applied to the system, > which is fairly natural to expect. So I've tried using the > -snes_lag_preconditioner command-line option, and it did help in a way to > alleviate the "numerical effort" of the preconditioner, but the overall > runtime, in comparison with the matrix-free unpreconditioned tests, is still > prohibitive. > > Given that, I was thinking of modifying the concept of "lagged > preconditioning" in PETSc, I mean, instead of applying the preconditioner > every "p" non-linear iterations, I want to apply it every "p" linear > iterations within each non-linear iteration. How can I do that? You can wrap up your PC in a PCShell and have it only apply every p iterations. Note that if you start changing the PC during the Krylov solve, you will need something like FGMRES. Matt > > Thanks in advance, > > Rafael > -- What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. -- Norbert Wiener -------------- next part -------------- An HTML attachment was scrubbed... URL: From rafaelsantoscoelho at gmail.com Mon Mar 23 11:45:52 2009 From: rafaelsantoscoelho at gmail.com (Rafael Santos Coelho) Date: Mon, 23 Mar 2009 13:45:52 -0300 Subject: A different kind of lagged preconditioner In-Reply-To: References: <3b6f83d40903230906g2826a2ccp8433b73c768b6752@mail.gmail.com> Message-ID: <3b6f83d40903230945x6c704532yf43935a2b850e2f5@mail.gmail.com> Matt, I tried what you suggested but it didn't work. I inserted the following lines in my "PCApply" routine: (...) ierr = SNESGetKSP(snes, &ksp); CHKERRQ(ierr); ierr = KSPGetIterationNumber(ksp, &its); CHKERRQ(ierr); if(its % pc->lag) { // skip the preconditioning phase return 0; } // else apply preconditioner Then I ran a test and here's what happened: $ ./sbratu -xdiv 32 -ydiv 32 -snes_mf -user_precond -snes_converged_reason -snes_monitor -ksp_converged_reason -smfulusgs_lag 2 0 SNES Function norm 1.165810453479e+00 Linear solve did not converge due to DIVERGED_NULL iterations 1 Nonlinear solve did not converge due to DIVERGED_LINEAR_SOLVE What's wrong with this? Rafael -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From bsmith at mcs.anl.gov Mon Mar 23 13:12:51 2009 From: bsmith at mcs.anl.gov (Barry Smith) Date: Mon, 23 Mar 2009 13:12:51 -0500 Subject: A different kind of lagged preconditioner In-Reply-To: <3b6f83d40903230906g2826a2ccp8433b73c768b6752@mail.gmail.com> References: <3b6f83d40903230906g2826a2ccp8433b73c768b6752@mail.gmail.com> Message-ID: Lagging the preconditioner does NOT mean applying the preconditioner every p non-linear iterations. It means RECOMPUTING the preconditioner (with LU that means doing a new LU numerical factorization) every p nonlinear iterations. The preconditioner is still APPLIED at every iteration of the Krylov method. Within the linear solve inside Newton there is never a recomputation of the preconditioner (because the matrix stays the same inside the linear solve) so lagging inside the linear solve doesn't make sense. Barry The work and memory requirements of LU factorization scales more than linearly with the problem size so eventually for really large problems the cost becomes prohibative, but if no iterative solver works then you have to use LU and lag as much as you can. Are you running in parallel? You will need to for larger problems. On Mar 23, 2009, at 11:06 AM, Rafael Santos Coelho wrote: > Hello everyone, > > Lately I've been running some tests with a matrix-free LU-SGS-like > preconditioner and I've noticed that although, broadly speaking, it > has shown very good improvements on the convergence rate of my > program, it does not decrease the runtime. Quite the opposite, the > bigger the problem (mesh size), the more computational costlier it > gets to be applied to the system, which is fairly natural to expect. > So I've tried using the -snes_lag_preconditioner command-line > option, and it did help in a way to alleviate the "numerical effort" > of the preconditioner, but the overall runtime, in comparison with > the matrix-free unpreconditioned tests, is still prohibitive. > > Given that, I was thinking of modifying the concept of "lagged > preconditioning" in PETSc, I mean, instead of applying the > preconditioner every "p" non-linear iterations, I want to apply it > every "p" linear iterations within each non-linear iteration. How > can I do that? > > Thanks in advance, > > Rafael From rafaelsantoscoelho at gmail.com Mon Mar 23 13:39:36 2009 From: rafaelsantoscoelho at gmail.com (Rafael Santos Coelho) Date: Mon, 23 Mar 2009 15:39:36 -0300 Subject: A different kind of lagged preconditioner In-Reply-To: References: <3b6f83d40903230906g2826a2ccp8433b73c768b6752@mail.gmail.com> Message-ID: <3b6f83d40903231139j4f20343epde3cb02d36bb1527@mail.gmail.com> Hey, Barry > Lagging the preconditioner does NOT mean applying the preconditioner > every p non-linear iterations. It means RECOMPUTING the preconditioner (with > LU that means doing a new LU numerical factorization) every p nonlinear > iterations. The preconditioner is still APPLIED at every iteration of the > Krylov method. > Thanks for the clarification :D > > Within the linear solve inside Newton there is never a recomputation of > the preconditioner (because the matrix stays the same inside the linear > solve) so lagging inside the linear solve doesn't make sense. > The thing is, as far as I know, that I do have to "recompute" my matrix-free preconditioner every linear iteration inside Newton's method because the input vector changes throughtout the execution of the Krylov solver. Let me give you a clearer and brief explanation of how the preconditioner works. 
Choosing left preconditioning, we have M^(-1)J(x)s = -M^(-1)F(x), where J(x) = L + D + U and M = (D + wU)^(-1)D(D + wL)^(-1). So, as this is a matrix-free preconditioner, everytime there is a jacobian-vector product J(x)v, PETSc uses the finite-difference approximation (F(x + hM^(-1)v) - F(x)) / h, right? So, the PCApply routine is called every linear iteration. My intention is to make that call periodically in hopes of lowering the runtime. Doesn't that make sense? Rafael -------------- next part -------------- An HTML attachment was scrubbed... URL: From rafaelsantoscoelho at gmail.com Mon Mar 23 13:42:29 2009 From: rafaelsantoscoelho at gmail.com (Rafael Santos Coelho) Date: Mon, 23 Mar 2009 15:42:29 -0300 Subject: A different kind of lagged preconditioner In-Reply-To: References: <3b6f83d40903230906g2826a2ccp8433b73c768b6752@mail.gmail.com> Message-ID: <3b6f83d40903231142u60a7e285i398bf946308640b9@mail.gmail.com> Hey, Barry > Lagging the preconditioner does NOT mean applying the preconditioner > every p non-linear iterations. It means RECOMPUTING the preconditioner (with > LU that means doing a new LU numerical factorization) every p nonlinear > iterations. The preconditioner is still APPLIED at every iteration of the > Krylov method. > Thanks for the clarification :D > > Within the linear solve inside Newton there is never a recomputation of > the preconditioner (because the matrix stays the same inside the linear > solve) so lagging inside the linear solve doesn't make sense. > The thing is, as far as I know, that I do have to "recompute" my matrix-free preconditioner every linear iteration inside Newton's method because the input vector changes throughtout the execution of the Krylov solver. Let me give you a clearer and brief explanation of how the preconditioner works. Choosing left preconditioning, we have M^(-1)J(x)s = -M^(-1)F(x), where J(x) = L + D + U and M = (D + wU)^(-1)D(D + wL)^(-1). So, as this is a matrix-free preconditioner, everytime there is a jacobian-vector product J(x)v, PETSc uses the finite-difference approximation (F(x + hM^(-1)v) - F(x)) / h, right? So, the PCApply routine is called every linear iteration. My intention is to make that call periodically in hopes of lowering the runtime. Doesn't that make sense? Rafael -------------- next part -------------- An HTML attachment was scrubbed... URL: From bsmith at mcs.anl.gov Mon Mar 23 13:48:59 2009 From: bsmith at mcs.anl.gov (Barry Smith) Date: Mon, 23 Mar 2009 13:48:59 -0500 Subject: A different kind of lagged preconditioner In-Reply-To: <3b6f83d40903231142u60a7e285i398bf946308640b9@mail.gmail.com> References: <3b6f83d40903230906g2826a2ccp8433b73c768b6752@mail.gmail.com> <3b6f83d40903231142u60a7e285i398bf946308640b9@mail.gmail.com> Message-ID: <646589A6-727C-4339-A189-EC9C3EBA7962@mcs.anl.gov> On Mar 23, 2009, at 1:42 PM, Rafael Santos Coelho wrote: > Hey, Barry > > > Lagging the preconditioner does NOT mean applying the > preconditioner every p non-linear iterations. It means RECOMPUTING > the preconditioner (with LU that means doing a new LU numerical > factorization) every p nonlinear iterations. The preconditioner is > still APPLIED at every iteration of the Krylov method. > > Thanks for the clarification :D > > > Within the linear solve inside Newton there is never a > recomputation of the preconditioner (because the matrix stays the > same inside the linear solve) so lagging inside the linear solve > doesn't make sense. 
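(In practice this kind of lagging is driven either with -snes_lag_preconditioner <p> on the command line, as already tried earlier in the thread, or programmatically with something like the call below; the SNESSetLagPreconditioner name and signature are assumed from the 3.0.0-era API rather than taken from this thread.)

  ierr = SNESSetLagPreconditioner(snes, 3);CHKERRQ(ierr);  /* rebuild the preconditioner every 3rd Newton step */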
> > The thing is, as far as I know, that I do have to "recompute" my > matrix-free preconditioner every linear iteration inside Newton's > method because the input vector changes throughtout the execution of > the Krylov solver. > > Let me give you a clearer and brief explanation of how the > preconditioner works. Choosing left preconditioning, we have > M^(-1)J(x)s = -M^(-1)F(x), where J(x) = L + D + U and M = (D + > wU)^(-1)D(D + wL)^(-1). So, as this is a matrix-free preconditioner, > everytime there is a jacobian-vector product J(x)v, PETSc uses the > finite-difference approximation (F(x + hM^(-1)v) - F(x)) / h, > right? So, the PCApply routine is called every linear iteration. My > intention is to make that call periodically in hopes of lowering the > runtime. Doesn't that make sense? NO. If you are using LU to construct a preconditioner then you ARE providing the Jacobian explicitly (as a sparse) matrix. Each time the preconditioner is built this sparse matrix is factored. At PCApply() time the triangular solves (which are cheap compared to the factorization) are applied. Skipping these solves makes no sense. The fact that you applying the matrix-vector product matrix free has nothing to do with the application of the preconditioner. Barry > > > Rafael From rafaelsantoscoelho at gmail.com Mon Mar 23 14:06:07 2009 From: rafaelsantoscoelho at gmail.com (Rafael Santos Coelho) Date: Mon, 23 Mar 2009 16:06:07 -0300 Subject: A different kind of lagged preconditioner In-Reply-To: <646589A6-727C-4339-A189-EC9C3EBA7962@mcs.anl.gov> References: <3b6f83d40903230906g2826a2ccp8433b73c768b6752@mail.gmail.com> <3b6f83d40903231142u60a7e285i398bf946308640b9@mail.gmail.com> <646589A6-727C-4339-A189-EC9C3EBA7962@mcs.anl.gov> Message-ID: <3b6f83d40903231206x7ee02165qefcaf5a4cc0b4151@mail.gmail.com> Barry, sorry if I'm coming off stubborn or something, but I think I'm not following your reasoning. NO. > > If you are using LU to construct a preconditioner then you ARE providing > the Jacobian explicitly (as a sparse) matrix. Each time the preconditioner > is built this sparse matrix is factored. What do you mean by "providing the jacobian explicity (as a sparse) matrix"? I thought that there was no explicit jacobian matrix, but only a MATMFFD context. Rafael -------------- next part -------------- An HTML attachment was scrubbed... URL: From knepley at gmail.com Mon Mar 23 14:16:54 2009 From: knepley at gmail.com (Matthew Knepley) Date: Mon, 23 Mar 2009 14:16:54 -0500 Subject: A different kind of lagged preconditioner In-Reply-To: <3b6f83d40903231206x7ee02165qefcaf5a4cc0b4151@mail.gmail.com> References: <3b6f83d40903230906g2826a2ccp8433b73c768b6752@mail.gmail.com> <3b6f83d40903231142u60a7e285i398bf946308640b9@mail.gmail.com> <646589A6-727C-4339-A189-EC9C3EBA7962@mcs.anl.gov> <3b6f83d40903231206x7ee02165qefcaf5a4cc0b4151@mail.gmail.com> Message-ID: On Mon, Mar 23, 2009 at 2:06 PM, Rafael Santos Coelho < rafaelsantoscoelho at gmail.com> wrote: > Barry, > > sorry if I'm coming off stubborn or something, but I think I'm not > following your reasoning. > > NO. >> >> If you are using LU to construct a preconditioner then you ARE providing >> the Jacobian explicitly (as a sparse) matrix. Each time the preconditioner >> is built this sparse matrix is factored. > > > What do you mean by "providing the jacobian explicity (as a sparse) > matrix"? I thought that there was no explicit jacobian matrix, but only a > MATMFFD context. > If there is no matrix, what are you factoring? 
Matt > > Rafael > -- What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. -- Norbert Wiener -------------- next part -------------- An HTML attachment was scrubbed... URL: From bsmith at mcs.anl.gov Mon Mar 23 14:22:08 2009 From: bsmith at mcs.anl.gov (Barry Smith) Date: Mon, 23 Mar 2009 14:22:08 -0500 Subject: A different kind of lagged preconditioner In-Reply-To: <3b6f83d40903231206x7ee02165qefcaf5a4cc0b4151@mail.gmail.com> References: <3b6f83d40903230906g2826a2ccp8433b73c768b6752@mail.gmail.com> <3b6f83d40903231142u60a7e285i398bf946308640b9@mail.gmail.com> <646589A6-727C-4339-A189-EC9C3EBA7962@mcs.anl.gov> <3b6f83d40903231206x7ee02165qefcaf5a4cc0b4151@mail.gmail.com> Message-ID: <1B20D63C-220C-4D49-B02C-F02E00570BEF@mcs.anl.gov> On Mar 23, 2009, at 2:06 PM, Rafael Santos Coelho wrote: > Barry, > > sorry if I'm coming off stubborn or something, but I think I'm not > following your reasoning. > > NO. > > If you are using LU to construct a preconditioner then you ARE > providing the Jacobian explicitly (as a sparse) matrix. Each time > the preconditioner is built this sparse matrix is factored. > > What do you mean by "providing the jacobian explicity (as a sparse) > matrix"? I thought that there was no explicit jacobian matrix, but > only a MATMFFD context. If so, then you are not using LU as a preconditioner; in fact you are not using a preconditioner at all. Run with -snes_view and send us the output to see what solver you are actually using. > > > Rafael From fuentesdt at gmail.com Mon Mar 23 17:49:02 2009 From: fuentesdt at gmail.com (David Fuentes) Date: Mon, 23 Mar 2009 17:49:02 -0500 (CDT) Subject: PetscMalloc petsc-3.0.0-p4 Message-ID: In 3.0.0-p4 PetscMalloc doesn't seem to return an error code if the requested memory wasn't allocated ? Is this correct ? I'm trying to allocate a dense matrix for which there was not enough memory and PetscMalloc is returning a null pointer but no errorcode to catch. which is causing seg faults later. thanks, df From knepley at gmail.com Mon Mar 23 18:01:18 2009 From: knepley at gmail.com (Matthew Knepley) Date: Mon, 23 Mar 2009 18:01:18 -0500 Subject: PetscMalloc petsc-3.0.0-p4 In-Reply-To: References: Message-ID: On Mon, Mar 23, 2009 at 5:49 PM, David Fuentes wrote: > In 3.0.0-p4 > > PetscMalloc doesn't seem to return an error code if the requested memory > wasn't allocated ? Is this correct ? > > I'm trying to allocate a dense matrix for which there was not enough memory > and PetscMalloc is returning a null pointer but no errorcode to > catch. which is causing seg faults later. No, PETSc definitely returns an error code if the malloc fails. Matt > > thanks, > df > -- What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. -- Norbert Wiener -------------- next part -------------- An HTML attachment was scrubbed... URL: From dalcinl at gmail.com Mon Mar 23 18:20:20 2009 From: dalcinl at gmail.com (Lisandro Dalcin) Date: Mon, 23 Mar 2009 20:20:20 -0300 Subject: PetscMalloc petsc-3.0.0-p4 In-Reply-To: References: Message-ID: 1) Which system are you running on? 2) are you completely sure BOTH things happens, I mean... that malloc returns null AND the error code is zero? On Mon, Mar 23, 2009 at 7:49 PM, David Fuentes wrote: > In 3.0.0-p4 > > PetscMalloc doesn't seem to return an error code if the requested memory > wasn't allocated ? 
Is this correct ? > > I'm trying to allocate a dense matrix for which there was not enough memory > and PetscMalloc is returning a null pointer but no errorcode to > catch. which is causing seg faults later. > > > > thanks, > df > > -- Lisandro Dalc?n --------------- Centro Internacional de M?todos Computacionales en Ingenier?a (CIMEC) Instituto de Desarrollo Tecnol?gico para la Industria Qu?mica (INTEC) Consejo Nacional de Investigaciones Cient?ficas y T?cnicas (CONICET) PTLC - G?emes 3450, (3000) Santa Fe, Argentina Tel/Fax: +54-(0)342-451.1594 From rafaelsantoscoelho at gmail.com Mon Mar 23 18:41:43 2009 From: rafaelsantoscoelho at gmail.com (Rafael Santos Coelho) Date: Mon, 23 Mar 2009 20:41:43 -0300 Subject: A different kind of lagged preconditioner In-Reply-To: <1B20D63C-220C-4D49-B02C-F02E00570BEF@mcs.anl.gov> References: <3b6f83d40903230906g2826a2ccp8433b73c768b6752@mail.gmail.com> <3b6f83d40903231142u60a7e285i398bf946308640b9@mail.gmail.com> <646589A6-727C-4339-A189-EC9C3EBA7962@mcs.anl.gov> <3b6f83d40903231206x7ee02165qefcaf5a4cc0b4151@mail.gmail.com> <1B20D63C-220C-4D49-B02C-F02E00570BEF@mcs.anl.gov> Message-ID: <3b6f83d40903231641g31ac4c5egc6bd30970dd3376@mail.gmail.com> Well, let me explain myself again. I've implemented a serial matrix-free uniparametric LU-SGS preconditionerfor non-linear problems using the PETSc PCSHELL module. It's matrix-free in the sense that whenever the jacobian matrix (and by that I mean the L, D and U factors of the jacobian matrix) is required within the linear solver in the form of a matrix-vector product, I resort to a finite-difference formula in order to approximate the jacobian entries. So the jacobian matrix is never actually formed. To use my preconditioner, I just have to set the "-user_precond" command-line option along with "-snes_mf". Now, the cost to apply that preconditioner obviously grows according to the problem dimensions, so it is built and applied within the linear solver to form the vectors of the Krylov subspace basis in each linear iteration. I want to change that, and only have it applied every "p" linear iterations to make it less costlier. Rafael -------------- next part -------------- An HTML attachment was scrubbed... URL: From fuentesdt at gmail.com Mon Mar 23 18:41:57 2009 From: fuentesdt at gmail.com (David Fuentes) Date: Mon, 23 Mar 2009 18:41:57 -0500 (CDT) Subject: PetscMalloc petsc-3.0.0-p4 In-Reply-To: References: Message-ID: running on ubuntu 8.04. the following code should reproduce the problem. static char help[] = "MatCreateSeqDense\n\n"; #include "petscpc.h" #undef __FUNCT__ #define __FUNCT__ "main" int main(int argc,char **args) { Mat mat; PetscErrorCode ierr; PetscInt npixel = 256*256*5; PetscInt n= npixel * npixel; PetscInitialize(&argc,&args,(char *)0,help); /* Create and assemble matrix */ ierr = MatCreateSeqDense(PETSC_COMM_SELF,n,n,PETSC_NULL,&mat);CHKERRQ(ierr); ierr = MatSetValue(mat,1,1,1.0,INSERT_VALUES);CHKERRQ(ierr); ierr = MatAssemblyBegin(mat,MAT_FINAL_ASSEMBLY);CHKERRQ(ierr); ierr = MatAssemblyEnd(mat,MAT_FINAL_ASSEMBLY);CHKERRQ(ierr); /* Free data structures */ ierr = MatDestroy(mat);CHKERRQ(ierr); ierr = PetscFinalize();CHKERRQ(ierr); return 0; } below is my error message. [0]PETSC ERROR: --------------------- Error Message ------------------------------------ [0]PETSC ERROR: Argument out of range! [0]PETSC ERROR: Row too large: row 1 max -1! 
[0]PETSC ERROR: ------------------------------------------------------------------------ [0]PETSC ERROR: Petsc Release Version 3.0.0, Patch 4, Fri Mar 6 14:46:08 CST 2009 [0]PETSC ERROR: See docs/changes/index.html for recent updates. [0]PETSC ERROR: See docs/faq.html for hints about trouble shooting. [0]PETSC ERROR: See docs/index.html for manual pages. [0]PETSC ERROR: ------------------------------------------------------------------------ [0]PETSC ERROR: ./ex4 on a intel-10. named setebos.ices.utexas.edu by fuentes Mon Mar 23 18:39:32 2009 [0]PETSC ERROR: Libraries linked from /org/groups/oden/LIBRARIES/PETSC/petsc-3.0.0-p4/intel-10.1-mpich2-1.0.7-cxx-dbg/lib [0]PETSC ERROR: Configure run at Fri Mar 20 20:39:04 2009 [0]PETSC ERROR: Configure options --with-mpi-dir=/org/groups/oden/LIBRARIES/MPI/mpich2-1.0.7-intel-10.1 --with-clanguage=C++ --with-shared=1 --with-blas-lapack-dir=/opt/intel/mkl/10.0.3.020 --download-mumps=1 --download-parmetis=1 --download-scalapack=1 --download-blacs=1 --download-plapack=1 --download-superlu_dist=1 [0]PETSC ERROR: ------------------------------------------------------------------------ [0]PETSC ERROR: MatSetValues_SeqDense() line 672 in src/mat/impls/dense/seq/dense.c [0]PETSC ERROR: MatSetValues() line 921 in src/mat/interface/matrix.c [0]PETSC ERROR: main() line 21 in src/ksp/pc/examples/tests/ex4.c application called MPI_Abort(MPI_COMM_WORLD, 63) - process 0[unset]: aborting job: application called MPI_Abort(MPI_COMM_WORLD, 63) - process 0 df On Mon, 23 Mar 2009, Lisandro Dalcin wrote: > 1) Which system are you running on? > > 2) are you completely sure BOTH things happens, I mean... that malloc > returns null AND the error code is zero? > > > On Mon, Mar 23, 2009 at 7:49 PM, David Fuentes wrote: >> In 3.0.0-p4 >> >> PetscMalloc doesn't seem to return an error code if the requested memory >> wasn't allocated ? Is this correct ? >> >> I'm trying to allocate a dense matrix for which there was not enough memory >> and PetscMalloc is returning a null pointer but no errorcode to >> catch. which is causing seg faults later. >> >> >> >> thanks, >> df >> >> > > > > -- > Lisandro Dalc?n > --------------- > Centro Internacional de M?todos Computacionales en Ingenier?a (CIMEC) > Instituto de Desarrollo Tecnol?gico para la Industria Qu?mica (INTEC) > Consejo Nacional de Investigaciones Cient?ficas y T?cnicas (CONICET) > PTLC - G?emes 3450, (3000) Santa Fe, Argentina > Tel/Fax: +54-(0)342-451.1594 > From bsmith at mcs.anl.gov Mon Mar 23 19:39:18 2009 From: bsmith at mcs.anl.gov (Barry Smith) Date: Mon, 23 Mar 2009 19:39:18 -0500 Subject: A different kind of lagged preconditioner In-Reply-To: <3b6f83d40903231641g31ac4c5egc6bd30970dd3376@mail.gmail.com> References: <3b6f83d40903230906g2826a2ccp8433b73c768b6752@mail.gmail.com> <3b6f83d40903231142u60a7e285i398bf946308640b9@mail.gmail.com> <646589A6-727C-4339-A189-EC9C3EBA7962@mcs.anl.gov> <3b6f83d40903231206x7ee02165qefcaf5a4cc0b4151@mail.gmail.com> <1B20D63C-220C-4D49-B02C-F02E00570BEF@mcs.anl.gov> <3b6f83d40903231641g31ac4c5egc6bd30970dd3376@mail.gmail.com> Message-ID: Just use VecCopy() for the times you skip the application of the preconditioner; if you do nothing for those times, it will fail as it does. As Matt says, you need to use KSPFGMRES if you use a different preconditioner on different iterations. BTW: your plan will suck. Barry On Mar 23, 2009, at 6:41 PM, Rafael Santos Coelho wrote: > Well, let me explain myself again. 
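(A sketch of what Barry's VecCopy() suggestion looks like inside a shell preconditioner's apply routine. It assumes the 3.0.0-era PCShellSetApply callback signature (void*,Vec,Vec); MyShellCtx and ApplyLUSGS are hypothetical stand-ins for the user's own context and LU-SGS sweep, not PETSc functions.)

  typedef struct {
    PetscInt its, lag;             /* linear-iteration counter and skip period */
    /* ... whatever data the LU-SGS sweep needs ... */
  } MyShellCtx;

  extern PetscErrorCode ApplyLUSGS(MyShellCtx*, Vec, Vec);  /* hypothetical user routine */

  PetscErrorCode MyShellApply(void *ctx, Vec x, Vec y)
  {
    MyShellCtx     *shell = (MyShellCtx*)ctx;
    PetscErrorCode ierr;

    PetscFunctionBegin;
    if (shell->its++ % shell->lag) {
      /* skipped iteration: act as the identity; never return with y unset,
         which is what produced the DIVERGED_NULL earlier in the thread */
      ierr = VecCopy(x, y);CHKERRQ(ierr);
    } else {
      /* the usual LU-SGS application */
      ierr = ApplyLUSGS(shell, x, y);CHKERRQ(ierr);
    }
    PetscFunctionReturn(0);
  }

Registered as before with PCShellSetContext()/PCShellSetApply(), and run with -ksp_type fgmres, since the preconditioner now changes from one Krylov iteration to the next.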
I've implemented a serial matrix- > free uniparametric LU-SGS preconditioner for non-linear problems > using the PETSc PCSHELL module. It's matrix-free in the sense that > whenever the jacobian matrix (and by that I mean the L, D and U > factors of the jacobian matrix) is required within the linear solver > in the form of a matrix-vector product, I resort to a finite- > difference formula in order to approximate the jacobian entries. So > the jacobian matrix is never actually formed. To use my > preconditioner, I just have to set the "-user_precond" command-line > option along with "-snes_mf". > > Now, the cost to apply that preconditioner obviously grows according > to the problem dimensions, so it is built and applied within the > linear solver to form the vectors of the Krylov subspace basis in > each linear iteration. I want to change that, and only have it > applied every "p" linear iterations to make it less costlier. > > Rafael From bsmith at mcs.anl.gov Mon Mar 23 19:47:50 2009 From: bsmith at mcs.anl.gov (Barry Smith) Date: Mon, 23 Mar 2009 19:47:50 -0500 Subject: PetscMalloc petsc-3.0.0-p4 In-Reply-To: References: Message-ID: <51D52299-A732-4E8B-95B6-90C056D8000B@mcs.anl.gov> [anlextwls097-161:~/Documents/RandD-100] barrysmith% python >>> npixel = 256*256*5 >>> print npixel 327,680 >>> print npixel*npixel 107,374,182,400 The problem is that your n is too big to fit into a PetscInt hence bad stuff happens. This is a fatal flaw of C, it doesn't generation exceptions for integer overflow. If you want to have sizes this big you need to config/configure.py PETSc with --with-64-bit-indices then PETSc uses 64 bit integers for PETSc and things will fit. Of course, likely you still won't have enough memory to allocate such a big dense matrix. Barry On Mar 23, 2009, at 6:41 PM, David Fuentes wrote: > > > running on ubuntu 8.04. > > the following code should reproduce the problem. > > > > static char help[] = "MatCreateSeqDense\n\n"; > > #include "petscpc.h" > > #undef __FUNCT__ > #define __FUNCT__ "main" > int main(int argc,char **args) > { > Mat mat; > PetscErrorCode ierr; > PetscInt npixel = 256*256*5; > PetscInt n= npixel * npixel; > > PetscInitialize(&argc,&args,(char *)0,help); > > > /* Create and assemble matrix */ > ierr = > MatCreateSeqDense(PETSC_COMM_SELF,n,n,PETSC_NULL,&mat);CHKERRQ(ierr); > ierr = MatSetValue(mat,1,1,1.0,INSERT_VALUES);CHKERRQ(ierr); > ierr = MatAssemblyBegin(mat,MAT_FINAL_ASSEMBLY);CHKERRQ(ierr); > ierr = MatAssemblyEnd(mat,MAT_FINAL_ASSEMBLY);CHKERRQ(ierr); > > /* Free data structures */ > ierr = MatDestroy(mat);CHKERRQ(ierr); > ierr = PetscFinalize();CHKERRQ(ierr); > return 0; > } > > > > below is my error message. > > > > > > [0]PETSC ERROR: --------------------- Error Message > ------------------------------------ > [0]PETSC ERROR: Argument out of range! > [0]PETSC ERROR: Row too large: row 1 max -1! > [0]PETSC ERROR: > ------------------------------------------------------------------------ > [0]PETSC ERROR: Petsc Release Version 3.0.0, Patch 4, Fri Mar 6 > 14:46:08 CST 2009 > [0]PETSC ERROR: See docs/changes/index.html for recent updates. > [0]PETSC ERROR: See docs/faq.html for hints about trouble shooting. > [0]PETSC ERROR: See docs/index.html for manual pages. > [0]PETSC ERROR: > ------------------------------------------------------------------------ > [0]PETSC ERROR: ./ex4 on a intel-10. 
named setebos.ices.utexas.edu by > fuentes Mon Mar 23 18:39:32 2009 > [0]PETSC ERROR: Libraries linked from > /org/groups/oden/LIBRARIES/PETSC/petsc-3.0.0-p4/intel-10.1- > mpich2-1.0.7-cxx-dbg/lib > [0]PETSC ERROR: Configure run at Fri Mar 20 20:39:04 2009 > [0]PETSC ERROR: Configure options > --with-mpi-dir=/org/groups/oden/LIBRARIES/MPI/mpich2-1.0.7-intel-10.1 > --with-clanguage=C++ --with-shared=1 > --with-blas-lapack-dir=/opt/intel/mkl/10.0.3.020 --download-mumps=1 > --download-parmetis=1 --download-scalapack=1 --download-blacs=1 > --download-plapack=1 --download-superlu_dist=1 > [0]PETSC ERROR: > ------------------------------------------------------------------------ > [0]PETSC ERROR: MatSetValues_SeqDense() line 672 in > src/mat/impls/dense/seq/dense.c > [0]PETSC ERROR: MatSetValues() line 921 in src/mat/interface/matrix.c > [0]PETSC ERROR: main() line 21 in src/ksp/pc/examples/tests/ex4.c > application called MPI_Abort(MPI_COMM_WORLD, 63) - process 0[unset]: > aborting job: > application called MPI_Abort(MPI_COMM_WORLD, 63) - process 0 > > > > > > > df > > > > > On Mon, 23 Mar 2009, Lisandro Dalcin wrote: > >> 1) Which system are you running on? >> >> 2) are you completely sure BOTH things happens, I mean... that malloc >> returns null AND the error code is zero? >> >> >> On Mon, Mar 23, 2009 at 7:49 PM, David Fuentes >> wrote: >>> In 3.0.0-p4 >>> >>> PetscMalloc doesn't seem to return an error code if the requested >>> memory >>> wasn't allocated ? Is this correct ? >>> >>> I'm trying to allocate a dense matrix for which there was not >>> enough memory >>> and PetscMalloc is returning a null pointer but no errorcode to >>> catch. which is causing seg faults later. >>> >>> >>> >>> thanks, >>> df >>> >>> >> >> >> >> -- >> Lisandro Dalc?n >> --------------- >> Centro Internacional de M?todos Computacionales en Ingenier?a (CIMEC) >> Instituto de Desarrollo Tecnol?gico para la Industria Qu?mica (INTEC) >> Consejo Nacional de Investigaciones Cient?ficas y T?cnicas (CONICET) >> PTLC - G?emes 3450, (3000) Santa Fe, Argentina >> Tel/Fax: +54-(0)342-451.1594 >> From knepley at gmail.com Mon Mar 23 19:53:47 2009 From: knepley at gmail.com (Matthew Knepley) Date: Mon, 23 Mar 2009 19:53:47 -0500 Subject: PetscMalloc petsc-3.0.0-p4 In-Reply-To: <51D52299-A732-4E8B-95B6-90C056D8000B@mcs.anl.gov> References: <51D52299-A732-4E8B-95B6-90C056D8000B@mcs.anl.gov> Message-ID: More specifically, you have overflowed exactly to 0, thus PetscMalloc() returns without an error code, as it says in the docs. Matt On Mon, Mar 23, 2009 at 7:47 PM, Barry Smith wrote: > [anlextwls097-161:~/Documents/RandD-100] barrysmith% python > > >>> npixel = 256*256*5 > >>> print npixel > 327,680 > >>> print npixel*npixel > 107,374,182,400 > > > The problem is that your n is too big to fit into a PetscInt hence bad > stuff happens. > This is a fatal flaw of C, it doesn't generation exceptions for integer > overflow. > > If you want to have sizes this big you need to config/configure.py PETSc > with > --with-64-bit-indices then PETSc uses 64 bit integers for PETSc and things > will fit. > Of course, likely you still won't have enough memory to allocate such a big > dense matrix. > > Barry > > > > On Mar 23, 2009, at 6:41 PM, David Fuentes wrote: > > >> >> running on ubuntu 8.04. >> >> the following code should reproduce the problem. 
>> >> >> >> static char help[] = "MatCreateSeqDense\n\n"; >> >> #include "petscpc.h" >> >> #undef __FUNCT__ >> #define __FUNCT__ "main" >> int main(int argc,char **args) >> { >> Mat mat; >> PetscErrorCode ierr; >> PetscInt npixel = 256*256*5; >> PetscInt n= npixel * npixel; >> >> PetscInitialize(&argc,&args,(char *)0,help); >> >> >> /* Create and assemble matrix */ >> ierr = >> MatCreateSeqDense(PETSC_COMM_SELF,n,n,PETSC_NULL,&mat);CHKERRQ(ierr); >> ierr = MatSetValue(mat,1,1,1.0,INSERT_VALUES);CHKERRQ(ierr); >> ierr = MatAssemblyBegin(mat,MAT_FINAL_ASSEMBLY);CHKERRQ(ierr); >> ierr = MatAssemblyEnd(mat,MAT_FINAL_ASSEMBLY);CHKERRQ(ierr); >> >> /* Free data structures */ >> ierr = MatDestroy(mat);CHKERRQ(ierr); >> ierr = PetscFinalize();CHKERRQ(ierr); >> return 0; >> } >> >> >> >> below is my error message. >> >> >> >> >> >> [0]PETSC ERROR: --------------------- Error Message >> ------------------------------------ >> [0]PETSC ERROR: Argument out of range! >> [0]PETSC ERROR: Row too large: row 1 max -1! >> [0]PETSC ERROR: >> ------------------------------------------------------------------------ >> [0]PETSC ERROR: Petsc Release Version 3.0.0, Patch 4, Fri Mar 6 >> 14:46:08 CST 2009 >> [0]PETSC ERROR: See docs/changes/index.html for recent updates. >> [0]PETSC ERROR: See docs/faq.html for hints about trouble shooting. >> [0]PETSC ERROR: See docs/index.html for manual pages. >> [0]PETSC ERROR: >> ------------------------------------------------------------------------ >> [0]PETSC ERROR: ./ex4 on a intel-10. named setebos.ices.utexas.edu by >> fuentes Mon Mar 23 18:39:32 2009 >> [0]PETSC ERROR: Libraries linked from >> >> /org/groups/oden/LIBRARIES/PETSC/petsc-3.0.0-p4/intel-10.1-mpich2-1.0.7-cxx-dbg/lib >> [0]PETSC ERROR: Configure run at Fri Mar 20 20:39:04 2009 >> [0]PETSC ERROR: Configure options >> --with-mpi-dir=/org/groups/oden/LIBRARIES/MPI/mpich2-1.0.7-intel-10.1 >> --with-clanguage=C++ --with-shared=1 >> --with-blas-lapack-dir=/opt/intel/mkl/10.0.3.020 --download-mumps=1 >> --download-parmetis=1 --download-scalapack=1 --download-blacs=1 >> --download-plapack=1 --download-superlu_dist=1 >> [0]PETSC ERROR: >> ------------------------------------------------------------------------ >> [0]PETSC ERROR: MatSetValues_SeqDense() line 672 in >> src/mat/impls/dense/seq/dense.c >> [0]PETSC ERROR: MatSetValues() line 921 in src/mat/interface/matrix.c >> [0]PETSC ERROR: main() line 21 in src/ksp/pc/examples/tests/ex4.c >> application called MPI_Abort(MPI_COMM_WORLD, 63) - process 0[unset]: >> aborting job: >> application called MPI_Abort(MPI_COMM_WORLD, 63) - process 0 >> >> >> >> >> >> >> df >> >> >> >> >> On Mon, 23 Mar 2009, Lisandro Dalcin wrote: >> >> 1) Which system are you running on? >>> >>> 2) are you completely sure BOTH things happens, I mean... that malloc >>> returns null AND the error code is zero? >>> >>> >>> On Mon, Mar 23, 2009 at 7:49 PM, David Fuentes >>> wrote: >>> >>>> In 3.0.0-p4 >>>> >>>> PetscMalloc doesn't seem to return an error code if the requested memory >>>> wasn't allocated ? Is this correct ? >>>> >>>> I'm trying to allocate a dense matrix for which there was not enough >>>> memory >>>> and PetscMalloc is returning a null pointer but no errorcode to >>>> catch. which is causing seg faults later. 
>>>> >>>> >>>> >>>> thanks, >>>> df >>>> >>>> >>>> >>> >>> >>> -- >>> Lisandro Dalc?n >>> --------------- >>> Centro Internacional de M?todos Computacionales en Ingenier?a (CIMEC) >>> Instituto de Desarrollo Tecnol?gico para la Industria Qu?mica (INTEC) >>> Consejo Nacional de Investigaciones Cient?ficas y T?cnicas (CONICET) >>> PTLC - G?emes 3450, (3000) Santa Fe, Argentina >>> Tel/Fax: +54-(0)342-451.1594 >>> >>> > -- What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. -- Norbert Wiener -------------- next part -------------- An HTML attachment was scrubbed... URL: From rafaelsantoscoelho at gmail.com Mon Mar 23 20:40:40 2009 From: rafaelsantoscoelho at gmail.com (Rafael Santos Coelho) Date: Mon, 23 Mar 2009 22:40:40 -0300 Subject: A different kind of lagged preconditioner In-Reply-To: References: <3b6f83d40903230906g2826a2ccp8433b73c768b6752@mail.gmail.com> <3b6f83d40903231142u60a7e285i398bf946308640b9@mail.gmail.com> <646589A6-727C-4339-A189-EC9C3EBA7962@mcs.anl.gov> <3b6f83d40903231206x7ee02165qefcaf5a4cc0b4151@mail.gmail.com> <1B20D63C-220C-4D49-B02C-F02E00570BEF@mcs.anl.gov> <3b6f83d40903231641g31ac4c5egc6bd30970dd3376@mail.gmail.com> Message-ID: <3b6f83d40903231840i355ae0c0v6d16a380db9e1fae@mail.gmail.com> Hi, Barry thanks a lot for the help :D P.S.: It did sucked, bit time! -------------- next part -------------- An HTML attachment was scrubbed... URL: From recrusader at gmail.com Mon Mar 23 21:00:00 2009 From: recrusader at gmail.com (Yujie) Date: Mon, 23 Mar 2009 19:00:00 -0700 Subject: KSPSolve() and mutliple rhs Message-ID: <7ff0ee010903231900s5a345ce4o54cb55492a84eaf6@mail.gmail.com> Hi, PETSc developers I know KSPSolve() can't support multiple rhs. If I want to realize it, could you give me some advice? How to revise the codes, which functions need to be paid much attention? thanks a lot. Regards, Yujie -------------- next part -------------- An HTML attachment was scrubbed... URL: From knepley at gmail.com Mon Mar 23 21:10:20 2009 From: knepley at gmail.com (Matthew Knepley) Date: Mon, 23 Mar 2009 21:10:20 -0500 Subject: KSPSolve() and mutliple rhs In-Reply-To: <7ff0ee010903231900s5a345ce4o54cb55492a84eaf6@mail.gmail.com> References: <7ff0ee010903231900s5a345ce4o54cb55492a84eaf6@mail.gmail.com> Message-ID: I would just write a loop over rhs. Matt On Mon, Mar 23, 2009 at 9:00 PM, Yujie wrote: > Hi, PETSc developers > > I know KSPSolve() can't support multiple rhs. If I want to realize it, > could you give me some advice? How to revise the codes, which functions need > to be paid much attention? thanks a lot. > > Regards, > > Yujie > -- What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. -- Norbert Wiener -------------- next part -------------- An HTML attachment was scrubbed... URL: From balay at mcs.anl.gov Mon Mar 23 22:14:36 2009 From: balay at mcs.anl.gov (Satish Balay) Date: Mon, 23 Mar 2009 22:14:36 -0500 (CDT) Subject: KSPSolve() and mutliple rhs In-Reply-To: References: <7ff0ee010903231900s5a345ce4o54cb55492a84eaf6@mail.gmail.com> Message-ID: For an example - check src/ksp/ksp/examples/tutorials/ex16.c Satish On Mon, 23 Mar 2009, Matthew Knepley wrote: > I would just write a loop over rhs. > > Matt > > On Mon, Mar 23, 2009 at 9:00 PM, Yujie wrote: > > > Hi, PETSc developers > > > > I know KSPSolve() can't support multiple rhs. 
If I want to realize it, > > could you give me some advice? How to revise the codes, which functions need > > to be paid much attention? thanks a lot. > > > > Regards, > > > > Yujie > > > > > > From Andreas.Grassl at student.uibk.ac.at Tue Mar 24 08:45:27 2009 From: Andreas.Grassl at student.uibk.ac.at (Andreas Grassl) Date: Tue, 24 Mar 2009 14:45:27 +0100 Subject: PCNN preconditioner and setting the interface Message-ID: <49C8E3F7.9000900@student.uibk.ac.at> Hello, I'm working with a FE-Software where I get out the element stiffness matrices and the element-node correspondency to setup the stiffness matrix for solving with PETSc. I'm currently fighting with the interface definition. My LocalToGlobalMapping for test-purposes was the "identity"-IS, but I guess this is far from the optimum, because nowhere is defined a node set of interface nodes. How do I declare the interface? Is it simply a reordering of the nodes, the inner nodes are numbered first and the interface nodes last? Thank you in advance ando -- /"\ Grassl Andreas \ / ASCII Ribbon Campaign Uni Innsbruck Institut f. Mathematik X against HTML email Technikerstr. 13 Zi 709 / \ +43 (0)512 507 6091 From bsmith at mcs.anl.gov Tue Mar 24 09:03:11 2009 From: bsmith at mcs.anl.gov (Barry Smith) Date: Tue, 24 Mar 2009 09:03:11 -0500 Subject: PCNN preconditioner and setting the interface In-Reply-To: <49C8E3F7.9000900@student.uibk.ac.at> References: <49C8E3F7.9000900@student.uibk.ac.at> Message-ID: <2309C09C-D6F2-4163-B8D6-78EF4A2CEA93@mcs.anl.gov> On Mar 24, 2009, at 8:45 AM, Andreas Grassl wrote: > Hello, > > I'm working with a FE-Software where I get out the element stiffness > matrices and the element-node correspondency to setup the stiffness > matrix for solving with PETSc. > > I'm currently fighting with the interface definition. My > LocalToGlobalMapping for test-purposes was the "identity"-IS, but I > guess this is far from the optimum, because nowhere is defined a node > set of interface nodes. > > How do I declare the interface? Is it simply a reordering of the > nodes, > the inner nodes are numbered first and the interface nodes last? The order that you list the nodes is not important; the interface ones don't have to be listed at the end. Here's the deal. Over all the processors you have to have a single GLOBAL numbering of the nodes. The first process starts with 0 and each process starts off with one more than then previous process had. For example, consider two elements, say the first process owns the first element and the second process the second.

global numbering

          2
         /|\
        / | \
       /  |  \
      /   |   \
     0 ------------ 3
          1

local numbering process 0

          2
         /|
        / |
       /  |
      /   |
     0 ------ 1

local to global numbering is 0 1 2

local numbering process 1

     1
     |\
     | \
     |  \
     |   \
     0 ---- 2

local to global numbering is 1 2 3

BUT the local numbering is arbitrary, for example you could order the local nodes on process zero with

          0
         /|
        / |
       /  |
      /   |
     2 ------ 1

then the local to global numbering is 2 1 0 on process 0 > > > Thank you in advance > > ando > > -- > /"\ Grassl Andreas > \ / ASCII Ribbon Campaign Uni Innsbruck Institut f. Mathematik > X against HTML email Technikerstr. 
13 Zi 709 > / \ +43 (0)512 507 6091 > > From Andreas.Grassl at student.uibk.ac.at Tue Mar 24 11:34:23 2009 From: Andreas.Grassl at student.uibk.ac.at (Andreas Grassl) Date: Tue, 24 Mar 2009 17:34:23 +0100 Subject: PCNN preconditioner and setting the interface In-Reply-To: <2309C09C-D6F2-4163-B8D6-78EF4A2CEA93@mcs.anl.gov> References: <49C8E3F7.9000900@student.uibk.ac.at> <2309C09C-D6F2-4163-B8D6-78EF4A2CEA93@mcs.anl.gov> Message-ID: <49C90B8F.5020007@student.uibk.ac.at> Barry Smith schrieb: > > On Mar 24, 2009, at 8:45 AM, Andreas Grassl wrote: > >> Hello, >> >> I'm working with a FE-Software where I get out the element stiffness >> matrices and the element-node correspondency to setup the stiffness >> matrix for solving with PETSc. >> >> I'm currently fighting with the interface definition. My >> LocalToGlobalMapping for test-purposes was the "identity"-IS, but I >> guess this is far from the optimum, because nowhere is defined a node >> set of interface nodes. >> >> How do I declare the interface? Is it simply a reordering of the nodes, >> the inner nodes are numbered first and the interface nodes last? > > Here's the deal. Over all the processors you have to have a single > GLOBAL numbering of the > nodes. The first process starts with 0 and each process starts off with > one more than then previous process had. I am confused now, because after you said to use MatSetValuesLocal() to put the values in the matrix, I thought local means the unique (sequential) numbering independent of the processors in use and global a processor-specific (parallel) numbering. So the single GLOBAL numbering is the numbering obtained from the FE-Software represented by {0,...,23}

 0  o  o  O  o  5
          |
 6  o  o  O  o  o
          |
 O--O--O--O--O--O
          |
 o  o  o  O  o  23

And I set the 4 different local numberings {0,...,11}, {0,...,8}, {0,...7}, {0,...,5} with the call of ISLocalToGlobalMappingCreate? How do I set the different indices? {0,1,2,3,6,7,8,9,12,13,14,15} would be the index vector for the upper left subdomain and {3,9,12,13,14,15} the index vector for the interface of it. The struct PC_IS defined in src/ksp/pc/impls/is/pcis.h contains IS holding such an information (I suppose at least), but I have no idea how to use them efficiently. Do I have to manage a PC_IS object for every subdomain? Thanks ando -- /"\ Grassl Andreas \ / ASCII Ribbon Campaign Uni Innsbruck Institut f. Mathematik X against HTML email Technikerstr. 13 Zi 709 / \ +43 (0)512 507 6091 From recrusader at gmail.com Tue Mar 24 11:38:30 2009 From: recrusader at gmail.com (Yujie) Date: Tue, 24 Mar 2009 09:38:30 -0700 Subject: KSPSolve() and mutliple rhs In-Reply-To: References: <7ff0ee010903231900s5a345ce4o54cb55492a84eaf6@mail.gmail.com> Message-ID: <7ff0ee010903240938w797ae622n587122e9c6c1a36d@mail.gmail.com> thanks a lot :). Regards, Yujie On Mon, Mar 23, 2009 at 8:14 PM, Satish Balay wrote: > For an example - check src/ksp/ksp/examples/tutorials/ex16.c > > Satish > > On Mon, 23 Mar 2009, Matthew Knepley wrote: > > > I would just write a loop over rhs. > > > > Matt > > > > On Mon, Mar 23, 2009 at 9:00 PM, Yujie wrote: > > > > > Hi, PETSc developers > > > > > > I know KSPSolve() can't support multiple rhs. If I want to realize it, > > > could you give me some advice? How to revise the codes, which functions > need > > > to be paid much attention? thanks a lot. > > > > > > Regards, > > > > > > Yujie > > > > > > > > > > > > > -------------- next part -------------- An HTML attachment was scrubbed... 
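A minimal sketch of the loop-over-right-hand-sides approach suggested in the thread above (see also src/ksp/ksp/examples/tutorials/ex16.c). The matrix A, the Vec arrays b[] and x[], and the count nrhs are illustrative placeholders, not code from the thread:

  KSP            ksp;
  PetscErrorCode ierr;
  PetscInt       i;

  ierr = KSPCreate(PETSC_COMM_WORLD,&ksp);CHKERRQ(ierr);
  /* the operators (and hence the preconditioner) are set once ... */
  ierr = KSPSetOperators(ksp,A,A,SAME_NONZERO_PATTERN);CHKERRQ(ierr);
  ierr = KSPSetFromOptions(ksp);CHKERRQ(ierr);
  /* ... and the same KSP is reused for every right-hand side */
  for (i = 0; i < nrhs; i++) {
    ierr = KSPSolve(ksp,b[i],x[i]);CHKERRQ(ierr);
  }
  ierr = KSPDestroy(ksp);CHKERRQ(ierr);
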
URL: From bsmith at mcs.anl.gov Tue Mar 24 14:40:31 2009 From: bsmith at mcs.anl.gov (Barry Smith) Date: Tue, 24 Mar 2009 14:40:31 -0500 Subject: PCNN preconditioner and setting the interface In-Reply-To: <49C90B8F.5020007@student.uibk.ac.at> References: <49C8E3F7.9000900@student.uibk.ac.at> <2309C09C-D6F2-4163-B8D6-78EF4A2CEA93@mcs.anl.gov> <49C90B8F.5020007@student.uibk.ac.at> Message-ID: On Mar 24, 2009, at 11:34 AM, Andreas Grassl wrote: > Barry Smith schrieb: >> >> On Mar 24, 2009, at 8:45 AM, Andreas Grassl wrote: >> >>> Hello, >>> >>> I'm working with a FE-Software where I get out the element stiffness >>> matrices and the element-node correspondency to setup the stiffness >>> matrix for solving with PETSc. >>> >>> I'm currently fighting with the interface definition. My >>> LocalToGlobalMapping for test-purposes was the "identity"-IS, but I >>> guess this is far from the optimum, because nowhere is defined a >>> node >>> set of interface nodes. >>> >>> How do I declare the interface? Is it simply a reordering of the >>> nodes, >>> the inner nodes are numbered first and the interface nodes last? >> >> Here's the deal. Over all the processors you have to have a single >> GLOBAL numbering of the >> nodes. The first process starts with 0 and each process starts off >> with >> one more than then previous process had. > > I am confused now, because after you said to use MatSetValuesLocal() > to > put the values in the matrix, i thought local means the unique > (sequential) numbering independent of the processors in use and > global a > processor-specific (parallel) numbering. No, each process has its own independent local numbering from 0 to nlocal-1 the islocaltoglobalmapping you create gives the global number for each local number. > > > So the single GLOBAL numbering is the numbering obtained from the > FE-Software represented by {0,...,23} > > 0 o o O o 5 > | > 6 o o O o o > | > O--O--O--O--O--O > | > o o o O o 23 > > And I set the 4 different local numberings {0,...,11}, {0,...,8}, > {0,...7}, {0,...,5} with the call of ISLocalToGlobalMappingCreate? > > How do I set the different indices? > {0,1,2,3,6,7,8,9,12,13,14,15} would be the index vector for the upper > left subdomain and {3,9,12,13,14,15} the index vector for the > interface > f it. I don't understand your figure, but I don't think it matters. > > > The struct PC_IS defined in src/ksp/pc/impls/is/pcis.h contains IS > holding such an information (I suppose at least), but I have no idea > how > to use them efficiently. > > Do I have to manage a PC_IS object for every subdomain? In the way it is implemented EACH process has ONE subdomain. Thus each process has ONE local to global mapping. You are getting yourself confused thinking things are more complicated than they really are. Barry > > > Thanks > > ando > > > -- > /"\ Grassl Andreas > \ / ASCII Ribbon Campaign Uni Innsbruck Institut f. Mathematik > X against HTML email Technikerstr. 13 Zi 709 > / \ +43 (0)512 507 6091 From irfan.khan at gatech.edu Wed Mar 25 01:05:19 2009 From: irfan.khan at gatech.edu (Khan, Irfan) Date: Wed, 25 Mar 2009 02:05:19 -0400 (EDT) Subject: Petsc parallel vectors with two communicators In-Reply-To: <1788664442.2094271237960737531.JavaMail.root@mail4.gatech.edu> Message-ID: <1344819757.2094911237961119111.JavaMail.root@mail4.gatech.edu> Hi Can the petsc parallel vectors be used with two different communicators? For instance, I have created two different communicators called FEA_Comm and FSI_Comm. The total number of processes are x+y. 
FSI_Comm works on x+y but FEA_Comm works only on x. Now I am trying to create parallel vectors a1 and a2 such that a1 has entries from x+y processes but a2 has entries from only y processes. After splitting the communicators I assign PETSC_COMM_WORLD to FEA_Comm which works on only x processes. Subsequently petsc is initialized (PetscInitialize()). But when the parallel vectors are created, the processes hang. Any suggestions will be helpful Thankyou Irfan Graduate Research Assistant Woodruff school of Mechanical Engineering Atlanta, GA (30307) From knepley at gmail.com Wed Mar 25 08:08:24 2009 From: knepley at gmail.com (Matthew Knepley) Date: Wed, 25 Mar 2009 08:08:24 -0500 Subject: Petsc parallel vectors with two communicators In-Reply-To: <1344819757.2094911237961119111.JavaMail.root@mail4.gatech.edu> References: <1788664442.2094271237960737531.JavaMail.root@mail4.gatech.edu> <1344819757.2094911237961119111.JavaMail.root@mail4.gatech.edu> Message-ID: On Wed, Mar 25, 2009 at 1:05 AM, Khan, Irfan wrote: > Hi > Can the petsc parallel vectors be used with two different communicators? > For instance, I have created two different communicators called FEA_Comm and > FSI_Comm. The total number of processes are x+y. FSI_Comm works on x+y but > FEA_Comm works only on x. > > Now I am trying to create parallel vectors a1 and a2 such that a1 has > entries from x+y processes but a2 has entries from only y processes. > > After splitting the communicators I assign PETSC_COMM_WORLD to FEA_Comm > which works on only x processes. Subsequently petsc is initialized > (PetscInitialize()). But when the parallel vectors are created, the > processes hang. PETSC_COMM_WORLD should encompass all processes you wish to use in PETSc, so that means x+y. You can create Vec objects on subcommunicators, like x. Matt > > Any suggestions will be helpful > > Thankyou > Irfan > Graduate Research Assistant > Woodruff school of Mechanical Engineering > Atlanta, GA (30307) > -- What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. -- Norbert Wiener -------------- next part -------------- An HTML attachment was scrubbed... URL: From rxk at cfdrc.com Wed Mar 25 11:26:45 2009 From: rxk at cfdrc.com (Ravi Kannan) Date: Wed, 25 Mar 2009 10:26:45 -0600 Subject: superlu_dist doesn't work in peysc-3.0.0-p1 In-Reply-To: Message-ID: Hi, After I upgrade the petsc from 2.3.3 to 3.0.0, I have made the change for the superlu from _ierr = MatSetType(_A,MATSUPERLU_DIST) to _ierr = MatSetType(_A,MAT_SOLVER_SUPERLU_DIST) Is this the only change I need to do? Ravi, X.G -----Original Message----- From: petsc-users-bounces at mcs.anl.gov [mailto:petsc-users-bounces at mcs.anl.gov]On Behalf Of Matthew Knepley Sent: Wednesday, March 25, 2009 7:08 AM To: PETSc users list Subject: Re: Petsc parallel vectors with two communicators On Wed, Mar 25, 2009 at 1:05 AM, Khan, Irfan wrote: Hi Can the petsc parallel vectors be used with two different communicators? For instance, I have created two different communicators called FEA_Comm and FSI_Comm. The total number of processes are x+y. FSI_Comm works on x+y but FEA_Comm works only on x. Now I am trying to create parallel vectors a1 and a2 such that a1 has entries from x+y processes but a2 has entries from only y processes. After splitting the communicators I assign PETSC_COMM_WORLD to FEA_Comm which works on only x processes. Subsequently petsc is initialized (PetscInitialize()). 
But when the parallel vectors are created, the processes hang. PETSC_COMM_WORLD should encompass all processes you wish to use in PETSc, so that means x+y. You can create Vec objects on subcommunicators, like x. Matt Any suggestions will be helpful Thankyou Irfan Graduate Research Assistant Woodruff school of Mechanical Engineering Atlanta, GA (30307) -- What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. -- Norbert Wiener -------------- next part -------------- An HTML attachment was scrubbed... URL: From knepley at gmail.com Wed Mar 25 10:29:53 2009 From: knepley at gmail.com (Matthew Knepley) Date: Wed, 25 Mar 2009 10:29:53 -0500 Subject: superlu_dist doesn't work in peysc-3.0.0-p1 In-Reply-To: References: Message-ID: On Wed, Mar 25, 2009 at 11:26 AM, Ravi Kannan wrote: > Hi, > > After I upgrade the petsc from 2.3.3 to 3.0.0, I have made the change for > the superlu from > _ierr = MatSetType(_A,MATSUPERLU_DIST) > to > _ierr = MatSetType(_A,MAT_SOLVER_SUPERLU_DIST) > > Is this the only change I need to do? > No, this type no longer exists. please see the Mat section in the Changes document: http://www.mcs.anl.gov/petsc/petsc-as/documentation/changes/300.html Matt > > Ravi, X.G > > -----Original Message----- > *From:* petsc-users-bounces at mcs.anl.gov [mailto: > petsc-users-bounces at mcs.anl.gov]*On Behalf Of *Matthew Knepley > *Sent:* Wednesday, March 25, 2009 7:08 AM > *To:* PETSc users list > *Subject:* Re: Petsc parallel vectors with two communicators > > On Wed, Mar 25, 2009 at 1:05 AM, Khan, Irfan wrote: > >> Hi >> Can the petsc parallel vectors be used with two different communicators? >> For instance, I have created two different communicators called FEA_Comm and >> FSI_Comm. The total number of processes are x+y. FSI_Comm works on x+y but >> FEA_Comm works only on x. >> >> Now I am trying to create parallel vectors a1 and a2 such that a1 has >> entries from x+y processes but a2 has entries from only y processes. >> >> After splitting the communicators I assign PETSC_COMM_WORLD to FEA_Comm >> which works on only x processes. Subsequently petsc is initialized >> (PetscInitialize()). But when the parallel vectors are created, the >> processes hang. > > > PETSC_COMM_WORLD should encompass all processes you wish to use in PETSc, > so that means x+y. You can create Vec > objects on subcommunicators, like x. > > Matt > > >> >> Any suggestions will be helpful >> >> Thankyou >> Irfan >> Graduate Research Assistant >> Woodruff school of Mechanical Engineering >> Atlanta, GA (30307) >> > -- > What most experimenters take for granted before they begin their > experiments is infinitely more interesting than any results to which their > experiments lead. > -- Norbert Wiener > > -- What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. -- Norbert Wiener -------------- next part -------------- An HTML attachment was scrubbed... 
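For reference, a minimal sketch of selecting the external factorization package with the 3.0.0 interface, using the same call and runtime option that Jed and Hong give in the replies below; ksp is assumed to be an already-created KSP and ierr a PetscErrorCode:

  PC pc;
  ierr = KSPGetPC(ksp,&pc);CHKERRQ(ierr);
  ierr = PCSetType(pc,PCLU);CHKERRQ(ierr);
  ierr = PCFactorSetMatSolverPackage(pc,MAT_SOLVER_SUPERLU_DIST);CHKERRQ(ierr);
  /* or, equivalently, at runtime:
     -pc_type lu -pc_factor_mat_solver_package superlu_dist */
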
URL: From jed at 59A2.org Wed Mar 25 10:35:19 2009 From: jed at 59A2.org (Jed Brown) Date: Wed, 25 Mar 2009 16:35:19 +0100 Subject: superlu_dist doesn't work in peysc-3.0.0-p1 In-Reply-To: References: Message-ID: <20090325153519.GF22269@brakk.ethz.ch> On Wed 2009-03-25 10:26, Ravi Kannan wrote: > Hi, > > After I upgrade the petsc from 2.3.3 to 3.0.0, I have made the change for the > superlu from > _ierr = MatSetType(_A,MATSUPERLU_DIST) > to > _ierr = MatSetType(_A,MAT_SOLVER_SUPERLU_DIST) Should be PCFactorSetMatSolverPackage(pc,MAT_SOLVER_SUPERLU_DIST); or use -pc_factor_mat_solver_package superlu_dist on the command line. Jed -------------- next part -------------- A non-text attachment was scrubbed... Name: not available Type: application/pgp-signature Size: 197 bytes Desc: not available URL: From irfan.khan at gatech.edu Wed Mar 25 10:45:50 2009 From: irfan.khan at gatech.edu (Khan, Irfan) Date: Wed, 25 Mar 2009 11:45:50 -0400 (EDT) Subject: Petsc parallel vectors with two communicators In-Reply-To: <2087857844.2222811237995924316.JavaMail.root@mail4.gatech.edu> Message-ID: <1842258277.2223221237995950868.JavaMail.root@mail4.gatech.edu> Thanks Matt. That helped a lot. Things seem to be working now. Regards Irfan ----- Original Message ----- From: "Matthew Knepley" To: "PETSc users list" Sent: Wednesday, March 25, 2009 9:08:24 AM GMT -05:00 US/Canada Eastern Subject: Re: Petsc parallel vectors with two communicators On Wed, Mar 25, 2009 at 1:05 AM, Khan, Irfan < irfan.khan at gatech.edu > wrote: Hi Can the petsc parallel vectors be used with two different communicators? For instance, I have created two different communicators called FEA_Comm and FSI_Comm. The total number of processes are x+y. FSI_Comm works on x+y but FEA_Comm works only on x. Now I am trying to create parallel vectors a1 and a2 such that a1 has entries from x+y processes but a2 has entries from only y processes. After splitting the communicators I assign PETSC_COMM_WORLD to FEA_Comm which works on only x processes. Subsequently petsc is initialized (PetscInitialize()). But when the parallel vectors are created, the processes hang. PETSC_COMM_WORLD should encompass all processes you wish to use in PETSc, so that means x+y. You can create Vec objects on subcommunicators, like x. Matt Any suggestions will be helpful Thankyou Irfan Graduate Research Assistant Woodruff school of Mechanical Engineering Atlanta, GA (30307) -- What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. -- Norbert Wiener -------------- next part -------------- An HTML attachment was scrubbed... URL: From recrusader at gmail.com Thu Mar 26 01:03:00 2009 From: recrusader at gmail.com (Yujie) Date: Wed, 25 Mar 2009 22:03:00 -0800 Subject: KSPSolve(), multiple rhs and preconditioner Message-ID: <7ff0ee010903252303t16d701a8mf6a67f49445167b1@mail.gmail.com> Hi, PETSc developers I am wondering what the difference is when iterative preconditioners (such as ILU, sparse approximation inverse and so on) are used in single and multiple rhs using KSPSolve(). In multiple rhs case, the preconditioners are made to each rhs? Thanks a lot. Regards, Yujie -------------- next part -------------- An HTML attachment was scrubbed... 
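A minimal sketch of the arrangement Matt describes in the "Petsc parallel vectors with two communicators" thread above: PetscInitialize() is called by all x+y processes of PETSC_COMM_WORLD, and the smaller vector is created only on a subcommunicator. The names x, N1 and N2 are made-up placeholders, and the snippet is assumed to sit inside a main() like the example earlier in this digest:

  MPI_Comm       sub_comm;
  Vec            a1,a2;
  PetscMPIInt    rank;
  PetscErrorCode ierr;

  PetscInitialize(&argc,&args,(char *)0,help);
  ierr = MPI_Comm_rank(PETSC_COMM_WORLD,&rank);CHKERRQ(ierr);
  /* first x ranks form one group, the remaining y ranks another */
  ierr = MPI_Comm_split(PETSC_COMM_WORLD,rank < x ? 0 : 1,rank,&sub_comm);CHKERRQ(ierr);
  /* a1 is distributed over all x+y processes ... */
  ierr = VecCreateMPI(PETSC_COMM_WORLD,PETSC_DECIDE,N1,&a1);CHKERRQ(ierr);
  /* ... while a2 lives only on the first x processes */
  if (rank < x) {
    ierr = VecCreateMPI(sub_comm,PETSC_DECIDE,N2,&a2);CHKERRQ(ierr);
  }
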
URL: From knepley at gmail.com Thu Mar 26 06:49:02 2009 From: knepley at gmail.com (Matthew Knepley) Date: Thu, 26 Mar 2009 06:49:02 -0500 Subject: KSPSolve(), multiple rhs and preconditioner In-Reply-To: <7ff0ee010903252303t16d701a8mf6a67f49445167b1@mail.gmail.com> References: <7ff0ee010903252303t16d701a8mf6a67f49445167b1@mail.gmail.com> Message-ID: On Thu, Mar 26, 2009 at 1:03 AM, Yujie wrote: > Hi, PETSc developers > > I am wondering what the difference is when iterative preconditioners (such > as ILU, sparse approximation inverse and so on) are used in single and > multiple rhs using KSPSolve(). > > In multiple rhs case, the preconditioners are made to each rhs? Thanks a > lot. > Yes, multiple rhs is just a way of saying multiple, simultaneous solves. Matt > Regards, > > Yujie > -- What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. -- Norbert Wiener -------------- next part -------------- An HTML attachment was scrubbed... URL: From rxk at cfdrc.com Tue Mar 31 10:41:43 2009 From: rxk at cfdrc.com (Ravi Kannan) Date: Tue, 31 Mar 2009 09:41:43 -0600 Subject: superlu_dist doesn't work in peysc-3.0.0-p1 In-Reply-To: Message-ID: Hi all, Thanks for the reply. As you suggested I used the following PCFactorSetMatSolverPackage(pc,MAT_SOLVER_SUPERLU_DIST) when setting PC type. It works only in serial mode. For the parallel run, the results are wrong. Have you guys seen the same thing or is there something else I overlooked? The version of my superlu_dist is 2.3. Thanks, XG, RAVI -----Original Message----- From: Ravi Kannan [mailto:rxk at cfdrc.com] Sent: Wednesday, March 25, 2009 10:27 AM To: PETSc users list Subject: superlu_dist doesn't work in peysc-3.0.0-p1 Hi, After I upgrade the petsc from 2.3.3 to 3.0.0, I have made the change for the superlu from _ierr = MatSetType(_A,MATSUPERLU_DIST) to _ierr = MatSetType(_A,MAT_SOLVER_SUPERLU_DIST) Is this the only change I need to do? Ravi, X.G -----Original Message----- From: petsc-users-bounces at mcs.anl.gov [mailto:petsc-users-bounces at mcs.anl.gov]On Behalf Of Matthew Knepley Sent: Wednesday, March 25, 2009 7:08 AM To: PETSc users list Subject: Re: Petsc parallel vectors with two communicators On Wed, Mar 25, 2009 at 1:05 AM, Khan, Irfan wrote: Hi Can the petsc parallel vectors be used with two different communicators? For instance, I have created two different communicators called FEA_Comm and FSI_Comm. The total number of processes are x+y. FSI_Comm works on x+y but FEA_Comm works only on x. Now I am trying to create parallel vectors a1 and a2 such that a1 has entries from x+y processes but a2 has entries from only y processes. After splitting the communicators I assign PETSC_COMM_WORLD to FEA_Comm which works on only x processes. Subsequently petsc is initialized (PetscInitialize()). But when the parallel vectors are created, the processes hang. PETSC_COMM_WORLD should encompass all processes you wish to use in PETSc, so that means x+y. You can create Vec objects on subcommunicators, like x. Matt Any suggestions will be helpful Thankyou Irfan Graduate Research Assistant Woodruff school of Mechanical Engineering Atlanta, GA (30307) -- What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. -- Norbert Wiener -------------- next part -------------- An HTML attachment was scrubbed... 
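To expand slightly on the "KSPSolve(), multiple rhs and preconditioner" exchange above: the preconditioner is applied in each of those solves, but it is built from the matrix rather than from the right-hand side, so with the loop sketched earlier the setup (e.g. an ILU factorization) happens once and is then reused as long as the operators are not changed between solves. A small sketch that makes the one-time setup explicit, with the same placeholder names as before:

  ierr = KSPSetOperators(ksp,A,A,SAME_NONZERO_PATTERN);CHKERRQ(ierr);
  ierr = KSPSetUp(ksp);CHKERRQ(ierr);              /* preconditioner built once here */
  for (i = 0; i < nrhs; i++) {
    ierr = KSPSolve(ksp,b[i],x[i]);CHKERRQ(ierr);  /* each solve reuses the same PC */
  }
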
URL: From hzhang at mcs.anl.gov Tue Mar 31 10:02:00 2009 From: hzhang at mcs.anl.gov (Hong Zhang) Date: Tue, 31 Mar 2009 10:02:00 -0500 (CDT) Subject: superlu_dist doesn't work in peysc-3.0.0-p1 In-Reply-To: References: Message-ID: Comment out //PCFactorSetMatSolverPackage(pc,MAT_SOLVER_SUPERLU_DIST) and use the runtime option '-pc_type lu -pc_factor_mat_solver_package superlu_dist' This option is well tested. Let us know what you get. Hong On Tue, 31 Mar 2009, Ravi Kannan wrote: > Hi all, > > Thanks for the reply. As you suggested I used the following > > PCFactorSetMatSolverPackage(pc,MAT_SOLVER_SUPERLU_DIST) > > when setting PC type. > > It works only in serial mode. For the parallel run, the results are wrong. > > Have you guys seen the same thing or is there something else I overlooked? > > The version of my superlu_dist is 2.3. > > Thanks, > > XG, RAVI > > -----Original Message----- > From: Ravi Kannan [mailto:rxk at cfdrc.com] > Sent: Wednesday, March 25, 2009 10:27 AM > To: PETSc users list > Subject: superlu_dist doesn't work in peysc-3.0.0-p1 > > > Hi, > > After I upgrade the petsc from 2.3.3 to 3.0.0, I have made the change for > the superlu from > _ierr = MatSetType(_A,MATSUPERLU_DIST) > to > _ierr = MatSetType(_A,MAT_SOLVER_SUPERLU_DIST) > > Is this the only change I need to do? > > Ravi, X.G > -----Original Message----- > From: petsc-users-bounces at mcs.anl.gov > [mailto:petsc-users-bounces at mcs.anl.gov]On Behalf Of Matthew Knepley > Sent: Wednesday, March 25, 2009 7:08 AM > To: PETSc users list > Subject: Re: Petsc parallel vectors with two communicators > > > On Wed, Mar 25, 2009 at 1:05 AM, Khan, Irfan > wrote: > > Hi > Can the petsc parallel vectors be used with two different > communicators? For instance, I have created two different communicators > called FEA_Comm and FSI_Comm. The total number of processes are x+y. > FSI_Comm works on x+y but FEA_Comm works only on x. > > Now I am trying to create parallel vectors a1 and a2 such that a1 has > entries from x+y processes but a2 has entries from only y processes. > > After splitting the communicators I assign PETSC_COMM_WORLD to > FEA_Comm which works on only x processes. Subsequently petsc is initialized > (PetscInitialize()). But when the parallel vectors are created, the > processes hang. > > PETSC_COMM_WORLD should encompass all processes you wish to use in > PETSc, so that means x+y. You can create Vec > objects on subcommunicators, like x. > > Matt > > > Any suggestions will be helpful > > Thankyou > Irfan > Graduate Research Assistant > Woodruff school of Mechanical Engineering > Atlanta, GA (30307) > > -- > What most experimenters take for granted before they begin their > experiments is infinitely more interesting than any results to which their > experiments lead. > -- Norbert Wiener > From jfettig at illinois.edu Tue Mar 31 15:41:02 2009 From: jfettig at illinois.edu (John Fettig) Date: Tue, 31 Mar 2009 15:41:02 -0500 Subject: SuperLU_DIST output Message-ID: When you run with SuperLU_DIST, there is some output like: SYMBfact time: 0.00 DISTRIBUTE time 0.00 There doesn't seem to be an option to disable this output. Is this correct? John From Hung.V.Nguyen at usace.army.mil Tue Mar 31 15:58:24 2009 From: Hung.V.Nguyen at usace.army.mil (Nguyen, Hung V ERDC-ITL-MS) Date: Tue, 31 Mar 2009 15:58:24 -0500 Subject: Parallel partitioning of the matrix Message-ID: All, I have a test case that each processor reads its owned part of matrix in csr format dumped out by CFD application. 
Note: the partitioning of the matrix was done by ParMetis. The code below shows how to insert data into the PETSc matrix (gmap is the global map). The solution from PETSc is very close to the CFD solution, so I think it is correct. My question is whether the parallel partitioning of the matrix is determined by PETSc at runtime or is the same as the ParMetis partitioning? Thank you, -hung ---

/* create a matrix object */
MatCreateMPIAIJ(PETSC_COMM_WORLD, my_own, my_own, M, M, mnnz, PETSC_NULL, mnnz, PETSC_NULL, &A);

for (i = 0; i < my_own; i++) {
  int row = gmap[i];
  for (j = ia[i]; j < ia[i+1]; j++) {
    int col = ja[j];
    jj = gmap[col];
    MatSetValues(A, 1, &row, 1, &jj, &val[j], INSERT_VALUES);
  }
}

/* free temporary arrays */
free(val); free(ja); free(ia);

/* assemble the matrix and vectors */
MatAssemblyBegin(A, MAT_FINAL_ASSEMBLY);
MatAssemblyEnd(A, MAT_FINAL_ASSEMBLY);

From jfettig at illinois.edu Tue Mar 31 16:11:39 2009 From: jfettig at illinois.edu (John Fettig) Date: Tue, 31 Mar 2009 16:11:39 -0500 Subject: SuperLU_DIST output In-Reply-To: References: Message-ID: To answer my own question, this output has been commented out in the SuperLU_DIST that is downloaded by 3.0.0-p4 (I was running 3.0.0-p0 previously). Thanks, John On Tue, Mar 31, 2009 at 3:41 PM, John Fettig wrote: > When you run with SuperLU_DIST, there is some output like: > > SYMBfact time: 0.00 > DISTRIBUTE time 0.00 > > There doesn't seem to be an option to disable this output. Is this correct? > > John > From hzhang at mcs.anl.gov Tue Mar 31 20:03:18 2009 From: hzhang at mcs.anl.gov (Hong Zhang) Date: Tue, 31 Mar 2009 20:03:18 -0500 (CDT) Subject: SuperLU_DIST output In-Reply-To: References: Message-ID: John, On Tue, 31 Mar 2009, John Fettig wrote: > To answer my own question, this output has been commented out in the > SuperLU_DIST that is downloaded by 3.0.0-p4 (I was running 3.0.0-p0 > previously). Sherry updated the superlu_dist after we sent a request to her a few weeks ago. Glad you got the updated version. Hong > > Thanks, > John > > On Tue, Mar 31, 2009 at 3:41 PM, John Fettig wrote: >> When you run with SuperLU_DIST, there is some output like: >> >> SYMBfact time: 0.00 >> DISTRIBUTE time 0.00 >> >> There doesn't seem to be an option to disable this output. Is this correct? >> >> John >> >
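One note on the partitioning question above: with MatCreateMPIAIJ() each process ends up owning the contiguous block of global rows corresponding to the local row size it passes (my_own in the snippet), so the distribution follows the sizes supplied by the application rather than being recomputed by PETSc; whether that coincides with the ParMetis partition depends on how gmap renumbers the unknowns. A small sketch of checking what was assigned, assuming A has been assembled as in the snippet above (rank and ierr are local helper variables):

  PetscInt       rstart,rend;
  PetscMPIInt    rank;
  PetscErrorCode ierr;

  ierr = MPI_Comm_rank(PETSC_COMM_WORLD,&rank);CHKERRQ(ierr);
  ierr = MatGetOwnershipRange(A,&rstart,&rend);CHKERRQ(ierr);
  ierr = PetscSynchronizedPrintf(PETSC_COMM_WORLD,"[%d] owns global rows %D to %D\n",rank,rstart,rend-1);CHKERRQ(ierr);
  ierr = PetscSynchronizedFlush(PETSC_COMM_WORLD);CHKERRQ(ierr);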