From martin.raedel at tu-dresden.de  Mon Jun  7 06:27:46 2010
From: martin.raedel at tu-dresden.de (Martin Rädel)
Date: Mon, 7 Jun 2010 13:27:46 +0200
Subject: [petsc-users] T^t*K*T for SEQSBAIJ matrix type
Message-ID: <201006071327.46667.martin.raedel@tu-dresden.de>

Dear All,

I am trying to multiply matrices in the following way:

K_ = T^t*K*T

where K is symmetric and positive definite and has the type SEQSBAIJ, and T is nonsymmetric and has the type SEQAIJ.
I searched the manual for a way to compute this, but found only MatMatMultTranspose, which is only applicable to matrices of type SEQAIJ, and MatPtAP, which only works for AIJ-type matrices.

Is there a way to calculate T^t*K*T for these matrix types (K=SEQSBAIJ)?

Thank you very much,

Martin

Btw, I am currently using PETSc 3.0 since I work with SLEPc as well.

--
Martin Rädel
----------------------------------------------------------------------------------------------------
Technische Universität Dresden
Institut für Luft- und Raumfahrttechnik / Institute of Aerospace Engineering
Lehrstuhl für Luftfahrzeugtechnik / Chair of Aircraft Engineering

D-01062 Dresden
Germany

phone : (++49)351 463 38291
fax : (++49)351 463 37263
mail : martin.raedel at tu-dresden.de
Website : http://tu-dresden.de/mw/ilr/lft

From jroman at dsic.upv.es  Mon Jun  7 08:55:03 2010
From: jroman at dsic.upv.es (Jose E. Roman)
Date: Mon, 7 Jun 2010 15:55:03 +0200
Subject: [petsc-users] T^t*K*T for SEQSBAIJ matrix type
In-Reply-To: <201006071327.46667.martin.raedel@tu-dresden.de>
References: <201006071327.46667.martin.raedel@tu-dresden.de>
Message-ID: <938DEDEC-B0AF-46CD-9AFC-4BB72DCF5866@dsic.upv.es>

On 07/06/2010, Martin Rädel wrote:

> Dear All,
>
> I am trying to multiply matrices in the following way:
>
> K_ = T^t*K*T
>
> where K is symmetric and positive definite and has the type SEQSBAIJ, and T is nonsymmetric and has the type SEQAIJ.
> I searched the manual for a way to compute this, but found only MatMatMultTranspose, which is only applicable to matrices of type SEQAIJ, and MatPtAP, which only works for AIJ-type matrices.
>
> Is there a way to calculate T^t*K*T for these matrix types (K=SEQSBAIJ)?
>
> Thank you very much,
>
> Martin
>
> Btw, I am currently using PETSc 3.0 since I work with SLEPc as well.
>
> --
> Martin Rädel

I would suggest giving up SEQSBAIJ and using MatPtAP. Depending on the situation, an alternative could be to use a shell matrix instead of forming T^t*K*T explicitly. If, for instance, this matrix is the eigenproblem matrix in EPS, you can consider using SLEPc's ex3.c as a basis and define the MatMult operation to perform the sequence MatMult(T), MatMult(K), MatMultTranspose(T).

Best,
Jose E. Roman
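A minimal sketch of the shell-matrix approach Jose describes, in C (the context struct, routine names, and dimensions below are illustrative, not from the thread; error handling is abbreviated):

  #include "petscmat.h"

  /* Illustrative context: T is an N x n SEQAIJ matrix, K is N x N SEQSBAIJ,
     and w1, w2 are work vectors of length N (the row count of T). */
  typedef struct {
    Mat T, K;
    Vec w1, w2;
  } TKTCtx;

  /* Apply y = T^t * K * T * x as three successive products,
     without ever forming the triple product explicitly. */
  PetscErrorCode MatMult_TKT(Mat A, Vec x, Vec y)
  {
    TKTCtx         *ctx;
    PetscErrorCode ierr;

    PetscFunctionBegin;
    ierr = MatShellGetContext(A, (void**)&ctx); CHKERRQ(ierr);
    ierr = MatMult(ctx->T, x, ctx->w1); CHKERRQ(ierr);          /* w1 = T x    */
    ierr = MatMult(ctx->K, ctx->w1, ctx->w2); CHKERRQ(ierr);    /* w2 = K w1   */
    ierr = MatMultTranspose(ctx->T, ctx->w2, y); CHKERRQ(ierr); /* y  = T^t w2 */
    PetscFunctionReturn(0);
  }

  /* Once T, K, and the work vectors exist (n = column count of T):
       ierr = MatCreateShell(PETSC_COMM_SELF, n, n, n, n, (void*)&ctx, &A);
       ierr = MatShellSetOperation(A, MATOP_MULT, (void(*)(void))MatMult_TKT);
     The shell matrix A can then be handed to the eigensolver in SLEPc. */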
From andreas.hauffe at tu-dresden.de  Tue Jun  8 02:33:47 2010
From: andreas.hauffe at tu-dresden.de (Andreas Hauffe)
Date: Tue, 8 Jun 2010 09:33:47 +0200
Subject: [petsc-users] positive definite MPISBAIJ-Matrix gets singular in the case of more than one MPI-process
Message-ID: <201006080933.48878.andreas.hauffe@tu-dresden.de>

Dear all,

I want to solve a linear system with a symmetric positive definite matrix. I use the external package MUMPS as the solver, with the following commands:

call KSPCreate(PETSC_COMM_WORLD,ksp,ierr)
call KSPSetOperators(ksp,mat,mat,DIFFERENT_NONZERO_PATTERN,ierr)
call KSPSetType(ksp,KSPPREONLY,ierr)
call KSPGetPC(ksp,prec,ierr)
call PCSetType(prec,PCCHOLESKY,ierr)
call PCFactorSetMatSolverPackage(prec,MAT_SOLVER_MUMPS,ierr)
call KSPSetFromOptions(ksp,ierr)
call KSPSolve(ksp,vec1,vec2,ierr)

and give the option "-mat_mumps_sym 1" (symmetric positive definite).

If I run the program with one MPI process, everything works fine. For a low number of unknowns it also works with more MPI processes. If I increase the number of unknowns (>50000) and use more than one MPI process on one or more hosts, I get a MUMPS error that the matrix is singular. With one MPI process everything is still fine. The critical number of unknowns also depends on whether the debug or non-debug version of PETSc is used. If I do not use the option "-mat_mumps_sym 1", everything also works.

Right now the finite element assembly process (MatSetValue) is done only by rank 0, so the whole matrix gets all its values from rank 0.

Could someone explain this behaviour? Where is my error?

Btw, I am currently using PETSc 3.0 since I work with SLEPc as well.

Thank you very much,
--
Andreas Hauffe

----------------------------------------------------------------------------------------------------
Technische Universität Dresden
Institut für Luft- und Raumfahrttechnik / Institute of Aerospace Engineering
Lehrstuhl für Luftfahrzeugtechnik / Chair of Aircraft Engineering

D-01062 Dresden
Germany

phone : (++49)351 463 38496
fax : (++49)351 463 37263
mail : andreas.hauffe at tu-dresden.de
Website : http://tu-dresden.de/mw/ilr/lft
----------------------------------------------------------------------------------------------------

From gdiso at ustc.edu  Tue Jun  8 03:04:00 2010
From: gdiso at ustc.edu (Gong Ding)
Date: Tue, 8 Jun 2010 16:04:00 +0800
Subject: [petsc-users] Is there some function to do this vec operator
Message-ID:

Dear all,

I have a flux vector f and a vector t of local time steps. Both of them are global vectors.

I want to update the flow-variable vector x in an explicit way:

x^(n+1) = x^(n) - t*f

where t*f multiplies each component of t by the corresponding component of f.

I searched the PETSc manual and found no useful information. Can anyone tell me how to compute t*f efficiently?

Yours
Gong Ding

From jed at 59A2.org  Tue Jun  8 03:44:51 2010
From: jed at 59A2.org (Jed Brown)
Date: Tue, 08 Jun 2010 10:44:51 +0200
Subject: [petsc-users] Is there some function to do this vec operator
In-Reply-To:
References:
Message-ID: <878w6pdibg.fsf@59A2.org>

On Tue, 8 Jun 2010 16:04:00 +0800, "Gong Ding" wrote:
> Dear all,
> I have a flux vector f and a vector t of local time steps. Both of them are global vectors.
>
> I want to update the flow-variable vector x in an explicit way:
>
> x^(n+1) = x^(n) - t*f
>
> where t*f multiplies each component of t by the corresponding component of f.
>
> I searched the PETSc manual and found no useful information. Can anyone tell me how to compute t*f efficiently?

The abstract way is to use VecPointwiseMult followed by VecAXPY, but this makes two passes and needs a temporary. The alternative is VecGetLocalSize, VecGetArray, and then roll the for loop x[i] -= t[i]*f[i]. If you do implicit solves, you will almost certainly find that these updates take a trivial amount of the total time and memory, so you may as well use whichever you find easier to read.

Jed
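Both variants Jed mentions fit in a few lines of C; a minimal sketch (the routine name is illustrative, and x, t, f are assumed to have identical parallel layout):

  #include "petscvec.h"

  /* One-pass variant: x_i <- x_i - t_i * f_i over the locally owned entries */
  PetscErrorCode UpdateExplicit(Vec x, Vec t, Vec f)
  {
    PetscInt       i, n;
    PetscScalar    *xa, *ta, *fa;
    PetscErrorCode ierr;

    PetscFunctionBegin;
    ierr = VecGetLocalSize(x, &n); CHKERRQ(ierr);
    ierr = VecGetArray(x, &xa); CHKERRQ(ierr);
    ierr = VecGetArray(t, &ta); CHKERRQ(ierr);
    ierr = VecGetArray(f, &fa); CHKERRQ(ierr);
    for (i = 0; i < n; i++) xa[i] -= ta[i]*fa[i];
    ierr = VecRestoreArray(f, &fa); CHKERRQ(ierr);
    ierr = VecRestoreArray(t, &ta); CHKERRQ(ierr);
    ierr = VecRestoreArray(x, &xa); CHKERRQ(ierr);
    PetscFunctionReturn(0);
  }

  /* Two-call variant, with tmp a work vector with the same layout as x:
       ierr = VecPointwiseMult(tmp, t, f); CHKERRQ(ierr);   tmp_i = t_i * f_i
       ierr = VecAXPY(x, -1.0, tmp); CHKERRQ(ierr);         x = x - tmp       */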
From Chun.SUN at 3ds.com  Tue Jun  8 10:27:58 2010
From: Chun.SUN at 3ds.com (SUN Chun)
Date: Tue, 8 Jun 2010 11:27:58 -0400
Subject: [petsc-users] a preconditioner question
In-Reply-To: <878w6pdibg.fsf@59A2.org>
References: <878w6pdibg.fsf@59A2.org>
Message-ID: <2545DC7A42DF804AAAB2ADA5043D57DA9EC439@CORP-CLT-EXB01.ds>

Hi PETSc Developers,

I was using a preconditioner of ASM with sub-PC ILU (the outer KSP is BCGS). I have the impression that ASM is somewhat close to block Jacobi in a parallel sense. I was therefore thinking of applying the preconditioner multiple times for the whole system, to improve the convergence behaviour on the boundary equations. However, I can't find any option that increases the number of "passes" of the preconditioner.

I also tried "-pc_type composite", specifying "-pc_composite_pcs asm,asm" and "-sub_pc_type ilu", but this gave me the error that ILU cannot run in parallel. I intended to run ILU as the sub-PC inside each domain.

I'm not sure if my idea makes sense or if I missed something in the manual. Thanks for the help.

Chun

This email and any attachments are intended solely for the use of the individual or entity to whom it is addressed and may be confidential and/or privileged. If you are not one of the named recipients or have received this email in error, (i) you should not read, disclose, or copy it, (ii) please notify sender of your receipt by reply email and delete this email and all attachments, (iii) Dassault Systemes does not accept or assume any liability or responsibility for any use of or reliance on this email. For other languages, go to http://www.3ds.com/terms/email-disclaimer.

From knepley at gmail.com  Tue Jun  8 10:59:01 2010
From: knepley at gmail.com (Matthew Knepley)
Date: Tue, 8 Jun 2010 08:59:01 -0700
Subject: [petsc-users] positive definite MPISBAIJ-Matrix gets singular in the case of more than one MPI-process
In-Reply-To: <201006080933.48878.andreas.hauffe@tu-dresden.de>
References: <201006080933.48878.andreas.hauffe@tu-dresden.de>
Message-ID:

On Tue, Jun 8, 2010 at 12:33 AM, Andreas Hauffe <andreas.hauffe at tu-dresden.de> wrote:

> Dear all,
>
> I want to solve a linear system with a symmetric positive definite matrix. I use the external package MUMPS as the solver, with the following commands:
>
> call KSPCreate(PETSC_COMM_WORLD,ksp,ierr)
> call KSPSetOperators(ksp,mat,mat,DIFFERENT_NONZERO_PATTERN,ierr)
> call KSPSetType(ksp,KSPPREONLY,ierr)
> call KSPGetPC(ksp,prec,ierr)
> call PCSetType(prec,PCCHOLESKY,ierr)
> call PCFactorSetMatSolverPackage(prec,MAT_SOLVER_MUMPS,ierr)
> call KSPSetFromOptions(ksp,ierr)
> call KSPSolve(ksp,vec1,vec2,ierr)
>
> and give the option "-mat_mumps_sym 1" (symmetric positive definite).
>
> If I run the program with one MPI process, everything works fine. For a low number of unknowns it also works with more MPI processes. If I increase the number of unknowns (>50000) and use more than one MPI process on one or more hosts, I get a MUMPS error that the matrix is singular. With one MPI process everything is still fine. The critical number of unknowns also depends on whether the debug or non-debug version of PETSc is used. If I do not use the option "-mat_mumps_sym 1", everything also works.

It sounds like your matrix is not actually symmetric on multiple processes:

http://www.mcs.anl.gov/petsc/petsc-as/snapshots/petsc-current/docs/manualpages/Mat/MatIsSymmetric.html

   Matt

> Right now the finite element assembly process (MatSetValue) is done only
> by rank 0, so the whole matrix gets all its values from rank 0.
>
> Could someone explain this behaviour? Where is my error?
>
> Btw, I am currently using PETSc 3.0 since I work with SLEPc as well.
>
> Thank you very much,
> --
> Andreas Hauffe
>
> ----------------------------------------------------------------------------------------------------
> Technische Universität Dresden
> Institut für Luft- und Raumfahrttechnik / Institute of Aerospace Engineering
> Lehrstuhl für Luftfahrzeugtechnik / Chair of Aircraft Engineering
>
> D-01062 Dresden
> Germany
>
> phone : (++49)351 463 38496
> fax : (++49)351 463 37263
> mail : andreas.hauffe at tu-dresden.de
> Website : http://tu-dresden.de/mw/ilr/lft
> ----------------------------------------------------------------------------------------------------

--
What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead.
-- Norbert Wiener
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From jed at 59A2.org  Tue Jun  8 11:02:43 2010
From: jed at 59A2.org (Jed Brown)
Date: Tue, 08 Jun 2010 18:02:43 +0200
Subject: [petsc-users] a preconditioner question
In-Reply-To: <2545DC7A42DF804AAAB2ADA5043D57DA9EC439@CORP-CLT-EXB01.ds>
References: <878w6pdibg.fsf@59A2.org> <2545DC7A42DF804AAAB2ADA5043D57DA9EC439@CORP-CLT-EXB01.ds>
Message-ID: <87zkz5bjh8.fsf@59A2.org>

On Tue, 8 Jun 2010 11:27:58 -0400, "SUN Chun" wrote:
> Hi PETSc Developers,
>
> I was using a preconditioner of ASM with sub-PC ILU (the outer KSP is BCGS). I have the impression that ASM is somewhat close to block Jacobi in a parallel sense. I was therefore thinking of applying the preconditioner multiple times for the whole system, to improve the convergence behaviour on the boundary equations. However, I can't find any option that increases the number of "passes" of the preconditioner.

These inner "passes" are Richardson iterations, you can implement with

  -pc_type ksp -ksp_ksp_type richardson -ksp_ksp_max_it 4 -ksp_pc_type asm

> I also tried "-pc_type composite", specifying "-pc_composite_pcs asm,asm" and "-sub_pc_type ilu", but this gave me the error that ILU cannot run in parallel. I intended to run ILU as the sub-PC inside each domain.

Try -sub_sub_pc_type ilu (default, BTW), or -sub_1_sub_pc_type if you only want to apply it to one of the "passes". But this method just applies the preconditioners additively, so the left-preconditioned Krylov operator is just (P_0+P_1)A. To combine them multiplicatively use -pc_composite_type multiplicative, but note that this always takes a full step (like Richardson with relaxation constant 1), so your preconditioner would have to have uniform control of the spectrum. So this might possibly do what you want, but you probably want to use a nested Richardson instead of Composite.

Be sure to run with -ksp_view when nesting preconditioners deeply, otherwise it's easy to get lost and not be doing quite what you expect.

Jed
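Spelled out as a single command line, the nested-Richardson setup Jed suggests might look like the following sketch (the executable name and process count are placeholders, and "-ksp_sub_pc_type ilu" is an assumption about how the ASM subdomain option picks up the extra "ksp_" prefix; verify with -ksp_view):

  mpiexec -n 4 ./app -ksp_type bcgs \
      -pc_type ksp -ksp_ksp_type richardson -ksp_ksp_max_it 4 \
      -ksp_pc_type asm -ksp_sub_pc_type ilu -ksp_view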
From andreas.hauffe at tu-dresden.de  Wed Jun  9 02:02:02 2010
From: andreas.hauffe at tu-dresden.de (Andreas Hauffe)
Date: Wed, 9 Jun 2010 09:02:02 +0200
Subject: [petsc-users] positive definite MPISBAIJ-Matrix gets singular in the case of more than one MPI-process
In-Reply-To:
References: <201006080933.48878.andreas.hauffe@tu-dresden.de>
Message-ID: <201006090902.02805.andreas.hauffe@tu-dresden.de>

I tried to find out whether the matrix gets corrupted in the multiple-process case, so I wrote both matrices (single/multiple process) to an ASCII file. Both files contained the same matrix; the diff command printed no difference.

I use an MPISBAIJ matrix, so I think it is impossible to create an unsymmetric matrix. Is this right?

Andreas

On Tuesday 08 June 2010 17:59:01, Matthew Knepley wrote:
> On Tue, Jun 8, 2010 at 12:33 AM, Andreas Hauffe <andreas.hauffe at tu-dresden.de> wrote:
> > Dear all,
> >
> > I want to solve a linear system with a symmetric positive definite matrix. I use the external package MUMPS as the solver, with the following commands:
> >
> > call KSPCreate(PETSC_COMM_WORLD,ksp,ierr)
> > call KSPSetOperators(ksp,mat,mat,DIFFERENT_NONZERO_PATTERN,ierr)
> > call KSPSetType(ksp,KSPPREONLY,ierr)
> > call KSPGetPC(ksp,prec,ierr)
> > call PCSetType(prec,PCCHOLESKY,ierr)
> > call PCFactorSetMatSolverPackage(prec,MAT_SOLVER_MUMPS,ierr)
> > call KSPSetFromOptions(ksp,ierr)
> > call KSPSolve(ksp,vec1,vec2,ierr)
> >
> > and give the option "-mat_mumps_sym 1" (symmetric positive definite).
> >
> > If I run the program with one MPI process, everything works fine. For a low number of unknowns it also works with more MPI processes. If I increase the number of unknowns (>50000) and use more than one MPI process on one or more hosts, I get a MUMPS error that the matrix is singular. With one MPI process everything is still fine. The critical number of unknowns also depends on whether the debug or non-debug version of PETSc is used. If I do not use the option "-mat_mumps_sym 1", everything also works.
>
> It sounds like your matrix is not actually symmetric on multiple processes:
>
> http://www.mcs.anl.gov/petsc/petsc-as/snapshots/petsc-current/docs/manualpages/Mat/MatIsSymmetric.html
>
> Matt
>
> > Right now the finite element assembly process (MatSetValue) is done only by rank 0, so the whole matrix gets all its values from rank 0.
> >
> > Could someone explain this behaviour? Where is my error?
> >
> > Btw, I am currently using PETSc 3.0 since I work with SLEPc as well.
> >
> > Thank you very much,
> > --
> > Andreas Hauffe
> >
> > ----------------------------------------------------------------------------------------------------
> > Technische Universität Dresden
> > Institut für Luft- und Raumfahrttechnik / Institute of Aerospace Engineering
> > Lehrstuhl für Luftfahrzeugtechnik / Chair of Aircraft Engineering
> >
> > D-01062 Dresden
> > Germany
> >
> > phone : (++49)351 463 38496
> > fax : (++49)351 463 37263
> > mail : andreas.hauffe at tu-dresden.de
> > Website : http://tu-dresden.de/mw/ilr/lft
> > ----------------------------------------------------------------------------------------------------

--
Andreas Hauffe

----------------------------------------------------------------------------------------------------
Technische Universität Dresden
Institut für Luft- und Raumfahrttechnik / Institute of Aerospace Engineering
Lehrstuhl für Luftfahrzeugtechnik / Chair of Aircraft Engineering

D-01062 Dresden
Germany

phone : (++49)351 463 38496
fax : (++49)351 463 37263
mail : andreas.hauffe at tu-dresden.de
Website : http://tu-dresden.de/mw/ilr/lft
----------------------------------------------------------------------------------------------------

From knepley at gmail.com  Wed Jun  9 07:23:18 2010
From: knepley at gmail.com (Matthew Knepley)
Date: Wed, 9 Jun 2010 05:23:18 -0700
Subject: [petsc-users] positive definite MPISBAIJ-Matrix gets singular in the case of more than one MPI-process
In-Reply-To: <201006090902.02805.andreas.hauffe@tu-dresden.de>
References: <201006080933.48878.andreas.hauffe@tu-dresden.de> <201006090902.02805.andreas.hauffe@tu-dresden.de>
Message-ID:

On Wed, Jun 9, 2010 at 12:02 AM, Andreas Hauffe <andreas.hauffe at tu-dresden.de> wrote:

> I tried to find out whether the matrix gets corrupted in the multiple-process case, so I wrote both matrices (single/multiple process) to an ASCII file. Both files contained the same matrix; the diff command printed no difference.
>
> I use an MPISBAIJ matrix, so I think it is impossible to create an unsymmetric matrix. Is this right?

Write the matrix as a PETSc binary and send it to petsc-maint at mcs.anl.gov.

  Thanks,

     Matt

> Andreas
>
> On Tuesday 08 June 2010 17:59:01, Matthew Knepley wrote:
> > On Tue, Jun 8, 2010 at 12:33 AM, Andreas Hauffe <andreas.hauffe at tu-dresden.de> wrote:
> > > Dear all,
> > >
> > > I want to solve a linear system with a symmetric positive definite matrix. I use the external package MUMPS as the solver, with the following commands:
> > >
> > > call KSPCreate(PETSC_COMM_WORLD,ksp,ierr)
> > > call KSPSetOperators(ksp,mat,mat,DIFFERENT_NONZERO_PATTERN,ierr)
> > > call KSPSetType(ksp,KSPPREONLY,ierr)
> > > call KSPGetPC(ksp,prec,ierr)
> > > call PCSetType(prec,PCCHOLESKY,ierr)
> > > call PCFactorSetMatSolverPackage(prec,MAT_SOLVER_MUMPS,ierr)
> > > call KSPSetFromOptions(ksp,ierr)
> > > call KSPSolve(ksp,vec1,vec2,ierr)
> > >
> > > and give the option "-mat_mumps_sym 1" (symmetric positive definite).
> > >
> > > If I run the program with one MPI process, everything works fine. For a low number of unknowns it also works with more MPI processes. If I increase the number of unknowns (>50000) and use more than one MPI process on one or more hosts, I get a MUMPS error that the matrix is singular. With one MPI process everything is still fine. The critical number of unknowns also depends on whether the debug or non-debug version of PETSc is used.
> > > If I do not use the option "-mat_mumps_sym 1", everything also works.
> >
> > It sounds like your matrix is not actually symmetric on multiple processes:
> >
> > http://www.mcs.anl.gov/petsc/petsc-as/snapshots/petsc-current/docs/manualpages/Mat/MatIsSymmetric.html
> >
> > Matt
> >
> > > Right now the finite element assembly process (MatSetValue) is done only by rank 0, so the whole matrix gets all its values from rank 0.
> > >
> > > Could someone explain this behaviour? Where is my error?
> > >
> > > Btw, I am currently using PETSc 3.0 since I work with SLEPc as well.
> > >
> > > Thank you very much,
> > > --
> > > Andreas Hauffe
> > >
> > > ----------------------------------------------------------------------------------------------------
> > > Technische Universität Dresden
> > > Institut für Luft- und Raumfahrttechnik / Institute of Aerospace Engineering
> > > Lehrstuhl für Luftfahrzeugtechnik / Chair of Aircraft Engineering
> > >
> > > D-01062 Dresden
> > > Germany
> > >
> > > phone : (++49)351 463 38496
> > > fax : (++49)351 463 37263
> > > mail : andreas.hauffe at tu-dresden.de
> > > Website : http://tu-dresden.de/mw/ilr/lft
> > > ----------------------------------------------------------------------------------------------------
>
> --
> Andreas Hauffe
>
> ----------------------------------------------------------------------------------------------------
> Technische Universität Dresden
> Institut für Luft- und Raumfahrttechnik / Institute of Aerospace Engineering
> Lehrstuhl für Luftfahrzeugtechnik / Chair of Aircraft Engineering
>
> D-01062 Dresden
> Germany
>
> phone : (++49)351 463 38496
> fax : (++49)351 463 37263
> mail : andreas.hauffe at tu-dresden.de
> Website : http://tu-dresden.de/mw/ilr/lft
> ----------------------------------------------------------------------------------------------------

--
What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead.
-- Norbert Wiener
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From amal.ghamdi at kaust.edu.sa  Sat Jun 12 18:45:49 2010
From: amal.ghamdi at kaust.edu.sa (Amal Alghamdi)
Date: Sun, 13 Jun 2010 02:45:49 +0300
Subject: [petsc-users] PETSc4Py with Enthought Python Distribution
Message-ID:

Dear All,

I have Mac OS 10.6 and I'd like to ask if anyone has tried using PETSc4Py with the Enthought Python Distribution, EPD. I have tried to build PETSc with the option --download-petsc4py, but it complained that I am not using the system Python:

realpath of /usr/lib/libpython.dylib (/System/Library/Frameworks/Python.framework/Versions/2.6/Python) does not point to Python library path (/Library/Frameworks/Python.framework/Versions/6.1/Python) for current Python;
Are you not using the Apple python?

Does that mean I should use the Apple Python, or is there a way to make PETSc4Py work with EPD?

Thank you very much
Amal
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From bsmith at mcs.anl.gov  Sat Jun 12 19:14:26 2010
From: bsmith at mcs.anl.gov (Barry Smith)
Date: Sat, 12 Jun 2010 19:14:26 -0500
Subject: [petsc-users] PETSc4Py with Enthought Python Distribution
In-Reply-To:
References:
Message-ID: <8D39AFE8-18EB-40D5-93CA-B957E236B78E@mcs.anl.gov>

On Jun 12, 2010, at 6:45 PM, Amal Alghamdi wrote:

> Dear All,
>
> I have Mac OS 10.6 and I'd like to ask if anyone has tried using PETSc4Py with the Enthought Python Distribution, EPD. I have tried to build PETSc with the option --download-petsc4py, but it complained that I am not using the system Python:
>
> realpath of /usr/lib/libpython.dylib (/System/Library/Frameworks/Python.framework/Versions/2.6/Python) does not point to Python library path (/Library/Frameworks/Python.framework/Versions/6.1/Python) for current Python;
> Are you not using the Apple python?
>
> Does that mean I should use the Apple Python, or is there a way to make PETSc4Py work with EPD?

Take a look at config/PETSc/packages/petsc4py.py and the lines

    if os.path.isfile(os.path.join(prefix,'Python')):
      for i in ['/usr/lib/libpython.dylib','/opt/local/lib/libpython2.5.dylib','/opt/local/lib/libpython2.6.dylib']:
        if os.path.realpath(i) == os.path.join(prefix,'Python'):
          self.addDefine('PYTHON_LIB','"'+os.path.join(i)+'"')
          return
      raise RuntimeError('realpath of /usr/lib/libpython.dylib ('+os.path.realpath('/usr/lib/libpython.dylib')+') does not point to Python library path ('+os.path.join(prefix,'Python')+') for current Python;\n Are you not using the Apple python?')

The problem is that some Pythons on Apple systems make it a little difficult to find the Python shared library (the one ending in .dylib) that PETSc4py needs. The code above is our hack to find the right .dylib; perhaps you can add the location of the .dylib that you need to the list and get it to work? If you cannot find the right .dylib (which is actually just a link to the os.path.join(prefix,'Python') file), you could make your own dummy libpython.dylib and point to that?

Good luck and let us know how it goes,

   Barry

-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From dalcinl at gmail.com  Sat Jun 12 19:38:18 2010
From: dalcinl at gmail.com (Lisandro Dalcin)
Date: Sat, 12 Jun 2010 21:38:18 -0300
Subject: [petsc-users] PETSc4Py with Enthought Python Distribution
In-Reply-To:
References:
Message-ID:

On 12 June 2010 20:45, Amal Alghamdi wrote:
> Dear All,
> I have Mac OS 10.6 and I'd like to ask if anyone has tried using PETSc4Py with the Enthought Python Distribution, EPD. I have tried to build PETSc with the option --download-petsc4py, but it complained that I am not using the system Python:
> realpath of /usr/lib/libpython.dylib (/System/Library/Frameworks/Python.framework/Versions/2.6/Python) does not point to Python library path (/Library/Frameworks/Python.framework/Versions/6.1/Python) for current Python;
> Are you not using the Apple python?
> Does that mean I should use the Apple Python, or is there a way to make PETSc4Py work with EPD?
> Thank you very much
> Amal

You could also try to build petsc4py independently, after building core PETSc. Just download the tarball release, export PETSC_DIR=... PETSC_ARCH=..., and then "python setup.py build". Note however that you may still run into trouble in this OS X + EPD Python combination. It's not my fault; the universal-binaries thing in OS X plus the custom way EPD builds Python is hard to deal with, especially as I do not have any OS X box to do proper testing. If you cannot make it work, feel free to contact me privately and be prepared to help me a bit.

--
Lisandro Dalcin
---------------
CIMEC (INTEC/CONICET-UNL)
Predio CONICET-Santa Fe
Colectora RN 168 Km 472, Paraje El Pozo
Tel: +54-342-4511594 (ext 1011)
Tel/Fax: +54-342-4511169
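As a concrete sketch of the standalone build Lisandro describes (the path, arch name, and tarball directory below are placeholders for your own installation, not values from the thread):

  export PETSC_DIR=$HOME/petsc-3.1-p3    # assumption: your PETSc source tree
  export PETSC_ARCH=darwin-c-debug       # assumption: the arch you configured
  cd petsc4py-X.Y                        # assumption: the unpacked tarball
  python setup.py build
  python setup.py install

Note that "python" here must launch the same EPD interpreter you intend to use petsc4py with.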
From amal.ghamdi at kaust.edu.sa  Sun Jun 13 07:03:07 2010
From: amal.ghamdi at kaust.edu.sa (Amal Alghamdi)
Date: Sun, 13 Jun 2010 15:03:07 +0300
Subject: [petsc-users] PETSc4Py with Enthought Python Distribution
In-Reply-To:
References:
Message-ID:

Dear Mr. Barry and Mr. Lisandro,

Thank you very much for the help. I tried installing PETSc4Py independently after installing PETSc, and it seems to be working.

Best regards,
Amal

On Sun, Jun 13, 2010 at 3:38 AM, Lisandro Dalcin wrote:

> On 12 June 2010 20:45, Amal Alghamdi wrote:
> > Dear All,
> > I have Mac OS 10.6 and I'd like to ask if anyone has tried using PETSc4Py with the Enthought Python Distribution, EPD. I have tried to build PETSc with the option --download-petsc4py, but it complained that I am not using the system Python:
> > realpath of /usr/lib/libpython.dylib (/System/Library/Frameworks/Python.framework/Versions/2.6/Python) does not point to Python library path (/Library/Frameworks/Python.framework/Versions/6.1/Python) for current Python;
> > Are you not using the Apple python?
> > Does that mean I should use the Apple Python, or is there a way to make PETSc4Py work with EPD?
> > Thank you very much
> > Amal
>
> You could also try to build petsc4py independently, after building core PETSc. Just download the tarball release, export PETSC_DIR=... PETSC_ARCH=..., and then "python setup.py build". Note however that you may still run into trouble in this OS X + EPD Python combination. It's not my fault; the universal-binaries thing in OS X plus the custom way EPD builds Python is hard to deal with, especially as I do not have any OS X box to do proper testing. If you cannot make it work, feel free to contact me privately and be prepared to help me a bit.
>
> --
> Lisandro Dalcin
> ---------------
> CIMEC (INTEC/CONICET-UNL)
> Predio CONICET-Santa Fe
> Colectora RN 168 Km 472, Paraje El Pozo
> Tel: +54-342-4511594 (ext 1011)
> Tel/Fax: +54-342-4511169

-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From bsmith at mcs.anl.gov  Mon Jun 14 15:42:18 2010
From: bsmith at mcs.anl.gov (Barry Smith)
Date: Mon, 14 Jun 2010 15:42:18 -0500
Subject: [petsc-users] variational inequalities in PETSc
References: <433607CE-0399-4D80-B31F-C1BA16A52FED@mcs.anl.gov>
Message-ID:

PETSc users,

Lois has gotten some funding to add support for variational inequalities to PETSc's SNES nonlinear solvers (basically nonlinear equations with bounds on the variables). If you have a need for variational inequality solvers and/or are interested in using or helping develop the code, please let us know. We have an advertisement for a post-doc to work on these, but can consider other possible positions such as visiting faculty or pre-doc etc.

   Barry

http://www.anl.gov/jobsearch/detail.jsp?userreqid=316469+MCS&lsBrowse=POSTDOC

-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From graziosov at me.queensu.ca  Thu Jun 17 23:19:57 2010
From: graziosov at me.queensu.ca (Valerio Grazioso)
Date: Fri, 18 Jun 2010 00:19:57 -0400
Subject: [petsc-users] Segmentation Violation in a very simple fortran code
Message-ID: <87F40E81-07D5-48B1-9561-DCAAE59D991C@me.queensu.ca>

Hi, I'm stuck with an implementation of a 3D Poisson solver with PETSc in Fortran 90. I was getting strange Segmentation Violations, and after some debugging I realized that the problem was in the vector created with

DACreateGlobalVector()

So I've written a simple test code:

****************************************************************************

program PetscTest

implicit none

Vec q
PetscScalar alpha
DA da
PetscErrorCode err
PetscInt i3,i1

i3=4
i1=1

call DACreate3d(PETSC_COMM_WORLD,DA_NONPERIODIC,DA_STENCIL_STAR,i3,i3,i3,PETSC_DECIDE,PETSC_DECIDE,PETSC_DECIDE,i1,i1, &
                PETSC_NULL_INTEGER,PETSC_NULL_INTEGER,PETSC_NULL_INTEGER,da,err)

call DACreateGlobalVector(da,q,err)

alpha=1.0

call VecSet(q,alpha,err)
call vecView(q,PETSC_VIEWER_STDOUT_WORLD)

call VecDestroy(q,err);

call PetscFinalize(PETSC_NULL_CHARACTER,err)

end program

****************************************************************************

And this is the output that I get if I run the executable with

mpirun -n 2 ./PetscTest -da_view

****************************************************************************

Processor [0] M 4 N 4 P 4 m 1 n 1 p 2 w 1 s 1
X range of indices: 0 4, Y range of indices: 0 4, Z range of indices: 0 2
Processor [1] M 4 N 4 P 4 m 1 n 1 p 2 w 1 s 1
X range of indices: 0 4, Y range of indices: 0 4, Z range of indices: 2 4
Process [0]
1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1
Process [1]
1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1
[1]PETSC ERROR: ------------------------------------------------------------------------
[1]PETSC ERROR: Caught signal number 11 SEGV: Segmentation Violation, probably memory access out of range
[1]PETSC ERROR: Try option -start_in_debugger or -on_error_attach_debugger
[1]PETSC ERROR: or see http://www.mcs.anl.gov/petsc/petsc-as/documentation/troubleshooting.html#Signal
[1]PETSC ERROR: or try http://valgrind.org on GNU/linux and Apple Mac OS X to find memory corruption errors
[1]PETSC ERROR: likely location of problem given in stack below
[1]PETSC ERROR: --------------------- Stack Frames ------------------------------------
[1]PETSC ERROR: [0]PETSC ERROR: ------------------------------------------------------------------------
Note: The EXACT line numbers in the stack are not available,
[1]PETSC ERROR: INSTEAD the line number of the start of the function
[1]PETSC ERROR: is given.
[1]PETSC ERROR: --------------------- Error Message ------------------------------------
[1]PETSC ERROR: Signal received!
[1]PETSC ERROR: ------------------------------------------------------------------------
[1]PETSC ERROR: Petsc Release Version 3.1.0, Patch 3, Fri Jun  4 15:34:52 CDT 2010
[1]PETSC ERROR: See docs/changes/index.html for recent updates.
[1]PETSC ERROR: See docs/faq.html for hints about trouble shooting.
[1]PETSC ERROR: See docs/index.html for manual pages.
[1]PETSC ERROR: ------------------------------------------------------------------------
[1]PETSC ERROR: ./PetscTest on a linux-gnu named up0001 by hpc2231 Thu Jun 17 23:58:50 2010
[1]PETSC ERROR: Libraries linked from /home/hpc2231/lib/petsc-3.1-p3/linux-gnu-c-debug/lib
[1]PETSC ERROR: Configure run at Thu Jun 17 23:00:54 2010
[1]PETSC ERROR: Configure options LIBS="-limf -lm" --download-hypre=1 --with-fortran-interfaces=1
[1]PETSC ERROR: ------------------------------------------------------------------------
[1]PETSC ERROR: User provided function() line 0 in unknown directory unknown file
[0]PETSC ERROR: Caught signal number 11 SEGV: Segmentation Violation, probably memory access out of range
[0]PETSC ERROR: Try option -start_in_debugger or -on_error_attach_debugger
[0]PETSC ERROR: or see http://www.mcs.anl.gov/petsc/petsc-as/documentation/troubleshooting.html#Signal
[0]PETSC ERROR: or try http://valgrind.org on GNU/linux and Apple Mac OS X to find memory corruption errors
[0]PETSC ERROR:
--------------------------------------------------------------------------
MPI_ABORT was invoked on rank 1 in communicator MPI_COMM_WORLD
with errorcode 59.

NOTE: invoking MPI_ABORT causes Open MPI to kill all MPI processes.
You may or may not see output from other processes, depending on
exactly when Open MPI kills them.
--------------------------------------------------------------------------
likely location of problem given in stack below
[0]PETSC ERROR: --------------------- Stack Frames ------------------------------------
[0]PETSC ERROR: Note: The EXACT line numbers in the stack are not available,
[0]PETSC ERROR: INSTEAD the line number of the start of the function
[0]PETSC ERROR: is given.
[0]PETSC ERROR: --------------------- Error Message ------------------------------------
[0]PETSC ERROR: Signal received!
[0]PETSC ERROR: ------------------------------------------------------------------------
[0]PETSC ERROR: Petsc Release Version 3.1.0, Patch 3, Fri Jun  4 15:34:52 CDT 2010
[0]PETSC ERROR: See docs/changes/index.html for recent updates.
[0]PETSC ERROR: See docs/faq.html for hints about trouble shooting.
[0]PETSC ERROR: See docs/index.html for manual pages.
[0]PETSC ERROR: ------------------------------------------------------------------------
[0]PETSC ERROR: ./PetscTest on a linux-gnu named up0001 by hpc2231 Thu Jun 17 23:58:50 2010
[0]PETSC ERROR: Libraries linked from /home/hpc2231/lib/petsc-3.1-p3/linux-gnu-c-debug/lib
[0]PETSC ERROR: Configure run at Thu Jun 17 23:00:54 2010
[0]PETSC ERROR: Configure options LIBS="-limf -lm" --download-hypre=1 --with-fortran-interfaces=1
[0]PETSC ERROR: ------------------------------------------------------------------------
[0]PETSC ERROR: User provided function() line 0 in unknown directory unknown file
--------------------------------------------------------------------------
mpirun has exited due to process rank 1 with PID 10883 on
node up0001 exiting without calling "finalize". This may
have caused other processes in the application to be
terminated by signals sent by mpirun (as reported here).
--------------------------------------------------------------------------
[up0001:10881] 1 more process has sent help message help-mpi-api.txt / mpi-abort
[up0001:10881] Set MCA parameter "orte_base_help_aggregate" to 0 to see all help / error messages

****************************************************************************

I've been looking through the troubleshooting guide and the FAQ with no success, and now I'm stuck ....
Any ideas or suggestions Thanks Valerio -------------- next part -------------- An HTML attachment was scrubbed... URL: From logan.sankaran at hp.com Thu Jun 17 23:24:52 2010 From: logan.sankaran at hp.com (Sankaran, Loganathan (Solution Technology Services)) Date: Fri, 18 Jun 2010 04:24:52 +0000 Subject: [petsc-users] Segmentation Violation in a very simple fortran code In-Reply-To: <87F40E81-07D5-48B1-9561-DCAAE59D991C@me.queensu.ca> References: <87F40E81-07D5-48B1-9561-DCAAE59D991C@me.queensu.ca> Message-ID: <4C6CAE54C846144981BCE6F26B203FFA58DCD4BD70@GVW1098EXB.americas.hpqcorp.net> I am new to PETSc. What MPI and What compiler ? If it is Intel compiler, add -mcmodel=medium to the compile line and see that helps. Logan From: petsc-users-bounces at mcs.anl.gov [mailto:petsc-users-bounces at mcs.anl.gov] On Behalf Of Valerio Grazioso Sent: Thursday, June 17, 2010 11:20 PM To: petsc-users at mcs.anl.gov Subject: [petsc-users] Segmentation Violation in a very simple fortran code Hi, I'm stuck with an implementation of a 3D Poisson solver with PETSc in fortran90. I was getting strange Segmentation Violation and after some debugging I realized that the the problem was in the vector created with DACreateGlobalVector() So I've written a simple code: **************************************************************************** program PetscTest implicit none Vec q PetscScalar alpha DA da PetscErrorCode err PetscInt i3,i1 i3=4 i1=1 call DACreate3d(PETSC_COMM_WORLD,DA_NONPERIODIC,DA_STENCIL_STAR,i3,i3,i3,PETSC_DECIDE,PETSC_DECIDE,PETSC_DECIDE,i1,i1, & PETSC_NULL_INTEGER,PETSC_NULL_INTEGER,PETSC_NULL_INTEGER,da,err) call DACreateGlobalVector(da,q,err) alpha=1.0 call VecSet(q,alpha,err) call vecView(q,PETSC_VIEWER_STDOUT_WORLD) call VecDestroy(q,err); call PetscFinalize(PETSC_NULL_CHARACTER,err) end program **************************************************************************** And this is the output that i get if I run the executable with mpirun -n 2 ./PetscTest -da_view **************************************************************************** Processor [0] M 4 N 4 P 4 m 1 n 1 p 2 w 1 s 1 X range of indices: 0 4, Y range of indices: 0 4, Z range of indices: 0 2 Processor [1] M 4 N 4 P 4 m 1 n 1 p 2 w 1 s 1 X range of indices: 0 4, Y range of indices: 0 4, Z range of indices: 2 4 Process [0] 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 Process [1] 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 [1]PETSC ERROR: ------------------------------------------------------------------------ [1]PETSC ERROR: Caught signal number 11 SEGV: Segmentation Violation, probably memory access out of range [1]PETSC ERROR: Try option -start_in_debugger or -on_error_attach_debugger [1]PETSC ERROR: or see http://www.mcs.anl.gov/petsc/petsc-as/documentation/troubleshooting.html#Signal[1]PETSC ERROR: or try http://valgrind.org on GNU/linux and Apple Mac OS X to find memory corruption errors [1]PETSC ERROR: likely location of problem given in stack below [1]PETSC ERROR: --------------------- Stack Frames ------------------------------------ [1]PETSC ERROR: [0]PETSC ERROR: ------------------------------------------------------------------------ Note: The EXACT line numbers in the stack are not available, [1]PETSC ERROR: INSTEAD the line number of the start of the function [1]PETSC ERROR: is given. [1]PETSC ERROR: --------------------- Error Message ------------------------------------ [1]PETSC ERROR: Signal received! 
[1]PETSC ERROR: ------------------------------------------------------------------------ [1]PETSC ERROR: Petsc Release Version 3.1.0, Patch 3, Fri Jun 4 15:34:52 CDT 2010 [1]PETSC ERROR: See docs/changes/index.html for recent updates. [1]PETSC ERROR: See docs/faq.html for hints about trouble shooting. [1]PETSC ERROR: See docs/index.html for manual pages. [1]PETSC ERROR: ------------------------------------------------------------------------ [1]PETSC ERROR: ./PetscTest on a linux-gnu named up0001 by hpc2231 Thu Jun 17 23:58:50 2010 [1]PETSC ERROR: Libraries linked from /home/hpc2231/lib/petsc-3.1-p3/linux-gnu-c-debug/lib [1]PETSC ERROR: Configure run at Thu Jun 17 23:00:54 2010 [1]PETSC ERROR: Configure options LIBS="-limf -lm" --download-hypre=1 --with-fortran-interfaces=1 [1]PETSC ERROR: ------------------------------------------------------------------------ [1]PETSC ERROR: User provided function() line 0 in unknown directory unknown file [0]PETSC ERROR: Caught signal number 11 SEGV: Segmentation Violation, probably memory access out of range [0]PETSC ERROR: Try option -start_in_debugger or -on_error_attach_debugger [0]PETSC ERROR: or see http://www.mcs.anl.gov/petsc/petsc-as/documentation/troubleshooting.html#Signal[0]PETSC ERROR: or try http://valgrind.org on GNU/linux and Apple Mac OS X to find memory corruption errors [0]PETSC ERROR: -------------------------------------------------------------------------- MPI_ABORT was invoked on rank 1 in communicator MPI_COMM_WORLD with errorcode 59. NOTE: invoking MPI_ABORT causes Open MPI to kill all MPI processes. You may or may not see output from other processes, depending on exactly when Open MPI kills them. -------------------------------------------------------------------------- likely location of problem given in stack below [0]PETSC ERROR: --------------------- Stack Frames ------------------------------------ [0]PETSC ERROR: Note: The EXACT line numbers in the stack are not available, [0]PETSC ERROR: INSTEAD the line number of the start of the function [0]PETSC ERROR: is given. [0]PETSC ERROR: --------------------- Error Message ------------------------------------ [0]PETSC ERROR: Signal received! [0]PETSC ERROR: ------------------------------------------------------------------------ [0]PETSC ERROR: Petsc Release Version 3.1.0, Patch 3, Fri Jun 4 15:34:52 CDT 2010 [0]PETSC ERROR: See docs/changes/index.html for recent updates. [0]PETSC ERROR: See docs/faq.html for hints about trouble shooting. [0]PETSC ERROR: See docs/index.html for manual pages. [0]PETSC ERROR: ------------------------------------------------------------------------ [0]PETSC ERROR: ./PetscTest on a linux-gnu named up0001 by hpc2231 Thu Jun 17 23:58:50 2010 [0]PETSC ERROR: Libraries linked from /home/hpc2231/lib/petsc-3.1-p3/linux-gnu-c-debug/lib [0]PETSC ERROR: Configure run at Thu Jun 17 23:00:54 2010 [0]PETSC ERROR: Configure options LIBS="-limf -lm" --download-hypre=1 --with-fortran-interfaces=1 [0]PETSC ERROR: ------------------------------------------------------------------------ [0]PETSC ERROR: User provided function() line 0 in unknown directory unknown file -------------------------------------------------------------------------- mpirun has exited due to process rank 1 with PID 10883 on node up0001 exiting without calling "finalize". This may have caused other processes in the application to be terminated by signals sent by mpirun (as reported here). 
-------------------------------------------------------------------------- [up0001:10881] 1 more process has sent help message help-mpi-api.txt / mpi-abort [up0001:10881] Set MCA parameter "orte_base_help_aggregate" to 0 to see all help / error messages **************************************************************************** If been looking trough the troubleshooting and FAQ with no success and now I'm stuck .... Any ideas or suggestions Thanks Valerio -------------- next part -------------- An HTML attachment was scrubbed... URL: From knepley at gmail.com Thu Jun 17 23:30:19 2010 From: knepley at gmail.com (Matthew Knepley) Date: Thu, 17 Jun 2010 23:30:19 -0500 Subject: [petsc-users] Segmentation Violation in a very simple fortran code In-Reply-To: <87F40E81-07D5-48B1-9561-DCAAE59D991C@me.queensu.ca> References: <87F40E81-07D5-48B1-9561-DCAAE59D991C@me.queensu.ca> Message-ID: The VecView() has no 'err' argument. Matt On Thu, Jun 17, 2010 at 11:19 PM, Valerio Grazioso wrote: > Hi, I'm stuck with an implementation of a 3D Poisson solver with PETSc in > fortran90. > I was getting strange Segmentation Violation and after some debugging I > realized that the the problem was in the vector created with > > DACreateGlobalVector() > > So I've written a simple code: > > > **************************************************************************** > > program PetscTest > > implicit none > > Vec q > PetscScalar alpha > DA da > PetscErrorCode err > > PetscInt i3,i1 > > i3=4 > i1=1 > > call > DACreate3d(PETSC_COMM_WORLD,DA_NONPERIODIC,DA_STENCIL_STAR,i3,i3,i3,PETSC_DECIDE,PETSC_DECIDE,PETSC_DECIDE,i1,i1, > & > > PETSC_NULL_INTEGER,PETSC_NULL_INTEGER,PETSC_NULL_INTEGER,da,err) > > > call DACreateGlobalVector(da,q,err) > > alpha=1.0 > > call VecSet(q,alpha,err) > call vecView(q,PETSC_VIEWER_STDOUT_WORLD) > > call VecDestroy(q,err); > > call PetscFinalize(PETSC_NULL_CHARACTER,err) > > end program > > > **************************************************************************** > > > And this is the output that i get if I run the executable with > > mpirun -n 2 ./PetscTest -da_view > > > **************************************************************************** > > Processor [0] M 4 N 4 P 4 m 1 n 1 p 2 w 1 s 1 > X range of indices: 0 4, Y range of indices: 0 4, Z range of indices: 0 2 > Processor [1] M 4 N 4 P 4 m 1 n 1 p 2 w 1 s 1 > X range of indices: 0 4, Y range of indices: 0 4, Z range of indices: 2 4 > Process [0] > 1 > 1 > 1 > 1 > 1 > 1 > 1 > 1 > 1 > 1 > 1 > 1 > 1 > 1 > 1 > 1 > 1 > 1 > 1 > 1 > 1 > 1 > 1 > 1 > 1 > 1 > 1 > 1 > 1 > 1 > 1 > 1 > Process [1] > 1 > 1 > 1 > 1 > 1 > 1 > 1 > 1 > 1 > 1 > 1 > 1 > 1 > 1 > 1 > 1 > 1 > 1 > 1 > 1 > 1 > 1 > 1 > 1 > 1 > 1 > 1 > 1 > 1 > 1 > 1 > 1 > [1]PETSC ERROR: > ------------------------------------------------------------------------ > [1]PETSC ERROR: Caught signal number 11 SEGV: Segmentation Violation, > probably memory access out of range > [1]PETSC ERROR: Try option -start_in_debugger or -on_error_attach_debugger > [1]PETSC ERROR: or see > http://www.mcs.anl.gov/petsc/petsc-as/documentation/troubleshooting.html#Signal[1]PETSCERROR: or try > http://valgrind.org on GNU/linux and Apple Mac OS X to find memory > corruption errors > [1]PETSC ERROR: likely location of problem given in stack below > [1]PETSC ERROR: --------------------- Stack Frames > ------------------------------------ > [1]PETSC ERROR: [0]PETSC ERROR: > ------------------------------------------------------------------------ > Note: The EXACT line numbers in the stack are not available, > 
[1]PETSC ERROR: INSTEAD the line number of the start of the function > [1]PETSC ERROR: is given. > [1]PETSC ERROR: --------------------- Error Message > ------------------------------------ > [1]PETSC ERROR: Signal received! > [1]PETSC ERROR: > ------------------------------------------------------------------------ > [1]PETSC ERROR: Petsc Release Version 3.1.0, Patch 3, Fri Jun 4 15:34:52 > CDT 2010 > [1]PETSC ERROR: See docs/changes/index.html for recent updates. > [1]PETSC ERROR: See docs/faq.html for hints about trouble shooting. > [1]PETSC ERROR: See docs/index.html for manual pages. > [1]PETSC ERROR: > ------------------------------------------------------------------------ > [1]PETSC ERROR: ./PetscTest on a linux-gnu named up0001 by hpc2231 Thu Jun > 17 23:58:50 2010 > [1]PETSC ERROR: Libraries linked from > /home/hpc2231/lib/petsc-3.1-p3/linux-gnu-c-debug/lib > [1]PETSC ERROR: Configure run at Thu Jun 17 23:00:54 2010 > [1]PETSC ERROR: Configure options LIBS="-limf -lm" --download-hypre=1 > --with-fortran-interfaces=1 > [1]PETSC ERROR: > ------------------------------------------------------------------------ > [1]PETSC ERROR: User provided function() line 0 in unknown directory > unknown file > [0]PETSC ERROR: Caught signal number 11 SEGV: Segmentation Violation, > probably memory access out of range > [0]PETSC ERROR: Try option -start_in_debugger or -on_error_attach_debugger > [0]PETSC ERROR: or see > http://www.mcs.anl.gov/petsc/petsc-as/documentation/troubleshooting.html#Signal[0]PETSCERROR: or try > http://valgrind.org on GNU/linux and Apple Mac OS X to find memory > corruption errors > [0]PETSC ERROR: > -------------------------------------------------------------------------- > MPI_ABORT was invoked on rank 1 in communicator MPI_COMM_WORLD > with errorcode 59. > > NOTE: invoking MPI_ABORT causes Open MPI to kill all MPI processes. > You may or may not see output from other processes, depending on > exactly when Open MPI kills them. > -------------------------------------------------------------------------- > likely location of problem given in stack below > [0]PETSC ERROR: --------------------- Stack Frames > ------------------------------------ > [0]PETSC ERROR: Note: The EXACT line numbers in the stack are not > available, > [0]PETSC ERROR: INSTEAD the line number of the start of the function > [0]PETSC ERROR: is given. > [0]PETSC ERROR: --------------------- Error Message > ------------------------------------ > [0]PETSC ERROR: Signal received! > [0]PETSC ERROR: > ------------------------------------------------------------------------ > [0]PETSC ERROR: Petsc Release Version 3.1.0, Patch 3, Fri Jun 4 15:34:52 > CDT 2010 > [0]PETSC ERROR: See docs/changes/index.html for recent updates. > [0]PETSC ERROR: See docs/faq.html for hints about trouble shooting. > [0]PETSC ERROR: See docs/index.html for manual pages. 
> [0]PETSC ERROR: > ------------------------------------------------------------------------ > [0]PETSC ERROR: ./PetscTest on a linux-gnu named up0001 by hpc2231 Thu Jun > 17 23:58:50 2010 > [0]PETSC ERROR: Libraries linked from > /home/hpc2231/lib/petsc-3.1-p3/linux-gnu-c-debug/lib > [0]PETSC ERROR: Configure run at Thu Jun 17 23:00:54 2010 > [0]PETSC ERROR: Configure options LIBS="-limf -lm" --download-hypre=1 > --with-fortran-interfaces=1 > [0]PETSC ERROR: > ------------------------------------------------------------------------ > [0]PETSC ERROR: User provided function() line 0 in unknown directory > unknown file > -------------------------------------------------------------------------- > mpirun has exited due to process rank 1 with PID 10883 on > node up0001 exiting without calling "finalize". This may > have caused other processes in the application to be > terminated by signals sent by mpirun (as reported here). > -------------------------------------------------------------------------- > [up0001:10881] 1 more process has sent help message help-mpi-api.txt / > mpi-abort > [up0001:10881] Set MCA parameter "orte_base_help_aggregate" to 0 to see all > help / error messages > > > **************************************************************************** > > If been looking trough the troubleshooting and FAQ with no success and now > I'm stuck .... > > Any ideas or suggestions > > Thanks > > Valerio > > > > > > > > -- What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. -- Norbert Wiener -------------- next part -------------- An HTML attachment was scrubbed... URL: From graziosov at me.queensu.ca Thu Jun 17 23:43:52 2010 From: graziosov at me.queensu.ca (Valerio Grazioso) Date: Fri, 18 Jun 2010 00:43:52 -0400 Subject: [petsc-users] Segmentation Violation in a very simple fortran code In-Reply-To: References: <87F40E81-07D5-48B1-9561-DCAAE59D991C@me.queensu.ca> Message-ID: Ok... but adding it I get in the same position a different error: [0]PETSC ERROR: --------------------- Error Message ------------------------------------ [1]PETSC ERROR: [0]PETSC ERROR: Invalid argument! [0]PETSC ERROR: Wrong type of object: Parameter # 1! [0]PETSC ERROR: ------------------------------------------------------------------------ [0]PETSC ERROR: Petsc Release Version 3.1.0, Patch 3, Fri Jun 4 15:34:52 CDT 2010 [0]PETSC ERROR: --------------------- Error Message ------------------------------------ [1]PETSC ERROR: See docs/changes/index.html for recent updates. [0]PETSC ERROR: See docs/faq.html for hints about trouble shooting. [0]PETSC ERROR: See docs/index.html for manual pages. [0]PETSC ERROR: ------------------------------------------------------------------------ [0]PETSC ERROR: ./PetscTest on a linux-gnu named up0001 by hpc2231 Fri Jun 18 00:34:57 2010 [0]PETSC ERROR: Libraries linked from /home/hpc2231/lib/petsc-3.1-p3/linux-gnu-c-debug/lib Invalid argument! [1]PETSC ERROR: Wrong type of object: Parameter # 1! [1]PETSC ERROR: ------------------------------------------------------------------------ [1]PETSC ERROR: Petsc Release Version 3.1.0, Patch 3, Fri Jun 4 15:34:52 CDT 2010 [1]PETSC ERROR: See docs/changes/index.html for recent updates. [1]PETSC ERROR: See docs/faq.html for hints about trouble shooting. [1]PETSC ERROR: See docs/index.html for manual pages. 
[1]PETSC ERROR: ------------------------------------------------------------------------
[1]PETSC ERROR: ./PetscTest on a linux-gnu named up0001 by hpc2231 Fri Jun 18 00:34:57 2010
[1]PETSC ERROR: Libraries linked from /home/hpc2231/lib/petsc-3.1-p3/linux-gnu-c-debug/lib
[1]PETSC ERROR: Configure run at Thu Jun 17 23:00:54 2010
[1]PETSC ERROR: Configure options LIBS="-limf -lm" --download-hypre=1 --with-fortran-interfaces=1
[1]PETSC ERROR: ------------------------------------------------------------------------
[1]PETSC ERROR: PetscViewerDestroy() line 99 in src/sys/viewer/interface/view.c
[0]PETSC ERROR: Configure run at Thu Jun 17 23:00:54 2010
[0]PETSC ERROR: Configure options LIBS="-limf -lm" --download-hypre=1 --with-fortran-interfaces=1
[0]PETSC ERROR: ------------------------------------------------------------------------
[0]PETSC ERROR: PetscViewerDestroy() line 99 in src/sys/viewer/interface/view.c

Valerio

On 2010-06-18, at 12:30 AM, Matthew Knepley wrote:

> The VecView() has no 'err' argument.
>
> Matt
>
> On Thu, Jun 17, 2010 at 11:19 PM, Valerio Grazioso wrote:
>
> > Hi, I'm stuck with an implementation of a 3D Poisson solver with PETSc in fortran90.
> > [...]
> > Any ideas or suggestions
> >
> > Thanks
> >
> > Valerio

From graziosov at me.queensu.ca Thu Jun 17 23:57:46 2010
From: graziosov at me.queensu.ca (Valerio Grazioso)
Date: Fri, 18 Jun 2010 00:57:46 -0400
Subject: [petsc-users] Segmentation Violation in a very simple fortran code
In-Reply-To: <4C6CAE54C846144981BCE6F26B203FFA58DCD4BD70@GVW1098EXB.americas.hpqcorp.net>
References: <87F40E81-07D5-48B1-9561-DCAAE59D991C@me.queensu.ca>
	<4C6CAE54C846144981BCE6F26B203FFA58DCD4BD70@GVW1098EXB.americas.hpqcorp.net>
Message-ID: 

I am new to PETSc too.
OpenMPI and the Intel compiler (ifort).
Unfortunately -mcmodel=medium doesn't help.

Thanks anyway
Valerio

On 2010-06-18, at 12:24 AM, Sankaran, Loganathan (Solution Technology Services) wrote:

> I am new to PETSc. Which MPI and which compiler? If it is the Intel compiler, add -mcmodel=medium to the compile line and see if that helps.
>
> Logan
>
> From: petsc-users-bounces at mcs.anl.gov [mailto:petsc-users-bounces at mcs.anl.gov] On Behalf Of Valerio Grazioso
> Sent: Thursday, June 17, 2010 11:20 PM
> To: petsc-users at mcs.anl.gov
> Subject: [petsc-users] Segmentation Violation in a very simple fortran code
>
> Hi, I'm stuck with an implementation of a 3D Poisson solver with PETSc in fortran90.
> I was getting a strange Segmentation Violation and after some debugging I realized that the problem was in the vector created with
>
> DACreateGlobalVector()
>
> So I've written a simple code:
>
> [...]
>
> I've been looking through the troubleshooting and FAQ with no success and now I'm stuck ....
>
> Any ideas or suggestions
>
> Thanks
>
> Valerio

From knepley at gmail.com Fri Jun 18 00:01:09 2010
From: knepley at gmail.com (Matthew Knepley)
Date: Fri, 18 Jun 2010 00:01:09 -0500
Subject: [petsc-users] Segmentation Violation in a very simple fortran code
In-Reply-To: 
References: <87F40E81-07D5-48B1-9561-DCAAE59D991C@me.queensu.ca>
Message-ID: 

This is obviously not the whole code, because you are calling PetscViewerDestroy()
somewhere with a bad viewer. Please check the return codes of all PETSc calls.

Thanks,

Matt
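For example, a minimal sketch of checking the return code after each call in Fortran
(the error handling below is only illustrative; with a preprocessed .F90 source the
CHKERRQ(err) macro from the PETSc Fortran includes can be used instead):

   call VecSet(q,alpha,err)
   ! stop at the first failing call instead of running on with a broken object
   if (err .ne. 0) then
      print *, 'VecSet failed with error code ', err
      stop
   endif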
On Thu, Jun 17, 2010 at 11:43 PM, Valerio Grazioso wrote:

> Ok... but adding it I get in the same position a different error:
>
> [0]PETSC ERROR: Invalid argument!
> [0]PETSC ERROR: Wrong type of object: Parameter # 1!
> [...]
> [0]PETSC ERROR: PetscViewerDestroy() line 99 in src/sys/viewer/interface/view.c
>
> Valerio

-- 
What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead.
-- Norbert Wiener

From logan.sankaran at hp.com Fri Jun 18 00:04:02 2010
From: logan.sankaran at hp.com (Sankaran, Loganathan (Solution Technology Services))
Date: Fri, 18 Jun 2010 05:04:02 +0000
Subject: [petsc-users] Segmentation Violation in a very simple fortran code
In-Reply-To: 
References: <87F40E81-07D5-48B1-9561-DCAAE59D991C@me.queensu.ca>
	<4C6CAE54C846144981BCE6F26B203FFA58DCD4BD70@GVW1098EXB.americas.hpqcorp.net>
Message-ID: <4C6CAE54C846144981BCE6F26B203FFA58DCD4BD79@GVW1098EXB.americas.hpqcorp.net>

What about trying the flag -heap-arrays 1024 ?

Logan
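For example, an illustrative compile line (the mpif90 wrapper name and the file name
PetscTest.F90 are assumptions here; add the usual PETSc include and link flags for
your installation):

   # both stack-related flags suggested in this thread, on one compile line
   mpif90 -heap-arrays 1024 -mcmodel=medium -o PetscTest PetscTest.F90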
From: petsc-users-bounces at mcs.anl.gov [mailto:petsc-users-bounces at mcs.anl.gov] On Behalf Of Valerio Grazioso
Sent: Thursday, June 17, 2010 11:20 PM
To: petsc-users at mcs.anl.gov
Subject: [petsc-users] Segmentation Violation in a very simple fortran code

Hi, I'm stuck with an implementation of a 3D Poisson solver with PETSc in fortran90.
[...]

Any ideas or suggestions

Thanks

Valerio

From graziosov at me.queensu.ca Fri Jun 18 00:23:39 2010
From: graziosov at me.queensu.ca (Valerio Grazioso)
Date: Fri, 18 Jun 2010 01:23:39 -0400
Subject: [petsc-users] Segmentation Violation in a very simple fortran code
In-Reply-To: 
References: <87F40E81-07D5-48B1-9561-DCAAE59D991C@me.queensu.ca>
Message-ID: 

You are right, that was a leftover of a previous test made with a viewer.
Commenting out that line, I get the previous result:

SEGV: Segmentation Violation

Sorry for the previous mistake...
Thanks
Valerio

On 2010-06-18, at 1:01 AM, Matthew Knepley wrote:

> This is obviously not the whole code, because you are calling PetscViewerDestroy()
> somewhere with a bad viewer. Please check the return codes of all PETSc calls.
>
> Thanks,
>
> Matt
>
> [...]
From graziosov at me.queensu.ca Fri Jun 18 00:31:35 2010
From: graziosov at me.queensu.ca (Valerio Grazioso)
Date: Fri, 18 Jun 2010 01:31:35 -0400
Subject: [petsc-users] Segmentation Violation in a very simple fortran code
In-Reply-To: <4C6CAE54C846144981BCE6F26B203FFA58DCD4BD79@GVW1098EXB.americas.hpqcorp.net>
References: <87F40E81-07D5-48B1-9561-DCAAE59D991C@me.queensu.ca>
	<4C6CAE54C846144981BCE6F26B203FFA58DCD4BD70@GVW1098EXB.americas.hpqcorp.net>
	<4C6CAE54C846144981BCE6F26B203FFA58DCD4BD79@GVW1098EXB.americas.hpqcorp.net>
Message-ID: <78DCCD2E-2CC3-40ED-912B-7ABB1C810ECD@me.queensu.ca>

Tried it; it doesn't work.

Thanks
Valerio

On 2010-06-18, at 1:04 AM, Sankaran, Loganathan (Solution Technology Services) wrote:

> What about trying the flag -heap-arrays 1024 ?
>
> Logan
>
> [...]
From knepley at gmail.com Fri Jun 18 07:32:04 2010
From: knepley at gmail.com (Matthew Knepley)
Date: Fri, 18 Jun 2010 07:32:04 -0500
Subject: [petsc-users] Segmentation Violation in a very simple fortran code
In-Reply-To: 
References: <87F40E81-07D5-48B1-9561-DCAAE59D991C@me.queensu.ca>
Message-ID: 

I suspect that

  a) Not all arguments are correct; Fortran does no checking

  b) Some object was not created

Use the debugger, -start_in_debugger, to investigate the SEGV.

  Matt
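For example, the debugger option can simply be appended to the run line used earlier
in this thread:

   mpirun -n 2 ./PetscTest -da_view -start_in_debugger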
On Fri, Jun 18, 2010 at 12:23 AM, Valerio Grazioso wrote:

> You are right, that was a leftover of a previous test made with a viewer.
> Commenting out that line, I get the previous result:
>
> SEGV: Segmentation Violation
>
> Sorry for the previous mistake...
> Thanks
> Valerio
>
> [...]

-- 
What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead.
-- Norbert Wiener

From bsmith at mcs.anl.gov Fri Jun 18 09:01:40 2010
From: bsmith at mcs.anl.gov (Barry Smith)
Date: Fri, 18 Jun 2010 09:01:40 -0500
Subject: [petsc-users] Segmentation Violation in a very simple fortran code
In-Reply-To: <87F40E81-07D5-48B1-9561-DCAAE59D991C@me.queensu.ca>
References: <87F40E81-07D5-48B1-9561-DCAAE59D991C@me.queensu.ca>
Message-ID: <1869293B-D3E3-45A6-BA7F-AAE34710976F@mcs.anl.gov>

> call PetscFinalize(PETSC_NULL_CHARACTER,err)

PetscFinalize() does not take a string argument; you should have

   call PetscFinalize(err)
> I was getting strange Segmentation Violation and after some debugging I realized that the the problem was in the vector created with > > DACreateGlobalVector() > > So I've written a simple code: > > **************************************************************************** > > program PetscTest > > implicit none > > Vec q > PetscScalar alpha > DA da > PetscErrorCode err > PetscInt i3,i1 > > i3=4 > i1=1 > > call DACreate3d(PETSC_COMM_WORLD,DA_NONPERIODIC,DA_STENCIL_STAR,i3,i3,i3,PETSC_DECIDE,PETSC_DECIDE,PETSC_DECIDE,i1,i1, & > PETSC_NULL_INTEGER,PETSC_NULL_INTEGER,PETSC_NULL_INTEGER,da,err) > > call DACreateGlobalVector(da,q,err) > > alpha=1.0 > > call VecSet(q,alpha,err) > call vecView(q,PETSC_VIEWER_STDOUT_WORLD) > > call VecDestroy(q,err); > > call PetscFinalize(PETSC_NULL_CHARACTER,err) > > end program > > **************************************************************************** > > > And this is the output that i get if I run the executable with > > mpirun -n 2 ./PetscTest -da_view > > **************************************************************************** > > Processor [0] M 4 N 4 P 4 m 1 n 1 p 2 w 1 s 1 > X range of indices: 0 4, Y range of indices: 0 4, Z range of indices: 0 2 > Processor [1] M 4 N 4 P 4 m 1 n 1 p 2 w 1 s 1 > X range of indices: 0 4, Y range of indices: 0 4, Z range of indices: 2 4 > Process [0] > 1 > 1 > 1 > 1 > 1 > 1 > 1 > 1 > 1 > 1 > 1 > 1 > 1 > 1 > 1 > 1 > 1 > 1 > 1 > 1 > 1 > 1 > 1 > 1 > 1 > 1 > 1 > 1 > 1 > 1 > 1 > 1 > Process [1] > 1 > 1 > 1 > 1 > 1 > 1 > 1 > 1 > 1 > 1 > 1 > 1 > 1 > 1 > 1 > 1 > 1 > 1 > 1 > 1 > 1 > 1 > 1 > 1 > 1 > 1 > 1 > 1 > 1 > 1 > 1 > 1 > [1]PETSC ERROR: ------------------------------------------------------------------------ > [1]PETSC ERROR: Caught signal number 11 SEGV: Segmentation Violation, probably memory access out of range > [1]PETSC ERROR: Try option -start_in_debugger or -on_error_attach_debugger > [1]PETSC ERROR: or see http://www.mcs.anl.gov/petsc/petsc-as/documentation/troubleshooting.html#Signal[1]PETSC ERROR: or try http://valgrind.org on GNU/linux and Apple Mac OS X to find memory corruption errors > [1]PETSC ERROR: likely location of problem given in stack below > [1]PETSC ERROR: --------------------- Stack Frames ------------------------------------ > [1]PETSC ERROR: [0]PETSC ERROR: ------------------------------------------------------------------------ > Note: The EXACT line numbers in the stack are not available, > [1]PETSC ERROR: INSTEAD the line number of the start of the function > [1]PETSC ERROR: is given. > [1]PETSC ERROR: --------------------- Error Message ------------------------------------ > [1]PETSC ERROR: Signal received! > [1]PETSC ERROR: ------------------------------------------------------------------------ > [1]PETSC ERROR: Petsc Release Version 3.1.0, Patch 3, Fri Jun 4 15:34:52 CDT 2010 > [1]PETSC ERROR: See docs/changes/index.html for recent updates. > [1]PETSC ERROR: See docs/faq.html for hints about trouble shooting. > [1]PETSC ERROR: See docs/index.html for manual pages. 
> [1]PETSC ERROR: ------------------------------------------------------------------------ > [1]PETSC ERROR: ./PetscTest on a linux-gnu named up0001 by hpc2231 Thu Jun 17 23:58:50 2010 > [1]PETSC ERROR: Libraries linked from /home/hpc2231/lib/petsc-3.1-p3/linux-gnu-c-debug/lib > [1]PETSC ERROR: Configure run at Thu Jun 17 23:00:54 2010 > [1]PETSC ERROR: Configure options LIBS="-limf -lm" --download-hypre=1 --with-fortran-interfaces=1 > [1]PETSC ERROR: ------------------------------------------------------------------------ > [1]PETSC ERROR: User provided function() line 0 in unknown directory unknown file > [0]PETSC ERROR: Caught signal number 11 SEGV: Segmentation Violation, probably memory access out of range > [0]PETSC ERROR: Try option -start_in_debugger or -on_error_attach_debugger > [0]PETSC ERROR: or see http://www.mcs.anl.gov/petsc/petsc-as/documentation/troubleshooting.html#Signal[0]PETSC ERROR: or try http://valgrind.org on GNU/linux and Apple Mac OS X to find memory corruption errors > [0]PETSC ERROR: -------------------------------------------------------------------------- > MPI_ABORT was invoked on rank 1 in communicator MPI_COMM_WORLD > with errorcode 59. > > NOTE: invoking MPI_ABORT causes Open MPI to kill all MPI processes. > You may or may not see output from other processes, depending on > exactly when Open MPI kills them. > -------------------------------------------------------------------------- > likely location of problem given in stack below > [0]PETSC ERROR: --------------------- Stack Frames ------------------------------------ > [0]PETSC ERROR: Note: The EXACT line numbers in the stack are not available, > [0]PETSC ERROR: INSTEAD the line number of the start of the function > [0]PETSC ERROR: is given. > [0]PETSC ERROR: --------------------- Error Message ------------------------------------ > [0]PETSC ERROR: Signal received! > [0]PETSC ERROR: ------------------------------------------------------------------------ > [0]PETSC ERROR: Petsc Release Version 3.1.0, Patch 3, Fri Jun 4 15:34:52 CDT 2010 > [0]PETSC ERROR: See docs/changes/index.html for recent updates. > [0]PETSC ERROR: See docs/faq.html for hints about trouble shooting. > [0]PETSC ERROR: See docs/index.html for manual pages. > [0]PETSC ERROR: ------------------------------------------------------------------------ > [0]PETSC ERROR: ./PetscTest on a linux-gnu named up0001 by hpc2231 Thu Jun 17 23:58:50 2010 > [0]PETSC ERROR: Libraries linked from /home/hpc2231/lib/petsc-3.1-p3/linux-gnu-c-debug/lib > [0]PETSC ERROR: Configure run at Thu Jun 17 23:00:54 2010 > [0]PETSC ERROR: Configure options LIBS="-limf -lm" --download-hypre=1 --with-fortran-interfaces=1 > [0]PETSC ERROR: ------------------------------------------------------------------------ > [0]PETSC ERROR: User provided function() line 0 in unknown directory unknown file > -------------------------------------------------------------------------- > mpirun has exited due to process rank 1 with PID 10883 on > node up0001 exiting without calling "finalize". This may > have caused other processes in the application to be > terminated by signals sent by mpirun (as reported here). 
> -------------------------------------------------------------------------- > [up0001:10881] 1 more process has sent help message help-mpi-api.txt / mpi-abort > [up0001:10881] Set MCA parameter "orte_base_help_aggregate" to 0 to see all help / error messages > > **************************************************************************** > > If been looking trough the troubleshooting and FAQ with no success and now I'm stuck .... > > Any ideas or suggestions > > Thanks > > Valerio > > > > > > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From s.kramer at imperial.ac.uk Fri Jun 18 11:35:27 2010 From: s.kramer at imperial.ac.uk (Stephan Kramer) Date: Fri, 18 Jun 2010 17:35:27 +0100 Subject: [petsc-users] reporting failing pcksp solves Message-ID: <4C1BA04F.1030009@imperial.ac.uk> Dear all, Is there a way in petsc, when performing inner solves like PCKSP or MatSchurComplementGetKSP, to make the outer solve stop immediately and report back a negative convergence reason? I find that often when such "inner solves" fail, the outer solve happily continues and sometimes falsely reports convergence due to the preconditioner becoming rank deficient. I'd like our code, using petsc, to be able to trap that sort of situations and give a suitable error message. Cheers Stephan From knepley at gmail.com Fri Jun 18 17:09:03 2010 From: knepley at gmail.com (Matthew Knepley) Date: Fri, 18 Jun 2010 17:09:03 -0500 Subject: [petsc-users] reporting failing pcksp solves In-Reply-To: <4C1BA04F.1030009@imperial.ac.uk> References: <4C1BA04F.1030009@imperial.ac.uk> Message-ID: You can install a convergence test that just calls the default test, and then does SETERRQ if it fails. Matt On Fri, Jun 18, 2010 at 11:35 AM, Stephan Kramer wrote: > Dear all, > > Is there a way in petsc, when performing inner solves like PCKSP or > MatSchurComplementGetKSP, to make the outer solve stop immediately and > report back a negative convergence reason? I find that often when such > "inner solves" fail, the outer solve happily continues and sometimes falsely > reports convergence due to the preconditioner becoming rank deficient. I'd > like our code, using petsc, to be able to trap that sort of situations and > give a suitable error message. > > Cheers > Stephan > -- What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. -- Norbert Wiener -------------- next part -------------- An HTML attachment was scrubbed... URL: From graziosov at me.queensu.ca Fri Jun 18 17:16:06 2010 From: graziosov at me.queensu.ca (Valerio Grazioso) Date: Fri, 18 Jun 2010 18:16:06 -0400 Subject: [petsc-users] Segmentation Violation in a very simple fortran code In-Reply-To: References: <87F40E81-07D5-48B1-9561-DCAAE59D991C@me.queensu.ca> Message-ID: <0E862E58-F64C-4369-8CC2-C74E5AD0F0B9@me.queensu.ca> Hi Matt, you were right, there was a stupid error with a missing argument in the VecView() routine. Thanks a lot Valerio On 2010-06-18, at 8:32 AM, Matthew Knepley wrote: > I suspect that > > a) Not all arguments are correct, Fortran does not checking > > b) Some object was not created > > Use the debugger, -start_in_debugger, to investigate the SEGV > > Matt > > On Fri, Jun 18, 2010 at 12:23 AM, Valerio Grazioso wrote: > You are right, that was a leftover of a previous test made with a viewer. > Commenting that line I get the previous result > > SEGV: Segmentation Violation > > Sorry for the previous mistake... 
> Thanks > Valerio > > On 2010-06-18, at 1:01 AM, Matthew Knepley wrote: > >> This is obviously not the whole code because you are calling PetscViewerDestroy() >> somewhere with a bad viewer. Please check the return codes of all PETSc calls. >> >> THanks, >> >> Matt >> >> On Thu, Jun 17, 2010 at 11:43 PM, Valerio Grazioso wrote: >> Ok... but adding it I get in the same position a different error: >> >> [0]PETSC ERROR: --------------------- Error Message ------------------------------------ >> [1]PETSC ERROR: [0]PETSC ERROR: Invalid argument! >> [0]PETSC ERROR: Wrong type of object: Parameter # 1! >> [0]PETSC ERROR: ------------------------------------------------------------------------ >> [0]PETSC ERROR: Petsc Release Version 3.1.0, Patch 3, Fri Jun 4 15:34:52 CDT 2010 >> [0]PETSC ERROR: --------------------- Error Message ------------------------------------ >> [1]PETSC ERROR: See docs/changes/index.html for recent updates. >> [0]PETSC ERROR: See docs/faq.html for hints about trouble shooting. >> [0]PETSC ERROR: See docs/index.html for manual pages. >> [0]PETSC ERROR: ------------------------------------------------------------------------ >> [0]PETSC ERROR: ./PetscTest on a linux-gnu named up0001 by hpc2231 Fri Jun 18 00:34:57 2010 >> [0]PETSC ERROR: Libraries linked from /home/hpc2231/lib/petsc-3.1-p3/linux-gnu-c-debug/lib >> Invalid argument! >> [1]PETSC ERROR: Wrong type of object: Parameter # 1! >> [1]PETSC ERROR: ------------------------------------------------------------------------ >> [1]PETSC ERROR: Petsc Release Version 3.1.0, Patch 3, Fri Jun 4 15:34:52 CDT 2010 >> [1]PETSC ERROR: See docs/changes/index.html for recent updates. >> [1]PETSC ERROR: See docs/faq.html for hints about trouble shooting. >> [1]PETSC ERROR: See docs/index.html for manual pages. >> [1]PETSC ERROR: ------------------------------------------------------------------------ >> [1]PETSC ERROR: ./PetscTest on a linux-gnu named up0001 by hpc2231 Fri Jun 18 00:34:57 2010 >> [1]PETSC ERROR: Libraries linked from /home/hpc2231/lib/petsc-3.1-p3/linux-gnu-c-debug/lib >> [1]PETSC ERROR: Configure run at Thu Jun 17 23:00:54 2010 >> [1]PETSC ERROR: Configure options LIBS="-limf -lm" --download-hypre=1 --with-fortran-interfaces=1 >> [1]PETSC ERROR: ------------------------------------------------------------------------ >> [1]PETSC ERROR: PetscViewerDestroy() line 99 in src/sys/viewer/interface/view.c >> [0]PETSC ERROR: Configure run at Thu Jun 17 23:00:54 2010 >> [0]PETSC ERROR: Configure options LIBS="-limf -lm" --download-hypre=1 --with-fortran-interfaces=1 >> [0]PETSC ERROR: ------------------------------------------------------------------------ >> [0]PETSC ERROR: PetscViewerDestroy() line 99 in src/sys/viewer/interface/view.c >> >> >> Valerio >> >> >> On 2010-06-18, at 12:30 AM, Matthew Knepley wrote: >> >>> The VecView() has no 'err' argument. >>> >>> Matt >>> >>> On Thu, Jun 17, 2010 at 11:19 PM, Valerio Grazioso wrote: >>> Hi, I'm stuck with an implementation of a 3D Poisson solver with PETSc in fortran90. 
>>> I was getting strange Segmentation Violation and after some debugging I realized that the the problem was in the vector created with >>> >>> DACreateGlobalVector() >>> >>> So I've written a simple code: >>> >>> **************************************************************************** >>> >>> program PetscTest >>> >>> implicit none >>> >>> Vec q >>> PetscScalar alpha >>> DA da >>> PetscErrorCode err >>> PetscInt i3,i1 >>> >>> i3=4 >>> i1=1 >>> >>> call DACreate3d(PETSC_COMM_WORLD,DA_NONPERIODIC,DA_STENCIL_STAR,i3,i3,i3,PETSC_DECIDE,PETSC_DECIDE,PETSC_DECIDE,i1,i1, & >>> PETSC_NULL_INTEGER,PETSC_NULL_INTEGER,PETSC_NULL_INTEGER,da,err) >>> >>> call DACreateGlobalVector(da,q,err) >>> >>> alpha=1.0 >>> >>> call VecSet(q,alpha,err) >>> call vecView(q,PETSC_VIEWER_STDOUT_WORLD) >>> >>> call VecDestroy(q,err); >>> >>> call PetscFinalize(PETSC_NULL_CHARACTER,err) >>> >>> end program >>> >>> **************************************************************************** >>> >>> >>> And this is the output that i get if I run the executable with >>> >>> mpirun -n 2 ./PetscTest -da_view >>> >>> **************************************************************************** >>> >>> Processor [0] M 4 N 4 P 4 m 1 n 1 p 2 w 1 s 1 >>> X range of indices: 0 4, Y range of indices: 0 4, Z range of indices: 0 2 >>> Processor [1] M 4 N 4 P 4 m 1 n 1 p 2 w 1 s 1 >>> X range of indices: 0 4, Y range of indices: 0 4, Z range of indices: 2 4 >>> Process [0] >>> 1 >>> 1 >>> 1 >>> 1 >>> 1 >>> 1 >>> 1 >>> 1 >>> 1 >>> 1 >>> 1 >>> 1 >>> 1 >>> 1 >>> 1 >>> 1 >>> 1 >>> 1 >>> 1 >>> 1 >>> 1 >>> 1 >>> 1 >>> 1 >>> 1 >>> 1 >>> 1 >>> 1 >>> 1 >>> 1 >>> 1 >>> 1 >>> Process [1] >>> 1 >>> 1 >>> 1 >>> 1 >>> 1 >>> 1 >>> 1 >>> 1 >>> 1 >>> 1 >>> 1 >>> 1 >>> 1 >>> 1 >>> 1 >>> 1 >>> 1 >>> 1 >>> 1 >>> 1 >>> 1 >>> 1 >>> 1 >>> 1 >>> 1 >>> 1 >>> 1 >>> 1 >>> 1 >>> 1 >>> 1 >>> 1 >>> [1]PETSC ERROR: ------------------------------------------------------------------------ >>> [1]PETSC ERROR: Caught signal number 11 SEGV: Segmentation Violation, probably memory access out of range >>> [1]PETSC ERROR: Try option -start_in_debugger or -on_error_attach_debugger >>> [1]PETSC ERROR: or see http://www.mcs.anl.gov/petsc/petsc-as/documentation/troubleshooting.html#Signal[1]PETSC ERROR: or try http://valgrind.org on GNU/linux and Apple Mac OS X to find memory corruption errors >>> [1]PETSC ERROR: likely location of problem given in stack below >>> [1]PETSC ERROR: --------------------- Stack Frames ------------------------------------ >>> [1]PETSC ERROR: [0]PETSC ERROR: ------------------------------------------------------------------------ >>> Note: The EXACT line numbers in the stack are not available, >>> [1]PETSC ERROR: INSTEAD the line number of the start of the function >>> [1]PETSC ERROR: is given. >>> [1]PETSC ERROR: --------------------- Error Message ------------------------------------ >>> [1]PETSC ERROR: Signal received! >>> [1]PETSC ERROR: ------------------------------------------------------------------------ >>> [1]PETSC ERROR: Petsc Release Version 3.1.0, Patch 3, Fri Jun 4 15:34:52 CDT 2010 >>> [1]PETSC ERROR: See docs/changes/index.html for recent updates. >>> [1]PETSC ERROR: See docs/faq.html for hints about trouble shooting. >>> [1]PETSC ERROR: See docs/index.html for manual pages. 
>>> [1]PETSC ERROR: ------------------------------------------------------------------------ >>> [1]PETSC ERROR: ./PetscTest on a linux-gnu named up0001 by hpc2231 Thu Jun 17 23:58:50 2010 >>> [1]PETSC ERROR: Libraries linked from /home/hpc2231/lib/petsc-3.1-p3/linux-gnu-c-debug/lib >>> [1]PETSC ERROR: Configure run at Thu Jun 17 23:00:54 2010 >>> [1]PETSC ERROR: Configure options LIBS="-limf -lm" --download-hypre=1 --with-fortran-interfaces=1 >>> [1]PETSC ERROR: ------------------------------------------------------------------------ >>> [1]PETSC ERROR: User provided function() line 0 in unknown directory unknown file >>> [0]PETSC ERROR: Caught signal number 11 SEGV: Segmentation Violation, probably memory access out of range >>> [0]PETSC ERROR: Try option -start_in_debugger or -on_error_attach_debugger >>> [0]PETSC ERROR: or see http://www.mcs.anl.gov/petsc/petsc-as/documentation/troubleshooting.html#Signal[0]PETSC ERROR: or try http://valgrind.org on GNU/linux and Apple Mac OS X to find memory corruption errors >>> [0]PETSC ERROR: -------------------------------------------------------------------------- >>> MPI_ABORT was invoked on rank 1 in communicator MPI_COMM_WORLD >>> with errorcode 59. >>> >>> NOTE: invoking MPI_ABORT causes Open MPI to kill all MPI processes. >>> You may or may not see output from other processes, depending on >>> exactly when Open MPI kills them. >>> -------------------------------------------------------------------------- >>> likely location of problem given in stack below >>> [0]PETSC ERROR: --------------------- Stack Frames ------------------------------------ >>> [0]PETSC ERROR: Note: The EXACT line numbers in the stack are not available, >>> [0]PETSC ERROR: INSTEAD the line number of the start of the function >>> [0]PETSC ERROR: is given. >>> [0]PETSC ERROR: --------------------- Error Message ------------------------------------ >>> [0]PETSC ERROR: Signal received! >>> [0]PETSC ERROR: ------------------------------------------------------------------------ >>> [0]PETSC ERROR: Petsc Release Version 3.1.0, Patch 3, Fri Jun 4 15:34:52 CDT 2010 >>> [0]PETSC ERROR: See docs/changes/index.html for recent updates. >>> [0]PETSC ERROR: See docs/faq.html for hints about trouble shooting. >>> [0]PETSC ERROR: See docs/index.html for manual pages. >>> [0]PETSC ERROR: ------------------------------------------------------------------------ >>> [0]PETSC ERROR: ./PetscTest on a linux-gnu named up0001 by hpc2231 Thu Jun 17 23:58:50 2010 >>> [0]PETSC ERROR: Libraries linked from /home/hpc2231/lib/petsc-3.1-p3/linux-gnu-c-debug/lib >>> [0]PETSC ERROR: Configure run at Thu Jun 17 23:00:54 2010 >>> [0]PETSC ERROR: Configure options LIBS="-limf -lm" --download-hypre=1 --with-fortran-interfaces=1 >>> [0]PETSC ERROR: ------------------------------------------------------------------------ >>> [0]PETSC ERROR: User provided function() line 0 in unknown directory unknown file >>> -------------------------------------------------------------------------- >>> mpirun has exited due to process rank 1 with PID 10883 on >>> node up0001 exiting without calling "finalize". This may >>> have caused other processes in the application to be >>> terminated by signals sent by mpirun (as reported here). 
>>> -------------------------------------------------------------------------- >>> [up0001:10881] 1 more process has sent help message help-mpi-api.txt / mpi-abort >>> [up0001:10881] Set MCA parameter "orte_base_help_aggregate" to 0 to see all help / error messages >>> >>> **************************************************************************** >>> >>> If been looking trough the troubleshooting and FAQ with no success and now I'm stuck .... >>> >>> Any ideas or suggestions >>> >>> Thanks >>> >>> Valerio >>> >>> >>> >>> >>> >>> >>> >>> >>> >>> >>> -- >>> What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. >>> -- Norbert Wiener >> >> >> >> >> -- >> What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. >> -- Norbert Wiener > > > > > -- > What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. > -- Norbert Wiener -------------- next part -------------- An HTML attachment was scrubbed... URL: From bsmith at mcs.anl.gov Sat Jun 19 23:07:25 2010 From: bsmith at mcs.anl.gov (Barry Smith) Date: Sat, 19 Jun 2010 23:07:25 -0500 Subject: [petsc-users] reporting failing pcksp solves In-Reply-To: <4C1BA04F.1030009@imperial.ac.uk> References: <4C1BA04F.1030009@imperial.ac.uk> Message-ID: <0AA50AF0-7E24-47B7-ADCE-BE140EBFABF9@mcs.anl.gov> Hmm, this is a good question. There are tons of places where some sort of "inner solve" is imbedded in an "outer solver", in fact many levels of nesting. We should handle this in a systematic way which likely means carefully checks put in various places in the code. We could instead (or in addition) add an option to have KSPSolve() generate an error on "not-convergence" instead of returning with a negative converged reason. The drawback to this is that you cannot "handle" the error and continue running, it just ends. Would this be useful? Barry On Jun 18, 2010, at 11:35 AM, Stephan Kramer wrote: > Dear all, > > Is there a way in petsc, when performing inner solves like PCKSP or MatSchurComplementGetKSP, to make the outer solve stop immediately and report back a negative convergence reason? I find that often when such "inner solves" fail, the outer solve happily continues and sometimes falsely reports convergence due to the preconditioner becoming rank deficient. I'd like our code, using petsc, to be able to trap that sort of situations and give a suitable error message. > > Cheers > Stephan From s.kramer at imperial.ac.uk Sun Jun 20 10:21:32 2010 From: s.kramer at imperial.ac.uk (Stephan Kramer) Date: Sun, 20 Jun 2010 16:21:32 +0100 Subject: [petsc-users] reporting failing pcksp solves In-Reply-To: <0AA50AF0-7E24-47B7-ADCE-BE140EBFABF9@mcs.anl.gov> References: <4C1BA04F.1030009@imperial.ac.uk> <0AA50AF0-7E24-47B7-ADCE-BE140EBFABF9@mcs.anl.gov> Message-ID: <4C1E31FC.3060704@imperial.ac.uk> On 20/06/10 05:07, Barry Smith wrote: > > > Hmm, this is a good question. There are tons of places where some sort of "inner solve" is imbedded in an "outer solver", in fact many levels of nesting. We should handle this in a systematic way which likely means carefully checks put in various places in the code. > > We could instead (or in addition) add an option to have KSPSolve() generate an error on "not-convergence" instead of returning with a negative converged reason. 
The drawback to this is that you cannot "handle" the error and continue running, it just ends. Would this be useful?
>
> Barry

I can see the logistical nightmare of having to check return values of every PCApply, MatMult, etc. in petsc code. I guess this is where exception handling would be a great thing to have.

I think for our own purposes, we will follow Matt's suggestion of wrapping KSPDefaultConverged() and set a global flag that can be checked afterwards. Our code checks the successful convergence of every solve afterwards. It always handles solver failures as a fatal error, but the code continues for a bit (end of timestep) to ensure final diagnostics, mesh and fields are written out - it is an adaptive mesh model, so users may not have even seen the mesh that was used to solve on.

For simpler applications a petsc error generated on the spot would be a useful option, yes, but a more robust general way of handling nested solver failures would be greatly appreciated. I was, to be honest, slightly surprised to discover petsc happily continuing its ksp solve even though the MatSchurComplement solves were silently failing. A drawback of the suggested workaround is that even if the MatSchurComplement solve fails in the first iteration (for instance reaching max iterations), the outer solve will still continue, trying and failing to solve the MatSchurComplement inner solve each iteration, until it finally hits the maximum number of outer iterations.

Cheers
Stephan

>
> On Jun 18, 2010, at 11:35 AM, Stephan Kramer wrote:
>
>> Dear all,
>>
>> Is there a way in petsc, when performing inner solves like PCKSP or MatSchurComplementGetKSP, to make the outer solve stop immediately and report back a negative convergence reason? I find that often when such "inner solves" fail, the outer solve happily continues and sometimes falsely reports convergence due to the preconditioner becoming rank deficient. I'd like our code, using petsc, to be able to trap that sort of situations and give a suitable error message.
>>
>> Cheers
>> Stephan
>
>

From s.kramer at imperial.ac.uk Sun Jun 20 10:50:27 2010
From: s.kramer at imperial.ac.uk (Stephan Kramer)
Date: Sun, 20 Jun 2010 16:50:27 +0100
Subject: [petsc-users] reporting failing pcksp solves
In-Reply-To: <4C1E31FC.3060704@imperial.ac.uk>
References: <4C1BA04F.1030009@imperial.ac.uk> <0AA50AF0-7E24-47B7-ADCE-BE140EBFABF9@mcs.anl.gov> <4C1E31FC.3060704@imperial.ac.uk>
Message-ID: <4C1E38C3.20201@imperial.ac.uk>

On 20/06/10 16:21, Stephan Kramer wrote:
> On 20/06/10 05:07, Barry Smith wrote:
>>
>>
>> Hmm, this is a good question. There are tons of places where some sort of "inner solve" is imbedded in an "outer solver", in fact many levels of nesting. We should handle this in a systematic way which likely means carefully checks put in various places in the code.
>>
>> We could instead (or in addition) add an option to have KSPSolve() generate an error on "not-convergence" instead of returning with a negative converged reason. The drawback to this is that you cannot "handle" the error and continue running, it just ends. Would this be useful?
>>
>> Barry
>
> I can see the logistical nightmare of having to check return values of every PCApply, MatMult, etc. in petsc code. I guess this is where exception handling would be a great thing to have.
>
> I think for our own purposes, we will follow Matt's suggestion of wrapping KSPDefaultConverged() and set a global flag that can be checked afterwards.
> Our code checks the successful convergence of every solve afterwards. It always handles solver failures as a fatal error, but the code continues for a bit (end of timestep) to ensure final diagnostics, mesh and fields are written out - it
> is an adaptive mesh model, so users may not have even seen the mesh that was used to solve on.
>
> For simpler applications a petsc error generated on the spot would be a useful option, yes, but a more robust general way of handling nested solver failures would be greatly appreciated. I was,
> to be honest, slightly surprised to discover petsc happily continuing its ksp solve even though the MatSchurComplement solves were silently failing. A drawback of the suggested workaround is that even
> if the MatSchurComplement solve fails in the first iteration (for instance reaching max iterations), the outer solve will still continue, trying and failing to solve the MatSchurComplement inner solve
> each iteration, until it finally hits the maximum number of outer iterations.
>
> Cheers
> Stephan

Actually, looking a little closer, I see that a lot of negative KSPConvergedReasons are not actually raised by KSPDefaultConverged(). In particular I still won't be able to trap KSP_DIVERGED_ITS this way, except by adding an extra n < ksp->max_it check of course, but that still leaves out a lot of Krylov method-specific reasons...

>
>
>
>>
>> On Jun 18, 2010, at 11:35 AM, Stephan Kramer wrote:
>>
>>> Dear all,
>>>
>>> Is there a way in petsc, when performing inner solves like PCKSP or MatSchurComplementGetKSP, to make the outer solve stop immediately and report back a negative convergence reason? I find that often when such "inner solves" fail, the outer solve happily continues and sometimes falsely reports convergence due to the preconditioner becoming rank deficient. I'd like our code, using petsc, to be able to trap that sort of situations and give a suitable error message.
>>>
>>> Cheers
>>> Stephan
>>
>>
>

--
Stephan Kramer
Applied Modelling and Computation Group,
Department of Earth Science and Engineering,
Imperial College London

From jed at 59A2.org Sun Jun 20 15:18:23 2010
From: jed at 59A2.org (Jed Brown)
Date: Sun, 20 Jun 2010 22:18:23 +0200
Subject: [petsc-users] reporting failing pcksp solves
In-Reply-To: <0AA50AF0-7E24-47B7-ADCE-BE140EBFABF9@mcs.anl.gov>
References: <4C1BA04F.1030009@imperial.ac.uk> <0AA50AF0-7E24-47B7-ADCE-BE140EBFABF9@mcs.anl.gov>
Message-ID: 

On Sun, Jun 20, 2010 at 06:07, Barry Smith wrote:
>
>
> Hmm, this is a good question. There are tons of places where some sort of
> "inner solve" is imbedded in an "outer solver", in fact many levels of
> nesting. We should handle this in a systematic way which likely means
> carefully checks put in various places in the code.
>
> We could instead (or in addition) add an option to have KSPSolve()
> generate an error on "not-convergence" instead of returning with a negative
> converged reason. The drawback to this is that you cannot "handle" the error
> and continue running, it just ends. Would this be useful?
>

I don't know, but I don't think it's a substitute for propagating convergence failures without raising an error condition. You don't want the error handler called for a non-exceptional occurrence. You really want to unwind the stack normally to whatever level can modify the problem (or report the convergence failure to the end-user).

Jed
-------------- next part --------------
An HTML attachment was scrubbed...
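As a concrete illustration of the approach discussed in this thread (Matt's SETERRQ suggestion combined with Stephan's global flag), here is a minimal sketch in C against the petsc-3.1 era API used above. The function name, the flag, and the error code 1 are illustrative choices for the example, not part of PETSc:

#include "petscksp.h"

/* Illustrative global flag the application can inspect after the outer solve. */
static PetscTruth inner_solve_diverged = PETSC_FALSE;

/* Convergence test that defers to KSPDefaultConverged() and then turns
   any divergence into a hard error, so an outer solve cannot silently
   continue after a failed inner solve. */
PetscErrorCode ConvergedOrAbort(KSP ksp,PetscInt n,PetscReal rnorm,
                                KSPConvergedReason *reason,void *ctx)
{
  PetscErrorCode ierr;

  PetscFunctionBegin;
  ierr = KSPDefaultConverged(ksp,n,rnorm,reason,ctx);CHKERRQ(ierr);
  if (*reason < 0) {
    inner_solve_diverged = PETSC_TRUE;
    SETERRQ1(1,"Inner solve diverged with KSPConvergedReason %d",(int)*reason);
  }
  PetscFunctionReturn(0);
}

It would be installed on the inner KSP with something along the lines of

  void *cctx;
  ierr = KSPDefaultConvergedCreate(&cctx);CHKERRQ(ierr);
  ierr = KSPSetConvergenceTest(inner,ConvergedOrAbort,cctx,KSPDefaultConvergedDestroy);CHKERRQ(ierr);

where inner is, e.g., the KSP obtained from MatSchurComplementGetKSP(). As Stephan points out above, this still misses reasons such as KSP_DIVERGED_ITS that the Krylov methods set themselves, so an additional iteration-count check would be needed to trap those as well.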
From Andrew.Barker at Colorado.EDU Mon Jun 21 10:40:48 2010
From: Andrew.Barker at Colorado.EDU (Andrew T Barker)
Date: Mon, 21 Jun 2010 09:40:48 -0600 (MDT)
Subject: [petsc-users] small dense solves
Message-ID: <20100621094048.ARX42015@batman.int.colorado.edu>

As one part of a larger parallel code, I want to do many small, dense, local linear solves. Small means 6 by 6 or 9 by 9. Building a KSP / Mat for this seems like overkill.

I could probably link to LAPACK and use dgesv_(), but I'm looking for a way to use "Petsc's LAPACK" if that makes sense. A quick look at $PETSC_DIR/include/petscblaslapack* didn't help me. What's the best way to do this?

Thanks,

Andrew

From jed at 59A2.org Mon Jun 21 10:54:48 2010
From: jed at 59A2.org (Jed Brown)
Date: Mon, 21 Jun 2010 17:54:48 +0200
Subject: [petsc-users] small dense solves
In-Reply-To: <20100621094048.ARX42015@batman.int.colorado.edu>
References: <20100621094048.ARX42015@batman.int.colorado.edu>
Message-ID: <87vd9c2xfb.fsf@59A2.org>

On Mon, 21 Jun 2010 09:40:48 -0600 (MDT), Andrew T Barker wrote:
> As one part of a larger parallel code, I want to do many small, dense,
> local linear solves. Small means 6 by 6 or 9 by 9. Building a KSP /
> Mat for this seems like overkill.

Definitely, it's also on the small side for LAPACK (assuming it's in a very performance-sensitive place). You could use src/mat/blockinvert.h (provides unrolled versions used internally by MATBAIJ); C++ template libraries (e.g. Eigen2) are also worth considering for small dense linear algebra.

Jed

From balay at mcs.anl.gov Mon Jun 21 10:56:50 2010
From: balay at mcs.anl.gov (Satish Balay)
Date: Mon, 21 Jun 2010 10:56:50 -0500 (CDT)
Subject: [petsc-users] small dense solves
In-Reply-To: <20100621094048.ARX42015@batman.int.colorado.edu>
References: <20100621094048.ARX42015@batman.int.colorado.edu>
Message-ID: 

PETSc just uses the native lapack. There is nothing special here. The petscblaslapack* headers have c-prototypes for the native blas/lapack - so there is some error checking in petsc's usage of blas/lapack.

Satish

On Mon, 21 Jun 2010, Andrew T Barker wrote:

> As one part of a larger parallel code, I want to do many small, dense, local linear solves. Small means 6 by 6 or 9 by 9. Building a KSP / Mat for this seems like overkill.
>
> I could probably link to LAPACK and use dgesv_(), but I'm looking for a way to use "Petsc's LAPACK" if that makes sense. A quick look at $PETSC_DIR/include/petscblaslapack* didn't help me. What's the best way to do this?
>
> Thanks,
>
> Andrew
>

From xxy113 at psu.edu Mon Jun 21 16:30:10 2010
From: xxy113 at psu.edu (XUAN YU)
Date: Mon, 21 Jun 2010 17:30:10 -0400
Subject: [petsc-users] help about my first petsc program
Message-ID: <1277155810l.798760l.0l@psu.edu>

I want to learn petsc by solving a small time-dependent problem:
The problem is from chemical kinetics, and consists of the following three rate equations:
dy1/dt = -.04*y1 + 1.e4*y2*y3
dy2/dt = .04*y1 - 1.e4*y2*y3 - 3.e7*(y2)^2
dy3/dt = 3.e7*(y2)^2
on the interval from t = 0.0 to t = 4.e10, with initial conditions: y1 = 1.0, y2 = y3 = 0.

I passed the compiling, and generated an executable file. But I got a lot of - Error Message - when I run the file. I appreciate your help and suggestions!

Many thanks!
My program is : #include "petscts.h" /*The problem is from * chemical kinetics, and consists of the following three rate * equations: * dy1/dt = -.04*y1 + 1.e4*y2*y3 * dy2/dt = .04*y1 - 1.e4*y2*y3 - 3.e7*(y2)^2 * dy3/dt = 3.e7*(y2)^2 * on the interval from t = 0.0 to t = 4.e10, with initial * conditions: y1 = 1.0, y2 = y3 = 0. */ typedef struct { PetscInt m; }AppCtx; extern PetscErrorCode FormJacobian(TS,PetscReal,Vec,Mat*,Mat*,MatStructure*,void*), FormFunction(TS,PetscReal,Vec,Vec,void*); int main(int argc,char **argv) { TS ts; Vec u,r; PetscScalar *u_localptr; Mat J; AppCtx user; PetscInt its,m; PetscReal dt,ftime; PetscErrorCode ierr; PetscInitialize(&argc,&argv,PETSC_NULL,PETSC_NULL); m=3; dt=0.1; ierr = PetscOptionsGetInt(PETSC_NULL,"-m",&m,PETSC_NULL);CHKERRQ(ierr); user.m=m; u_localptr[0]=1.0; u_localptr[1]=0.0; u_localptr[2]=0.0; ierr = VecRestoreArray(u,&u_localptr);CHKERRQ(ierr); ierr = VecCreateSeq(PETSC_COMM_SELF,m,&u);CHKERRQ(ierr); ierr = TSCreate(PETSC_COMM_SELF,&ts);CHKERRQ(ierr); ierr = TSSetProblemType(ts,TS_NONLINEAR);CHKERRQ(ierr); ierr = TSSetSolution(ts,u);CHKERRQ(ierr); ierr = TSSetRHSFunction(ts,FormFunction,&user);CHKERRQ(ierr); ierr = TSSetRHSJacobian(ts,J,J,FormJacobian,&user);CHKERRQ(ierr); ierr = TSSetType(ts,TSPSEUDO);CHKERRQ(ierr); ierr = TSSetInitialTimeStep(ts,0.0,dt);CHKERRQ(ierr); ierr = TSPseudoSetTimeStep(ts,TSPseudoDefaultTimeStep,0);CHKERRQ(ierr); ierr = TSSetFromOptions(ts);CHKERRQ(ierr); ierr = TSSetUp(ts);CHKERRQ(ierr); ierr = TSStep(ts,&its,&ftime);CHKERRQ(ierr); printf("Number of pseudo timesteps = %d final time %4.2e\n",(int)its,ftime); ierr = VecDestroy(u);CHKERRQ(ierr); ierr = VecDestroy(r);CHKERRQ(ierr); ierr = MatDestroy(J);CHKERRQ(ierr); ierr = TSDestroy(ts);CHKERRQ(ierr); ierr = PetscFinalize();CHKERRQ(ierr); return 0; } PetscErrorCode FormFunction(TS ts,PetscReal t,Vec X,Vec F,void *ptr) { PetscErrorCode ierr; PetscScalar *x,*f; PetscInt n; n=3; ierr = VecGetArray(X,&x);CHKERRQ(ierr); ierr = VecGetArray(F,&f);CHKERRQ(ierr); f[0]=-0.04*x[0]+1.0e4*x[1]*x[2]; f[1]=0.04-1.0e4*x[1]*x[2]-3*1.0e7*x[1]*x[1]; f[2]=3*1.0e7*x[1]*x[1]; ierr = VecRestoreArray(X,&x);CHKERRQ(ierr); ierr = VecRestoreArray(F,&f);CHKERRQ(ierr); return 0; } PetscErrorCode FormJacobian(TS ts,PetscReal t,Vec X,Mat *J,Mat *B,MatStructure *flag,void *ptr) { Mat jac=*B; PetscScalar v[3],*x; PetscInt row,col; PetscErrorCode ierr; ierr = VecGetArray(X,&x);CHKERRQ(ierr); v[0]=-0.04; v[1]=0.04; v[2]=0; row=1; col=1; ierr = MatSetValues(jac,1,&row,1,&col,v,INSERT_VALUES);CHKERRQ(ierr); v[0]=1.0e4*x[2]; v[1]=-1.0e4*x[2]-6.0e7*x[1]; v[2]=6.0e7*x[1]; col=2; ierr = MatSetValues(jac,1,&row,1,&col,v,INSERT_VALUES);CHKERRQ(ierr); v[0]=1.0e4*x[1]; v[1]=-1.0e4*x[1]; v[2]=0; col=3; ierr = MatSetValues(jac,1,&row,1,&col,v,INSERT_VALUES);CHKERRQ(ierr); ierr = MatAssemblyBegin(jac,MAT_FINAL_ASSEMBLY);CHKERRQ(ierr); ierr = VecRestoreArray(X,&x);CHKERRQ(ierr); ierr = MatAssemblyEnd(jac,MAT_FINAL_ASSEMBLY);CHKERRQ(ierr); return 0; } -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... 
Name: ex1.c Type: application/octet-stream Size: 3181 bytes Desc: not available URL: From jed at 59A2.org Mon Jun 21 17:00:42 2010 From: jed at 59A2.org (Jed Brown) Date: Tue, 22 Jun 2010 00:00:42 +0200 Subject: [petsc-users] help about my first petsc program In-Reply-To: <1277155810l.798760l.0l@psu.edu> References: <1277155810l.798760l.0l@psu.edu> Message-ID: <87tyow11x1.fsf@59A2.org> On Mon, 21 Jun 2010 17:30:10 -0400, "XUAN YU" wrote: > I want to learn petsc by solving a small time-dependent problem: > The problem is from chemical kinetics, and consists of the following three > rate equations: > dy1/dt = -.04*y1 + 1.e4*y2*y3 > dy2/dt = .04*y1 - 1.e4*y2*y3 - 3.e7*(y2)^2 > dy3/dt = 3.e7*(y2)^2 > on the interval from t = 0.0 to t = 4.e10, with initial > conditions: y1 = 1.0, y2 = y3 = 0. > > I passed the compiling, and generated an executable file. But I got a lot of - > Error Message - when I run the file. I appreciate your help and suggestions! First, always send the error message. Second, if you get a seg-fault (what's certainly happening here), you should run in a debugger ('./ex1 -start_in_debugger' or 'gdb --args ./ex1') or valgrind, either of which will immediately show your error. > int main(int argc,char **argv) > { > TS ts; > Vec u,r; > PetscScalar *u_localptr; ^^^^^^^^^^^ This is just a pointer, it's value is undefined. > Mat J; > AppCtx user; > PetscInt its,m; > PetscReal dt,ftime; > PetscErrorCode ierr; > PetscInitialize(&argc,&argv,PETSC_NULL,PETSC_NULL); > m=3; > dt=0.1; > ierr = PetscOptionsGetInt(PETSC_NULL,"-m",&m,PETSC_NULL);CHKERRQ(ierr); > user.m=m; > u_localptr[0]=1.0; It is still undefined here, so you try to write to an invalid memory location. Jed From graziosov at me.queensu.ca Tue Jun 22 03:02:29 2010 From: graziosov at me.queensu.ca (Valerio Grazioso) Date: Tue, 22 Jun 2010 04:02:29 -0400 Subject: [petsc-users] DA matrices and vectors ordering In-Reply-To: <87tyow11x1.fsf@59A2.org> References: <1277155810l.798760l.0l@psu.edu> <87tyow11x1.fsf@59A2.org> Message-ID: Hi everybody, I'm building a Poisson solver for a cfd code in a structured grid using fortran 90. Substantially I need to solve a linear system in a cycle with a fixed matrix and a variable rhs. I'm working with DAs and (being new to Petsc!) I have a problem. I have built my matrix with : ... call DACreate3d(...) .... call DAGetMatrix(da,MATAIJ,A,err) ...... call MatAssemblyBegin(A,MAT_FINAL_ASSEMBLY,err) call MatAssemblyEnd(A,MAT_FINAL_ASSEMBLY,err) ...... and I've got a matrix A ordered in a Petsc ordering: rows and variables have a sequential numbering relative to each processor. On the other hand building the rhs with ...... call DACreateLocalVector(da,qloc,err) call DAGlobalToLocalBegin(da,q,INSERT_VALUES,qloc,err) call DAGlobalToLocalEnd(da,q,INSERT_VALUES,qloc,err) call VecGetArrayF90(qloc,qloc_a,err) cont=0 do k=gzs,gzs+gzm-1 do j=gys,gys+gym-1 do i=gxs,gxs+gxm-1 if ( ...... ) then cont=cont+1 qloc_a(cont)=.... else ...... endif enddo enddo enddo call VecRestoreArrayF90(qloc,qloc_a,err) call DALocalToGlobal(da,qloc,INSERT_VALUES,q,err) I end up with a vector q ordered in the natural ordering (as if it is built with one processor). The questions are: Is there a way to end up with the same ordering for both the matrix and the rhs ? If the only way is to use the AO mapping obtained with DAGetAO(), is there an example code that shows how to use the resulting ao to remap the vector (or the matrix) in the Petsc (or the natural) order? 
I've seen that I can have the remapping indices but then I haven't understood how to use them to remap the vector (or the matrix). Regards Valerio Grazioso -------------- next part -------------- An HTML attachment was scrubbed... URL: From jed at 59A2.org Tue Jun 22 05:03:43 2010 From: jed at 59A2.org (Jed Brown) Date: Tue, 22 Jun 2010 12:03:43 +0200 Subject: [petsc-users] DA matrices and vectors ordering In-Reply-To: References: <1277155810l.798760l.0l@psu.edu> <87tyow11x1.fsf@59A2.org> Message-ID: <87pqzj1j0g.fsf@59A2.org> On Tue, 22 Jun 2010 04:02:29 -0400, Valerio Grazioso wrote: > and I've got a matrix A ordered in a Petsc ordering: rows and variables have a sequential numbering relative to each processor. This is always the order used internally, but your application code doesn't have to "see" this ordering. I suggest using MatSetValuesStencil() which allows you to effectively insert values in the natural ordering (as coordinates k,j,i) which will get mapped appropriately to the internal storage. > call DACreateLocalVector(da,qloc,err) This also uses the "PETSc ordering" internally. > call VecGetArrayF90(qloc,qloc_a,err) But this does some pointer tricks so it looks like you are working in the natural ordering. > cont=cont+1 > qloc_a(cont)=.... Can use qloc_a(i,j,k) here. Jed From mark.cheeseman at kaust.edu.sa Tue Jun 22 07:59:03 2010 From: mark.cheeseman at kaust.edu.sa (Mark Cheeseman) Date: Tue, 22 Jun 2010 15:59:03 +0300 Subject: [petsc-users] updating values in a DA Global array Message-ID: Hi, I am trying to write a PETSc program in FORTRAN90 where I need to update a single value in a global distributed array. I know the global coordinates of the position that needs to be updated in the global array but I cannot get the mapping from the local vector correct. In this case, I am working on a domain with global dimensions [arraysize(1),arraysize(2),arraysize(3)] and I want to alter a single point in the global distributed array, uGLOBAL, at the global position [arraysize(1)/2-1,arraysize(2)-1,3]. I cannot seem to be able to do this... what am I doing wrong? ... DA da Vec uGLOBAL, uLOCAL, tmp PetscErrorCode ierr PetscScalar, pointer :: xx PetscInt rank, source_rank, i,j,k, row .... call MPI_Comm_rank( PETSC_COMM_WORLD, rank, ierr ) call DACreate3d( PETSC_COMM_WORLD, DA_NONPERIODIC, DA_STENCIL_BOX, & arraysize(1), arraysize(2), arraysize(3), PETSC_DECIDE, & PETSC_DECIDE, PETSC_DECIDE, 1, 5, PETSC_NULL_INTEGER, & PETSC_NULL_INTEGER, PETSC_NULL_INTEGER, da, ierr) call DACreateGlobalVector( da, pNOW, ierr ) call DAGetCorners( da, xs, ys, zs, xl, yl, zl, ierr ) do i = xs,xs+xl-1 if ( i.eq.arraysize(1)/2-1 ) then src_loc(1) = i do j = ys,ys+yl-1 if ( j.eq.arraysize(2)/2-1 ) then src_loc(2) = j do k = zs,zs+zl-1 if ( k.eq.3 ) then src_loc(2) = j source_rank = rank endif enddo endif enddo endif enddo call DAGetLocalVector( da, uLOCAL, ierr ) call VecGetArrayF90( uLOCAL, xx, ierr ) if ( rank.eq.source_rank ) then row = 15 xx(row) = pressure endif call VecRestoreArrayF90( uLOCAL, xx, ierr ) call DALocalToGlobal( da, uLOCAL, ADD_VALUES, uGLOBAL, ierr ) call DARestoreLocalVector( da, uLOCAL, ierr ) Any help would be greatly appreciated. Is there a way to directly access the global distributed array (uGLOBAL) instead of working through the intermediate local array (uLOCAL)? I have tried the approach call DAVecGetArray( da, uGLOBAL, xx, ierr ) xx = .... 
call DAVecRestoreArray( da, uGLOBAL, xx, ierr ) Unfortunately, the compile always fails with the message that the DAVecRestoreArray function cannot be found. Thank you, Mark -- Mark Patrick Cheeseman Computational Scientist KSL (KAUST Supercomputing Laboratory) Building 1, Office #126 King Abdullah University of Science & Technology Thuwal 23955-6900 Kingdom of Saudi Arabia EMAIL : mark.cheeseman at kaust.edu.sa PHONE : +966 (2) 808 0221 (office) +966 (54) 470 1082 (mobile) SKYPE : mark.patrick.cheeseman -------------- next part -------------- An HTML attachment was scrubbed... URL: From jed at 59A2.org Tue Jun 22 09:11:09 2010 From: jed at 59A2.org (Jed Brown) Date: Tue, 22 Jun 2010 16:11:09 +0200 Subject: [petsc-users] DA matrices and vectors ordering In-Reply-To: <726D589D-1CB0-4F80-B8CC-722EBBB6F237@me.queensu.ca> References: <1277155810l.798760l.0l@psu.edu> <87tyow11x1.fsf@59A2.org> <87pqzj1j0g.fsf@59A2.org> <726D589D-1CB0-4F80-B8CC-722EBBB6F237@me.queensu.ca> Message-ID: <87pqzjtawy.fsf@59A2.org> On Tue, 22 Jun 2010 09:41:15 -0400, Valerio Grazioso wrote: > Hi Jed, > > On 2010-06-22, at 6:03 AM, Jed Brown wrote: > > > On Tue, 22 Jun 2010 04:02:29 -0400, Valerio Grazioso wrote: > >> and I've got a matrix A ordered in a Petsc ordering: rows and variables have a sequential numbering relative to each processor. > > > > This is always the order used internally, but your application code > > doesn't have to "see" this ordering. I suggest using > > MatSetValuesStencil() which allows you to effectively insert values in > > the natural ordering (as coordinates k,j,i) which will get mapped > > appropriately to the internal storage. > > Yes I've used MatSetValuesStencil() to insert the values and it worked perfectly. > When I say that I've got a matrix in Petsc ordering is > because when I see it with -mat_view (at run time) that is what is printed at screen. Yeah, that's PETSc's ordering (the one used internally). There isn't much point outputting a matrix in a different ordering, it's just nice to assemble it using natural indexing (MatSetValuesStencil). > Also in this case, when I say that I get a naturally ordered vector is because when I use > > > ..... > > call DALocalToGlobal(da,qloc,INSERT_VALUES,q,err) > call vecView(q,PETSC_VIEWER_STDOUT_WORLD,err) > > ..... > > that what is printed at screen (a naturally ordered vector). This is because it's often convenient to have a vector in the natural format (because you might read it in on a different number of processes). This is only done for DA vectors, you can PetscViewerPushFormat(viewer,PETSC_VIEWER_NATIVE) to write it in PETSc's ordering (consistent with the matrix). > But if I'm understanding well what you suggest, at the end, no matter > what I see printed at screen (for the matrix), I should be already > working in natural ordering for both the matrix and the rhs? This is easiest, unstructured codes usually require you to deal with both a local and global ordering, and there usually isn't any concept of a "natural ordering", just a concatenation of the owned local spaces (equivalent to "PETSc ordering" for structured grids). > > > >> cont=cont+1 > >> qloc_a(cont)=.... > > > > Can use qloc_a(i,j,k) here. > > I didn't manage to do this. I get compiling errors (I'm using ifort) if I define a qloc_a(:,:,:) pointer array: PetscScalar, pointer :: qloc_a(:,:,:) Define the qloc_a(:) as usual, but index it with i,j,k (looks weird, I know), see src/snes/examples/tutorials/ex5f90.F. 
Jed From bsmith at mcs.anl.gov Tue Jun 22 14:23:40 2010 From: bsmith at mcs.anl.gov (Barry Smith) Date: Tue, 22 Jun 2010 14:23:40 -0500 Subject: [petsc-users] updating values in a DA Global array In-Reply-To: References: Message-ID: <0479AD41-5181-4493-A9A0-46814D5EF23E@mcs.anl.gov> On Jun 22, 2010, at 7:59 AM, Mark Cheeseman wrote: > Hi, > > I am trying to write a PETSc program in FORTRAN90 where I need to update a single value in a global distributed array. I know the global coordinates of the position that needs to be updated in the global array but I cannot get the mapping from the local vector correct. In this case, I am working on a domain with global dimensions [arraysize(1),arraysize(2),arraysize(3)] and I want to alter a single point in the global distributed array, uGLOBAL, at the global position [arraysize(1)/2-1,arraysize(2)-1,3]. I cannot seem to be able to do this... what am I doing wrong? > > ... > DA da > Vec uGLOBAL, uLOCAL, tmp > PetscErrorCode ierr > PetscScalar, pointer :: xx > PetscInt rank, source_rank, i,j,k, row > > .... > > call MPI_Comm_rank( PETSC_COMM_WORLD, rank, ierr ) > call DACreate3d( PETSC_COMM_WORLD, DA_NONPERIODIC, DA_STENCIL_BOX, & > arraysize(1), arraysize(2), arraysize(3), PETSC_DECIDE, & > PETSC_DECIDE, PETSC_DECIDE, 1, 5, PETSC_NULL_INTEGER, & > PETSC_NULL_INTEGER, PETSC_NULL_INTEGER, da, ierr) > call DACreateGlobalVector( da, pNOW, ierr ) > call DAGetCorners( da, xs, ys, zs, xl, yl, zl, ierr ) > > do i = xs,xs+xl-1 > if ( i.eq.arraysize(1)/2-1 ) then > src_loc(1) = i > do j = ys,ys+yl-1 > if ( j.eq.arraysize(2)/2-1 ) then > src_loc(2) = j > do k = zs,zs+zl-1 > if ( k.eq.3 ) then > src_loc(2) = j > source_rank = rank > endif > enddo > endif > enddo > endif > enddo > > call DAGetLocalVector( da, uLOCAL, ierr ) > call VecGetArrayF90( uLOCAL, xx, ierr ) > Use VecGetArrayF90() directly on the global vector and set the value in there. Or instead of this you can determine the global location of the entry in the "natural ordering" and then use DAGetAO() followed by AOApplicationToPetsc() with the global location in the natural ordering to convert to the PETSc ordering and then call VecSetValues() with the global vector followed by VecAssemblyBegin() and VecAssemblyEnd(). > if ( rank.eq.source_rank ) then > row = 15 > xx(row) = pressure > endif > > call VecRestoreArrayF90( uLOCAL, xx, ierr ) > call DALocalToGlobal( da, uLOCAL, ADD_VALUES, uGLOBAL, ierr ) > call DARestoreLocalVector( da, uLOCAL, ierr ) > > > Any help would be greatly appreciated. Is there a way to directly access the global distributed array (uGLOBAL) instead of working through the intermediate local array (uLOCAL)? I have tried the approach > > call DAVecGetArray( da, uGLOBAL, xx, ierr ) > xx = .... > call DAVecRestoreArray( da, uGLOBAL, xx, ierr ) > > Unfortunately, the compile always fails with the message that the DAVecRestoreArray function cannot be found. These are not supported in Fortran. Barry > > Thank you, > Mark > > -- > Mark Patrick Cheeseman > > Computational Scientist > KSL (KAUST Supercomputing Laboratory) > Building 1, Office #126 > King Abdullah University of Science & Technology > Thuwal 23955-6900 > Kingdom of Saudi Arabia > > EMAIL : mark.cheeseman at kaust.edu.sa > PHONE : +966 (2) 808 0221 (office) > +966 (54) 470 1082 (mobile) > SKYPE : mark.patrick.cheeseman -------------- next part -------------- An HTML attachment was scrubbed... 
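To make the second route Barry describes above concrete, here is a minimal sketch in C (the Fortran calls mirror it one-for-one in the petsc-3.1 API used in this thread). The function name and the M, N arguments for the global grid sizes are assumptions for the example, and a single degree of freedom per grid point is assumed:

#include "petscda.h"

/* Set the entry at natural grid location (i,j,k) of a DA global vector.
   M and N are the global grid sizes in x and y; dof is assumed to be 1.
   Every rank may call this with the same value: duplicate INSERT_VALUES
   entries are harmless, and the assembly calls are collective anyway. */
PetscErrorCode SetValueNatural(DA da,Vec u,PetscInt i,PetscInt j,PetscInt k,
                               PetscInt M,PetscInt N,PetscScalar v)
{
  AO             ao;
  PetscInt       idx;
  PetscErrorCode ierr;

  PetscFunctionBegin;
  idx  = (k*N + j)*M + i;                /* global index in the natural ordering */
  ierr = DAGetAO(da,&ao);CHKERRQ(ierr);  /* the AO belongs to the DA; do not destroy it */
  ierr = AOApplicationToPetsc(ao,1,&idx);CHKERRQ(ierr);
  ierr = VecSetValues(u,1,&idx,&v,INSERT_VALUES);CHKERRQ(ierr);
  ierr = VecAssemblyBegin(u);CHKERRQ(ierr);
  ierr = VecAssemblyEnd(u);CHKERRQ(ierr);
  PetscFunctionReturn(0);
}

VecSetValues() communicates off-process entries during assembly, so the caller does not need to know which rank owns the point; that is what makes this route simpler than searching the owned corners by hand as in the original post.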
From ventejuy at yahoo.es Tue Jun 22 14:53:19 2010
From: ventejuy at yahoo.es (Bentejui Medina)
Date: Tue, 22 Jun 2010 19:53:19 +0000 (GMT)
Subject: [petsc-users] PETSc, free software and using cost
Message-ID: <141372.24462.qm@web27002.mail.ukl.yahoo.com>

Hi all,
I'm using PETSc for a project at the university. My software is open source (GPL); can I distribute my software including PETSc? What license does PETSc use?

I need to create a theoretical invoice for the project. Does PETSc have any cost? Is there any cost if I use it for a commercial product?

Thanks a lot.

From bsmith at mcs.anl.gov Tue Jun 22 14:57:41 2010
From: bsmith at mcs.anl.gov (Barry Smith)
Date: Tue, 22 Jun 2010 14:57:41 -0500
Subject: Re: [petsc-users] PETSc, free software and using cost
In-Reply-To: <141372.24462.qm@web27002.mail.ukl.yahoo.com>
References: <141372.24462.qm@web27002.mail.ukl.yahoo.com>
Message-ID: 

http://www.mcs.anl.gov/petsc/petsc-as/documentation/faq.html#license

On Jun 22, 2010, at 2:53 PM, Bentejui Medina wrote:

> Hi all,
> I'm using PETSc for a project at the university. My software is open source (GPL); can I distribute my software including PETSc? What license does PETSc use?
>
> I need to create a theoretical invoice for the project. Does PETSc have any cost? Is there any cost if I use it for a commercial product?
>
> Thanks a lot.
>
>
>

From ventejuy at yahoo.es Tue Jun 22 17:33:23 2010
From: ventejuy at yahoo.es (Bentejui Medina)
Date: Tue, 22 Jun 2010 22:33:23 +0000 (GMT)
Subject: Re: [petsc-users] PETSc, free software and using cost
In-Reply-To: 
References: <141372.24462.qm@web27002.mail.ukl.yahoo.com>
Message-ID: <735544.69820.qm@web27008.mail.ukl.yahoo.com>

thanks

----- Original message ----
From: Barry Smith
To: PETSc users list
Sent: Tue, 22 June 2010 19:57
Subject: Re: [petsc-users] PETSc, free software and using cost

http://www.mcs.anl.gov/petsc/petsc-as/documentation/faq.html#license

On Jun 22, 2010, at 2:53 PM, Bentejui Medina wrote:

> Hi all,
> I'm using PETSc for a project at the university. My software is open source (GPL); can I distribute my software including PETSc? What license does PETSc use?
>
> I need to create a theoretical invoice for the project. Does PETSc have any cost? Is there any cost if I use it for a commercial product?
>
> Thanks a lot.
>
>
>

From xxy113 at psu.edu Tue Jun 22 18:09:32 2010
From: xxy113 at psu.edu (XUAN YU)
Date: Tue, 22 Jun 2010 19:09:32 -0400
Subject: [petsc-users] help about my first petsc program
Message-ID: <1277248172l.811066l.0l@psu.edu>

I got my program to run, but the results don't make any sense.

At t = 0 y =1.000000e+00 0.000000e+00 0.000000e+00,
At t = 0.01 y =9.996000e-01 4.000000e-04 0.000000e+00,
At t = 0.02 y =9.992002e-01 -4.720016e-02 4.800000e-02,
At t = 0.03 y =7.722397e-01 -6.681768e+02 6.684045e+02,
At t = 0.04 y =-4.466124e+07 -1.338934e+11 1.339381e+11,
At t = 0.05 y =-1.793342e+24 -5.376439e+27 5.378233e+27,
At t = 0.06 y =-2.891574e+57 -8.668938e+60 8.671830e+60,
At t = 0.07 y =-7.517556e+123 -2.253763e+127 2.254515e+127,
At t = 0.08 y =-5.081142e+256 -1.523326e+260 1.523834e+260,
At t = 0.09 y =-inf nan inf,
At t = 0.1 y =nan nan nan,
At t = 0.11 y =nan nan nan,
At t = 0.12 y =nan nan nan,
At t = 0.13 y =nan nan nan,
At t = 0.14 y =nan nan nan,

Would you please help me check what's wrong with my code?
#include "petscts.h" /*The problem is from * chemical kinetics, and consists of the following three rate * equations: * dy1/dt = -.04*y1 + 1.e4*y2*y3 * dy2/dt = .04*y1 - 1.e4*y2*y3 - 3.e7*(y2)^2 * dy3/dt = 3.e7*(y2)^2 * on the interval from t = 0.0 to t = 4.e10, with initial * conditions: y1 = 1.0, y2 = y3 = 0.*/ typedef struct { PetscInt m; }AppCtx; extern PetscErrorCode FormJacobian(TS,PetscReal,Vec,Mat*,Mat*,MatStructure*,void*),Monitor(TS ts,PetscInt step,PetscReal time,Vec u,void*ctx), FormFunction(TS,PetscReal,Vec,Vec,void*); int main(int argc,char **argv) { TS ts; Vec u; PetscScalar *u_localptr; Mat J; AppCtx user; PetscInt its,m; PetscReal dt,ftime; PetscErrorCode ierr; PetscInitialize(&argc,&argv,PETSC_NULL,PETSC_NULL); m=3; dt=0.01; ierr = PetscOptionsGetInt(PETSC_NULL,"-m",&m,PETSC_NULL);CHKERRQ(ierr); user.m=m; ierr=VecCreateSeq(PETSC_COMM_SELF,m,&u);CHKERRQ(ierr); ierr=VecGetArray(u,&u_localptr);CHKERRQ(ierr); u_localptr[0]=1.0; u_localptr[1]=0.0; u_localptr[2]=0.0; ierr = VecRestoreArray(u,&u_localptr);CHKERRQ(ierr); ierr = TSCreate(PETSC_COMM_WORLD,&ts);CHKERRQ(ierr); ierr = TSSetProblemType(ts,TS_NONLINEAR);CHKERRQ(ierr); ierr = TSMonitorSet(ts,Monitor,&user,PETSC_NULL); ierr = TSSetSolution(ts,u);CHKERRQ(ierr); ierr = TSSetRHSFunction(ts,FormFunction,&user);CHKERRQ(ierr); ierr = MatCreate(PETSC_COMM_SELF,&J);CHKERRQ(ierr); ierr = MatSetSizes(J,PETSC_DECIDE,PETSC_DECIDE,m,m);CHKERRQ(ierr); ierr = MatSetFromOptions(J); ierr = TSSetRHSJacobian(ts,J,J,FormJacobian,&user);CHKERRQ(ierr); ierr = TSSetInitialTimeStep(ts,0.0,dt);CHKERRQ(ierr); ierr = TSSetDuration(ts,1000,8.e-1); ierr = TSSetFromOptions(ts);CHKERRQ(ierr); ierr = TSSetUp(ts);CHKERRQ(ierr); ierr = TSStep(ts,&its,&ftime);CHKERRQ(ierr); printf("Number of timesteps = %d final time %4.2e\n",(int)its,ftime); ierr = VecDestroy(u);CHKERRQ(ierr); ierr = MatDestroy(J);CHKERRQ(ierr); ierr = TSDestroy(ts);CHKERRQ(ierr); ierr = PetscFinalize();CHKERRQ(ierr); return 0; } PetscErrorCode FormFunction(TS ts,PetscReal t,Vec X,Vec F,void *ptr) { PetscErrorCode ierr; PetscScalar *x,*f; PetscInt n; n=3; ierr = VecGetArray(X,&x);CHKERRQ(ierr); ierr = VecGetArray(F,&f);CHKERRQ(ierr); f[0]=-0.04*x[0]+1.0e4*x[1]*x[2]; f[1]=0.04*x[0]-1.0e4*x[1]*x[2]-3*1.0e7*x[1]*x[1]; f[2]=3*1.0e7*x[1]*x[1]; ierr = VecRestoreArray(X,&x);CHKERRQ(ierr); ierr = VecRestoreArray(F,&f);CHKERRQ(ierr); return 0; } PetscErrorCode FormJacobian(TS ts,PetscReal t,Vec X,Mat *J,Mat *B,MatStructure *flag,void *ptr) { Mat jac=*J; PetscScalar v[3],*x; PetscInt row,col; PetscErrorCode ierr; ierr = VecGetArray(X,&x);CHKERRQ(ierr); v[0]=-0.04; v[1]=0.04; v[2]=0.0; row=0; col=0; ierr = MatSetValues(jac,3,&row,1,&col,v,INSERT_VALUES);CHKERRQ(ierr); v[0]=1.0e4*x[2]; v[1]=-1.0e4*x[2]-6.0e7*x[1]; v[2]=6.0e7*x[1]; col=1; ierr = MatSetValues(jac,3,&row,1,&col,v,INSERT_VALUES);CHKERRQ(ierr); v[0]=1.0e4*x[1]; v[1]=-1.0e4*x[1]; v[2]=0.0; col=2; ierr = MatSetValues(jac,3,&row,1,&col,v,INSERT_VALUES);CHKERRQ(ierr); ierr = MatAssemblyBegin(jac,MAT_FINAL_ASSEMBLY);CHKERRQ(ierr); ierr = VecRestoreArray(X,&x);CHKERRQ(ierr); ierr = MatAssemblyEnd(jac,MAT_FINAL_ASSEMBLY);CHKERRQ(ierr); return 0; } PetscErrorCode Monitor(TS ts,PetscInt step,PetscReal time,Vec u, void *ctx) { PetscScalar *y; VecGetArray(u,&y); PetscPrintf(PETSC_COMM_WORLD,"At t =%11g y =%e %e %e, \n",time,y[0],y[1],y[2]); VecRestoreArray(u,&y); return 0; } -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From jed at 59A2.org Tue Jun 22 18:19:08 2010 From: jed at 59A2.org (Jed Brown) Date: Wed, 23 Jun 2010 01:19:08 +0200 Subject: [petsc-users] help about my first petsc program In-Reply-To: <1277248172l.811066l.0l@psu.edu> References: <1277248172l.811066l.0l@psu.edu> Message-ID: <87631au043.fsf@59A2.org> On Tue, 22 Jun 2010 19:09:32 -0400, "XUAN YU" wrote: > I got my program to run. But the results doesn't make any sence. > > At t = 0 y =1.000000e+00 0.000000e+00 0.000000e+00, > At t = 0.01 y =9.996000e-01 4.000000e-04 0.000000e+00, > At t = 0.02 y =9.992002e-01 -4.720016e-02 4.800000e-02, > At t = 0.03 y =7.722397e-01 -6.681768e+02 6.684045e+02, > At t = 0.04 y =-4.466124e+07 -1.338934e+11 1.339381e+11, > At t = 0.05 y =-1.793342e+24 -5.376439e+27 5.378233e+27, > At t = 0.06 y =-2.891574e+57 -8.668938e+60 8.671830e+60, > At t = 0.07 y =-7.517556e+123 -2.253763e+127 2.254515e+127, > At t = 0.08 y =-5.081142e+256 -1.523326e+260 1.523834e+260, > At t = 0.09 y =-inf nan inf, There is a reason people don't use forward Euler for stiff systems. Run with -ts_type beuler or -ts_type theta, after applying the following patch (which fixes your assembly bug). Jed diff --git a/ex1.c b/ex1.c index ed78cd7..e4bc7f2 100644 --- a/ex1.c +++ b/ex1.c @@ -88,25 +88,24 @@ PetscErrorCode FormJacobian(TS ts,PetscReal t,Vec X,Mat *J,Mat *B,MatStructure { Mat jac=*J; PetscScalar v[3],*x; - PetscInt row,col; + PetscInt row[3] = {0,1,2},col; PetscErrorCode ierr; ierr = VecGetArray(X,&x);CHKERRQ(ierr); v[0]=-0.04; v[1]=0.04; v[2]=0.0; - row=0; col=0; - ierr = MatSetValues(jac,3,&row,1,&col,v,INSERT_VALUES);CHKERRQ(ierr); + ierr = MatSetValues(jac,3,row,1,&col,v,INSERT_VALUES);CHKERRQ(ierr); v[0]=1.0e4*x[2]; v[1]=-1.0e4*x[2]-6.0e7*x[1]; v[2]=6.0e7*x[1]; col=1; - ierr = MatSetValues(jac,3,&row,1,&col,v,INSERT_VALUES);CHKERRQ(ierr); + ierr = MatSetValues(jac,3,row,1,&col,v,INSERT_VALUES);CHKERRQ(ierr); v[0]=1.0e4*x[1]; v[1]=-1.0e4*x[1]; v[2]=0.0; col=2; - ierr = MatSetValues(jac,3,&row,1,&col,v,INSERT_VALUES);CHKERRQ(ierr); + ierr = MatSetValues(jac,3,row,1,&col,v,INSERT_VALUES);CHKERRQ(ierr); ierr = MatAssemblyBegin(jac,MAT_FINAL_ASSEMBLY);CHKERRQ(ierr); ierr = VecRestoreArray(X,&x);CHKERRQ(ierr); ierr = MatAssemblyEnd(jac,MAT_FINAL_ASSEMBLY);CHKERRQ(ierr); From knepley at gmail.com Tue Jun 22 18:21:11 2010 From: knepley at gmail.com (Matthew Knepley) Date: Tue, 22 Jun 2010 23:21:11 +0000 Subject: [petsc-users] updating values in a DA Global array In-Reply-To: References: Message-ID: On Tue, Jun 22, 2010 at 12:59 PM, Mark Cheeseman < mark.cheeseman at kaust.edu.sa> wrote: > Hi, > > I am trying to write a PETSc program in FORTRAN90 where I need to update a > single value in a global distributed array. I know the global coordinates > of the position that needs to be updated in the global array but I cannot > get the mapping from the local vector correct. In this case, I am working > on a domain with global dimensions [arraysize(1),arraysize(2),arraysize(3)] > and I want to alter a single point in the global distributed array, uGLOBAL, > at the global position [arraysize(1)/2-1,arraysize(2)-1,3]. I cannot seem > to be able to do this... what am I doing wrong? > > ... > DA da > Vec uGLOBAL, uLOCAL, tmp > PetscErrorCode ierr > PetscScalar, pointer :: xx > PetscInt rank, source_rank, i,j,k, row > > .... 
> > call MPI_Comm_rank( PETSC_COMM_WORLD, rank, ierr ) > call DACreate3d( PETSC_COMM_WORLD, DA_NONPERIODIC, DA_STENCIL_BOX, & > arraysize(1), arraysize(2), arraysize(3), > PETSC_DECIDE, & > PETSC_DECIDE, PETSC_DECIDE, 1, 5, > PETSC_NULL_INTEGER, & > PETSC_NULL_INTEGER, PETSC_NULL_INTEGER, da, ierr) > call DACreateGlobalVector( da, pNOW, ierr ) > call DAGetCorners( da, xs, ys, zs, xl, yl, zl, ierr ) > call DAVecGetArrayF90() > do i = xs,xs+xl-1 > if ( i.eq.arraysize(1)/2-1 ) then > do j = ys,ys+yl-1 > if ( j.eq.arraysize(2)/2-1 ) then > do k = zs,zs+zl-1 > if ( k.eq.3 ) then > array(k,j,i) = pressure > endif > enddo > endif > enddo > endif > enddo > > call DAVecRestoreArrayF90() That should work. Matt Thank you, > Mark > > -- > Mark Patrick Cheeseman > > Computational Scientist > KSL (KAUST Supercomputing Laboratory) > Building 1, Office #126 > King Abdullah University of Science & Technology > Thuwal 23955-6900 > Kingdom of Saudi Arabia > > EMAIL : mark.cheeseman at kaust.edu.sa > PHONE : +966 (2) 808 0221 (office) > +966 (54) 470 1082 (mobile) > SKYPE : mark.patrick.cheeseman > -- What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. -- Norbert Wiener -------------- next part -------------- An HTML attachment was scrubbed... URL: From bsmith at mcs.anl.gov Tue Jun 22 18:30:47 2010 From: bsmith at mcs.anl.gov (Barry Smith) Date: Tue, 22 Jun 2010 18:30:47 -0500 Subject: [petsc-users] updating values in a DA Global array In-Reply-To: References: Message-ID: <2581B7C1-2F65-4646-842B-5A4F0AB73D0D@mcs.anl.gov> Matt, Actually DAVecRestoreArrayF90() only works for 1 dimensional DAs. The F90 interface has to be written for 2 and 3d and the creation of Fortran 3d arrays (and 4d for dof > 1) stuff written. Barry On Jun 22, 2010, at 6:21 PM, Matthew Knepley wrote: > On Tue, Jun 22, 2010 at 12:59 PM, Mark Cheeseman wrote: > Hi, > > I am trying to write a PETSc program in FORTRAN90 where I need to update a single value in a global distributed array. I know the global coordinates of the position that needs to be updated in the global array but I cannot get the mapping from the local vector correct. In this case, I am working on a domain with global dimensions [arraysize(1),arraysize(2),arraysize(3)] and I want to alter a single point in the global distributed array, uGLOBAL, at the global position [arraysize(1)/2-1,arraysize(2)-1,3]. I cannot seem to be able to do this... what am I doing wrong? > > ... > DA da > Vec uGLOBAL, uLOCAL, tmp > PetscErrorCode ierr > PetscScalar, pointer :: xx > PetscInt rank, source_rank, i,j,k, row > > .... > > call MPI_Comm_rank( PETSC_COMM_WORLD, rank, ierr ) > call DACreate3d( PETSC_COMM_WORLD, DA_NONPERIODIC, DA_STENCIL_BOX, & > arraysize(1), arraysize(2), arraysize(3), PETSC_DECIDE, & > PETSC_DECIDE, PETSC_DECIDE, 1, 5, PETSC_NULL_INTEGER, & > PETSC_NULL_INTEGER, PETSC_NULL_INTEGER, da, ierr) > call DACreateGlobalVector( da, pNOW, ierr ) > call DAGetCorners( da, xs, ys, zs, xl, yl, zl, ierr ) > > call DAVecGetArrayF90() > > do i = xs,xs+xl-1 > if ( i.eq.arraysize(1)/2-1 ) then > do j = ys,ys+yl-1 > if ( j.eq.arraysize(2)/2-1 ) then > do k = zs,zs+zl-1 > if ( k.eq.3 ) then > > array(k,j,i) = pressure > > endif > enddo > endif > enddo > endif > enddo > > > call > > That should work. 
> > Matt > > Thank you, > Mark > > -- > Mark Patrick Cheeseman > > Computational Scientist > KSL (KAUST Supercomputing Laboratory) > Building 1, Office #126 > King Abdullah University of Science & Technology > Thuwal 23955-6900 > Kingdom of Saudi Arabia > > EMAIL : mark.cheeseman at kaust.edu.sa > PHONE : +966 (2) 808 0221 (office) > +966 (54) 470 1082 (mobile) > SKYPE : mark.patrick.cheeseman > > > > -- > What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. > -- Norbert Wiener -------------- next part -------------- An HTML attachment was scrubbed... URL: From jed at 59A2.org Tue Jun 22 18:32:20 2010 From: jed at 59A2.org (Jed Brown) Date: Wed, 23 Jun 2010 01:32:20 +0200 Subject: [petsc-users] updating values in a DA Global array In-Reply-To: References: Message-ID: <8739wetzi3.fsf@59A2.org> On Tue, 22 Jun 2010 23:21:11 +0000, Matthew Knepley wrote: > That should work. I thought *GetArrayF90 requires you have to call a "local function" for multidimensional arrays, as in snes ex5f90.F. If you declare the pointer as multi-dimensional, you get errors about shape-matching rules, if you try to use a usual pointer (1D) you get errors about incorrect subscripting. Is there a workaround, or do you always need to call the local version to get it to type-check? Jed From bsmith at mcs.anl.gov Tue Jun 22 18:36:25 2010 From: bsmith at mcs.anl.gov (Barry Smith) Date: Tue, 22 Jun 2010 18:36:25 -0500 Subject: [petsc-users] updating values in a DA Global array In-Reply-To: <8739wetzi3.fsf@59A2.org> References: <8739wetzi3.fsf@59A2.org> Message-ID: <490C642A-8D99-43E4-97EE-8CEC408EAB40@mcs.anl.gov> On Jun 22, 2010, at 6:32 PM, Jed Brown wrote: > On Tue, 22 Jun 2010 23:21:11 +0000, Matthew Knepley wrote: >> That should work. > > I thought *GetArrayF90 requires you have to call a "local function" for > multidimensional arrays, as in snes ex5f90.F. VecGetArrayF90 requires 1d array (since Vec has no concept of multi-dimension) > If you declare the > pointer as multi-dimensional, you get errors about shape-matching rules, > if you try to use a usual pointer (1D) you get errors about incorrect > subscripting. Is there a workaround, or do you always need to call the > local version to get it to type-check? In theory someone could take the energy to finish the DAVecGetArrayF90() for 2 and 3 dimensions and get what is needed. In practice the only people who know enough to finish it despise Fortran so much you need to bash them on the head to get it done (and that includes me). See my other email for directions. Barry > > Jed From knepley at gmail.com Tue Jun 22 18:53:59 2010 From: knepley at gmail.com (Matthew Knepley) Date: Tue, 22 Jun 2010 23:53:59 +0000 Subject: [petsc-users] updating values in a DA Global array In-Reply-To: <2581B7C1-2F65-4646-842B-5A4F0AB73D0D@mcs.anl.gov> References: <2581B7C1-2F65-4646-842B-5A4F0AB73D0D@mcs.anl.gov> Message-ID: On Tue, Jun 22, 2010 at 11:30 PM, Barry Smith wrote: > > Matt, > > Actually DAVecRestoreArrayF90() only works for 1 dimensional DAs. The > F90 interface has to be written for 2 and 3d and the creation of Fortran 3d > arrays (and 4d for dof > 1) stuff written. > Ah, I see. Mark, if you can bear C for 1 function, it will work. The problem for the F90 is that we are cutting out a section of a big array, and making it look multidimensional. It is possible with F90, but a pain to get everything right in the array descriptor and no one has had to stamina to do it yet. 
Sorry about that, Matt > Barry > > > On Jun 22, 2010, at 6:21 PM, Matthew Knepley wrote: > > On Tue, Jun 22, 2010 at 12:59 PM, Mark Cheeseman < > mark.cheeseman at kaust.edu.sa> wrote: > >> Hi, >> >> I am trying to write a PETSc program in FORTRAN90 where I need to update a >> single value in a global distributed array. I know the global coordinates >> of the position that needs to be updated in the global array but I cannot >> get the mapping from the local vector correct. In this case, I am working >> on a domain with global dimensions [arraysize(1),arraysize(2),arraysize(3)] >> and I want to alter a single point in the global distributed array, uGLOBAL, >> at the global position [arraysize(1)/2-1,arraysize(2)-1,3]. I cannot seem >> to be able to do this... what am I doing wrong? >> >> ... >> DA da >> Vec uGLOBAL, uLOCAL, tmp >> PetscErrorCode ierr >> PetscScalar, pointer :: xx >> PetscInt rank, source_rank, i,j,k, row >> >> .... >> >> call MPI_Comm_rank( PETSC_COMM_WORLD, rank, ierr ) >> call DACreate3d( PETSC_COMM_WORLD, DA_NONPERIODIC, DA_STENCIL_BOX, & >> arraysize(1), arraysize(2), arraysize(3), >> PETSC_DECIDE, & >> PETSC_DECIDE, PETSC_DECIDE, 1, 5, >> PETSC_NULL_INTEGER, & >> PETSC_NULL_INTEGER, PETSC_NULL_INTEGER, da, ierr) >> call DACreateGlobalVector( da, pNOW, ierr ) >> call DAGetCorners( da, xs, ys, zs, xl, yl, zl, ierr ) >> > > call DAVecGetArrayF90() > > >> do i = xs,xs+xl-1 >> if ( i.eq.arraysize(1)/2-1 ) then >> do j = ys,ys+yl-1 >> if ( j.eq.arraysize(2)/2-1 ) then >> do k = zs,zs+zl-1 >> if ( k.eq.3 ) then >> > > array(k,j,i) = pressure > > >> endif >> enddo >> endif >> enddo >> endif >> enddo >> >> > call > > > > > That should work. > > Matt > > Thank you, >> Mark >> >> -- >> Mark Patrick Cheeseman >> >> Computational Scientist >> KSL (KAUST Supercomputing Laboratory) >> Building 1, Office #126 >> King Abdullah University of Science & Technology >> Thuwal 23955-6900 >> Kingdom of Saudi Arabia >> >> EMAIL : mark.cheeseman at kaust.edu.sa >> PHONE : +966 (2) 808 0221 (office) >> +966 (54) 470 1082 (mobile) >> SKYPE : mark.patrick.cheeseman >> > > > > -- > What most experimenters take for granted before they begin their > experiments is infinitely more interesting than any results to which their > experiments lead. > -- Norbert Wiener > > > -- What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. -- Norbert Wiener -------------- next part -------------- An HTML attachment was scrubbed... URL: From bsmith at mcs.anl.gov Tue Jun 22 23:35:08 2010 From: bsmith at mcs.anl.gov (Barry Smith) Date: Tue, 22 Jun 2010 23:35:08 -0500 Subject: [petsc-users] updating values in a DA Global array In-Reply-To: <490C642A-8D99-43E4-97EE-8CEC408EAB40@mcs.anl.gov> References: <8739wetzi3.fsf@59A2.org> <490C642A-8D99-43E4-97EE-8CEC408EAB40@mcs.anl.gov> Message-ID: <83A15822-792D-4840-A682-D0C1B9455273@mcs.anl.gov> On Jun 22, 2010, at 6:36 PM, Barry Smith wrote: > > On Jun 22, 2010, at 6:32 PM, Jed Brown wrote: > >> On Tue, 22 Jun 2010 23:21:11 +0000, Matthew Knepley wrote: >>> That should work. >> >> I thought *GetArrayF90 requires you have to call a "local function" for >> multidimensional arrays, as in snes ex5f90.F. > > VecGetArrayF90 requires 1d array (since Vec has no concept of multi-dimension) > >> If you declare the >> pointer as multi-dimensional, you get errors about shape-matching rules, >> if you try to use a usual pointer (1D) you get errors about incorrect >> subscripting. 
Is there a workaround, or do you always need to call the >> local version to get it to type-check? > > In theory someone could take the energy to finish the DAVecGetArrayF90() for 2 and 3 dimensions and get what is needed. In practice the only people who know enough to finish it despise Fortran so much you need to bash them on the head to get it done (and that includes me). See my other email for directions. > Looking at this more closely and trying to do it, I think it may not be possible in F90. You don't seem to be able to set the range of indices in the ptr array (it seems it must always be 1:somevalue) so we can't do the trick we do in C and F77 of embedding a smaller 2d array inside an imaginary larger array and accessing values out of the smaller array by using indices of the larger array. Barry > Barry > >> >> Jed > From jed at 59A2.org Wed Jun 23 04:01:33 2010 From: jed at 59A2.org (Jed Brown) Date: Wed, 23 Jun 2010 11:01:33 +0200 Subject: [petsc-users] updating values in a DA Global array In-Reply-To: <83A15822-792D-4840-A682-D0C1B9455273@mcs.anl.gov> References: <8739wetzi3.fsf@59A2.org> <490C642A-8D99-43E4-97EE-8CEC408EAB40@mcs.anl.gov> <83A15822-792D-4840-A682-D0C1B9455273@mcs.anl.gov> Message-ID: <87y6e6ruky.fsf@59A2.org> On Tue, 22 Jun 2010 23:35:08 -0500, Barry Smith wrote: > Looking at this more closely and trying to do it, I think it may > not be possible in F90. You don't seem to be able to set the range > of indices in the ptr array (it seems it must always be > 1:somevalue) so we can't do the trick we do in C and F77 of > embedding a smaller 2d array inside an imaginary larger array and > accessing values out of the smaller array by using indices of the > larger array. Is it possible to declare the bounds when you declare the pointer? Even if that worked, it's not clearly less awkward than using a separate local function, which is not completely unreasonable. Slightly awkward to work with, but it does direct addressing which we can't do for multi-dimensional non-0-based arrays in C (C99 offers direct indexing for runtime-dimensioned 0-based arrays). Jed From fpacull at fluorem.com Wed Jun 23 04:45:56 2010 From: fpacull at fluorem.com (francois pacull) Date: Wed, 23 Jun 2010 11:45:56 +0200 Subject: [petsc-users] MatGetSubMatrix and memory usage Message-ID: <4C21D7D4.4080602@fluorem.com> Dear PETSc team, I have a question regarding the MatGetSubMatrix routine (release 3.0.0-p7). I am using it to change the partitioning of a MATMPIAIJ matrix: ierr = MatGetSubMatrix(Aold,ISrow,IScol,PETSC_DECIDE,MAT_INITIAL_MATRIX,&Anew); Using the PetscMemoryGetCurrentUsage routine, I noticed that the memory size is increased by about 100% when calling MatGetSubMatrix: - memory usage : 267 MB ierr = MatGetSubMatrix(Aold,ISrow,IScol,PETSC_DECIDE,MAT_INITIAL_MATRIX,&Anew); - memory usage : 780 MB ierr = MatDestroy(Aold); - memory usage : 528 MB In order to reduce the memory, I have to duplicate Anew in the following way: - memory usage : 267 MB ierr = MatGetSubMatrix(Aold,ISrow,IScol,PETSC_DECIDE,MAT_INITIAL_MATRIX,&Anew);CHKERRQ(ierr); - memory usage : 780 MB ierr = MatDestroy(Aold); - memory usage : 528 MB ierr = MatDuplicate(Anew,MAT_COPY_VALUES,&Atemp);CHKERRQ(ierr); - memory usage : 780 MB ierr = MatDestroy(Anew);CHKERRQ(ierr); - memory usage : 278 MB Anew = Atemp; However, the peak memory is still about three times the size of Aold. Am I doing something wrong? Why is it using so much memory? Is there a way to avoid this memory peak? 
I noticed the same thing when using the release 3.1-p1 Thanks for your help, francois pacull. From jed at 59A2.org Wed Jun 23 05:31:20 2010 From: jed at 59A2.org (Jed Brown) Date: Wed, 23 Jun 2010 12:31:20 +0200 Subject: [petsc-users] MatGetSubMatrix and memory usage In-Reply-To: <4C21D7D4.4080602@fluorem.com> References: <4C21D7D4.4080602@fluorem.com> Message-ID: <87pqzirqfb.fsf@59A2.org> On Wed, 23 Jun 2010 11:45:56 +0200, francois pacull wrote: > Dear PETSc team, > > I have a question regarding the MatGetSubMatrix routine (release > 3.0.0-p7). I am using it to change the partitioning of a MATMPIAIJ matrix: How many processes are you using to provide the numbers below? The MPIAIJ implementation does a non-scalable operation ISAllGather and caches the result so that all future calls (with MAT_REUSE_MATRIX) will be fast. If you won't be doing the operation again, you could destroy this cache (it is PetscObjectCompose'd with the submatrix under the name "ISAllGather"), but it won't change the peak usage since that index set is needed when extracting the submatrix. It is possible to write a scalable MatGetSubMatrix, but it would be significantly more complicated than the current implementation and hasn't been done. Is this issue coming up with your use of PCFieldSplit? In that case, I would suggest living with the memory use until you find a split that you like, then create a MatShell that holds the blocks separately and implement MatGetSubMatrix (by matching against your distributed index sets). Dmitry is working on a new matrix type in petsc-dev that may be able to do this for you, but it's not ready yet. Jed From bsmith at mcs.anl.gov Wed Jun 23 09:16:59 2010 From: bsmith at mcs.anl.gov (Barry Smith) Date: Wed, 23 Jun 2010 09:16:59 -0500 Subject: [petsc-users] updating values in a DA Global array In-Reply-To: <87y6e6ruky.fsf@59A2.org> References: <8739wetzi3.fsf@59A2.org> <490C642A-8D99-43E4-97EE-8CEC408EAB40@mcs.anl.gov> <83A15822-792D-4840-A682-D0C1B9455273@mcs.anl.gov> <87y6e6ruky.fsf@59A2.org> Message-ID: <64D28DFA-D72F-42C7-A913-2895457D78DA@mcs.anl.gov> On Jun 23, 2010, at 4:01 AM, Jed Brown wrote: > On Tue, 22 Jun 2010 23:35:08 -0500, Barry Smith wrote: >> Looking at this more closely and trying to do it, I think it may >> not be possible in F90. You don't seem to be able to set the range >> of indices in the ptr array (it seems it must always be >> 1:somevalue) so we can't do the trick we do in C and F77 of >> embedding a smaller 2d array inside an imaginary larger array and >> accessing values out of the smaller array by using indices of the >> larger array. > > Is it possible to declare the bounds when you declare the pointer? I cannot find any F90 syntax that allows that. Barry > Even > if that worked, it's not clearly less awkward than using a separate > local function, which is not completely unreasonable. Slightly awkward > to work with, but it does direct addressing which we can't do for > multi-dimensional non-0-based arrays in C (C99 offers direct indexing > for runtime-dimensioned 0-based arrays). > > Jed From xxy113 at psu.edu Wed Jun 23 09:22:19 2010 From: xxy113 at psu.edu (Xuan YU) Date: Wed, 23 Jun 2010 10:22:19 -0400 Subject: [petsc-users] help about my first petsc program In-Reply-To: <87631au043.fsf@59A2.org> References: <1277248172l.811066l.0l@psu.edu> <87631au043.fsf@59A2.org> Message-ID: I got the correct results! Thanks! Could you please tell me whether I can use TS to solve ODE without providing Jacobian matrix? 
On Jun 22, 2010, at 7:19 PM, Jed Brown wrote: > On Tue, 22 Jun 2010 19:09:32 -0400, "XUAN YU" wrote: >> I got my program to run. But the results doesn't make any sence. >> >> At t = 0 y =1.000000e+00 0.000000e+00 0.000000e+00, >> At t = 0.01 y =9.996000e-01 4.000000e-04 0.000000e+00, >> At t = 0.02 y =9.992002e-01 -4.720016e-02 4.800000e-02, >> At t = 0.03 y =7.722397e-01 -6.681768e+02 6.684045e+02, >> At t = 0.04 y =-4.466124e+07 -1.338934e+11 1.339381e+11, >> At t = 0.05 y =-1.793342e+24 -5.376439e+27 5.378233e+27, >> At t = 0.06 y =-2.891574e+57 -8.668938e+60 8.671830e+60, >> At t = 0.07 y =-7.517556e+123 -2.253763e+127 2.254515e+127, >> At t = 0.08 y =-5.081142e+256 -1.523326e+260 1.523834e+260, >> At t = 0.09 y =-inf nan inf, > > There is a reason people don't use forward Euler for stiff systems. > Run > with -ts_type beuler or -ts_type theta, after applying the following > patch (which fixes your assembly bug). > > Jed > > diff --git a/ex1.c b/ex1.c > index ed78cd7..e4bc7f2 100644 > --- a/ex1.c > +++ b/ex1.c > @@ -88,25 +88,24 @@ PetscErrorCode FormJacobian(TS ts,PetscReal > t,Vec X,Mat *J,Mat *B,MatStructure > { > Mat jac=*J; > PetscScalar v[3],*x; > - PetscInt row,col; > + PetscInt row[3] = {0,1,2},col; > PetscErrorCode ierr; > ierr = VecGetArray(X,&x);CHKERRQ(ierr); > v[0]=-0.04; > v[1]=0.04; > v[2]=0.0; > - row=0; > col=0; > - ierr = MatSetValues(jac,3,&row, > 1,&col,v,INSERT_VALUES);CHKERRQ(ierr); > + ierr = MatSetValues(jac,3,row, > 1,&col,v,INSERT_VALUES);CHKERRQ(ierr); > v[0]=1.0e4*x[2]; > v[1]=-1.0e4*x[2]-6.0e7*x[1]; > v[2]=6.0e7*x[1]; > col=1; > - ierr = MatSetValues(jac,3,&row, > 1,&col,v,INSERT_VALUES);CHKERRQ(ierr); > + ierr = MatSetValues(jac,3,row, > 1,&col,v,INSERT_VALUES);CHKERRQ(ierr); > v[0]=1.0e4*x[1]; > v[1]=-1.0e4*x[1]; > v[2]=0.0; > col=2; > - ierr = MatSetValues(jac,3,&row, > 1,&col,v,INSERT_VALUES);CHKERRQ(ierr); > + ierr = MatSetValues(jac,3,row, > 1,&col,v,INSERT_VALUES);CHKERRQ(ierr); > ierr = MatAssemblyBegin(jac,MAT_FINAL_ASSEMBLY);CHKERRQ(ierr); > ierr = VecRestoreArray(X,&x);CHKERRQ(ierr); > ierr = MatAssemblyEnd(jac,MAT_FINAL_ASSEMBLY);CHKERRQ(ierr); > Xuan YU (??) xxy113 at psu.edu -------------- next part -------------- An HTML attachment was scrubbed... URL: From C.Klaij at marin.nl Wed Jun 23 09:32:53 2010 From: C.Klaij at marin.nl (Klaij, Christiaan) Date: Wed, 23 Jun 2010 14:32:53 +0000 Subject: [petsc-users] MatShell & PCShell Message-ID: Hi, For my linear problem Ax=b I have available the vector b, the action Ax of the matrix and the action Px of a given preconditioner. Is this enough to use the Krylov solvers in PETSc? So far I've looked at ex14f and ex15f which use either MatShell or PCShell but not both. I'm not sure whether it's at all possible, so I would greatly appreciate your advice before starting some trial and error. Regards, Chris dr. ir.ChristiaanKlaij CFD Researcher Research & Development E mailto:C.Klaij at marin.nl T +31 317 49 33 44M MARIN 2, Haagsteeg, P.O. Box 28, 6700 AA Wageningen, The Netherlands T +31 317 49 39 11, F +31 317 49 32 45, I www.marin.nl ----------------------------- [Insert your disclaimer here] ----------------------------- From jed at 59A2.org Wed Jun 23 09:44:27 2010 From: jed at 59A2.org (Jed Brown) Date: Wed, 23 Jun 2010 16:44:27 +0200 Subject: [petsc-users] help about my first petsc program In-Reply-To: References: <1277248172l.811066l.0l@psu.edu> <87631au043.fsf@59A2.org> Message-ID: <8739wdst9w.fsf@59A2.org> On Wed, 23 Jun 2010 10:22:19 -0400, Xuan YU wrote: > I got the correct results! Thanks! 
> > Could you please tell me whether I can use TS to solve ODE without > providing Jacobian matrix? Sure, use -snes_mf to solve it without assembling any matrix (linear systems are solved with an unpreconditioned Krylov method). Note that this usually will not converge for larger problems. To assemble a dense matrix by finite differencing, you can use -snes_fd. To use coloring to compute a sparse matrix efficiently, you have to know the sparsity pattern of the Jacobian, try something like (this is somewhat different in petsc-dev than release). ISColoring iscoloring; ierr = DAGetColoring(rd->da,IS_COLORING_GLOBAL,MATAIJ,&iscoloring);CHKERRQ(ierr); ierr = MatFDColoringCreate(B,iscoloring,&matfdcoloring);CHKERRQ(ierr); ierr = ISColoringDestroy(iscoloring);CHKERRQ(ierr); ierr = MatFDColoringSetFunction(matfdcoloring,(PetscErrorCode(*)(void))SNESTSFormFunction,ts);CHKERRQ(ierr); ierr = MatFDColoringSetFromOptions(matfdcoloring);CHKERRQ(ierr); ierr = SNESSetJacobian(snes,A,B,SNESDefaultComputeJacobianColor,matfdcoloring);CHKERRQ(ierr); Jed From jed at 59A2.org Wed Jun 23 09:46:30 2010 From: jed at 59A2.org (Jed Brown) Date: Wed, 23 Jun 2010 16:46:30 +0200 Subject: [petsc-users] MatShell & PCShell In-Reply-To: References: Message-ID: <87zkylrem1.fsf@59A2.org> On Wed, 23 Jun 2010 14:32:53 +0000, "Klaij, Christiaan" wrote: > Hi, > > For my linear problem Ax=b I have available the vector b, the action > Ax of the matrix and the action Px of a given preconditioner. Is this > enough to use the Krylov solvers in PETSc? So far I've looked at ex14f > and ex15f which use either MatShell or PCShell but not both. I'm not > sure whether it's at all possible, so I would greatly appreciate your > advice before starting some trial and error. You can certainly use both, and you pretty much have to unless you have an assembled matrix that is "similar" to your MatShell because, for obvious reasons, PETSc's preconditioners can't do much with your MatShell. Jed From fpacull at fluorem.com Wed Jun 23 10:17:41 2010 From: fpacull at fluorem.com (francois pacull) Date: Wed, 23 Jun 2010 17:17:41 +0200 Subject: [petsc-users] MatGetSubMatrix and memory usage In-Reply-To: <87pqzirqfb.fsf@59A2.org> References: <4C21D7D4.4080602@fluorem.com> <87pqzirqfb.fsf@59A2.org> Message-ID: <4C222595.2010202@fluorem.com> Thanks Jed for your quick and helpful answer! On 23/06/2010 12:31, Jed Brown wrote: > On Wed, 23 Jun 2010 11:45:56 +0200, francois pacull wrote: > >> Dear PETSc team, >> >> I have a question regarding the MatGetSubMatrix routine (release >> 3.0.0-p7). I am using it to change the partitioning of a MATMPIAIJ matrix: >> > How many processes are you using to provide the numbers below? > I used only 2 processes in that test. > The MPIAIJ implementation does a non-scalable operation ISAllGather and > caches the result so that all future calls (with MAT_REUSE_MATRIX) will > be fast. If you won't be doing the operation again, you could destroy > this cache (it is PetscObjectCompose'd with the submatrix under the name > "ISAllGather"), but it won't change the peak usage since that index set > is needed when extracting the submatrix. > > It is possible to write a scalable MatGetSubMatrix, but it would be > significantly more complicated than the current implementation and > hasn't been done. Is this issue coming up with your use of > PCFieldSplit? No, I was just trying to understand the results I got from a call to PetscMemoryGetMaximumUsage. I am dealing with a very large matrix and trying to save memory... 
> In that case, I would suggest living with the memory use > until you find a split that you like, then create a MatShell that holds > the blocks separately and implement MatGetSubMatrix (by matching against > your distributed index sets). Dmitry is working on a new matrix type in > petsc-dev that may be able to do this for you, but it's not ready yet. > > Jed > > francois. From jed at 59A2.org Wed Jun 23 10:28:05 2010 From: jed at 59A2.org (Jed Brown) Date: Wed, 23 Jun 2010 17:28:05 +0200 Subject: [petsc-users] MatGetSubMatrix and memory usage In-Reply-To: <4C222595.2010202@fluorem.com> References: <4C21D7D4.4080602@fluorem.com> <87pqzirqfb.fsf@59A2.org> <4C222595.2010202@fluorem.com> Message-ID: <87wrtprcoq.fsf@59A2.org> On Wed, 23 Jun 2010 17:17:41 +0200, francois pacull wrote: > I used only 2 processes in that test. I would have predicted lower, but there is also a matrix (used to collect local parts) composed under "SubMatrix". See MatGetSubMatrix_MPIAIJ and MatGetSubMatrix_MPIAIJ_Private for details. You can destroy them if you want, but it won't change peak usage, unfortunately. Jed From jed at 59A2.org Wed Jun 23 11:14:43 2010 From: jed at 59A2.org (Jed Brown) Date: Wed, 23 Jun 2010 18:14:43 +0200 Subject: [petsc-users] help about my first petsc program In-Reply-To: References: <1277248172l.811066l.0l@psu.edu> <87631au043.fsf@59A2.org> Message-ID: <87ocf1raj0.fsf@59A2.org> On Wed, 23 Jun 2010 10:22:19 -0400, Xuan YU wrote: > I got the correct results! Thanks! If you are feeling like experimenting, you could try -ts_type gl which is a very new adaptive order adaptive step method for stiff systems. It seems to be okay on your problem, but the controller is currently pretty fragile (it's possible for step size to go to zero due to poor error estimates, or for length/order to oscillate spuriously with several step period). Jed From xxy113 at psu.edu Wed Jun 23 11:24:36 2010 From: xxy113 at psu.edu (Xuan YU) Date: Wed, 23 Jun 2010 12:24:36 -0400 Subject: [petsc-users] help about my first petsc program In-Reply-To: <87ocf1raj0.fsf@59A2.org> References: <1277248172l.811066l.0l@psu.edu> <87631au043.fsf@59A2.org> <87ocf1raj0.fsf@59A2.org> Message-ID: <91C98243-11EB-42B4-96AB-7AAEDA86CCEB@psu.edu> On Jun 23, 2010, at 12:14 PM, Jed Brown wrote: > On Wed, 23 Jun 2010 10:22:19 -0400, Xuan YU wrote: >> I got the correct results! Thanks! > > If you are feeling like experimenting, you could try -ts_type gl which > is a very new adaptive order adaptive step method for stiff > systems. It > seems to be okay on your problem, but the controller is currently > pretty > fragile (it's possible for step size to go to zero due to poor error > estimates, or for length/order to oscillate spuriously with several > step > period). > > Jed > I tried these method. Pseudo and Beuler work well. gl kept running after t=1.665007e+01. Actually, I will solve a global ODE system in form of y'=f(t,y;x), x is the forcing. The problem has been solved by cvode(sundials). I want to use petsc to improve efficiency. From jed at 59A2.org Wed Jun 23 11:38:10 2010 From: jed at 59A2.org (Jed Brown) Date: Wed, 23 Jun 2010 18:38:10 +0200 Subject: [petsc-users] help about my first petsc program In-Reply-To: <91C98243-11EB-42B4-96AB-7AAEDA86CCEB@psu.edu> References: <1277248172l.811066l.0l@psu.edu> <87631au043.fsf@59A2.org> <87ocf1raj0.fsf@59A2.org> <91C98243-11EB-42B4-96AB-7AAEDA86CCEB@psu.edu> Message-ID: <87lja5r9fx.fsf@59A2.org> On Wed, 23 Jun 2010 12:24:36 -0400, Xuan YU wrote: > I tried these method. 
Pseudo and Beuler work well. gl kept running > after t=1.665007e+01. Different code from the kinetics problem you pasted? GL worked when I ran it [1], but I'm not surprised if it fails on your real problem. > Actually, I will solve a global ODE system in form of y'=f(t,y;x), x > is the forcing. The problem has been solved by cvode(sundials). I want > to use petsc to improve efficiency. That is likely largely about preconditioning, note that you can also use CVODE through PETSc, with PETSc preconditioners, but that doesn't use SNES or KSP so you have fewer choices. Jed [1] $ ./a.out -ts_type gl At t = 0 y =1.000000e+00 0.000000e+00 0.000000e+00 At t = 0.01 y =9.996023e-01 4.536940e-05 3.523190e-04 At t = 0.012 y =9.995227e-01 3.460238e-05 4.427435e-04 At t = 0.0128944 y =9.994870e-01 3.630127e-05 4.766579e-04 At t = 0.0132594 y =9.994725e-01 3.636046e-05 4.911255e-04 At t = 0.0140389 y =9.994415e-01 3.638138e-05 5.221217e-04 At t = 0.0167059 y =9.993355e-01 3.639729e-05 6.281372e-04 At t = 0.0239153 y =9.990496e-01 3.638965e-05 9.140576e-04 At t = 0.0447892 y =9.982260e-01 3.619797e-05 1.737826e-03 At t = 0.0731174 y =9.971190e-01 3.581689e-05 2.845193e-03 At t = 0.109233 y =9.957254e-01 3.583264e-05 4.238734e-03 At t = 0.131687 y =9.948689e-01 3.554250e-05 5.095594e-03 At t = 0.15262 y =9.940770e-01 3.477519e-05 5.888207e-03 At t = 0.189042 y =9.927143e-01 3.556983e-05 7.250171e-03 At t = 0.210466 y =9.919214e-01 3.496951e-05 8.043646e-03 At t = 0.216991 y =9.916812e-01 3.499105e-05 8.283760e-03 At t = 0.228628 y =9.912544e-01 3.495839e-05 8.710620e-03 At t = 0.249814 y =9.904822e-01 3.480390e-05 9.482953e-03 At t = 0.284509 y =9.892306e-01 3.457380e-05 1.073483e-02 At t = 0.329694 y =9.876293e-01 3.428339e-05 1.233642e-02 At t = 0.453362 y =9.833627e-01 3.346441e-05 1.660382e-02 At t = 0.632794 y =9.774858e-01 3.258959e-05 2.248158e-02 At t = 0.763647 y =9.734140e-01 3.178560e-05 2.655426e-02 At t = 0.8 y =9.723125e-01 3.169671e-05 2.765585e-02 Number of timesteps = 23 final time 8.00e-01 $ ./a.out -ts_type sundials At t = 0 y =1.000000e+00 0.000000e+00 0.000000e+00 At t = 0.8 y =9.723021e-01 3.169016e-05 2.766620e-02 Number of timesteps = 1 final time 8.00e-01 TS Object: type: sundials Sundials integrater does not use SNES! Sundials integrater type BDF: backward differentiation formula Sundials abs tol 1e-06 rel tol 1e-06 Sundials linear solver tolerance factor 0.05 Sundials GMRES max iterations (same as restart in SUNDIALS) 5 Sundials using unmodified (classical) Gram-Schmidt for orthogonalization in GMRES Sundials suggested factor for tolerance scaling 1 Sundials cumulative number of internal steps 28 Sundials no. of calls to rhs function 44 Sundials no. of calls to linear solver setup function 14 Sundials no. of error test failures 2 Sundials no. of nonlinear solver iterations 40 Sundials no. of nonlinear convergence failure 0 Sundials no. of linear iterations 51 Sundials no. of linear convergence failures 0 Sundials no. of preconditioner evaluations 1 Sundials no. of preconditioner solves 88 Sundials no. of Jacobian-vector product evaluations 51 Sundials no. of rhs calls for finite diff. 
Jacobian-vector evals 51 From xxy113 at psu.edu Wed Jun 23 11:45:37 2010 From: xxy113 at psu.edu (Xuan YU) Date: Wed, 23 Jun 2010 12:45:37 -0400 Subject: [petsc-users] help about my first petsc program In-Reply-To: <87lja5r9fx.fsf@59A2.org> References: <1277248172l.811066l.0l@psu.edu> <87631au043.fsf@59A2.org> <87ocf1raj0.fsf@59A2.org> <91C98243-11EB-42B4-96AB-7AAEDA86CCEB@psu.edu> <87lja5r9fx.fsf@59A2.org> Message-ID: On Jun 23, 2010, at 12:38 PM, Jed Brown wrote: > On Wed, 23 Jun 2010 12:24:36 -0400, Xuan YU wrote: >> I tried these method. Pseudo and Beuler work well. gl kept running >> after t=1.665007e+01. > > Different code from the kinetics problem you pasted? GL worked when I > ran it [1], but I'm not surprised if it fails on your real problem. > >> Actually, I will solve a global ODE system in form of y'=f(t,y;x), x >> is the forcing. The problem has been solved by cvode(sundials). I >> want >> to use petsc to improve efficiency. > > That is likely largely about preconditioning, note that you can also > use > CVODE through PETSc, with PETSc preconditioners, but that doesn't use > SNES or KSP so you have fewer choices. > > Jed > > > [1] > > $ ./a.out -ts_type gl > At t = 0 y =1.000000e+00 0.000000e+00 0.000000e+00 > At t = 0.01 y =9.996023e-01 4.536940e-05 3.523190e-04 > At t = 0.012 y =9.995227e-01 3.460238e-05 4.427435e-04 > At t = 0.0128944 y =9.994870e-01 3.630127e-05 4.766579e-04 > At t = 0.0132594 y =9.994725e-01 3.636046e-05 4.911255e-04 > At t = 0.0140389 y =9.994415e-01 3.638138e-05 5.221217e-04 > At t = 0.0167059 y =9.993355e-01 3.639729e-05 6.281372e-04 > At t = 0.0239153 y =9.990496e-01 3.638965e-05 9.140576e-04 > At t = 0.0447892 y =9.982260e-01 3.619797e-05 1.737826e-03 > At t = 0.0731174 y =9.971190e-01 3.581689e-05 2.845193e-03 > At t = 0.109233 y =9.957254e-01 3.583264e-05 4.238734e-03 > At t = 0.131687 y =9.948689e-01 3.554250e-05 5.095594e-03 > At t = 0.15262 y =9.940770e-01 3.477519e-05 5.888207e-03 > At t = 0.189042 y =9.927143e-01 3.556983e-05 7.250171e-03 > At t = 0.210466 y =9.919214e-01 3.496951e-05 8.043646e-03 > At t = 0.216991 y =9.916812e-01 3.499105e-05 8.283760e-03 > At t = 0.228628 y =9.912544e-01 3.495839e-05 8.710620e-03 > At t = 0.249814 y =9.904822e-01 3.480390e-05 9.482953e-03 > At t = 0.284509 y =9.892306e-01 3.457380e-05 1.073483e-02 > At t = 0.329694 y =9.876293e-01 3.428339e-05 1.233642e-02 > At t = 0.453362 y =9.833627e-01 3.346441e-05 1.660382e-02 > At t = 0.632794 y =9.774858e-01 3.258959e-05 2.248158e-02 > At t = 0.763647 y =9.734140e-01 3.178560e-05 2.655426e-02 > At t = 0.8 y =9.723125e-01 3.169671e-05 2.765585e-02 > Number of timesteps = 23 final time 8.00e-01 > $ ./a.out -ts_type sundials > At t = 0 y =1.000000e+00 0.000000e+00 0.000000e+00 > At t = 0.8 y =9.723021e-01 3.169016e-05 2.766620e-02 > Number of timesteps = 1 final time 8.00e-01 > > TS Object: > type: sundials > Sundials integrater does not use SNES! > Sundials integrater type BDF: backward differentiation formula > Sundials abs tol 1e-06 rel tol 1e-06 > Sundials linear solver tolerance factor 0.05 > Sundials GMRES max iterations (same as restart in SUNDIALS) 5 > Sundials using unmodified (classical) Gram-Schmidt for > orthogonalization in GMRES > Sundials suggested factor for tolerance scaling 1 > Sundials cumulative number of internal steps 28 > Sundials no. of calls to rhs function 44 > Sundials no. of calls to linear solver setup function 14 > Sundials no. of error test failures 2 > Sundials no. of nonlinear solver iterations 40 > Sundials no. 
of nonlinear convergence failure 0
> Sundials no. of linear iterations 51
> Sundials no. of linear convergence failures 0
> Sundials no. of preconditioner evaluations 1
> Sundials no. of preconditioner solves 88
> Sundials no. of Jacobian-vector product evaluations 51
> Sundials no. of rhs calls for finite diff. Jacobian-vector evals 51
>

I changed dt to 0.4 and modified the Monitor function, and now GL doesn't work.

PetscErrorCode Monitor(TS ts,PetscInt step,PetscReal time,Vec u, void *ctx)
{
  PetscScalar *y;
  PetscReal   dt;
  TSGetTimeStep(ts,&dt);
  VecGetArray(u,&y);
  PetscPrintf(PETSC_COMM_WORLD,"At t =%e y =%e %e %e, \n",time,y[0],y[1],y[2]);
  VecRestoreArray(u,&y);
  if (step>0) dt = time*9;
  TSSetTimeStep(ts,dt);
  return 0;
}

Why would PETSc be faster than CVODE? How fast will it be?

Xuan YU
xxy113 at psu.edu

From jed at 59A2.org  Wed Jun 23 12:17:21 2010
From: jed at 59A2.org (Jed Brown)
Date: Wed, 23 Jun 2010 19:17:21 +0200
Subject: Re: [petsc-users] help about my first petsc program
In-Reply-To: 
References: <1277248172l.811066l.0l@psu.edu> <87631au043.fsf@59A2.org> <87ocf1raj0.fsf@59A2.org> <91C98243-11EB-42B4-96AB-7AAEDA86CCEB@psu.edu> <87lja5r9fx.fsf@59A2.org>
Message-ID: <87fx0dr7mm.fsf@59A2.org>

On Wed, 23 Jun 2010 12:45:37 -0400, Xuan YU wrote:
> I changed dt to 0.4 and modified the Monitor function, and now GL doesn't work.

Choosing a smaller dt (only an initial value in this case) fixes that problem, but I still see some difficulties unless I give -ts_gl_max_order 2 (seems like I don't have enough stabilization for the higher-order error estimators). But feel free to keep using the simpler methods, I'm sure they will be more predictable.

> PetscErrorCode Monitor(TS ts,PetscInt step,PetscReal time,Vec u, void *ctx)
> {
>   PetscScalar *y;
>   PetscReal   dt;
>   TSGetTimeStep(ts,&dt);
>   VecGetArray(u,&y);
>   PetscPrintf(PETSC_COMM_WORLD,"At t =%e y =%e %e %e, \n",time,y[0],y[1],y[2]);
>   VecRestoreArray(u,&y);
>   if (step>0) dt = time*9;
>   TSSetTimeStep(ts,dt);
>   return 0;
> }

TSGL has an adaptive controller, based on a local error estimator. It surely won't work right if the monitor changes it (because it doesn't happen in the correct place; GL can't change the step size arbitrarily, it needs to know the size of the next step when it "completes" the current one). You should be able to register your own adaptive scheme with TSGLAdaptRegisterDynamic(); there is currently NONE, SIZE (only step size, not order), and BOTH (default). If you do this, and it works better than the builtin ones, let me know and I'll add it to the library.

> Why would PETSc be faster than CVODE? How fast will it be?

It might not be faster, but at least you can get back CVODE as a runtime option. And since PETSc offers a lot more preconditioning options, you can probably beat it that way. Whether PETSc-native integrators (e.g. TSTHETA, TSGL) are faster or not is very problem dependent.

Jed

From C.Klaij at marin.nl  Thu Jun 24 03:08:37 2010
From: C.Klaij at marin.nl (Klaij, Christiaan)
Date: Thu, 24 Jun 2010 08:08:37 +0000
Subject: [petsc-users] MatShell & PCShell
Message-ID: 

Thanks! I modified ex15f so that it uses a new MatShell AA with associated multiplication Ax. Only problem is that MatCreateShell expects the local matrix dimensions which is somewhat contrary to the spirit of ex15f, but at least I got it working for one proc.

Chris
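(For reference, a minimal sketch of the MatShell-plus-PCShell combination discussed in this thread. It assumes petsc-3.1-style callback signatures; MyMult, MyPCApply, userctx, nlocal, N, b and x are illustrative names rather than code from the original messages, and error checking is omitted.)

/* user-supplied action of the operator: y = A*x */
PetscErrorCode MyMult(Mat A,Vec x,Vec y)
{
  void *ctx;
  MatShellGetContext(A,&ctx);  /* recover whatever data the action needs */
  /* ... compute y from x using ctx ... */
  return 0;
}

/* user-supplied action of the preconditioner: y = P*x */
PetscErrorCode MyPCApply(PC pc,Vec x,Vec y)
{
  /* ... apply the approximate inverse ... */
  return 0;
}

/* setup: nlocal/N are the local/global sizes of the vectors involved */
Mat A;
KSP ksp;
PC  pc;
MatCreateShell(PETSC_COMM_WORLD,nlocal,nlocal,N,N,(void*)userctx,&A);
MatShellSetOperation(A,MATOP_MULT,(void(*)(void))MyMult);
KSPCreate(PETSC_COMM_WORLD,&ksp);
KSPSetOperators(ksp,A,A,DIFFERENT_NONZERO_PATTERN);
KSPGetPC(ksp,&pc);
PCSetType(pc,PCSHELL);
PCShellSetApply(pc,MyPCApply);
KSPSetFromOptions(ksp);
KSPSolve(ksp,b,x);

Giving MatCreateShell concrete local sizes is what lets PETSc check the shell against the vectors before MyMult is ever called, which is the compatibility point raised in the reply below.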
Date: Wed, 23 Jun 2010 16:46:30 +0200
From: Jed Brown
Subject: Re: [petsc-users] MatShell & PCShell
To: "Klaij, Christiaan" , "petsc-users at mcs.anl.gov"
Message-ID: <87zkylrem1.fsf at 59A2.org>
Content-Type: text/plain; charset=us-ascii

On Wed, 23 Jun 2010 14:32:53 +0000, "Klaij, Christiaan" wrote:
> Hi,
>
> For my linear problem Ax=b I have available the vector b, the action
> Ax of the matrix and the action Px of a given preconditioner. Is this
> enough to use the Krylov solvers in PETSc? So far I've looked at ex14f
> and ex15f which use either MatShell or PCShell but not both. I'm not
> sure whether it's at all possible, so I would greatly appreciate your
> advice before starting some trial and error.

You can certainly use both, and you pretty much have to unless you have an assembled matrix that is "similar" to your MatShell because, for obvious reasons, PETSc's preconditioners can't do much with your MatShell.

Jed

dr. ir. Christiaan Klaij
CFD Researcher
Research & Development
E mailto:C.Klaij at marin.nl
T +31 317 49 33 44M

MARIN
2, Haagsteeg, P.O. Box 28, 6700 AA Wageningen, The Netherlands
T +31 317 49 39 11, F +31 317 49 32 45, I www.marin.nl

-----------------------------
[Insert your disclaimer here]
-----------------------------

From jed at 59A2.org  Thu Jun 24 06:00:54 2010
From: jed at 59A2.org (Jed Brown)
Date: Thu, 24 Jun 2010 13:00:54 +0200
Subject: Re: [petsc-users] MatShell & PCShell
In-Reply-To: 
References: 
Message-ID: <87vd98pue1.fsf@59A2.org>

On Thu, 24 Jun 2010 08:08:37 +0000, "Klaij, Christiaan" wrote:
> Thanks! I modified ex15f so that it uses a new MatShell AA with
> associated multiplication Ax. Only problem is that MatCreateShell
> expects the local matrix dimensions which is somewhat contrary to the
> spirit of ex15f, but at least I got it working for one proc.

How is this contrary to the spirit? It needs to be determined for compatibility with the vector you will multiply against. Perhaps it should determine this automatically, but it's easy to provide, and allows PETSc to check for this compatibility before calling your function. If it was determined automatically, then you would have to check what PETSc decided so you can set up your internal data structures (which you usually need in your implementation of MatMult) appropriately. I think it's more common to create these structures first, then call MatCreateShell, in which case you know the local sizes.

Jed

From bsmith at mcs.anl.gov  Thu Jun 24 15:46:05 2010
From: bsmith at mcs.anl.gov (Barry Smith)
Date: Thu, 24 Jun 2010 15:46:05 -0500
Subject: [petsc-users] updating values in a DA Global array
In-Reply-To: 
References: <2581B7C1-2F65-4646-842B-5A4F0AB73D0D@mcs.anl.gov>
Message-ID: <2FFB08BF-C960-4884-ADA7-BF0E0706286B@mcs.anl.gov>

It turns out there was a bug in gfortran versions prior to 2.5 that made DAVecGetArrayF90() unworkable. After Satish told me this, ... I have implemented DAVecGetArrayF90() for 1, 2, and 3 dimensions with dof = 1 and dof > 1 in petsc-dev. The test example is src/dm/da/examples/tutorials/ex11f90.F Please report any problems with it to petsc-maint at mcs.anl.gov and if you use gfortran make sure it is version 2.5.

Note that if your dof passed to DACreate is greater than one then you need another dimension in the array pointer you declare. Also, in keeping with the Fortran tradition, indices for it start at 1 not zero. This can lead to much simpler structured grid Fortran codes.
Barry On Jun 22, 2010, at 6:53 PM, Matthew Knepley wrote: > On Tue, Jun 22, 2010 at 11:30 PM, Barry Smith wrote: > > Matt, > > Actually DAVecRestoreArrayF90() only works for 1 dimensional DAs. The F90 interface has to be written for 2 and 3d and the creation of Fortran 3d arrays (and 4d for dof > 1) stuff written. > > Ah, I see. Mark, if you can bear C for 1 function, it will work. The problem for the F90 is that we > are cutting out a section of a big array, and making it look multidimensional. It is possible with F90, > but a pain to get everything right in the array descriptor and no one has had to stamina to do it yet. > > Sorry about that, > > Matt > > Barry > > > On Jun 22, 2010, at 6:21 PM, Matthew Knepley wrote: > >> On Tue, Jun 22, 2010 at 12:59 PM, Mark Cheeseman wrote: >> Hi, >> >> I am trying to write a PETSc program in FORTRAN90 where I need to update a single value in a global distributed array. I know the global coordinates of the position that needs to be updated in the global array but I cannot get the mapping from the local vector correct. In this case, I am working on a domain with global dimensions [arraysize(1),arraysize(2),arraysize(3)] and I want to alter a single point in the global distributed array, uGLOBAL, at the global position [arraysize(1)/2-1,arraysize(2)-1,3]. I cannot seem to be able to do this... what am I doing wrong? >> >> ... >> DA da >> Vec uGLOBAL, uLOCAL, tmp >> PetscErrorCode ierr >> PetscScalar, pointer :: xx >> PetscInt rank, source_rank, i,j,k, row >> >> .... >> >> call MPI_Comm_rank( PETSC_COMM_WORLD, rank, ierr ) >> call DACreate3d( PETSC_COMM_WORLD, DA_NONPERIODIC, DA_STENCIL_BOX, & >> arraysize(1), arraysize(2), arraysize(3), PETSC_DECIDE, & >> PETSC_DECIDE, PETSC_DECIDE, 1, 5, PETSC_NULL_INTEGER, & >> PETSC_NULL_INTEGER, PETSC_NULL_INTEGER, da, ierr) >> call DACreateGlobalVector( da, pNOW, ierr ) >> call DAGetCorners( da, xs, ys, zs, xl, yl, zl, ierr ) >> >> call DAVecGetArrayF90() >> >> do i = xs,xs+xl-1 >> if ( i.eq.arraysize(1)/2-1 ) then >> do j = ys,ys+yl-1 >> if ( j.eq.arraysize(2)/2-1 ) then >> do k = zs,zs+zl-1 >> if ( k.eq.3 ) then >> >> array(k,j,i) = pressure >> >> endif >> enddo >> endif >> enddo >> endif >> enddo >> >> >> call > >> >> That should work. >> >> Matt >> >> Thank you, >> Mark >> >> -- >> Mark Patrick Cheeseman >> >> Computational Scientist >> KSL (KAUST Supercomputing Laboratory) >> Building 1, Office #126 >> King Abdullah University of Science & Technology >> Thuwal 23955-6900 >> Kingdom of Saudi Arabia >> >> EMAIL : mark.cheeseman at kaust.edu.sa >> PHONE : +966 (2) 808 0221 (office) >> +966 (54) 470 1082 (mobile) >> SKYPE : mark.patrick.cheeseman >> >> >> >> -- >> What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. >> -- Norbert Wiener > > > > > -- > What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. > -- Norbert Wiener -------------- next part -------------- An HTML attachment was scrubbed... 
URL: 

From jed at 59A2.org  Thu Jun 24 15:49:38 2010
From: jed at 59A2.org (Jed Brown)
Date: Thu, 24 Jun 2010 22:49:38 +0200
Subject: Re: [petsc-users] [petsc-dev] updating values in a DA Global array
In-Reply-To: <2FFB08BF-C960-4884-ADA7-BF0E0706286B@mcs.anl.gov>
References: <2581B7C1-2F65-4646-842B-5A4F0AB73D0D@mcs.anl.gov> <2FFB08BF-C960-4884-ADA7-BF0E0706286B@mcs.anl.gov>
Message-ID: <87zkyknokd.fsf@59A2.org>

On Thu, 24 Jun 2010 15:46:05 -0500, Barry Smith wrote:
> 
> It turns out there was a bug in gfortran versions prior to 2.5

Presumably you mean gfortran-4.5, gfortran did not exist prior to 4.0.

Jed

From bsmith at mcs.anl.gov  Thu Jun 24 15:51:09 2010
From: bsmith at mcs.anl.gov (Barry Smith)
Date: Thu, 24 Jun 2010 15:51:09 -0500
Subject: Re: [petsc-users] [petsc-dev] updating values in a DA Global array
In-Reply-To: <87zkyknokd.fsf@59A2.org>
References: <2581B7C1-2F65-4646-842B-5A4F0AB73D0D@mcs.anl.gov> <2FFB08BF-C960-4884-ADA7-BF0E0706286B@mcs.anl.gov> <87zkyknokd.fsf@59A2.org>
Message-ID: <48044840-247E-4961-8545-0FF8D11F845A@mcs.anl.gov>

Yes.

On Jun 24, 2010, at 3:49 PM, Jed Brown wrote:

> On Thu, 24 Jun 2010 15:46:05 -0500, Barry Smith wrote:
>> 
>> It turns out there was a bug in gfortran versions prior to 2.5
> 
> Presumably you mean gfortran-4.5, gfortran did not exist prior to 4.0.
> 
> Jed

From balay at mcs.anl.gov  Thu Jun 24 15:52:25 2010
From: balay at mcs.anl.gov (Satish Balay)
Date: Thu, 24 Jun 2010 15:52:25 -0500 (CDT)
Subject: Re: [petsc-users] [petsc-dev] updating values in a DA Global array
In-Reply-To: <48044840-247E-4961-8545-0FF8D11F845A@mcs.anl.gov>
References: <2581B7C1-2F65-4646-842B-5A4F0AB73D0D@mcs.anl.gov> <2FFB08BF-C960-4884-ADA7-BF0E0706286B@mcs.anl.gov> <87zkyknokd.fsf@59A2.org> <48044840-247E-4961-8545-0FF8D11F845A@mcs.anl.gov>
Message-ID: 

Actually gfortran-4.3+ should work [gfortran 4.2 is broken]

Satish

On Thu, 24 Jun 2010, Barry Smith wrote:

> 
> Yes.
> 
> On Jun 24, 2010, at 3:49 PM, Jed Brown wrote:
> 
> > On Thu, 24 Jun 2010 15:46:05 -0500, Barry Smith wrote:
> >> 
> >> It turns out there was a bug in gfortran versions prior to 2.5
> > 
> > Presumably you mean gfortran-4.5, gfortran did not exist prior to 4.0.
> > 
> > Jed
> 
> 

From lizs at mail.uc.edu  Thu Jun 24 18:25:15 2010
From: lizs at mail.uc.edu (Li, Zhisong (lizs))
Date: Thu, 24 Jun 2010 23:25:15 +0000
Subject: [petsc-users] Segmentation Violation, is this a bug?
Message-ID: <88D7E3BB7E1960428303E760100374510FA31B7B@BL2PRD0103MB055.prod.exchangelabs.com>

Hi, Petsc Team,

Recently I encountered a weird segmentation violation problem. I wrote a simple test code to describe it. Here the line " pp = sk[j+1][i].p; " causes the segmentation violation when I try to invoke values of ghost points in the j direction. If I change it into "pp = sk[j][i+1].p;", invoking ghost point values in the i direction, then it works smoothly. I checked previous archives about segmentation violations, but cannot find any clue for this. Can you point out what is wrong here, or is it a bug?

Thank you.
Zhisong Li

static char help[] = "test";
#include "petscda.h"

typedef struct { PetscScalar p; } Field;

int main(int argc, char **args)
{
  Vec         xx;
  PetscInt    dof = 1, m = 24, n = 32, i, j, xs, ys, xm, ym, yints, yinte, xints, xinte;
  PetscScalar pp;
  DA          da;
  Field       **sk;

  PetscInitialize(&argc, &args, (char *)0, help);
  DACreate2d(PETSC_COMM_WORLD, DA_NONPERIODIC, DA_STENCIL_STAR, m, n, PETSC_DECIDE, PETSC_DECIDE, dof, 1, PETSC_NULL, PETSC_NULL, &da);
  DACreateGlobalVector(da, &xx);
  DAGetCorners(da, &xs, &ys, 0, &xm, &ym, 0);
  xints = xs; xinte = xs+xm; yints = ys; yinte = ys+ym;

  VecSet(xx,1.0);

  DAVecGetArray(da, xx, &sk);
  if (xints == 0){ xints = xints + 1; }
  if (yints == 0){ yints = yints + 1; }
  if (xinte == m){ xinte = xinte - 1; }
  if (yinte == n){ yinte = yinte - 1; }

  for (j=yints; j<yinte; j++){
    for (i=xints; i<xinte; i++){ pp = sk[j+1][i].p; }
  }

  DAVecRestoreArray(da, xx, &sk);
  VecDestroy(xx);
  DADestroy(da);
  PetscFinalize();
  PetscFunctionReturn(0);
}

From knepley at gmail.com  Tue Jun 24 18:39:21 2010
From: knepley at gmail.com (Matthew Knepley)
Date: Fri, 25 Jun 2010 07:39:21 +0800
Subject: Re: [petsc-users] Segmentation Violation, is this a bug?
In-Reply-To: <88D7E3BB7E1960428303E760100374510FA31B7B@BL2PRD0103MB055.prod.exchangelabs.com>
References: <88D7E3BB7E1960428303E760100374510FA31B7B@BL2PRD0103MB055.prod.exchangelabs.com>
Message-ID: 

On Fri, Jun 25, 2010 at 7:25 AM, Li, Zhisong (lizs) wrote:

> Hi, Petsc Team,
>
> Recently I encountered a weird segmentation violation problem. I wrote a
> simple test code to describe it. Here the line " pp = sk[j+1][i].p; "
> causes the segmentation violation when I try to invoke values of ghost
> points in the j direction. If I change it into "pp = sk[j][i+1].p;",
> invoking ghost point values in the i direction, then it works smoothly. I
> checked previous archives about segmentation violations, but cannot find
> any clue for this. Can you point out what is wrong here, or is it a bug?

When you call DAVecGetArray() on a GLOBAL vector, there are no ghost values. You need a LOCAL vector. The i+1 is still wrong, but does not SEGV since you look into memory you own.

Matt

> Thank you.
>
> Zhisong Li
>
> static char help[] = "test";
> #include "petscda.h"
>
> typedef struct { PetscScalar p; } Field;
>
> int main(int argc, char **args)
> {
>   Vec         xx;
>   PetscInt    dof = 1, m = 24, n = 32, i, j, xs, ys, xm, ym, yints, yinte, xints, xinte;
>   PetscScalar pp;
>   DA          da;
>   Field       **sk;
>
>   PetscInitialize(&argc, &args, (char *)0, help);
>   DACreate2d(PETSC_COMM_WORLD, DA_NONPERIODIC, DA_STENCIL_STAR, m, n, PETSC_DECIDE, PETSC_DECIDE, dof, 1, PETSC_NULL, PETSC_NULL, &da);
>   DACreateGlobalVector(da, &xx);
>   DAGetCorners(da, &xs, &ys, 0, &xm, &ym, 0);
>   xints = xs; xinte = xs+xm; yints = ys; yinte = ys+ym;
>
>   VecSet(xx,1.0);
>
>   DAVecGetArray(da, xx, &sk);
>   if (xints == 0){ xints = xints + 1; }
>   if (yints == 0){ yints = yints + 1; }
>   if (xinte == m){ xinte = xinte - 1; }
>   if (yinte == n){ yinte = yinte - 1; }
>
>   for (j=yints; j<yinte; j++){
>     for (i=xints; i<xinte; i++){ pp = sk[j+1][i].p; }
>   }
>
>   DAVecRestoreArray(da, xx, &sk);
>   VecDestroy(xx);
>   DADestroy(da);
>   PetscFinalize();
>   PetscFunctionReturn(0);
> }

-- 
What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead.
-- Norbert Wiener

-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From C.Klaij at marin.nl  Fri Jun 25 01:39:52 2010
From: C.Klaij at marin.nl (Klaij, Christiaan)
Date: Fri, 25 Jun 2010 06:39:52 +0000
Subject: [petsc-users] MatShell & PCShell
Message-ID: 

I thought the spirit of ex15f was to only specify global dimensions and let PETSc take care of the partitioning at runtime.
So I was expecting call MatCreateShell(PETSC_COMM_WORLD,PETSC_DECIDE,PETSC_DECIDE, m*n,m*n,PETSC_NULL_INTEGER,AA,ierr) which is illegal. But thanks to your explanation I understand why. Of course it's much beter to check what PETSc decided and use it for my own MatMult. Thanks for clearing this up! Chris Date: Thu, 24 Jun 2010 13:00:54 +0200 From: Jed Brown Subject: Re: [petsc-users] MatShell & PCShell To: "Klaij\, Christiaan" , "petsc-users\@mcs.anl.gov" Message-ID: <87vd98pue1.fsf at 59A2.org> Content-Type: text/plain; charset=us-ascii On Thu, 24 Jun 2010 08:08:37 +0000, "Klaij, Christiaan" wrote: > Thanks! I modified ex15f so that it uses a new MatShell AA with > associated multiplication Ax. Only problem is that MatCreateShell > expects the local matrix dimensions which is somewhat contrary to the > spirit of ex15f, but at least I got it working for one proc. How is this contrary to the spirit? It needs to be determined for compatibility with the vector you will multiply against. Perhaps it should determine this automatically, but it's easy to provide, and allows PETSc to check for this compatibility before calling your function. If it was determined automatically, then you would have to check what PETSc decided so you can set up your internal data structures (which you usually need in your implementation of MatMult) appropriately. I think it's more common to create these structures first, then call MatCreateShell, in which case you know the local sizes. Jed dr. ir.ChristiaanKlaij CFD Researcher Research & Development E mailto:C.Klaij at marin.nl T +31 317 49 33 44M MARIN 2, Haagsteeg, P.O. Box 28, 6700 AA Wageningen, The Netherlands T +31 317 49 39 11, F +31 317 49 32 45, I www.marin.nl ----------------------------- [Insert your disclaimer here] ----------------------------- From bsmith at mcs.anl.gov Fri Jun 25 09:34:30 2010 From: bsmith at mcs.anl.gov (Barry Smith) Date: Fri, 25 Jun 2010 09:34:30 -0500 Subject: [petsc-users] MatShell & PCShell In-Reply-To: References: Message-ID: The check is simply if (A->rmap->n == PETSC_DECIDE || A->cmap->n == PETSC_DECIDE) SETERRQ(PETSC_COMM_SELF,PETSC_ERR_ARG_WRONG,"Must give local row and column count for matrix"); it is basically a "make sure the user doesn't shoot himself in the foot" check. I can see arguments both ways but since PETSc is suppose to be as flexible as possible and give as much choice to the user as possible I am going to take that check out of petsc-dev (you can just remove this one line from your copy of src/mat/impls/shell/shell.c ) and give the choice to the user. Barry On Jun 25, 2010, at 1:39 AM, Klaij, Christiaan wrote: > I thought the spirit of ex15f was to only specify global dimensions and let PETSc take care of the partitioning at runtime. So I was expecting > > call MatCreateShell(PETSC_COMM_WORLD,PETSC_DECIDE,PETSC_DECIDE, m*n,m*n,PETSC_NULL_INTEGER,AA,ierr) > > which is illegal. But thanks to your explanation I understand why. Of course it's much beter to check what PETSc decided and use it for my own MatMult. Thanks for clearing this up! > > Chris > > > Date: Thu, 24 Jun 2010 13:00:54 +0200 > From: Jed Brown > Subject: Re: [petsc-users] MatShell & PCShell > To: "Klaij\, Christiaan" , > "petsc-users\@mcs.anl.gov" > Message-ID: <87vd98pue1.fsf at 59A2.org> > Content-Type: text/plain; charset=us-ascii > > On Thu, 24 Jun 2010 08:08:37 +0000, "Klaij, Christiaan" wrote: >> Thanks! I modified ex15f so that it uses a new MatShell AA with >> associated multiplication Ax. 
From lizs at mail.uc.edu Mon Jun 28 12:14:02 2010
From: lizs at mail.uc.edu (Li, Zhisong (lizs))
Date: Mon, 28 Jun 2010 17:14:02 +0000
Subject: [petsc-users] How to use the matrix format of block compressed row (MATBAIJ)
Message-ID: <88D7E3BB7E1960428303E760100374510FA37F48@BL2PRD0103MB055.prod.exchangelabs.com>

Hi, Petsc Team,

The PETSc user manual says that the matrix format of block compressed row
(BAIJ) will be more efficient than the default AIJ format for problems with
multiple DOF. But the description of using BAIJ matrices is very limited in
the manual. When trying to replace AIJ with BAIJ in the code of an iterative
solver, I often meet the errors "No support for this operation for this
object type!" and "Matrix format mpibaij does not have a built-in PETSc
direct solver!". I think something must be missing here when we change the
format. And how do we preallocate memory for BAIJ matrices when we have
multiple DOF?

Where can we find detailed statements or example codes of using BAIJ with
preconditioners in iterative computation?

BTW, when we use DA to organize the data for parallel computing, is it more
natural to use DAGetMatrix() to create the matrix than using
MatCreateMPIAIJ()?

Thank you.

Zhisong Li
From jed at 59A2.org Mon Jun 28 12:42:19 2010
From: jed at 59A2.org (Jed Brown)
Date: Mon, 28 Jun 2010 11:42:19 -0600
Subject: [petsc-users] How to use the matrix format of block compressed row (MATBAIJ)
In-Reply-To: <88D7E3BB7E1960428303E760100374510FA37F48@BL2PRD0103MB055.prod.exchangelabs.com>
References: <88D7E3BB7E1960428303E760100374510FA37F48@BL2PRD0103MB055.prod.exchangelabs.com>
Message-ID: <87lj9zt5ok.fsf@59A2.org>

On Mon, 28 Jun 2010 17:14:02 +0000, "Li, Zhisong (lizs)" wrote:
> When trying to replace AIJ with BAIJ in the code of an iterative solver, I
> often meet the errors "No support for this operation for this object
> type!" and "Matrix format mpibaij does not have a built-in PETSc direct
> solver!". I think something must be missing here when we change the
> format.

This is the fault of the "MPI", not the "B": PETSc does not have a native
parallel direct solver, so you can use PCREDUNDANT or external packages
(like MUMPS or SuperLU_Dist). Note that most external packages don't
support block formats at all, so you should just use AIJ (or perhaps
SBAIJ(1) for symmetric problems) if you need to use parallel direct
solvers.

Note that you can use MatSetBlockSize to set a block size for AIJ and use
MatSetValuesBlocked; then you can change the matrix format at runtime
(depending on which solver you are trying). DAGetMatrix does this; you
should definitely use it if you are using a DA in a conventional way.

Jed
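A short sketch of the DAGetMatrix route Jed recommends, assuming a DA named
da created with dof 2 (the block indices and values are illustrative, not
from the thread):

  Mat         A;
  PetscInt    brow = 0, bcol = 0;        /* block (not point) indices */
  PetscScalar v[4] = {1.0,0.0,0.0,1.0};  /* one dof*dof block, dof = 2 */

  ierr = DAGetMatrix(da,MATBAIJ,&A);CHKERRQ(ierr);  /* preallocated for the DA stencil */
  ierr = MatSetValuesBlocked(A,1,&brow,1,&bcol,v,INSERT_VALUES);CHKERRQ(ierr);
  ierr = MatAssemblyBegin(A,MAT_FINAL_ASSEMBLY);CHKERRQ(ierr);
  ierr = MatAssemblyEnd(A,MAT_FINAL_ASSEMBLY);CHKERRQ(ierr);

Passing MATAIJ instead of MATBAIJ keeps the same assembly code working when
an external direct solver requires the AIJ format.
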
From xxy113 at psu.edu Mon Jun 28 13:21:57 2010
From: xxy113 at psu.edu (Xuan YU)
Date: Mon, 28 Jun 2010 14:21:57 -0400
Subject: [petsc-users] help about my first petsc program
In-Reply-To: <87fx0dr7mm.fsf@59A2.org>
References: <1277248172l.811066l.0l@psu.edu> <87631au043.fsf@59A2.org> <87ocf1raj0.fsf@59A2.org> <91C98243-11EB-42B4-96AB-7AAEDA86CCEB@psu.edu> <87lja5r9fx.fsf@59A2.org> <87fx0dr7mm.fsf@59A2.org>
Message-ID: <726B5C3B-5FBB-44DC-9A09-1206454FE767@psu.edu>

On Jun 23, 2010, at 1:17 PM, Jed Brown wrote:

> On Wed, 23 Jun 2010 12:45:37 -0400, Xuan YU wrote:
>> I changed dt=0.4 and the Monitor function, so GL doesn't work.
>
> Choosing a smaller dt (only an initial value in this case) fixes that
> problem, but I still see some difficulties unless I give
> -ts_gl_max_order 2 (seems like I don't have enough stabilization for the
> higher-order error estimators). But feel free to keep using the simpler
> methods, I'm sure they will be more predictable.
>
>> PetscErrorCode Monitor(TS ts,PetscInt step,PetscReal time,Vec u, void *ctx)
>> {
>>   PetscScalar *y;
>>   PetscReal dt;
>>   TSGetTimeStep(ts,&dt);
>>   VecGetArray(u,&y);
>>   PetscPrintf(PETSC_COMM_WORLD,"At t =%e y =%e %e %e,\n",time,y[0],y[1],y[2]);
>>   VecRestoreArray(u,&y);
>>   if(step>0)dt=time*9;
>>   TSSetTimeStep(ts,dt);
>>   return 0;
>> }
>
> TSGL has an adaptive controller, based on a local error estimator. It
> surely won't work right if the monitor changes it (because it doesn't
> happen in the correct place; GL can't change the step size arbitrarily,
> it needs to know the size of the next step when it "completes" the
> current one). You should be able to TSGLAdaptRegisterDynamic() your own
> adaptive scheme; there is currently NONE, SIZE (only step size, not
> order), and BOTH (default). If you do this, and it works better than
> the builtin ones, let me know and I'll add it to the library.
>
>> Why Petsc is faster than cvode? How fast it will be?
>
> It might not be faster, but at least you can get back CVODE as a runtime
> option. And since PETSc offers a lot more preconditioning options, you
> can probably beat it that way. Whether PETSc-native integrators
> (e.g. TSTHETA, TSGL) are faster or not is very problem dependent.
>
> Jed

I have successfully solved my problem with TS, but it is slower than CVODE.
I don't know how to choose a preconditioner. I only found a preconditioner
option in SNES. Does that mean I should use SNES to solve my problem?

Many thanks!

Xuan YU
xxy113 at psu.edu

From bsmith at mcs.anl.gov Mon Jun 28 13:44:13 2010
From: bsmith at mcs.anl.gov (Barry Smith)
Date: Mon, 28 Jun 2010 13:44:13 -0500
Subject: [petsc-users] help about my first petsc program
In-Reply-To: <726B5C3B-5FBB-44DC-9A09-1206454FE767@psu.edu>
References: <1277248172l.811066l.0l@psu.edu> <87631au043.fsf@59A2.org> <87ocf1raj0.fsf@59A2.org> <91C98243-11EB-42B4-96AB-7AAEDA86CCEB@psu.edu> <87lja5r9fx.fsf@59A2.org> <87fx0dr7mm.fsf@59A2.org> <726B5C3B-5FBB-44DC-9A09-1206454FE767@psu.edu>
Message-ID: <0BADBC38-00E7-4A34-8085-830703F4B3F9@mcs.anl.gov>

On Jun 28, 2010, at 1:21 PM, Xuan YU wrote:

> I have successfully solved my problem with TS, but it is slower than
> CVODE. I don't know how to choose a preconditioner.

   -pc_type asm    or whatever preconditioner you want to use. Run with
-help to see the options

> I only found a preconditioner option in SNES. Does that mean I should use
> SNES to solve my problem?

   NO

   Barry

> Many thanks!
>
> Xuan YU
> xxy113 at psu.edu
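For completeness, the solver objects Barry refers to can also be reached
programmatically; a hedged sketch, equivalent in effect to -pc_type asm for
a TS_NONLINEAR problem like this one:

  SNES snes;
  KSP  ksp;
  PC   pc;
  ierr = TSGetSNES(ts,&snes);CHKERRQ(ierr);
  ierr = SNESGetKSP(snes,&ksp);CHKERRQ(ierr);
  ierr = KSPGetPC(ksp,&pc);CHKERRQ(ierr);
  ierr = PCSetType(pc,PCASM);CHKERRQ(ierr);  /* same effect as -pc_type asm */

The command-line route Barry suggests is usually preferable, since it
leaves the choice open at runtime.
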
From xxy113 at psu.edu Mon Jun 28 14:24:32 2010
From: xxy113 at psu.edu (Xuan YU)
Date: Mon, 28 Jun 2010 15:24:32 -0400
Subject: [petsc-users] run option
Message-ID: <957175B2-EC7D-4BEF-B70A-6647FB46BE00@psu.edu>

Hi,

I run my code by ./xxx -pc_type icc

But it says:

WARNING! There are options you set that were not used!
WARNING! could be spelling mistake, etc!
Option left: name:-pc_type value: icc

Could you please tell me what happened?

Xuan YU
xxy113 at psu.edu

From bsmith at mcs.anl.gov Mon Jun 28 14:47:04 2010
From: bsmith at mcs.anl.gov (Barry Smith)
Date: Mon, 28 Jun 2010 14:47:04 -0500
Subject: [petsc-users] run option
In-Reply-To: <957175B2-EC7D-4BEF-B70A-6647FB46BE00@psu.edu>
References: <957175B2-EC7D-4BEF-B70A-6647FB46BE00@psu.edu>
Message-ID: 

   Do you have a TSSetFromOptions() or a SNESSetFromOptions() or a
KSPSetFromOptions() or a PCSetFromOptions()? If not then the option never
gets processed and used.

   Barry

On Jun 28, 2010, at 2:24 PM, Xuan YU wrote:

> Hi,
>
> I run my code by ./xxx -pc_type icc
>
> But it says:
>
> WARNING! There are options you set that were not used!
> WARNING! could be spelling mistake, etc!
> Option left: name:-pc_type value: icc
>
> Could you please tell me what happened?
>
> Xuan YU
> xxy113 at psu.edu

From xxy113 at psu.edu Mon Jun 28 15:13:58 2010
From: xxy113 at psu.edu (Xuan YU)
Date: Mon, 28 Jun 2010 16:13:58 -0400
Subject: [petsc-users] run option
In-Reply-To: 
References: <957175B2-EC7D-4BEF-B70A-6647FB46BE00@psu.edu>
Message-ID: <0287016F-90FA-4753-8E7F-144AA0B1697E@psu.edu>

On Jun 28, 2010, at 3:47 PM, Barry Smith wrote:

> Do you have a TSSetFromOptions() or a SNESSetFromOptions() or a
> KSPSetFromOptions() or a PCSetFromOptions()? If not then the option
> never gets processed and used.

I only have ierr = TSSetFromOptions(ts);CHKERRQ(ierr).

> Barry

From bsmith at mcs.anl.gov Mon Jun 28 15:17:12 2010
From: bsmith at mcs.anl.gov (Barry Smith)
Date: Mon, 28 Jun 2010 15:17:12 -0500
Subject: [petsc-users] run option
In-Reply-To: <0287016F-90FA-4753-8E7F-144AA0B1697E@psu.edu>
References: <957175B2-EC7D-4BEF-B70A-6647FB46BE00@psu.edu> <0287016F-90FA-4753-8E7F-144AA0B1697E@psu.edu>
Message-ID: <6E307FFD-0115-47DB-973B-B97FBB95F2B4@mcs.anl.gov>

On Jun 28, 2010, at 3:13 PM, Xuan YU wrote:

> I only have ierr = TSSetFromOptions(ts);CHKERRQ(ierr).

   That is fine, that calls PCSetFromOptions() and should process your
option. Perhaps you set a prefix for the TS that you need in setting the
preconditioner type.

   Barry
From xxy113 at psu.edu Mon Jun 28 15:27:36 2010
From: xxy113 at psu.edu (Xuan YU)
Date: Mon, 28 Jun 2010 16:27:36 -0400
Subject: [petsc-users] run option
In-Reply-To: <6E307FFD-0115-47DB-973B-B97FBB95F2B4@mcs.anl.gov>
References: <957175B2-EC7D-4BEF-B70A-6647FB46BE00@psu.edu> <0287016F-90FA-4753-8E7F-144AA0B1697E@psu.edu> <6E307FFD-0115-47DB-973B-B97FBB95F2B4@mcs.anl.gov>
Message-ID: 

On Jun 28, 2010, at 4:17 PM, Barry Smith wrote:

> That is fine, that calls PCSetFromOptions() and should process your
> option. Perhaps you set a prefix for the TS that you need in setting the
> preconditioner type.
This is the petsc code:

ierr = TSCreate(PETSC_COMM_WORLD,&ts);CHKERRQ(ierr);
ierr = TSSetProblemType(ts,TS_NONLINEAR);CHKERRQ(ierr);
ierr = TSMonitorSet(ts,Monitor,&appctx,PETSC_NULL);CHKERRQ(ierr);
ierr = TSSetSolution(ts,CV_Y);CHKERRQ(ierr);
ierr = TSSetRHSFunction(ts,f,&appctx);CHKERRQ(ierr);
ierr = TSSetInitialTimeStep(ts,0,1.0);CHKERRQ(ierr);
ierr = TSSetDuration(ts,cData.EndTime,cData.EndTime);CHKERRQ(ierr);
ierr = TSSetFromOptions(ts);CHKERRQ(ierr);
ierr = TSSetUp(ts);CHKERRQ(ierr);
ierr = TSStep(ts,&its,&ftime);CHKERRQ(ierr);
ierr = VecDestroy(CV_Y);CHKERRQ(ierr);
ierr = TSDestroy(ts);CHKERRQ(ierr);
ierr = PetscFinalize();CHKERRQ(ierr);

Xuan YU
xxy113 at psu.edu

From bsmith at mcs.anl.gov Mon Jun 28 18:13:20 2010
From: bsmith at mcs.anl.gov (Barry Smith)
Date: Mon, 28 Jun 2010 18:13:20 -0500
Subject: [petsc-users] run option
In-Reply-To: 
References: <957175B2-EC7D-4BEF-B70A-6647FB46BE00@psu.edu> <0287016F-90FA-4753-8E7F-144AA0B1697E@psu.edu> <6E307FFD-0115-47DB-973B-B97FBB95F2B4@mcs.anl.gov>
Message-ID: 

   Run with -ts_view; it is likely running with the default explicit method
Euler which requires no solver and hence no preconditioner. You need to use
an implicit method if you want to use it with a preconditioner.

   Barry

On Jun 28, 2010, at 3:27 PM, Xuan YU wrote:

> This is the petsc code:
>
> ierr = TSCreate(PETSC_COMM_WORLD,&ts);CHKERRQ(ierr);
> ierr = TSSetProblemType(ts,TS_NONLINEAR);CHKERRQ(ierr);
> ierr = TSMonitorSet(ts,Monitor,&appctx,PETSC_NULL);CHKERRQ(ierr);
> ierr = TSSetSolution(ts,CV_Y);CHKERRQ(ierr);
> ierr = TSSetRHSFunction(ts,f,&appctx);CHKERRQ(ierr);
> ierr = TSSetInitialTimeStep(ts,0,1.0);CHKERRQ(ierr);
> ierr = TSSetDuration(ts,cData.EndTime,cData.EndTime);CHKERRQ(ierr);
> ierr = TSSetFromOptions(ts);CHKERRQ(ierr);
> ierr = TSSetUp(ts);CHKERRQ(ierr);
> ierr = TSStep(ts,&its,&ftime);CHKERRQ(ierr);
> ierr = VecDestroy(CV_Y);CHKERRQ(ierr);
> ierr = TSDestroy(ts);CHKERRQ(ierr);
> ierr = PetscFinalize();CHKERRQ(ierr);
>
> Xuan YU
> xxy113 at psu.edu
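A hedged sketch of the change Barry's -ts_view advice points to: pick an
implicit integrator so a KSP/PC actually exists for -pc_type icc to
configure (TSBEULER is one choice among several):

  ierr = TSSetType(ts,TSBEULER);CHKERRQ(ierr); /* implicit: creates a SNES/KSP/PC */
  ierr = TSSetFromOptions(ts);CHKERRQ(ierr);   /* -ts_type and -pc_type can still override */

Equivalently, leave the code alone and run with
./xxx -ts_type beuler -pc_type icc -ts_view.
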
From luke.bloy at gmail.com Tue Jun 29 08:12:56 2010
From: luke.bloy at gmail.com (Luke Bloy)
Date: Tue, 29 Jun 2010 09:12:56 -0400
Subject: [petsc-users] ublas sparse matrix bindings?
Message-ID: <4C29F158.6030008@gmail.com>

Hi,

In one of my projects I use the ublas compressed row-major format for
storing sparse matrices. One of the reasons I chose this format was
interoperability with other packages. I'd like to use some PETSc and SLEPc
functionality in my project but I haven't found any bindings between the
two formats.

It seems like MatCreateSeqAIJWithArrays is the optimal way to link the two
types of objects, but I thought I would see if anyone has done this and
could point me in the best direction.

Thanks,
Luke

From knepley at gmail.com Tue Jun 29 08:46:20 2010
From: knepley at gmail.com (Matthew Knepley)
Date: Tue, 29 Jun 2010 08:46:20 -0500
Subject: [petsc-users] ublas sparse matrix bindings?
In-Reply-To: <4C29F158.6030008@gmail.com>
References: <4C29F158.6030008@gmail.com>
Message-ID: 

On Tue, Jun 29, 2010 at 8:12 AM, Luke Bloy wrote:

> It seems like MatCreateSeqAIJWithArrays is the optimal way to link the two
> types of objects, but I thought I would see if anyone has done this and
> could point me in the best direction.

I think that sounds right. As long as you can get the offsets, cols, and
values arrays, it is easy.

   Matt

-- 
What most experimenters take for granted before they begin their experiments
is infinitely more interesting than any results to which their experiments
lead.
-- Norbert Wiener

From balay at mcs.anl.gov Tue Jun 29 08:52:32 2010
From: balay at mcs.anl.gov (Satish Balay)
Date: Tue, 29 Jun 2010 08:52:32 -0500 (CDT)
Subject: [petsc-users] ublas sparse matrix bindings?
In-Reply-To: <4C29F158.6030008@gmail.com>
References: <4C29F158.6030008@gmail.com>
Message-ID: 

On Tue, 29 Jun 2010, Luke Bloy wrote:

> In one of my projects I use the ublas compressed row-major format for
> storing sparse matrices. I'd like to use some PETSc and SLEPc
> functionality in my project but I haven't found any bindings between the
> two formats.

The format expected by MatCreateSeqAIJWithArrays() is listed in its man page.

http://www.mcs.anl.gov/petsc/petsc-as/snapshots/petsc-current/docs/manualpages/Mat/MatCreateSeqAIJWithArrays.html

I guess you can use this routine if the ublas sparse format is the same as
this.

One alternative is to convert these matrices into PETSc binary format. Then
you can load them up into PETSc code, even in parallel - say with
src/ksp/ksp/examples/tutorials/ex10.c

For converting to PETSc binary format you can follow the example
src/mat/examples/tests/ex50.c.

Satish
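For reference, a minimal sketch of the binary-format route Satish mentions,
mirroring what ex50.c does (the filename is illustrative):

  Mat         A;       /* an assembled matrix to save */
  PetscViewer viewer;
  ierr = PetscViewerBinaryOpen(PETSC_COMM_WORLD,"matrix.dat",FILE_MODE_WRITE,&viewer);CHKERRQ(ierr);
  ierr = MatView(A,viewer);CHKERRQ(ierr);
  ierr = PetscViewerDestroy(viewer);CHKERRQ(ierr);

Reading it back with the PETSc-3.0-era interface used throughout this
thread would then be MatLoad(viewer,MATMPIAIJ,&B) after opening the file
with FILE_MODE_READ.
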
From luke.bloy at gmail.com Tue Jun 29 09:24:42 2010
From: luke.bloy at gmail.com (Luke Bloy)
Date: Tue, 29 Jun 2010 10:24:42 -0400
Subject: [petsc-users] ublas sparse matrix bindings?
In-Reply-To: 
References: <4C29F158.6030008@gmail.com>
Message-ID: <4C2A022A.7060409@gmail.com>

Thanks for the quick responses.

My matrices are fairly large, 80,000 x 80,000, and not too sparse, so I'd
like to avoid any serialization if possible.

I've worked out how to use MatCreateSeqAIJWithArrays so long as I control
the types used in my ublas matrix, which is something I'd like to avoid
doing if possible.

Unfortunately I'm not sure how to control the types PetscScalar and
PetscInt. I've looked through
http://www.mcs.anl.gov/petsc/petsc-as/snapshots/petsc-3.0.0/include/petsc.h.html#PetscScalar
and
http://www.mcs.anl.gov/petsc/petsc-as/snapshots/petsc-3.0.0/docs/manualpages/Sys/PetscScalar.html
which seem to suggest that I can use command-line flags (--with-scalar-type
etc.) to control these typedefs. Is it possible to control these in the
code directly, and if so, how?

Thanks,
Luke

On 06/29/2010 09:52 AM, Satish Balay wrote:
> One alternative is to convert these matrices into PETSc binary format.
>
> For converting to PETSc binary format you can follow the example
> src/mat/examples/tests/ex50.c.
>
> Satish

From balay at mcs.anl.gov Tue Jun 29 09:31:35 2010
From: balay at mcs.anl.gov (Satish Balay)
Date: Tue, 29 Jun 2010 09:31:35 -0500 (CDT)
Subject: [petsc-users] ublas sparse matrix bindings?
In-Reply-To: <4C2A022A.7060409@gmail.com>
References: <4C29F158.6030008@gmail.com> <4C2A022A.7060409@gmail.com>
Message-ID: 

On Tue, 29 Jun 2010, Luke Bloy wrote:

> Unfortunately I'm not sure how to control the types PetscScalar and
> PetscInt. Is it possible to control these in the code directly, and if
> so, how?

They are configure/compile-time constants - one can't have 2 types in the
same code. By default PetscScalar is 'double' and PetscInt is 'int'.

Satish

From u.tabak at tudelft.nl Tue Jun 29 09:44:52 2010
From: u.tabak at tudelft.nl (Umut Tabak)
Date: Tue, 29 Jun 2010 16:44:52 +0200
Subject: [petsc-users] ublas sparse matrix bindings?
In-Reply-To: 
References: <4C29F158.6030008@gmail.com> <4C2A022A.7060409@gmail.com>
Message-ID: <4C2A06E4.2040501@tudelft.nl>

I did something like this some time ago, where the matrices were in CSR
format ("compressed matrix", I guess, in ublas). That was not that
difficult; I did it with MatSetValues. If I can find some code I will post
it, maybe that could help.

Best,
Umut

From luke.bloy at gmail.com Tue Jun 29 10:03:43 2010
From: luke.bloy at gmail.com (Luke Bloy)
Date: Tue, 29 Jun 2010 11:03:43 -0400
Subject: [petsc-users] ublas sparse matrix bindings?
In-Reply-To: <4C2A06E4.2040501@tudelft.nl>
References: <4C29F158.6030008@gmail.com> <4C2A022A.7060409@gmail.com> <4C2A06E4.2040501@tudelft.nl>
Message-ID: <4C2A0B4F.6040109@gmail.com>

Umut,

Thanks for the offer. I have the basics working with
MatCreateSeqAIJWithArrays; in case you need it, you can use something like
this:

using namespace boost::numeric::ublas;
typedef unbounded_array<PetscInt> IndexArrayType;
typedef unbounded_array<PetscScalar> ValueArrayType;
typedef compressed_matrix<PetscScalar, row_major, 0, IndexArrayType, ValueArrayType> MatrixType;

MatrixType L(10,10);
// Fill L
ierr = MatCreateSeqAIJWithArrays(PETSC_COMM_WORLD, 10, 10, L.index1_data().begin(),
                                 L.index2_data().begin(), L.value_data().begin(), &A);

This works so long as you use PetscInt and PetscScalar as the template
parameters of MatrixType.
-Luke

On 06/29/2010 10:44 AM, Umut Tabak wrote:
> I did something like this some time ago, where the matrices were in CSR
> format ("compressed matrix", I guess, in ublas). That was not that
> difficult; I did it with MatSetValues. If I can find some code I will
> post it, maybe that could help.
> Best,
> Umut

From u.tabak at tudelft.nl Tue Jun 29 10:11:20 2010
From: u.tabak at tudelft.nl (Umut Tabak)
Date: Tue, 29 Jun 2010 17:11:20 +0200
Subject: [petsc-users] ublas sparse matrix bindings?
In-Reply-To: <4C2A0B4F.6040109@gmail.com>
References: <4C29F158.6030008@gmail.com> <4C2A022A.7060409@gmail.com> <4C2A06E4.2040501@tudelft.nl> <4C2A0B4F.6040109@gmail.com>
Message-ID: <4C2A0D18.9070506@tudelft.nl>

This might be a bit off the topic; however, I had great difficulties when
assigning some ranges to PETSc matrices through the ublas interface. I used
ublas for interfacing to PETSc binary matrices, and on the level of
interfacing I can tell that the combination of ublas, PETSc, and PetscExt
for block operations seems to be pretty fast... Thanks to Dave May for
PetscExt ;)

From luke.bloy at gmail.com Tue Jun 29 10:16:17 2010
From: luke.bloy at gmail.com (Luke Bloy)
Date: Tue, 29 Jun 2010 11:16:17 -0400
Subject: [petsc-users] ublas sparse matrix bindings?
In-Reply-To: 
References: <4C29F158.6030008@gmail.com> <4C2A022A.7060409@gmail.com>
Message-ID: <4C2A0E41.3040205@gmail.com>

Thanks for the response. That's unfortunate, as I use many different types
of matrices that I would like to use with PETSc.

I'm not much of a C++ whiz, but I'm curious if something like an adaptor
would be possible that would make a (float *) behave like a (PetscScalar *)
as far as PETSc was concerned? Thoughts?

-Luke

On 06/29/2010 10:31 AM, Satish Balay wrote:
> They are configure/compile-time constants - one can't have 2 types in the
> same code. By default PetscScalar is 'double' and PetscInt is 'int'.
>
> Satish
From u.tabak at tudelft.nl Tue Jun 29 10:21:48 2010
From: u.tabak at tudelft.nl (Umut Tabak)
Date: Tue, 29 Jun 2010 17:21:48 +0200
Subject: [petsc-users] ublas sparse matrix bindings?
In-Reply-To: <4C2A0E41.3040205@gmail.com>
References: <4C29F158.6030008@gmail.com> <4C2A022A.7060409@gmail.com> <4C2A0E41.3040205@gmail.com>
Message-ID: <4C2A0F8C.9020407@tudelft.nl>

Luke Bloy wrote:
> Thanks for the response. That's unfortunate, as I use many different
> types of matrices that I would like to use with PETSc.
>
> I'm not much of a C++ whiz,
me neither ;)
> but I'm curious if something like an adaptor would be possible that would
> make a (float *) behave like a (PetscScalar *) as far as PETSc was
> concerned? Thoughts?
I am not sure these kinds of pointer conversions are safe if you do not
know what 'PetscScalar *' really is; you might check the docs.

From aron.ahmadia at kaust.edu.sa Tue Jun 29 10:30:23 2010
From: aron.ahmadia at kaust.edu.sa (Aron Ahmadia)
Date: Tue, 29 Jun 2010 18:30:23 +0300
Subject: [petsc-users] ublas sparse matrix bindings?
In-Reply-To: <4C2A0F8C.9020407@tudelft.nl>
References: <4C29F158.6030008@gmail.com> <4C2A022A.7060409@gmail.com> <4C2A0E41.3040205@gmail.com> <4C2A0F8C.9020407@tudelft.nl>
Message-ID: 

You couldn't simply template the dereference; you would need to have a way
to reformat the data into single/double-precision, and PETSc assumes you
are giving it a raw C pointer. This would have the effect of potentially
generating an expensive data copy every time you need to hand your object
to PETSc. I think you would be much better served by deciding ahead of time
whether you will need a single or double-precision PETSc and writing your
code accordingly with that assumption.

A
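To make Aron's point concrete, a hedged sketch of the copy he describes
(fvals and nz stand for the caller's float-valued CSR data; both are
hypothetical names, not from this thread):

  PetscScalar *avals;
  PetscInt    k;
  ierr = PetscMalloc(nz*sizeof(PetscScalar),&avals);CHKERRQ(ierr);
  for (k = 0; k < nz; k++) avals[k] = (PetscScalar)fvals[k]; /* float -> double copy */
  /* hand avals to MatCreateSeqAIJWithArrays(); PetscFree(avals) only after MatDestroy() */

The copy costs memory and time on every handoff, which is why deciding on
one precision up front is the simpler route.
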
From luke.bloy at gmail.com Tue Jun 29 11:39:06 2010
From: luke.bloy at gmail.com (Luke Bloy)
Date: Tue, 29 Jun 2010 12:39:06 -0400
Subject: [petsc-users] ublas sparse matrix bindings?
In-Reply-To: 
References: <4C29F158.6030008@gmail.com> <4C2A022A.7060409@gmail.com> <4C2A0E41.3040205@gmail.com> <4C2A0F8C.9020407@tudelft.nl>
Message-ID: <4C2A21AA.3080001@gmail.com>

Aron,

Thanks for the reply. That's unfortunate: I was hoping to use PETSc/SLEPc
on matrices of doubles and of ints within the same application. I was
hoping to keep the ints for a smaller memory footprint, as I'm already in
the >10g range, but it seems like that is not possible.

-Luke

On 06/29/2010 11:30 AM, Aron Ahmadia wrote:
> You couldn't simply template the dereference; you would need to have a
> way to reformat the data into single/double-precision, and PETSc assumes
> you are giving it a raw C pointer. This would have the effect of
> potentially generating an expensive data copy every time you need to hand
> your object to PETSc. I think you would be much better served by deciding
> ahead of time whether you will need a single or double-precision PETSc
> and writing your code accordingly with that assumption.
>
> A

From knepley at gmail.com Tue Jun 29 12:43:36 2010
From: knepley at gmail.com (Matthew Knepley)
Date: Tue, 29 Jun 2010 12:43:36 -0500
Subject: [petsc-users] ublas sparse matrix bindings?
In-Reply-To: <4C2A21AA.3080001@gmail.com>
References: <4C29F158.6030008@gmail.com> <4C2A022A.7060409@gmail.com> <4C2A0E41.3040205@gmail.com> <4C2A0F8C.9020407@tudelft.nl> <4C2A21AA.3080001@gmail.com>
Message-ID: 

1) You would never want ints, unless you understand fixed-point math

2) You could use floats and doubles, however it would be involved

3) What you really want is to make MatScalar float. Then use doubles for
the residual calculation.

   Matt

On Tue, Jun 29, 2010 at 11:39 AM, Luke Bloy wrote:
> Thanks for the reply. That's unfortunate: I was hoping to use PETSc/SLEPc
> on matrices of doubles and of ints within the same application. I was
> hoping to keep the ints for a smaller memory footprint, as I'm already in
> the >10g range, but it seems like that is not possible.
>
> -Luke

-- 
What most experimenters take for granted before they begin their experiments
is infinitely more interesting than any results to which their experiments
lead.
-- Norbert Wiener
From jed at 59A2.org Tue Jun 29 12:58:53 2010
From: jed at 59A2.org (Jed Brown)
Date: Tue, 29 Jun 2010 11:58:53 -0600
Subject: [petsc-users] ublas sparse matrix bindings?
In-Reply-To: 
References: <4C29F158.6030008@gmail.com> <4C2A022A.7060409@gmail.com> <4C2A0E41.3040205@gmail.com> <4C2A0F8C.9020407@tudelft.nl> <4C2A21AA.3080001@gmail.com>
Message-ID: <87fx05u3du.fsf@59A2.org>

On Tue, 29 Jun 2010 12:43:36 -0500, Matthew Knepley wrote:
> 3) What you really want is to make MatScalar float. Then use
> doubles for the residual calculation.

I have no idea what Luke's usage is, but MatScalar != PetscScalar is not
currently supported.

http://lists.mcs.anl.gov/pipermail/petsc-dev/2009-June/001417.html

I'm intrigued by it because my matrices are only for preconditioning
purposes, but haven't caught the bug to try fixing it since the above
thread.

Jed

From knepley at gmail.com Tue Jun 29 12:59:57 2010
From: knepley at gmail.com (Matthew Knepley)
Date: Tue, 29 Jun 2010 12:59:57 -0500
Subject: [petsc-users] ublas sparse matrix bindings?
In-Reply-To: <87fx05u3du.fsf@59A2.org>
References: <4C29F158.6030008@gmail.com> <4C2A022A.7060409@gmail.com> <4C2A0E41.3040205@gmail.com> <4C2A0F8C.9020407@tudelft.nl> <4C2A21AA.3080001@gmail.com> <87fx05u3du.fsf@59A2.org>
Message-ID: 

On Tue, Jun 29, 2010 at 12:58 PM, Jed Brown wrote:
> I have no idea what Luke's usage is, but MatScalar != PetscScalar is not
> currently supported.
>
> http://lists.mcs.anl.gov/pipermail/petsc-dev/2009-June/001417.html
>
> I'm intrigued by it because my matrices are only for preconditioning
> purposes, but haven't caught the bug to try fixing it since the above
> thread.

If someone really needs it, I will fix it.

   Matt

-- 
What most experimenters take for granted before they begin their experiments
is infinitely more interesting than any results to which their experiments
lead.
-- Norbert Wiener
From bsmith at mcs.anl.gov Tue Jun 29 13:49:17 2010
From: bsmith at mcs.anl.gov (Barry Smith)
Date: Tue, 29 Jun 2010 13:49:17 -0500
Subject: [petsc-users] ublas sparse matrix bindings?
In-Reply-To: 
References: <4C29F158.6030008@gmail.com> <4C2A022A.7060409@gmail.com> <4C2A0E41.3040205@gmail.com> <4C2A0F8C.9020407@tudelft.nl> <4C2A21AA.3080001@gmail.com> <87fx05u3du.fsf@59A2.org>
Message-ID: 

On Jun 29, 2010, at 12:59 PM, Matthew Knepley wrote:

> If someone really needs it, I will fix it.

   It is a major mother to fix. Unless we can come up with a new clever
paradigm it will be a big mess of ugly nasty code to "fix". The problem is
that in many places in the code, values in the arrays of the matrices are
passed as values through PETSc functions (which are all prototyped as
PetscScalar), so at each passage one must write code to convert from double
to single, pass it in, then free the copy.

   Barry

From bsmith at mcs.anl.gov Tue Jun 29 14:44:30 2010
From: bsmith at mcs.anl.gov (Barry Smith)
Date: Tue, 29 Jun 2010 14:44:30 -0500
Subject: [petsc-users] Simplifying PETSc Debugging in Totalview
References: <20100629100038.ibive1bfwp444w0c@www-openlabnet.llnl.gov>
Message-ID: <0B8E5299-66DB-45E1-BCF8-AF6ADABEB5F5@mcs.anl.gov>

PETSc users,

Jeff Keasler has pointed out that TotalView 8.8 and later support a new
feature where libraries can provide code that displays the contents of an
object in any useful format (allowing, for example, PETSc to provide code
to nicely display entries of a sparse matrix in the debugger). I have tried
this out by including in petsc-dev a very simple viewer for Vec and Mat;
these will just work if you use TotalView 8.8 or higher with petsc-dev.

If any users really love using TotalView with PETSc and would like to see
this extended (or extend it themselves :-), the example source code is in
src/vec/vec/interface/vector.c, called TV_display_type(); please get in
touch with us.

Thanks

   Barry

From recrusader at gmail.com Tue Jun 29 20:35:51 2010
From: recrusader at gmail.com (Yujie)
Date: Tue, 29 Jun 2010 20:35:51 -0500
Subject: [petsc-users] MatView for huge matrix output
Message-ID: 

Dear PETSc developers,

I want to output an about 36K*36K dense matrix using MatView in binary
format. I use a RedHat Enterprise 5 64-bit system. However, when the file
size of the output matrix reaches about 2.7G, the code pauses and doesn't
respond for a long time (almost 3 hours). Could you help me figure out what
happened? Thanks a lot.

Regards,
Yujie

From balay at mcs.anl.gov Tue Jun 29 21:20:25 2010
From: balay at mcs.anl.gov (Satish Balay)
Date: Tue, 29 Jun 2010 21:20:25 -0500 (CDT)
Subject: [petsc-users] MatView for huge matrix output
In-Reply-To: 
References: 
Message-ID: 

Perhaps you can build PETSc with the additional flag '--with-large-file-io=1'
and see if this helps..

Satish

On Tue, 29 Jun 2010, Yujie wrote:

> I want to output an about 36K*36K dense matrix using MatView in binary
> format. I use a RedHat Enterprise 5 64-bit system. However, when the file
> size of the output matrix reaches about 2.7G, the code pauses and doesn't
> respond for a long time (almost 3 hours). Could you help me figure out
> what happened? Thanks a lot.
>
> Regards,
> Yujie
From jed at 59A2.org Tue Jun 29 21:36:12 2010
From: jed at 59A2.org (Jed Brown)
Date: Tue, 29 Jun 2010 20:36:12 -0600
Subject: [petsc-users] MatView for huge matrix output
In-Reply-To: 
References: 
Message-ID: 

On Tue, Jun 29, 2010 at 19:35, Yujie wrote:

> I want to output an about 36K*36K dense matrix using MatView in binary
> format. I use a RedHat Enterprise 5 64-bit system. However, when the file
> size of the output matrix reaches about 2.7G, the code pauses and doesn't
> respond for a long time (almost 3 hours). Could you help me figure out
> what happened? Thanks a lot.

Were you writing this in serial or parallel? MPICH2 and Open MPI don't
properly handle large message sizes; fixes require ABI-incompatible changes
that they don't want to push out in a minor release. I believe the latest
versions of both will actually do the send, but MPI_Get_count does not
return the correct value, and it probably would not be surprising if some
MPI-IO functionality did not work correctly with large messages. Tickets
that I'm familiar with:

https://trac.mcs.anl.gov/projects/mpich2/ticket/1005
https://svn.open-mpi.org/trac/ompi/ticket/2241

I think they should both be fine for MPI-IO as long as each processor sends
less than 2 GiB (even though the final output may be much bigger).

If this happens again, you could attach a debugger to the running process
(gdb -pid XXX) and get a backtrace. Note that you can build "optimized"
with debugging symbols at a very small runtime penalty.

Jed

From recrusader at gmail.com Tue Jun 29 21:48:30 2010
From: recrusader at gmail.com (Yujie)
Date: Tue, 29 Jun 2010 21:48:30 -0500
Subject: [petsc-users] MatView for huge matrix output
In-Reply-To: 
References: 
Message-ID: 

Thank you very much! I am writing it in parallel mode. I am using MPICH,
not MPICH2. I don't know the mechanism in MatView. If I write in parallel,
is there no communication between nodes?

Regards,
Yujie

On Tue, Jun 29, 2010 at 9:36 PM, Jed Brown wrote:
> Were you writing this in serial or parallel? MPICH2 and Open MPI don't
> properly handle large message sizes; fixes require ABI-incompatible
> changes that they don't want to push out in a minor release.
> Jed From jed at 59A2.org Tue Jun 29 21:53:45 2010 From: jed at 59A2.org (Jed Brown) Date: Tue, 29 Jun 2010 20:53:45 -0600 Subject: [petsc-users] MatView for huge matrix output In-Reply-To: References: Message-ID: On Tue, Jun 29, 2010 at 20:48, Yujie wrote: > Thank you very much! I am writing it in parallel mode. I am using > MPICH not MPICH2. I don't know the mechanism in MatView. If I write in > paralle, there is no communication between nodes? > There is still communication with the (parallel) file system. I think there should be no problem if the local blocks are smaller than 2 GiB but there may be issues with larger blocks. MPICH1 is old and unsupported, you should definitely upgrade unless you need support for a heterogeneous cluster (mix of 32 and 64-bit or different endianness). Jed -------------- next part -------------- An HTML attachment was scrubbed... URL: From bsmith at mcs.anl.gov Tue Jun 29 21:57:45 2010 From: bsmith at mcs.anl.gov (Barry Smith) Date: Tue, 29 Jun 2010 21:57:45 -0500 Subject: [petsc-users] MatView for huge matrix output In-Reply-To: References: Message-ID: <62F99044-B065-4848-9131-EEFEFBB61194@mcs.anl.gov> Until the MPI implementors fix this problem the only way you can work with such large matrices is to use more processors. Make sure that no process has more than 250,000,000 entries in the matrix. So for your 36k by 36k matrix you need to use at least 5 processors, maybe six just try them. Barry On Jun 29, 2010, at 9:48 PM, Yujie wrote: > Thank you very much! I am writing it in parallel mode. I am using > MPICH not MPICH2. I don't know the mechanism in MatView. If I write in > paralle, there is no communication between nodes? > > Regards, > Yujie > > On Tue, Jun 29, 2010 at 9:36 PM, Jed Brown wrote: >> On Tue, Jun 29, 2010 at 19:35, Yujie wrote: >>> >>> Dear PETSc developers, >>> >>> I want to output an about 36K*36K dense matrix using MatView in binary >>> format. I use RedHat Enterprise 5 64bits system. However, when the >>> file size of output matrix reaches about 2.7G, the codes pause and >>> don't response for a long time (almost 3 hours). Could you help me >>> figure out what happened? Thanks a lot. >> >> Were you writing this in serial or parallel? MPICH2 and Open MPI don't >> properly handle large message sizes fixes require ABI-incompatible changes >> that they don't want to push out in a minor release. I believe the latest >> versions of both will actually do the send, but MPI_Get_count does not >> return the correct value, and it probably would not be surprising if some >> MPI-IO functionality did not work correctly with large messages. Tickets >> that I'm familiar with: >> https://trac.mcs.anl.gov/projects/mpich2/ticket/1005 >> https://svn.open-mpi.org/trac/ompi/ticket/2241 >> I think they should both be fine for MPI-IO as long as each processor sends >> less than 2 GiB (even though the final output may be much bigger). >> If this happens again, you could attach a debugger to the running process >> (gdb -pid XXX) and get a backtrace. Note that you can build "optimized" >> with debugging symbols at a very small runtime penalty. >> Jed From recrusader at gmail.com Tue Jun 29 22:02:17 2010 From: recrusader at gmail.com (Yujie) Date: Tue, 29 Jun 2010 22:02:17 -0500 Subject: [petsc-users] MatView for huge matrix output In-Reply-To: References: Message-ID: I will double check the size of blocks in each node. Thanks a lot. On Tue, Jun 29, 2010 at 9:53 PM, Jed Brown wrote: > On Tue, Jun 29, 2010 at 20:48, Yujie wrote: >> >> Thank you very much! 
From recrusader at gmail.com Tue Jun 29 22:02:17 2010
From: recrusader at gmail.com (Yujie)
Date: Tue, 29 Jun 2010 22:02:17 -0500
Subject: [petsc-users] MatView for huge matrix output
In-Reply-To: 
References: 
Message-ID: 

I will double check the size of the blocks in each node. Thanks a lot.

On Tue, Jun 29, 2010 at 9:53 PM, Jed Brown wrote:
> There is still communication with the (parallel) file system. I think
> there should be no problem if the local blocks are smaller than 2 GiB,
> but there may be issues with larger blocks. MPICH1 is old and
> unsupported; you should definitely upgrade unless you need support for a
> heterogeneous cluster (mix of 32- and 64-bit or different endianness).
>
> Jed

From recrusader at gmail.com Tue Jun 29 22:03:16 2010
From: recrusader at gmail.com (Yujie)
Date: Tue, 29 Jun 2010 22:03:16 -0500
Subject: [petsc-users] MatView for huge matrix output
In-Reply-To: <62F99044-B065-4848-9131-EEFEFBB61194@mcs.anl.gov>
References: <62F99044-B065-4848-9131-EEFEFBB61194@mcs.anl.gov>
Message-ID: 

I have used 9 processors. However, I didn't pay attention to the size of
the blocks in each node. I will double check. Thanks a lot.

On Tue, Jun 29, 2010 at 9:57 PM, Barry Smith wrote:
> Until the MPI implementors fix this problem, the only way you can work
> with such large matrices is to use more processors. Make sure that no
> process has more than 250,000,000 entries in the matrix. So for your 36k
> by 36k matrix you need to use at least 5 processors, maybe six; just try
> them.
>
> Barry