From asmund.ervik at ntnu.no Mon May 2 01:55:29 2016 From: asmund.ervik at ntnu.no (=?UTF-8?Q?=c3=85smund_Ervik?=) Date: Mon, 2 May 2016 08:55:29 +0200 Subject: [petsc-users] DMDACreateSection In-Reply-To: References: Message-ID: <5726F9E1.4040507@ntnu.no> Hi all, On 27. april 2016 10:21, Matthew Knepley wrote: > > I will also note that I think this is correct for the future as well since > the DMForest implementation (the > wrapper Toby Isaac wrote for p4est) uses PetscSection as well, is memory > efficient, and does structured > AMR. I am switching everything I had on DMDA to use this. If you want help > in this direction, of course > I am at your service. I am currently getting my magma dynamics code to work > in this way. > Pardon me for intruding, but this DMForest looks very interesting if it can get me structured AMR for "cheap". Questions: Will DMForest provide a more-or-less drop in replacement for usual DMDA for a finite difference/volume code? Does it do this already, or is it planned to? By the way, I see web rendering for my simple'n'stupid DMDA example is broken with 3.7.0 (and it's not linked to from the DM examples page): http://www.mcs.anl.gov/petsc/petsc-current/src/dm/examples/tutorials/ex13f90.F90.html Best regards, ?smund -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 473 bytes Desc: OpenPGP digital signature URL: From asmund.ervik at ntnu.no Mon May 2 02:18:05 2016 From: asmund.ervik at ntnu.no (=?UTF-8?Q?=c3=85smund_Ervik?=) Date: Mon, 2 May 2016 09:18:05 +0200 Subject: [petsc-users] PETSc DM ex13f90 In-Reply-To: References: Message-ID: <5726FF2D.9000404@ntnu.no> Hi Praveen, First of all: I'm cc-ing the petsc-users list, to preserve this discussion also for others. And, you'll get better/quicker help by asking the list. (Please reply also to the list if you have more questions.) On 29. april 2016 12:15, praveen kumar wrote: > Hi Asmund, > > I am trying to implement PETSc in a serial Fortran code for domain > decomposition. I've gone through your Van der pol example. I'm confused > about subroutines petsc_to_local and local_to_petsc. Please correct me if > I've misunderstood something. > Good that you've found dm/ex13f90 at least a little useful. I'll try to clarify further. > While explaining these subroutines in ex13f90aux.F90, you have mentioned > that "Petsc gives you local arrays which are indexed using global > coordinates". > What are 'local arrays' ? Do you mean the local vector which is derived > from DMCreateLocalVector. To answer your questions, a brief recap on how PETSc uses/stores data. In PETSc terminology, there are local and global vectors that store your fields. The only real difference between local and global is that the local vectors also have ghost values set which have been gathered from other processors after you have done DMLocalToGlobalBegin/End. There is also a difference in use, where global vectors are intended to be used with solvers like KSP etc. The vectors are of a special data structure that is hard to work with manually. Therefore we use the DMDAVecGetArray command, which gives us an array that has the same data as the vector. The array coming from the local vector is what I call the "local array". The point of the petsc_to_local and local_to_petsc subroutines, and the point of the sentence you quoted is that when PETSc gives you this array, in a program running in parallel with MPI, the array has different indexing on each process. 
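To make that concrete, here is a minimal sketch (not taken verbatim from ex13f90; da, lvec and arr are placeholder names, and the includes/declarations are as in that example) of how such a local array is used, assuming a 1D periodic DMDA with one degree of freedom and a stencil width of at least 1:

  PetscErrorCode :: ierr
  PetscInt :: xs, xm, i
  PetscScalar, pointer :: arr(:)

  ! Owned (non-ghost) index range of this process, in global indices
  call DMDAGetCorners(da, xs, PETSC_NULL_INTEGER, PETSC_NULL_INTEGER, &
                      xm, PETSC_NULL_INTEGER, PETSC_NULL_INTEGER, ierr)
  ! View the ghosted local vector as a Fortran array; it is indexed
  ! with the same global indices, plus ghost entries on either side
  call DMDAVecGetArrayF90(da, lvec, arr, ierr)
  do i = xs, xs+xm-1
     ! arr(i-1) and arr(i+1) may be ghost values owned by a neighbour;
     ! the update itself is only there to illustrate the indexing
     arr(i) = 0.5d0*(arr(i-1) + arr(i+1))
  end do
  call DMDAVecRestoreArrayF90(da, lvec, arr, ierr)

Note that the loop bounds xs and xs+xm-1 are different on every process, which is exactly the point discussed next.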
Let's think about 1D to keep it simple, and say we have 80 grid points in total distributed across 4 processes. On the first process the array is indexed from 0 to 19, then on the second process it is indexed from 20 to 39, third process has 40 to 59 and the fourth process has the indices from 60 to 79. In addition, in the local array, the first process will also have the values at index 20, 21 etc (up to the stencil width you have specified) that belong to the second process, after you have done DMLocatToGlobalBegin/End, but it cannot change these values. It can only use these values in computations, for instance when computing the value at index 19. The petsc_to_local subroutine changes this numbering system, such that on all processors the array is indexed from 1 to 20. This makes it easier to use with an existing serial Fortran code, which typically does all loops from 1 to N (and 1 is hard-coded). The local array then has the correct ghost values below 1 and above 20, unless the array is next to the global domain edge(s), where you must set the correct boundary conditions. > As far as I know, if petsc_to_local is called before a DO loop going > from i=0, nx; then nx becomes local. The petsc_to_local subroutine does not change the value of nx. This value comes from DMDAGetCorners, which is called on line 87 in ex13f90.F90. The petsc_to_local subroutine only calls DMDAVecGetArrayF90 to go from vector to array, and then changes the array indexing. Note also that petsc_to_local assumes you want the indices to start at 1 (as is standard in Fortran). If you want them to start at 0, you must change the array definitions for "f" and "array" in the subroutines transform_petsc_us and transform_us_petsc, from PetscReal,intent(inout),dimension(:,1-stw:,1-stw:,1-stw:) :: f to PetscReal,intent(inout),dimension(:,0-stw:,0-stw:,0-stw:) :: f Hope that helps, ?smund > > Thanks, > Praveen > Research Scholar, > Computational Combustion Lab, > Dept.of Aerospace Engg. > IIT Madras > -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 473 bytes Desc: OpenPGP digital signature URL: From bsmith at mcs.anl.gov Mon May 2 03:13:53 2016 From: bsmith at mcs.anl.gov (Barry Smith) Date: Mon, 2 May 2016 03:13:53 -0500 Subject: [petsc-users] DMDACreateSection In-Reply-To: <5726F9E1.4040507@ntnu.no> References: <5726F9E1.4040507@ntnu.no> Message-ID: > On May 2, 2016, at 1:55 AM, ?smund Ervik wrote: > > Hi all, > > On 27. april 2016 10:21, Matthew Knepley wrote: >> >> I will also note that I think this is correct for the future as well since >> the DMForest implementation (the >> wrapper Toby Isaac wrote for p4est) uses PetscSection as well, is memory >> efficient, and does structured >> AMR. I am switching everything I had on DMDA to use this. If you want help >> in this direction, of course >> I am at your service. I am currently getting my magma dynamics code to work >> in this way. >> > > Pardon me for intruding, but this DMForest looks very interesting if it > can get me structured AMR for "cheap". Questions: > > Will DMForest provide a more-or-less drop in replacement for usual DMDA > for a finite difference/volume code? Does it do this already, or is it > planned to? 
> > > By the way, I see web rendering for my simple'n'stupid DMDA example is > broken with 3.7.0 (and it's not linked to from the DM examples page): > http://www.mcs.anl.gov/petsc/petsc-current/src/dm/examples/tutorials/ex13f90.F90.html I suspect that it is the non-ASCII character in your name that that is causing the problem. > > > Best regards, > ?smund > From asmund.ervik at ntnu.no Mon May 2 03:29:44 2016 From: asmund.ervik at ntnu.no (=?UTF-8?Q?=c3=85smund_Ervik?=) Date: Mon, 2 May 2016 10:29:44 +0200 Subject: [petsc-users] DMDACreateSection In-Reply-To: References: <5726F9E1.4040507@ntnu.no> Message-ID: <57270FF8.6040007@ntnu.no> On 02. mai 2016 10:13, Barry Smith wrote: > >> On May 2, 2016, at 1:55 AM, ?smund Ervik wrote: >> >> Hi all, >> >> On 27. april 2016 10:21, Matthew Knepley wrote: >>> >>> I will also note that I think this is correct for the future as well since >>> the DMForest implementation (the >>> wrapper Toby Isaac wrote for p4est) uses PetscSection as well, is memory >>> efficient, and does structured >>> AMR. I am switching everything I had on DMDA to use this. If you want help >>> in this direction, of course >>> I am at your service. I am currently getting my magma dynamics code to work >>> in this way. >>> >> >> Pardon me for intruding, but this DMForest looks very interesting if it >> can get me structured AMR for "cheap". Questions: >> >> Will DMForest provide a more-or-less drop in replacement for usual DMDA >> for a finite difference/volume code? Does it do this already, or is it >> planned to? >> >> >> By the way, I see web rendering for my simple'n'stupid DMDA example is >> broken with 3.7.0 (and it's not linked to from the DM examples page): >> http://www.mcs.anl.gov/petsc/petsc-current/src/dm/examples/tutorials/ex13f90.F90.html > > I suspect that it is the non-ASCII character in your name that that is causing the problem. Aha. You may ASCII-ize my name as "Aasmund Ervik". Odd that it still works fine for 3.6: http://www.mcs.anl.gov/petsc/petsc-3.6/src/dm/examples/tutorials/ex13f90.F90.html >> >> >> Best regards, >> ?smund >> > -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 473 bytes Desc: OpenPGP digital signature URL: From bsmith at mcs.anl.gov Mon May 2 07:16:59 2016 From: bsmith at mcs.anl.gov (Barry Smith) Date: Mon, 2 May 2016 07:16:59 -0500 Subject: [petsc-users] DMDACreateSection In-Reply-To: <57270FF8.6040007@ntnu.no> References: <5726F9E1.4040507@ntnu.no> <57270FF8.6040007@ntnu.no> Message-ID: > On May 2, 2016, at 3:29 AM, ?smund Ervik wrote: > > On 02. mai 2016 10:13, Barry Smith wrote: >> >>> On May 2, 2016, at 1:55 AM, ?smund Ervik wrote: >>> >>> Hi all, >>> >>> On 27. april 2016 10:21, Matthew Knepley wrote: >>>> >>>> I will also note that I think this is correct for the future as well since >>>> the DMForest implementation (the >>>> wrapper Toby Isaac wrote for p4est) uses PetscSection as well, is memory >>>> efficient, and does structured >>>> AMR. I am switching everything I had on DMDA to use this. If you want help >>>> in this direction, of course >>>> I am at your service. I am currently getting my magma dynamics code to work >>>> in this way. >>>> >>> >>> Pardon me for intruding, but this DMForest looks very interesting if it >>> can get me structured AMR for "cheap". Questions: >>> >>> Will DMForest provide a more-or-less drop in replacement for usual DMDA >>> for a finite difference/volume code? Does it do this already, or is it >>> planned to? 
>>> >>> >>> By the way, I see web rendering for my simple'n'stupid DMDA example is >>> broken with 3.7.0 (and it's not linked to from the DM examples page): >>> http://www.mcs.anl.gov/petsc/petsc-current/src/dm/examples/tutorials/ex13f90.F90.html >> >> I suspect that it is the non-ASCII character in your name that that is causing the problem. > > Aha. You may ASCII-ize my name as "Aasmund Ervik". Fixed in main > > Odd that it still works fine for 3.6: > http://www.mcs.anl.gov/petsc/petsc-3.6/src/dm/examples/tutorials/ex13f90.F90.html > >>> >>> >>> Best regards, >>> ?smund >>> >> > From Federico.Miorelli at CGG.COM Mon May 2 07:48:29 2016 From: Federico.Miorelli at CGG.COM (Miorelli, Federico) Date: Mon, 2 May 2016 12:48:29 +0000 Subject: [petsc-users] Custom KSP monitor changes in PETSc 3.7 Message-ID: <8D478341240222479E0DB361E050CB6A71EF51@msy-emb04.int.cggveritas.com> Dear All, I am having some issues upgrading to PETSc 3.7 due to some changes in the KSPMonitor routines. I need to configure my KSP solver to output its convergence log to an existing ASCII viewer through a custom monitor, printing only one every 10 iterations. The calling code is Fortran, we wrote a small C code that just calls the default monitor every 10 iterations, passing the viewer as last argument. If I understood correctly it is now necessary to set up a PetscViewerAndFormat structure and pass that as last argument to the monitor routine. I tried to create one with PetscViewerAndFormatCreate but I'm getting a runtime error (see below). Could you please help me understand what I did wrong? Thanks in advance, Federico Error: [1]PETSC ERROR: #1 PetscViewerPushFormat() line 149 in /PETSc/petsc-3.7.0/src/sys/classes/viewer/interface/viewa.c Error Message -------------------------------------------------------------- [8]PETSC ERROR: Argument out of range [8]PETSC ERROR: Too many PetscViewerPushFormat(), perhaps you forgot PetscViewerPopFormat()? Fortran code: PetscViewer :: viewer PetscViewerAndFormat :: vf external ShellKSPMonitor ... call PetscViewerAndFormatCreate(viewer, PETSC_VIEWER_DEFAULT, vf,ierr) call KSPMonitorSet(ksp, ShellKSPMonitor, vf, PetscViewerAndFormatDestroy, ierr) C code: PetscErrorCode shellkspmonitor_(KSP *ksp, PetscInt *n, PetscReal *rnorm, void *ptr) { PetscErrorCode ierr=0; if (*n % 10 == 0) { ierr = KSPMonitorTrueResidualNorm(*ksp,*n,*rnorm,(PetscViewerAndFormat*)ptr);CHKERRQ(ierr); } return ierr; } ______ ______ ______ Federico Miorelli Senior R&D Geophysicist Subsurface Imaging - General Geophysics Italy This email and any accompanying attachments are confidential. If you received this email by mistake, please delete it from your system. Any review, disclosure, copying, distribution, or use of the email by others is strictly prohibited. -------------- next part -------------- An HTML attachment was scrubbed... URL: From praveen1bharadwaj at gmail.com Mon May 2 08:38:06 2016 From: praveen1bharadwaj at gmail.com (praveen kumar) Date: Mon, 2 May 2016 19:08:06 +0530 Subject: [petsc-users] PETSc DM ex13f90 In-Reply-To: <5726FF2D.9000404@ntnu.no> References: <5726FF2D.9000404@ntnu.no> Message-ID: Thanks a lot for such a lucid explanation. The note which you mentioned at the end is very important for me, as my code contains loops going from both 1 to N and 0 to N+1. Thanks, Praveen Research Scholar, Computational Combustion Lab, Dept. of Aerospace Engg. 
IIT Madras On Mon, May 2, 2016 at 12:48 PM, ?smund Ervik wrote: > Hi Praveen, > > First of all: I'm cc-ing the petsc-users list, to preserve this > discussion also for others. And, you'll get better/quicker help by > asking the list. (Please reply also to the list if you have more > questions.) > > On 29. april 2016 12:15, praveen kumar wrote: > > Hi Asmund, > > > > I am trying to implement PETSc in a serial Fortran code for domain > > decomposition. I've gone through your Van der pol example. I'm confused > > about subroutines petsc_to_local and local_to_petsc. Please correct me if > > I've misunderstood something. > > > > Good that you've found dm/ex13f90 at least a little useful. I'll try to > clarify further. > > > While explaining these subroutines in ex13f90aux.F90, you have mentioned > > that "Petsc gives you local arrays which are indexed using global > > coordinates". > > What are 'local arrays' ? Do you mean the local vector which is derived > > from DMCreateLocalVector. > > To answer your questions, a brief recap on how PETSc uses/stores data. > In PETSc terminology, there are local and global vectors that store your > fields. The only real difference between local and global is that the > local vectors also have ghost values set which have been gathered from > other processors after you have done DMLocalToGlobalBegin/End. There is > also a difference in use, where global vectors are intended to be used > with solvers like KSP etc. > > The vectors are of a special data structure that is hard to work with > manually. Therefore we use the DMDAVecGetArray command, which gives us > an array that has the same data as the vector. The array coming from the > local vector is what I call the "local array". The point of the > petsc_to_local and local_to_petsc subroutines, and the point of the > sentence you quoted is that when PETSc gives you this array, in a > program running in parallel with MPI, the array has different indexing > on each process. > > Let's think about 1D to keep it simple, and say we have 80 grid points > in total distributed across 4 processes. On the first process the array > is indexed from 0 to 19, then on the second process it is indexed from > 20 to 39, third process has 40 to 59 and the fourth process has the > indices from 60 to 79. In addition, in the local array, the first > process will also have the values at index 20, 21 etc (up to the stencil > width you have specified) that belong to the second process, after you > have done DMLocatToGlobalBegin/End, but it cannot change these values. > It can only use these values in computations, for instance when > computing the value at index 19. > > The petsc_to_local subroutine changes this numbering system, such that > on all processors the array is indexed from 1 to 20. This makes it > easier to use with an existing serial Fortran code, which typically does > all loops from 1 to N (and 1 is hard-coded). The local array then has > the correct ghost values below 1 and above 20, unless the array is next > to the global domain edge(s), where you must set the correct boundary > conditions. > > > As far as I know, if petsc_to_local is called before a DO loop going > > from i=0, nx; then nx becomes local. > > The petsc_to_local subroutine does not change the value of nx. This > value comes from DMDAGetCorners, which is called on line 87 in > ex13f90.F90. The petsc_to_local subroutine only calls DMDAVecGetArrayF90 > to go from vector to array, and then changes the array indexing. 
> > Note also that petsc_to_local assumes you want the indices to start at 1 > (as is standard in Fortran). If you want them to start at 0, you must > change the array definitions for "f" and "array" in the subroutines > transform_petsc_us and transform_us_petsc, > from > PetscReal,intent(inout),dimension(:,1-stw:,1-stw:,1-stw:) :: f > to > PetscReal,intent(inout),dimension(:,0-stw:,0-stw:,0-stw:) :: f > > Hope that helps, > ?smund > > > > > Thanks, > > Praveen > > Research Scholar, > > Computational Combustion Lab, > > Dept.of Aerospace Engg. > > IIT Madras > > > > -- B. Praveen Kumar Research Scholar, Computational Combustion Lab, Dept.of Aerospace Engg. IIT Madras -------------- next part -------------- An HTML attachment was scrubbed... URL: From asmund.ervik at ntnu.no Mon May 2 13:23:23 2016 From: asmund.ervik at ntnu.no (=?iso-8859-1?Q?=C5smund_Ervik?=) Date: Mon, 2 May 2016 18:23:23 +0000 Subject: [petsc-users] PETSc DM ex13f90 In-Reply-To: References: <5726FF2D.9000404@ntnu.no>, Message-ID: <1462213088408.34545@ntnu.no> ? > >Thanks a lot for such a lucid explanation. The note which you mentioned at the end is very important for me, as my code contains loops going from both 1 to N and 0 to N+1. > You're welcome. Just to avoid any potential misunderstanding: the note at the end was if you have loops over your "real" (non-ghost) variables going from 0 to N-1, i.e. zero indexing as in C, where the first index of your "real" values is 0, ghost values are at -1 and below and at N and above. On the other hand, if you have some loops from 1 to N and some loops from 0 to N+1, it means you have some loops over just the "real" variables (1 to N) and some loops over "real" + ghost variables (0 to N+1). In this case, you do not need to change the code. Just set the stencil width to 1 (or to 2 if your "widest" loop goes from -1 to N+2, etc.). (When I say "real", I don't mean as in real vs. complex numbers, but as in the grid point values that are not boundary conditions.) Regards, ?smund From overholt at capesim.com Mon May 2 16:00:28 2016 From: overholt at capesim.com (Matthew Overholt) Date: Mon, 2 May 2016 17:00:28 -0400 Subject: [petsc-users] Parallel to sequential matrix scatter for PARDISO Message-ID: <002301d1a4b5$a96242c0$fc26c840$@capesim.com> Petsc-users, I want to use PARDISO for a KSPPREONLY solution in a parallel context. I understand that my FEA stiffness matrix for KSP (PARDISO) needs to be of type MATSEQAIJ (according to MATSOLVERMKL_PARDISO), but I would like to assemble this matrix in parallel (MATSBAIJ) and then collect it on root as a sequential matrix for calling KSP/PARDISO. For vectors there is VecScatterCreateToZero() but I can't find the equivalent for matrices. I can avoid the parallel matrix stiffness altogether and use 3 vectors instead, but I'm wondering if that is my best option. What is the recommended practice for using PARDISO in a parallel context? The only examples I've found so far are sequential. Thanks in advance, Matt Overholt --- This email has been checked for viruses by Avast antivirus software. https://www.avast.com/antivirus -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From bsmith at mcs.anl.gov Mon May 2 16:40:27 2016 From: bsmith at mcs.anl.gov (Barry Smith) Date: Mon, 2 May 2016 16:40:27 -0500 Subject: [petsc-users] Parallel to sequential matrix scatter for PARDISO In-Reply-To: <002301d1a4b5$a96242c0$fc26c840$@capesim.com> References: <002301d1a4b5$a96242c0$fc26c840$@capesim.com> Message-ID: <6C12D299-C09B-48EB-A07E-E54CD4523A39@mcs.anl.gov> The easiest way to do this is use -ksp_type preonly -pc_type redundant -redundant_ksp_type preonly -redundant_pc_type lu -redundant_pc_factor_mat_solver_package mkl_pardiso This automatically manages move the matrix and right hand side vector down to one process (this is what -pc_type redundant) does. Barry Of course all the code exists in PETSc to help you manage the process of moving the matrix down to one process yourself but why bother when -pc_type redundant can do it for you. Plus with the same code you can try parallel solvers like mumps and superlu_dist and mkl_cpardiso. > On May 2, 2016, at 4:00 PM, Matthew Overholt wrote: > > Petsc-users, > > I want to use PARDISO for a KSPPREONLY solution in a parallel context. I understand that my FEA stiffness matrix for KSP (PARDISO) needs to be of type MATSEQAIJ (according to MATSOLVERMKL_PARDISO), but I would like to assemble this matrix in parallel (MATSBAIJ) and then collect it on root as a sequential matrix for calling KSP/PARDISO. For vectors there is VecScatterCreateToZero() but I can't find the equivalent for matrices. I can avoid the parallel matrix stiffness altogether and use 3 vectors instead, but I'm wondering if that is my best option. > > What is the recommended practice for using PARDISO in a parallel context? The only examples I've found so far are sequential. > > Thanks in advance, > Matt Overholt > > > Virus-free. www.avast.com From ztdepyahoo at 163.com Mon May 2 20:29:47 2016 From: ztdepyahoo at 163.com (ztdepyahoo at 163.com) Date: Tue, 3 May 2016 09:29:47 +0800 Subject: [petsc-users] Dose Petsc has DMPlex example Message-ID: <201605030929463862822@163.com> Dear professor: I want to write a parallel 3D CFD code based on unstructred grid, does Petsc has DMPlex examples to start with. Regards -------------- next part -------------- An HTML attachment was scrubbed... URL: From knepley at gmail.com Mon May 2 21:44:42 2016 From: knepley at gmail.com (Matthew Knepley) Date: Mon, 2 May 2016 21:44:42 -0500 Subject: [petsc-users] Dose Petsc has DMPlex example In-Reply-To: <201605030929463862822@163.com> References: <201605030929463862822@163.com> Message-ID: On Mon, May 2, 2016 at 8:29 PM, ztdepyahoo at 163.com wrote: > Dear professor: > I want to write a parallel 3D CFD code based on unstructred grid, > does Petsc has DMPlex examples to start with. > SNES ex62 is an unstructured grid Stokes problem discretized with low-order finite elements. Of course, all the different possible choices will impact the design. Matt > Regards > -- What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. -- Norbert Wiener -------------- next part -------------- An HTML attachment was scrubbed... URL: From praveen1bharadwaj at gmail.com Tue May 3 07:25:16 2016 From: praveen1bharadwaj at gmail.com (praveen kumar) Date: Tue, 3 May 2016 17:55:16 +0530 Subject: [petsc-users] PETSc DM ex13f90 In-Reply-To: <1462213088408.34545@ntnu.no> References: <5726FF2D.9000404@ntnu.no> <1462213088408.34545@ntnu.no> Message-ID: Thanks again, Asmund. 
You guessed it correct that I have misunderstood. Things are clear now. Regards, Praveen On Mon, May 2, 2016 at 11:53 PM, ?smund Ervik wrote: > > > > >Thanks a lot for such a lucid explanation. The note which you mentioned > at the end is very important for me, as my code contains loops going from > both 1 to N and 0 to N+1. > > > > You're welcome. > > Just to avoid any potential misunderstanding: the note at the end was if > you have loops over your "real" (non-ghost) variables going from 0 to N-1, > i.e. zero indexing as in C, where the first index of your "real" values is > 0, ghost values are at -1 and below and at N and above. > > On the other hand, if you have some loops from 1 to N and some loops from > 0 to N+1, it means you have some loops over just the "real" variables (1 to > N) and some loops over "real" + ghost variables (0 to N+1). In this case, > you do not need to change the code. Just set the stencil width to 1 (or to > 2 if your "widest" loop goes from -1 to N+2, etc.). > > (When I say "real", I don't mean as in real vs. complex numbers, but as in > the grid point values that are not boundary conditions.) > > Regards, > ?smund -- B. Praveen Kumar Research Scholar, Computational Combustion Lab, Dept.of Aerospace Engg. IIT Madras -------------- next part -------------- An HTML attachment was scrubbed... URL: From i.gutheil at fz-juelich.de Tue May 3 09:26:08 2016 From: i.gutheil at fz-juelich.de (Inge Gutheil) Date: Tue, 3 May 2016 16:26:08 +0200 Subject: [petsc-users] Error compiling 3.7.0 on BlueGene/Q Message-ID: <5728B500.3060603@fz-juelich.de> When trying to install PETSc 3.7.0 on JUQUEEN I get an error during make. I attached the configure.log and the make.log What can be the reason for that error? Regards Inge Gutheil -- -- Inge Gutheil Juelich Supercomputing Centre Institute for Advanced Simulation Forschungszentrum Juelich GmbH 52425 Juelich, Germany Phone: +49-2461-61-3135 Fax: +49-2461-61-6656 E-mail:i.gutheil at fz-juelich.de ------------------------------------------------------------------------------------------------ ------------------------------------------------------------------------------------------------ Forschungszentrum Juelich GmbH 52425 Juelich Sitz der Gesellschaft: Juelich Eingetragen im Handelsregister des Amtsgerichts Dueren Nr. HR B 3498 Vorsitzender des Aufsichtsrats: MinDir Dr. Karl Eugen Huthmacher Geschaeftsfuehrung: Prof. Dr.-Ing. Wolfgang Marquardt (Vorsitzender), Karsten Beneke (stellv. Vorsitzender), Prof. Dr.-Ing. Harald Bolt, Prof. Dr. Sebastian M. Schmidt ------------------------------------------------------------------------------------------------ ------------------------------------------------------------------------------------------------ -------------- next part -------------- A non-text attachment was scrubbed... Name: problem_juqueen_3.7.0.tar.gz Type: application/gzip Size: 415491 bytes Desc: not available URL: From balay at mcs.anl.gov Tue May 3 10:25:12 2016 From: balay at mcs.anl.gov (Satish Balay) Date: Tue, 3 May 2016 10:25:12 -0500 Subject: [petsc-users] Error compiling 3.7.0 on BlueGene/Q In-Reply-To: <5728B500.3060603@fz-juelich.de> References: <5728B500.3060603@fz-juelich.de> Message-ID: You'll need the following patch for essl. https://bitbucket.org/petsc/petsc/commits/d6d77db912108bafdb1d20f46c9dc7bc23f0e076 Satish On Tue, 3 May 2016, Inge Gutheil wrote: > When trying to install PETSc 3.7.0 on JUQUEEN I get an error during > make. 
I attached the configure.log and the make.log > What can be the reason for that error? > > Regards > Inge Gutheil > > -- > -- > > Inge Gutheil > Juelich Supercomputing Centre > Institute for Advanced Simulation > Forschungszentrum Juelich GmbH > 52425 Juelich, Germany > > Phone: +49-2461-61-3135 > Fax: +49-2461-61-6656 > E-mail:i.gutheil at fz-juelich.de > > > > ------------------------------------------------------------------------------------------------ > ------------------------------------------------------------------------------------------------ > Forschungszentrum Juelich GmbH > 52425 Juelich > Sitz der Gesellschaft: Juelich > Eingetragen im Handelsregister des Amtsgerichts Dueren Nr. HR B 3498 > Vorsitzender des Aufsichtsrats: MinDir Dr. Karl Eugen Huthmacher > Geschaeftsfuehrung: Prof. Dr.-Ing. Wolfgang Marquardt (Vorsitzender), > Karsten Beneke (stellv. Vorsitzender), Prof. Dr.-Ing. Harald Bolt, > Prof. Dr. Sebastian M. Schmidt > ------------------------------------------------------------------------------------------------ > ------------------------------------------------------------------------------------------------ > > From bsmith at mcs.anl.gov Tue May 3 15:29:13 2016 From: bsmith at mcs.anl.gov (Barry Smith) Date: Tue, 3 May 2016 15:29:13 -0500 Subject: [petsc-users] Custom KSP monitor changes in PETSc 3.7 In-Reply-To: <8D478341240222479E0DB361E050CB6A71EF51@msy-emb04.int.cggveritas.com> References: <8D478341240222479E0DB361E050CB6A71EF51@msy-emb04.int.cggveritas.com> Message-ID: <2F2AFA80-A3FF-49B0-BD88-C8C9C1BA2A72@mcs.anl.gov> Federico, Sorry for the delay in responding. I think the problem is due to setting the monitor from Fortran but having the monitor written in C. To make your life simpler I think you should be able to write the ShellKSPMonitor in Fortran. (no C code at all) If that fails please send us a complete (trivial) code that we can build that exhibits the problem and we'll see what is going on. Barry > On May 2, 2016, at 7:48 AM, Miorelli, Federico wrote: > > Dear All, > > I am having some issues upgrading to PETSc 3.7 due to some changes in the KSPMonitor routines. > I need to configure my KSP solver to output its convergence log to an existing ASCII viewer through a custom monitor, printing only one every 10 iterations. > The calling code is Fortran, we wrote a small C code that just calls the default monitor every 10 iterations, passing the viewer as last argument. If I understood correctly it is now necessary to set up a PetscViewerAndFormat structure and pass that as last argument to the monitor routine. > > I tried to create one with PetscViewerAndFormatCreate but I'm getting a runtime error (see below). > Could you please help me understand what I did wrong? > > Thanks in advance, > > Federico > > > > Error: > > [1]PETSC ERROR: #1 PetscViewerPushFormat() line 149 in /PETSc/petsc-3.7.0/src/sys/classes/viewer/interface/viewa.c > Error Message -------------------------------------------------------------- > [8]PETSC ERROR: Argument out of range > [8]PETSC ERROR: Too many PetscViewerPushFormat(), perhaps you forgot PetscViewerPopFormat()? > > > Fortran code: > > PetscViewer :: viewer > PetscViewerAndFormat :: vf > external ShellKSPMonitor > ... 
> call PetscViewerAndFormatCreate(viewer, PETSC_VIEWER_DEFAULT, vf,ierr) > call KSPMonitorSet(ksp, ShellKSPMonitor, vf, PetscViewerAndFormatDestroy, ierr) > > > > C code: > PetscErrorCode shellkspmonitor_(KSP *ksp, PetscInt *n, PetscReal *rnorm, void *ptr) > { > PetscErrorCode ierr=0; > if (*n % 10 == 0) { > ierr = KSPMonitorTrueResidualNorm(*ksp,*n,*rnorm,(PetscViewerAndFormat*)ptr);CHKERRQ(ierr); > } > return ierr; > } > > > > > ______ ______ ______ > Federico Miorelli > Senior R&D Geophysicist > Subsurface Imaging - General Geophysics Italy > > > > This email and any accompanying attachments are confidential. If you received this email by mistake, please delete > it from your system. Any review, disclosure, copying, distribution, or use of the email by others is strictly prohibited. From gomer at stanford.edu Tue May 3 19:38:45 2016 From: gomer at stanford.edu (Paul Urbanczyk) Date: Tue, 3 May 2016 17:38:45 -0700 Subject: [petsc-users] Structured Finite Difference Method With Periodic BC Help Message-ID: <3e36134d-156f-2f6a-26df-4b7f46cd70d2@stanford.edu> Hello, I'm trying to implement a CFD code using finite differences on a structured mesh with PETSc. I'm having a bit of trouble understanding how to properly use the periodic boundary condition with distributed arrays, and need some clarification. If I set the boundaries in the x-direction to be DM_BOUNDARY_PERIODIC, and set a stencil width of two, there should be two ghost cells in the x-direction at each end of the mesh, which looks like it's happening just fine. However, it seems that the assumption being made by PETSc when filling in these values is that the mesh is a cell-centered finite difference, rather than a vertex-centered finite difference. Is there a way to change this behavior? In other words, I need the first ghost cell on each side to correspond to the opposite side's first interior point, rather than the opposite boundary point. If there is not a way to change this behavior, then I need to set my own ghost cells; however, I'm having trouble implementing this... If I change the boundaries to DM_BOUNDARY_GHOSTED, with a stencil width of two, I have the necessary ghost cells at either end of the mesh. I then try to do the following: 1) Scatter the global vector to local vectors on each rank 2) Get a local array referencing the local vectors 3) Calculate ghost values and fill them in the appropriate local arrays 4) Restore the local vectors from the arrays 5) Scatter the local vector info back to the global vector However, if I then re-scatter and look at the local vectors, the ghost cell values are zero. It seems as though the ghost values are lost when scattered back to the global vector. What am I doing wrong here? Thanks for your help! -Paul From bsmith at mcs.anl.gov Tue May 3 21:26:38 2016 From: bsmith at mcs.anl.gov (Barry Smith) Date: Tue, 3 May 2016 21:26:38 -0500 Subject: [petsc-users] Structured Finite Difference Method With Periodic BC Help In-Reply-To: <3e36134d-156f-2f6a-26df-4b7f46cd70d2@stanford.edu> References: <3e36134d-156f-2f6a-26df-4b7f46cd70d2@stanford.edu> Message-ID: <0B93C7D3-5818-4431-B9BE-A1BBE944EC1A@mcs.anl.gov> > On May 3, 2016, at 7:38 PM, Paul Urbanczyk wrote: > > Hello, > > I'm trying to implement a CFD code using finite differences on a structured mesh with PETSc. > > I'm having a bit of trouble understanding how to properly use the periodic boundary condition with distributed arrays, and need some clarification. 
> > If I set the boundaries in the x-direction to be DM_BOUNDARY_PERIODIC, and set a stencil width of two, there should be two ghost cells in the x-direction at each end of the mesh, which looks like it's happening just fine. > > However, it seems that the assumption being made by PETSc when filling in these values is that the mesh is a cell-centered finite difference, rather than a vertex-centered finite difference. Is there a way to change this behavior? In other words, I need the first ghost cell on each side to correspond to the opposite side's first interior point, rather than the opposite boundary point. DMDA doesn't really have a concept of if the values are cell-centered or vertex centered. I think this issue you are facing is only an issue of labeling, not of real substance; in some sense it is up to you to interpret the meaning of the values. In your case with vertex centered values here is how I see it in a picture. Consider a domain from [0,1) periodic with n grid points in the global vector x = 0 1-2h 1-h 1 i = 0 1 2 n-1 So you divide the domain into n sections and label the vertices (left end of the sections) starting with zero, note that the last section has no right hand side index because the value at x=1 is the same as the value at x=0 now if you have two processors and a stencil width of two the local domains look like x = 1-2h 1-h 0 i = n-2 n-1 0 1 2 .... x = 1-2h 1-h 1=0* 1+h i = n-2 n-1 0 1 This is what the DMDA will deliver in the local vector after a DMGlobalToLocalBegin/End In periodic coordinates the location x=1 is the same as the location x=0 > > If there is not a way to change this behavior, then I need to set my own ghost cells; however, I'm having trouble implementing this... > > If I change the boundaries to DM_BOUNDARY_GHOSTED, with a stencil width of two, I have the necessary ghost cells at either end of the mesh. I then try to do the following: > > 1) Scatter the global vector to local vectors on each rank > > 2) Get a local array referencing the local vectors > > 3) Calculate ghost values and fill them in the appropriate local arrays > > 4) Restore the local vectors from the arrays > > 5) Scatter the local vector info back to the global vector > > However, if I then re-scatter and look at the local vectors, the ghost cell values are zero. It seems as though the ghost values are lost when scattered back to the global vector. This is nuts; in theory the basic DMDA does what one needs to handle periodic boundary conditions. No need to try to implement it yourself. Of course we could always have bugs so if you have a problem with my reasoning send a simple 1d code that demonstrates the issue. Barry > > What am I doing wrong here? > > Thanks for your help! > > -Paul > > From i.gutheil at fz-juelich.de Wed May 4 01:15:04 2016 From: i.gutheil at fz-juelich.de (Inge Gutheil) Date: Wed, 4 May 2016 08:15:04 +0200 Subject: [petsc-users] Error compiling 3.7.0 on BlueGene/Q In-Reply-To: References: <5728B500.3060603@fz-juelich.de> Message-ID: <57299368.8090706@fz-juelich.de> Thanks, that solves it. Inge On 05/03/16 17:25, Satish Balay wrote: > You'll need the following patch for essl. > > https://bitbucket.org/petsc/petsc/commits/d6d77db912108bafdb1d20f46c9dc7bc23f0e076 > > Satish > > On Tue, 3 May 2016, Inge Gutheil wrote: > >> When trying to install PETSc 3.7.0 on JUQUEEN I get an error during >> make. I attached the configure.log and the make.log >> What can be the reason for that error? 
>> >> Regards >> Inge Gutheil >> >> -- >> -- >> >> Inge Gutheil >> Juelich Supercomputing Centre >> Institute for Advanced Simulation >> Forschungszentrum Juelich GmbH >> 52425 Juelich, Germany >> >> Phone: +49-2461-61-3135 >> Fax: +49-2461-61-6656 >> E-mail:i.gutheil at fz-juelich.de >> >> >> >> ------------------------------------------------------------------------------------------------ >> ------------------------------------------------------------------------------------------------ >> Forschungszentrum Juelich GmbH >> 52425 Juelich >> Sitz der Gesellschaft: Juelich >> Eingetragen im Handelsregister des Amtsgerichts Dueren Nr. HR B 3498 >> Vorsitzender des Aufsichtsrats: MinDir Dr. Karl Eugen Huthmacher >> Geschaeftsfuehrung: Prof. Dr.-Ing. Wolfgang Marquardt (Vorsitzender), >> Karsten Beneke (stellv. Vorsitzender), Prof. Dr.-Ing. Harald Bolt, >> Prof. Dr. Sebastian M. Schmidt >> ------------------------------------------------------------------------------------------------ >> ------------------------------------------------------------------------------------------------ >> >> -- -- Inge Gutheil Juelich Supercomputing Centre Institute for Advanced Simulation Forschungszentrum Juelich GmbH 52425 Juelich, Germany Phone: +49-2461-61-3135 Fax: +49-2461-61-6656 E-mail:i.gutheil at fz-juelich.de From Federico.Miorelli at CGG.COM Wed May 4 07:26:49 2016 From: Federico.Miorelli at CGG.COM (Miorelli, Federico) Date: Wed, 4 May 2016 12:26:49 +0000 Subject: [petsc-users] Custom KSP monitor changes in PETSc 3.7 In-Reply-To: <2F2AFA80-A3FF-49B0-BD88-C8C9C1BA2A72@mcs.anl.gov> References: <8D478341240222479E0DB361E050CB6A71EF51@msy-emb04.int.cggveritas.com> <2F2AFA80-A3FF-49B0-BD88-C8C9C1BA2A72@mcs.anl.gov> Message-ID: <8D478341240222479E0DB361E050CB6A71FA32@msy-emb04.int.cggveritas.com> Barry, Thanks for your answer, implementing everything in Fortran solved the problem. Regards, Federico ______ ______ ______ Federico Miorelli Senior R&D Geophysicist Subsurface Imaging - General Geophysics Italy CGG Electromagnetics (Italy) Srl cgg.com -----Original Message----- From: Barry Smith [mailto:bsmith at mcs.anl.gov] Sent: marted? 3 maggio 2016 22:29 To: Miorelli, Federico Cc: petsc-users at mcs.anl.gov Subject: Re: [petsc-users] Custom KSP monitor changes in PETSc 3.7 Federico, Sorry for the delay in responding. I think the problem is due to setting the monitor from Fortran but having the monitor written in C. To make your life simpler I think you should be able to write the ShellKSPMonitor in Fortran. (no C code at all) If that fails please send us a complete (trivial) code that we can build that exhibits the problem and we'll see what is going on. Barry > On May 2, 2016, at 7:48 AM, Miorelli, Federico wrote: > > Dear All, > > I am having some issues upgrading to PETSc 3.7 due to some changes in the KSPMonitor routines. > I need to configure my KSP solver to output its convergence log to an existing ASCII viewer through a custom monitor, printing only one every 10 iterations. > The calling code is Fortran, we wrote a small C code that just calls the default monitor every 10 iterations, passing the viewer as last argument. If I understood correctly it is now necessary to set up a PetscViewerAndFormat structure and pass that as last argument to the monitor routine. > > I tried to create one with PetscViewerAndFormatCreate but I'm getting a runtime error (see below). > Could you please help me understand what I did wrong? 
> > Thanks in advance, > > Federico > > > > Error: > > [1]PETSC ERROR: #1 PetscViewerPushFormat() line 149 in > /PETSc/petsc-3.7.0/src/sys/classes/viewer/interface/viewa.c > Error Message > -------------------------------------------------------------- > [8]PETSC ERROR: Argument out of range > [8]PETSC ERROR: Too many PetscViewerPushFormat(), perhaps you forgot PetscViewerPopFormat()? > > > Fortran code: > > PetscViewer :: viewer > PetscViewerAndFormat :: vf > external ShellKSPMonitor > ... > call PetscViewerAndFormatCreate(viewer, PETSC_VIEWER_DEFAULT, vf,ierr) > call KSPMonitorSet(ksp, ShellKSPMonitor, vf, > PetscViewerAndFormatDestroy, ierr) > > > > C code: > PetscErrorCode shellkspmonitor_(KSP *ksp, PetscInt *n, PetscReal > *rnorm, void *ptr) { > PetscErrorCode ierr=0; > if (*n % 10 == 0) { > ierr = KSPMonitorTrueResidualNorm(*ksp,*n,*rnorm,(PetscViewerAndFormat*)ptr);CHKERRQ(ierr); > } > return ierr; > } > > > > > ______ ______ ______ > Federico Miorelli > Senior R&D Geophysicist > Subsurface Imaging - General Geophysics Italy > > > > This email and any accompanying attachments are confidential. If you > received this email by mistake, please delete it from your system. Any review, disclosure, copying, distribution, or use of the email by others is strictly prohibited. This email and any accompanying attachments are confidential. If you received this email by mistake, please delete it from your system. Any review, disclosure, copying, distribution, or use of the email by others is strictly prohibited. From juhaj at iki.fi Thu May 5 12:01:36 2016 From: juhaj at iki.fi (Juha Jaykka) Date: Thu, 05 May 2016 18:01:36 +0100 Subject: [petsc-users] -snes_mf and -snes_fd Message-ID: <2781732.IjOLFbidAP@vega> Dear list, I am a bit confused by some SNES options. How do -snes_mf, -snes_fd, SNESSetJacobian exactly interact? I run my code (it's basically the Bratu3D example in python) with -snes_fd 0 -snes_mf 0 and my RHS code gets called 7 times -snes_fd 0 -snes_mf 1 and RHS called 44 times -snes_fd 1 -snes_mf 0 and RHS called 775 times -snes_fd 1 -snes_mf 1 and RHS called 44 times without -snes_mf AT ALL but with -snes_fd 0 I get RHS called 3 times [1] without -snes_mf AT ALL but with -snes_fd 1 I get RHS called 775 times without -snes_mf OR -snes_fd, I get RHS called 3 times [1] without -snes_fd at all and -snes_mf, I get 44 calls without -snes_fd at all and -snes_mf 1, I get 44 calls without -snes_fd at all and -snes_mf 0, I get 3 calls [1] I would have expected -snes_mf 0 and no snes_mf at all to behave in the same way. I am also puzzled by the fact that there seem to be four different ways to solve the problem: with Jacobian (3 calls), with fd (775 calls) and with mf (44 calls). What is the 7 calls case all about? I guess in all the years I've used PETSc I should have learnt better, but I had never taught it to anyone before, so hand't noticed my lack of knowledge. Cheers, Juha [1] This causes my Jacobian routine to be called as intended. From knepley at gmail.com Thu May 5 12:37:42 2016 From: knepley at gmail.com (Matthew Knepley) Date: Thu, 5 May 2016 12:37:42 -0500 Subject: [petsc-users] -snes_mf and -snes_fd In-Reply-To: <2781732.IjOLFbidAP@vega> References: <2781732.IjOLFbidAP@vega> Message-ID: On Thu, May 5, 2016 at 12:01 PM, Juha Jaykka wrote: > Dear list, > > I am a bit confused by some SNES options. How do -snes_mf, -snes_fd, > SNESSetJacobian exactly interact? 
> > I run my code (it's basically the Bratu3D example in python) with > It would help to have -snes_monitor -snes_converged_reason here: > -snes_fd 0 -snes_mf 0 and my RHS code gets called 7 times > This looks like 7 iterates with your assembled Jacobian. Judging from below, it appears there might be a different PC here. > -snes_fd 0 -snes_mf 1 and RHS called 44 times > Here each action of the Jacobian takes one RHS evaluation, and one at the beginning of each Newton iterate. > -snes_fd 1 -snes_mf 0 and RHS called 775 times > Here we have one RHS for each column of the Jacobian at each Newton iterate since we form it explicitly (actually for each color in the graph, but that is a detail). > -snes_fd 1 -snes_mf 1 and RHS called 44 times > The second option overrides the first. > without -snes_mf AT ALL but with -snes_fd 0 I get RHS called 3 times [1] > I need to see the output of -snes_view for these cases. Its possible that we have a bug related to the argument. > without -snes_mf AT ALL but with -snes_fd 1 I get RHS called 775 times > without -snes_mf OR -snes_fd, I get RHS called 3 times [1] > Yes. > without -snes_fd at all and -snes_mf, I get 44 calls > without -snes_fd at all and -snes_mf 1, I get 44 calls > These two are the same since we assume a 1 argument. Thanks, Matt > without -snes_fd at all and -snes_mf 0, I get 3 calls [1] > > I would have expected -snes_mf 0 and no snes_mf at all to behave in the > same > way. I am also puzzled by the fact that there seem to be four different > ways > to solve the problem: with Jacobian (3 calls), with fd (775 calls) and > with mf > (44 calls). What is the 7 calls case all about? > > I guess in all the years I've used PETSc I should have learnt better, but I > had never taught it to anyone before, so hand't noticed my lack of > knowledge. > > Cheers, > Juha > > [1] This causes my Jacobian routine to be called as intended. > > > > > -- What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. -- Norbert Wiener -------------- next part -------------- An HTML attachment was scrubbed... URL: From jed at jedbrown.org Thu May 5 23:03:51 2016 From: jed at jedbrown.org (Jed Brown) Date: Thu, 05 May 2016 22:03:51 -0600 Subject: [petsc-users] -snes_mf and -snes_fd In-Reply-To: References: <2781732.IjOLFbidAP@vega> Message-ID: <87bn4jde8o.fsf@jedbrown.org> Matthew Knepley writes: >> without -snes_mf AT ALL but with -snes_fd 0 I get RHS called 3 times [1] >> > > I need to see the output of -snes_view for these cases. Its possible that we > have a bug related to the argument. This case looks correct to me. It's the same as passing no options, so the analytic Jacobian gets used. That said, our options scheme is lame to use separate options when only one can possibly be used. We should have -snes_jacobian or similar. -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 818 bytes Desc: not available URL: From Federico.Miorelli at CGG.COM Fri May 6 04:16:32 2016 From: Federico.Miorelli at CGG.COM (Miorelli, Federico) Date: Fri, 6 May 2016 09:16:32 +0000 Subject: [petsc-users] Crash in MatDuplicate_MPIAIJ_MatPtAP Message-ID: <8D478341240222479E0DB361E050CB6A72042E@msy-emb04.int.cggveritas.com> Dear All, I think there may be a bug in PETSc 3.7.0, function MatDuplicate_MPIAIJ_MatPtAP. Whenever I call MatDuplicate on this type of matrix I'm getting SIGSEGV. 
The bug is easily reproduced with the attached file. This is just ex2f from KSP, where I added two additional lines to create an A^T A matrix which is then duplicated. call MatTransposeMatMult(A, A, MAT_INITIAL_MATRIX, 1.d0, AtA, ierr) call MatDuplicate(AtA,MAT_COPY_VALUES,AtA2,ierr) [1]PETSC ERROR: ------------------------------------------------------------------------ [1]PETSC ERROR: Caught signal number 11 SEGV: Segmentation Violation, probably memory access out of range [1]PETSC ERROR: Try option -start_in_debugger or -on_error_attach_debugger [1]PETSC ERROR: or see http://www.mcs.anl.gov/petsc/documentation/faq.html#valgrind [1]PETSC ERROR: or try http://valgrind.org on GNU/linux and Apple Mac OS X to find memory corruption errors [1]PETSC ERROR: likely location of problem given in stack below [1]PETSC ERROR: --------------------- Stack Frames ------------------------------------ [0]PETSC ERROR: Note: The EXACT line numbers in the stack are not available, [0]PETSC ERROR: INSTEAD the line number of the start of the function [0]PETSC ERROR: is given. [1]PETSC ERROR: [1] MatDuplicate_MPIAIJ_MatPtAP line 78 /state/std2/FEMI/PETSc/petsc-3.7.0/src/mat/impls/aij/mpi/mpiptap.c [1]PETSC ERROR: [1] MatDuplicate line 4324 /state/std2/FEMI/PETSc/petsc-3.7.0/src/mat/interface/matrix.c Thanks, Federico ______ ______ ______ Federico Miorelli Senior R&D Geophysicist Subsurface Imaging - General Geophysics Italy CGG Electromagnetics (Italy) Srl This email and any accompanying attachments are confidential. If you received this email by mistake, please delete it from your system. Any review, disclosure, copying, distribution, or use of the email by others is strictly prohibited. -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: ex2.F90 Type: application/octet-stream Size: 15278 bytes Desc: ex2.F90 URL: From praveenpetsc at gmail.com Fri May 6 05:08:53 2016 From: praveenpetsc at gmail.com (praveen kumar) Date: Fri, 6 May 2016 15:38:53 +0530 Subject: [petsc-users] virtual nodes at processes boundary Message-ID: Hi, I am trying to implement Petsc for DD in a serial fortran FVM code. I want to use solver from serial code itself. Solver consists of gauss seidel + TDMA. BCs are given along with the solver at boundary virtual nodes. For Ex: CALL TDMA(0,nx+1), where BCs are given at 0 and nx+1 which are virtual nodes (which don't take part in computation). I partitioned the domain using DMDACreate and got the ghost nodes information using DMDAGetcorners. But how to create the virtual nodes at the processes boundary where BCs are to be given. Please suggest all the possibilities to fix this other than using PETSc for solver parallelization. Thanks, Praveen -------------- next part -------------- An HTML attachment was scrubbed... URL: From juhaj at iki.fi Fri May 6 05:21:43 2016 From: juhaj at iki.fi (Juha Jaykka) Date: Fri, 06 May 2016 11:21:43 +0100 Subject: [petsc-users] -snes_mf and -snes_fd In-Reply-To: <87bn4jde8o.fsf@jedbrown.org> References: <2781732.IjOLFbidAP@vega> <87bn4jde8o.fsf@jedbrown.org> Message-ID: <3811297.lgVlP9qkvJ@vega> > the analytic Jacobian gets used. That said, our options scheme is lame You said that, I did refrain from calling it confusing! ;) That said, you're doing a terrific job, so just having a small weirdness in something as peripheral as command line options is just a testament to how good PETSc is! > to use separate options when only one can possibly be used. 
We should > have -snes_jacobian or similar. So there are five different cases? I think my misunderstanding was related to mf_operator and fd_color: I did not realise they were distinct from md and fd, hence my perception that there should only be 3 different possibilities. Yet I only found four! I would imagine the one I missed is fd_color, correct? It seems to me -snes_fd == -snes_fd 1 and this uses the fd approx Jacobian [1] -snes_mf == -snes_mf 1 uses mf -snes_mf_operator uses, well, mf_operator (and calls RHS 16 times, btw) -snes_fd_color uses FD with colouring (this is the case which calls RHS 3 times; the hand-written Jacobian needs 7) -NOTHING AT ALL is the same as -snes_fd_color (this is what confused me most) -there is no way to tell PETSc "yes, I want hand-written Jacobian"? Even setting them all to explicitly to zero seems to cause fd_color to be used. (I do not get any errors and I get the right result after 3 calls to the RHS even if I never call SetJacobian.) Cheers, Juha By the way Matthew, all the tests converged with CONVERGED_FNORM_RELATIVE. [1] If I so both SNESSetJacobian and SNESSetUseFD (or jt SetFromOptions and call but -snes_fd 1), which one takes precedence? The last one called? From knepley at gmail.com Fri May 6 07:49:54 2016 From: knepley at gmail.com (Matthew Knepley) Date: Fri, 6 May 2016 07:49:54 -0500 Subject: [petsc-users] [petsc-maint] virtual nodes at processes boundary In-Reply-To: References: Message-ID: I cannot understand your description. Maybe you can draw a picture in 1D. Matt On Fri, May 6, 2016 at 5:08 AM, praveen kumar wrote: > Hi, > > I am trying to implement Petsc for DD in a serial fortran FVM code. I > want to use solver from serial code itself. Solver consists of gauss seidel > + TDMA. BCs are given along with the solver at boundary virtual nodes. For > Ex: CALL TDMA(0,nx+1), where BCs are given at 0 and nx+1 which are virtual > nodes (which don't take part in computation). I partitioned the domain > using DMDACreate and got the ghost nodes information using DMDAGetcorners. > But how to create the virtual nodes at the processes boundary where BCs are > to be given. Please suggest all the possibilities to fix this other than > using PETSc for solver parallelization. > > Thanks, > Praveen > > -- What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. -- Norbert Wiener -------------- next part -------------- An HTML attachment was scrubbed... URL: From bsmith at mcs.anl.gov Fri May 6 08:58:03 2016 From: bsmith at mcs.anl.gov (Barry Smith) Date: Fri, 6 May 2016 08:58:03 -0500 Subject: [petsc-users] -snes_mf and -snes_fd In-Reply-To: <87bn4jde8o.fsf@jedbrown.org> References: <2781732.IjOLFbidAP@vega> <87bn4jde8o.fsf@jedbrown.org> Message-ID: <6DD7C526-7213-4935-9B45-984A161CDEEE@mcs.anl.gov> > On May 5, 2016, at 11:03 PM, Jed Brown wrote: > > Matthew Knepley writes: >>> without -snes_mf AT ALL but with -snes_fd 0 I get RHS called 3 times [1] >>> >> >> I need to see the output of -snes_view for these cases. Its possible that we >> have a bug related to the argument. > > This case looks correct to me. It's the same as passing no options, so > the analytic Jacobian gets used. That said, our options scheme is lame > to use separate options when only one can possibly be used. We should > have -snes_jacobian or similar. Perhaps it should have two parts; one for the mat and one for the pmat. 
So -snes_jacobian X[:Y] where X can be analytic, mf,fd,fd_color and Y can be analytic,fd,fd_color,none Make it an issue. Barry From bsmith at mcs.anl.gov Fri May 6 09:18:08 2016 From: bsmith at mcs.anl.gov (Barry Smith) Date: Fri, 6 May 2016 09:18:08 -0500 Subject: [petsc-users] virtual nodes at processes boundary In-Reply-To: References: Message-ID: <9F5D23D9-75CE-4715-AD08-A97625AB0FD4@mcs.anl.gov> > On May 6, 2016, at 5:08 AM, praveen kumar wrote: > > Hi, > > I am trying to implement Petsc for DD in a serial fortran FVM code. I want to use solver from serial code itself. Solver consists of gauss seidel + TDMA. BCs are given along with the solver at boundary virtual nodes. For Ex: CALL TDMA(0,nx+1), where BCs are given at 0 and nx+1 which are virtual nodes (which don't take part in computation). I partitioned the domain using DMDACreate and got the ghost nodes information using DMDAGetcorners. But how to create the virtual nodes at the processes boundary where BCs are to be given. Please suggest all the possibilities to fix this other than using PETSc for solver parallelization. DMCreateGlobalVector(dm,gvector,ierr); DMCreateLocalVector(dm,lvector,ierr); /* full up gvector with initial guess or whatever */ DMGlobalToLocalBegin(dm,gvector,INSERT_VALUES,lvector,ierr) DMGlobalToLocalEnd(dm,gvector,INSERT_VALUES,lvector,ierr) Now the vector lvector has the ghost values you can use DMDAVecGetArrayF90(dm,lvector,fortran_array_pointer_of_correct dimension for your problem (1,2,3d)) Note that the indexing into the fortran_array_pointer uses the global indexing, not the local indexing. You can use DMDAGetCorners() to get the start and end indices for each process. Barry > > Thanks, > Praveen > From praveenpetsc at gmail.com Fri May 6 09:30:16 2016 From: praveenpetsc at gmail.com (praveen kumar) Date: Fri, 6 May 2016 20:00:16 +0530 Subject: [petsc-users] virtual nodes at processes boundary In-Reply-To: <9F5D23D9-75CE-4715-AD08-A97625AB0FD4@mcs.anl.gov> References: <9F5D23D9-75CE-4715-AD08-A97625AB0FD4@mcs.anl.gov> Message-ID: Thanks Matt , Thanks Barry. I'll get back to you. Thanks, Praveen On Fri, May 6, 2016 at 7:48 PM, Barry Smith wrote: > > > On May 6, 2016, at 5:08 AM, praveen kumar > wrote: > > > > Hi, > > > > I am trying to implement Petsc for DD in a serial fortran FVM code. I > want to use solver from serial code itself. Solver consists of gauss seidel > + TDMA. BCs are given along with the solver at boundary virtual nodes. For > Ex: CALL TDMA(0,nx+1), where BCs are given at 0 and nx+1 which are virtual > nodes (which don't take part in computation). I partitioned the domain > using DMDACreate and got the ghost nodes information using DMDAGetcorners. > But how to create the virtual nodes at the processes boundary where BCs are > to be given. Please suggest all the possibilities to fix this other than > using PETSc for solver parallelization. > > DMCreateGlobalVector(dm,gvector,ierr); > DMCreateLocalVector(dm,lvector,ierr); > > /* full up gvector with initial guess or whatever */ > > DMGlobalToLocalBegin(dm,gvector,INSERT_VALUES,lvector,ierr) > DMGlobalToLocalEnd(dm,gvector,INSERT_VALUES,lvector,ierr) > > Now the vector lvector has the ghost values you can use > > DMDAVecGetArrayF90(dm,lvector,fortran_array_pointer_of_correct > dimension for your problem (1,2,3d)) > > Note that the indexing into the fortran_array_pointer uses the global > indexing, not the local indexing. You can use DMDAGetCorners() to get the > start and end indices for each process. 
> > Barry > > > > > > > Thanks, > > Praveen > > > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From hzhang at mcs.anl.gov Fri May 6 10:09:13 2016 From: hzhang at mcs.anl.gov (Hong) Date: Fri, 6 May 2016 10:09:13 -0500 Subject: [petsc-users] Crash in MatDuplicate_MPIAIJ_MatPtAP In-Reply-To: <8D478341240222479E0DB361E050CB6A72042E@msy-emb04.int.cggveritas.com> References: <8D478341240222479E0DB361E050CB6A72042E@msy-emb04.int.cggveritas.com> Message-ID: Miorelli: I can reproduce it. I'll fix it and get back to you. Hong Dear All, > > > > I think there may be a bug in PETSc 3.7.0, function > MatDuplicate_MPIAIJ_MatPtAP. > > Whenever I call MatDuplicate on this type of matrix I'm getting SIGSEGV. > > > > The bug is easily reproduced with the attached file. This is just ex2f > from KSP, where I added two additional lines to create an A^T A matrix > which is then duplicated. > > > > call MatTransposeMatMult(A, A, MAT_INITIAL_MATRIX, 1.d0, AtA, ierr) > > call MatDuplicate(AtA,MAT_COPY_VALUES,AtA2,ierr) > > > > > > [1]PETSC ERROR: > ------------------------------------------------------------------------ > > [1]PETSC ERROR: Caught signal number 11 SEGV: Segmentation Violation, > probably memory access out of range > > [1]PETSC ERROR: Try option -start_in_debugger or -on_error_attach_debugger > > [1]PETSC ERROR: or see > http://www.mcs.anl.gov/petsc/documentation/faq.html#valgrind > > [1]PETSC ERROR: or try http://valgrind.org on GNU/linux and Apple Mac OS > X to find memory corruption errors > > [1]PETSC ERROR: likely location of problem given in stack below > > [1]PETSC ERROR: --------------------- Stack Frames > ------------------------------------ > > [0]PETSC ERROR: Note: The EXACT line numbers in the stack are not > available, > > [0]PETSC ERROR: INSTEAD the line number of the start of the function > > [0]PETSC ERROR: is given. > > [1]PETSC ERROR: [1] MatDuplicate_MPIAIJ_MatPtAP line 78 > /state/std2/FEMI/PETSc/petsc-3.7.0/src/mat/impls/aij/mpi/mpiptap.c > > [1]PETSC ERROR: [1] MatDuplicate line 4324 > /state/std2/FEMI/PETSc/petsc-3.7.0/src/mat/interface/matrix.c > > > > > > Thanks, > > > > Federico > > > > > > *______* *______* *______* > > Federico Miorelli > > > > Senior R&D Geophysicist > > *Subsurface Imaging - General Geophysics **Italy* > > > > CGG Electromagnetics (Italy) Srl > > > *This email and any accompanying attachments are confidential. If you > received this email by mistake, please delete it from your system. Any > review, disclosure, copying, distribution, or use of the email by others is > strictly prohibited.* > -------------- next part -------------- An HTML attachment was scrubbed... URL: From elbueler at alaska.edu Fri May 6 11:20:52 2016 From: elbueler at alaska.edu (Ed Bueler) Date: Fri, 6 May 2016 08:20:52 -0800 Subject: [petsc-users] -snes_mf and -snes_fd Message-ID: > We should have -snes_jacobian > or similar. That would be nice! Ed -- Ed Bueler Dept of Math and Stat and Geophysical Institute University of Alaska Fairbanks Fairbanks, AK 99775-6660 301C Chapman and 410D Elvey 907 474-7693 and 907 474-7199 (fax 907 474-5394) -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From gideon.simpson at gmail.com Fri May 6 11:57:28 2016 From: gideon.simpson at gmail.com (Gideon Simpson) Date: Fri, 6 May 2016 12:57:28 -0400 Subject: [petsc-users] fixed RK time stepping Message-ID: <259EF308-098E-49F7-AB71-724066223CCA@gmail.com> I?m trying to do RK4 time stepping for comparison in another problem, and I?m having trouble getting the TS to save at the same values of dt. There seems to be some automatic adjustment. Ordinarily this would be fine, but I?d really like to use a fixed value. Is there a way to force it to use a given dt at all time steps? -gideon -------------- next part -------------- An HTML attachment was scrubbed... URL: From hzhang at mcs.anl.gov Fri May 6 12:07:40 2016 From: hzhang at mcs.anl.gov (Hong) Date: Fri, 6 May 2016 12:07:40 -0500 Subject: [petsc-users] Crash in MatDuplicate_MPIAIJ_MatPtAP In-Reply-To: <8D478341240222479E0DB361E050CB6A72042E@msy-emb04.int.cggveritas.com> References: <8D478341240222479E0DB361E050CB6A72042E@msy-emb04.int.cggveritas.com> Message-ID: Miorelli: Fixed https://bitbucket.org/petsc/petsc/commits/a560ef987c2a1e86047c71b5951614168aab22f9 After our regression tests, I'll merge it to petsc-release. Thanks for report it! Hong Dear All, > > > > I think there may be a bug in PETSc 3.7.0, function > MatDuplicate_MPIAIJ_MatPtAP. > > Whenever I call MatDuplicate on this type of matrix I'm getting SIGSEGV. > > > > The bug is easily reproduced with the attached file. This is just ex2f > from KSP, where I added two additional lines to create an A^T A matrix > which is then duplicated. > > > > call MatTransposeMatMult(A, A, MAT_INITIAL_MATRIX, 1.d0, AtA, ierr) > > call MatDuplicate(AtA,MAT_COPY_VALUES,AtA2,ierr) > > > > > > [1]PETSC ERROR: > ------------------------------------------------------------------------ > > [1]PETSC ERROR: Caught signal number 11 SEGV: Segmentation Violation, > probably memory access out of range > > [1]PETSC ERROR: Try option -start_in_debugger or -on_error_attach_debugger > > [1]PETSC ERROR: or see > http://www.mcs.anl.gov/petsc/documentation/faq.html#valgrind > > [1]PETSC ERROR: or try http://valgrind.org on GNU/linux and Apple Mac OS > X to find memory corruption errors > > [1]PETSC ERROR: likely location of problem given in stack below > > [1]PETSC ERROR: --------------------- Stack Frames > ------------------------------------ > > [0]PETSC ERROR: Note: The EXACT line numbers in the stack are not > available, > > [0]PETSC ERROR: INSTEAD the line number of the start of the function > > [0]PETSC ERROR: is given. > > [1]PETSC ERROR: [1] MatDuplicate_MPIAIJ_MatPtAP line 78 > /state/std2/FEMI/PETSc/petsc-3.7.0/src/mat/impls/aij/mpi/mpiptap.c > > [1]PETSC ERROR: [1] MatDuplicate line 4324 > /state/std2/FEMI/PETSc/petsc-3.7.0/src/mat/interface/matrix.c > > > > > > Thanks, > > > > Federico > > > > > > *______* *______* *______* > > Federico Miorelli > > > > Senior R&D Geophysicist > > *Subsurface Imaging - General Geophysics **Italy* > > > > CGG Electromagnetics (Italy) Srl > > > *This email and any accompanying attachments are confidential. If you > received this email by mistake, please delete it from your system. Any > review, disclosure, copying, distribution, or use of the email by others is > strictly prohibited.* > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From hongzhang at anl.gov Fri May 6 12:12:09 2016 From: hongzhang at anl.gov (Hong Zhang) Date: Fri, 6 May 2016 12:12:09 -0500 Subject: [petsc-users] fixed RK time stepping In-Reply-To: <259EF308-098E-49F7-AB71-724066223CCA@gmail.com> References: <259EF308-098E-49F7-AB71-724066223CCA@gmail.com> Message-ID: Use the command line option -ts_adapt_type none or call TSAdaptSetType(adapt,"none'') in the code. Hong > On May 6, 2016, at 11:57 AM, Gideon Simpson wrote: > > I?m trying to do RK4 time stepping for comparison in another problem, and I?m having trouble getting the TS to save at the same values of dt. There seems to be some automatic adjustment. Ordinarily this would be fine, but I?d really like to use a fixed value. Is there a way to force it to use a given dt at all time steps? > > -gideon > From praveenpetsc at gmail.com Fri May 6 14:15:07 2016 From: praveenpetsc at gmail.com (praveen kumar) Date: Sat, 7 May 2016 00:45:07 +0530 Subject: [petsc-users] virtual nodes at processes boundary In-Reply-To: References: <9F5D23D9-75CE-4715-AD08-A97625AB0FD4@mcs.anl.gov> Message-ID: I didn't frame the question properly. Suppose we have grid on vertices numbered 1 to N and we break it into two pieces (1,N/2) and (N/2+1,N). As it is FVM code, DMboundary type is DM_BOUNDARY_GHOSTED. nodes 0 and N+1 are to the left and right of nodes 1 and N at a distance of dx/2 respectively. Let me call 0 and N+1 as virtual nodes where boundary conditions are applied. As you know virtual nodes don't take part in computation and are different from what we call ghost nodes in parallel computing terminology. In serial code problem is solved by CALL TDMA(0,N+1). I've decomposed the domain using PETSc, and I've replaced indices in serial code DO Loops with the information from DMDAGetCorners . if I want to solve the problem using TDMA on process0, it is not possible as process0 doesn't contain virtual node at it's right boundary i.e CALL TDMA(0,X) where X should be at a distance of dx/2 from N/2 but it is not there. Similarly for process1 there is no virtual node at it's left boundary. So, how can I create these virtual nodes at the processes boundary. I want to set the variable value at X as previous time-step/iteration value. I'm not sure whether my methodology is correct or not. If you think it is very cumbersome, please suggest something else. Thanks, Praveen On Fri, May 6, 2016 at 8:00 PM, praveen kumar wrote: > Thanks Matt , Thanks Barry. I'll get back to you. > > Thanks, > Praveen > > On Fri, May 6, 2016 at 7:48 PM, Barry Smith wrote: > >> >> > On May 6, 2016, at 5:08 AM, praveen kumar >> wrote: >> > >> > Hi, >> > >> > I am trying to implement Petsc for DD in a serial fortran FVM code. I >> want to use solver from serial code itself. Solver consists of gauss seidel >> + TDMA. BCs are given along with the solver at boundary virtual nodes. For >> Ex: CALL TDMA(0,nx+1), where BCs are given at 0 and nx+1 which are virtual >> nodes (which don't take part in computation). I partitioned the domain >> using DMDACreate and got the ghost nodes information using DMDAGetcorners. >> But how to create the virtual nodes at the processes boundary where BCs are >> to be given. Please suggest all the possibilities to fix this other than >> using PETSc for solver parallelization. 
>> >> DMCreateGlobalVector(dm,gvector,ierr); >> DMCreateLocalVector(dm,lvector,ierr); >> >> /* full up gvector with initial guess or whatever */ >> >> DMGlobalToLocalBegin(dm,gvector,INSERT_VALUES,lvector,ierr) >> DMGlobalToLocalEnd(dm,gvector,INSERT_VALUES,lvector,ierr) >> >> Now the vector lvector has the ghost values you can use >> >> DMDAVecGetArrayF90(dm,lvector,fortran_array_pointer_of_correct >> dimension for your problem (1,2,3d)) >> >> Note that the indexing into the fortran_array_pointer uses the global >> indexing, not the local indexing. You can use DMDAGetCorners() to get the >> start and end indices for each process. >> >> Barry >> >> >> >> > >> > Thanks, >> > Praveen >> > >> >> > -------------- next part -------------- An HTML attachment was scrubbed... URL: From bsmith at mcs.anl.gov Fri May 6 14:28:07 2016 From: bsmith at mcs.anl.gov (Barry Smith) Date: Fri, 6 May 2016 14:28:07 -0500 Subject: [petsc-users] virtual nodes at processes boundary In-Reply-To: References: <9F5D23D9-75CE-4715-AD08-A97625AB0FD4@mcs.anl.gov> Message-ID: > On May 6, 2016, at 2:15 PM, praveen kumar wrote: > > I didn't frame the question properly. Suppose we have grid on vertices numbered 1 to N and we break it into two pieces (1,N/2) and (N/2+1,N). As it is FVM code, DMboundary type is DM_BOUNDARY_GHOSTED. nodes 0 and N+1 are to the left and right of nodes 1 and N at a distance of dx/2 respectively. Let me call 0 and N+1 as virtual nodes where boundary conditions are applied. As you know virtual nodes don't take part in computation and are different from what we call ghost nodes in parallel computing terminology. In serial code problem is solved by CALL TDMA(0,N+1). I don't know why you want to have a concept of "virtual nodes" being different than "ghost nodes"? > I've decomposed the domain using PETSc, and I've replaced indices in serial code DO Loops with the information from DMDAGetCorners . > if I want to solve the problem using TDMA on process0, it is not possible as process0 doesn't contain virtual node at it's right boundary i.e CALL TDMA(0,X) where X should be at a distance of dx/2 from N/2 but it is not there. Similarly for process1 there is no virtual node at it's left boundary. So, how can I create these virtual nodes at the processes boundary. I want to set the variable value at X as previous time-step/iteration value. Why? Don't you have to do some kind of iteration where you update the boundary conditions from the other processes, solve the local problem and then repeat until the solution is converged? This is the normal thing people do with domain decomposition type solvers and is very easy with DMDA. How can you hope to get the solution correct without an iteration passing ghost values? If you just put values from some previous time-step in in the ghost locations then each process will solver a local problem but the result won't match between processes so will be garbage, won't it? Barry > I'm not sure whether my methodology is correct or not. If you think it is very cumbersome, please suggest something else. > > Thanks, > Praveen > > On Fri, May 6, 2016 at 8:00 PM, praveen kumar wrote: > Thanks Matt , Thanks Barry. I'll get back to you. > > Thanks, > Praveen > > On Fri, May 6, 2016 at 7:48 PM, Barry Smith wrote: > > > On May 6, 2016, at 5:08 AM, praveen kumar wrote: > > > > Hi, > > > > I am trying to implement Petsc for DD in a serial fortran FVM code. I want to use solver from serial code itself. Solver consists of gauss seidel + TDMA. 
BCs are given along with the solver at boundary virtual nodes. For Ex: CALL TDMA(0,nx+1), where BCs are given at 0 and nx+1 which are virtual nodes (which don't take part in computation). I partitioned the domain using DMDACreate and got the ghost nodes information using DMDAGetcorners. But how to create the virtual nodes at the processes boundary where BCs are to be given. Please suggest all the possibilities to fix this other than using PETSc for solver parallelization. > > DMCreateGlobalVector(dm,gvector,ierr); > DMCreateLocalVector(dm,lvector,ierr); > > /* full up gvector with initial guess or whatever */ > > DMGlobalToLocalBegin(dm,gvector,INSERT_VALUES,lvector,ierr) > DMGlobalToLocalEnd(dm,gvector,INSERT_VALUES,lvector,ierr) > > Now the vector lvector has the ghost values you can use > > DMDAVecGetArrayF90(dm,lvector,fortran_array_pointer_of_correct dimension for your problem (1,2,3d)) > > Note that the indexing into the fortran_array_pointer uses the global indexing, not the local indexing. You can use DMDAGetCorners() to get the start and end indices for each process. > > Barry > > > > > > > Thanks, > > Praveen > > > > > From gomer at stanford.edu Fri May 6 17:09:48 2016 From: gomer at stanford.edu (Paul Stephen Urbanczyk) Date: Fri, 6 May 2016 22:09:48 +0000 Subject: [petsc-users] Structured Finite Difference Method With Periodic BC Help In-Reply-To: <0B93C7D3-5818-4431-B9BE-A1BBE944EC1A@mcs.anl.gov> References: <3e36134d-156f-2f6a-26df-4b7f46cd70d2@stanford.edu>, <0B93C7D3-5818-4431-B9BE-A1BBE944EC1A@mcs.anl.gov> Message-ID: Hello Barry, Thank you for responding. I think I'm following you. I set up a little 1-D test problem, and used the built-in DMDASetUniformCoordinates function. Then, by turning on and off the periodic BCs, I see the change in behavior. I think I was making a mistake with the right boundary indexing. However, this raises another question: shouldn't the coordinates behave differently than other field variables? In other words, shouldn't the periodic ghost points at the left end have coordinates (0-h) and (0-2h), rather than (1-h) and (1-2h)? And, at the right side, shouldn't the x-coordinates be 1 and 1+h? Apologies for what may be somewhat remedial questions, but I'm somewhat new to PETSc and have not actually programmed a periodic case before. Thanks for all of your help! -Paul ________________________________________ From: Barry Smith Sent: Tuesday, May 3, 2016 7:26:38 PM To: Paul Stephen Urbanczyk Cc: petsc-users at mcs.anl.gov Subject: Re: [petsc-users] Structured Finite Difference Method With Periodic BC Help > On May 3, 2016, at 7:38 PM, Paul Urbanczyk wrote: > > Hello, > > I'm trying to implement a CFD code using finite differences on a structured mesh with PETSc. > > I'm having a bit of trouble understanding how to properly use the periodic boundary condition with distributed arrays, and need some clarification. > > If I set the boundaries in the x-direction to be DM_BOUNDARY_PERIODIC, and set a stencil width of two, there should be two ghost cells in the x-direction at each end of the mesh, which looks like it's happening just fine. > > However, it seems that the assumption being made by PETSc when filling in these values is that the mesh is a cell-centered finite difference, rather than a vertex-centered finite difference. Is there a way to change this behavior? In other words, I need the first ghost cell on each side to correspond to the opposite side's first interior point, rather than the opposite boundary point. 
DMDA doesn't really have a concept of if the values are cell-centered or vertex centered. I think this issue you are facing is only an issue of labeling, not of real substance; in some sense it is up to you to interpret the meaning of the values. In your case with vertex centered values here is how I see it in a picture. Consider a domain from [0,1) periodic with n grid points in the global vector x = 0 1-2h 1-h 1 i = 0 1 2 n-1 So you divide the domain into n sections and label the vertices (left end of the sections) starting with zero, note that the last section has no right hand side index because the value at x=1 is the same as the value at x=0 now if you have two processors and a stencil width of two the local domains look like x = 1-2h 1-h 0 i = n-2 n-1 0 1 2 .... x = 1-2h 1-h 1=0* 1+h i = n-2 n-1 0 1 This is what the DMDA will deliver in the local vector after a DMGlobalToLocalBegin/End In periodic coordinates the location x=1 is the same as the location x=0 > > If there is not a way to change this behavior, then I need to set my own ghost cells; however, I'm having trouble implementing this... > > If I change the boundaries to DM_BOUNDARY_GHOSTED, with a stencil width of two, I have the necessary ghost cells at either end of the mesh. I then try to do the following: > > 1) Scatter the global vector to local vectors on each rank > > 2) Get a local array referencing the local vectors > > 3) Calculate ghost values and fill them in the appropriate local arrays > > 4) Restore the local vectors from the arrays > > 5) Scatter the local vector info back to the global vector > > However, if I then re-scatter and look at the local vectors, the ghost cell values are zero. It seems as though the ghost values are lost when scattered back to the global vector. This is nuts; in theory the basic DMDA does what one needs to handle periodic boundary conditions. No need to try to implement it yourself. Of course we could always have bugs so if you have a problem with my reasoning send a simple 1d code that demonstrates the issue. Barry > > What am I doing wrong here? > > Thanks for your help! > > -Paul > > From bsmith at mcs.anl.gov Fri May 6 17:53:59 2016 From: bsmith at mcs.anl.gov (Barry Smith) Date: Fri, 6 May 2016 17:53:59 -0500 Subject: [petsc-users] Structured Finite Difference Method With Periodic BC Help In-Reply-To: References: <3e36134d-156f-2f6a-26df-4b7f46cd70d2@stanford.edu> <0B93C7D3-5818-4431-B9BE-A1BBE944EC1A@mcs.anl.gov> Message-ID: <98E0B523-76B2-4578-831D-308E69FA14E3@mcs.anl.gov> > On May 6, 2016, at 5:09 PM, Paul Stephen Urbanczyk wrote: > > Hello Barry, > > Thank you for responding. I think I'm following you. I set up a little 1-D test problem, and used the built-in DMDASetUniformCoordinates function. Then, by turning on and off the periodic BCs, I see the change in behavior. I think I was making a mistake with the right boundary indexing. > > However, this raises another question: shouldn't the coordinates behave differently than other field variables? In other words, shouldn't the periodic ghost points at the left end have coordinates (0-h) and (0-2h), rather than (1-h) and (1-2h)? And, at the right side, shouldn't the x-coordinates be 1 and 1+h? I don't know. But I believe they should not; periodic boundary conditions can be interpreted as having the domain [0,1) connected at the two ends so there would never be coordinates less than 0 or 1 or larger. 
Another way, I think, of saying the same thing is that if you take a point x in domain and add some positive distance d then the coordinate of the result is give by fractionalpart(x + d) something similar can be done for subtracting some distance. I remember vaguely there are precise mathematical ways to define this kind of thing but I don't think it matters. Barry > > Apologies for what may be somewhat remedial questions, but I'm somewhat new to PETSc and have not actually programmed a periodic case before. > > Thanks for all of your help! > > -Paul > > ________________________________________ > From: Barry Smith > Sent: Tuesday, May 3, 2016 7:26:38 PM > To: Paul Stephen Urbanczyk > Cc: petsc-users at mcs.anl.gov > Subject: Re: [petsc-users] Structured Finite Difference Method With Periodic BC Help > >> On May 3, 2016, at 7:38 PM, Paul Urbanczyk wrote: >> >> Hello, >> >> I'm trying to implement a CFD code using finite differences on a structured mesh with PETSc. >> >> I'm having a bit of trouble understanding how to properly use the periodic boundary condition with distributed arrays, and need some clarification. >> >> If I set the boundaries in the x-direction to be DM_BOUNDARY_PERIODIC, and set a stencil width of two, there should be two ghost cells in the x-direction at each end of the mesh, which looks like it's happening just fine. >> >> However, it seems that the assumption being made by PETSc when filling in these values is that the mesh is a cell-centered finite difference, rather than a vertex-centered finite difference. Is there a way to change this behavior? In other words, I need the first ghost cell on each side to correspond to the opposite side's first interior point, rather than the opposite boundary point. > > DMDA doesn't really have a concept of if the values are cell-centered or vertex centered. I think this issue you are facing is only an issue of labeling, not of real substance; in some sense it is up to you to interpret the meaning of the values. In your case with vertex centered values here is how I see it in a picture. Consider a domain from [0,1) periodic with n grid points in the global vector > > x = 0 1-2h 1-h 1 > i = 0 1 2 n-1 > > So you divide the domain into n sections and label the vertices (left end of the sections) starting with zero, note that the last section has no right hand side index because the value at x=1 is the same as the value at x=0 now if you have two processors and a stencil width of two the local domains look like > > x = 1-2h 1-h 0 > i = n-2 n-1 0 1 2 .... > > > x = 1-2h 1-h 1=0* 1+h > i = n-2 n-1 0 1 > > This is what the DMDA will deliver in the local vector after a DMGlobalToLocalBegin/End > > In periodic coordinates the location x=1 is the same as the location x=0 >> >> If there is not a way to change this behavior, then I need to set my own ghost cells; however, I'm having trouble implementing this... >> >> If I change the boundaries to DM_BOUNDARY_GHOSTED, with a stencil width of two, I have the necessary ghost cells at either end of the mesh. I then try to do the following: >> >> 1) Scatter the global vector to local vectors on each rank >> >> 2) Get a local array referencing the local vectors >> >> 3) Calculate ghost values and fill them in the appropriate local arrays >> >> 4) Restore the local vectors from the arrays >> >> 5) Scatter the local vector info back to the global vector >> >> However, if I then re-scatter and look at the local vectors, the ghost cell values are zero. 
It seems as though the ghost values are lost when scattered back to the global vector. > > This is nuts; in theory the basic DMDA does what one needs to handle periodic boundary conditions. No need to try to implement it yourself. > > Of course we could always have bugs so if you have a problem with my reasoning send a simple 1d code that demonstrates the issue. > > Barry > >> >> What am I doing wrong here? >> >> Thanks for your help! >> >> -Paul >> >> > > From jed at jedbrown.org Fri May 6 18:13:20 2016 From: jed at jedbrown.org (Jed Brown) Date: Fri, 06 May 2016 17:13:20 -0600 Subject: [petsc-users] Structured Finite Difference Method With Periodic BC Help In-Reply-To: <98E0B523-76B2-4578-831D-308E69FA14E3@mcs.anl.gov> References: <3e36134d-156f-2f6a-26df-4b7f46cd70d2@stanford.edu> <0B93C7D3-5818-4431-B9BE-A1BBE944EC1A@mcs.anl.gov> <98E0B523-76B2-4578-831D-308E69FA14E3@mcs.anl.gov> Message-ID: <8760uqdblb.fsf@jedbrown.org> Barry Smith writes: >> On May 6, 2016, at 5:09 PM, Paul Stephen Urbanczyk wrote: >> >> However, this raises another question: shouldn't the coordinates behave differently than other field variables? In other words, shouldn't the periodic ghost points at the left end have coordinates (0-h) and (0-2h), rather than (1-h) and (1-2h)? And, at the right side, shouldn't the x-coordinates be 1 and 1+h? > > I don't know. But I believe they should not; periodic boundary > conditions can be interpreted as having the domain [0,1) connected > at the two ends so there would never be coordinates less than 0 or > 1 or larger. Another way, I think, of saying the same thing is that > if you take a point x in domain and add some positive distance d > then the coordinate of the result is give by fractionalpart(x + d) > something similar can be done for subtracting some distance. Additionally, making up coordinates for ghost points is not well-defined for non-uniform or distorted grids. -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 818 bytes Desc: not available URL: From praveenpetsc at gmail.com Sat May 7 09:09:23 2016 From: praveenpetsc at gmail.com (praveen kumar) Date: Sat, 7 May 2016 19:39:23 +0530 Subject: [petsc-users] virtual nodes at processes boundary In-Reply-To: References: <9F5D23D9-75CE-4715-AD08-A97625AB0FD4@mcs.anl.gov> Message-ID: Thanks a lot Barry. Thanks, Praveen On Sat, May 7, 2016 at 12:58 AM, Barry Smith wrote: > > > On May 6, 2016, at 2:15 PM, praveen kumar > wrote: > > > > I didn't frame the question properly. Suppose we have grid on vertices > numbered 1 to N and we break it into two pieces (1,N/2) and (N/2+1,N). As > it is FVM code, DMboundary type is DM_BOUNDARY_GHOSTED. nodes 0 and N+1 > are to the left and right of nodes 1 and N at a distance of dx/2 > respectively. Let me call 0 and N+1 as virtual nodes where boundary > conditions are applied. As you know virtual nodes don't take part in > computation and are different from what we call ghost nodes in parallel > computing terminology. In serial code problem is solved by CALL TDMA(0,N+1). > > I don't know why you want to have a concept of "virtual nodes" being > different than "ghost nodes"? > > > > I've decomposed the domain using PETSc, and I've replaced indices in > serial code DO Loops with the information from DMDAGetCorners . 
> > if I want to solve the problem using TDMA on process0, it is not > possible as process0 doesn't contain virtual node at it's right boundary > i.e CALL TDMA(0,X) where X should be at a distance of dx/2 from N/2 but it > is not there. Similarly for process1 there is no virtual node at it's left > boundary. So, how can I create these virtual nodes at the processes > boundary. I want to set the variable value at X as previous > time-step/iteration value. > > Why? Don't you have to do some kind of iteration where you update the > boundary conditions from the other processes, solve the local problem and > then repeat until the solution is converged? This is the normal thing > people do with domain decomposition type solvers and is very easy with > DMDA. How can you hope to get the solution correct without an iteration > passing ghost values? If you just put values from some previous time-step > in in the ghost locations then each process will solver a local problem but > the result won't match between processes so will be garbage, won't it? > > Barry > > > > I'm not sure whether my methodology is correct or not. If you think it > is very cumbersome, please suggest something else. > > > > > > Thanks, > > Praveen > > > > On Fri, May 6, 2016 at 8:00 PM, praveen kumar > wrote: > > Thanks Matt , Thanks Barry. I'll get back to you. > > > > Thanks, > > Praveen > > > > On Fri, May 6, 2016 at 7:48 PM, Barry Smith wrote: > > > > > On May 6, 2016, at 5:08 AM, praveen kumar > wrote: > > > > > > Hi, > > > > > > I am trying to implement Petsc for DD in a serial fortran FVM code. I > want to use solver from serial code itself. Solver consists of gauss seidel > + TDMA. BCs are given along with the solver at boundary virtual nodes. For > Ex: CALL TDMA(0,nx+1), where BCs are given at 0 and nx+1 which are virtual > nodes (which don't take part in computation). I partitioned the domain > using DMDACreate and got the ghost nodes information using DMDAGetcorners. > But how to create the virtual nodes at the processes boundary where BCs are > to be given. Please suggest all the possibilities to fix this other than > using PETSc for solver parallelization. > > > > DMCreateGlobalVector(dm,gvector,ierr); > > DMCreateLocalVector(dm,lvector,ierr); > > > > /* full up gvector with initial guess or whatever */ > > > > DMGlobalToLocalBegin(dm,gvector,INSERT_VALUES,lvector,ierr) > > DMGlobalToLocalEnd(dm,gvector,INSERT_VALUES,lvector,ierr) > > > > Now the vector lvector has the ghost values you can use > > > > DMDAVecGetArrayF90(dm,lvector,fortran_array_pointer_of_correct > dimension for your problem (1,2,3d)) > > > > Note that the indexing into the fortran_array_pointer uses the > global indexing, not the local indexing. You can use DMDAGetCorners() to > get the start and end indices for each process. > > > > Barry > > > > > > > > > > > > Thanks, > > > Praveen > > > > > > > > > > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From gomer at stanford.edu Sat May 7 19:58:10 2016 From: gomer at stanford.edu (Paul Stephen Urbanczyk) Date: Sun, 8 May 2016 00:58:10 +0000 Subject: [petsc-users] Structured Finite Difference Method With Periodic BC Help In-Reply-To: <8760uqdblb.fsf@jedbrown.org> References: <3e36134d-156f-2f6a-26df-4b7f46cd70d2@stanford.edu> <0B93C7D3-5818-4431-B9BE-A1BBE944EC1A@mcs.anl.gov> <98E0B523-76B2-4578-831D-308E69FA14E3@mcs.anl.gov>, <8760uqdblb.fsf@jedbrown.org> Message-ID: Hi Barry and Jed, Thank you for responding. I greatly appreciate it. 
I'm still wrapping my head around the periodic BC case. In the meantime, putting the periodic BC case aside, I'm wondering how best to set ghost points outside the computational domain, generally (for example if i need to set values for other types of BCs like inflow/outflow/wall/sponge/etc)? In my earlier experiments, scattering the global vector to local vectors, setting the ghost point values using a local array reference to the local vectors, and then scattering the local vectors back to the global vectors didn't seem to work. Any help is appreciated. Thanks, Paul ________________________________________ From: Jed Brown Sent: Friday, May 6, 2016 4:13:20 PM To: Barry Smith; Paul Stephen Urbanczyk Cc: petsc-users at mcs.anl.gov Subject: Re: [petsc-users] Structured Finite Difference Method With Periodic BC Help Barry Smith writes: >> On May 6, 2016, at 5:09 PM, Paul Stephen Urbanczyk wrote: >> >> However, this raises another question: shouldn't the coordinates behave differently than other field variables? In other words, shouldn't the periodic ghost points at the left end have coordinates (0-h) and (0-2h), rather than (1-h) and (1-2h)? And, at the right side, shouldn't the x-coordinates be 1 and 1+h? > > I don't know. But I believe they should not; periodic boundary > conditions can be interpreted as having the domain [0,1) connected > at the two ends so there would never be coordinates less than 0 or > 1 or larger. Another way, I think, of saying the same thing is that > if you take a point x in domain and add some positive distance d > then the coordinate of the result is give by fractionalpart(x + d) > something similar can be done for subtracting some distance. Additionally, making up coordinates for ghost points is not well-defined for non-uniform or distorted grids. From bsmith at mcs.anl.gov Sat May 7 21:05:45 2016 From: bsmith at mcs.anl.gov (Barry Smith) Date: Sat, 7 May 2016 21:05:45 -0500 Subject: [petsc-users] Structured Finite Difference Method With Periodic BC Help In-Reply-To: References: <3e36134d-156f-2f6a-26df-4b7f46cd70d2@stanford.edu> <0B93C7D3-5818-4431-B9BE-A1BBE944EC1A@mcs.anl.gov> <98E0B523-76B2-4578-831D-308E69FA14E3@mcs.anl.gov> <8760uqdblb.fsf@jedbrown.org> Message-ID: > On May 7, 2016, at 7:58 PM, Paul Stephen Urbanczyk wrote: > > Hi Barry and Jed, > > Thank you for responding. I greatly appreciate it. I'm still wrapping my head around the periodic BC case. > > In the meantime, putting the periodic BC case aside, I'm wondering how best to set ghost points outside the computational domain, generally (for example if i need to set values for other types of BCs like inflow/outflow/wall/sponge/etc)? > > In my earlier experiments, scattering the global vector to local vectors, setting the ghost point values using a local array reference to the local vectors, and then scattering the local vectors back to the global vectors didn't seem to work. Hmm, generally these ghost values are only put into the "local" representation, they are then used in computing the function evaluation but are not included in anyway in the global solution (global vector). So what one does is scatter the current solution to the local vectors, put whatever is needed in the ghost locations and then the function is evaluated putting the results in a global vector but using the local vector as the input function. The global vector does not have any concept of ghost values nor does it need it; only the local vectors sometimes need it. It is easier than you think. 
If you are still having trouble perhaps you could explain a particular type of boundary condition you want to do and why what you do fails. Barry > > Any help is appreciated. > > Thanks, > > Paul > ________________________________________ > From: Jed Brown > Sent: Friday, May 6, 2016 4:13:20 PM > To: Barry Smith; Paul Stephen Urbanczyk > Cc: petsc-users at mcs.anl.gov > Subject: Re: [petsc-users] Structured Finite Difference Method With Periodic BC Help > > Barry Smith writes: > >>> On May 6, 2016, at 5:09 PM, Paul Stephen Urbanczyk wrote: >>> >>> However, this raises another question: shouldn't the coordinates behave differently than other field variables? In other words, shouldn't the periodic ghost points at the left end have coordinates (0-h) and (0-2h), rather than (1-h) and (1-2h)? And, at the right side, shouldn't the x-coordinates be 1 and 1+h? >> >> I don't know. But I believe they should not; periodic boundary >> conditions can be interpreted as having the domain [0,1) connected >> at the two ends so there would never be coordinates less than 0 or >> 1 or larger. Another way, I think, of saying the same thing is that >> if you take a point x in domain and add some positive distance d >> then the coordinate of the result is give by fractionalpart(x + d) >> something similar can be done for subtracting some distance. > > Additionally, making up coordinates for ghost points is not well-defined > for non-uniform or distorted grids. > From dominic at steinitz.org Sun May 8 03:15:31 2016 From: dominic at steinitz.org (Dominic Steinitz) Date: Sun, 8 May 2016 09:15:31 +0100 Subject: [petsc-users] petsc and slepc incompatibility Message-ID: <572EF5A3.9080501@steinitz.org> Hi Hong, I too am having compatibility issues. I looked at the git repos. > v3.7 PETSc 3.7 > v3.6.4 PETSc 3.6.4 > v3.6.3 PETSc 3.6.3 > v3.6.2 PETSc 3.6.2 > v3.6.1 PETSc 3.6.1 and > v3.6.3 SLEPc 3.6.3 > v3.6.2 SLEPc 3.6.2 > v3.6.1 SLEPc 3.6.1 If I just checkout master on both repos, won't I get PETSc 3.7 and SLEPc 3.6.3? Should I checkout the tag for 3.6.3 on the PETSc repo? Many thanks, Dominic BTW I checked https://listas.upv.es/pipermail/slepc-announce/ for announcements on new versions but nothing has been posted since June 2015. > Marco: > You may use petsc-master and slepc-master. > Hong > > Dear All, > > > > first, let me congratulate and thank the PETSc team and all > > contributors for release 3.7 . > > > > I just noticed that the changes in Options handling introduced in 3.7 > > broke the build of the latest SLEPc (3.6.1, since 3.6.3 is flagged as > > incompatible with 3.7). > > > > Is there already a time estimate for a compatible release of SLEPc ? > > > > Thank you and kind regards, > > > > Marco Zocca > > From rupp at iue.tuwien.ac.at Sun May 8 03:19:34 2016 From: rupp at iue.tuwien.ac.at (Karl Rupp) Date: Sun, 8 May 2016 10:19:34 +0200 Subject: [petsc-users] petsc and slepc incompatibility In-Reply-To: <572EF5A3.9080501@steinitz.org> References: <572EF5A3.9080501@steinitz.org> Message-ID: <572EF696.20003@iue.tuwien.ac.at> Dear Dominic, have a look here: http://lists.mcs.anl.gov/pipermail/petsc-users/2016-April/029224.html Best regards, Karli On 05/08/2016 10:15 AM, Dominic Steinitz wrote: > Hi Hong, > > I too am having compatibility issues. I looked at the git repos. 
> >> v3.7 PETSc 3.7 >> v3.6.4 PETSc 3.6.4 >> v3.6.3 PETSc 3.6.3 >> v3.6.2 PETSc 3.6.2 >> v3.6.1 PETSc 3.6.1 > and > >> v3.6.3 SLEPc 3.6.3 >> v3.6.2 SLEPc 3.6.2 >> v3.6.1 SLEPc 3.6.1 > If I just checkout master on both repos, won't I get PETSc 3.7 and SLEPc > 3.6.3? > > Should I checkout the tag for 3.6.3 on the PETSc repo? > > Many thanks, Dominic > > BTW I checked https://listas.upv.es/pipermail/slepc-announce/ for > announcements on new versions but nothing has been posted since June 2015. > >> Marco: >> You may use petsc-master and slepc-master. >> Hong >> >> Dear All, >> > >> > first, let me congratulate and thank the PETSc team and all >> > contributors for release 3.7 . >> > >> > I just noticed that the changes in Options handling introduced in 3.7 >> > broke the build of the latest SLEPc (3.6.1, since 3.6.3 is flagged as >> > incompatible with 3.7). >> > >> > Is there already a time estimate for a compatible release of SLEPc ? >> > >> > Thank you and kind regards, >> > >> > Marco Zocca >> > From dominic at steinitz.org Sun May 8 03:22:53 2016 From: dominic at steinitz.org (Dominic Steinitz) Date: Sun, 8 May 2016 09:22:53 +0100 Subject: [petsc-users] petsc and slepc incompatibility In-Reply-To: <572EF696.20003@iue.tuwien.ac.at> References: <572EF5A3.9080501@steinitz.org> <572EF696.20003@iue.tuwien.ac.at> Message-ID: <572EF75D.9000001@steinitz.org> Hi Karli, Many thanks for the very swift reply :-) I did indeed look there and it says > Stay tuned via the mailing list Which is why I subscribed to the slepc mailing list but perhaps it meant *this* mailing list rather than the slepc mailing list? I can build 3.7 :-) I have just checked out the tag marked 3.6.3 and am trying to build that. I will report back soon. Once again thanks for responding Dominic. On 08/05/2016 09:19, Karl Rupp wrote: > Dear Dominic, > > have a look here: > http://lists.mcs.anl.gov/pipermail/petsc-users/2016-April/029224.html > > Best regards, > Karli > > > > On 05/08/2016 10:15 AM, Dominic Steinitz wrote: >> Hi Hong, >> >> I too am having compatibility issues. I looked at the git repos. >> >>> v3.7 PETSc 3.7 >>> v3.6.4 PETSc 3.6.4 >>> v3.6.3 PETSc 3.6.3 >>> v3.6.2 PETSc 3.6.2 >>> v3.6.1 PETSc 3.6.1 >> and >> >>> v3.6.3 SLEPc 3.6.3 >>> v3.6.2 SLEPc 3.6.2 >>> v3.6.1 SLEPc 3.6.1 >> If I just checkout master on both repos, won't I get PETSc 3.7 and SLEPc >> 3.6.3? >> >> Should I checkout the tag for 3.6.3 on the PETSc repo? >> >> Many thanks, Dominic >> >> BTW I checked https://listas.upv.es/pipermail/slepc-announce/ for >> announcements on new versions but nothing has been posted since June >> 2015. >> >>> Marco: >>> You may use petsc-master and slepc-master. >>> Hong >>> >>> Dear All, >>> > >>> > first, let me congratulate and thank the PETSc team and all >>> > contributors for release 3.7 . >>> > >>> > I just noticed that the changes in Options handling introduced in 3.7 >>> > broke the build of the latest SLEPc (3.6.1, since 3.6.3 is flagged as >>> > incompatible with 3.7). >>> > >>> > Is there already a time estimate for a compatible release of SLEPc ? 
>>> > >>> > Thank you and kind regards, >>> > >>> > Marco Zocca >>> > > From dominic at steinitz.org Sun May 8 03:31:14 2016 From: dominic at steinitz.org (Dominic Steinitz) Date: Sun, 8 May 2016 09:31:14 +0100 Subject: [petsc-users] petsc and slepc incompatibility In-Reply-To: <572EF75D.9000001@steinitz.org> References: <572EF5A3.9080501@steinitz.org> <572EF696.20003@iue.tuwien.ac.at> <572EF75D.9000001@steinitz.org> Message-ID: <572EF952.20600@steinitz.org> Replying to myself I am not convinced what I am doing is going to work. I looked here: http://www.mcs.anl.gov/petsc/petsc-as/download/index.html and there is no mention of a 3.6.3 release despite there being a tag in the repo with 3.6.3. Can someone advise me which version of PETSc and SLEPc are compatible? Should I try PETSc 3.6.4 and SLEPc 3.6.3 or 3.6.0? Many thanks, Dominic. On 08/05/2016 09:22, Dominic Steinitz wrote: > Hi Karli, > > Many thanks for the very swift reply :-) I did indeed look there and > it says > >> Stay tuned via the mailing list > Which is why I subscribed to the slepc mailing list but perhaps it > meant *this* mailing list rather than the slepc mailing list? > > I can build 3.7 :-) I have just checked out the tag marked 3.6.3 and > am trying to build that. I will report back soon. > > Once again thanks for responding > > Dominic. > > On 08/05/2016 09:19, Karl Rupp wrote: >> Dear Dominic, >> >> have a look here: >> http://lists.mcs.anl.gov/pipermail/petsc-users/2016-April/029224.html >> >> Best regards, >> Karli >> >> >> >> On 05/08/2016 10:15 AM, Dominic Steinitz wrote: >>> Hi Hong, >>> >>> I too am having compatibility issues. I looked at the git repos. >>> >>>> v3.7 PETSc 3.7 >>>> v3.6.4 PETSc 3.6.4 >>>> v3.6.3 PETSc 3.6.3 >>>> v3.6.2 PETSc 3.6.2 >>>> v3.6.1 PETSc 3.6.1 >>> and >>> >>>> v3.6.3 SLEPc 3.6.3 >>>> v3.6.2 SLEPc 3.6.2 >>>> v3.6.1 SLEPc 3.6.1 >>> If I just checkout master on both repos, won't I get PETSc 3.7 and >>> SLEPc >>> 3.6.3? >>> >>> Should I checkout the tag for 3.6.3 on the PETSc repo? >>> >>> Many thanks, Dominic >>> >>> BTW I checked https://listas.upv.es/pipermail/slepc-announce/ for >>> announcements on new versions but nothing has been posted since June >>> 2015. >>> >>>> Marco: >>>> You may use petsc-master and slepc-master. >>>> Hong >>>> >>>> Dear All, >>>> > >>>> > first, let me congratulate and thank the PETSc team and all >>>> > contributors for release 3.7 . >>>> > >>>> > I just noticed that the changes in Options handling introduced in >>>> 3.7 >>>> > broke the build of the latest SLEPc (3.6.1, since 3.6.3 is >>>> flagged as >>>> > incompatible with 3.7). >>>> > >>>> > Is there already a time estimate for a compatible release of SLEPc ? >>>> > >>>> > Thank you and kind regards, >>>> > >>>> > Marco Zocca >>>> > >> > From dominic at steinitz.org Sun May 8 03:35:24 2016 From: dominic at steinitz.org (Dominic Steinitz) Date: Sun, 8 May 2016 09:35:24 +0100 Subject: [petsc-users] petsc and slepc incompatibility In-Reply-To: <572EF952.20600@steinitz.org> References: <572EF5A3.9080501@steinitz.org> <572EF696.20003@iue.tuwien.ac.at> <572EF75D.9000001@steinitz.org> <572EF952.20600@steinitz.org> Message-ID: <572EFA4C.5020808@steinitz.org> Reply to myself again, this seems to have worked: Checkout the tag 3.6.3 on PETSc and build Checkout the tag 3.6.3 on SLEPc and build Sorry for the noise Dominic. On 08/05/2016 09:31, Dominic Steinitz wrote: > Replying to myself > > I am not convinced what I am doing is going to work. 
I looked here: > http://www.mcs.anl.gov/petsc/petsc-as/download/index.html and there is > no mention of a 3.6.3 release despite there being a tag in the repo > with 3.6.3. > > Can someone advise me which version of PETSc and SLEPc are compatible? > Should I try PETSc 3.6.4 and SLEPc 3.6.3 or 3.6.0? > > Many thanks, Dominic. > > On 08/05/2016 09:22, Dominic Steinitz wrote: >> Hi Karli, >> >> Many thanks for the very swift reply :-) I did indeed look there and >> it says >> >>> Stay tuned via the mailing list >> Which is why I subscribed to the slepc mailing list but perhaps it >> meant *this* mailing list rather than the slepc mailing list? >> >> I can build 3.7 :-) I have just checked out the tag marked 3.6.3 and >> am trying to build that. I will report back soon. >> >> Once again thanks for responding >> >> Dominic. >> >> On 08/05/2016 09:19, Karl Rupp wrote: >>> Dear Dominic, >>> >>> have a look here: >>> http://lists.mcs.anl.gov/pipermail/petsc-users/2016-April/029224.html >>> >>> Best regards, >>> Karli >>> >>> >>> >>> On 05/08/2016 10:15 AM, Dominic Steinitz wrote: >>>> Hi Hong, >>>> >>>> I too am having compatibility issues. I looked at the git repos. >>>> >>>>> v3.7 PETSc 3.7 >>>>> v3.6.4 PETSc 3.6.4 >>>>> v3.6.3 PETSc 3.6.3 >>>>> v3.6.2 PETSc 3.6.2 >>>>> v3.6.1 PETSc 3.6.1 >>>> and >>>> >>>>> v3.6.3 SLEPc 3.6.3 >>>>> v3.6.2 SLEPc 3.6.2 >>>>> v3.6.1 SLEPc 3.6.1 >>>> If I just checkout master on both repos, won't I get PETSc 3.7 and >>>> SLEPc >>>> 3.6.3? >>>> >>>> Should I checkout the tag for 3.6.3 on the PETSc repo? >>>> >>>> Many thanks, Dominic >>>> >>>> BTW I checked https://listas.upv.es/pipermail/slepc-announce/ for >>>> announcements on new versions but nothing has been posted since >>>> June 2015. >>>> >>>>> Marco: >>>>> You may use petsc-master and slepc-master. >>>>> Hong >>>>> >>>>> Dear All, >>>>> > >>>>> > first, let me congratulate and thank the PETSc team and all >>>>> > contributors for release 3.7 . >>>>> > >>>>> > I just noticed that the changes in Options handling introduced >>>>> in 3.7 >>>>> > broke the build of the latest SLEPc (3.6.1, since 3.6.3 is >>>>> flagged as >>>>> > incompatible with 3.7). >>>>> > >>>>> > Is there already a time estimate for a compatible release of >>>>> SLEPc ? >>>>> > >>>>> > Thank you and kind regards, >>>>> > >>>>> > Marco Zocca >>>>> > >>> >> > From jroman at dsic.upv.es Sun May 8 03:35:49 2016 From: jroman at dsic.upv.es (Jose E. Roman) Date: Sun, 8 May 2016 10:35:49 +0200 Subject: [petsc-users] petsc and slepc incompatibility In-Reply-To: <572EF952.20600@steinitz.org> References: <572EF5A3.9080501@steinitz.org> <572EF696.20003@iue.tuwien.ac.at> <572EF75D.9000001@steinitz.org> <572EF952.20600@steinitz.org> Message-ID: <137A3040-90DA-4291-9413-2618F871247C@dsic.upv.es> > El 8 may 2016, a las 10:31, Dominic Steinitz escribi?: > > Replying to myself > > I am not convinced what I am doing is going to work. I looked here: http://www.mcs.anl.gov/petsc/petsc-as/download/index.html and there is no mention of a 3.6.3 release despite there being a tag in the repo with 3.6.3. > > Can someone advise me which version of PETSc and SLEPc are compatible? Should I try PETSc 3.6.4 and SLEPc 3.6.3 or 3.6.0? > > Many thanks, Dominic. slepc-3.6.3 is available for download at slepc.upv.es/download slepc-3.7 will be compatible with petsc-3.7. It will probably be released during next week. The release will be announced in the slepc-announce mailing list. 
Jose From giacomo.mulas84 at gmail.com Sun May 8 06:20:49 2016 From: giacomo.mulas84 at gmail.com (Giacomo Mulas) Date: Sun, 8 May 2016 13:20:49 +0200 (CEST) Subject: [petsc-users] EPSSolve finds eigensolutions in deflation space Message-ID: I found a problem in the default iterative eigensystem solver of SLEPC, namely the Krilov-Schur one: it does not respect the deflation space that it is given. This makes it very inefficient even in a case in which it should be extremely effective. The problem I want to solve: I have a symmetric operator (an approximated vibrational Hamiltonian, but that's unimportant) and I want to selectively find a subset of eigenvectors so that some predefined vectors (which I call "target" vectors) are very nearly contained in the eigenvector subspace. I therefore have both a well defined selection criterion (find eigenstates which have maximum projection on the subspace of "target" vectors) and a well defined condition for stopping (say, >99% of each of the "target" vectors is contained in the subspace of the eigenstates found). The first one can be explicitly cast in a function and given to SLEPC via EPSSetArbitrarySelection and EPSSetWhichEigenpairs, so that the solver is guided to the solutions I am most interested in. I also make use of EPSSetInitialSpace to have the solver start from a guess which is "rich" in the "target" vectors (just the "target" vector which has the smallest projection on the eigenstate space so far). As to stopping when I found just enough eigenstates, unfortunately it is thus far impossible to give a custom function to slepc so that it will keep finding eigenvectors until all are found or the user-defined function tells it it's enough. Therefore, my second best solution was to repeatedly run EPSSolve, aiming at a small number of eigensolutions in each call, putting previously found solutions in the deflation space and passing it to the solver via EPSSetDeflationSpace. Unfortunately, this did not simply work out of the box. The first problem is that after the first few, easiest to find eigensolutions are found, EPSSolve frequently does not find any solutions, if run with a very conservative EPSSetDimensions (e.g. like "find me just one eigensolution"). I thus wrote my code so that the number of requested solutions in EPSSolve are increased if no new solutions are found by an EPSSolve call, and decreased if too many were found in just one call. Inefficient as it may be, due to many useless EPSSolve calls, it works. However, when trying to use this on real cases, another problem arose, the big one: frequently, EPSSolve would find and return eigenvectors that were in the deflation space! Some eigensolutions would therefore appear more than once in the set of solutions, of course breaking everything down the road of my physics calculation before I figured out why. When I found out, I set up a check in my code that explicitly computes the projection of newly found eigenstates on previously found ones, and discards them if they are duplicates. But this is ineffective for several reasons: first, such a check is a waste of time and should not be necessary, otherwise what is the point in setting up a deflation space? second, and most important, I have to run EPSSolve looking for lots of solutions to only find one or two really new ones, because all the first ones are duplicates. Even if the initial guess is rich in the solution I actually want, and poor in the duplicate ones. 
As a result, even if I would just need to find a (relatively) small subset of the eigenvectors, lapack turns out to be always much faster, which should not be the case. I am attaching a test code that shows this behaviour, in the hope that it will help the SLEPC developers in finding why the deflation space is not respected and, if possible, correct it. I know that being able to reproduce a problem is the first, sometimes actually the most important, step to attempt to solve it, so there it goes. Of course I also have a tiny hope that they might actually include support for a user-defined check for "enough solutions" in EPSSolve... There is in fact something not too different already implemented, for EPSSolve can find all solutions with eigenvalues in a given interval, without knowing in advance how many they will be, without the user having to loop over many tentative EPSSolve calls and finding lots of unneeded additional solutions in the process. To run the test code, just compile the source, and run it in the same directory of the two data files. This is a small test case, so that it runs fast enough, but I have much larger ones and can produce them more or less as large as wanted on request. Upon running the code, you will see that lots of tentative EPSSolve calls are made, many times finding no solutions or only solutions already in the deflation space (which contains previously found eigenvectors). In this code I privileged clarity over efficiency, no attempt was made to overoptimise anything. I put lots of comments in it, so that hopefully it is self-explanatory. But of course I am more than willing to explain whatever is not clear in it. Just ignore the part that reads the input files, that's irrelevant. Of course, I would also very much appreciate it if I could get some clever suggestions, from SLEPC developers or anyone else on the list, as to how to solve my problem more efficiently. And of course I am perfectly willing to thank them and/or to include them as authors in the scientific publication describing my molecular spectroscopy code, if some substantial contribution is offered. Best regards Giacomo Mulas -- _________________________________________________________________ Giacomo Mulas _________________________________________________________________ INAF - Osservatorio Astronomico di Cagliari via della scienza 5 - 09047 Selargius (CA) tel. +39 070 71180244 mob. 
: +39 329 6603810 _________________________________________________________________ "When the storms are raging around you, stay right where you are" (Freddy Mercury) _________________________________________________________________ -------------- next part -------------- 0 1 2 3 4 5 6 7 8 9 10 14 16 18 45 47 48 61 66 69 148 155 160 161 -------------- next part -------------- [Data-file attachment: symmetric Hamiltonian matrix for the test case, one entry per line in the form "HAM i j value", starting with the 205 diagonal entries (HAM 0 0 12516.8876204415 through HAM 204 204 22154.0594201432) followed by the symmetric off-diagonal couplings; the listing is truncated in the archive and omitted here.]
-3.8549362026 HAM 19 18 -3.8549362026 HAM 19 130 -300.4188662192 HAM 130 19 -300.4188662192 HAM 19 130 79.2570040213 HAM 130 19 79.2570040213 HAM 19 130 104.5256274816 HAM 130 19 104.5256274816 HAM 19 130 164.8004772412 HAM 130 19 164.8004772412 HAM 19 70 -170.7952171627 HAM 70 19 -170.7952171627 HAM 19 70 -17.5252950894 HAM 70 19 -17.5252950894 HAM 19 70 -11.6963169072 HAM 70 19 -11.6963169072 HAM 19 70 -9.9286949111 HAM 70 19 -9.9286949111 HAM 19 21 11.8807505823 HAM 21 19 11.8807505823 HAM 19 20 -3.6947004303 HAM 20 19 -3.6947004303 HAM 20 143 -300.4188662192 HAM 143 20 -300.4188662192 HAM 20 143 102.7732012102 HAM 143 20 102.7732012102 HAM 20 143 104.5256274816 HAM 143 20 104.5256274816 HAM 20 143 8.7967583758 HAM 143 20 8.7967583758 HAM 20 71 -170.7952171627 HAM 71 20 -170.7952171627 HAM 20 71 -19.3735590783 HAM 71 20 -19.3735590783 HAM 20 71 -11.6963169072 HAM 71 20 -11.6963169072 HAM 20 71 -42.3469492822 HAM 71 20 -42.3469492822 HAM 20 45 -101.2430327423 HAM 45 20 -101.2430327423 HAM 21 132 -300.4188662192 HAM 132 21 -300.4188662192 HAM 21 132 53.1892049414 HAM 132 21 53.1892049414 HAM 21 132 329.6009544823 HAM 132 21 329.6009544823 HAM 21 72 -170.7952171627 HAM 72 21 -170.7952171627 HAM 21 72 -19.7004227235 HAM 72 21 -19.7004227235 HAM 21 72 -19.8573898223 HAM 72 21 -19.8573898223 HAM 22 73 -170.7952171627 HAM 73 22 -170.7952171627 HAM 22 73 -19.7004227235 HAM 73 22 -19.7004227235 HAM 22 73 -84.6938985644 HAM 73 22 -84.6938985644 HAM 23 47 68.0216641669 HAM 47 23 68.0216641669 HAM 23 26 -4.7701024864 HAM 26 23 -4.7701024864 HAM 23 25 13.1422604482 HAM 25 23 13.1422604482 HAM 23 24 -3.6947004303 HAM 24 23 -3.6947004303 HAM 26 47 58.3507132130 HAM 47 26 58.3507132130 HAM 28 30 -4.2398725677 HAM 30 28 -4.2398725677 HAM 28 29 -3.7169879872 HAM 29 28 -3.7169879872 HAM 28 31 -3.4841136458 HAM 31 28 -3.4841136458 HAM 30 31 -2.0797773582 HAM 31 30 -2.0797773582 HAM 32 61 -58.3274140446 HAM 61 32 -58.3274140446 HAM 32 66 68.0216641669 HAM 66 32 68.0216641669 HAM 32 35 -6.9666076100 HAM 35 32 -6.9666076100 HAM 32 34 -6.7459436302 HAM 34 32 -6.7459436302 HAM 33 45 -49.1071906307 HAM 45 33 -49.1071906307 HAM 34 66 82.5203700000 HAM 66 34 82.5203700000 HAM 34 35 -2.0797773582 HAM 35 34 -2.0797773582 HAM 35 43 16.9736775000 HAM 43 35 16.9736775000 HAM 35 61 -100.1932350000 HAM 61 35 -100.1932350000 HAM 36 48 34.3449800000 HAM 48 36 34.3449800000 HAM 37 44 -5.3789536266 HAM 44 37 -5.3789536266 HAM 38 48 7.0230418611 HAM 48 38 7.0230418611 HAM 38 45 -3.7926138140 HAM 45 38 -3.7926138140 HAM 38 40 4.2461309736 HAM 40 38 4.2461309736 HAM 40 47 4.3345762837 HAM 47 40 4.3345762837 HAM 40 46 8.3491699926 HAM 46 40 8.3491699926 HAM 40 41 2.1168581056 HAM 41 40 2.1168581056 HAM 41 46 -3.4577939437 HAM 46 41 -3.4577939437 HAM 41 43 -7.3894008606 HAM 43 41 -7.3894008606 HAM 42 162 -300.4188662192 HAM 162 42 -300.4188662192 HAM 42 162 106.3784098828 HAM 162 42 106.3784098828 HAM 42 162 10.1816057936 HAM 162 42 10.1816057936 HAM 42 48 37.6104475000 HAM 48 42 37.6104475000 HAM 42 53 -27.9953544877 HAM 53 42 -27.9953544877 HAM 42 52 24.0044049238 HAM 52 42 24.0044049238 HAM 42 74 -170.7952171627 HAM 74 42 -170.7952171627 HAM 42 74 -39.4008454469 HAM 74 42 -39.4008454469 HAM 42 74 -57.7341762039 HAM 74 42 -57.7341762039 HAM 42 44 4.1766372747 HAM 44 42 4.1766372747 HAM 45 178 -300.4188662192 HAM 178 45 -300.4188662192 HAM 45 178 -339.0077727691 HAM 178 45 -339.0077727691 HAM 45 178 8.7967583758 HAM 178 45 8.7967583758 HAM 45 75 -170.7952171627 HAM 75 45 -170.7952171627 HAM 45 75 -7.6813044996 HAM 75 45 
-7.6813044996 HAM 45 75 -42.3469492822 HAM 75 45 -42.3469492822 HAM 45 181 -357.0126665684 HAM 181 45 -357.0126665684 HAM 46 76 -241.5409125000 HAM 76 46 -241.5409125000 HAM 46 76 -57.1844450000 HAM 76 46 -57.1844450000 HAM 46 76 -25.3440800000 HAM 76 46 -25.3440800000 HAM 46 76 -59.8876300000 HAM 76 46 -59.8876300000 HAM 46 76 -14.0412950000 HAM 76 46 -14.0412950000 HAM 46 47 -4.4558933975 HAM 47 46 -4.4558933975 HAM 47 173 -300.4188662192 HAM 173 47 -300.4188662192 HAM 47 173 -355.2935556330 HAM 173 47 -355.2935556330 HAM 47 173 164.8004772412 HAM 173 47 164.8004772412 HAM 47 78 -170.7952171627 HAM 78 47 -170.7952171627 HAM 47 78 -11.6484528492 HAM 78 47 -11.6484528492 HAM 47 78 -9.9286949111 HAM 78 47 -9.9286949111 HAM 47 179 -357.0126665684 HAM 179 47 -357.0126665684 HAM 47 48 -4.9463631074 HAM 48 47 -4.9463631074 HAM 48 180 -238.8224875000 HAM 180 48 -238.8224875000 HAM 48 180 -424.8564350000 HAM 180 48 -424.8564350000 HAM 48 180 14.3989650000 HAM 180 48 14.3989650000 HAM 48 79 -170.7952171627 HAM 79 48 -170.7952171627 HAM 48 79 -8.6375497485 HAM 79 48 -8.6375497485 HAM 48 79 -57.7341762039 HAM 79 48 -57.7341762039 HAM 48 51 -7.1130613178 HAM 51 48 -7.1130613178 HAM 48 49 5.7159725845 HAM 49 48 5.7159725845 HAM 48 50 -3.8549362026 HAM 50 48 -3.8549362026 HAM 49 166 -300.4188662192 HAM 166 49 -300.4188662192 HAM 49 166 20.3632115871 HAM 166 49 20.3632115871 HAM 49 166 79.2570040213 HAM 166 49 79.2570040213 HAM 49 166 113.9080258357 HAM 166 49 113.9080258357 HAM 49 80 -170.7952171627 HAM 80 49 -170.7952171627 HAM 49 80 -115.4683524078 HAM 80 49 -115.4683524078 HAM 49 80 -17.5252950894 HAM 80 49 -17.5252950894 HAM 49 80 -17.9209708309 HAM 80 49 -17.9209708309 HAM 49 50 6.0049360104 HAM 50 49 6.0049360104 HAM 50 52 11.8807505823 HAM 52 50 11.8807505823 HAM 50 51 -3.6947004303 HAM 51 50 -3.6947004303 HAM 51 174 -300.4188662192 HAM 174 51 -300.4188662192 HAM 51 174 102.7732012102 HAM 174 51 102.7732012102 HAM 51 174 10.1816057936 HAM 174 51 10.1816057936 HAM 51 174 104.5256274816 HAM 174 51 104.5256274816 HAM 51 174 8.7967583758 HAM 174 51 8.7967583758 HAM 54 69 34.3449800000 HAM 69 54 34.3449800000 HAM 55 60 3.3831146002 HAM 60 55 3.3831146002 HAM 55 57 9.2929814831 HAM 57 55 9.2929814831 HAM 55 56 -3.3383856767 HAM 56 55 -3.3383856767 HAM 56 66 6.1300165675 HAM 66 56 6.1300165675 HAM 56 60 11.8075094380 HAM 60 56 11.8075094380 HAM 56 57 2.9936894426 HAM 57 56 2.9936894426 HAM 56 59 -2.0797773582 HAM 59 56 -2.0797773582 HAM 57 61 -5.0238491605 HAM 61 57 -5.0238491605 HAM 57 60 -3.4577939437 HAM 60 57 -3.4577939437 HAM 57 58 -5.2250954574 HAM 58 57 -5.2250954574 HAM 57 59 2.9046405511 HAM 59 57 2.9046405511 HAM 59 61 3.1263451552 HAM 61 59 3.1263451552 HAM 60 69 7.0230418611 HAM 69 60 7.0230418611 HAM 60 66 -4.4558933975 HAM 66 60 -4.4558933975 HAM 61 186 -300.4188662192 HAM 186 61 -300.4188662192 HAM 61 186 -359.1540157115 HAM 186 61 -359.1540157115 HAM 61 186 79.2570040213 HAM 186 61 79.2570040213 HAM 61 96 -170.7952171627 HAM 96 61 -170.7952171627 HAM 61 96 -9.9542674279 HAM 96 61 -9.9542674279 HAM 61 96 -17.5252950894 HAM 96 61 -17.5252950894 HAM 61 196 -357.0126665684 HAM 196 61 -357.0126665684 HAM 61 66 2.8175646831 HAM 66 61 2.8175646831 HAM 62 182 -300.4188662192 HAM 182 62 -300.4188662192 HAM 62 182 106.3784098828 HAM 182 62 106.3784098828 HAM 62 182 22.6009747399 HAM 182 62 22.6009747399 HAM 62 69 37.6104475000 HAM 69 62 37.6104475000 HAM 62 73 -27.9953544877 HAM 73 62 -27.9953544877 HAM 62 72 24.0044049238 HAM 72 62 24.0044049238 HAM 62 99 -241.5409125000 HAM 99 62 
-241.5409125000 HAM 62 99 -55.7212100000 HAM 99 62 -55.7212100000 HAM 62 99 -57.1844450000 HAM 99 62 -57.1844450000 HAM 63 64 -3.6947004303 HAM 64 63 -3.6947004303 HAM 64 68 3.3831146002 HAM 68 64 3.3831146002 HAM 64 65 13.1422604482 HAM 65 64 13.1422604482 HAM 64 67 -2.3605951502 HAM 67 64 -2.3605951502 HAM 65 68 -4.8900590911 HAM 68 65 -4.8900590911 HAM 65 67 2.9936894426 HAM 67 65 2.9936894426 HAM 66 193 -300.4188662192 HAM 193 66 -300.4188662192 HAM 66 193 -355.2935556330 HAM 193 66 -355.2935556330 HAM 66 193 81.7595963413 HAM 193 66 81.7595963413 HAM 66 102 -170.7952171627 HAM 102 66 -170.7952171627 HAM 66 102 -11.6484528492 HAM 102 66 -11.6484528492 HAM 66 102 -24.3563531820 HAM 102 66 -24.3563531820 HAM 66 194 -357.0126665684 HAM 194 66 -357.0126665684 HAM 66 69 -6.2829398337 HAM 69 66 -6.2829398337 HAM 67 68 8.3491699926 HAM 68 67 8.3491699926 HAM 68 183 -300.4188662192 HAM 183 68 -300.4188662192 HAM 68 183 22.6009747399 HAM 183 68 22.6009747399 HAM 68 183 10.1816057936 HAM 183 68 10.1816057936 HAM 68 183 79.2570040213 HAM 183 68 79.2570040213 HAM 68 183 113.9080258357 HAM 183 68 113.9080258357 HAM 68 108 -241.5409125000 HAM 108 68 -241.5409125000 HAM 68 108 -57.1844450000 HAM 108 68 -57.1844450000 HAM 68 108 -81.6484550000 HAM 108 68 -81.6484550000 HAM 68 108 -24.7845100000 HAM 108 68 -24.7845100000 HAM 68 108 -25.3440800000 HAM 108 68 -25.3440800000 HAM 68 69 4.0418029756 HAM 69 68 4.0418029756 HAM 68 70 4.2461309736 HAM 70 68 4.2461309736 HAM 69 195 -238.8224875000 HAM 195 69 -238.8224875000 HAM 69 195 -424.8564350000 HAM 195 69 -424.8564350000 HAM 69 195 31.9626050000 HAM 195 69 31.9626050000 HAM 69 107 -241.5409125000 HAM 107 69 -241.5409125000 HAM 69 107 -12.2153400000 HAM 107 69 -12.2153400000 HAM 69 107 -57.1844450000 HAM 107 69 -57.1844450000 HAM 69 71 -7.1130613178 HAM 71 69 -7.1130613178 HAM 69 70 -3.8549362026 HAM 70 69 -3.8549362026 HAM 70 72 11.8807505823 HAM 72 70 11.8807505823 HAM 70 71 -3.6947004303 HAM 71 70 -3.6947004303 HAM 71 75 -101.2430327423 HAM 75 71 -101.2430327423 HAM 74 79 37.6104475000 HAM 79 74 37.6104475000 HAM 76 78 -6.3015848752 HAM 78 76 -6.3015848752 HAM 77 88 -49.1071906307 HAM 88 77 -49.1071906307 HAM 77 123 -101.2430327423 HAM 123 77 -101.2430327423 HAM 77 81 -10.7579072532 HAM 81 77 -10.7579072532 HAM 77 85 -11.8138657164 HAM 85 77 -11.8138657164 HAM 78 79 -4.9463631074 HAM 79 78 -4.9463631074 HAM 79 80 5.7159725845 HAM 80 79 5.7159725845 HAM 81 84 37.6104475000 HAM 84 81 37.6104475000 HAM 81 94 -4.2398725677 HAM 94 81 -4.2398725677 HAM 81 85 -3.7169879872 HAM 85 81 -3.7169879872 HAM 81 93 -3.4841136458 HAM 93 81 -3.4841136458 HAM 82 85 37.6104475000 HAM 85 82 37.6104475000 HAM 82 83 7.0230418611 HAM 83 82 7.0230418611 HAM 82 88 -16.6255961673 HAM 88 82 -16.6255961673 HAM 82 89 28.6307467166 HAM 89 82 28.6307467166 HAM 82 84 -3.7169879872 HAM 84 82 -3.7169879872 HAM 83 89 6.1300165675 HAM 89 83 6.1300165675 HAM 83 87 2.9936894426 HAM 87 83 2.9936894426 HAM 84 86 -27.7182772037 HAM 86 84 -27.7182772037 HAM 84 88 -15.1221891670 HAM 88 84 -15.1221891670 HAM 84 91 -4.2398725677 HAM 91 84 -4.2398725677 HAM 84 90 -3.4841136458 HAM 90 84 -3.4841136458 HAM 86 87 2.2106598596 HAM 87 86 2.2106598596 HAM 86 89 -10.5031177239 HAM 89 86 -10.5031177239 HAM 86 88 8.0159015621 HAM 88 86 8.0159015621 HAM 88 155 -143.1792700000 HAM 155 88 -143.1792700000 HAM 88 89 -7.4474313928 HAM 89 88 -7.4474313928 HAM 90 93 37.6104475000 HAM 93 90 37.6104475000 HAM 90 91 -2.0797773582 HAM 91 90 -2.0797773582 HAM 91 94 37.6104475000 HAM 94 91 37.6104475000 HAM 91 92 
7.0230418611 HAM 92 91 7.0230418611 HAM 93 115 116.7014264260 HAM 115 93 116.7014264260 HAM 93 94 -2.0797773582 HAM 94 93 -2.0797773582 HAM 94 114 -141.6946317950 HAM 114 94 -141.6946317950 HAM 95 100 59.4872503449 HAM 100 95 59.4872503449 HAM 96 102 2.8175646831 HAM 102 96 2.8175646831 HAM 97 141 -58.3274140446 HAM 141 97 -58.3274140446 HAM 97 100 7.0230418611 HAM 100 97 7.0230418611 HAM 97 103 4.3345762837 HAM 103 97 4.3345762837 HAM 97 98 2.1168581056 HAM 98 97 2.1168581056 HAM 98 147 68.0216641669 HAM 147 98 68.0216641669 HAM 99 107 37.6104475000 HAM 107 99 37.6104475000 HAM 100 101 37.6104475000 HAM 101 100 37.6104475000 HAM 100 110 -5.3975604309 HAM 110 100 -5.3975604309 HAM 100 104 -11.7560717912 HAM 104 100 -11.7560717912 HAM 100 111 10.5189624984 HAM 111 100 10.5189624984 HAM 100 103 20.2449951537 HAM 103 100 20.2449951537 HAM 100 105 -5.3789536266 HAM 105 100 -5.3789536266 HAM 101 110 -69.4480550000 HAM 110 101 -69.4480550000 HAM 101 106 -9.3166209729 HAM 106 101 -9.3166209729 HAM 102 107 -8.8854187243 HAM 107 102 -8.8854187243 HAM 103 109 -5.4570533515 HAM 109 103 -5.4570533515 HAM 103 104 -7.4474313928 HAM 104 103 -7.4474313928 HAM 104 105 -5.3744735722 HAM 105 104 -5.3744735722 HAM 104 110 -5.9069328582 HAM 110 104 -5.9069328582 HAM 105 106 65.1432059654 HAM 106 105 65.1432059654 HAM 105 109 -19.5997817736 HAM 109 105 -19.5997817736 HAM 105 110 -10.6930025064 HAM 110 105 -10.6930025064 HAM 107 108 4.0418029756 HAM 108 107 4.0418029756 HAM 109 110 8.0159015621 HAM 110 109 8.0159015621 HAM 111 160 -82.4874200000 HAM 160 111 -82.4874200000 HAM 111 114 -4.9261354829 HAM 114 111 -4.9261354829 HAM 111 112 3.0918811583 HAM 112 111 3.0918811583 HAM 112 161 96.1971600000 HAM 161 112 96.1971600000 HAM 112 114 -5.4570533515 HAM 114 112 -5.4570533515 HAM 112 115 -4.7701024864 HAM 115 112 -4.7701024864 HAM 113 150 -100.1932350000 HAM 150 113 -100.1932350000 HAM 113 114 3.1263451552 HAM 114 113 3.1263451552 HAM 114 160 -100.1932350000 HAM 160 114 -100.1932350000 HAM 114 115 2.8175646831 HAM 115 114 2.8175646831 HAM 115 161 82.5203700000 HAM 161 115 82.5203700000 HAM 116 118 84.1276762261 HAM 118 116 84.1276762261 HAM 117 121 34.3449800000 HAM 121 117 34.3449800000 HAM 117 118 7.0230418611 HAM 118 117 7.0230418611 HAM 118 119 37.6104475000 HAM 119 118 37.6104475000 HAM 118 148 48.5711365154 HAM 148 118 48.5711365154 HAM 118 125 -3.8549362026 HAM 125 118 -3.8549362026 HAM 118 123 -7.6333031652 HAM 123 118 -7.6333031652 HAM 119 124 34.3449800000 HAM 124 119 34.3449800000 HAM 119 131 24.0044049238 HAM 131 119 24.0044049238 HAM 119 123 -98.2143812614 HAM 123 119 -98.2143812614 HAM 120 121 19.8641620983 HAM 121 120 19.8641620983 HAM 121 122 37.6104475000 HAM 122 121 37.6104475000 HAM 121 137 5.7159725845 HAM 137 121 5.7159725845 HAM 121 148 9.9320810491 HAM 148 121 9.9320810491 HAM 121 149 -21.2027686422 HAM 149 121 -21.2027686422 HAM 121 156 21.2276205027 HAM 156 121 21.2276205027 HAM 121 147 -12.9461802595 HAM 147 121 -12.9461802595 HAM 121 141 13.1969367648 HAM 141 121 13.1969367648 HAM 122 124 7.0230418611 HAM 124 122 7.0230418611 HAM 123 129 16.9736775000 HAM 129 123 16.9736775000 HAM 123 155 -69.4480550000 HAM 155 123 -69.4480550000 HAM 123 128 -3.7926138140 HAM 128 123 -3.7926138140 HAM 123 124 -7.6006534165 HAM 124 123 -7.6006534165 HAM 124 127 92.1264053727 HAM 127 124 92.1264053727 HAM 124 148 53.1892049414 HAM 148 124 53.1892049414 HAM 124 132 24.0044049238 HAM 132 124 24.0044049238 HAM 124 182 -170.7952171627 HAM 182 124 -170.7952171627 HAM 124 182 -8.6375497485 HAM 182 124 
-8.6375497485 HAM 124 182 -39.4008454469 HAM 182 124 -39.4008454469 HAM 124 142 -7.1130613178 HAM 142 124 -7.1130613178 HAM 124 139 4.0418029756 HAM 139 124 4.0418029756 HAM 125 130 34.3449800000 HAM 130 125 34.3449800000 HAM 125 134 -7.6069891701 HAM 134 125 -7.6069891701 HAM 125 131 11.8807505823 HAM 131 125 11.8807505823 HAM 126 140 -49.1071906307 HAM 140 126 -49.1071906307 HAM 126 142 -11.8138657164 HAM 142 126 -11.8138657164 HAM 126 133 -3.7169879872 HAM 133 126 -3.7169879872 HAM 126 134 -5.2250954574 HAM 134 126 -5.2250954574 HAM 128 146 -49.1071906307 HAM 146 128 -49.1071906307 HAM 128 139 -6.9666076100 HAM 139 128 -6.9666076100 HAM 129 131 -69.4480550000 HAM 131 129 -69.4480550000 HAM 129 132 -5.3744735722 HAM 132 129 -5.3744735722 HAM 129 138 13.1422604482 HAM 138 129 13.1422604482 HAM 130 148 -5.4517030598 HAM 148 130 -5.4517030598 HAM 130 138 -11.7560717912 HAM 138 130 -11.7560717912 HAM 130 135 20.2449951537 HAM 135 130 20.2449951537 HAM 130 132 11.8807505823 HAM 132 130 11.8807505823 HAM 130 143 -3.6947004303 HAM 143 130 -3.6947004303 HAM 130 136 4.2461309736 HAM 136 130 4.2461309736 HAM 131 132 34.3449800000 HAM 132 131 34.3449800000 HAM 131 134 18.5859629661 HAM 134 131 18.5859629661 HAM 133 145 -49.1071906307 HAM 145 133 -49.1071906307 HAM 133 142 -10.7579072532 HAM 142 133 -10.7579072532 HAM 134 185 -170.7952171627 HAM 185 134 -170.7952171627 HAM 134 185 -19.7004227235 HAM 185 134 -19.7004227235 HAM 134 185 -19.3735590783 HAM 185 134 -19.3735590783 HAM 134 185 -17.5252950894 HAM 185 134 -17.5252950894 HAM 134 185 -2.2151039159 HAM 185 134 -2.2151039159 HAM 134 185 -9.9286949111 HAM 185 134 -9.9286949111 HAM 134 138 -49.1071906307 HAM 138 134 -49.1071906307 HAM 135 184 -170.7952171627 HAM 184 135 -170.7952171627 HAM 135 184 -17.5252950894 HAM 184 135 -17.5252950894 HAM 135 184 -17.9209708309 HAM 184 135 -17.9209708309 HAM 135 184 -11.6484528492 HAM 184 135 -11.6484528492 HAM 135 184 -9.9286949111 HAM 184 135 -9.9286949111 HAM 135 136 -4.9463631074 HAM 136 135 -4.9463631074 HAM 135 138 -7.4474313928 HAM 138 135 -7.4474313928 HAM 135 147 -3.6947004303 HAM 147 135 -3.6947004303 HAM 136 139 37.6104475000 HAM 139 136 37.6104475000 HAM 136 183 -170.7952171627 HAM 183 136 -170.7952171627 HAM 136 183 -8.6375497485 HAM 183 136 -8.6375497485 HAM 136 183 -57.7341762039 HAM 183 136 -57.7341762039 HAM 136 183 -17.5252950894 HAM 183 136 -17.5252950894 HAM 136 183 -17.9209708309 HAM 183 136 -17.9209708309 HAM 136 148 5.7159725845 HAM 148 136 5.7159725845 HAM 136 137 9.9320810491 HAM 137 136 9.9320810491 HAM 136 146 -8.5713498230 HAM 146 136 -8.5713498230 HAM 137 191 -170.7952171627 HAM 191 137 -170.7952171627 HAM 137 191 -57.7341762039 HAM 191 137 -57.7341762039 HAM 137 191 -17.5252950894 HAM 191 137 -17.5252950894 HAM 137 191 -35.8419416619 HAM 191 137 -35.8419416619 HAM 137 191 -42.3469492822 HAM 191 137 -42.3469492822 HAM 137 191 -24.3563531820 HAM 191 137 -24.3563531820 HAM 138 187 -170.7952171627 HAM 187 138 -170.7952171627 HAM 138 187 -19.3735590783 HAM 187 138 -19.3735590783 HAM 138 187 -17.5252950894 HAM 187 138 -17.5252950894 HAM 138 187 -7.6813044996 HAM 187 138 -7.6813044996 HAM 138 187 -9.9286949111 HAM 187 138 -9.9286949111 HAM 138 140 -5.2250954574 HAM 140 138 -5.2250954574 HAM 139 192 -170.7952171627 HAM 192 139 -170.7952171627 HAM 139 192 -39.4008454469 HAM 192 139 -39.4008454469 HAM 139 192 -57.7341762039 HAM 192 139 -57.7341762039 HAM 139 192 -17.5252950894 HAM 192 139 -17.5252950894 HAM 139 192 -17.9209708309 HAM 192 139 -17.9209708309 HAM 140 143 -16.6255961673 HAM 143 
140 -16.6255961673 HAM 140 145 -3.7169879872 HAM 145 140 -3.7169879872 HAM 140 147 -10.5322584806 HAM 147 140 -10.5322584806 HAM 141 144 4.4213197191 HAM 144 141 4.4213197191 HAM 141 143 -19.5997817736 HAM 143 141 -19.5997817736 HAM 141 147 -11.3018940423 HAM 147 141 -11.3018940423 HAM 141 149 16.5567958789 HAM 149 141 16.5567958789 HAM 141 145 11.3361967038 HAM 145 141 11.3361967038 HAM 142 143 37.6104475000 HAM 143 142 37.6104475000 HAM 143 178 -101.2430327423 HAM 178 143 -101.2430327423 HAM 143 148 -10.0593877856 HAM 148 143 -10.0593877856 HAM 143 145 -15.1221891670 HAM 145 143 -15.1221891670 HAM 143 149 10.5189624984 HAM 149 143 10.5189624984 HAM 143 147 20.2449951537 HAM 147 143 20.2449951537 HAM 146 155 -5.3635658927 HAM 155 146 -5.3635658927 HAM 147 156 16.7080383106 HAM 156 147 16.7080383106 HAM 148 200 -584.9932334772 HAM 200 148 -584.9932334772 HAM 148 200 -520.3407398440 HAM 200 148 -520.3407398440 HAM 148 195 -170.7952171627 HAM 195 148 -170.7952171627 HAM 148 195 -17.2750994970 HAM 195 148 -17.2750994970 HAM 148 160 28.6213026111 HAM 160 148 28.6213026111 HAM 148 155 25.9737696380 HAM 155 148 25.9737696380 HAM 148 161 27.8825741033 HAM 161 148 27.8825741033 HAM 149 160 -7.1047956181 HAM 160 149 -7.1047956181 HAM 149 156 -10.5031177239 HAM 156 149 -10.5031177239 HAM 149 150 2.0538910306 HAM 150 149 2.0538910306 HAM 149 157 3.0918811583 HAM 157 149 3.0918811583 HAM 150 151 6.2526903104 HAM 151 150 6.2526903104 HAM 150 160 3.1263451552 HAM 160 150 3.1263451552 HAM 150 154 8.4009593024 HAM 154 150 8.4009593024 HAM 150 157 -11.3018940423 HAM 157 150 -11.3018940423 HAM 150 156 2.8175646831 HAM 156 150 2.8175646831 HAM 152 157 -6.3015848752 HAM 157 152 -6.3015848752 HAM 152 153 2.1168581056 HAM 153 152 2.1168581056 HAM 153 156 -6.3015848752 HAM 156 153 -6.3015848752 HAM 153 159 6.1300165675 HAM 159 153 6.1300165675 HAM 155 201 -300.4188662192 HAM 201 155 -300.4188662192 HAM 155 201 -678.0155455383 HAM 201 155 -678.0155455383 HAM 155 197 -170.7952171627 HAM 197 155 -170.7952171627 HAM 155 197 -15.3626089991 HAM 197 155 -15.3626089991 HAM 155 204 -504.8921550000 HAM 204 155 -504.8921550000 HAM 155 160 47.0691516705 HAM 160 155 47.0691516705 HAM 155 161 47.2649412718 HAM 161 155 47.2649412718 HAM 156 161 6.1300165675 HAM 161 156 6.1300165675 HAM 156 159 8.3491699926 HAM 159 156 8.3491699926 HAM 156 157 2.1168581056 HAM 157 156 2.1168581056 HAM 157 159 -3.4577939437 HAM 159 157 -3.4577939437 HAM 158 159 -7.4474313928 HAM 159 158 -7.4474313928 HAM 159 161 -6.3015848752 HAM 161 159 -6.3015848752 HAM 160 202 -300.4188662192 HAM 202 160 -300.4188662192 HAM 160 202 -718.3080314231 HAM 202 160 -718.3080314231 HAM 160 198 -170.7952171627 HAM 198 160 -170.7952171627 HAM 160 198 -19.9085348558 HAM 198 160 -19.9085348558 HAM 160 204 -504.8921550000 HAM 204 160 -504.8921550000 HAM 160 161 46.4776828590 HAM 161 160 46.4776828590 HAM 161 203 -300.4188662192 HAM 203 161 -300.4188662192 HAM 161 203 -710.5871112661 HAM 203 161 -710.5871112661 HAM 161 199 -170.7952171627 HAM 199 161 -170.7952171627 HAM 161 199 -23.2969056984 HAM 199 161 -23.2969056984 HAM 161 204 -504.8921550000 HAM 204 161 -504.8921550000 HAM 162 180 53.1892049414 HAM 180 162 53.1892049414 HAM 162 171 -7.1130613178 HAM 171 162 -7.1130613178 HAM 162 168 5.7159725845 HAM 168 162 5.7159725845 HAM 163 173 4.0418029756 HAM 173 163 4.0418029756 HAM 163 166 -6.9952137908 HAM 166 163 -6.9952137908 HAM 163 167 -7.4474313928 HAM 167 163 -7.4474313928 HAM 163 177 -3.6947004303 HAM 177 163 -3.6947004303 HAM 164 167 -49.1071906307 HAM 167 164 
-49.1071906307 HAM 165 166 9.9320810491 HAM 166 165 9.9320810491 HAM 166 168 37.6104475000 HAM 168 166 37.6104475000 HAM 166 180 8.0836059512 HAM 180 166 8.0836059512 HAM 167 170 -5.2250954574 HAM 170 167 -5.2250954574 HAM 169 172 4.4213197191 HAM 172 169 4.4213197191 HAM 170 174 -16.6255961673 HAM 174 170 -16.6255961673 HAM 170 176 -3.7169879872 HAM 176 170 -3.7169879872 HAM 170 177 -10.5322584806 HAM 177 170 -10.5322584806 HAM 171 174 37.6104475000 HAM 174 171 37.6104475000 HAM 172 174 -19.5997817736 HAM 174 172 -19.5997817736 HAM 172 177 -11.3018940423 HAM 177 172 -11.3018940423 HAM 172 176 11.3361967038 HAM 176 172 11.3361967038 HAM 173 175 -7.1130613178 HAM 175 173 -7.1130613178 HAM 173 180 -6.9952137908 HAM 180 173 -6.9952137908 HAM 173 179 -14.2538605748 HAM 179 173 -14.2538605748 HAM 174 180 -10.0593877856 HAM 180 174 -10.0593877856 HAM 174 176 -15.1221891670 HAM 176 174 -15.1221891670 HAM 174 175 -4.9463631074 HAM 175 174 -4.9463631074 HAM 174 177 20.2449951537 HAM 177 174 20.2449951537 HAM 175 177 4.2461309736 HAM 177 175 4.2461309736 HAM 182 195 53.1892049414 HAM 195 182 53.1892049414 HAM 182 192 4.0418029756 HAM 192 182 4.0418029756 HAM 183 192 37.6104475000 HAM 192 183 37.6104475000 HAM 183 195 5.7159725845 HAM 195 183 5.7159725845 HAM 183 191 9.9320810491 HAM 191 183 9.9320810491 HAM 183 184 -4.9463631074 HAM 184 183 -4.9463631074 HAM 184 187 -7.4474313928 HAM 187 184 -7.4474313928 HAM 185 187 -49.1071906307 HAM 187 185 -49.1071906307 HAM 185 189 18.5859629661 HAM 189 185 18.5859629661 HAM 185 190 2.1168581056 HAM 190 185 2.1168581056 HAM 186 196 -15.0952817015 HAM 196 186 -15.0952817015 HAM 186 194 -8.5713498230 HAM 194 186 -8.5713498230 HAM 186 193 2.8175646831 HAM 193 186 2.8175646831 HAM 187 188 2.1168581056 HAM 188 187 2.1168581056 HAM 188 190 -49.1071906307 HAM 190 188 -49.1071906307 HAM 193 195 -8.8854187243 HAM 195 193 -8.8854187243 HAM 193 194 -14.2538605748 HAM 194 193 -14.2538605748 HAM 193 196 -7.9627288995 HAM 196 193 -7.9627288995 HAM 194 196 3.0918811583 HAM 196 194 3.0918811583 HAM 195 198 28.6213026111 HAM 198 195 28.6213026111 HAM 195 197 25.9737696380 HAM 197 195 25.9737696380 HAM 195 199 27.8825741033 HAM 199 195 27.8825741033 HAM 197 198 47.0691516705 HAM 198 197 47.0691516705 HAM 197 199 47.2649412718 HAM 199 197 47.2649412718 HAM 198 199 46.4776828590 HAM 199 198 46.4776828590 HAM 200 202 49.5735503011 HAM 202 200 49.5735503011 HAM 200 201 44.9878886771 HAM 201 200 44.9878886771 HAM 200 203 48.2940349926 HAM 203 200 48.2940349926 HAM 201 202 47.0691516705 HAM 202 201 47.0691516705 HAM 201 203 47.2649412718 HAM 203 201 47.2649412718 HAM 202 204 -21.3479521101 HAM 204 202 -21.3479521101 HAM 202 203 46.4776828590 HAM 203 202 46.4776828590 HAM 203 204 -20.1580029410 HAM 204 203 -20.1580029410 -------------- next part -------------- A non-text attachment was scrubbed... Name: slepctester2.c Type: text/x-csrc Size: 21658 bytes Desc: URL: From hzhang at mcs.anl.gov Sun May 8 09:30:32 2016 From: hzhang at mcs.anl.gov (Hong) Date: Sun, 8 May 2016 09:30:32 -0500 Subject: [petsc-users] petsc and slepc incompatibility In-Reply-To: <572EF5A3.9080501@steinitz.org> References: <572EF5A3.9080501@steinitz.org> Message-ID: Dominic : > Hi Hong, > > I too am having compatibility issues. I looked at the git repos. 
> > v3.7 PETSc 3.7
>> v3.6.4 PETSc 3.6.4
>> v3.6.3 PETSc 3.6.3
>> v3.6.2 PETSc 3.6.2
>> v3.6.1 PETSc 3.6.1
>>
> and
>
> v3.6.3 SLEPc 3.6.3
>> v3.6.2 SLEPc 3.6.2
>> v3.6.1 SLEPc 3.6.1
>>
> If I just checkout master on both repos, won't I get PETSc 3.7 and SLEPc
> 3.6.3?
>
If you checkout master on both repos, you get PETSc 3.7 and SLEPc 3.7.
I did it a few days ago, works well. Jose will release SLEPc 3.7 soon.

Hong

>
>
> BTW I checked https://listas.upv.es/pipermail/slepc-announce/ for
> announcements on new versions but nothing has been posted since June 2015.
>
> Marco:
>> You may use petsc-master and slepc-master.
>> Hong
>>
>> Dear All,
>> >
>> > first, let me congratulate and thank the PETSc team and all
>> > contributors for release 3.7 .
>> >
>> > I just noticed that the changes in Options handling introduced in 3.7
>> > broke the build of the latest SLEPc (3.6.1, since 3.6.3 is flagged as
>> > incompatible with 3.7).
>> >
>> > Is there already a time estimate for a compatible release of SLEPc?
>> >
>> > Thank you and kind regards,
>> >
>> > Marco Zocca
>> >
>> >
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From balay at mcs.anl.gov Sun May 8 10:08:57 2016
From: balay at mcs.anl.gov (Satish Balay)
Date: Sun, 8 May 2016 10:08:57 -0500
Subject: [petsc-users] petsc and slepc incompatibility
In-Reply-To:
References: <572EF5A3.9080501@steinitz.org>
Message-ID:

On Sun, 8 May 2016, Hong wrote:

> If you checkout master on both repos, you get PETSc 3.7 and SLEPc 3.7.
> I did it a few days ago, works well. Jose will release SLEPc 3.7 soon.

Until slepc 3.7 is released - one should use petsc-3.7 or petsc-maint (from
git) with slepc-master (from git)

Satish

From balay at mcs.anl.gov Sun May 8 10:12:47 2016
From: balay at mcs.anl.gov (Satish Balay)
Date: Sun, 8 May 2016 10:12:47 -0500
Subject: [petsc-users] petsc and slepc incompatibility
In-Reply-To: <572EF952.20600@steinitz.org>
References: <572EF5A3.9080501@steinitz.org> <572EF696.20003@iue.tuwien.ac.at>
 <572EF75D.9000001@steinitz.org> <572EF952.20600@steinitz.org>
Message-ID:

On Sun, 8 May 2016, Dominic Steinitz wrote:

> Can someone advise me which version of PETSc and SLEPc are compatible? Should
> I try PETSc 3.6.4 and SLEPc 3.6.3 or 3.6.0?

To clarify - if using petsc-3.6 with slepc 3.6 - you should use the latest
patched versions of these packages. That would be petsc-3.6.4 with
slepc-3.6.3. There is no reason to use petsc-3.6.0 or 3.6.1 etc. as
petsc-3.6.4 provides bug fixes and supersedes them. [similarly with
slepc-3.6.0 ..]

Satish

From jroman at dsic.upv.es Sun May 8 10:58:18 2016
From: jroman at dsic.upv.es (Jose E. Roman)
Date: Sun, 8 May 2016 17:58:18 +0200
Subject: [petsc-users] EPSSolve finds eigensolutions in deflation space
In-Reply-To:
References:
Message-ID: <7D3F17BA-E328-45D3-91E6-F40EC353BBE9@dsic.upv.es>

Giacomo:
The custom function for stopping based on a user-defined criterion will be
available in slepc-3.7, to be released in a few days. You can already try
this functionality in the master branch of the development repo. Hope it
fits your needs.

Regarding the deflation space issue, I will try your example and get back
to you soon.

Jose

> El 8 may 2016, a las 13:20, Giacomo Mulas escribió:
>
> I found a problem in the default iterative eigensystem solver of SLEPC,
> namely the Krylov-Schur one: it does not respect the deflation space that it
> is given. This makes it very inefficient even in a case in which it should
> be extremely effective.
> > The problem I want to solve: I have a symmetric operator (an approximated > vibrational Hamiltonian, but that's unimportant) and I want to selectively > find a subset of eigenvectors so that some predefined vectors (which I call > "target" vectors) are very nearly contained in the eigenvector subspace. > > I therefore have both a well defined selection criterion (find eigenstates > which have maximum projection on the subspace of "target" vectors) and a > well defined condition for stopping (say, >99% of each of the "target" > vectors is contained in the subspace of the eigenstates found). > > The first one can be explicitly cast in a function and given to SLEPC via > EPSSetArbitrarySelection and EPSSetWhichEigenpairs, so that the solver is > guided to the solutions I am most interested in. I also make use of > EPSSetInitialSpace to have the solver start from a guess which is "rich" in > the "target" vectors (just the "target" vector which has the smallest > projection on the eigenstate space so far). > > As to stopping when I found just enough eigenstates, unfortunately it is > thus far impossible to give a custom function to slepc so that it will keep > finding eigenvectors until all are found or the user-defined function tells > it it's enough. Therefore, my second best solution was to repeatedly run > EPSSolve, aiming at a small number of eigensolutions in each call, putting > previously found solutions in the deflation space and passing it to the > solver via EPSSetDeflationSpace. Unfortunately, this did not simply work > out of the box. > > The first problem is that after the first few, easiest to find > eigensolutions are found, EPSSolve frequently does not find any solutions, > if run with a very conservative EPSSetDimensions (e.g. like "find me just > one eigensolution"). I thus wrote my code so that the number of requested > solutions in EPSSolve are increased if no new solutions are found by an > EPSSolve call, and decreased if too many were found in just one call. Inefficient as it may be, due to many useless EPSSolve calls, it works. > > However, when trying to use this on real cases, another problem arose, the > big one: frequently, EPSSolve would find and return eigenvectors that were > in the deflation space! Some eigensolutions would therefore appear more > than once in the set of solutions, of course breaking everything down the > road of my physics calculation before I figured out why. When I found out, I set up a check in my code that explicitly computes the > projection of newly found eigenstates on previously found ones, and discards > them if they are duplicates. But this is ineffective for several reasons: > first, such a check is a waste of time and should not be necessary, > otherwise what is the point in setting up a deflation space? second, and > most important, I have to run EPSSolve looking for lots of solutions to only > find one or two really new ones, because all the first ones are duplicates. Even if the initial guess is rich in the solution I actually want, and poor > in the duplicate ones. As a result, even if I would just need to find a > (relatively) small subset of the eigenvectors, lapack turns out to be always > much faster, which should not be the case. > > I am attaching a test code that shows this behaviour, in the hope that it > will help the SLEPC developers in finding why the deflation space is not > respected and, if possible, correct it. 
I know that being able to reproduce > a problem is the first, sometimes actually the most important, step to > attempt to solve it, so there it goes. Of course I also have a tiny hope > that they might actually include support for a user-defined check for > "enough solutions" in EPSSolve... There is in fact something not too > different already implemented, for EPSSolve can find all solutions with > eigenvalues in a given interval, without knowing in advance how many they > will be, without the user having to loop over many tentative EPSSolve calls > and finding lots of unneeded additional solutions in the process. > > To run the test code, just compile the source, and run it in the same > directory of the two data files. This is a small test case, so that it runs > fast enough, but I have much larger ones and can produce them more or less > as large as wanted on request. Upon running the code, you will see that > lots of tentative EPSSolve calls are made, many times finding no solutions > or only solutions already in the deflation space (which contains previously > found eigenvectors). > In this code I privileged clarity over efficiency, no attempt was made to > overoptimise anything. I put lots of comments in it, so that hopefully it > is self-explanatory. But of course I am more than willing to explain > whatever is not clear in it. Just ignore the part that reads the input > files, that's irrelevant. > > Of course, I would also very much appreciate it if I could get some clever > suggestions, from SLEPC developers or anyone else on the list, as to how to > solve my problem more efficiently. And of course I am perfectly willing > to thank them and/or to include them as authors in the scientific publication > describing my molecular spectroscopy code, if some substantial contribution > is offered. > > Best regards > Giacomo Mulas > > -- > _________________________________________________________________ > > Giacomo Mulas > _________________________________________________________________ > > INAF - Osservatorio Astronomico di Cagliari > via della scienza 5 - 09047 Selargius (CA) > > tel. +39 070 71180244 > mob. : +39 329 6603810 > _________________________________________________________________ > > "When the storms are raging around you, stay right where you are" > (Freddy Mercury) > _________________________________________________________________ From giacomo.mulas84 at gmail.com Sun May 8 11:47:32 2016 From: giacomo.mulas84 at gmail.com (Giacomo Mulas) Date: Sun, 8 May 2016 18:47:32 +0200 (CEST) Subject: [petsc-users] EPSSolve finds eigensolutions in deflation space In-Reply-To: <7D3F17BA-E328-45D3-91E6-F40EC353BBE9@dsic.upv.es> References: <7D3F17BA-E328-45D3-91E6-F40EC353BBE9@dsic.upv.es> Message-ID: On Sun, 8 May 2016, Jose E. Roman wrote: > The custom function for stopping based on a user-defined criterion will be > available in slepc-3.7, to be released in a few days. You can already try > this functionality in the master branch of the development repo. Hope it > fits your needs. That's great news, I'll definitely give it a try! If I can use it for my needs, it would also sidestep the deflation space issue, since I only use it to exclude already found eigenstates in subsequent calls of EPSSolve till I get enough. If I can get all the eigenstates I need, and (close to) no more than those I need, in one single EPSSolve call, I will not need to define the deflation space at all anymore. > Regarding the deflation space issue, I will try your example and get back > to you soon. 
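A minimal sketch of how such a user-defined "enough solutions" check could be
hooked in with the EPSSetStoppingTestFunction()/EPSStoppingBasic() interface
provided by slepc-master (the upcoming 3.7); the context structure, its fields
and the way they get updated are hypothetical placeholders, not code taken
from the attached slepctester2.c:

#include <slepceps.h>

/* Hypothetical context: tracks the smallest projection of any "target"
   vector onto the converged subspace (updated elsewhere, e.g. in a monitor). */
typedef struct {
  PetscReal min_captured;
  PetscReal threshold;      /* e.g. 0.99 */
} TargetStopCtx;

/* Custom stopping test: run the default checks first, then declare
   success as soon as the user-defined criterion is satisfied. */
PetscErrorCode TargetStopping(EPS eps,PetscInt its,PetscInt max_it,PetscInt nconv,
                              PetscInt nev,EPSConvergedReason *reason,void *ctx)
{
  TargetStopCtx  *sc = (TargetStopCtx*)ctx;
  PetscErrorCode ierr;

  PetscFunctionBeginUser;
  ierr = EPSStoppingBasic(eps,its,max_it,nconv,nev,reason,ctx);CHKERRQ(ierr);
  if (*reason == EPS_CONVERGED_ITERATING && sc->min_captured >= sc->threshold) {
    *reason = EPS_CONVERGED_USER;
  }
  PetscFunctionReturn(0);
}

/* Registration, after EPSCreate()/EPSSetOperators():
     EPSSetStoppingTestFunction(eps,TargetStopping,&stop_ctx,NULL);        */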
thanks! Of course, if useful, I am more than willing to help with it. Let me know. Bye Giacomo -- _________________________________________________________________ Giacomo Mulas _________________________________________________________________ INAF - Osservatorio Astronomico di Cagliari via della scienza 5 - 09047 Selargius (CA) tel. +39 070 71180244 mob. : +39 329 6603810 _________________________________________________________________ "When the storms are raging around you, stay right where you are" (Freddy Mercury) _________________________________________________________________ From dominic at steinitz.org Sun May 8 16:17:27 2016 From: dominic at steinitz.org (Dominic Steinitz) Date: Sun, 8 May 2016 22:17:27 +0100 Subject: [petsc-users] petsc and slepc incompatibility In-Reply-To: References: <572EF5A3.9080501@steinitz.org> Message-ID: <572FACE7.6090704@steinitz.org> Ah ok - thanks - there isn't an actual tag marked V3.7 which confused me. I won't be able to do much now until next weekend. Hopefully by then there will be a release (and a tag :-)). On 08/05/2016 15:30, Hong wrote: > Dominic : > > Hi Hong, > > I too am having compatibility issues. I looked at the git repos. > > v3.7 PETSc 3.7 > v3.6.4 PETSc 3.6.4 > v3.6.3 PETSc 3.6.3 > v3.6.2 PETSc 3.6.2 > v3.6.1 PETSc 3.6.1 > > and > > v3.6.3 SLEPc 3.6.3 > v3.6.2 SLEPc 3.6.2 > v3.6.1 SLEPc 3.6.1 > > If I just checkout master on both repos, won't I get PETSc 3.7 and > SLEPc 3.6.3? > > If you checkout master on both repos, you get PETSc 3.7 and SLEPc 3.7. > I did it few days ago, works well. Jose will release SLEPc 3.7 soon. > > Hong > > > > BTW I checked https://listas.upv.es/pipermail/slepc-announce/ for > announcements on new versions but nothing has been posted since > June 2015. > > Marco: > You may use petsc-master and slepc-master. > Hong > > Dear All, > > > > first, let me congratulate and thank the PETSc team and all > > contributors for release 3.7 . > > > > I just noticed that the changes in Options handling > introduced in 3.7 > > broke the build of the latest SLEPc (3.6.1, since 3.6.3 is > flagged as > > incompatible with 3.7). > > > > Is there already a time estimate for a compatible release of > SLEPc ? > > > > Thank you and kind regards, > > > > Marco Zocca > > > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From jroman at dsic.upv.es Mon May 9 10:06:51 2016 From: jroman at dsic.upv.es (Jose E. Roman) Date: Mon, 9 May 2016 17:06:51 +0200 Subject: [petsc-users] EPSSolve finds eigensolutions in deflation space In-Reply-To: References: <7D3F17BA-E328-45D3-91E6-F40EC353BBE9@dsic.upv.es> Message-ID: <93C5333C-F86E-4D43-BD7C-6DD81D7FD9DA@dsic.upv.es> > El 8 may 2016, a las 18:47, Giacomo Mulas escribi?: > > On Sun, 8 May 2016, Jose E. Roman wrote: > >> The custom function for stopping based on a user-defined criterion will be >> available in slepc-3.7, to be released in a few days. You can already try >> this functionality in the master branch of the development repo. Hope it >> fits your needs. > > That's great news, I'll definitely give it a try! If I can use it for my > needs, it would also sidestep the deflation space issue, since I only use it > to exclude already found eigenstates in subsequent calls of EPSSolve till I > get enough. If I can get all the eigenstates I need, and (close to) no more > than those I need, in one single EPSSolve call, I will not need to define > the deflation space at all anymore. 
> >> Regarding the deflation space issue, I will try your example and get back >> to you soon. > > thanks! Of course, if useful, I am more than willing to help with it. Let me > know. > > Bye > Giacomo The manpage of EPSSetDeflationSpace() says: These vectors do not persist from one EPSSolve() call to the other, so the deflation space should be set every time. In your code, you almost always call EPSSetDeflationSpace() before EPSSolve(), except when the inner loop (line 238) is iterated more than once. Try placing the EPSSetDeflationSpace() call inside this loop. Jose From jonatan.midtgaard at gmail.com Mon May 9 11:03:42 2016 From: jonatan.midtgaard at gmail.com (Jonatan Midtgaard) Date: Mon, 9 May 2016 18:03:42 +0200 Subject: [petsc-users] "Petsc has generated inconsistent data" during EPSSolve Message-ID: <7D736081-5F29-4A44-9956-1552BFC207A7@gmail.com> Hi all. I am using the EPSCISS solver in SLEPc to calculate eigenvalues of a Hermitian matrix close to the origin. (Actually, it is only very nearly Hermitian, but this happens also for NHEP-routines). I am running with 4 cores, and the matrix prints fine with MatView() just before the EPSSolve. Anyhow, I get the following error: (I am new to both PETSc and this mailing list. Should error messages be attached or pasted into the body?) [2]PETSC ERROR: --------------------- Error Message -------------------------------------------------------------- [2]PETSC ERROR: Petsc has generated inconsistent data [2]PETSC ERROR: ith 3844 block entry 7688 not owned by any process, upper bound 7688 [2]PETSC ERROR: See http://www.mcs.anl.gov/petsc/documentation/faq.html for trouble shooting. [2]PETSC ERROR: Petsc Release Version 3.6.3, Dec, 03, 2015 [2]PETSC ERROR: hello on a complex named st-d13411.nfit.au.dk by jonat4n Mon May 9 17:47:26 2016 [2]PETSC ERROR: Configure options CC=/usr/local/bin/mpicc CXX=/usr/local/bin/mpicxx F77=/usr/local/bin/mpif77 FC=/usr/local/bin/mpif90 --with-shared-libraries=1 --with-pthread=0 --with-openmp=0 --with-debugging=0 --with-ssl=0 --with-superlu_dist-include=/usr/local/opt/superlu_dist/include/superlu_dist --with-superlu_dist-lib="-L/usr/local/opt/superlu_dist/lib -lsuperlu_dist" --with-superlu-include=/usr/local/Cellar/superlu43/4.3_1/include/superlu --with-superlu-lib="-L/usr/local/Cellar/superlu43/4.3_1/lib -lsuperlu" --with-fftw-dir=/usr/local/opt/fftw --with-netcdf-dir=/usr/local/opt/netcdf --with-suitesparse-dir=/usr/local/opt/suite-sparse --with-hdf5-dir=/usr/local/opt/hdf5 --with-metis-dir=/usr/local/opt/metis --with-parmetis-dir=/usr/local/opt/parmetis --with-scalapack-dir=/usr/local/opt/scalapack --with-mumps-dir=/usr/local/opt/mumps/libexec --with-x=0 --prefix=/usr/local/Cellar/petsc/3.6.3_4/complex --with-scalar-type=complex [2]PETSC ERROR: #1 VecScatterCreate_PtoS() line 2339 in /private/tmp/petsc-20160503-43754-184nsu/petsc-3.6.3/src/vec/vec/utils/vpscat.c [2]PETSC ERROR: #2 VecScatterCreate_StoP() line 2795 in /private/tmp/petsc-20160503-43754-184nsu/petsc-3.6.3/src/vec/vec/utils/vpscat.c [2]PETSC ERROR: #3 VecScatterCreate_PtoP() line 2984 in /private/tmp/petsc-20160503-43754-184nsu/petsc-3.6.3/src/vec/vec/utils/vpscat.c [2]PETSC ERROR: #4 VecScatterCreate() line 1570 in /private/tmp/petsc-20160503-43754-184nsu/petsc-3.6.3/src/vec/vec/utils/vscat.c [2]PETSC ERROR: #5 PCSetUp_Redundant() line 133 in /private/tmp/petsc-20160503-43754-184nsu/petsc-3.6.3/src/ksp/pc/impls/redundant/redundant.c [2]PETSC ERROR: #6 PCSetUp() line 983 in 
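A minimal sketch of the loop structure suggested in the reply above, with
EPSSetDeflationSpace() re-issued before every EPSSolve() call, since the
deflation vectors do not persist across solves; the function, the "enough"
test and all variable names here are hypothetical, not taken from the
attached test code:

#include <slepceps.h>

/* Sketch only: solve repeatedly, deflating everything found so far.
   Harvesting of eigenpairs and the user-defined "enough" criterion are
   placeholders. */
PetscErrorCode SolveWithDeflationLoop(EPS eps,PetscInt maxvecs,Vec *Cv,PetscInt *nconv_total)
{
  PetscErrorCode ierr;
  PetscBool      enough = PETSC_FALSE;

  PetscFunctionBeginUser;
  while (!enough && *nconv_total < maxvecs) {
    if (*nconv_total > 0) {
      /* must be called inside the loop, before every solve */
      ierr = EPSSetDeflationSpace(eps,*nconv_total,Cv);CHKERRQ(ierr);
    }
    ierr = EPSSolve(eps);CHKERRQ(ierr);
    /* ... gather new eigenvectors with EPSGetConverged()/EPSGetEigenvector(),
       copy them into Cv, update *nconv_total, and set enough = PETSC_TRUE
       once the user-defined criterion is satisfied ... */
    enough = PETSC_TRUE;  /* placeholder so the sketch terminates */
  }
  PetscFunctionReturn(0);
}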
/private/tmp/petsc-20160503-43754-184nsu/petsc-3.6.3/src/ksp/pc/interface/precon.c [3]PETSC ERROR: --------------------- Error Message -------------------------------------------------------------- [3]PETSC ERROR: Petsc has generated inconsistent data [3]PETSC ERROR: ith 1922 block entry 7688 not owned by any process, upper bound 7688 [3]PETSC ERROR: See http://www.mcs.anl.gov/petsc/documentation/faq.html for trouble shooting. [3]PETSC ERROR: Petsc Release Version 3.6.3, Dec, 03, 2015 [3]PETSC ERROR: hello on a complex named st-d13411.nfit.au.dk by jonat4n Mon May 9 17:47:26 2016 [3]PETSC ERROR: Configure options CC=/usr/local/bin/mpicc CXX=/usr/local/bin/mpicxx F77=/usr/local/bin/mpif77 FC=/usr/local/bin/mpif90 --with-shared-libraries=1 --with-pthread=0 --with-openmp=0 --with-debugging=0 --with-ssl=0 --with-superlu_dist-include=/usr/local/opt/superlu_dist/include/superlu_dist --with-superlu_dist-lib="-L/usr/local/opt/superlu_dist/lib -lsuperlu_dist" --with-superlu-include=/usr/local/Cellar/superlu43/4.3_1/include/superlu --with-superlu-lib="-L/usr/local/Cellar/superlu43/4.3_1/lib -lsuperlu" --with-fftw-dir=/usr/local/opt/fftw --with-netcdf-dir=/usr/local/opt/netcdf --with-suitesparse-dir=/usr/local/opt/suite-sparse --with-hdf5-dir=/usr/local/opt/hdf5 --with-metis-dir=/usr/local/opt/metis --with-parmetis-dir=/usr/local/opt/parmetis --with-scalapack-dir=/usr/local/opt/scalapack --with-mumps-dir=/usr/local/opt/mumps/libexec --with-x=0 --prefix=/usr/local/Cellar/petsc/3.6.3_4/complex --with-scalar-type=complex [3]PETSC ERROR: #1 VecScatterCreate_PtoS() line 2339 in /private/tmp/petsc-20160503-43754-184nsu/petsc-3.6.3/src/vec/vec/utils/vpscat.c [3]PETSC ERROR: #2 VecScatterCreate_StoP() line 2795 in /private/tmp/petsc-20160503-43754-184nsu/petsc-3.6.3/src/vec/vec/utils/vpscat.c [3]PETSC ERROR: #3 VecScatterCreate_PtoP() line 2984 in /private/tmp/petsc-20160503-43754-184nsu/petsc-3.6.3/src/vec/vec/utils/vpscat.c [3]PETSC ERROR: #4 VecScatterCreate() line 1570 in /private/tmp/petsc-20160503-43754-184nsu/petsc-3.6.3/src/vec/vec/utils/vscat.c [3]PETSC ERROR: #5 PCSetUp_Redundant() line 133 in /private/tmp/petsc-20160503-43754-184nsu/petsc-3.6.3/src/ksp/pc/impls/redundant/redundant.c [3]PETSC ERROR: #6 PCSetUp() line 983 in /private/tmp/petsc-20160503-43754-184nsu/petsc-3.6.3/src/ksp/pc/interface/precon.c [3]PETSC ERROR: #7 KSPSetUp() line 332 in /private/tmp/petsc-20160503-43754-184nsu/petsc-3.6.3/src/ksp/ksp/interface/itfunc.c [3]PETSC ERROR: [1]PETSC ERROR: --------------------- Error Message -------------------------------------------------------------- [1]PETSC ERROR: Petsc has generated inconsistent data [1]PETSC ERROR: ith 5766 block entry 7688 not owned by any process, upper bound 7688 [1]PETSC ERROR: See http://www.mcs.anl.gov/petsc/documentation/faq.html for trouble shooting. 
[1]PETSC ERROR: Petsc Release Version 3.6.3, Dec, 03, 2015 [1]PETSC ERROR: hello on a complex named st-d13411.nfit.au.dk by jonat4n Mon May 9 17:47:26 2016 [1]PETSC ERROR: Configure options CC=/usr/local/bin/mpicc CXX=/usr/local/bin/mpicxx F77=/usr/local/bin/mpif77 FC=/usr/local/bin/mpif90 --with-shared-libraries=1 --with-pthread=0 --with-openmp=0 --with-debugging=0 --with-ssl=0 --with-superlu_dist-include=/usr/local/opt/superlu_dist/include/superlu_dist --with-superlu_dist-lib="-L/usr/local/opt/superlu_dist/lib -lsuperlu_dist" --with-superlu-include=/usr/local/Cellar/superlu43/4.3_1/include/superlu --with-superlu-lib="-L/usr/local/Cellar/superlu43/4.3_1/lib -lsuperlu" --with-fftw-dir=/usr/local/opt/fftw --with-netcdf-dir=/usr/local/opt/netcdf --with-suitesparse-dir=/usr/local/opt/suite-sparse --with-hdf5-dir=/usr/local/opt/hdf5 --with-metis-dir=/usr/local/opt/metis --with-parmetis-dir=/usr/local/opt/parmetis --with-scalapack-dir=/usr/local/opt/scalapack --with-mumps-dir=/usr/local/opt/mumps/libexec --with-x=0 --prefix=/usr/local/Cellar/petsc/3.6.3_4/complex --with-scalar-type=complex [1]PETSC ERROR: #1 VecScatterCreate_PtoS() line 2339 in /private/tmp/petsc-20160503-43754-184nsu/petsc-3.6.3/src/vec/vec/utils/vpscat.c [1]PETSC ERROR: #2 VecScatterCreate_StoP() line 2795 in /private/tmp/petsc-20160503-43754-184nsu/petsc-3.6.3/src/vec/vec/utils/vpscat.c [1]PETSC ERROR: #3 VecScatterCreate_PtoP() line 2984 in /private/tmp/petsc-20160503-43754-184nsu/petsc-3.6.3/src/vec/vec/utils/vpscat.c [1]PETSC ERROR: #4 VecScatterCreate() line 1570 in /private/tmp/petsc-20160503-43754-184nsu/petsc-3.6.3/src/vec/vec/utils/vscat.c [1]PETSC ERROR: #5 PCSetUp_Redundant() line 133 in /private/tmp/petsc-20160503-43754-184nsu/petsc-3.6.3/src/ksp/pc/impls/redundant/redundant.c [1]PETSC ERROR: #6 PCSetUp() line 983 in /private/tmp/petsc-20160503-43754-184nsu/petsc-3.6.3/src/ksp/pc/interface/precon.c [1]PETSC ERROR: #7 KSPSetUp() line 332 in /private/tmp/petsc-20160503-43754-184nsu/petsc-3.6.3/src/ksp/ksp/interface/itfunc.c [1]PETSC ERROR: #8 KSPSolve() line 546 in /private/tmp/petsc-20160503-43754-184nsu/petsc-3.6.3/src/ksp/ksp/interface/itfunc.c [1]PETSC ERROR: [2]PETSC ERROR: #7 KSPSetUp() line 332 in /private/tmp/petsc-20160503-43754-184nsu/petsc-3.6.3/src/ksp/ksp/interface/itfunc.c [2]PETSC ERROR: #8 KSPSolve() line 546 in /private/tmp/petsc-20160503-43754-184nsu/petsc-3.6.3/src/ksp/ksp/interface/itfunc.c [2]PETSC ERROR: #9 SolveLinearSystem() line 337 in /private/tmp/slepc-20160503-86919-187yrgp/slepc-3.6.2/src/eps/impls/ciss/ciss.c [2]PETSC ERROR: #10 EPSSolve_CISS() line 868 in /private/tmp/slepc-20160503-86919-187yrgp/slepc-3.6.2/src/eps/impls/ciss/ciss.c [2]PETSC ERROR: #11 EPSSolve() line 101 in /private/tmp/slepc-20160503-86919-187yrgp/slepc-3.6.2/src/eps/interface/epssolve.c [2]PETSC ERROR: #12 main() line 795 in hello.c [2]PETSC ERROR: PETSc Option Table entries: [2]PETSC ERROR: -n 31 [2]PETSC ERROR: #8 KSPSolve() line 546 in /private/tmp/petsc-20160503-43754-184nsu/petsc-3.6.3/src/ksp/ksp/interface/itfunc.c [3]PETSC ERROR: #9 SolveLinearSystem() line 337 in /private/tmp/slepc-20160503-86919-187yrgp/slepc-3.6.2/src/eps/impls/ciss/ciss.c [3]PETSC ERROR: #10 EPSSolve_CISS() line 868 in /private/tmp/slepc-20160503-86919-187yrgp/slepc-3.6.2/src/eps/impls/ciss/ciss.c [3]PETSC ERROR: #11 EPSSolve() line 101 in /private/tmp/slepc-20160503-86919-187yrgp/slepc-3.6.2/src/eps/interface/epssolve.c [3]PETSC ERROR: #12 main() line 795 in hello.c [3]PETSC ERROR: PETSc Option Table entries: [3]PETSC ERROR: -n 31 #9 
SolveLinearSystem() line 337 in /private/tmp/slepc-20160503-86919-187yrgp/slepc-3.6.2/src/eps/impls/ciss/ciss.c [1]PETSC ERROR: #10 EPSSolve_CISS() line 868 in /private/tmp/slepc-20160503-86919-187yrgp/slepc-3.6.2/src/eps/impls/ciss/ciss.c [1]PETSC ERROR: #11 EPSSolve() line 101 in /private/tmp/slepc-20160503-86919-187yrgp/slepc-3.6.2/src/eps/interface/epssolve.c [1]PETSC ERROR: #12 main() line 795 in hello.c [1]PETSC ERROR: PETSc Option Table entries: [1]PETSC ERROR: -n 31 ----------------End of Error Message -------send entire error message to petsc-maint at mcs.anl.gov---------- [3]PETSC ERROR: ----------------End of Error Message -------send entire error message to petsc-maint at mcs.anl.gov---------- [1]PETSC ERROR: ----------------End of Error Message -------send entire error message to petsc-maint at mcs.anl.gov---------- -------------------------------------------------------------------------- MPI_ABORT was invoked on rank 2 in communicator MPI_COMM_WORLD with errorcode 77. NOTE: invoking MPI_ABORT causes Open MPI to kill all MPI processes. You may or may not see output from other processes, depending on exactly when Open MPI kills them. -------------------------------------------------------------------------- [0]PETSC ERROR: ------------------------------------------------------------------------ [0]PETSC ERROR: Caught signal number 15 Terminate: Some process (or the batch system) has told this process to end [0]PETSC ERROR: Try option -start_in_debugger or -on_error_attach_debugger [0]PETSC ERROR: or see http://www.mcs.anl.gov/petsc/documentation/faq.html#valgrind [0]PETSC ERROR: or try http://valgrind.org on GNU/linux and Apple Mac OS X to find memory corruption errors [0]PETSC ERROR: configure using --with-debugging=yes, recompile, link, and run [0]PETSC ERROR: to get more information on the crash. [0]PETSC ERROR: --------------------- Error Message -------------------------------------------------------------- [0]PETSC ERROR: Signal received [0]PETSC ERROR: See http://www.mcs.anl.gov/petsc/documentation/faq.html for trouble shooting. [0]PETSC ERROR: Petsc Release Version 3.6.3, Dec, 03, 2015 [0]PETSC ERROR: hello on a complex named st-d13411.nfit.au.dk by jonat4n Mon May 9 17:47:26 2016 [0]PETSC ERROR: Configure options CC=/usr/local/bin/mpicc CXX=/usr/local/bin/mpicxx F77=/usr/local/bin/mpif77 FC=/usr/local/bin/mpif90 --with-shared-libraries=1 --with-pthread=0 --with-openmp=0 --with-debugging=0 --with-ssl=0 --with-superlu_dist-include=/usr/local/opt/superlu_dist/include/superlu_dist --with-superlu_dist-lib="-L/usr/local/opt/superlu_dist/lib -lsuperlu_dist" --with-superlu-include=/usr/local/Cellar/superlu43/4.3_1/include/superlu --with-superlu-lib="-L/usr/local/Cellar/superlu43/4.3_1/lib -lsuperlu" --with-fftw-dir=/usr/local/opt/fftw --with-netcdf-dir=/usr/local/opt/netcdf --with-suitesparse-dir=/usr/local/opt/suite-sparse --with-hdf5-dir=/usr/local/opt/hdf5 --with-metis-dir=/usr/local/opt/metis --with-parmetis-dir=/usr/local/opt/parmetis --with-scalapack-dir=/usr/local/opt/scalapack --with-mumps-dir=/usr/local/opt/mumps/libexec --with-x=0 --prefix=/usr/local/Cellar/petsc/3.6.3_4/complex --with-scalar-type=complex [0]PETSC ERROR: #1 User provided function() line 0 in unknown file [st-d13411.nfit.au.dk:90621] 3 more processes have sent help message help-mpi-api.txt / mpi-abort [st-d13411.nfit.au.dk:90621] Set MCA parameter "orte_base_help_aggregate" to 0 to see all help / error messages ??????????? 
When running with 1 core, I get the error message: [0]PETSC ERROR: --------------------- Error Message -------------------------------------------------------------- [0]PETSC ERROR: Error in external library [0]PETSC ERROR: Error in Lapack xGGES 154 [0]PETSC ERROR: See http://www.mcs.anl.gov/petsc/documentation/faq.html for trouble shooting. [0]PETSC ERROR: Petsc Release Version 3.6.3, Dec, 03, 2015 [0]PETSC ERROR: hello on a complex named st-d13411.nfit.au.dk by jonat4n Mon May 9 17:59:03 2016 [0]PETSC ERROR: Configure options CC=/usr/local/bin/mpicc CXX=/usr/local/bin/mpicxx F77=/usr/local/bin/mpif77 FC=/usr/local/bin/mpif90 --with-shared-libraries=1 --with-pthread=0 --with-openmp=0 --with-debugging=0 --with-ssl=0 --with-superlu_dist-include=/usr/local/opt/superlu_dist/include/superlu_dist --with-superlu_dist-lib="-L/usr/local/opt/superlu_dist/lib -lsuperlu_dist" --with-superlu-include=/usr/local/Cellar/superlu43/4.3_1/include/superlu --with-superlu-lib="-L/usr/local/Cellar/superlu43/4.3_1/lib -lsuperlu" --with-fftw-dir=/usr/local/opt/fftw --with-netcdf-dir=/usr/local/opt/netcdf --with-suitesparse-dir=/usr/local/opt/suite-sparse --with-hdf5-dir=/usr/local/opt/hdf5 --with-metis-dir=/usr/local/opt/metis --with-parmetis-dir=/usr/local/opt/parmetis --with-scalapack-dir=/usr/local/opt/scalapack --with-mumps-dir=/usr/local/opt/mumps/libexec --with-x=0 --prefix=/usr/local/Cellar/petsc/3.6.3_4/complex --with-scalar-type=complex [0]PETSC ERROR: #1 DSSolve_GNHEP() line 581 in /private/tmp/slepc-20160503-86919-187yrgp/slepc-3.6.2/src/sys/classes/ds/impls/gnhep/dsgnhep.c [0]PETSC ERROR: #2 DSSolve() line 543 in /private/tmp/slepc-20160503-86919-187yrgp/slepc-3.6.2/src/sys/classes/ds/interface/dsops.c [0]PETSC ERROR: #3 EPSSolve_CISS() line 943 in /private/tmp/slepc-20160503-86919-187yrgp/slepc-3.6.2/src/eps/impls/ciss/ciss.c [0]PETSC ERROR: #4 EPSSolve() line 101 in /private/tmp/slepc-20160503-86919-187yrgp/slepc-3.6.2/src/eps/interface/epssolve.c [0]PETSC ERROR: #5 main() line 795 in hello.c [0]PETSC ERROR: PETSc Option Table entries: [0]PETSC ERROR: -n 31 [0]PETSC ERROR: ----------------End of Error Message -------send entire error message to petsc-maint at mcs.anl.gov---------- -------------- next part -------------- An HTML attachment was scrubbed... URL: From jroman at dsic.upv.es Mon May 9 11:16:05 2016 From: jroman at dsic.upv.es (Jose E. Roman) Date: Mon, 9 May 2016 18:16:05 +0200 Subject: [petsc-users] "Petsc has generated inconsistent data" during EPSSolve In-Reply-To: <7D736081-5F29-4A44-9956-1552BFC207A7@gmail.com> References: <7D736081-5F29-4A44-9956-1552BFC207A7@gmail.com> Message-ID: Is this with the master branch of SLEPc? Which options are setting for the CISS solver? Jose > El 9 may 2016, a las 18:03, Jonatan Midtgaard escribi?: > > Hi all. I am using the EPSCISS solver in SLEPc to calculate eigenvalues of a Hermitian matrix close to the origin. (Actually, it is only very nearly Hermitian, but this happens also for NHEP-routines). I am running with 4 cores, and the matrix prints fine with MatView() just before the EPSSolve. Anyhow, I get the following error: > (I am new to both PETSc and this mailing list. Should error messages be attached or pasted into the body?) 
> > [2]PETSC ERROR: --------------------- Error Message -------------------------------------------------------------- > [2]PETSC ERROR: Petsc has generated inconsistent data > [2]PETSC ERROR: ith 3844 block entry 7688 not owned by any process, upper bound 7688 > [2]PETSC ERROR: See http://www.mcs.anl.gov/petsc/documentation/faq.html for trouble shooting. > [2]PETSC ERROR: Petsc Release Version 3.6.3, Dec, 03, 2015 > [2]PETSC ERROR: hello on a complex named st-d13411.nfit.au.dk by jonat4n Mon May 9 17:47:26 2016 > [2]PETSC ERROR: Configure options CC=/usr/local/bin/mpicc CXX=/usr/local/bin/mpicxx F77=/usr/local/bin/mpif77 FC=/usr/local/bin/mpif90 --with-shared-libraries=1 --with-pthread=0 --with-openmp=0 --with-debugging=0 --with-ssl=0 --with-superlu_dist-include=/usr/local/opt/superlu_dist/include/superlu_dist --with-superlu_dist-lib="-L/usr/local/opt/superlu_dist/lib -lsuperlu_dist" --with-superlu-include=/usr/local/Cellar/superlu43/4.3_1/include/superlu --with-superlu-lib="-L/usr/local/Cellar/superlu43/4.3_1/lib -lsuperlu" --with-fftw-dir=/usr/local/opt/fftw --with-netcdf-dir=/usr/local/opt/netcdf --with-suitesparse-dir=/usr/local/opt/suite-sparse --with-hdf5-dir=/usr/local/opt/hdf5 --with-metis-dir=/usr/local/opt/metis --with-parmetis-dir=/usr/local/opt/parmetis --with-scalapack-dir=/usr/local/opt/scalapack --with-mumps-dir=/usr/local/opt/mumps/libexec --with-x=0 --prefix=/usr/local/Cellar/petsc/3.6.3_4/complex --with-scalar-type=complex > [2]PETSC ERROR: #1 VecScatterCreate_PtoS() line 2339 in /private/tmp/petsc-20160503-43754-184nsu/petsc-3.6.3/src/vec/vec/utils/vpscat.c > [2]PETSC ERROR: #2 VecScatterCreate_StoP() line 2795 in /private/tmp/petsc-20160503-43754-184nsu/petsc-3.6.3/src/vec/vec/utils/vpscat.c > [2]PETSC ERROR: #3 VecScatterCreate_PtoP() line 2984 in /private/tmp/petsc-20160503-43754-184nsu/petsc-3.6.3/src/vec/vec/utils/vpscat.c > [2]PETSC ERROR: #4 VecScatterCreate() line 1570 in /private/tmp/petsc-20160503-43754-184nsu/petsc-3.6.3/src/vec/vec/utils/vscat.c > [2]PETSC ERROR: #5 PCSetUp_Redundant() line 133 in /private/tmp/petsc-20160503-43754-184nsu/petsc-3.6.3/src/ksp/pc/impls/redundant/redundant.c > [2]PETSC ERROR: #6 PCSetUp() line 983 in /private/tmp/petsc-20160503-43754-184nsu/petsc-3.6.3/src/ksp/pc/interface/precon.c > [3]PETSC ERROR: --------------------- Error Message -------------------------------------------------------------- > [3]PETSC ERROR: Petsc has generated inconsistent data > [3]PETSC ERROR: ith 1922 block entry 7688 not owned by any process, upper bound 7688 > [3]PETSC ERROR: See http://www.mcs.anl.gov/petsc/documentation/faq.html for trouble shooting. 
> [3]PETSC ERROR: Petsc Release Version 3.6.3, Dec, 03, 2015 > [3]PETSC ERROR: hello on a complex named st-d13411.nfit.au.dk by jonat4n Mon May 9 17:47:26 2016 > [3]PETSC ERROR: Configure options CC=/usr/local/bin/mpicc CXX=/usr/local/bin/mpicxx F77=/usr/local/bin/mpif77 FC=/usr/local/bin/mpif90 --with-shared-libraries=1 --with-pthread=0 --with-openmp=0 --with-debugging=0 --with-ssl=0 --with-superlu_dist-include=/usr/local/opt/superlu_dist/include/superlu_dist --with-superlu_dist-lib="-L/usr/local/opt/superlu_dist/lib -lsuperlu_dist" --with-superlu-include=/usr/local/Cellar/superlu43/4.3_1/include/superlu --with-superlu-lib="-L/usr/local/Cellar/superlu43/4.3_1/lib -lsuperlu" --with-fftw-dir=/usr/local/opt/fftw --with-netcdf-dir=/usr/local/opt/netcdf --with-suitesparse-dir=/usr/local/opt/suite-sparse --with-hdf5-dir=/usr/local/opt/hdf5 --with-metis-dir=/usr/local/opt/metis --with-parmetis-dir=/usr/local/opt/parmetis --with-scalapack-dir=/usr/local/opt/scalapack --with-mumps-dir=/usr/local/opt/mumps/libexec --with-x=0 --prefix=/usr/local/Cellar/petsc/3.6.3_4/complex --with-scalar-type=complex > [3]PETSC ERROR: #1 VecScatterCreate_PtoS() line 2339 in /private/tmp/petsc-20160503-43754-184nsu/petsc-3.6.3/src/vec/vec/utils/vpscat.c > [3]PETSC ERROR: #2 VecScatterCreate_StoP() line 2795 in /private/tmp/petsc-20160503-43754-184nsu/petsc-3.6.3/src/vec/vec/utils/vpscat.c > [3]PETSC ERROR: #3 VecScatterCreate_PtoP() line 2984 in /private/tmp/petsc-20160503-43754-184nsu/petsc-3.6.3/src/vec/vec/utils/vpscat.c > [3]PETSC ERROR: #4 VecScatterCreate() line 1570 in /private/tmp/petsc-20160503-43754-184nsu/petsc-3.6.3/src/vec/vec/utils/vscat.c > [3]PETSC ERROR: #5 PCSetUp_Redundant() line 133 in /private/tmp/petsc-20160503-43754-184nsu/petsc-3.6.3/src/ksp/pc/impls/redundant/redundant.c > [3]PETSC ERROR: #6 PCSetUp() line 983 in /private/tmp/petsc-20160503-43754-184nsu/petsc-3.6.3/src/ksp/pc/interface/precon.c > [3]PETSC ERROR: #7 KSPSetUp() line 332 in /private/tmp/petsc-20160503-43754-184nsu/petsc-3.6.3/src/ksp/ksp/interface/itfunc.c > [3]PETSC ERROR: [1]PETSC ERROR: --------------------- Error Message -------------------------------------------------------------- > [1]PETSC ERROR: Petsc has generated inconsistent data > [1]PETSC ERROR: ith 5766 block entry 7688 not owned by any process, upper bound 7688 > [1]PETSC ERROR: See http://www.mcs.anl.gov/petsc/documentation/faq.html for trouble shooting. 
> [1]PETSC ERROR: Petsc Release Version 3.6.3, Dec, 03, 2015 > [1]PETSC ERROR: hello on a complex named st-d13411.nfit.au.dk by jonat4n Mon May 9 17:47:26 2016 > [1]PETSC ERROR: Configure options CC=/usr/local/bin/mpicc CXX=/usr/local/bin/mpicxx F77=/usr/local/bin/mpif77 FC=/usr/local/bin/mpif90 --with-shared-libraries=1 --with-pthread=0 --with-openmp=0 --with-debugging=0 --with-ssl=0 --with-superlu_dist-include=/usr/local/opt/superlu_dist/include/superlu_dist --with-superlu_dist-lib="-L/usr/local/opt/superlu_dist/lib -lsuperlu_dist" --with-superlu-include=/usr/local/Cellar/superlu43/4.3_1/include/superlu --with-superlu-lib="-L/usr/local/Cellar/superlu43/4.3_1/lib -lsuperlu" --with-fftw-dir=/usr/local/opt/fftw --with-netcdf-dir=/usr/local/opt/netcdf --with-suitesparse-dir=/usr/local/opt/suite-sparse --with-hdf5-dir=/usr/local/opt/hdf5 --with-metis-dir=/usr/local/opt/metis --with-parmetis-dir=/usr/local/opt/parmetis --with-scalapack-dir=/usr/local/opt/scalapack --with-mumps-dir=/usr/local/opt/mumps/libexec --with-x=0 --prefix=/usr/local/Cellar/petsc/3.6.3_4/complex --with-scalar-type=complex > [1]PETSC ERROR: #1 VecScatterCreate_PtoS() line 2339 in /private/tmp/petsc-20160503-43754-184nsu/petsc-3.6.3/src/vec/vec/utils/vpscat.c > [1]PETSC ERROR: #2 VecScatterCreate_StoP() line 2795 in /private/tmp/petsc-20160503-43754-184nsu/petsc-3.6.3/src/vec/vec/utils/vpscat.c > [1]PETSC ERROR: #3 VecScatterCreate_PtoP() line 2984 in /private/tmp/petsc-20160503-43754-184nsu/petsc-3.6.3/src/vec/vec/utils/vpscat.c > [1]PETSC ERROR: #4 VecScatterCreate() line 1570 in /private/tmp/petsc-20160503-43754-184nsu/petsc-3.6.3/src/vec/vec/utils/vscat.c > [1]PETSC ERROR: #5 PCSetUp_Redundant() line 133 in /private/tmp/petsc-20160503-43754-184nsu/petsc-3.6.3/src/ksp/pc/impls/redundant/redundant.c > [1]PETSC ERROR: #6 PCSetUp() line 983 in /private/tmp/petsc-20160503-43754-184nsu/petsc-3.6.3/src/ksp/pc/interface/precon.c > [1]PETSC ERROR: #7 KSPSetUp() line 332 in /private/tmp/petsc-20160503-43754-184nsu/petsc-3.6.3/src/ksp/ksp/interface/itfunc.c > [1]PETSC ERROR: #8 KSPSolve() line 546 in /private/tmp/petsc-20160503-43754-184nsu/petsc-3.6.3/src/ksp/ksp/interface/itfunc.c > [1]PETSC ERROR: [2]PETSC ERROR: #7 KSPSetUp() line 332 in /private/tmp/petsc-20160503-43754-184nsu/petsc-3.6.3/src/ksp/ksp/interface/itfunc.c > [2]PETSC ERROR: #8 KSPSolve() line 546 in /private/tmp/petsc-20160503-43754-184nsu/petsc-3.6.3/src/ksp/ksp/interface/itfunc.c > [2]PETSC ERROR: #9 SolveLinearSystem() line 337 in /private/tmp/slepc-20160503-86919-187yrgp/slepc-3.6.2/src/eps/impls/ciss/ciss.c > [2]PETSC ERROR: #10 EPSSolve_CISS() line 868 in /private/tmp/slepc-20160503-86919-187yrgp/slepc-3.6.2/src/eps/impls/ciss/ciss.c > [2]PETSC ERROR: #11 EPSSolve() line 101 in /private/tmp/slepc-20160503-86919-187yrgp/slepc-3.6.2/src/eps/interface/epssolve.c > [2]PETSC ERROR: #12 main() line 795 in hello.c > [2]PETSC ERROR: PETSc Option Table entries: > [2]PETSC ERROR: -n 31 > [2]PETSC ERROR: #8 KSPSolve() line 546 in /private/tmp/petsc-20160503-43754-184nsu/petsc-3.6.3/src/ksp/ksp/interface/itfunc.c > [3]PETSC ERROR: #9 SolveLinearSystem() line 337 in /private/tmp/slepc-20160503-86919-187yrgp/slepc-3.6.2/src/eps/impls/ciss/ciss.c > [3]PETSC ERROR: #10 EPSSolve_CISS() line 868 in /private/tmp/slepc-20160503-86919-187yrgp/slepc-3.6.2/src/eps/impls/ciss/ciss.c > [3]PETSC ERROR: #11 EPSSolve() line 101 in /private/tmp/slepc-20160503-86919-187yrgp/slepc-3.6.2/src/eps/interface/epssolve.c > [3]PETSC ERROR: #12 main() line 795 in hello.c > [3]PETSC ERROR: PETSc 
Option Table entries: > [3]PETSC ERROR: -n 31 > #9 SolveLinearSystem() line 337 in /private/tmp/slepc-20160503-86919-187yrgp/slepc-3.6.2/src/eps/impls/ciss/ciss.c > [1]PETSC ERROR: #10 EPSSolve_CISS() line 868 in /private/tmp/slepc-20160503-86919-187yrgp/slepc-3.6.2/src/eps/impls/ciss/ciss.c > [1]PETSC ERROR: #11 EPSSolve() line 101 in /private/tmp/slepc-20160503-86919-187yrgp/slepc-3.6.2/src/eps/interface/epssolve.c > [1]PETSC ERROR: #12 main() line 795 in hello.c > [1]PETSC ERROR: PETSc Option Table entries: > [1]PETSC ERROR: -n 31 > ----------------End of Error Message -------send entire error message to petsc-maint at mcs.anl.gov---------- > [3]PETSC ERROR: ----------------End of Error Message -------send entire error message to petsc-maint at mcs.anl.gov---------- > [1]PETSC ERROR: ----------------End of Error Message -------send entire error message to petsc-maint at mcs.anl.gov---------- > -------------------------------------------------------------------------- > MPI_ABORT was invoked on rank 2 in communicator MPI_COMM_WORLD > with errorcode 77. > > NOTE: invoking MPI_ABORT causes Open MPI to kill all MPI processes. > You may or may not see output from other processes, depending on > exactly when Open MPI kills them. > -------------------------------------------------------------------------- > [0]PETSC ERROR: ------------------------------------------------------------------------ > [0]PETSC ERROR: Caught signal number 15 Terminate: Some process (or the batch system) has told this process to end > [0]PETSC ERROR: Try option -start_in_debugger or -on_error_attach_debugger > [0]PETSC ERROR: or see http://www.mcs.anl.gov/petsc/documentation/faq.html#valgrind > [0]PETSC ERROR: or try http://valgrind.org on GNU/linux and Apple Mac OS X to find memory corruption errors > [0]PETSC ERROR: configure using --with-debugging=yes, recompile, link, and run > [0]PETSC ERROR: to get more information on the crash. > [0]PETSC ERROR: --------------------- Error Message -------------------------------------------------------------- > [0]PETSC ERROR: Signal received > [0]PETSC ERROR: See http://www.mcs.anl.gov/petsc/documentation/faq.html for trouble shooting. 
> [0]PETSC ERROR: Petsc Release Version 3.6.3, Dec, 03, 2015 > [0]PETSC ERROR: hello on a complex named st-d13411.nfit.au.dk by jonat4n Mon May 9 17:47:26 2016 > [0]PETSC ERROR: Configure options CC=/usr/local/bin/mpicc CXX=/usr/local/bin/mpicxx F77=/usr/local/bin/mpif77 FC=/usr/local/bin/mpif90 --with-shared-libraries=1 --with-pthread=0 --with-openmp=0 --with-debugging=0 --with-ssl=0 --with-superlu_dist-include=/usr/local/opt/superlu_dist/include/superlu_dist --with-superlu_dist-lib="-L/usr/local/opt/superlu_dist/lib -lsuperlu_dist" --with-superlu-include=/usr/local/Cellar/superlu43/4.3_1/include/superlu --with-superlu-lib="-L/usr/local/Cellar/superlu43/4.3_1/lib -lsuperlu" --with-fftw-dir=/usr/local/opt/fftw --with-netcdf-dir=/usr/local/opt/netcdf --with-suitesparse-dir=/usr/local/opt/suite-sparse --with-hdf5-dir=/usr/local/opt/hdf5 --with-metis-dir=/usr/local/opt/metis --with-parmetis-dir=/usr/local/opt/parmetis --with-scalapack-dir=/usr/local/opt/scalapack --with-mumps-dir=/usr/local/opt/mumps/libexec --with-x=0 --prefix=/usr/local/Cellar/petsc/3.6.3_4/complex --with-scalar-type=complex > [0]PETSC ERROR: #1 User provided function() line 0 in unknown file > [st-d13411.nfit.au.dk:90621] 3 more processes have sent help message help-mpi-api.txt / mpi-abort > [st-d13411.nfit.au.dk:90621] Set MCA parameter "orte_base_help_aggregate" to 0 to see all help / error messages > > ??????????? > When running with 1 core, I get the error message: > > [0]PETSC ERROR: --------------------- Error Message -------------------------------------------------------------- > [0]PETSC ERROR: Error in external library > [0]PETSC ERROR: Error in Lapack xGGES 154 > [0]PETSC ERROR: See http://www.mcs.anl.gov/petsc/documentation/faq.html for trouble shooting. > [0]PETSC ERROR: Petsc Release Version 3.6.3, Dec, 03, 2015 > [0]PETSC ERROR: hello on a complex named st-d13411.nfit.au.dk by jonat4n Mon May 9 17:59:03 2016 > [0]PETSC ERROR: Configure options CC=/usr/local/bin/mpicc CXX=/usr/local/bin/mpicxx F77=/usr/local/bin/mpif77 FC=/usr/local/bin/mpif90 --with-shared-libraries=1 --with-pthread=0 --with-openmp=0 --with-debugging=0 --with-ssl=0 --with-superlu_dist-include=/usr/local/opt/superlu_dist/include/superlu_dist --with-superlu_dist-lib="-L/usr/local/opt/superlu_dist/lib -lsuperlu_dist" --with-superlu-include=/usr/local/Cellar/superlu43/4.3_1/include/superlu --with-superlu-lib="-L/usr/local/Cellar/superlu43/4.3_1/lib -lsuperlu" --with-fftw-dir=/usr/local/opt/fftw --with-netcdf-dir=/usr/local/opt/netcdf --with-suitesparse-dir=/usr/local/opt/suite-sparse --with-hdf5-dir=/usr/local/opt/hdf5 --with-metis-dir=/usr/local/opt/metis --with-parmetis-dir=/usr/local/opt/parmetis --with-scalapack-dir=/usr/local/opt/scalapack --with-mumps-dir=/usr/local/opt/mumps/libexec --with-x=0 --prefix=/usr/local/Cellar/petsc/3.6.3_4/complex --with-scalar-type=complex > [0]PETSC ERROR: #1 DSSolve_GNHEP() line 581 in /private/tmp/slepc-20160503-86919-187yrgp/slepc-3.6.2/src/sys/classes/ds/impls/gnhep/dsgnhep.c > [0]PETSC ERROR: #2 DSSolve() line 543 in /private/tmp/slepc-20160503-86919-187yrgp/slepc-3.6.2/src/sys/classes/ds/interface/dsops.c > [0]PETSC ERROR: #3 EPSSolve_CISS() line 943 in /private/tmp/slepc-20160503-86919-187yrgp/slepc-3.6.2/src/eps/impls/ciss/ciss.c > [0]PETSC ERROR: #4 EPSSolve() line 101 in /private/tmp/slepc-20160503-86919-187yrgp/slepc-3.6.2/src/eps/interface/epssolve.c > [0]PETSC ERROR: #5 main() line 795 in hello.c > [0]PETSC ERROR: PETSc Option Table entries: > [0]PETSC ERROR: -n 31 > [0]PETSC ERROR: 
----------------End of Error Message -------send entire error message to petsc-maint at mcs.anl.gov---------- From jonatan.midtgaard at gmail.com Mon May 9 11:27:26 2016 From: jonatan.midtgaard at gmail.com (Jonatan Midtgaard) Date: Mon, 9 May 2016 18:27:26 +0200 Subject: [petsc-users] "Petsc has generated inconsistent data" during EPSSolve In-Reply-To: References: <7D736081-5F29-4A44-9956-1552BFC207A7@gmail.com> Message-ID: <44B30C67-464E-424D-A6FB-0858D056B78D@gmail.com> I am on Mac OS X, so I am using the Homebrew version, SLEPc 3.6.2. PETSc is 3.6.3. The options for the CISS have worked before, with simpler matrices. As for the options, I am using an ?ellipse? region (around 0, with radius=0.16 and vscale=0.5), the problem type is EPS_HEP. For the EPS settings, I set the tolerance to 1e-10 and maxit=200. - Jonatan > On 09 May 2016, at 18:16, Jose E. Roman wrote: > > Is this with the master branch of SLEPc? > Which options are setting for the CISS solver? > > Jose > > >> El 9 may 2016, a las 18:03, Jonatan Midtgaard escribi?: >> >> Hi all. I am using the EPSCISS solver in SLEPc to calculate eigenvalues of a Hermitian matrix close to the origin. (Actually, it is only very nearly Hermitian, but this happens also for NHEP-routines). I am running with 4 cores, and the matrix prints fine with MatView() just before the EPSSolve. Anyhow, I get the following error: >> (I am new to both PETSc and this mailing list. Should error messages be attached or pasted into the body?) >> >> [2]PETSC ERROR: --------------------- Error Message -------------------------------------------------------------- >> [2]PETSC ERROR: Petsc has generated inconsistent data >> [2]PETSC ERROR: ith 3844 block entry 7688 not owned by any process, upper bound 7688 >> [2]PETSC ERROR: See http://www.mcs.anl.gov/petsc/documentation/faq.html for trouble shooting. 
>> [2]PETSC ERROR: Petsc Release Version 3.6.3, Dec, 03, 2015 >> [2]PETSC ERROR: hello on a complex named st-d13411.nfit.au.dk by jonat4n Mon May 9 17:47:26 2016 >> [2]PETSC ERROR: Configure options CC=/usr/local/bin/mpicc CXX=/usr/local/bin/mpicxx F77=/usr/local/bin/mpif77 FC=/usr/local/bin/mpif90 --with-shared-libraries=1 --with-pthread=0 --with-openmp=0 --with-debugging=0 --with-ssl=0 --with-superlu_dist-include=/usr/local/opt/superlu_dist/include/superlu_dist --with-superlu_dist-lib="-L/usr/local/opt/superlu_dist/lib -lsuperlu_dist" --with-superlu-include=/usr/local/Cellar/superlu43/4.3_1/include/superlu --with-superlu-lib="-L/usr/local/Cellar/superlu43/4.3_1/lib -lsuperlu" --with-fftw-dir=/usr/local/opt/fftw --with-netcdf-dir=/usr/local/opt/netcdf --with-suitesparse-dir=/usr/local/opt/suite-sparse --with-hdf5-dir=/usr/local/opt/hdf5 --with-metis-dir=/usr/local/opt/metis --with-parmetis-dir=/usr/local/opt/parmetis --with-scalapack-dir=/usr/local/opt/scalapack --with-mumps-dir=/usr/local/opt/mumps/libexec --with-x=0 --prefix=/usr/local/Cellar/petsc/3.6.3_4/complex --with-scalar-type=complex >> [2]PETSC ERROR: #1 VecScatterCreate_PtoS() line 2339 in /private/tmp/petsc-20160503-43754-184nsu/petsc-3.6.3/src/vec/vec/utils/vpscat.c >> [2]PETSC ERROR: #2 VecScatterCreate_StoP() line 2795 in /private/tmp/petsc-20160503-43754-184nsu/petsc-3.6.3/src/vec/vec/utils/vpscat.c >> [2]PETSC ERROR: #3 VecScatterCreate_PtoP() line 2984 in /private/tmp/petsc-20160503-43754-184nsu/petsc-3.6.3/src/vec/vec/utils/vpscat.c >> [2]PETSC ERROR: #4 VecScatterCreate() line 1570 in /private/tmp/petsc-20160503-43754-184nsu/petsc-3.6.3/src/vec/vec/utils/vscat.c >> [2]PETSC ERROR: #5 PCSetUp_Redundant() line 133 in /private/tmp/petsc-20160503-43754-184nsu/petsc-3.6.3/src/ksp/pc/impls/redundant/redundant.c >> [2]PETSC ERROR: #6 PCSetUp() line 983 in /private/tmp/petsc-20160503-43754-184nsu/petsc-3.6.3/src/ksp/pc/interface/precon.c >> [3]PETSC ERROR: --------------------- Error Message -------------------------------------------------------------- >> [3]PETSC ERROR: Petsc has generated inconsistent data >> [3]PETSC ERROR: ith 1922 block entry 7688 not owned by any process, upper bound 7688 >> [3]PETSC ERROR: See http://www.mcs.anl.gov/petsc/documentation/faq.html for trouble shooting. 
>> [3]PETSC ERROR: Petsc Release Version 3.6.3, Dec, 03, 2015 >> [3]PETSC ERROR: hello on a complex named st-d13411.nfit.au.dk by jonat4n Mon May 9 17:47:26 2016 >> [3]PETSC ERROR: Configure options CC=/usr/local/bin/mpicc CXX=/usr/local/bin/mpicxx F77=/usr/local/bin/mpif77 FC=/usr/local/bin/mpif90 --with-shared-libraries=1 --with-pthread=0 --with-openmp=0 --with-debugging=0 --with-ssl=0 --with-superlu_dist-include=/usr/local/opt/superlu_dist/include/superlu_dist --with-superlu_dist-lib="-L/usr/local/opt/superlu_dist/lib -lsuperlu_dist" --with-superlu-include=/usr/local/Cellar/superlu43/4.3_1/include/superlu --with-superlu-lib="-L/usr/local/Cellar/superlu43/4.3_1/lib -lsuperlu" --with-fftw-dir=/usr/local/opt/fftw --with-netcdf-dir=/usr/local/opt/netcdf --with-suitesparse-dir=/usr/local/opt/suite-sparse --with-hdf5-dir=/usr/local/opt/hdf5 --with-metis-dir=/usr/local/opt/metis --with-parmetis-dir=/usr/local/opt/parmetis --with-scalapack-dir=/usr/local/opt/scalapack --with-mumps-dir=/usr/local/opt/mumps/libexec --with-x=0 --prefix=/usr/local/Cellar/petsc/3.6.3_4/complex --with-scalar-type=complex >> [3]PETSC ERROR: #1 VecScatterCreate_PtoS() line 2339 in /private/tmp/petsc-20160503-43754-184nsu/petsc-3.6.3/src/vec/vec/utils/vpscat.c >> [3]PETSC ERROR: #2 VecScatterCreate_StoP() line 2795 in /private/tmp/petsc-20160503-43754-184nsu/petsc-3.6.3/src/vec/vec/utils/vpscat.c >> [3]PETSC ERROR: #3 VecScatterCreate_PtoP() line 2984 in /private/tmp/petsc-20160503-43754-184nsu/petsc-3.6.3/src/vec/vec/utils/vpscat.c >> [3]PETSC ERROR: #4 VecScatterCreate() line 1570 in /private/tmp/petsc-20160503-43754-184nsu/petsc-3.6.3/src/vec/vec/utils/vscat.c >> [3]PETSC ERROR: #5 PCSetUp_Redundant() line 133 in /private/tmp/petsc-20160503-43754-184nsu/petsc-3.6.3/src/ksp/pc/impls/redundant/redundant.c >> [3]PETSC ERROR: #6 PCSetUp() line 983 in /private/tmp/petsc-20160503-43754-184nsu/petsc-3.6.3/src/ksp/pc/interface/precon.c >> [3]PETSC ERROR: #7 KSPSetUp() line 332 in /private/tmp/petsc-20160503-43754-184nsu/petsc-3.6.3/src/ksp/ksp/interface/itfunc.c >> [3]PETSC ERROR: [1]PETSC ERROR: --------------------- Error Message -------------------------------------------------------------- >> [1]PETSC ERROR: Petsc has generated inconsistent data >> [1]PETSC ERROR: ith 5766 block entry 7688 not owned by any process, upper bound 7688 >> [1]PETSC ERROR: See http://www.mcs.anl.gov/petsc/documentation/faq.html for trouble shooting. 
>> [1]PETSC ERROR: Petsc Release Version 3.6.3, Dec, 03, 2015 >> [1]PETSC ERROR: hello on a complex named st-d13411.nfit.au.dk by jonat4n Mon May 9 17:47:26 2016 >> [1]PETSC ERROR: Configure options CC=/usr/local/bin/mpicc CXX=/usr/local/bin/mpicxx F77=/usr/local/bin/mpif77 FC=/usr/local/bin/mpif90 --with-shared-libraries=1 --with-pthread=0 --with-openmp=0 --with-debugging=0 --with-ssl=0 --with-superlu_dist-include=/usr/local/opt/superlu_dist/include/superlu_dist --with-superlu_dist-lib="-L/usr/local/opt/superlu_dist/lib -lsuperlu_dist" --with-superlu-include=/usr/local/Cellar/superlu43/4.3_1/include/superlu --with-superlu-lib="-L/usr/local/Cellar/superlu43/4.3_1/lib -lsuperlu" --with-fftw-dir=/usr/local/opt/fftw --with-netcdf-dir=/usr/local/opt/netcdf --with-suitesparse-dir=/usr/local/opt/suite-sparse --with-hdf5-dir=/usr/local/opt/hdf5 --with-metis-dir=/usr/local/opt/metis --with-parmetis-dir=/usr/local/opt/parmetis --with-scalapack-dir=/usr/local/opt/scalapack --with-mumps-dir=/usr/local/opt/mumps/libexec --with-x=0 --prefix=/usr/local/Cellar/petsc/3.6.3_4/complex --with-scalar-type=complex >> [1]PETSC ERROR: #1 VecScatterCreate_PtoS() line 2339 in /private/tmp/petsc-20160503-43754-184nsu/petsc-3.6.3/src/vec/vec/utils/vpscat.c >> [1]PETSC ERROR: #2 VecScatterCreate_StoP() line 2795 in /private/tmp/petsc-20160503-43754-184nsu/petsc-3.6.3/src/vec/vec/utils/vpscat.c >> [1]PETSC ERROR: #3 VecScatterCreate_PtoP() line 2984 in /private/tmp/petsc-20160503-43754-184nsu/petsc-3.6.3/src/vec/vec/utils/vpscat.c >> [1]PETSC ERROR: #4 VecScatterCreate() line 1570 in /private/tmp/petsc-20160503-43754-184nsu/petsc-3.6.3/src/vec/vec/utils/vscat.c >> [1]PETSC ERROR: #5 PCSetUp_Redundant() line 133 in /private/tmp/petsc-20160503-43754-184nsu/petsc-3.6.3/src/ksp/pc/impls/redundant/redundant.c >> [1]PETSC ERROR: #6 PCSetUp() line 983 in /private/tmp/petsc-20160503-43754-184nsu/petsc-3.6.3/src/ksp/pc/interface/precon.c >> [1]PETSC ERROR: #7 KSPSetUp() line 332 in /private/tmp/petsc-20160503-43754-184nsu/petsc-3.6.3/src/ksp/ksp/interface/itfunc.c >> [1]PETSC ERROR: #8 KSPSolve() line 546 in /private/tmp/petsc-20160503-43754-184nsu/petsc-3.6.3/src/ksp/ksp/interface/itfunc.c >> [1]PETSC ERROR: [2]PETSC ERROR: #7 KSPSetUp() line 332 in /private/tmp/petsc-20160503-43754-184nsu/petsc-3.6.3/src/ksp/ksp/interface/itfunc.c >> [2]PETSC ERROR: #8 KSPSolve() line 546 in /private/tmp/petsc-20160503-43754-184nsu/petsc-3.6.3/src/ksp/ksp/interface/itfunc.c >> [2]PETSC ERROR: #9 SolveLinearSystem() line 337 in /private/tmp/slepc-20160503-86919-187yrgp/slepc-3.6.2/src/eps/impls/ciss/ciss.c >> [2]PETSC ERROR: #10 EPSSolve_CISS() line 868 in /private/tmp/slepc-20160503-86919-187yrgp/slepc-3.6.2/src/eps/impls/ciss/ciss.c >> [2]PETSC ERROR: #11 EPSSolve() line 101 in /private/tmp/slepc-20160503-86919-187yrgp/slepc-3.6.2/src/eps/interface/epssolve.c >> [2]PETSC ERROR: #12 main() line 795 in hello.c >> [2]PETSC ERROR: PETSc Option Table entries: >> [2]PETSC ERROR: -n 31 >> [2]PETSC ERROR: #8 KSPSolve() line 546 in /private/tmp/petsc-20160503-43754-184nsu/petsc-3.6.3/src/ksp/ksp/interface/itfunc.c >> [3]PETSC ERROR: #9 SolveLinearSystem() line 337 in /private/tmp/slepc-20160503-86919-187yrgp/slepc-3.6.2/src/eps/impls/ciss/ciss.c >> [3]PETSC ERROR: #10 EPSSolve_CISS() line 868 in /private/tmp/slepc-20160503-86919-187yrgp/slepc-3.6.2/src/eps/impls/ciss/ciss.c >> [3]PETSC ERROR: #11 EPSSolve() line 101 in /private/tmp/slepc-20160503-86919-187yrgp/slepc-3.6.2/src/eps/interface/epssolve.c >> [3]PETSC ERROR: #12 main() line 795 in hello.c 
>> [3]PETSC ERROR: PETSc Option Table entries: >> [3]PETSC ERROR: -n 31 >> #9 SolveLinearSystem() line 337 in /private/tmp/slepc-20160503-86919-187yrgp/slepc-3.6.2/src/eps/impls/ciss/ciss.c >> [1]PETSC ERROR: #10 EPSSolve_CISS() line 868 in /private/tmp/slepc-20160503-86919-187yrgp/slepc-3.6.2/src/eps/impls/ciss/ciss.c >> [1]PETSC ERROR: #11 EPSSolve() line 101 in /private/tmp/slepc-20160503-86919-187yrgp/slepc-3.6.2/src/eps/interface/epssolve.c >> [1]PETSC ERROR: #12 main() line 795 in hello.c >> [1]PETSC ERROR: PETSc Option Table entries: >> [1]PETSC ERROR: -n 31 >> ----------------End of Error Message -------send entire error message to petsc-maint at mcs.anl.gov---------- >> [3]PETSC ERROR: ----------------End of Error Message -------send entire error message to petsc-maint at mcs.anl.gov---------- >> [1]PETSC ERROR: ----------------End of Error Message -------send entire error message to petsc-maint at mcs.anl.gov---------- >> -------------------------------------------------------------------------- >> MPI_ABORT was invoked on rank 2 in communicator MPI_COMM_WORLD >> with errorcode 77. >> >> NOTE: invoking MPI_ABORT causes Open MPI to kill all MPI processes. >> You may or may not see output from other processes, depending on >> exactly when Open MPI kills them. >> -------------------------------------------------------------------------- >> [0]PETSC ERROR: ------------------------------------------------------------------------ >> [0]PETSC ERROR: Caught signal number 15 Terminate: Some process (or the batch system) has told this process to end >> [0]PETSC ERROR: Try option -start_in_debugger or -on_error_attach_debugger >> [0]PETSC ERROR: or see http://www.mcs.anl.gov/petsc/documentation/faq.html#valgrind >> [0]PETSC ERROR: or try http://valgrind.org on GNU/linux and Apple Mac OS X to find memory corruption errors >> [0]PETSC ERROR: configure using --with-debugging=yes, recompile, link, and run >> [0]PETSC ERROR: to get more information on the crash. >> [0]PETSC ERROR: --------------------- Error Message -------------------------------------------------------------- >> [0]PETSC ERROR: Signal received >> [0]PETSC ERROR: See http://www.mcs.anl.gov/petsc/documentation/faq.html for trouble shooting. 
>> [0]PETSC ERROR: Petsc Release Version 3.6.3, Dec, 03, 2015 >> [0]PETSC ERROR: hello on a complex named st-d13411.nfit.au.dk by jonat4n Mon May 9 17:47:26 2016 >> [0]PETSC ERROR: Configure options CC=/usr/local/bin/mpicc CXX=/usr/local/bin/mpicxx F77=/usr/local/bin/mpif77 FC=/usr/local/bin/mpif90 --with-shared-libraries=1 --with-pthread=0 --with-openmp=0 --with-debugging=0 --with-ssl=0 --with-superlu_dist-include=/usr/local/opt/superlu_dist/include/superlu_dist --with-superlu_dist-lib="-L/usr/local/opt/superlu_dist/lib -lsuperlu_dist" --with-superlu-include=/usr/local/Cellar/superlu43/4.3_1/include/superlu --with-superlu-lib="-L/usr/local/Cellar/superlu43/4.3_1/lib -lsuperlu" --with-fftw-dir=/usr/local/opt/fftw --with-netcdf-dir=/usr/local/opt/netcdf --with-suitesparse-dir=/usr/local/opt/suite-sparse --with-hdf5-dir=/usr/local/opt/hdf5 --with-metis-dir=/usr/local/opt/metis --with-parmetis-dir=/usr/local/opt/parmetis --with-scalapack-dir=/usr/local/opt/scalapack --with-mumps-dir=/usr/local/opt/mumps/libexec --with-x=0 --prefix=/usr/local/Cellar/petsc/3.6.3_4/complex --with-scalar-type=complex >> [0]PETSC ERROR: #1 User provided function() line 0 in unknown file >> [st-d13411.nfit.au.dk:90621] 3 more processes have sent help message help-mpi-api.txt / mpi-abort >> [st-d13411.nfit.au.dk:90621] Set MCA parameter "orte_base_help_aggregate" to 0 to see all help / error messages >> >> ??????????? >> When running with 1 core, I get the error message: >> >> [0]PETSC ERROR: --------------------- Error Message -------------------------------------------------------------- >> [0]PETSC ERROR: Error in external library >> [0]PETSC ERROR: Error in Lapack xGGES 154 >> [0]PETSC ERROR: See http://www.mcs.anl.gov/petsc/documentation/faq.html for trouble shooting. >> [0]PETSC ERROR: Petsc Release Version 3.6.3, Dec, 03, 2015 >> [0]PETSC ERROR: hello on a complex named st-d13411.nfit.au.dk by jonat4n Mon May 9 17:59:03 2016 >> [0]PETSC ERROR: Configure options CC=/usr/local/bin/mpicc CXX=/usr/local/bin/mpicxx F77=/usr/local/bin/mpif77 FC=/usr/local/bin/mpif90 --with-shared-libraries=1 --with-pthread=0 --with-openmp=0 --with-debugging=0 --with-ssl=0 --with-superlu_dist-include=/usr/local/opt/superlu_dist/include/superlu_dist --with-superlu_dist-lib="-L/usr/local/opt/superlu_dist/lib -lsuperlu_dist" --with-superlu-include=/usr/local/Cellar/superlu43/4.3_1/include/superlu --with-superlu-lib="-L/usr/local/Cellar/superlu43/4.3_1/lib -lsuperlu" --with-fftw-dir=/usr/local/opt/fftw --with-netcdf-dir=/usr/local/opt/netcdf --with-suitesparse-dir=/usr/local/opt/suite-sparse --with-hdf5-dir=/usr/local/opt/hdf5 --with-metis-dir=/usr/local/opt/metis --with-parmetis-dir=/usr/local/opt/parmetis --with-scalapack-dir=/usr/local/opt/scalapack --with-mumps-dir=/usr/local/opt/mumps/libexec --with-x=0 --prefix=/usr/local/Cellar/petsc/3.6.3_4/complex --with-scalar-type=complex >> [0]PETSC ERROR: #1 DSSolve_GNHEP() line 581 in /private/tmp/slepc-20160503-86919-187yrgp/slepc-3.6.2/src/sys/classes/ds/impls/gnhep/dsgnhep.c >> [0]PETSC ERROR: #2 DSSolve() line 543 in /private/tmp/slepc-20160503-86919-187yrgp/slepc-3.6.2/src/sys/classes/ds/interface/dsops.c >> [0]PETSC ERROR: #3 EPSSolve_CISS() line 943 in /private/tmp/slepc-20160503-86919-187yrgp/slepc-3.6.2/src/eps/impls/ciss/ciss.c >> [0]PETSC ERROR: #4 EPSSolve() line 101 in /private/tmp/slepc-20160503-86919-187yrgp/slepc-3.6.2/src/eps/interface/epssolve.c >> [0]PETSC ERROR: #5 main() line 795 in hello.c >> [0]PETSC ERROR: PETSc Option Table entries: >> [0]PETSC ERROR: -n 31 >> 
[0]PETSC ERROR: ----------------End of Error Message -------send entire error message to petsc-maint at mcs.anl.gov---------- > From jroman at dsic.upv.es Mon May 9 11:33:23 2016 From: jroman at dsic.upv.es (Jose E. Roman) Date: Mon, 9 May 2016 18:33:23 +0200 Subject: [petsc-users] "Petsc has generated inconsistent data" during EPSSolve In-Reply-To: <44B30C67-464E-424D-A6FB-0858D056B78D@gmail.com> References: <7D736081-5F29-4A44-9956-1552BFC207A7@gmail.com> <44B30C67-464E-424D-A6FB-0858D056B78D@gmail.com> Message-ID: > El 9 may 2016, a las 18:27, Jonatan Midtgaard escribi?: > > I am on Mac OS X, so I am using the Homebrew version, SLEPc 3.6.2. PETSc is 3.6.3. > The options for the CISS have worked before, with simpler matrices. As for the options, I am using an ?ellipse? region (around 0, with radius=0.16 and vscale=0.5), the problem type is EPS_HEP. For the EPS settings, I set the tolerance to 1e-10 and maxit=200. > > - Jonatan It is better that you try this with slepc-3.7 (to be released in a few days) or with the master branch. It may be more robust than 3.6. Also, for difficult problems you may want to tune some parameters - they are documented here: http://slepc.upv.es/documentation/reports/str11.pdf Alternatively, send the matrix to my personal email and I will give it a try. Jose From praveenpetsc at gmail.com Mon May 9 12:11:25 2016 From: praveenpetsc at gmail.com (praveen kumar) Date: Mon, 9 May 2016 22:41:25 +0530 Subject: [petsc-users] no matching specific subroutine [DMDAVECGETARRAYF90] Message-ID: Hi, I called DMDAVecGet/RestoreArray inside main program portion and there were no errors. These errors pop up when I call DMDAVecGet/RestoreArray inside a subroutine. I have included all the headers in subroutine also. I have defined 't' as PetscScalar, pointer ::t(:,:). test.F90(358): error #6285: There is no matching specific subroutine for this generic subroutine call. [DMDAVECGETARRAYF90] call DMDAVecGetArrayF90(da,Lvec,t,ierr) -------------^ test.F90(389): error #6285: There is no matching specific subroutine for this generic subroutine call. [DMDAVECRESTOREARRAYF90] call DMDAVecRestoreArrayF90(da,Lvec,t,ierr) Thanks, Praveen -------------- next part -------------- An HTML attachment was scrubbed... URL: From bsmith at mcs.anl.gov Mon May 9 13:09:26 2016 From: bsmith at mcs.anl.gov (Barry Smith) Date: Mon, 9 May 2016 13:09:26 -0500 Subject: [petsc-users] no matching specific subroutine [DMDAVECGETARRAYF90] In-Reply-To: References: Message-ID: <6A418615-5D79-4EBC-B5EF-8DC47E9F89A6@mcs.anl.gov> Email a stand alone problem that reproduces the problem. > On May 9, 2016, at 12:11 PM, praveen kumar wrote: > > Hi, > > I called DMDAVecGet/RestoreArray inside main program portion and there were no errors. These errors pop up when I call DMDAVecGet/RestoreArray inside a subroutine. I have included all the headers in subroutine also. I have defined 't' as PetscScalar, pointer ::t(:,:). > > test.F90(358): error #6285: There is no matching specific subroutine for this generic subroutine call. [DMDAVECGETARRAYF90] > call DMDAVecGetArrayF90(da,Lvec,t,ierr) > -------------^ > test.F90(389): error #6285: There is no matching specific subroutine for this generic subroutine call. 
[DMDAVECRESTOREARRAYF90] > call DMDAVecRestoreArrayF90(da,Lvec,t,ierr) > > > Thanks, > Praveen From bsmith at mcs.anl.gov Mon May 9 13:16:09 2016 From: bsmith at mcs.anl.gov (Barry Smith) Date: Mon, 9 May 2016 13:16:09 -0500 Subject: [petsc-users] "Petsc has generated inconsistent data" during EPSSolve In-Reply-To: <7D736081-5F29-4A44-9956-1552BFC207A7@gmail.com> References: <7D736081-5F29-4A44-9956-1552BFC207A7@gmail.com> Message-ID: <023BA795-5F21-45F2-8EF6-348DB75415CF@mcs.anl.gov> To eliminate the possibility that the problem is due to memory corruption due to some programming error you should run under valgrind: http://www.mcs.anl.gov/petsc/documentation/faq.html#valgrind Barry > On May 9, 2016, at 11:03 AM, Jonatan Midtgaard wrote: > > Hi all. I am using the EPSCISS solver in SLEPc to calculate eigenvalues of a Hermitian matrix close to the origin. (Actually, it is only very nearly Hermitian, but this happens also for NHEP-routines). I am running with 4 cores, and the matrix prints fine with MatView() just before the EPSSolve. Anyhow, I get the following error: > (I am new to both PETSc and this mailing list. Should error messages be attached or pasted into the body?) > > [2]PETSC ERROR: --------------------- Error Message -------------------------------------------------------------- > [2]PETSC ERROR: Petsc has generated inconsistent data > [2]PETSC ERROR: ith 3844 block entry 7688 not owned by any process, upper bound 7688 > [2]PETSC ERROR: See http://www.mcs.anl.gov/petsc/documentation/faq.html for trouble shooting. > [2]PETSC ERROR: Petsc Release Version 3.6.3, Dec, 03, 2015 > [2]PETSC ERROR: hello on a complex named st-d13411.nfit.au.dk by jonat4n Mon May 9 17:47:26 2016 > [2]PETSC ERROR: Configure options CC=/usr/local/bin/mpicc CXX=/usr/local/bin/mpicxx F77=/usr/local/bin/mpif77 FC=/usr/local/bin/mpif90 --with-shared-libraries=1 --with-pthread=0 --with-openmp=0 --with-debugging=0 --with-ssl=0 --with-superlu_dist-include=/usr/local/opt/superlu_dist/include/superlu_dist --with-superlu_dist-lib="-L/usr/local/opt/superlu_dist/lib -lsuperlu_dist" --with-superlu-include=/usr/local/Cellar/superlu43/4.3_1/include/superlu --with-superlu-lib="-L/usr/local/Cellar/superlu43/4.3_1/lib -lsuperlu" --with-fftw-dir=/usr/local/opt/fftw --with-netcdf-dir=/usr/local/opt/netcdf --with-suitesparse-dir=/usr/local/opt/suite-sparse --with-hdf5-dir=/usr/local/opt/hdf5 --with-metis-dir=/usr/local/opt/metis --with-parmetis-dir=/usr/local/opt/parmetis --with-scalapack-dir=/usr/local/opt/scalapack --with-mumps-dir=/usr/local/opt/mumps/libexec --with-x=0 --prefix=/usr/local/Cellar/petsc/3.6.3_4/complex --with-scalar-type=complex > [2]PETSC ERROR: #1 VecScatterCreate_PtoS() line 2339 in /private/tmp/petsc-20160503-43754-184nsu/petsc-3.6.3/src/vec/vec/utils/vpscat.c > [2]PETSC ERROR: #2 VecScatterCreate_StoP() line 2795 in /private/tmp/petsc-20160503-43754-184nsu/petsc-3.6.3/src/vec/vec/utils/vpscat.c > [2]PETSC ERROR: #3 VecScatterCreate_PtoP() line 2984 in /private/tmp/petsc-20160503-43754-184nsu/petsc-3.6.3/src/vec/vec/utils/vpscat.c > [2]PETSC ERROR: #4 VecScatterCreate() line 1570 in /private/tmp/petsc-20160503-43754-184nsu/petsc-3.6.3/src/vec/vec/utils/vscat.c > [2]PETSC ERROR: #5 PCSetUp_Redundant() line 133 in /private/tmp/petsc-20160503-43754-184nsu/petsc-3.6.3/src/ksp/pc/impls/redundant/redundant.c > [2]PETSC ERROR: #6 PCSetUp() line 983 in /private/tmp/petsc-20160503-43754-184nsu/petsc-3.6.3/src/ksp/pc/interface/precon.c > [3]PETSC ERROR: --------------------- Error Message 
-------------------------------------------------------------- > [3]PETSC ERROR: Petsc has generated inconsistent data > [3]PETSC ERROR: ith 1922 block entry 7688 not owned by any process, upper bound 7688 > [3]PETSC ERROR: See http://www.mcs.anl.gov/petsc/documentation/faq.html for trouble shooting. > [3]PETSC ERROR: Petsc Release Version 3.6.3, Dec, 03, 2015 > [3]PETSC ERROR: hello on a complex named st-d13411.nfit.au.dk by jonat4n Mon May 9 17:47:26 2016 > [3]PETSC ERROR: Configure options CC=/usr/local/bin/mpicc CXX=/usr/local/bin/mpicxx F77=/usr/local/bin/mpif77 FC=/usr/local/bin/mpif90 --with-shared-libraries=1 --with-pthread=0 --with-openmp=0 --with-debugging=0 --with-ssl=0 --with-superlu_dist-include=/usr/local/opt/superlu_dist/include/superlu_dist --with-superlu_dist-lib="-L/usr/local/opt/superlu_dist/lib -lsuperlu_dist" --with-superlu-include=/usr/local/Cellar/superlu43/4.3_1/include/superlu --with-superlu-lib="-L/usr/local/Cellar/superlu43/4.3_1/lib -lsuperlu" --with-fftw-dir=/usr/local/opt/fftw --with-netcdf-dir=/usr/local/opt/netcdf --with-suitesparse-dir=/usr/local/opt/suite-sparse --with-hdf5-dir=/usr/local/opt/hdf5 --with-metis-dir=/usr/local/opt/metis --with-parmetis-dir=/usr/local/opt/parmetis --with-scalapack-dir=/usr/local/opt/scalapack --with-mumps-dir=/usr/local/opt/mumps/libexec --with-x=0 --prefix=/usr/local/Cellar/petsc/3.6.3_4/complex --with-scalar-type=complex > [3]PETSC ERROR: #1 VecScatterCreate_PtoS() line 2339 in /private/tmp/petsc-20160503-43754-184nsu/petsc-3.6.3/src/vec/vec/utils/vpscat.c > [3]PETSC ERROR: #2 VecScatterCreate_StoP() line 2795 in /private/tmp/petsc-20160503-43754-184nsu/petsc-3.6.3/src/vec/vec/utils/vpscat.c > [3]PETSC ERROR: #3 VecScatterCreate_PtoP() line 2984 in /private/tmp/petsc-20160503-43754-184nsu/petsc-3.6.3/src/vec/vec/utils/vpscat.c > [3]PETSC ERROR: #4 VecScatterCreate() line 1570 in /private/tmp/petsc-20160503-43754-184nsu/petsc-3.6.3/src/vec/vec/utils/vscat.c > [3]PETSC ERROR: #5 PCSetUp_Redundant() line 133 in /private/tmp/petsc-20160503-43754-184nsu/petsc-3.6.3/src/ksp/pc/impls/redundant/redundant.c > [3]PETSC ERROR: #6 PCSetUp() line 983 in /private/tmp/petsc-20160503-43754-184nsu/petsc-3.6.3/src/ksp/pc/interface/precon.c > [3]PETSC ERROR: #7 KSPSetUp() line 332 in /private/tmp/petsc-20160503-43754-184nsu/petsc-3.6.3/src/ksp/ksp/interface/itfunc.c > [3]PETSC ERROR: [1]PETSC ERROR: --------------------- Error Message -------------------------------------------------------------- > [1]PETSC ERROR: Petsc has generated inconsistent data > [1]PETSC ERROR: ith 5766 block entry 7688 not owned by any process, upper bound 7688 > [1]PETSC ERROR: See http://www.mcs.anl.gov/petsc/documentation/faq.html for trouble shooting. 
> [1]PETSC ERROR: Petsc Release Version 3.6.3, Dec, 03, 2015 > [1]PETSC ERROR: hello on a complex named st-d13411.nfit.au.dk by jonat4n Mon May 9 17:47:26 2016 > [1]PETSC ERROR: Configure options CC=/usr/local/bin/mpicc CXX=/usr/local/bin/mpicxx F77=/usr/local/bin/mpif77 FC=/usr/local/bin/mpif90 --with-shared-libraries=1 --with-pthread=0 --with-openmp=0 --with-debugging=0 --with-ssl=0 --with-superlu_dist-include=/usr/local/opt/superlu_dist/include/superlu_dist --with-superlu_dist-lib="-L/usr/local/opt/superlu_dist/lib -lsuperlu_dist" --with-superlu-include=/usr/local/Cellar/superlu43/4.3_1/include/superlu --with-superlu-lib="-L/usr/local/Cellar/superlu43/4.3_1/lib -lsuperlu" --with-fftw-dir=/usr/local/opt/fftw --with-netcdf-dir=/usr/local/opt/netcdf --with-suitesparse-dir=/usr/local/opt/suite-sparse --with-hdf5-dir=/usr/local/opt/hdf5 --with-metis-dir=/usr/local/opt/metis --with-parmetis-dir=/usr/local/opt/parmetis --with-scalapack-dir=/usr/local/opt/scalapack --with-mumps-dir=/usr/local/opt/mumps/libexec --with-x=0 --prefix=/usr/local/Cellar/petsc/3.6.3_4/complex --with-scalar-type=complex > [1]PETSC ERROR: #1 VecScatterCreate_PtoS() line 2339 in /private/tmp/petsc-20160503-43754-184nsu/petsc-3.6.3/src/vec/vec/utils/vpscat.c > [1]PETSC ERROR: #2 VecScatterCreate_StoP() line 2795 in /private/tmp/petsc-20160503-43754-184nsu/petsc-3.6.3/src/vec/vec/utils/vpscat.c > [1]PETSC ERROR: #3 VecScatterCreate_PtoP() line 2984 in /private/tmp/petsc-20160503-43754-184nsu/petsc-3.6.3/src/vec/vec/utils/vpscat.c > [1]PETSC ERROR: #4 VecScatterCreate() line 1570 in /private/tmp/petsc-20160503-43754-184nsu/petsc-3.6.3/src/vec/vec/utils/vscat.c > [1]PETSC ERROR: #5 PCSetUp_Redundant() line 133 in /private/tmp/petsc-20160503-43754-184nsu/petsc-3.6.3/src/ksp/pc/impls/redundant/redundant.c > [1]PETSC ERROR: #6 PCSetUp() line 983 in /private/tmp/petsc-20160503-43754-184nsu/petsc-3.6.3/src/ksp/pc/interface/precon.c > [1]PETSC ERROR: #7 KSPSetUp() line 332 in /private/tmp/petsc-20160503-43754-184nsu/petsc-3.6.3/src/ksp/ksp/interface/itfunc.c > [1]PETSC ERROR: #8 KSPSolve() line 546 in /private/tmp/petsc-20160503-43754-184nsu/petsc-3.6.3/src/ksp/ksp/interface/itfunc.c > [1]PETSC ERROR: [2]PETSC ERROR: #7 KSPSetUp() line 332 in /private/tmp/petsc-20160503-43754-184nsu/petsc-3.6.3/src/ksp/ksp/interface/itfunc.c > [2]PETSC ERROR: #8 KSPSolve() line 546 in /private/tmp/petsc-20160503-43754-184nsu/petsc-3.6.3/src/ksp/ksp/interface/itfunc.c > [2]PETSC ERROR: #9 SolveLinearSystem() line 337 in /private/tmp/slepc-20160503-86919-187yrgp/slepc-3.6.2/src/eps/impls/ciss/ciss.c > [2]PETSC ERROR: #10 EPSSolve_CISS() line 868 in /private/tmp/slepc-20160503-86919-187yrgp/slepc-3.6.2/src/eps/impls/ciss/ciss.c > [2]PETSC ERROR: #11 EPSSolve() line 101 in /private/tmp/slepc-20160503-86919-187yrgp/slepc-3.6.2/src/eps/interface/epssolve.c > [2]PETSC ERROR: #12 main() line 795 in hello.c > [2]PETSC ERROR: PETSc Option Table entries: > [2]PETSC ERROR: -n 31 > [2]PETSC ERROR: #8 KSPSolve() line 546 in /private/tmp/petsc-20160503-43754-184nsu/petsc-3.6.3/src/ksp/ksp/interface/itfunc.c > [3]PETSC ERROR: #9 SolveLinearSystem() line 337 in /private/tmp/slepc-20160503-86919-187yrgp/slepc-3.6.2/src/eps/impls/ciss/ciss.c > [3]PETSC ERROR: #10 EPSSolve_CISS() line 868 in /private/tmp/slepc-20160503-86919-187yrgp/slepc-3.6.2/src/eps/impls/ciss/ciss.c > [3]PETSC ERROR: #11 EPSSolve() line 101 in /private/tmp/slepc-20160503-86919-187yrgp/slepc-3.6.2/src/eps/interface/epssolve.c > [3]PETSC ERROR: #12 main() line 795 in hello.c > [3]PETSC ERROR: PETSc 
Option Table entries: > [3]PETSC ERROR: -n 31 > #9 SolveLinearSystem() line 337 in /private/tmp/slepc-20160503-86919-187yrgp/slepc-3.6.2/src/eps/impls/ciss/ciss.c > [1]PETSC ERROR: #10 EPSSolve_CISS() line 868 in /private/tmp/slepc-20160503-86919-187yrgp/slepc-3.6.2/src/eps/impls/ciss/ciss.c > [1]PETSC ERROR: #11 EPSSolve() line 101 in /private/tmp/slepc-20160503-86919-187yrgp/slepc-3.6.2/src/eps/interface/epssolve.c > [1]PETSC ERROR: #12 main() line 795 in hello.c > [1]PETSC ERROR: PETSc Option Table entries: > [1]PETSC ERROR: -n 31 > ----------------End of Error Message -------send entire error message to petsc-maint at mcs.anl.gov---------- > [3]PETSC ERROR: ----------------End of Error Message -------send entire error message to petsc-maint at mcs.anl.gov---------- > [1]PETSC ERROR: ----------------End of Error Message -------send entire error message to petsc-maint at mcs.anl.gov---------- > -------------------------------------------------------------------------- > MPI_ABORT was invoked on rank 2 in communicator MPI_COMM_WORLD > with errorcode 77. > > NOTE: invoking MPI_ABORT causes Open MPI to kill all MPI processes. > You may or may not see output from other processes, depending on > exactly when Open MPI kills them. > -------------------------------------------------------------------------- > [0]PETSC ERROR: ------------------------------------------------------------------------ > [0]PETSC ERROR: Caught signal number 15 Terminate: Some process (or the batch system) has told this process to end > [0]PETSC ERROR: Try option -start_in_debugger or -on_error_attach_debugger > [0]PETSC ERROR: or see http://www.mcs.anl.gov/petsc/documentation/faq.html#valgrind > [0]PETSC ERROR: or try http://valgrind.org on GNU/linux and Apple Mac OS X to find memory corruption errors > [0]PETSC ERROR: configure using --with-debugging=yes, recompile, link, and run > [0]PETSC ERROR: to get more information on the crash. > [0]PETSC ERROR: --------------------- Error Message -------------------------------------------------------------- > [0]PETSC ERROR: Signal received > [0]PETSC ERROR: See http://www.mcs.anl.gov/petsc/documentation/faq.html for trouble shooting. 
> [0]PETSC ERROR: Petsc Release Version 3.6.3, Dec, 03, 2015 > [0]PETSC ERROR: hello on a complex named st-d13411.nfit.au.dk by jonat4n Mon May 9 17:47:26 2016 > [0]PETSC ERROR: Configure options CC=/usr/local/bin/mpicc CXX=/usr/local/bin/mpicxx F77=/usr/local/bin/mpif77 FC=/usr/local/bin/mpif90 --with-shared-libraries=1 --with-pthread=0 --with-openmp=0 --with-debugging=0 --with-ssl=0 --with-superlu_dist-include=/usr/local/opt/superlu_dist/include/superlu_dist --with-superlu_dist-lib="-L/usr/local/opt/superlu_dist/lib -lsuperlu_dist" --with-superlu-include=/usr/local/Cellar/superlu43/4.3_1/include/superlu --with-superlu-lib="-L/usr/local/Cellar/superlu43/4.3_1/lib -lsuperlu" --with-fftw-dir=/usr/local/opt/fftw --with-netcdf-dir=/usr/local/opt/netcdf --with-suitesparse-dir=/usr/local/opt/suite-sparse --with-hdf5-dir=/usr/local/opt/hdf5 --with-metis-dir=/usr/local/opt/metis --with-parmetis-dir=/usr/local/opt/parmetis --with-scalapack-dir=/usr/local/opt/scalapack --with-mumps-dir=/usr/local/opt/mumps/libexec --with-x=0 --prefix=/usr/local/Cellar/petsc/3.6.3_4/complex --with-scalar-type=complex > [0]PETSC ERROR: #1 User provided function() line 0 in unknown file > [st-d13411.nfit.au.dk:90621] 3 more processes have sent help message help-mpi-api.txt / mpi-abort > [st-d13411.nfit.au.dk:90621] Set MCA parameter "orte_base_help_aggregate" to 0 to see all help / error messages > > ??????????? > When running with 1 core, I get the error message: > > [0]PETSC ERROR: --------------------- Error Message -------------------------------------------------------------- > [0]PETSC ERROR: Error in external library > [0]PETSC ERROR: Error in Lapack xGGES 154 > [0]PETSC ERROR: See http://www.mcs.anl.gov/petsc/documentation/faq.html for trouble shooting. > [0]PETSC ERROR: Petsc Release Version 3.6.3, Dec, 03, 2015 > [0]PETSC ERROR: hello on a complex named st-d13411.nfit.au.dk by jonat4n Mon May 9 17:59:03 2016 > [0]PETSC ERROR: Configure options CC=/usr/local/bin/mpicc CXX=/usr/local/bin/mpicxx F77=/usr/local/bin/mpif77 FC=/usr/local/bin/mpif90 --with-shared-libraries=1 --with-pthread=0 --with-openmp=0 --with-debugging=0 --with-ssl=0 --with-superlu_dist-include=/usr/local/opt/superlu_dist/include/superlu_dist --with-superlu_dist-lib="-L/usr/local/opt/superlu_dist/lib -lsuperlu_dist" --with-superlu-include=/usr/local/Cellar/superlu43/4.3_1/include/superlu --with-superlu-lib="-L/usr/local/Cellar/superlu43/4.3_1/lib -lsuperlu" --with-fftw-dir=/usr/local/opt/fftw --with-netcdf-dir=/usr/local/opt/netcdf --with-suitesparse-dir=/usr/local/opt/suite-sparse --with-hdf5-dir=/usr/local/opt/hdf5 --with-metis-dir=/usr/local/opt/metis --with-parmetis-dir=/usr/local/opt/parmetis --with-scalapack-dir=/usr/local/opt/scalapack --with-mumps-dir=/usr/local/opt/mumps/libexec --with-x=0 --prefix=/usr/local/Cellar/petsc/3.6.3_4/complex --with-scalar-type=complex > [0]PETSC ERROR: #1 DSSolve_GNHEP() line 581 in /private/tmp/slepc-20160503-86919-187yrgp/slepc-3.6.2/src/sys/classes/ds/impls/gnhep/dsgnhep.c > [0]PETSC ERROR: #2 DSSolve() line 543 in /private/tmp/slepc-20160503-86919-187yrgp/slepc-3.6.2/src/sys/classes/ds/interface/dsops.c > [0]PETSC ERROR: #3 EPSSolve_CISS() line 943 in /private/tmp/slepc-20160503-86919-187yrgp/slepc-3.6.2/src/eps/impls/ciss/ciss.c > [0]PETSC ERROR: #4 EPSSolve() line 101 in /private/tmp/slepc-20160503-86919-187yrgp/slepc-3.6.2/src/eps/interface/epssolve.c > [0]PETSC ERROR: #5 main() line 795 in hello.c > [0]PETSC ERROR: PETSc Option Table entries: > [0]PETSC ERROR: -n 31 > [0]PETSC ERROR: 
----------------End of Error Message -------send entire error message to petsc-maint at mcs.anl.gov----------

From jakub.kruzik at vsb.cz Tue May 10 03:54:06 2016 From: jakub.kruzik at vsb.cz (Jakub Kruzik) Date: Tue, 10 May 2016 10:54:06 +0200 Subject: [petsc-users] Task parallelism in PETSc Message-ID: <7859679a-7216-457d-8069-9ec586160cde@vsb.cz>

Hi,

what do you recommend for task parallelism in PETSc? What I want from it:
- task dependencies
- compiler independence
- active project

The program is intended to run on large clusters and uses MPI+OpenMP. Currently I am thinking about StarPU or OpenMP; however, I don't have any experience with using tasks in either of them.

Best,
Jakub

From asmund.ervik at ntnu.no Wed May 11 02:45:21 2016 From: asmund.ervik at ntnu.no (=?UTF-8?Q?=c3=85smund_Ervik?=) Date: Wed, 11 May 2016 09:45:21 +0200 Subject: [petsc-users] Questions about DMForest scope and status Message-ID: <5732E311.1040501@ntnu.no>

Dear all (Tobin and Matt in particular),

I have some questions about DMForest, sparked by a mention of this in an earlier email to the list by Matt.

As I understand it, DMForest is a DM which aims to provide a hierarchically refined grid, i.e. structured adaptive mesh refinement (AMR). Correct me if I'm wrong here.

Now, my questions:

What is the level of maturity of DMForest? Could I start using it today, or would it be better to wait for upcoming features/fixes?

To what extent is DMForest intended to be a drop-in replacement for DMDA? For use in a CFD code already using DMDA, let's say finite difference, explicit time-integration, compressible Navier-Stokes: should I expect switching to be a major undertaking? How encapsulated would the changes be?

Is DMForest intended to provide dynamic adaptivity? Let's say I'm considering a fluid jet which creates a Kelvin-Helmholtz instability, and I want to refine the mesh to resolve the vortices appearing. Is this feasible? If possible, I assume there is some limitation on how frequently one can re-adapt to maintain good performance?

What are my options for load-balancing with DMForest? Suppose I have some process coupled with the flow (e.g. chemistry, thermodynamics) that takes a significant amount of computing time, but not in a uniform way across all grid cells, is there (or could there be) an interface for specifying "these cells will have to do expensive calculations, these cells will do less expensive calculations"?

This latter point would IMO also be useful for DMDA/DMPlex, so maybe it could be made generic? The biggest problem I guess is the tradeoff between the interface being simple enough and powerful enough. Perhaps a "good enough" option would be some sort of autotuning based on logging the time taken to arrive at an MPI sync point for all the MPI processes, and offloading work from the slowest ones to their neighbours.

Thanks,
Åsmund
-------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 473 bytes Desc: OpenPGP digital signature URL:

From knepley at gmail.com Wed May 11 08:36:14 2016 From: knepley at gmail.com (Matthew Knepley) Date: Wed, 11 May 2016 08:36:14 -0500 Subject: [petsc-users] Questions about DMForest scope and status In-Reply-To: <5732E311.1040501@ntnu.no> References: <5732E311.1040501@ntnu.no> Message-ID:

On Wed, May 11, 2016 at 2:45 AM, Åsmund Ervik wrote:
> Dear all (Tobin and Matt in particular),
>
> I have some questions about DMForest, sparked by a mention of this in an earlier email to the list by Matt.
>
> As I understand it, DMForest is a DM which aims to provide a hierarchically refined grid, i.e. structured adaptive mesh refinement (AMR). Correct me if I'm wrong here.
>
> Now, my questions:
>
Toby is more qualified to answer, but I will start:

> What is the level of maturity of DMForest? Could I start using it today, or would it be better to wait for upcoming features/fixes?
>
You can use DMForest purely as a Plex now, and I would characterize this as ready to be tried out. If you want to use it mainly as a forest (structured adaptive mesh), then I would say it's alpha.

> To what extent is DMForest intended to be a drop-in replacement for DMDA? For use in a CFD code already using DMDA, let's say finite difference, explicit time-integration, compressible Navier-Stokes: should I expect switching to be a major undertaking? How encapsulated would the changes be?
>
Not at all. The major advantage of DMDA is the regular topology (gone) and the nice array interface (not there either). DMForest is mainly focused on cell-focused discretizations like DG, and also more traditional FEM. It is no coincidence that this is also how people use p4est on its own.

> Is DMForest intended to provide dynamic adaptivity? Let's say I'm considering a fluid jet which creates a Kelvin-Helmholtz instability, and I want to refine the mesh to resolve the vortices appearing. Is this feasible? If possible, I assume there is some limitation on how frequently one can re-adapt to maintain good performance?
>
Yes. I would say it's intended to do very frequent adaptation.

> What are my options for load-balancing with DMForest? Suppose I have some process coupled with the flow (e.g. chemistry, thermodynamics) that takes a significant amount of computing time, but not in a uniform way across all grid cells, is there (or could there be) an interface for specifying "these cells will have to do expensive calculations, these cells will do less expensive calculations"?
>
Both Forest and Plex will accept weights for partitioning. It's not quite clear what the final interface should be, but we would help you to get your weights in.

Thanks,

Matt

> This latter point would IMO also be useful for DMDA/DMPlex, so maybe it could be made generic? The biggest problem I guess is the tradeoff between the interface being simple enough and powerful enough. Perhaps a "good enough" option would be some sort of autotuning based on logging the time taken to arrive at an MPI sync point for all the MPI processes, and offloading work from the slowest ones to their neighbours.
>
> Thanks,
> Åsmund
>
-- What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. -- Norbert Wiener
-------------- next part -------------- An HTML attachment was scrubbed... URL:

From doougsini at gmail.com Wed May 11 23:29:51 2016 From: doougsini at gmail.com (Seungbum Koo) Date: Wed, 11 May 2016 23:29:51 -0500 Subject: [petsc-users] problem during configuring Message-ID:

I am installing PETSc-3.7.0 on my new computer. The only thing that I installed after installing Ubuntu 16.04 is the Intel compilers. While configuring, I was faced with the following error message.
seungbum at asterix:~/install/petsc-3.7.0$ ./configure PETSC_DIR=/home/seungbum/install/petsc-3.7.0 PETSC_ARCH=arch-intel-debug --with-mpi-dir=/home/seungbum/install --with-debugging=1 --with-debugger=gdb --with-blas-lapack-dir=/home/seungbum/intel/mkl --download-hdf5=yes --download-superlu=yes --download-superlu_dist=yes --download-metis=yes --download-parmetis=yes --download-scalapack=yes --download-mumps=yes =============================================================================== Configuring PETSc to compile on your system =============================================================================== TESTING: checkCLibraries from config.compilers(config/BuildSystem/config/compilers.py:168) ******************************************************************************* UNABLE to EXECUTE BINARIES for ./configure ------------------------------------------------------------------------------- Cannot run executables created with FC. If this machine uses a batch system to submit jobs you will need to configure using ./configure with the additional option --with-batch. Otherwise there is problem with the compilers. Can you compile and run code with your compiler '/home/seungbum/install/bin/mpif90'? See http://www.mcs.anl.gov/petsc/documentation/faq.html#libimf ******************************************************************************* seungbum at asterix:~/install/petsc-3.7.0$ It is confusing because I already installed PETSc-3.7.0 on the workstation in my lab with exact same configure options without any problems. Can anyone help? Seungbum -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: configure.log Type: text/x-log Size: 1504841 bytes Desc: not available URL: From ztdepyahoo at 163.com Thu May 12 00:32:31 2016 From: ztdepyahoo at 163.com (ztdepyahoo at 163.com) Date: Thu, 12 May 2016 13:32:31 +0800 Subject: [petsc-users] how to obtain the vector value at the other processor. References: <201605030929463862822@163.com> Message-ID: <201605121332303518931@163.com> Dear professor: i need to access the vector values which are not locted in local cpu. how to obtain them. Regards -------------- next part -------------- An HTML attachment was scrubbed... URL: From bsmith at mcs.anl.gov Thu May 12 00:36:49 2016 From: bsmith at mcs.anl.gov (Barry Smith) Date: Thu, 12 May 2016 00:36:49 -0500 Subject: [petsc-users] how to obtain the vector value at the other processor. In-Reply-To: <201605121332303518931@163.com> References: <201605030929463862822@163.com> <201605121332303518931@163.com> Message-ID: <30A6B175-8185-451F-AC99-2B35B1B088E6@mcs.anl.gov> If you wish all the entries on one or all processes then see http://www.mcs.anl.gov/petsc/documentation/faq.html#mpi-vec-to-seq-vec or http://www.mcs.anl.gov/petsc/documentation/faq.html#mpi-vec-to-mpi-vec if you want a subset of entries from other processes on each process then you need to see the manual page for VecScatterCreate(). Barry > On May 12, 2016, at 12:32 AM, ztdepyahoo at 163.com wrote: > > Dear professor: > i need to access the vector values which are not locted in local cpu. how to obtain them. 
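
A minimal sketch of the VecScatterCreate() route mentioned above, gathering a handful of off-process entries into a small sequential vector; here x stands for the existing parallel vector and the two global indices are only placeholders, not values from the original question:

   Vec            x, y;                 /* x: distributed source, y: local result */
   IS             from;
   VecScatter     scat;
   PetscErrorCode ierr;
   PetscInt       wanted[2] = {10, 42}; /* illustrative global indices of off-process entries */

   ierr = ISCreateGeneral(PETSC_COMM_SELF,2,wanted,PETSC_COPY_VALUES,&from);CHKERRQ(ierr);
   ierr = VecCreateSeq(PETSC_COMM_SELF,2,&y);CHKERRQ(ierr);
   ierr = VecScatterCreate(x,from,y,NULL,&scat);CHKERRQ(ierr);  /* NULL: fill y in order 0,1,... */
   ierr = VecScatterBegin(scat,x,y,INSERT_VALUES,SCATTER_FORWARD);CHKERRQ(ierr);
   ierr = VecScatterEnd(scat,x,y,INSERT_VALUES,SCATTER_FORWARD);CHKERRQ(ierr);
   ierr = VecScatterDestroy(&scat);CHKERRQ(ierr);
   ierr = ISDestroy(&from);CHKERRQ(ierr);
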
> Regards From sdettrick at trialphaenergy.com Thu May 12 04:42:54 2016 From: sdettrick at trialphaenergy.com (Sean Dettrick) Date: Thu, 12 May 2016 02:42:54 -0700 Subject: [petsc-users] accessing DMDA Vec ghost values Message-ID: Hi, When discussing DMDAVecGetArrayDOF etc in section 2.4.4, the PETSc 3.7 manual says "The array is accessed using the usual global indexing on the entire grid, but the user may only refer to the local and ghost entries of this array as all other entries are undefined?. OK so far. But how to access the ghost entries? With a 2D DMDA, I can do this OK: PetscInt xs,xm,ys,ym; ierr=DMDAGetCorners(da,&xs,&ys,0,&xm,&ym,0);CHKERRQ(ierr); PetscScalar ***es; ierr=DMDAVecGetArrayDOF(da,Es,&es);CHKERRQ(ierr); for (int j=ys; j < ys+ym; j++) { for (int i=xs; i < xs+xm;i++) { es[j][i][0]=1.; es[j][i][1]=1.; } } ierr=DMDAVecRestoreArrayDOF(da,Es,&es);CHKERRQ(ierr); But if I replace DMDAGetCorners with DMDAGetGhostCorners, then the code crashes with a seg fault, presumably due to out of bounds memory access. Is that supposed to happen? What?s the remedy? Thanks very much! Sean Dettrick -------------- next part -------------- An HTML attachment was scrubbed... URL: From dave.mayhem23 at gmail.com Thu May 12 04:48:18 2016 From: dave.mayhem23 at gmail.com (Dave May) Date: Thu, 12 May 2016 10:48:18 +0100 Subject: [petsc-users] accessing DMDA Vec ghost values In-Reply-To: References: Message-ID: On 12 May 2016 at 10:42, Sean Dettrick wrote: > Hi, > > When discussing DMDAVecGetArrayDOF etc in section 2.4.4, the PETSc 3.7 > manual says "The array is accessed using the usual global indexing on the > entire grid, but the user may only refer to the local and ghost entries > of this array as all other entries are undefined?. > > OK so far. But how to access the ghost entries? > > With a 2D DMDA, I can do this OK: > > > PetscInt xs,xm,ys,ym; > > ierr=DMDAGetCorners(da,&xs,&ys,0,&xm,&ym,0);CHKERRQ(ierr); > > PetscScalar ***es; > > ierr=DMDAVecGetArrayDOF(da,Es,&es);CHKERRQ(ierr); > > > for (int j=ys; j < ys+ym; j++) { > > for (int i=xs; i < xs+xm;i++) { > > es[j][i][0]=1.; > > es[j][i][1]=1.; > > } > > } > > ierr=DMDAVecRestoreArrayDOF(da,Es,&es);CHKERRQ(ierr); > > But if I replace DMDAGetCorners with DMDAGetGhostCorners, then the code > crashes with a seg fault, presumably due to out of bounds memory access. > > Is that supposed to happen? > If you created the vector Es using the function DM{Get,Create}GlobalVector(), then the answer is yes. > What?s the remedy? > If you want to access the ghost entries, you need to create the vector using the function DM{Get,Create}LocalVector(). Thanks, Dave > > Thanks very much! > > Sean Dettrick > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From Federico.Miorelli at CGG.COM Thu May 12 05:36:26 2016 From: Federico.Miorelli at CGG.COM (Miorelli, Federico) Date: Thu, 12 May 2016 10:36:26 +0000 Subject: [petsc-users] DMDAGetAO and AODestroy Message-ID: <8D478341240222479E0DB361E050CB6A729332@msy-emb04.int.cggveritas.com> In one of my subroutines I'm calling DMDAGetAO to get the application ordering from a DM structure. After using it I was calling AODestroy. Everything worked fine until I called the subroutine for the second time, when the program crashed. Removing the call to AODestroy solved the crash. Am I supposed to AODestroy the output of DMDAGetAO or not? I was worried that DMDAGetAO would allocate memory that I need to release. 
Thanks, Federico ______ ______ ______ Federico Miorelli Senior R&D Geophysicist Subsurface Imaging - General Geophysics Italy CGG Electromagnetics (Italy) Srl This email and any accompanying attachments are confidential. If you received this email by mistake, please delete it from your system. Any review, disclosure, copying, distribution, or use of the email by others is strictly prohibited. -------------- next part -------------- An HTML attachment was scrubbed... URL: From dave.mayhem23 at gmail.com Thu May 12 06:03:18 2016 From: dave.mayhem23 at gmail.com (Dave May) Date: Thu, 12 May 2016 12:03:18 +0100 Subject: [petsc-users] DMDAGetAO and AODestroy In-Reply-To: <8D478341240222479E0DB361E050CB6A729332@msy-emb04.int.cggveritas.com> References: <8D478341240222479E0DB361E050CB6A729332@msy-emb04.int.cggveritas.com> Message-ID: On 12 May 2016 at 11:36, Miorelli, Federico wrote: > In one of my subroutines I'm calling DMDAGetAO to get the application > ordering from a DM structure. > > After using it I was calling AODestroy. > > > > Everything worked fine until I called the subroutine for the second time, > when the program crashed. > > Removing the call to AODestroy solved the crash. > > > > Am I supposed to AODestroy the output of DMDAGetAO or not? I was worried > that DMDAGetAO would allocate memory that I need to release. > You are not supposed to call AODestroy() on the AO returned. The pointer being returned is used internally by the DMDA. Thanks Dave > > > Thanks, > > > > Federico > > > > *______* *______* *______* > > Federico Miorelli > > > > Senior R&D Geophysicist > > *Subsurface Imaging - General Geophysics **Italy* > > > > CGG Electromagnetics (Italy) Srl > > > *This email and any accompanying attachments are confidential. If you > received this email by mistake, please delete it from your system. Any > review, disclosure, copying, distribution, or use of the email by others is > strictly prohibited.* > -------------- next part -------------- An HTML attachment was scrubbed... URL: From Federico.Miorelli at CGG.COM Thu May 12 07:01:30 2016 From: Federico.Miorelli at CGG.COM (Miorelli, Federico) Date: Thu, 12 May 2016 12:01:30 +0000 Subject: [petsc-users] DMDAGetAO and AODestroy In-Reply-To: References: <8D478341240222479E0DB361E050CB6A729332@msy-emb04.int.cggveritas.com> Message-ID: <8D478341240222479E0DB361E050CB6A72938F@msy-emb04.int.cggveritas.com> Dave, Thanks for your answer. For consistency with otehr PETSc routines it would perhaps make sense to create a DMDARestoreAO function? Regards, Federico ______ ______ ______ Federico Miorelli Senior R&D Geophysicist Subsurface Imaging - General Geophysics Italy CGG Electromagnetics (Italy) Srl From: Dave May [mailto:dave.mayhem23 at gmail.com] Sent: gioved? 12 maggio 2016 13:03 To: Miorelli, Federico Cc: petsc-users at mcs.anl.gov Subject: Re: [petsc-users] DMDAGetAO and AODestroy On 12 May 2016 at 11:36, Miorelli, Federico > wrote: In one of my subroutines I'm calling DMDAGetAO to get the application ordering from a DM structure. After using it I was calling AODestroy. Everything worked fine until I called the subroutine for the second time, when the program crashed. Removing the call to AODestroy solved the crash. Am I supposed to AODestroy the output of DMDAGetAO or not? I was worried that DMDAGetAO would allocate memory that I need to release. You are not supposed to call AODestroy() on the AO returned. The pointer being returned is used internally by the DMDA. 
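
A minimal sketch of that usage; the index array is only an illustration:

   AO             ao;
   PetscErrorCode ierr;
   PetscInt       idx[3] = {0, 5, 9};  /* illustrative indices in the application ordering */

   ierr = DMDAGetAO(da,&ao);CHKERRQ(ierr);
   ierr = AOApplicationToPetsc(ao,3,idx);CHKERRQ(ierr);  /* idx now holds the PETSc ordering */
   /* no AODestroy(&ao) here: the AO is owned by the DMDA and is freed with it */
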
Thanks Dave Thanks, Federico ______ ______ ______ Federico Miorelli Senior R&D Geophysicist Subsurface Imaging - General Geophysics Italy CGG Electromagnetics (Italy) Srl This email and any accompanying attachments are confidential. If you received this email by mistake, please delete it from your system. Any review, disclosure, copying, distribution, or use of the email by others is strictly prohibited. This email and any accompanying attachments are confidential. If you received this email by mistake, please delete it from your system. Any review, disclosure, copying, distribution, or use of the email by others is strictly prohibited. -------------- next part -------------- An HTML attachment was scrubbed... URL: From dave.mayhem23 at gmail.com Thu May 12 09:37:48 2016 From: dave.mayhem23 at gmail.com (Dave May) Date: Thu, 12 May 2016 15:37:48 +0100 Subject: [petsc-users] DMDAGetAO and AODestroy In-Reply-To: <8D478341240222479E0DB361E050CB6A72938F@msy-emb04.int.cggveritas.com> References: <8D478341240222479E0DB361E050CB6A729332@msy-emb04.int.cggveritas.com> <8D478341240222479E0DB361E050CB6A72938F@msy-emb04.int.cggveritas.com> Message-ID: On 12 May 2016 at 13:01, Miorelli, Federico wrote: > Dave, > > > > Thanks for your answer. > > For consistency with otehr PETSc routines it would perhaps make sense to > create a DMDARestoreAO function? > Not really. The pattern used here is the same as DMGetCoordinateDM() DMGetCoordinates() etc I agree it's not always immediately obvious whether one should call destroy on the object returned. The best rule I can suggest to follow is that if the man page doesn't explicitly instruct you to call the destroy method, you should not call destroy. If a destroy is required, there will be a note in the man page indicating this, for example http://www.mcs.anl.gov/petsc/petsc-current/docs/manualpages/KSP/KSPChebyshevEstEigGetKSP.html or http://www.mcs.anl.gov/petsc/petsc-current/docs/manualpages/DM/DMCompositeGetGlobalISs.html The man pages are not 100% consistent: Sometimes they will say "don't call destroy on object XXX as it is used internally by YYY". Other times it will mention the reference counter has been incremented. Other times nothing is stated (implicitly meaning no destroy is required). If in doubt, just email the petsc-users list :D Thanks, Dave > > > Regards, > > Federico > > > > *______* *______* *______* > > Federico Miorelli > > > > Senior R&D Geophysicist > > *Subsurface Imaging - General Geophysics **Italy* > > > > CGG Electromagnetics (Italy) Srl > > > > > > *From:* Dave May [mailto:dave.mayhem23 at gmail.com] > *Sent:* gioved? 12 maggio 2016 13:03 > *To:* Miorelli, Federico > *Cc:* petsc-users at mcs.anl.gov > *Subject:* Re: [petsc-users] DMDAGetAO and AODestroy > > > > > > > > On 12 May 2016 at 11:36, Miorelli, Federico > wrote: > > In one of my subroutines I'm calling DMDAGetAO to get the application > ordering from a DM structure. > > After using it I was calling AODestroy. > > > > Everything worked fine until I called the subroutine for the second time, > when the program crashed. > > Removing the call to AODestroy solved the crash. > > > > Am I supposed to AODestroy the output of DMDAGetAO or not? I was worried > that DMDAGetAO would allocate memory that I need to release. > > > > You are not supposed to call AODestroy() on the AO returned. > > The pointer being returned is used internally by the DMDA. 
> > Thanks > > Dave > > > > > > > > Thanks, > > > > Federico > > > > *______* *______* *______* > > Federico Miorelli > > > > Senior R&D Geophysicist > > *Subsurface Imaging - General Geophysics **Italy* > > > > CGG Electromagnetics (Italy) Srl > > > *This email and any accompanying attachments are confidential. If you > received this email by mistake, please delete it from your system. Any > review, disclosure, copying, distribution, or use of the email by others is > strictly prohibited.* > > > > > *This email and any accompanying attachments are confidential. If you > received this email by mistake, please delete it from your system. Any > review, disclosure, copying, distribution, or use of the email by others is > strictly prohibited.* > -------------- next part -------------- An HTML attachment was scrubbed... URL: From sdettrick at trialphaenergy.com Thu May 12 10:50:08 2016 From: sdettrick at trialphaenergy.com (Sean Dettrick) Date: Thu, 12 May 2016 08:50:08 -0700 Subject: [petsc-users] accessing DMDA Vec ghost values In-Reply-To: References: Message-ID: From: > on behalf of Dave May > Date: Thursday, May 12, 2016 at 2:48 AM To: Sean Dettrick > Cc: "petsc-users at mcs.anl.gov" > Subject: Re: [petsc-users] accessing DMDA Vec ghost values On 12 May 2016 at 10:42, Sean Dettrick > wrote: Hi, When discussing DMDAVecGetArrayDOF etc in section 2.4.4, the PETSc 3.7 manual says "The array is accessed using the usual global indexing on the entire grid, but the user may only refer to the local and ghost entries of this array as all other entries are undefined?. OK so far. But how to access the ghost entries? With a 2D DMDA, I can do this OK: PetscInt xs,xm,ys,ym; ierr=DMDAGetCorners(da,&xs,&ys,0,&xm,&ym,0);CHKERRQ(ierr); PetscScalar ***es; ierr=DMDAVecGetArrayDOF(da,Es,&es);CHKERRQ(ierr); for (int j=ys; j < ys+ym; j++) { for (int i=xs; i < xs+xm;i++) { es[j][i][0]=1.; es[j][i][1]=1.; } } ierr=DMDAVecRestoreArrayDOF(da,Es,&es);CHKERRQ(ierr); But if I replace DMDAGetCorners with DMDAGetGhostCorners, then the code crashes with a seg fault, presumably due to out of bounds memory access. Is that supposed to happen? If you created the vector Es using the function DM{Get,Create}GlobalVector(), then the answer is yes. What?s the remedy? If you want to access the ghost entries, you need to create the vector using the function DM{Get,Create}LocalVector(). Thanks! Somehow I missed DM{Get,Create}LocalVector(). BTW what is the difference between the Get and Create versions? It is not obvious from the documentation. Also, can you explain the difference between DMDAVecGetArrayDOF and DMDAVecGetArrayDOFRead? Thanks again, Sean Thanks, Dave Thanks very much! Sean Dettrick -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From knepley at gmail.com Thu May 12 10:55:53 2016 From: knepley at gmail.com (Matthew Knepley) Date: Thu, 12 May 2016 10:55:53 -0500 Subject: [petsc-users] accessing DMDA Vec ghost values In-Reply-To: References: Message-ID: On Thu, May 12, 2016 at 10:50 AM, Sean Dettrick < sdettrick at trialphaenergy.com> wrote: > > > From: on behalf of Dave May < > dave.mayhem23 at gmail.com> > Date: Thursday, May 12, 2016 at 2:48 AM > To: Sean Dettrick > Cc: "petsc-users at mcs.anl.gov" > Subject: Re: [petsc-users] accessing DMDA Vec ghost values > > > > On 12 May 2016 at 10:42, Sean Dettrick > wrote: > >> Hi, >> >> When discussing DMDAVecGetArrayDOF etc in section 2.4.4, the PETSc 3.7 >> manual says "The array is accessed using the usual global indexing on >> the entire grid, but the user may only refer to the local and ghost >> entries of this array as all other entries are undefined?. >> >> OK so far. But how to access the ghost entries? >> >> With a 2D DMDA, I can do this OK: >> >> >> PetscInt xs,xm,ys,ym; >> >> ierr=DMDAGetCorners(da,&xs,&ys,0,&xm,&ym,0);CHKERRQ(ierr); >> >> PetscScalar ***es; >> >> ierr=DMDAVecGetArrayDOF(da,Es,&es);CHKERRQ(ierr); >> >> >> for (int j=ys; j < ys+ym; j++) { >> >> for (int i=xs; i < xs+xm;i++) { >> >> es[j][i][0]=1.; >> >> es[j][i][1]=1.; >> >> } >> >> } >> >> ierr=DMDAVecRestoreArrayDOF(da,Es,&es);CHKERRQ(ierr); >> >> But if I replace DMDAGetCorners with DMDAGetGhostCorners, then the code >> crashes with a seg fault, presumably due to out of bounds memory access. >> >> Is that supposed to happen? >> > > If you created the vector Es using > the function DM{Get,Create}GlobalVector(), then the answer is yes. > > >> What?s the remedy? >> > > If you want to access the ghost entries, you need to create the vector > using the function DM{Get,Create}LocalVector(). > > > > > Thanks! Somehow I missed DM{Get,Create}LocalVector(). BTW what is the > difference between the Get and Create versions? It is not obvious from the > documentation. > Create gives back a vector that you own and need to VecDestroy(). Get gives back a vector we own and you need to call Restore. > Also, can you explain the difference between DMDAVecGetArrayDOF and > DMDAVecGetArrayDOFRead? > One allows writing into the storage, whereas the other does not. Matt > Thanks again, > Sean > > > > Thanks, > Dave > > > >> >> Thanks very much! >> >> Sean Dettrick >> >> > -- What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. -- Norbert Wiener -------------- next part -------------- An HTML attachment was scrubbed... URL: From dave.mayhem23 at gmail.com Thu May 12 10:58:51 2016 From: dave.mayhem23 at gmail.com (Dave May) Date: Thu, 12 May 2016 16:58:51 +0100 Subject: [petsc-users] accessing DMDA Vec ghost values In-Reply-To: References: Message-ID: Matt beat me to the punch... :D Anyway, here is my more detailed answer. > Thanks! Somehow I missed DM{Get,Create}LocalVector(). BTW what is the > difference between the Get and Create versions? It is not obvious from the > documentation. > The DMDA contains a pool of vectors (both local and global) which can be re-used by the user. This avoids the need to continually allocate and deallocate memory. Thus, the Get methods are simply an optimization. The Get methods retrieve from the pool, a vector which isn't currently in use. In this case, You can think of Restore as returning the vector back to the pool to be used somewhere else. 
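
Tying this back to the ghost-value question, a minimal sketch of the whole pattern using the da and Es from the earlier snippet; note that changes made to ghost entries of the local vector are not communicated back to the global vector unless you call DMLocalToGlobalBegin/End afterwards:

   Vec            Esloc;
   PetscScalar    ***es;
   PetscInt       gxs,gys,gxm,gym;
   PetscErrorCode ierr;

   ierr = DMGetLocalVector(da,&Esloc);CHKERRQ(ierr);
   ierr = DMGlobalToLocalBegin(da,Es,INSERT_VALUES,Esloc);CHKERRQ(ierr);
   ierr = DMGlobalToLocalEnd(da,Es,INSERT_VALUES,Esloc);CHKERRQ(ierr);
   ierr = DMDAGetGhostCorners(da,&gxs,&gys,0,&gxm,&gym,0);CHKERRQ(ierr);
   ierr = DMDAVecGetArrayDOF(da,Esloc,&es);CHKERRQ(ierr);
   /* es[j][i][c] is now valid for gxs <= i < gxs+gxm, gys <= j < gys+gym, ghosts included */
   ierr = DMDAVecRestoreArrayDOF(da,Esloc,&es);CHKERRQ(ierr);
   ierr = DMRestoreLocalVector(da,&Esloc);CHKERRQ(ierr);
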
If all vectors in the pool are in use, a new one will be allocated for you. In this case, Restore will actually deallocate memory. Since Get methods may return vectors which have been used else where in the code, you should always call VecZeroEntries() on them. The Create methods ALWAYS allocate new memory and thus you ALWAYS need to call Destroy on them. Vectors obtained via VecCreate() will always be initialized with 0's. Thanks, Dave > > Also, can you explain the difference between DMDAVecGetArrayDOF and > DMDAVecGetArrayDOFRead? > > Thanks again, > Sean > > > > Thanks, > Dave > > > >> >> Thanks very much! >> >> Sean Dettrick >> >> > -------------- next part -------------- An HTML attachment was scrubbed... URL: From sdettrick at trialphaenergy.com Thu May 12 11:05:40 2016 From: sdettrick at trialphaenergy.com (Sean Dettrick) Date: Thu, 12 May 2016 09:05:40 -0700 Subject: [petsc-users] accessing DMDA Vec ghost values In-Reply-To: References: Message-ID: Thanks Matt and Dave for the explanation, that is very helpful. Best Sean On Thu, May 12, 2016 at 8:58 AM -0700, "Dave May" > wrote: Matt beat me to the punch... :D Anyway, here is my more detailed answer. Thanks! Somehow I missed DM{Get,Create}LocalVector(). BTW what is the difference between the Get and Create versions? It is not obvious from the documentation. The DMDA contains a pool of vectors (both local and global) which can be re-used by the user. This avoids the need to continually allocate and deallocate memory. Thus, the Get methods are simply an optimization. The Get methods retrieve from the pool, a vector which isn't currently in use. In this case, You can think of Restore as returning the vector back to the pool to be used somewhere else. If all vectors in the pool are in use, a new one will be allocated for you. In this case, Restore will actually deallocate memory. Since Get methods may return vectors which have been used else where in the code, you should always call VecZeroEntries() on them. The Create methods ALWAYS allocate new memory and thus you ALWAYS need to call Destroy on them. Vectors obtained via VecCreate() will always be initialized with 0's. Thanks, Dave Also, can you explain the difference between DMDAVecGetArrayDOF and DMDAVecGetArrayDOFRead? Thanks again, Sean Thanks, Dave Thanks very much! Sean Dettrick -------------- next part -------------- An HTML attachment was scrubbed... URL: From balay at mcs.anl.gov Thu May 12 14:04:32 2016 From: balay at mcs.anl.gov (Satish Balay) Date: Thu, 12 May 2016 14:04:32 -0500 Subject: [petsc-users] problem during configuring In-Reply-To: References: Message-ID: >>>>>>>>>>> compilers: Check that C libraries can be used from Fortran Pushing language FC Popping language FC Pushing language FC Popping language FC Pushing language FC Popping language FC **** Configure header /tmp/petsc-9NYxlk/confdefs.h **** <<<<<<<<<<<<< configure.log is missing the log corresponding to the error encountered. Usually this happens when the binaries cannot be run. One reason for that is - some of the .so files couldn't be loaded by ld.so You appear to have LD_LIBRARY_PATH set - so I'm not sure why you get this error. Matt would have to check configure.log to see why the corresponding errors are mising from configure.log Satish On Wed, 11 May 2016, Seungbum Koo wrote: > I am installing PETSc-3.7.0 on my new computer. The only thing that I > installed after installing Ubuntu 16.04 is Intel compilers. > > While configuring, I faced with following error message. 
> > seungbum at asterix:~/install/petsc-3.7.0$ ./configure > PETSC_DIR=/home/seungbum/install/petsc-3.7.0 PETSC_ARCH=arch-intel-debug > --with-mpi-dir=/home/seungbum/install --with-debugging=1 > --with-debugger=gdb --with-blas-lapack-dir=/home/seungbum/intel/mkl > --download-hdf5=yes --download-superlu=yes --download-superlu_dist=yes > --download-metis=yes --download-parmetis=yes --download-scalapack=yes > --download-mumps=yes > =============================================================================== > Configuring PETSc to compile on your > system > =============================================================================== > TESTING: checkCLibraries from > config.compilers(config/BuildSystem/config/compilers.py:168) > ******************************************************************************* > UNABLE to EXECUTE BINARIES for ./configure > ------------------------------------------------------------------------------- > Cannot run executables created with FC. If this machine uses a batch system > to submit jobs you will need to configure using ./configure with the > additional option --with-batch. > Otherwise there is problem with the compilers. Can you compile and run > code with your compiler '/home/seungbum/install/bin/mpif90'? > See http://www.mcs.anl.gov/petsc/documentation/faq.html#libimf > ******************************************************************************* > > seungbum at asterix:~/install/petsc-3.7.0$ > > It is confusing because I already installed PETSc-3.7.0 on the workstation > in my lab with exact same configure options without any problems. > > Can anyone help? > > Seungbum > From mlohry at princeton.edu Fri May 13 12:13:40 2016 From: mlohry at princeton.edu (Mark Lohry) Date: Fri, 13 May 2016 13:13:40 -0400 Subject: [petsc-users] Setting Vec raw contiguous data pointer, VecPlaceArray Message-ID: <57360B44.5010506@princeton.edu> I'm trying to interface to petsc with code using the Eigen C++ matrix library. I can get the raw data pointer for a petsc vec at have Eigen "map" that data without moving it using VecGetArray like so (correct me if anything is wrong here): void PetscVecToEigen(const Vec& pvec,unsigned int nrow,unsigned int ncol,Eigen::MatrixXd& emat){ PetscScalar *pdata; // Returns a pointer to a contiguous array containing this processor's portion // of the vector data. For standard vectors this doesn't use any copies. // If the the petsc vector is not in a contiguous array then it will copy // it to a contiguous array. VecGetArray(pvec,&pdata); // Make the Eigen type a map to the data. Need to be mindful of anything that // changes the underlying data location like re-allocations. emat = Eigen::Map(pdata,nrow,ncol); VecRestoreArray(pvec,&pdata); } and then I can use the Eigen::Matrix as usual, directly modifying the data in the petscvec. Is VecPlaceArray the inverse of this for setting the petsc vector data pointer? e.g. this function: void PetscEigenToVec(const Eigen::MatrixXd& emat,Vec& pvec){ VecPlaceArray(pvec,emat.data()); } How do I then set the corresponding petsc vec size? Would it just be VecSetSizes(pvec, emat.rows() * emat.cols(), PETSC_DECIDE)? Does VecSetSizes do any reallocations? 
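
For what it's worth, a small sketch of both directions under the same assumptions as the snippets above (PetscScalar built as double, contiguous storage, names pvec/emat/nrow/ncol reused from the question). The Eigen::Map template argument is written out here in case it was stripped from the message, and attaching existing storage to a Vec is normally done with VecCreateSeqWithArray or VecPlaceArray rather than VecSetSizes:

   // View the PETSc Vec data as an Eigen matrix without copying:
   PetscScalar   *pdata;
   PetscErrorCode ierr;
   ierr = VecGetArray(pvec,&pdata);CHKERRQ(ierr);
   Eigen::Map<Eigen::MatrixXd> emap(pdata,nrow,ncol);  // column-major, like MatrixXd
   // ... use emap; writes go straight into pvec ...
   ierr = VecRestoreArray(pvec,&pdata);CHKERRQ(ierr);

   // View the Eigen matrix data as a sequential PETSc Vec without copying:
   Vec pview;
   ierr = VecCreateSeqWithArray(PETSC_COMM_SELF,1,(PetscInt)(emat.rows()*emat.cols()),
                                emat.data(),&pview);CHKERRQ(ierr);
   // ... use pview; VecDestroy does not free emat's storage ...
   ierr = VecDestroy(&pview);CHKERRQ(ierr);
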
Thanks in advance, Mark Lohry From mlohry at princeton.edu Fri May 13 12:33:35 2016 From: mlohry at princeton.edu (Mark Lohry) Date: Fri, 13 May 2016 13:33:35 -0400 Subject: [petsc-users] Setting Vec raw contiguous data pointer, VecPlaceArray In-Reply-To: <57360B44.5010506@princeton.edu> References: <57360B44.5010506@princeton.edu> Message-ID: <57360FEF.2020504@princeton.edu> Sorry, disregard that. A few more minutes of looking seems like VecCreateSeqWithArray is what I wanted and this should work: void EigenToPetscVec(const Eigen::MatrixXd& emat,Vec& pvec){ VecCreateSeqWithArray(PESTC_COMM_SELF,1,emat.rows()*emat.cols(),emat.data,&pvec); } On 05/13/2016 01:13 PM, Mark Lohry wrote: > I'm trying to interface to petsc with code using the Eigen C++ matrix > library. I can get the raw data pointer for a petsc vec at have Eigen > "map" that data without moving it using VecGetArray like so (correct > me if anything is wrong here): > > void PetscVecToEigen(const Vec& pvec,unsigned int nrow,unsigned int > ncol,Eigen::MatrixXd& emat){ > PetscScalar *pdata; > // Returns a pointer to a contiguous array containing this > processor's portion > // of the vector data. For standard vectors this doesn't use any > copies. > // If the the petsc vector is not in a contiguous array then it will > copy > // it to a contiguous array. > VecGetArray(pvec,&pdata); > // Make the Eigen type a map to the data. Need to be mindful of > anything that > // changes the underlying data location like re-allocations. > emat = Eigen::Map(pdata,nrow,ncol); > VecRestoreArray(pvec,&pdata); > } > > and then I can use the Eigen::Matrix as usual, directly modifying the > data in the petscvec. > > Is VecPlaceArray the inverse of this for setting the petsc vector data > pointer? e.g. this function: > > void PetscEigenToVec(const Eigen::MatrixXd& emat,Vec& pvec){ > VecPlaceArray(pvec,emat.data()); > } > > > How do I then set the corresponding petsc vec size? Would it just be > VecSetSizes(pvec, emat.rows() * emat.cols(), PETSC_DECIDE)? > > Does VecSetSizes do any reallocations? > > > > > Thanks in advance, > Mark Lohry From dominic at steinitz.org Mon May 16 06:47:31 2016 From: dominic at steinitz.org (Dominic Steinitz) Date: Mon, 16 May 2016 12:47:31 +0100 Subject: [petsc-users] Problem Running Example In-Reply-To: <5738A51D.9060805@steinitz.org> References: <572EF5A3.9080501@steinitz.org> <572FACE7.6090704@steinitz.org> <5738A51D.9060805@steinitz.org> Message-ID: <5739B353.7050407@steinitz.org> On 15/05/2016 17:34, Dominic Steinitz wrote: > HYDU_create_process (utils/launch/launch.c:75): Sorry for the noise. I understand why I got the above error message but now I get a different one where previously the example used to work. I have no idea what I have done differently. > ~/petsc/src/ksp/ksp/examples/tutorials $ > /Users/dom/petsc/arch-darwin-c-debug/bin/mpiexec -n 1 ./ex1 > [0]PETSC ERROR: --------------------- Error Message > -------------------------------------------------------------- > [0]PETSC ERROR: Null argument, when expecting valid pointer > [0]PETSC ERROR: Null Pointer: Parameter # 2 > [0]PETSC ERROR: See > http://www.mcs.anl.gov/petsc/documentation/faq.html for trouble shooting. 
> [0]PETSC ERROR: Petsc Release Version 3.6.3, unknown > [0]PETSC ERROR: ./ex1 on a arch-darwin-c-debug named > Dominics-MacBook-Pro.local by dom Mon May 16 12:40:21 2016 > [0]PETSC ERROR: Configure options --with-cc=gcc --with-cxx=g++ > --with-fc=gfortran --download-fblaslapack --download-mpich > [0]PETSC ERROR: #1 PetscOptionsGetInt() line 1480 in > /Users/dom/petsc/src/sys/objects/options.c > [0]PETSC ERROR: #2 main() line 39 in > /Users/dom/petsc/src/ksp/ksp/examples/tutorials/ex1.c > [0]PETSC ERROR: No PETSc Option Table entries > [0]PETSC ERROR: ----------------End of Error Message -------send > entire error message to petsc-maint at mcs.anl.gov---------- Here's how I built the example > ~/petsc/src/ksp/ksp/examples/tutorials $ make clean > ~/petsc/src/ksp/ksp/examples/tutorials $ make ex1 > /Users/dom/petsc/arch-darwin-c-debug/bin/mpicc -o ex1.o -c -fPIC -Wall > -Wwrite-strings -Wno-strict-aliasing -Wno-unknown-pragmas -g3 -O0 > -I/Users/dom/petsc/include > -I/Users/dom/petsc/arch-darwin-c-debug/include -I/opt/X11/include > -I/usr/local/include `pwd`/ex1.c > /Users/dom/petsc/arch-darwin-c-debug/bin/mpicc > -Wl,-multiply_defined,suppress -Wl,-multiply_defined -Wl,suppress > -Wl,-commons,use_dylibs -Wl,-search_paths_first > -Wl,-multiply_defined,suppress -Wl,-multiply_defined -Wl,suppress > -Wl,-commons,use_dylibs -Wl,-search_paths_first -fPIC -Wall > -Wwrite-strings -Wno-strict-aliasing -Wno-unknown-pragmas -g3 -O0 -o > ex1 ex1.o -Wl,-rpath,/Users/dom/petsc/arch-darwin-c-debug/lib > -L/Users/dom/petsc/arch-darwin-c-debug/lib -lpetsc > -Wl,-rpath,/Users/dom/petsc/arch-darwin-c-debug/lib -lflapack -lfblas > -Wl,-rpath,/opt/X11/lib -L/opt/X11/lib -lX11 -Wl,-rpath,/usr/local/lib > -L/usr/local/lib -lhwloc > -Wl,-rpath,/Applications/Xcode.app/Contents/Developer/Toolchains/XcodeDefault.xctoolchain/usr/lib/clang/7.3.0/lib/darwin > -L/Applications/Xcode.app/Contents/Developer/Toolchains/XcodeDefault.xctoolchain/usr/lib/clang/7.3.0/lib/darwin > -lmpifort -lgfortran > -Wl,-rpath,/usr/local/Cellar/gcc/5.3.0/lib/gcc/5/gcc/x86_64-apple-darwin15.4.0/5.3.0 > -L/usr/local/Cellar/gcc/5.3.0/lib/gcc/5/gcc/x86_64-apple-darwin15.4.0/5.3.0 > -Wl,-rpath,/usr/local/Cellar/gcc/5.3.0/lib/gcc/5 > -L/usr/local/Cellar/gcc/5.3.0/lib/gcc/5 > -Wl,-rpath,/Applications/Xcode.app/Contents/Developer/Platforms/MacOSX.platform/Developer/SDKs/MacOSX10.11.sdk/usr/lib > -L/Applications/Xcode.app/Contents/Developer/Platforms/MacOSX.platform/Developer/SDKs/MacOSX10.11.sdk/usr/lib > -lgfortran -lgcc_ext.10.5 -lquadmath -lm -lclang_rt.osx -lmpicxx -lc++ > -Wl,-rpath,/Applications/Xcode.app/Contents/Developer/Toolchains/XcodeDefault.xctoolchain/usr/bin/../lib/clang/7.3.0/lib/darwin > -L/Applications/Xcode.app/Contents/Developer/Toolchains/XcodeDefault.xctoolchain/usr/bin/../lib/clang/7.3.0/lib/darwin > -lclang_rt.osx -Wl,-rpath,/Users/dom/petsc/arch-darwin-c-debug/lib > -L/Users/dom/petsc/arch-darwin-c-debug/lib -ldl -lmpi -lpmpi -lSystem > -Wl,-rpath,/Applications/Xcode.app/Contents/Developer/Toolchains/XcodeDefault.xctoolchain/usr/bin/../lib/clang/7.3.0/lib/darwin > -L/Applications/Xcode.app/Contents/Developer/Toolchains/XcodeDefault.xctoolchain/usr/bin/../lib/clang/7.3.0/lib/darwin > -lclang_rt.osx -ldl > /bin/rm -f ex1.o From dominic at steinitz.org Sun May 15 11:34:37 2016 From: dominic at steinitz.org (Dominic Steinitz) Date: Sun, 15 May 2016 17:34:37 +0100 Subject: [petsc-users] Problem Running Example In-Reply-To: <572FACE7.6090704@steinitz.org> References: <572EF5A3.9080501@steinitz.org> <572FACE7.6090704@steinitz.org> 
Message-ID: <5738A51D.9060805@steinitz.org> I am trying to run the advection-diffusion example but getting an error. Can anyone tell me what I have done wrong? > ~/petsc/src/ts/examples/tests (git)-[master] % > /Users/dom/petsc/arch-darwin-c-debug/bin/mpiexec -n 1 ex3 > [proxy:0:0 at Dominics-MacBook-Pro.local] > HYDU_create_process (utils/launch/launch.c:75): > execvp error on file ex3 (No such file or directory) Many thanks Dominic Steinitz dominic at steinitz.org http://idontgetoutmuch.wordpress.com From dominic at steinitz.org Mon May 16 08:31:17 2016 From: dominic at steinitz.org (Dominic Steinitz) Date: Mon, 16 May 2016 06:31:17 -0700 Subject: [petsc-users] Problem Running Example In-Reply-To: References: <572EF5A3.9080501@steinitz.org> <572FACE7.6090704@steinitz.org> <5738A51D.9060805@steinitz.org> <5739B353.7050407@steinitz.org> Message-ID: <5739CBA5.9030705@steinitz.org> Thanks - I downloaded the maint branch (so I am 11 hours out of date!) and built that - the advection diffusion example now runs. Somewhat annoyingly, it doesn't seem to produce any output. Time to add a few printf statements I guess. Dominic. On 16/05/2016 06:24, Matthew Knepley wrote: > On Mon, May 16, 2016 at 6:47 AM, Dominic Steinitz > > wrote: > > On 15/05/2016 17:34, Dominic Steinitz wrote: > > HYDU_create_process (utils/launch/launch.c:75): > > > Sorry for the noise. I understand why I got the above error > message but now I get a different one where previously the example > used to work. I have no idea what I have done differently. > > ~/petsc/src/ksp/ksp/examples/tutorials $ > /Users/dom/petsc/arch-darwin-c-debug/bin/mpiexec -n 1 ./ex1 > [0]PETSC ERROR: --------------------- Error Message > -------------------------------------------------------------- > [0]PETSC ERROR: Null argument, when expecting valid pointer > [0]PETSC ERROR: Null Pointer: Parameter # 2 > [0]PETSC ERROR: See > http://www.mcs.anl.gov/petsc/documentation/faq.html for > trouble shooting. > [0]PETSC ERROR: Petsc Release Version 3.6.3, unknown > [0]PETSC ERROR: ./ex1 on a arch-darwin-c-debug named > Dominics-MacBook-Pro.local by dom Mon May 16 12:40:21 2016 > [0]PETSC ERROR: Configure options --with-cc=gcc --with-cxx=g++ > --with-fc=gfortran --download-fblaslapack --download-mpich > [0]PETSC ERROR: #1 PetscOptionsGetInt() line 1480 in > /Users/dom/petsc/src/sys/objects/options.c > [0]PETSC ERROR: #2 main() line 39 in > /Users/dom/petsc/src/ksp/ksp/examples/tutorials/ex1.c > [0]PETSC ERROR: No PETSc Option Table entries > [0]PETSC ERROR: ----------------End of Error Message > -------send entire error message to > petsc-maint at mcs.anl.gov---------- > > > The argument list for PetscOptionsGetInt() changed in this release (so > we could have reentrant option functions). It added an > additional argument at the beginning. I am guessing there is a > mismatch here between the version from which ex1.c comes > and the version you are linking. 
> > Matt > > Here's how I built the example > > ~/petsc/src/ksp/ksp/examples/tutorials $ make clean > ~/petsc/src/ksp/ksp/examples/tutorials $ make ex1 > /Users/dom/petsc/arch-darwin-c-debug/bin/mpicc -o ex1.o -c > -fPIC -Wall -Wwrite-strings -Wno-strict-aliasing > -Wno-unknown-pragmas -g3 -O0 -I/Users/dom/petsc/include > -I/Users/dom/petsc/arch-darwin-c-debug/include > -I/opt/X11/include -I/usr/local/include `pwd`/ex1.c > /Users/dom/petsc/arch-darwin-c-debug/bin/mpicc > -Wl,-multiply_defined,suppress -Wl,-multiply_defined > -Wl,suppress -Wl,-commons,use_dylibs -Wl,-search_paths_first > -Wl,-multiply_defined,suppress -Wl,-multiply_defined > -Wl,suppress -Wl,-commons,use_dylibs -Wl,-search_paths_first > -fPIC -Wall -Wwrite-strings -Wno-strict-aliasing > -Wno-unknown-pragmas -g3 -O0 -o ex1 ex1.o > -Wl,-rpath,/Users/dom/petsc/arch-darwin-c-debug/lib > -L/Users/dom/petsc/arch-darwin-c-debug/lib -lpetsc > -Wl,-rpath,/Users/dom/petsc/arch-darwin-c-debug/lib -lflapack > -lfblas -Wl,-rpath,/opt/X11/lib -L/opt/X11/lib -lX11 > -Wl,-rpath,/usr/local/lib -L/usr/local/lib -lhwloc > -Wl,-rpath,/Applications/Xcode.app/Contents/Developer/Toolchains/XcodeDefault.xctoolchain/usr/lib/clang/7.3.0/lib/darwin > -L/Applications/Xcode.app/Contents/Developer/Toolchains/XcodeDefault.xctoolchain/usr/lib/clang/7.3.0/lib/darwin > -lmpifort -lgfortran > -Wl,-rpath,/usr/local/Cellar/gcc/5.3.0/lib/gcc/5/gcc/x86_64-apple-darwin15.4.0/5.3.0 > -L/usr/local/Cellar/gcc/5.3.0/lib/gcc/5/gcc/x86_64-apple-darwin15.4.0/5.3.0 > -Wl,-rpath,/usr/local/Cellar/gcc/5.3.0/lib/gcc/5 > -L/usr/local/Cellar/gcc/5.3.0/lib/gcc/5 > -Wl,-rpath,/Applications/Xcode.app/Contents/Developer/Platforms/MacOSX.platform/Developer/SDKs/MacOSX10.11.sdk/usr/lib > -L/Applications/Xcode.app/Contents/Developer/Platforms/MacOSX.platform/Developer/SDKs/MacOSX10.11.sdk/usr/lib > -lgfortran -lgcc_ext.10.5 -lquadmath -lm -lclang_rt.osx > -lmpicxx -lc++ > -Wl,-rpath,/Applications/Xcode.app/Contents/Developer/Toolchains/XcodeDefault.xctoolchain/usr/bin/../lib/clang/7.3.0/lib/darwin > -L/Applications/Xcode.app/Contents/Developer/Toolchains/XcodeDefault.xctoolchain/usr/bin/../lib/clang/7.3.0/lib/darwin > -lclang_rt.osx > -Wl,-rpath,/Users/dom/petsc/arch-darwin-c-debug/lib > -L/Users/dom/petsc/arch-darwin-c-debug/lib -ldl -lmpi -lpmpi > -lSystem > -Wl,-rpath,/Applications/Xcode.app/Contents/Developer/Toolchains/XcodeDefault.xctoolchain/usr/bin/../lib/clang/7.3.0/lib/darwin > -L/Applications/Xcode.app/Contents/Developer/Toolchains/XcodeDefault.xctoolchain/usr/bin/../lib/clang/7.3.0/lib/darwin > -lclang_rt.osx -ldl > /bin/rm -f ex1.o > > > > > > > > -- > What most experimenters take for granted before they begin their > experiments is infinitely more interesting than any results to which > their experiments lead. > -- Norbert Wiener -------------- next part -------------- An HTML attachment was scrubbed... URL: From knepley at gmail.com Mon May 16 08:24:31 2016 From: knepley at gmail.com (Matthew Knepley) Date: Mon, 16 May 2016 08:24:31 -0500 Subject: [petsc-users] Problem Running Example In-Reply-To: <5739B353.7050407@steinitz.org> References: <572EF5A3.9080501@steinitz.org> <572FACE7.6090704@steinitz.org> <5738A51D.9060805@steinitz.org> <5739B353.7050407@steinitz.org> Message-ID: On Mon, May 16, 2016 at 6:47 AM, Dominic Steinitz wrote: > On 15/05/2016 17:34, Dominic Steinitz wrote: > >> HYDU_create_process (utils/launch/launch.c:75): >> > > Sorry for the noise. 
I understand why I got the above error message but > now I get a different one where previously the example used to work. I have > no idea what I have done differently. > > ~/petsc/src/ksp/ksp/examples/tutorials $ >> /Users/dom/petsc/arch-darwin-c-debug/bin/mpiexec -n 1 ./ex1 >> [0]PETSC ERROR: --------------------- Error Message >> -------------------------------------------------------------- >> [0]PETSC ERROR: Null argument, when expecting valid pointer >> [0]PETSC ERROR: Null Pointer: Parameter # 2 >> [0]PETSC ERROR: See http://www.mcs.anl.gov/petsc/documentation/faq.html >> for trouble shooting. >> [0]PETSC ERROR: Petsc Release Version 3.6.3, unknown >> [0]PETSC ERROR: ./ex1 on a arch-darwin-c-debug named >> Dominics-MacBook-Pro.local by dom Mon May 16 12:40:21 2016 >> [0]PETSC ERROR: Configure options --with-cc=gcc --with-cxx=g++ >> --with-fc=gfortran --download-fblaslapack --download-mpich >> [0]PETSC ERROR: #1 PetscOptionsGetInt() line 1480 in >> /Users/dom/petsc/src/sys/objects/options.c >> [0]PETSC ERROR: #2 main() line 39 in >> /Users/dom/petsc/src/ksp/ksp/examples/tutorials/ex1.c >> [0]PETSC ERROR: No PETSc Option Table entries >> [0]PETSC ERROR: ----------------End of Error Message -------send entire >> error message to petsc-maint at mcs.anl.gov---------- > > The argument list for PetscOptionsGetInt() changed in this release (so we could have reentrant option functions). It added an additional argument at the beginning. I am guessing there is a mismatch here between the version from which ex1.c comes and the version you are linking. Matt > Here's how I built the example > > ~/petsc/src/ksp/ksp/examples/tutorials $ make clean >> ~/petsc/src/ksp/ksp/examples/tutorials $ make ex1 >> /Users/dom/petsc/arch-darwin-c-debug/bin/mpicc -o ex1.o -c -fPIC -Wall >> -Wwrite-strings -Wno-strict-aliasing -Wno-unknown-pragmas -g3 -O0 >> -I/Users/dom/petsc/include -I/Users/dom/petsc/arch-darwin-c-debug/include >> -I/opt/X11/include -I/usr/local/include `pwd`/ex1.c >> /Users/dom/petsc/arch-darwin-c-debug/bin/mpicc >> -Wl,-multiply_defined,suppress -Wl,-multiply_defined -Wl,suppress >> -Wl,-commons,use_dylibs -Wl,-search_paths_first >> -Wl,-multiply_defined,suppress -Wl,-multiply_defined -Wl,suppress >> -Wl,-commons,use_dylibs -Wl,-search_paths_first -fPIC -Wall >> -Wwrite-strings -Wno-strict-aliasing -Wno-unknown-pragmas -g3 -O0 -o ex1 >> ex1.o -Wl,-rpath,/Users/dom/petsc/arch-darwin-c-debug/lib >> -L/Users/dom/petsc/arch-darwin-c-debug/lib -lpetsc >> -Wl,-rpath,/Users/dom/petsc/arch-darwin-c-debug/lib -lflapack -lfblas >> -Wl,-rpath,/opt/X11/lib -L/opt/X11/lib -lX11 -Wl,-rpath,/usr/local/lib >> -L/usr/local/lib -lhwloc >> -Wl,-rpath,/Applications/Xcode.app/Contents/Developer/Toolchains/XcodeDefault.xctoolchain/usr/lib/clang/7.3.0/lib/darwin >> -L/Applications/Xcode.app/Contents/Developer/Toolchains/XcodeDefault.xctoolchain/usr/lib/clang/7.3.0/lib/darwin >> -lmpifort -lgfortran >> -Wl,-rpath,/usr/local/Cellar/gcc/5.3.0/lib/gcc/5/gcc/x86_64-apple-darwin15.4.0/5.3.0 >> -L/usr/local/Cellar/gcc/5.3.0/lib/gcc/5/gcc/x86_64-apple-darwin15.4.0/5.3.0 >> -Wl,-rpath,/usr/local/Cellar/gcc/5.3.0/lib/gcc/5 >> -L/usr/local/Cellar/gcc/5.3.0/lib/gcc/5 >> -Wl,-rpath,/Applications/Xcode.app/Contents/Developer/Platforms/MacOSX.platform/Developer/SDKs/MacOSX10.11.sdk/usr/lib >> -L/Applications/Xcode.app/Contents/Developer/Platforms/MacOSX.platform/Developer/SDKs/MacOSX10.11.sdk/usr/lib >> -lgfortran -lgcc_ext.10.5 -lquadmath -lm -lclang_rt.osx -lmpicxx -lc++ >> 
-Wl,-rpath,/Applications/Xcode.app/Contents/Developer/Toolchains/XcodeDefault.xctoolchain/usr/bin/../lib/clang/7.3.0/lib/darwin >> -L/Applications/Xcode.app/Contents/Developer/Toolchains/XcodeDefault.xctoolchain/usr/bin/../lib/clang/7.3.0/lib/darwin >> -lclang_rt.osx -Wl,-rpath,/Users/dom/petsc/arch-darwin-c-debug/lib >> -L/Users/dom/petsc/arch-darwin-c-debug/lib -ldl -lmpi -lpmpi -lSystem >> -Wl,-rpath,/Applications/Xcode.app/Contents/Developer/Toolchains/XcodeDefault.xctoolchain/usr/bin/../lib/clang/7.3.0/lib/darwin >> -L/Applications/Xcode.app/Contents/Developer/Toolchains/XcodeDefault.xctoolchain/usr/bin/../lib/clang/7.3.0/lib/darwin >> -lclang_rt.osx -ldl >> /bin/rm -f ex1.o >> > > > > -- What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. -- Norbert Wiener -------------- next part -------------- An HTML attachment was scrubbed... URL: From knepley at gmail.com Mon May 16 09:23:13 2016 From: knepley at gmail.com (Matthew Knepley) Date: Mon, 16 May 2016 09:23:13 -0500 Subject: [petsc-users] Problem Running Example In-Reply-To: <5739CBA5.9030705@steinitz.org> References: <572EF5A3.9080501@steinitz.org> <572FACE7.6090704@steinitz.org> <5738A51D.9060805@steinitz.org> <5739B353.7050407@steinitz.org> <5739CBA5.9030705@steinitz.org> Message-ID: On Mon, May 16, 2016 at 8:31 AM, Dominic Steinitz wrote: > Thanks - I downloaded the maint branch (so I am 11 hours out of date!) and > built that - the advection diffusion example now runs. Somewhat annoyingly, > it doesn't seem to produce any output. Time to add a few printf statements > I guess. > By default, PETSc produces no output since printing on a supercomputer can take more time than your run. If you want output, I suggest using the monitors and viewers: -ksp_monitor -ksp_view -log_summary Thanks, Matt > Dominic. > > On 16/05/2016 06:24, Matthew Knepley wrote: > > On Mon, May 16, 2016 at 6:47 AM, Dominic Steinitz < > dominic at steinitz.org> wrote: > >> On 15/05/2016 17:34, Dominic Steinitz wrote: >> >>> HYDU_create_process (utils/launch/launch.c:75): >>> >> >> Sorry for the noise. I understand why I got the above error message but >> now I get a different one where previously the example used to work. I have >> no idea what I have done differently. >> >> ~/petsc/src/ksp/ksp/examples/tutorials $ >>> /Users/dom/petsc/arch-darwin-c-debug/bin/mpiexec -n 1 ./ex1 >>> [0]PETSC ERROR: --------------------- Error Message >>> -------------------------------------------------------------- >>> [0]PETSC ERROR: Null argument, when expecting valid pointer >>> [0]PETSC ERROR: Null Pointer: Parameter # 2 >>> [0]PETSC ERROR: See http://www.mcs.anl.gov/petsc/documentation/faq.html >>> for trouble shooting. 
>>> [0]PETSC ERROR: Petsc Release Version 3.6.3, unknown >>> [0]PETSC ERROR: ./ex1 on a arch-darwin-c-debug named >>> Dominics-MacBook-Pro.local by dom Mon May 16 12:40:21 2016 >>> [0]PETSC ERROR: Configure options --with-cc=gcc --with-cxx=g++ >>> --with-fc=gfortran --download-fblaslapack --download-mpich >>> [0]PETSC ERROR: #1 PetscOptionsGetInt() line 1480 in >>> /Users/dom/petsc/src/sys/objects/options.c >>> [0]PETSC ERROR: #2 main() line 39 in >>> /Users/dom/petsc/src/ksp/ksp/examples/tutorials/ex1.c >>> [0]PETSC ERROR: No PETSc Option Table entries >>> [0]PETSC ERROR: ----------------End of Error Message -------send entire >>> error message to petsc-maint at mcs.anl.gov---------- >> >> > The argument list for PetscOptionsGetInt() changed in this release (so we > could have reentrant option functions). It added an > additional argument at the beginning. I am guessing there is a mismatch > here between the version from which ex1.c comes > and the version you are linking. > > Matt > > >> Here's how I built the example >> >> ~/petsc/src/ksp/ksp/examples/tutorials $ make clean >>> ~/petsc/src/ksp/ksp/examples/tutorials $ make ex1 >>> /Users/dom/petsc/arch-darwin-c-debug/bin/mpicc -o ex1.o -c -fPIC -Wall >>> -Wwrite-strings -Wno-strict-aliasing -Wno-unknown-pragmas -g3 -O0 >>> -I/Users/dom/petsc/include -I/Users/dom/petsc/arch-darwin-c-debug/include >>> -I/opt/X11/include -I/usr/local/include `pwd`/ex1.c >>> /Users/dom/petsc/arch-darwin-c-debug/bin/mpicc >>> -Wl,-multiply_defined,suppress -Wl,-multiply_defined -Wl,suppress >>> -Wl,-commons,use_dylibs -Wl,-search_paths_first >>> -Wl,-multiply_defined,suppress -Wl,-multiply_defined -Wl,suppress >>> -Wl,-commons,use_dylibs -Wl,-search_paths_first -fPIC -Wall >>> -Wwrite-strings -Wno-strict-aliasing -Wno-unknown-pragmas -g3 -O0 -o ex1 >>> ex1.o -Wl,-rpath,/Users/dom/petsc/arch-darwin-c-debug/lib >>> -L/Users/dom/petsc/arch-darwin-c-debug/lib -lpetsc >>> -Wl,-rpath,/Users/dom/petsc/arch-darwin-c-debug/lib -lflapack -lfblas >>> -Wl,-rpath,/opt/X11/lib -L/opt/X11/lib -lX11 -Wl,-rpath,/usr/local/lib >>> -L/usr/local/lib -lhwloc >>> -Wl,-rpath,/Applications/Xcode.app/Contents/Developer/Toolchains/XcodeDefault.xctoolchain/usr/lib/clang/7.3.0/lib/darwin >>> -L/Applications/Xcode.app/Contents/Developer/Toolchains/XcodeDefault.xctoolchain/usr/lib/clang/7.3.0/lib/darwin >>> -lmpifort -lgfortran >>> -Wl,-rpath,/usr/local/Cellar/gcc/5.3.0/lib/gcc/5/gcc/x86_64-apple-darwin15.4.0/5.3.0 >>> -L/usr/local/Cellar/gcc/5.3.0/lib/gcc/5/gcc/x86_64-apple-darwin15.4.0/5.3.0 >>> -Wl,-rpath,/usr/local/Cellar/gcc/5.3.0/lib/gcc/5 >>> -L/usr/local/Cellar/gcc/5.3.0/lib/gcc/5 >>> -Wl,-rpath,/Applications/Xcode.app/Contents/Developer/Platforms/MacOSX.platform/Developer/SDKs/MacOSX10.11.sdk/usr/lib >>> -L/Applications/Xcode.app/Contents/Developer/Platforms/MacOSX.platform/Developer/SDKs/MacOSX10.11.sdk/usr/lib >>> -lgfortran -lgcc_ext.10.5 -lquadmath -lm -lclang_rt.osx -lmpicxx -lc++ >>> -Wl,-rpath,/Applications/Xcode.app/Contents/Developer/Toolchains/XcodeDefault.xctoolchain/usr/bin/../lib/clang/7.3.0/lib/darwin >>> -L/Applications/Xcode.app/Contents/Developer/Toolchains/XcodeDefault.xctoolchain/usr/bin/../lib/clang/7.3.0/lib/darwin >>> -lclang_rt.osx -Wl,-rpath,/Users/dom/petsc/arch-darwin-c-debug/lib >>> -L/Users/dom/petsc/arch-darwin-c-debug/lib -ldl -lmpi -lpmpi -lSystem >>> -Wl,-rpath,/Applications/Xcode.app/Contents/Developer/Toolchains/XcodeDefault.xctoolchain/usr/bin/../lib/clang/7.3.0/lib/darwin >>> 
-L/Applications/Xcode.app/Contents/Developer/Toolchains/XcodeDefault.xctoolchain/usr/bin/../lib/clang/7.3.0/lib/darwin >>> -lclang_rt.osx -ldl >>> /bin/rm -f ex1.o >>> >> >> >> >> > > > -- > What most experimenters take for granted before they begin their > experiments is infinitely more interesting than any results to which their > experiments lead. > -- Norbert Wiener > > > -- What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. -- Norbert Wiener -------------- next part -------------- An HTML attachment was scrubbed... URL: From juhaj at iki.fi Mon May 16 22:21:53 2016 From: juhaj at iki.fi (Juha Jaykka) Date: Tue, 17 May 2016 04:21:53 +0100 Subject: [petsc-users] petsc4py problem Message-ID: <1931875.DcQZnQGkct@vega> Dear list, I cannot get petsc4py 3.7 to work. I have petsc compiled with a bunch of options to suit my needs (sorry for the un-escaped line breaks): --COPTFLAGS="-Ofast " --CXXOPTFLAGS="-Ofast " --CXX_LINKER_FLAGS="-Wl,--no-as- needed -lgfortran" --FOPTFLAGS="-Ofast " --LDFLAGS="-lgfortran -g" --LIBS=- lgfortran --download-chombo=yes --download-sprng=no -- prefix=/home/juhaj/progs+3.7/petsc --with-64-bit-indices --with-afterimage- include=/usr/include/libAfterImage --with-afterimage-lib=/usr/lib/x86_64- linux-gnu/libAfterImage.so --with-afterimage=1 --with-blas-lapack- include=/usr/include/openblas --with-blas-lib=/usr/lib/openblas- base/libblas.so --with-boost=1 --with-c-support --with-clanguage=C++ --with- debugging=0 --with-external-packages-dir=/home/juhaj/progs+3.7/automatic- downloads --with-fftw-include=/usr/include --with-fftw-lib="[/usr/lib/x86_64- linux-gnu/libfftw3.so,/usr/lib/x86_64-linux-gnu/libfftw3_mpi.so]" --with- fftw=1 --with-fortran-interfaces=1 --with-gnu-compilers=1 --with-hdf5- include=/usr/include/hdf5/openmpi --with-hdf5-lib=/usr/lib/x86_64-linux- gnu/libhdf5_openmpi.so --with-hdf5=1 --with-hwloc=1 --with-hypre- include=/usr/include --with-hypre-lib=/usr/lib/libHYPRE_IJ_mv-2.8.0b.so -- with-hypre=1 --with-lapack-lib=/usr/lib/openblas-base/liblapack.so --with- log=0 --with-memalign=64 --with-metis=0 --with-mpi-shared=1 --with-mpi=1 -- with-mumps=0 --with-netcdf-include=/usr/include/ --with-netcdf- lib=/usr/lib/libnetcdf.so --with-netcdf=1 --with-numpy=1 --with-openmp=1 -- with-parmetis=0 --with-pic=1 --with-scalapack-include=/usr/include --with- scalapack-lib=/usr/lib/libscalapack-openmpi.so --with-scalapack=1 --with- scalar-type=real --with-shared-libraries --with-spooles- include=/usr/include/spooles --with-spooles-lib=/usr/lib/libspooles.so --with- spooles=1 --with-suitesparse=0 --with-tetgen=0 --with-triangle=0 --with- valgrind=1 and after installing petsc, pip install petsc4py fails with http://paste.debian.net/686482/ I don't really know where to start looking for the problem. Any idea how to fix this? Cheers, Juha From balay at mcs.anl.gov Mon May 16 22:45:14 2016 From: balay at mcs.anl.gov (Satish Balay) Date: Mon, 16 May 2016 22:45:14 -0500 Subject: [petsc-users] petsc4py problem In-Reply-To: <1931875.DcQZnQGkct@vega> References: <1931875.DcQZnQGkct@vega> Message-ID: Looks like there is an issue with petsc4py and --with-log=0 . Try rebuilding without this option. Are you sure you really need petsc with these options? --with-numpy=1 --with-boost=1 --with-c-support --with-clanguage=C++ --with-memalign=64 etc don't really make sense to me for a general use install.. 
And most externalpackages are not tested with --with-64-bit-indices Satish On Mon, 16 May 2016, Juha Jaykka wrote: > Dear list, > > I cannot get petsc4py 3.7 to work. I have petsc compiled with a bunch of > options to suit my needs (sorry for the un-escaped line breaks): > > --COPTFLAGS="-Ofast " --CXXOPTFLAGS="-Ofast " --CXX_LINKER_FLAGS="-Wl,--no-as- > needed -lgfortran" --FOPTFLAGS="-Ofast " --LDFLAGS="-lgfortran -g" --LIBS=- > lgfortran --download-chombo=yes --download-sprng=no -- > prefix=/home/juhaj/progs+3.7/petsc --with-64-bit-indices --with-afterimage- > include=/usr/include/libAfterImage --with-afterimage-lib=/usr/lib/x86_64- > linux-gnu/libAfterImage.so --with-afterimage=1 --with-blas-lapack- > include=/usr/include/openblas --with-blas-lib=/usr/lib/openblas- > base/libblas.so --with-boost=1 --with-c-support --with-clanguage=C++ --with- > debugging=0 --with-external-packages-dir=/home/juhaj/progs+3.7/automatic- > downloads --with-fftw-include=/usr/include --with-fftw-lib="[/usr/lib/x86_64- > linux-gnu/libfftw3.so,/usr/lib/x86_64-linux-gnu/libfftw3_mpi.so]" --with- > fftw=1 --with-fortran-interfaces=1 --with-gnu-compilers=1 --with-hdf5- > include=/usr/include/hdf5/openmpi --with-hdf5-lib=/usr/lib/x86_64-linux- > gnu/libhdf5_openmpi.so --with-hdf5=1 --with-hwloc=1 --with-hypre- > include=/usr/include --with-hypre-lib=/usr/lib/libHYPRE_IJ_mv-2.8.0b.so -- > with-hypre=1 --with-lapack-lib=/usr/lib/openblas-base/liblapack.so --with- > log=0 --with-memalign=64 --with-metis=0 --with-mpi-shared=1 --with-mpi=1 -- > with-mumps=0 --with-netcdf-include=/usr/include/ --with-netcdf- > lib=/usr/lib/libnetcdf.so --with-netcdf=1 --with-numpy=1 --with-openmp=1 -- > with-parmetis=0 --with-pic=1 --with-scalapack-include=/usr/include --with- > scalapack-lib=/usr/lib/libscalapack-openmpi.so --with-scalapack=1 --with- > scalar-type=real --with-shared-libraries --with-spooles- > include=/usr/include/spooles --with-spooles-lib=/usr/lib/libspooles.so --with- > spooles=1 --with-suitesparse=0 --with-tetgen=0 --with-triangle=0 --with- > valgrind=1 > > and after installing petsc, pip install petsc4py fails with > > http://paste.debian.net/686482/ > > I don't really know where to start looking for the problem. Any idea how to > fix this? > > Cheers, > Juha > > From juhaj at iki.fi Mon May 16 22:54:37 2016 From: juhaj at iki.fi (Juha Jaykka) Date: Tue, 17 May 2016 04:54:37 +0100 Subject: [petsc-users] petsc4py problem In-Reply-To: <1931875.DcQZnQGkct@vega> References: <1931875.DcQZnQGkct@vega> Message-ID: <1986743.at5EmK5uWl@vega> Sorted my problem, sorry for the noise. For those interested > --with-log=0 is not compatible with petsc4py. Cheers, Juha From juhaj at iki.fi Mon May 16 23:11:00 2016 From: juhaj at iki.fi (Juha Jaykka) Date: Tue, 17 May 2016 05:11 +0100 Subject: [petsc-users] petsc4py problem In-Reply-To: References: <1931875.DcQZnQGkct@vega> Message-ID: <1799393.fMCANx67e6@vega> > Looks like there is an issue with petsc4py and --with-log=0 . Try rebuilding Yea, noticed that, too. > --with-numpy=1 --with-boost=1 --with-c-support --with-clanguage=C++ > --with-memalign=64 etc don't really make sense to me for a general use > install.. I'm not sure I know what you mean by "general use" but this is for my use, so I'd guess not what you'd mean by "general". Anyway, numpy and boost I can probably live without, C++ is necessary because of Chombo (it might be possible to tweak things to work without it); c-support I don't know: I thought there wouldn't be C bindings otherwise, just C++? 
64-bit alignment is something I'm testing, but will probably be a good idea on Knights Landing. Cheers, Juha > > And most externalpackages are not tested with --with-64-bit-indices > > Satish > > On Mon, 16 May 2016, Juha Jaykka wrote: > > Dear list, > > > > I cannot get petsc4py 3.7 to work. I have petsc compiled with a bunch of > > options to suit my needs (sorry for the un-escaped line breaks): > > > > --COPTFLAGS="-Ofast " --CXXOPTFLAGS="-Ofast " > > --CXX_LINKER_FLAGS="-Wl,--no-as- needed -lgfortran" --FOPTFLAGS="-Ofast " > > --LDFLAGS="-lgfortran -g" --LIBS=- lgfortran --download-chombo=yes > > --download-sprng=no -- > > prefix=/home/juhaj/progs+3.7/petsc --with-64-bit-indices > > --with-afterimage- > > include=/usr/include/libAfterImage --with-afterimage-lib=/usr/lib/x86_64- > > linux-gnu/libAfterImage.so --with-afterimage=1 --with-blas-lapack- > > include=/usr/include/openblas --with-blas-lib=/usr/lib/openblas- > > base/libblas.so --with-boost=1 --with-c-support --with-clanguage=C++ > > --with- debugging=0 > > --with-external-packages-dir=/home/juhaj/progs+3.7/automatic- downloads > > --with-fftw-include=/usr/include --with-fftw-lib="[/usr/lib/x86_64- > > linux-gnu/libfftw3.so,/usr/lib/x86_64-linux-gnu/libfftw3_mpi.so]" --with- > > fftw=1 --with-fortran-interfaces=1 --with-gnu-compilers=1 --with-hdf5- > > include=/usr/include/hdf5/openmpi --with-hdf5-lib=/usr/lib/x86_64-linux- > > gnu/libhdf5_openmpi.so --with-hdf5=1 --with-hwloc=1 --with-hypre- > > include=/usr/include --with-hypre-lib=/usr/lib/libHYPRE_IJ_mv-2.8.0b.so -- > > with-hypre=1 --with-lapack-lib=/usr/lib/openblas-base/liblapack.so --with- > > log=0 --with-memalign=64 --with-metis=0 --with-mpi-shared=1 --with-mpi=1 > > -- > > with-mumps=0 --with-netcdf-include=/usr/include/ --with-netcdf- > > lib=/usr/lib/libnetcdf.so --with-netcdf=1 --with-numpy=1 --with-openmp=1 > > -- > > with-parmetis=0 --with-pic=1 --with-scalapack-include=/usr/include --with- > > scalapack-lib=/usr/lib/libscalapack-openmpi.so --with-scalapack=1 --with- > > scalar-type=real --with-shared-libraries --with-spooles- > > include=/usr/include/spooles --with-spooles-lib=/usr/lib/libspooles.so > > --with- spooles=1 --with-suitesparse=0 --with-tetgen=0 --with-triangle=0 > > --with- valgrind=1 > > > > and after installing petsc, pip install petsc4py fails with > > > > http://paste.debian.net/686482/ > > > > I don't really know where to start looking for the problem. Any idea how > > to > > fix this? > > > > Cheers, > > Juha From bsmith at mcs.anl.gov Mon May 16 23:13:15 2016 From: bsmith at mcs.anl.gov (Barry Smith) Date: Mon, 16 May 2016 23:13:15 -0500 Subject: [petsc-users] petsc4py problem In-Reply-To: <1799393.fMCANx67e6@vega> References: <1931875.DcQZnQGkct@vega> <1799393.fMCANx67e6@vega> Message-ID: You should be able to build PETSc with C and use it from Chombo even though Chombo is built in C++. You shouldn't need --with-c-support --with-clanguage=C++ Barry > On May 16, 2016, at 11:11 PM, Juha Jaykka wrote: > >> Looks like there is an issue with petsc4py and --with-log=0 . Try rebuilding > > Yea, noticed that, too. > >> --with-numpy=1 --with-boost=1 --with-c-support --with-clanguage=C++ >> --with-memalign=64 etc don't really make sense to me for a general use >> install.. > > I'm not sure I know what you mean by "general use" but this is for my use, so > I'd guess not what you'd mean by "general". 
> > Anyway, numpy and boost I can probably live without, C++ is necessary because > of Chombo (it might be possible to tweak things to work without it); c-support > I don't know: I thought there wouldn't be C bindings otherwise, just C++? > > 64-bit alignment is something I'm testing, but will probably be a good idea on > Knights Landing. > > Cheers, > Juha > >> >> And most externalpackages are not tested with --with-64-bit-indices >> >> Satish >> >> On Mon, 16 May 2016, Juha Jaykka wrote: >>> Dear list, >>> >>> I cannot get petsc4py 3.7 to work. I have petsc compiled with a bunch of >>> options to suit my needs (sorry for the un-escaped line breaks): >>> >>> --COPTFLAGS="-Ofast " --CXXOPTFLAGS="-Ofast " >>> --CXX_LINKER_FLAGS="-Wl,--no-as- needed -lgfortran" --FOPTFLAGS="-Ofast " >>> --LDFLAGS="-lgfortran -g" --LIBS=- lgfortran --download-chombo=yes >>> --download-sprng=no -- >>> prefix=/home/juhaj/progs+3.7/petsc --with-64-bit-indices >>> --with-afterimage- >>> include=/usr/include/libAfterImage --with-afterimage-lib=/usr/lib/x86_64- >>> linux-gnu/libAfterImage.so --with-afterimage=1 --with-blas-lapack- >>> include=/usr/include/openblas --with-blas-lib=/usr/lib/openblas- >>> base/libblas.so --with-boost=1 --with-c-support --with-clanguage=C++ >>> --with- debugging=0 >>> --with-external-packages-dir=/home/juhaj/progs+3.7/automatic- downloads >>> --with-fftw-include=/usr/include --with-fftw-lib="[/usr/lib/x86_64- >>> linux-gnu/libfftw3.so,/usr/lib/x86_64-linux-gnu/libfftw3_mpi.so]" --with- >>> fftw=1 --with-fortran-interfaces=1 --with-gnu-compilers=1 --with-hdf5- >>> include=/usr/include/hdf5/openmpi --with-hdf5-lib=/usr/lib/x86_64-linux- >>> gnu/libhdf5_openmpi.so --with-hdf5=1 --with-hwloc=1 --with-hypre- >>> include=/usr/include --with-hypre-lib=/usr/lib/libHYPRE_IJ_mv-2.8.0b.so -- >>> with-hypre=1 --with-lapack-lib=/usr/lib/openblas-base/liblapack.so --with- >>> log=0 --with-memalign=64 --with-metis=0 --with-mpi-shared=1 --with-mpi=1 >>> -- >>> with-mumps=0 --with-netcdf-include=/usr/include/ --with-netcdf- >>> lib=/usr/lib/libnetcdf.so --with-netcdf=1 --with-numpy=1 --with-openmp=1 >>> -- >>> with-parmetis=0 --with-pic=1 --with-scalapack-include=/usr/include --with- >>> scalapack-lib=/usr/lib/libscalapack-openmpi.so --with-scalapack=1 --with- >>> scalar-type=real --with-shared-libraries --with-spooles- >>> include=/usr/include/spooles --with-spooles-lib=/usr/lib/libspooles.so >>> --with- spooles=1 --with-suitesparse=0 --with-tetgen=0 --with-triangle=0 >>> --with- valgrind=1 >>> >>> and after installing petsc, pip install petsc4py fails with >>> >>> http://paste.debian.net/686482/ >>> >>> I don't really know where to start looking for the problem. Any idea how >>> to >>> fix this? >>> >>> Cheers, >>> Juha > From balay at mcs.anl.gov Mon May 16 23:27:26 2016 From: balay at mcs.anl.gov (Satish Balay) Date: Mon, 16 May 2016 23:27:26 -0500 Subject: [petsc-users] petsc4py problem In-Reply-To: References: <1931875.DcQZnQGkct@vega> <1799393.fMCANx67e6@vega> Message-ID: well if you are using numpy, boost from your code - you can always do that. '--with-numpy=1 --with-boost=1' suggests that you want petsc to use these externalpackages. But PETSc doesn't provide interface to these packages [they are provides to satisfy dependencies of some externalpackages - for eg: '--downlaod-boost --download-trilinos'] So when so many unneeded options are listed - its not clear if you really were seeking this functionality [which doesn't exist] or trying out a generic build by enabling all possible options. 
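For illustration only, a pared-down configure line for this particular use case, keeping just the options discussed in this thread (same prefix and HDF5 paths as quoted above) and dropping --with-log=0, --with-clanguage=C++/--with-c-support, --with-numpy and --with-boost, might look something like

  ./configure --prefix=/home/juhaj/progs+3.7/petsc \
      --with-debugging=0 --COPTFLAGS="-Ofast" --FOPTFLAGS="-Ofast" \
      --with-shared-libraries --download-chombo=yes \
      --with-hdf5=1 --with-hdf5-include=/usr/include/hdf5/openmpi \
      --with-hdf5-lib=/usr/lib/x86_64-linux-gnu/libhdf5_openmpi.so

This is a sketch, not a tested configuration; add back whichever of the remaining packages above you actually call from your own code.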
memalign is a property of compiler - and configure attempts to detect thisvalue . Not sure what you would get by changing this value. [other than test if something in petsc breaks with it or not - esp when it doesn't match the compiler behavior..] And you have lots of options enabled - so it wasn't clear if you actually needed them.. [like --with-log=0 --with-pic=1 LDFLAGS="-lgfortran -g" --LIBS=- lgfortran] Satish On Mon, 16 May 2016, Barry Smith wrote: > > You should be able to build PETSc with C and use it from Chombo even though Chombo is built in C++. You shouldn't need --with-c-support --with-clanguage=C++ > > Barry > > > On May 16, 2016, at 11:11 PM, Juha Jaykka wrote: > > > >> Looks like there is an issue with petsc4py and --with-log=0 . Try rebuilding > > > > Yea, noticed that, too. > > > >> --with-numpy=1 --with-boost=1 --with-c-support --with-clanguage=C++ > >> --with-memalign=64 etc don't really make sense to me for a general use > >> install.. > > > > I'm not sure I know what you mean by "general use" but this is for my use, so > > I'd guess not what you'd mean by "general". > > > > Anyway, numpy and boost I can probably live without, C++ is necessary because > > of Chombo (it might be possible to tweak things to work without it); c-support > > I don't know: I thought there wouldn't be C bindings otherwise, just C++? > > > > 64-bit alignment is something I'm testing, but will probably be a good idea on > > Knights Landing. > > > > Cheers, > > Juha > > > >> > >> And most externalpackages are not tested with --with-64-bit-indices > >> > >> Satish > >> > >> On Mon, 16 May 2016, Juha Jaykka wrote: > >>> Dear list, > >>> > >>> I cannot get petsc4py 3.7 to work. I have petsc compiled with a bunch of > >>> options to suit my needs (sorry for the un-escaped line breaks): > >>> > >>> --COPTFLAGS="-Ofast " --CXXOPTFLAGS="-Ofast " > >>> --CXX_LINKER_FLAGS="-Wl,--no-as- needed -lgfortran" --FOPTFLAGS="-Ofast " > >>> --LDFLAGS="-lgfortran -g" --LIBS=- lgfortran --download-chombo=yes > >>> --download-sprng=no -- > >>> prefix=/home/juhaj/progs+3.7/petsc --with-64-bit-indices > >>> --with-afterimage- > >>> include=/usr/include/libAfterImage --with-afterimage-lib=/usr/lib/x86_64- > >>> linux-gnu/libAfterImage.so --with-afterimage=1 --with-blas-lapack- > >>> include=/usr/include/openblas --with-blas-lib=/usr/lib/openblas- > >>> base/libblas.so --with-boost=1 --with-c-support --with-clanguage=C++ > >>> --with- debugging=0 > >>> --with-external-packages-dir=/home/juhaj/progs+3.7/automatic- downloads > >>> --with-fftw-include=/usr/include --with-fftw-lib="[/usr/lib/x86_64- > >>> linux-gnu/libfftw3.so,/usr/lib/x86_64-linux-gnu/libfftw3_mpi.so]" --with- > >>> fftw=1 --with-fortran-interfaces=1 --with-gnu-compilers=1 --with-hdf5- > >>> include=/usr/include/hdf5/openmpi --with-hdf5-lib=/usr/lib/x86_64-linux- > >>> gnu/libhdf5_openmpi.so --with-hdf5=1 --with-hwloc=1 --with-hypre- > >>> include=/usr/include --with-hypre-lib=/usr/lib/libHYPRE_IJ_mv-2.8.0b.so -- > >>> with-hypre=1 --with-lapack-lib=/usr/lib/openblas-base/liblapack.so --with- > >>> log=0 --with-memalign=64 --with-metis=0 --with-mpi-shared=1 --with-mpi=1 > >>> -- > >>> with-mumps=0 --with-netcdf-include=/usr/include/ --with-netcdf- > >>> lib=/usr/lib/libnetcdf.so --with-netcdf=1 --with-numpy=1 --with-openmp=1 > >>> -- > >>> with-parmetis=0 --with-pic=1 --with-scalapack-include=/usr/include --with- > >>> scalapack-lib=/usr/lib/libscalapack-openmpi.so --with-scalapack=1 --with- > >>> scalar-type=real --with-shared-libraries --with-spooles- > 
>>> include=/usr/include/spooles --with-spooles-lib=/usr/lib/libspooles.so > >>> --with- spooles=1 --with-suitesparse=0 --with-tetgen=0 --with-triangle=0 > >>> --with- valgrind=1 > >>> > >>> and after installing petsc, pip install petsc4py fails with > >>> > >>> http://paste.debian.net/686482/ > >>> > >>> I don't really know where to start looking for the problem. Any idea how > >>> to > >>> fix this? > >>> > >>> Cheers, > >>> Juha > > > > From sebastian at prebtec.de Tue May 17 03:58:10 2016 From: sebastian at prebtec.de (Sebastian Uharek) Date: Tue, 17 May 2016 10:58:10 +0200 Subject: [petsc-users] Multiblock structured grid Message-ID: Hi, I have a multi-block structured grid, where the individual blocks share information at the boundaries through ghost cells. I would like to solve a Poisson equation with PETSc on this grid. My original code updated the ghost cell information after every iteration. At the current stage I've treated these ghost cell variables in my PETSc code explicitly, but I don't know if it's possible (or a good idea) to update the RHS after every iteration. The alternative would be to deal with the interblock connectivities implicitly, leading to a different matrix structure. Are there any suggestions as to what would be the best choice for my problem? Thanks, Sebastian From dalcinl at gmail.com Tue May 17 04:45:49 2016 From: dalcinl at gmail.com (Lisandro Dalcin) Date: Tue, 17 May 2016 12:45:49 +0300 Subject: [petsc-users] petsc4py problem In-Reply-To: <1986743.at5EmK5uWl@vega> References: <1931875.DcQZnQGkct@vega> <1986743.at5EmK5uWl@vega> Message-ID: On 17 May 2016 at 06:54, Juha Jaykka wrote: > Sorted my problem, sorry for the noise. > > For those interested > >> --with-log=0 > > is not compatible with petsc4py. > Just pushed a trivial fix in petsc4py/maint and merged into master. Now you should be able to use PETSc configured --with-log=0 https://bitbucket.org/petsc/petsc4py/commits/a07489ceb0b21505e483513d8ccb5ad4a53a321b -- Lisandro Dalcin ============ Research Scientist Computer, Electrical and Mathematical Sciences & Engineering (CEMSE) Extreme Computing Research Center (ECRC) King Abdullah University of Science and Technology (KAUST) http://ecrc.kaust.edu.sa/ 4700 King Abdullah University of Science and Technology al-Khawarizmi Bldg (Bldg 1), Office # 0109 Thuwal 23955-6900, Kingdom of Saudi Arabia http://www.kaust.edu.sa Office Phone: +966 12 808-0459 From knepley at gmail.com Tue May 17 06:43:23 2016 From: knepley at gmail.com (Matthew Knepley) Date: Tue, 17 May 2016 06:43:23 -0500 Subject: [petsc-users] Multiblock structured grid In-Reply-To: References: Message-ID: On Tue, May 17, 2016 at 3:58 AM, Sebastian Uharek wrote: > Hi, > I have a multi-block structured grid, where the individual blocks share > information at the boundaries through ghost cells. I would like to solve a > Poisson equation with PETSc on this grid. My original code updated the > ghost cell information after every iteration. At the current stage I've > treated these ghost cell variables in my PETSc code explicitly, but I don't > know if it's possible (or a good idea) to update the RHS after every > iteration. The alternative would be to deal with the interblock > connectivities implicitly, leading to a different matrix structure. Are > there any suggestions as to what would be the best choice for my problem? > If you only pass information at the boundary, you are basically using the original Schwarz method to solve it. This is pretty slowly convergent.
If you couple everything implicitly, you could use a much more efficient solver, like AMG for Poisson. Thanks, Matt > Thanks, > Sebastian -- What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. -- Norbert Wiener -------------- next part -------------- An HTML attachment was scrubbed... URL: From knepley at gmail.com Tue May 17 09:03:16 2016 From: knepley at gmail.com (Matthew Knepley) Date: Tue, 17 May 2016 09:03:16 -0500 Subject: [petsc-users] block matrix in serial In-Reply-To: <5714633E.8090709@auckland.ac.nz> References: <55F8CE54.5010707@auckland.ac.nz> <5714633E.8090709@auckland.ac.nz> Message-ID: On Sun, Apr 17, 2016 at 11:31 PM, Adrian Croucher wrote: > hi > > Matt, did you (or anyone else) ever get a chance to look into these two bugs? > > AFAICT the blocksize discovery code hasn't been changed since then. > > I'm currently running a problem where the linear solver is struggling (running out of iterations), > and I'm wondering if it might perform better if the matrix blocksize were being set correctly. > May be nothing to do with it of course, but would at least be useful to eliminate that possibility. > > I have finally pushed what I think are the right fixes into master. Can you take a look and let me know if it is fixed for you? Thanks, Matt > Cheers, Adrian>>* On 16/09/15 12:40, Matthew Knepley wrote: > *>* > On Tue, Sep 15, 2015 at 9:05 PM, Adrian Croucher > *>* > >> > *>* > wrote: > *>* > > *>* > hi > *>* > > *>* > I have a test code (attached) that sets up a finite volume mesh > *>* > using DMPlex, with 2 degrees of freedom per cell. > *>* > > *>* > I then create a matrix using DMCreateMatrix(), having used > *>* > DMSetMatType() to set the matrix type to MATBAIJ or MATMPIBAIJ, to > *>* > take advantage of the block structure. > *>* > > *>* > This works OK and gives me the expected matrix structure when I > *>* > run on > 1 processor, but neither MATBAIJ or MATMPIBAIJ works if I > *>* > run it in serial: > *>* > > *>* > > *>* > Plex should automatically discover the block size from the Section. > *>* > If not, it uses block size 1. I have to look at the example to see > *>* > why the discovery is not working. Do you have any constraints? > *> > >* It looks like block discovery in parallel effectively always > *>* determines a block size of 1. Running with -mat_view ::ascii_info gives: > *>>* Mat Object: 2 MPI processes > *>* type: mpibaij > *>* rows=20, cols=20 > *>* total: nonzeros=112, allocated nonzeros=112 > *>* total number of mallocs used during MatSetValues calls =0 > *>* block size is 1 > *>* ^^^ > *>>* The block size discovery does this: > *>>* for (p = pStart; p < pEnd; ++p) { > *>* ierr = PetscSectionGetDof(sectionGlobal, p, &dof);CHKERRQ(ierr); > *>* ierr = PetscSectionGetConstraintDof(sectionGlobal, p, > *>* &cdof);CHKERRQ(ierr); > *>* if (dof-cdof) { > *>* if (bs < 0) { > *>* bs = dof-cdof; > *>* } else if (bs != dof-cdof) { > *>* /* Layout does not admit a pointwise block size */ > *>* bs = 1; > *>* break; > *>* } > *>* } > *>* } > *>>* In parallel, some of the dofs are remote (encoded as negative). So we > *>* run through seeing (dof - cdof) == 2, until we hit a "remote" point at > *>* when we see (dof - cdof) != 2 and then we break out and set bs = 1. > *>>* In serial, we correctly determine bs == 2. 
But then > *>* DMGetLocalToGlobalMapping always does > *>>* ierr = ISLocalToGlobalMappingCreate(PETSC_COMM_SELF, 1,size, > *>* ltog, PETSC_OWN_POINTER, &dm->ltogmap);CHKERRQ(ierr); > *>>>* i.e. is set with block size == 1. > *>>* So there are two bugs here. > *>>* 1. In parallel, block size discovery in Mat creation has gone wrong > *> > Crap. Hoist on my own petard. Okay I will fix this. > > > >* 2. (Always), the lgmap has block size of 1, irrespective of the > *>* discovered block size from the section. > *> > Yep. This can also be fixed. It should work regardless, but would be better > with blocking. > > Thanks > > Matt > > -- > Dr Adrian Croucher > Senior Research Fellow > Department of Engineering Science > University of Auckland, New Zealand > email: a.croucher at auckland.ac.nz > tel: +64 (0)9 923 84611 > > -- What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. -- Norbert Wiener -------------- next part -------------- An HTML attachment was scrubbed... URL: From knepley at gmail.com Tue May 17 09:10:20 2016 From: knepley at gmail.com (Matthew Knepley) Date: Tue, 17 May 2016 09:10:20 -0500 Subject: [petsc-users] example of Parallel running DG code with dynamic mesh refinement? In-Reply-To: <20AD3C37-0AFD-4294-B70C-47B6912C3E95@163.com> References: <20AD3C37-0AFD-4294-B70C-47B6912C3E95@163.com> Message-ID: I do not have any DG examples, but I do regularly refined CG for Poisson in SNES ex12. Basically, you can use DMPlex to represent an unstructured grid, and regularly refine using -dm_refine. If you want adaptive refinement, that is possible in serial using mesh generators, and in parallel using Pragmatic, but the interface is not well-developed. Matt On Thu, Feb 11, 2016 at 9:40 PM, Wei Zhang wrote: > Dear everyone: > I am planning to write a parallel running DG N-S solver where I want to > do mesh refinement. I want to know if there is any example I can start with? > > /Wei > -- What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. -- Norbert Wiener -------------- next part -------------- An HTML attachment was scrubbed... URL: From ppratapa at gatech.edu Tue May 17 12:21:41 2016 From: ppratapa at gatech.edu (Pratapa, Phanisri P) Date: Tue, 17 May 2016 17:21:41 +0000 Subject: [petsc-users] User defined solver for PCMG Message-ID: Hi, I am trying to find out if one can use a user defined linear solver function in PCMG (instead of the default GMRES). According to the petsc manual, I can change the solver/smoother through the KSP context and the available solvers, but I am interested in using my own function (solver). Thank you, Regards, Pradeep -------------- next part -------------- An HTML attachment was scrubbed... URL: From knepley at gmail.com Tue May 17 12:28:31 2016 From: knepley at gmail.com (Matthew Knepley) Date: Tue, 17 May 2016 12:28:31 -0500 Subject: [petsc-users] User defined solver for PCMG In-Reply-To: References: Message-ID: On Tue, May 17, 2016 at 12:21 PM, Pratapa, Phanisri P wrote: > Hi, > > I am trying to find out if one can use a user defined linear solver > function in PCMG (instead of the default GMRES). According to the petsc > manual, I can change the solver/smoother through the KSP context and the > available solvers, but I am interested in using my own function (solver). 
> > You can do this the Right Way: 1) Implement your solver as a PETSc KSP (see any of the implementations, like KSPCG) 2) Then just use -pc_mg_levels_ksp_type mysolver or the Hard Way 1) Pull out each level KSP ( http://www.mcs.anl.gov/petsc/petsc-current/docs/manualpages/PC/PCMGGetSmoother.html ) 2) Set the type to KSPPREONLY 3) Set the preconditioner to PCSHELL 4) Set your solver to the apply function for the PCSHELL Thanks, Matt > Thank you, > > Regards, > > Pradeep > -- What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. -- Norbert Wiener -------------- next part -------------- An HTML attachment was scrubbed... URL: From bsmith at mcs.anl.gov Tue May 17 13:08:18 2016 From: bsmith at mcs.anl.gov (Barry Smith) Date: Tue, 17 May 2016 13:08:18 -0500 Subject: [petsc-users] User defined solver for PCMG In-Reply-To: References: Message-ID: > On May 17, 2016, at 12:21 PM, Pratapa, Phanisri P wrote: > > Hi, > > I am trying to find out if one can use a user defined linear solver function in PCMG (instead of the default GMRES). According to the petsc manual, I can change the solver/smoother through the KSP context and the available solvers, but I am interested in using my own function (solver). What do you mean here by "solver"? Do you want to provide a new Krylov method that is not currently in PETSc or a new preconditioner that is specific to your problem and cannot be written as a composition of preconditioners and Krylov methods already in PETSc? An example of your own custom preconditioner could be an SOR iteration that you hand code based on the stencil and doesn't use a stored matrix.
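To make the "Hard Way" above concrete, a minimal sketch of wiring a hand-coded smoother into the PCMG level solvers could look like the following; MySmootherApply and its context are hypothetical placeholders, error handling is trimmed, and the level KSP type is set to KSPPREONLY so only the shell preconditioner is applied:

  #include <petscksp.h>

  /* y = M^{-1} x, e.g. a hand-coded SOR sweep based on the stencil */
  PetscErrorCode MySmootherApply(PC shell, Vec x, Vec y)
  {
    void           *ctx;
    PetscErrorCode  ierr;
    ierr = PCShellGetContext(shell, &ctx);CHKERRQ(ierr);
    /* ... apply the smoother to x, writing the result into y ... */
    return 0;
  }

  /* inside the setup code, where pc is already of type PCMG with nlevels levels */
  PetscInt l;
  for (l = 1; l < nlevels; l++) {       /* level 0 is the coarse solve; usually left alone */
    KSP smoother; PC subpc;
    ierr = PCMGGetSmoother(pc, l, &smoother);CHKERRQ(ierr);
    ierr = KSPSetType(smoother, KSPPREONLY);CHKERRQ(ierr);
    ierr = KSPGetPC(smoother, &subpc);CHKERRQ(ierr);
    ierr = PCSetType(subpc, PCSHELL);CHKERRQ(ierr);
    ierr = PCShellSetApply(subpc, MySmootherApply);CHKERRQ(ierr);
  }

Any per-level data the smoother needs can be attached with PCShellSetContext and retrieved in the apply function as shown.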
In that case you would access the PC on the level and use PCSetType(subpc,PCSHELL) and PCShellSetApply(subpc, your custom function). Barry > > Thank you, > > Regards, > > Pradeep From jed at jedbrown.org Tue May 17 13:26:44 2016 From: jed at jedbrown.org (Jed Brown) Date: Tue, 17 May 2016 11:26:44 -0700 Subject: [petsc-users] User defined solver for PCMG In-Reply-To: References: Message-ID: <87a8jo7d7f.fsf@jedbrown.org> "Pratapa, Phanisri P" writes: > What I mean is that: I want to implement multigrid preconditioning on > a new Krylov method that is not currently in PETSc. Sure, implement it as a KSP. It's not difficult. > For this, I was hoping that I could replace the KSPFGMRES smoother > (default) I don't understand this statement. FGMRES is not the default, nor is it typically used as a smoother. Do you want to change the smoother (Chebyshev/SOR by default) or the Krylov accelerator? -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 818 bytes Desc: not available URL: From bsmith at mcs.anl.gov Tue May 17 13:26:59 2016 From: bsmith at mcs.anl.gov (Barry Smith) Date: Tue, 17 May 2016 13:26:59 -0500 Subject: [petsc-users] User defined solver for PCMG In-Reply-To: References: Message-ID: <110A483F-3094-42AF-B077-6D54B7122242@mcs.anl.gov> > On May 17, 2016, at 1:14 PM, Pratapa, Phanisri P wrote: > > Barry, > > What I mean is that: I want to implement multigrid preconditioning on a new Krylov method that is not currently in PETSc. For this, I was hoping that I could replace the KSPFGMRES smoother (default) with the "new solver" I have. But I do not have the new solver as a PETSc KSP yet. Understood. We don't provide a KSPSHELL because we consider it so easy to implement a new KSP in PETSc that having a KSPSHELL is unnecessary. If your new Krylov method is "stand-alone" and not, for example, a modification of GMRES here is how to proceed. Say your KSP is called mykrylov Make a directory src/ksp/ksp/impls/mykrylov Copy src/ksp/ksp/impls/cg/cg.c and src/ksp/ksp/impls/cg/cgimpl.c and src/ksp/ksp/impls/cg/makefile to that directory changing the .c and .h file names. Then edit the three files to reflect the new names. Edit src/ksp/ksp/impls/makefile and add the mykrylov directory to the list of directories (variable DIRS). Then code your new Krylov method inside the two new files you copied over following the directions given in the file. If your new Krylov method is an extension/modification of GMRES then it is possible to reuse most of the GMRES implementation in PETSc but a bit more involved. Under the src/ksp/ksp/impls/gmres directory are several variants fgmres, pgmres, lgmres, pipefgmres, agmres, and dgmres. I would recommend picking the one that most closely resembles your new method and copying it as above and modifying it to match your algorithm. Barry > > Thank you, > > Regards, > > Pradeep > ________________________________________ > From: Barry Smith > Sent: Tuesday, May 17, 2016 2:08:18 PM > To: Pratapa, Phanisri P > Cc: petsc-users at mcs.anl.gov > Subject: Re: [petsc-users] User defined solver for PCMG > >> On May 17, 2016, at 12:21 PM, Pratapa, Phanisri P wrote: >> >> Hi, >> >> I am trying to find out if one can use a user defined linear solver function in PCMG (instead of the default GMRES). According to the petsc manual, I can change the solver/smoother through the KSP context and the available solvers, but I am interested in using my own function (solver). > > What do you mean here by "solver"? 
Do you want to provide a new Krylov method that is not currently in PETSc or a new preconditioner that is specific to your problem and cannot be written as a composition of preconditioners and Krylov methods already in PETSc? > > An example of your own custom preconditioner could be an SOR iteration that you hand code based on the stencil and doesn't use a stored matrix. In that case you would access the PC on the level and use PCSetType(subpc,PCSHELL) and PCShellSetApply(subpc, your custom function). > > Barry > >> >> Thank you, >> >> Regards, >> >> Pradeep > From ppratapa at gatech.edu Tue May 17 13:49:39 2016 From: ppratapa at gatech.edu (Pratapa, Phanisri P) Date: Tue, 17 May 2016 18:49:39 +0000 Subject: [petsc-users] User defined solver for PCMG In-Reply-To: <110A483F-3094-42AF-B077-6D54B7122242@mcs.anl.gov> References: , <110A483F-3094-42AF-B077-6D54B7122242@mcs.anl.gov> Message-ID: @Barry: Thank you for the information. @Jed: The notes on PCMG (http://www.mcs.anl.gov/petsc/petsc-current/docs/manualpages/PC/PCMG.html#PCMG) says: "By default this uses GMRES on the fine grid smoother so this should be used with KSPFGMRES or the smoother changed to not use GMRES". I understood this statement to mean that GMRES is the default smoother. I want to change the Krylov method (and not the smoother) that is being preconditioned. So I guess if I just implement my solver as KSP and use PCMG as preconditioner that should solve my problem. Please let me know if this sounds fine. Thank you, Regards, Pradeep ________________________________________ From: Barry Smith Sent: Tuesday, May 17, 2016 2:26:59 PM To: Pratapa, Phanisri P Cc: petsc-users at mcs.anl.gov Subject: Re: [petsc-users] User defined solver for PCMG > On May 17, 2016, at 1:14 PM, Pratapa, Phanisri P wrote: > > Barry, > > What I mean is that: I want to implement multigrid preconditioning on a new Krylov method that is not currently in PETSc. For this, I was hoping that I could replace the KSPFGMRES smoother (default) with the "new solver" I have. But I do not have the new solver as a PETSc KSP yet. Understood. We don't provide a KSPSHELL because we consider it so easy to implement a new KSP in PETSc that having a KSPSHELL is unnecessary. If your new Krylov method is "stand-alone" and not, for example, a modification of GMRES here is how to proceed. Say your KSP is called mykrylov Make a directory src/ksp/ksp/impls/mykrylov Copy src/ksp/ksp/impls/cg/cg.c and src/ksp/ksp/impls/cg/cgimpl.c and src/ksp/ksp/impls/cg/makefile to that directory changing the .c and .h file names. Then edit the three files to reflect the new names. Edit src/ksp/ksp/impls/makefile and add the mykrylov directory to the list of directories (variable DIRS). Then code your new Krylov method inside the two new files you copied over following the directions given in the file. If your new Krylov method is an extension/modification of GMRES then it is possible to reuse most of the GMRES implementation in PETSc but a bit more involved. Under the src/ksp/ksp/impls/gmres directory are several variants fgmres, pgmres, lgmres, pipefgmres, agmres, and dgmres. I would recommend picking the one that most closely resembles your new method and copying it as above and modifying it to match your algorithm. 
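As a sketch of how such a method is then used once it exists (the names "mykrylov" and KSPCreate_MyKrylov are placeholders for whatever you write following the directions above): an implementation added under src/ksp/ksp/impls is picked up by KSPRegisterAll automatically, while one kept outside the PETSc tree is registered by hand; either way it can then be combined with PCMG like any built-in KSP. Assuming the matrix A and vectors b, x already exist:

  #include <petscksp.h>

  PETSC_EXTERN PetscErrorCode KSPCreate_MyKrylov(KSP);  /* your create routine */

  KSP ksp;
  PC  pc;
  ierr = KSPRegister("mykrylov", KSPCreate_MyKrylov);CHKERRQ(ierr);
  ierr = KSPCreate(PETSC_COMM_WORLD, &ksp);CHKERRQ(ierr);
  ierr = KSPSetOperators(ksp, A, A);CHKERRQ(ierr);
  ierr = KSPSetType(ksp, "mykrylov");CHKERRQ(ierr);
  ierr = KSPGetPC(ksp, &pc);CHKERRQ(ierr);
  ierr = PCSetType(pc, PCMG);CHKERRQ(ierr);
  ierr = KSPSetFromOptions(ksp);CHKERRQ(ierr);
  ierr = KSPSolve(ksp, b, x);CHKERRQ(ierr);

The multigrid hierarchy itself still has to be supplied in the usual way (through a DM or the PCMG setup calls); the sketch only shows where the new Krylov method plugs in.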
Barry > > Thank you, > > Regards, > > Pradeep > ________________________________________ > From: Barry Smith > Sent: Tuesday, May 17, 2016 2:08:18 PM > To: Pratapa, Phanisri P > Cc: petsc-users at mcs.anl.gov > Subject: Re: [petsc-users] User defined solver for PCMG > >> On May 17, 2016, at 12:21 PM, Pratapa, Phanisri P wrote: >> >> Hi, >> >> I am trying to find out if one can use a user defined linear solver function in PCMG (instead of the default GMRES). According to the petsc manual, I can change the solver/smoother through the KSP context and the available solvers, but I am interested in using my own function (solver). > > What do you mean here by "solver"? Do you want to provide a new Krylov method that is not currently in PETSc or a new preconditioner that is specific to your problem and cannot be written as a composition of preconditioners and Krylov methods already in PETSc? > > An example of your own custom preconditioner could be an SOR iteration that you hand code based on the stencil and doesn't use a stored matrix. In that case you would access the PC on the level and use PCSetType(subpc,PCSHELL) and PCShellSetApply(subpc, your custom function). > > Barry > >> >> Thank you, >> >> Regards, >> >> Pradeep > From bsmith at mcs.anl.gov Tue May 17 15:25:33 2016 From: bsmith at mcs.anl.gov (Barry Smith) Date: Tue, 17 May 2016 15:25:33 -0500 Subject: [petsc-users] User defined solver for PCMG In-Reply-To: References: <110A483F-3094-42AF-B077-6D54B7122242@mcs.anl.gov> Message-ID: <8F88FF42-9B24-480C-BA63-EAFADAC7542B@mcs.anl.gov> > On May 17, 2016, at 1:49 PM, Pratapa, Phanisri P wrote: > > @Barry: Thank you for the information. > > @Jed: The notes on PCMG (http://www.mcs.anl.gov/petsc/petsc-current/docs/manualpages/PC/PCMG.html#PCMG) says: > "By default this uses GMRES on the fine grid smoother so this should be used with KSPFGMRES or the smoother changed to not use GMRES". I understood this statement to mean that GMRES is the default smoother. Thanks for letting us know, this is no longer correct. We now use Chebyshev as the default smoother KSP. I have fixed the documentation. > I want to change the Krylov method (and not the smoother) that is being preconditioned. So I guess if I just implement my solver as KSP and use PCMG as preconditioner that should solve my problem. Please let me know if this sounds fine. > Yes > Thank you, > > Regards, > > Pradeep > > ________________________________________ > From: Barry Smith > Sent: Tuesday, May 17, 2016 2:26:59 PM > To: Pratapa, Phanisri P > Cc: petsc-users at mcs.anl.gov > Subject: Re: [petsc-users] User defined solver for PCMG > >> On May 17, 2016, at 1:14 PM, Pratapa, Phanisri P wrote: >> >> Barry, >> >> What I mean is that: I want to implement multigrid preconditioning on a new Krylov method that is not currently in PETSc. For this, I was hoping that I could replace the KSPFGMRES smoother (default) with the "new solver" I have. But I do not have the new solver as a PETSc KSP yet. > > Understood. We don't provide a KSPSHELL because we consider it so easy to implement a new KSP in PETSc that having a KSPSHELL is unnecessary. If your new Krylov method is "stand-alone" and not, for example, a modification of GMRES here is how to proceed. Say your KSP is called mykrylov > > Make a directory src/ksp/ksp/impls/mykrylov Copy src/ksp/ksp/impls/cg/cg.c and src/ksp/ksp/impls/cg/cgimpl.c and src/ksp/ksp/impls/cg/makefile to that directory changing the .c and .h file names. Then edit the three files to reflect the new names. 
Edit > src/ksp/ksp/impls/makefile and add the mykrylov directory to the list of directories (variable DIRS). Then code your new Krylov method inside the two new files you copied over following the directions given in the file. > > If your new Krylov method is an extension/modification of GMRES then it is possible to reuse most of the GMRES implementation in PETSc but a bit more involved. Under the src/ksp/ksp/impls/gmres directory are several variants fgmres, pgmres, lgmres, pipefgmres, agmres, and dgmres. I would recommend picking the one that most closely resembles your new method and copying it as above and modifying it to match your algorithm. > > Barry > > > >> >> Thank you, >> >> Regards, >> >> Pradeep >> ________________________________________ >> From: Barry Smith >> Sent: Tuesday, May 17, 2016 2:08:18 PM >> To: Pratapa, Phanisri P >> Cc: petsc-users at mcs.anl.gov >> Subject: Re: [petsc-users] User defined solver for PCMG >> >>> On May 17, 2016, at 12:21 PM, Pratapa, Phanisri P wrote: >>> >>> Hi, >>> >>> I am trying to find out if one can use a user defined linear solver function in PCMG (instead of the default GMRES). According to the petsc manual, I can change the solver/smoother through the KSP context and the available solvers, but I am interested in using my own function (solver). >> >> What do you mean here by "solver"? Do you want to provide a new Krylov method that is not currently in PETSc or a new preconditioner that is specific to your problem and cannot be written as a composition of preconditioners and Krylov methods already in PETSc? >> >> An example of your own custom preconditioner could be an SOR iteration that you hand code based on the stencil and doesn't use a stored matrix. In that case you would access the PC on the level and use PCSetType(subpc,PCSHELL) and PCShellSetApply(subpc, your custom function). >> >> Barry >> >>> >>> Thank you, >>> >>> Regards, >>> >>> Pradeep >> > From juhaj at iki.fi Wed May 18 13:38:53 2016 From: juhaj at iki.fi (Juha Jaykka) Date: Wed, 18 May 2016 19:38:53 +0100 Subject: [petsc-users] snes failures Message-ID: <2345493.9IZagzfzqj@vega> Dear list, I'm designing a short training course on HPC, and decided to use PETSc as an example of a good way of getting things done quick, easy, and with good performance, and without needing to write one's own code for things like linear or non-linear solvers etc. However, my SNES example turned out to be problematic: I chose the (static) sine-Gordon equation for my example, mostly because its exact solution is known so it is easy to compare with numerics and also because it is, after all, a dead simple equation. Yet my code refuses to converge most of the time! Using -snes_type ngs always succeeds, but is also very slow. Any other type will fail once I increase the domain size from ~100 points (the actual number depends on the type). I always keep the lattice spacing at 0.1. The failure is also always the same: DIVERGED_LINE_SEARCH. Some types manage to take one step and get stuck, some types manage to decrease the norm once and then continue forever without decreasing the norm but not complaining about divergence either (unless they hit one of the max_it-type limits), and ncg is the worst of all: it always (with any lattice size!) fails at the very first step. I've checked the Jacobian, and I suspect it is ok as ngs converges and the other types except ncg also converge nicely unless the domain is too big. Any ideas of where this could go wrong? Cheers, Juha P.S. 
I can share the whole code, if that is needed, but it is presently quite messy thanks to all my efforts at trying to sort this out. From juhaj at iki.fi Wed May 18 13:45:22 2016 From: juhaj at iki.fi (Juha Jaykka) Date: Wed, 18 May 2016 19:45:22 +0100 Subject: [petsc-users] snes failures In-Reply-To: <2345493.9IZagzfzqj@vega> References: <2345493.9IZagzfzqj@vega> Message-ID: <1741977.kPu2lQm2TH@vega> Oh, and one additional point I forgot: *any* type except ncg will converge if I change my initial conditions from a straight line from 0 to 2pi to a two- domain solution phi(x) where phi(x>=0) == 2pi and phi(x<0) == 0. Ncg still fails, though. In fact, it goes NAN with the default LS type (cp), but fails completely with bt. Ncg even fails if my initial condition is the exact solution itself (or, rather, the exact solution restricted to the numerical interval). Cheers, Juha On Wednesday 18 May 2016 19:38:53 Juha Jaykka wrote: > Dear list, > > I'm designing a short training course on HPC, and decided to use PETSc as an > example of a good way of getting things done quick, easy, and with good > performance, and without needing to write one's own code for things like > linear or non-linear solvers etc. > > However, my SNES example turned out to be problematic: I chose the (static) > sine-Gordon equation for my example, mostly because its exact solution is > known so it is easy to compare with numerics and also because it is, after > all, a dead simple equation. Yet my code refuses to converge most of the > time! > > Using -snes_type ngs always succeeds, but is also very slow. Any other type > will fail once I increase the domain size from ~100 points (the actual > number depends on the type). I always keep the lattice spacing at 0.1. The > failure is also always the same: DIVERGED_LINE_SEARCH. Some types manage to > take one step and get stuck, some types manage to decrease the norm once > and then continue forever without decreasing the norm but not complaining > about divergence either (unless they hit one of the max_it-type limits), > and ncg is the worst of all: it always (with any lattice size!) fails at > the very first step. > > I've checked the Jacobian, and I suspect it is ok as ngs converges and the > other types except ncg also converge nicely unless the domain is too big. > > Any ideas of where this could go wrong? > > Cheers, > Juha > > P.S. I can share the whole code, if that is needed, but it is presently > quite messy thanks to all my efforts at trying to sort this out. From knepley at gmail.com Wed May 18 13:48:52 2016 From: knepley at gmail.com (Matthew Knepley) Date: Wed, 18 May 2016 13:48:52 -0500 Subject: [petsc-users] snes failures In-Reply-To: <2345493.9IZagzfzqj@vega> References: <2345493.9IZagzfzqj@vega> Message-ID: On Wed, May 18, 2016 at 1:38 PM, Juha Jaykka wrote: > Dear list, > > I'm designing a short training course on HPC, and decided to use PETSc as > an > example of a good way of getting things done quick, easy, and with good > performance, and without needing to write one's own code for things like > linear or non-linear solvers etc. > > However, my SNES example turned out to be problematic: I chose the (static) > sine-Gordon equation for my example, mostly because its exact solution is > known so it is easy to compare with numerics and also because it is, after > all, a dead simple equation. Yet my code refuses to converge most of the > time! > > Using -snes_type ngs always succeeds, but is also very slow. 
Any other type > will fail once I increase the domain size from ~100 points (the actual > number > depends on the type). I always keep the lattice spacing at 0.1. The > failure is > also always the same: DIVERGED_LINE_SEARCH. Some types manage to take one > step > and get stuck, some types manage to decrease the norm once and then > continue > forever without decreasing the norm but not complaining about divergence > either (unless they hit one of the max_it-type limits), and ncg is the > worst > of all: it always (with any lattice size!) fails at the very first step. > > I've checked the Jacobian, and I suspect it is ok as ngs converges and the > other types except ncg also converge nicely unless the domain is too big. > Nope, ngs does not use the Jacobian, and small problems can converge with wrong Jacobians. Any ideas of where this could go wrong? 1) Just run with -snes_fd_color -snes_fd_color_use_mat -mat_coloring_type greedy and see if it converges. 2) Check out http://scicomp.stackexchange.com/questions/30/why-is-newtons-method-not-converging Matt > Cheers, > Juha > > P.S. I can share the whole code, if that is needed, but it is presently > quite > messy thanks to all my efforts at trying to sort this out. > > -- What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. -- Norbert Wiener -------------- next part -------------- An HTML attachment was scrubbed... URL: From juhaj at iki.fi Wed May 18 16:17:53 2016 From: juhaj at iki.fi (Juha Jaykka) Date: Wed, 18 May 2016 22:17:53 +0100 Subject: [petsc-users] snes failures In-Reply-To: References: <2345493.9IZagzfzqj@vega> Message-ID: <1495692.HZxQxc1InI@vega> On Wednesday 18 May 2016 13:48:52 Matthew Knepley wrote: > On Wed, May 18, 2016 at 1:38 PM, Juha Jaykka wrote: > > Dear list, > > > > I'm designing a short training course on HPC, and decided to use PETSc as > > an > > example of a good way of getting things done quick, easy, and with good > > performance, and without needing to write one's own code for things like > > linear or non-linear solvers etc. > > > > However, my SNES example turned out to be problematic: I chose the > > (static) > > sine-Gordon equation for my example, mostly because its exact solution is > > known so it is easy to compare with numerics and also because it is, after > > all, a dead simple equation. Yet my code refuses to converge most of the > > time! > > > > Using -snes_type ngs always succeeds, but is also very slow. Any other > > type > > will fail once I increase the domain size from ~100 points (the actual > > number > > depends on the type). I always keep the lattice spacing at 0.1. The > > failure is > > also always the same: DIVERGED_LINE_SEARCH. Some types manage to take one > > step > > and get stuck, some types manage to decrease the norm once and then > > continue > > forever without decreasing the norm but not complaining about divergence > > either (unless they hit one of the max_it-type limits), and ncg is the > > worst > > of all: it always (with any lattice size!) fails at the very first step. > > > > I've checked the Jacobian, and I suspect it is ok as ngs converges and the > > other types except ncg also converge nicely unless the domain is too big. > > Nope, ngs does not use the Jacobian, and small problems can converge with > wrong Jacobians. > > Any ideas of where this could go wrong? 
> > > 1) Just run with -snes_fd_color -snes_fd_color_use_mat -mat_coloring_type > greedy and > see if it converges. It does not. And I should have mentioned earlier, that I tried -snes_mf, - snes_mf_operator, -snes_fd and -snes_fd_color already and none of those converges. Your suggested options result in 0 SNES Function norm 1.002496882788e+00 Line search: lambdas = [1., 0.], ftys = [1.01105, 1.005] Line search terminated: lambda = 168.018, fnorms = 1.58978 1 SNES Function norm 1.589779063742e+00 Line search: lambdas = [1., 0.], ftys = [5.57144, 4.11598] Line search terminated: lambda = 4.82796, fnorms = 8.93164 2 SNES Function norm 8.931639387159e+00 Line search: lambdas = [1., 0.], ftys = [2504.72, 385.612] Line search terminated: lambda = 2.18197, fnorms = 157.043 3 SNES Function norm 1.570434892800e+02 Line search: lambdas = [1., 0.], ftys = [1.89092e+08, 1.48956e+06] Line search terminated: lambda = 2.00794, fnorms = 40941.5 4 SNES Function norm 4.094149042511e+04 Line search: lambdas = [1., 0.], ftys = [8.60081e+17, 2.56063e+13] Line search terminated: lambda = 2.00003, fnorms = 2.75067e+09 5 SNES Function norm 2.750671622274e+09 Line search: lambdas = [1., 0.], ftys = [1.75232e+37, 7.76449e+27] Line search terminated: lambda = 2., fnorms = 1.24157e+19 6 SNES Function norm 1.241565256983e+19 Line search: lambdas = [1., 0.], ftys = [7.27339e+75, 7.14012e+56] Line search terminated: lambda = 2., fnorms = 2.52948e+38 7 SNES Function norm 2.529479470902e+38 Line search: lambdas = [1., 0.], ftys = [1.25309e+153, 6.03796e+114] Line search terminated: lambda = 2., fnorms = 1.04992e+77 8 SNES Function norm 1.049915566775e+77 Line search: lambdas = [1., 0.], ftys = [3.71943e+307, 4.31777e+230] Line search terminated: lambda = 2., fnorms = inf. 9 SNES Function norm inf Which is very similar (perhaps even identical) to what ncg does with cp linesearch even without your suggestions. And yes, I also forgot to say, all the results I referred to were with -snes_linesearch_type bt. While testing a bit more, though, I noticed that when using -snes_type ngs the norm first goes UP before starting to decrease: 0 SNES Function norm 1.002496882788e+00 1 SNES Function norm 1.264791228033e+00 2 SNES Function norm 1.296062264876e+00 3 SNES Function norm 1.290207363235e+00 4 SNES Function norm 1.289395207346e+00 etc until 1952 SNES Function norm 9.975720236748e-09 > http://scicomp.stackexchange.com/questions/30/why-is-newtons-method-not-conv > erging None of this flags up any problems and -snes_check_jacobian consistently gives something like 9.55762e-09 = ||J - Jfd||/||J|| 3.97595e-06 = ||J - Jfd|| and looking at the values themselves with -snes_check_jacobian_view does not flag any odd points which might be wrong but not show up in the above norm. There is just one point which I found in all this testing. Running with a normal run but with -mat_mffd_type ds added, fails with Linear solve did not converge due to DIVERGED_INDEFINITE_PC iterations 2 Nonlinear solve did not converge due to DIVERGED_LINEAR_SOLVE iterations 0 instead of failing the line search. Where did the indefinite PC suddenly come from? Another point perhaps worth noting is that at a particular grid size, all the failing solves always produce the same result with the same function norm (which at 200 points equals 4.6458600451067145e-01), so at least they are failing somewhat consistently. This is except the mffd above, of course. 
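For reference, the residual in question here is just the static sine-Gordon equation u_xx = sin(u) discretized with central differences at spacing h = 0.1. A sketch of the residual evaluation on a 1D DMDA (one degree of freedom, stencil width 1, Dirichlet values 0 and 2*pi at the ends as for the kink discussed above; this is not Juha's actual course code) looks roughly like:

  #include <petscsnes.h>
  #include <petscdmda.h>

  PetscErrorCode FormFunction(SNES snes, Vec U, Vec F, void *ctx)
  {
    DM             da;
    DMDALocalInfo  info;
    Vec            Ulocal;
    PetscScalar   *u, *f;
    PetscReal      h = 0.1;
    PetscInt       i;
    PetscErrorCode ierr;

    ierr = SNESGetDM(snes, &da);CHKERRQ(ierr);
    ierr = DMDAGetLocalInfo(da, &info);CHKERRQ(ierr);
    ierr = DMGetLocalVector(da, &Ulocal);CHKERRQ(ierr);
    ierr = DMGlobalToLocalBegin(da, U, INSERT_VALUES, Ulocal);CHKERRQ(ierr);
    ierr = DMGlobalToLocalEnd(da, U, INSERT_VALUES, Ulocal);CHKERRQ(ierr);
    ierr = DMDAVecGetArray(da, Ulocal, &u);CHKERRQ(ierr);
    ierr = DMDAVecGetArray(da, F, &f);CHKERRQ(ierr);
    for (i = info.xs; i < info.xs + info.xm; i++) {
      if (i == 0)                f[i] = u[i];                 /* u(-L) = 0    */
      else if (i == info.mx - 1) f[i] = u[i] - 2.0*PETSC_PI;  /* u(+L) = 2*pi */
      else f[i] = (u[i+1] - 2.0*u[i] + u[i-1])/(h*h) - PetscSinScalar(u[i]);
    }
    ierr = DMDAVecRestoreArray(da, Ulocal, &u);CHKERRQ(ierr);
    ierr = DMDAVecRestoreArray(da, F, &f);CHKERRQ(ierr);
    ierr = DMRestoreLocalVector(da, &Ulocal);CHKERRQ(ierr);
    return 0;
  }

This is set on the SNES with SNESSetFunction in the usual way; the Jacobian for this residual is tridiagonal with off-diagonal entries 1/h^2 and diagonal -2/h^2 - cos(u_i) in the interior rows.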
The resulting iterate in the failing cases has an oscillatory nature, with the number of oscillations increasing with the domain increasing: if my domain is smaller than about -6 to +6 all the methods converge. If the domain is about -13 to +13, the "solution" starts to pick up another oscillation etc. Could there be something hairy in the sin() term of the sine-Gordon, somehow? An oscillatory solution seems to point the finger towards an oscillatory term in the equation, but I cannot see how or why it should cause oscillations. This is also irrespective of whether my Jacobian gets called, so I think I can be pretty confident the problem is not in the Jacobian, but someplace else instead. (That said, the Jacobian may still of course have some other problem.) Cheers, Juha From knepley at gmail.com Wed May 18 17:04:30 2016 From: knepley at gmail.com (Matthew Knepley) Date: Wed, 18 May 2016 17:04:30 -0500 Subject: [petsc-users] snes failures In-Reply-To: <1495692.HZxQxc1InI@vega> References: <2345493.9IZagzfzqj@vega> <1495692.HZxQxc1InI@vega> Message-ID: On Wed, May 18, 2016 at 4:17 PM, Juha Jaykka wrote: > On Wednesday 18 May 2016 13:48:52 Matthew Knepley wrote: > > On Wed, May 18, 2016 at 1:38 PM, Juha Jaykka wrote: > > > Dear list, > > > > > > I'm designing a short training course on HPC, and decided to use PETSc > as > > > an > > > example of a good way of getting things done quick, easy, and with good > > > performance, and without needing to write one's own code for things > like > > > linear or non-linear solvers etc. > > > > > > However, my SNES example turned out to be problematic: I chose the > > > (static) > > > sine-Gordon equation for my example, mostly because its exact solution > is > > > known so it is easy to compare with numerics and also because it is, > after > > > all, a dead simple equation. Yet my code refuses to converge most of > the > > > time! > > > > > > Using -snes_type ngs always succeeds, but is also very slow. Any other > > > type > > > will fail once I increase the domain size from ~100 points (the actual > > > number > > > depends on the type). I always keep the lattice spacing at 0.1. The > > > failure is > > > also always the same: DIVERGED_LINE_SEARCH. Some types manage to take > one > > > step > > > and get stuck, some types manage to decrease the norm once and then > > > continue > > > forever without decreasing the norm but not complaining about > divergence > > > either (unless they hit one of the max_it-type limits), and ncg is the > > > worst > > > of all: it always (with any lattice size!) fails at the very first > step. > > > > > > I've checked the Jacobian, and I suspect it is ok as ngs converges and > the > > > other types except ncg also converge nicely unless the domain is too > big. > > > > Nope, ngs does not use the Jacobian, and small problems can converge with > > wrong Jacobians. > > > > Any ideas of where this could go wrong? > > > > > > 1) Just run with -snes_fd_color -snes_fd_color_use_mat > -mat_coloring_type > > greedy and > > see if it converges. > > It does not. And I should have mentioned earlier, that I tried -snes_mf, - > snes_mf_operator, -snes_fd and -snes_fd_color already and none of those > converges. Your suggested options result in > For solver questions, I always need to see the result of -snes_view -snes_monitor -ksp_converged_reason -ksp_monitor_true_residual -snes_converged_reason Turn on LU for the linear solver so it plays no role. 
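Put together as a command line, with LU so the linear solve is essentially exact (the executable name here is just a placeholder), that would be something like

  ./sinegordon -pc_type lu \
      -snes_view -snes_monitor -snes_converged_reason \
      -ksp_monitor_true_residual -ksp_converged_reason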
Thanks, Matt > 0 SNES Function norm 1.002496882788e+00 > Line search: lambdas = [1., 0.], ftys = [1.01105, 1.005] > Line search terminated: lambda = 168.018, fnorms = 1.58978 > 1 SNES Function norm 1.589779063742e+00 > Line search: lambdas = [1., 0.], ftys = [5.57144, 4.11598] > Line search terminated: lambda = 4.82796, fnorms = 8.93164 > 2 SNES Function norm 8.931639387159e+00 > Line search: lambdas = [1., 0.], ftys = [2504.72, 385.612] > Line search terminated: lambda = 2.18197, fnorms = 157.043 > 3 SNES Function norm 1.570434892800e+02 > Line search: lambdas = [1., 0.], ftys = [1.89092e+08, 1.48956e+06] > Line search terminated: lambda = 2.00794, fnorms = 40941.5 > 4 SNES Function norm 4.094149042511e+04 > Line search: lambdas = [1., 0.], ftys = [8.60081e+17, 2.56063e+13] > Line search terminated: lambda = 2.00003, fnorms = 2.75067e+09 > 5 SNES Function norm 2.750671622274e+09 > Line search: lambdas = [1., 0.], ftys = [1.75232e+37, 7.76449e+27] > Line search terminated: lambda = 2., fnorms = 1.24157e+19 > 6 SNES Function norm 1.241565256983e+19 > Line search: lambdas = [1., 0.], ftys = [7.27339e+75, 7.14012e+56] > Line search terminated: lambda = 2., fnorms = 2.52948e+38 > 7 SNES Function norm 2.529479470902e+38 > Line search: lambdas = [1., 0.], ftys = [1.25309e+153, 6.03796e+114] > Line search terminated: lambda = 2., fnorms = 1.04992e+77 > 8 SNES Function norm 1.049915566775e+77 > Line search: lambdas = [1., 0.], ftys = [3.71943e+307, 4.31777e+230] > Line search terminated: lambda = 2., fnorms = inf. > 9 SNES Function norm inf > > Which is very similar (perhaps even identical) to what ncg does with cp > linesearch even without your suggestions. And yes, I also forgot to say, > all > the results I referred to were with -snes_linesearch_type bt. > > While testing a bit more, though, I noticed that when using -snes_type ngs > the > norm first goes UP before starting to decrease: > > 0 SNES Function norm 1.002496882788e+00 > 1 SNES Function norm 1.264791228033e+00 > 2 SNES Function norm 1.296062264876e+00 > 3 SNES Function norm 1.290207363235e+00 > 4 SNES Function norm 1.289395207346e+00 > etc until > 1952 SNES Function norm 9.975720236748e-09 > > > > > http://scicomp.stackexchange.com/questions/30/why-is-newtons-method-not-conv > > erging > > None of this flags up any problems and -snes_check_jacobian consistently > gives > something like > > 9.55762e-09 = ||J - Jfd||/||J|| 3.97595e-06 = ||J - Jfd|| > > and looking at the values themselves with -snes_check_jacobian_view does > not > flag any odd points which might be wrong but not show up in the above norm. > > There is just one point which I found in all this testing. Running with a > normal run but with -mat_mffd_type ds added, fails with > > Linear solve did not converge due to DIVERGED_INDEFINITE_PC iterations 2 > Nonlinear solve did not converge due to DIVERGED_LINEAR_SOLVE iterations 0 > > > instead of failing the line search. Where did the indefinite PC suddenly > come > from? > > Another point perhaps worth noting is that at a particular grid size, all > the > failing solves always produce the same result with the same function norm > (which at 200 points equals 4.6458600451067145e-01), so at least they are > failing somewhat consistently. This is except the mffd above, of course. > The > resulting iterate in the failing cases has an oscillatory nature, with the > number of oscillations increasing with the domain increasing: if my domain > is > smaller than about -6 to +6 all the methods converge. 
If the domain is > about > -13 to +13, the "solution" starts to pick up another oscillation etc. > > Could there be something hairy in the sin() term of the sine-Gordon, > somehow? > An oscillatory solution seems to point the finger towards an oscillatory > term > in the equation, but I cannot see how or why it should cause oscillations. > > This is also irrespective of whether my Jacobian gets called, so I think I > can > be pretty confident the problem is not in the Jacobian, but someplace else > instead. (That said, the Jacobian may still of course have some other > problem.) > > Cheers, > Juha > > -- What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. -- Norbert Wiener -------------- next part -------------- An HTML attachment was scrubbed... URL: From bsmith at mcs.anl.gov Wed May 18 17:16:38 2016 From: bsmith at mcs.anl.gov (Barry Smith) Date: Wed, 18 May 2016 17:16:38 -0500 Subject: [petsc-users] snes failures In-Reply-To: <1495692.HZxQxc1InI@vega> References: <2345493.9IZagzfzqj@vega> <1495692.HZxQxc1InI@vega> Message-ID: Send the code and I can play with it. Barry > On May 18, 2016, at 4:17 PM, Juha Jaykka wrote: > > On Wednesday 18 May 2016 13:48:52 Matthew Knepley wrote: >> On Wed, May 18, 2016 at 1:38 PM, Juha Jaykka wrote: >>> Dear list, >>> >>> I'm designing a short training course on HPC, and decided to use PETSc as >>> an >>> example of a good way of getting things done quick, easy, and with good >>> performance, and without needing to write one's own code for things like >>> linear or non-linear solvers etc. >>> >>> However, my SNES example turned out to be problematic: I chose the >>> (static) >>> sine-Gordon equation for my example, mostly because its exact solution is >>> known so it is easy to compare with numerics and also because it is, after >>> all, a dead simple equation. Yet my code refuses to converge most of the >>> time! >>> >>> Using -snes_type ngs always succeeds, but is also very slow. Any other >>> type >>> will fail once I increase the domain size from ~100 points (the actual >>> number >>> depends on the type). I always keep the lattice spacing at 0.1. The >>> failure is >>> also always the same: DIVERGED_LINE_SEARCH. Some types manage to take one >>> step >>> and get stuck, some types manage to decrease the norm once and then >>> continue >>> forever without decreasing the norm but not complaining about divergence >>> either (unless they hit one of the max_it-type limits), and ncg is the >>> worst >>> of all: it always (with any lattice size!) fails at the very first step. >>> >>> I've checked the Jacobian, and I suspect it is ok as ngs converges and the >>> other types except ncg also converge nicely unless the domain is too big. >> >> Nope, ngs does not use the Jacobian, and small problems can converge with >> wrong Jacobians. >> >> Any ideas of where this could go wrong? >> >> >> 1) Just run with -snes_fd_color -snes_fd_color_use_mat -mat_coloring_type >> greedy and >> see if it converges. > > It does not. And I should have mentioned earlier, that I tried -snes_mf, - > snes_mf_operator, -snes_fd and -snes_fd_color already and none of those > converges. 
Your suggested options result in > > 0 SNES Function norm 1.002496882788e+00 > Line search: lambdas = [1., 0.], ftys = [1.01105, 1.005] > Line search terminated: lambda = 168.018, fnorms = 1.58978 > 1 SNES Function norm 1.589779063742e+00 > Line search: lambdas = [1., 0.], ftys = [5.57144, 4.11598] > Line search terminated: lambda = 4.82796, fnorms = 8.93164 > 2 SNES Function norm 8.931639387159e+00 > Line search: lambdas = [1., 0.], ftys = [2504.72, 385.612] > Line search terminated: lambda = 2.18197, fnorms = 157.043 > 3 SNES Function norm 1.570434892800e+02 > Line search: lambdas = [1., 0.], ftys = [1.89092e+08, 1.48956e+06] > Line search terminated: lambda = 2.00794, fnorms = 40941.5 > 4 SNES Function norm 4.094149042511e+04 > Line search: lambdas = [1., 0.], ftys = [8.60081e+17, 2.56063e+13] > Line search terminated: lambda = 2.00003, fnorms = 2.75067e+09 > 5 SNES Function norm 2.750671622274e+09 > Line search: lambdas = [1., 0.], ftys = [1.75232e+37, 7.76449e+27] > Line search terminated: lambda = 2., fnorms = 1.24157e+19 > 6 SNES Function norm 1.241565256983e+19 > Line search: lambdas = [1., 0.], ftys = [7.27339e+75, 7.14012e+56] > Line search terminated: lambda = 2., fnorms = 2.52948e+38 > 7 SNES Function norm 2.529479470902e+38 > Line search: lambdas = [1., 0.], ftys = [1.25309e+153, 6.03796e+114] > Line search terminated: lambda = 2., fnorms = 1.04992e+77 > 8 SNES Function norm 1.049915566775e+77 > Line search: lambdas = [1., 0.], ftys = [3.71943e+307, 4.31777e+230] > Line search terminated: lambda = 2., fnorms = inf. > 9 SNES Function norm inf > > Which is very similar (perhaps even identical) to what ncg does with cp > linesearch even without your suggestions. And yes, I also forgot to say, all > the results I referred to were with -snes_linesearch_type bt. > > While testing a bit more, though, I noticed that when using -snes_type ngs the > norm first goes UP before starting to decrease: > > 0 SNES Function norm 1.002496882788e+00 > 1 SNES Function norm 1.264791228033e+00 > 2 SNES Function norm 1.296062264876e+00 > 3 SNES Function norm 1.290207363235e+00 > 4 SNES Function norm 1.289395207346e+00 > etc until > 1952 SNES Function norm 9.975720236748e-09 > > >> http://scicomp.stackexchange.com/questions/30/why-is-newtons-method-not-conv >> erging > > None of this flags up any problems and -snes_check_jacobian consistently gives > something like > > 9.55762e-09 = ||J - Jfd||/||J|| 3.97595e-06 = ||J - Jfd|| > > and looking at the values themselves with -snes_check_jacobian_view does not > flag any odd points which might be wrong but not show up in the above norm. > > There is just one point which I found in all this testing. Running with a > normal run but with -mat_mffd_type ds added, fails with > > Linear solve did not converge due to DIVERGED_INDEFINITE_PC iterations 2 > Nonlinear solve did not converge due to DIVERGED_LINEAR_SOLVE iterations 0 > > > instead of failing the line search. Where did the indefinite PC suddenly come > from? > > Another point perhaps worth noting is that at a particular grid size, all the > failing solves always produce the same result with the same function norm > (which at 200 points equals 4.6458600451067145e-01), so at least they are > failing somewhat consistently. This is except the mffd above, of course. The > resulting iterate in the failing cases has an oscillatory nature, with the > number of oscillations increasing with the domain increasing: if my domain is > smaller than about -6 to +6 all the methods converge. 
If the domain is about > -13 to +13, the "solution" starts to pick up another oscillation etc. > > Could there be something hairy in the sin() term of the sine-Gordon, somehow? > An oscillatory solution seems to point the finger towards an oscillatory term > in the equation, but I cannot see how or why it should cause oscillations. > > This is also irrespective of whether my Jacobian gets called, so I think I can > be pretty confident the problem is not in the Jacobian, but someplace else > instead. (That said, the Jacobian may still of course have some other > problem.) > > Cheers, > Juha >
From a.croucher at auckland.ac.nz Thu May 19 00:27:40 2016 From: a.croucher at auckland.ac.nz (Adrian Croucher) Date: Thu, 19 May 2016 17:27:40 +1200 Subject: [petsc-users] snes failures Message-ID: <573D4ECC.3020607@auckland.ac.nz> Might be unrelated, but I am also seeing a lot of SNES failures, on problems that used to converge fine, since I pulled the latest PETSc 'next' branch yesterday. This is using a FD Jacobian so it isn't a Jacobian coding problem. Same behaviour with LU linear solver, so it's not the linear solver either. I'm trying to bisect and find out where it stopped working. Cheers, Adrian -- Dr Adrian Croucher Senior Research Fellow Department of Engineering Science University of Auckland, New Zealand email: a.croucher at auckland.ac.nz tel: +64 (0)9 923 84611
From elias.karabelas at medunigraz.at Thu May 19 02:48:13 2016 From: elias.karabelas at medunigraz.at (Elias Karabelas) Date: Thu, 19 May 2016 09:48:13 +0200 Subject: [petsc-users] Question about KSP Message-ID: <573D6FBD.30303@medunigraz.at> Dear all, I have a question about preconditioned solvers. So, I have a Sparsematrix, say A, and now for some reason I would like to add some rank-one term u v^T to that matrix. As far as I know about Petsc, I can define the action of this matrix with MatShell. But is it possible to adapt the preconditioner (like a AMG) of my KSP to handle that kind of MatShell? Kind Regards Elias -- Dr Elias Karabelas Medical University of Graz Institute of Biophysics Harrachgasse 21/IV 8010 Graz, Austria Phone: +43 316 380 7759 Email: elias.karabelas at medunigraz.at Web : http://forschung.medunigraz.at/fodok/staff?name=EliasKarabelas
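The operator asked about here, B = A + u v^T, can be expressed as a MatShell whose multiply applies A x and then adds (v^T x) u. Below is a minimal C sketch of that idea, assuming real scalars and that A, u and v have already been created and assembled; the names LowRankCtx, MatMult_LowRank and CreateLowRankOperator are illustrative only and not part of PETSc.

#include <petscksp.h>

typedef struct {
  Mat A;    /* assembled sparse matrix            */
  Vec u, v; /* vectors of the rank-one term u v^T */
} LowRankCtx;

/* y = (A + u v^T) x = A x + (v^T x) u */
static PetscErrorCode MatMult_LowRank(Mat B, Vec x, Vec y)
{
  LowRankCtx     *ctx;
  PetscScalar    vtx;
  PetscErrorCode ierr;

  ierr = MatShellGetContext(B, (void**)&ctx);CHKERRQ(ierr);
  ierr = MatMult(ctx->A, x, y);CHKERRQ(ierr);     /* y   = A x                */
  ierr = VecDot(x, ctx->v, &vtx);CHKERRQ(ierr);   /* vtx = v^T x (real case)  */
  ierr = VecAXPY(y, vtx, ctx->u);CHKERRQ(ierr);   /* y  += (v^T x) u          */
  return 0;
}

/* Wrap A, u, v in a shell matrix B with the multiply above */
PetscErrorCode CreateLowRankOperator(Mat A, Vec u, Vec v, Mat *B)
{
  LowRankCtx     *ctx;
  PetscInt       m, n, M, N;
  PetscErrorCode ierr;

  ierr = PetscNew(&ctx);CHKERRQ(ierr);
  ctx->A = A; ctx->u = u; ctx->v = v;
  ierr = MatGetLocalSize(A, &m, &n);CHKERRQ(ierr);
  ierr = MatGetSize(A, &M, &N);CHKERRQ(ierr);
  ierr = MatCreateShell(PetscObjectComm((PetscObject)A), m, n, M, N, ctx, B);CHKERRQ(ierr);
  ierr = MatShellSetOperation(*B, MATOP_MULT, (void (*)(void))MatMult_LowRank);CHKERRQ(ierr);
  return 0;
}

With such a shell matrix one can call KSPSetOperators(ksp, B, A), so the Krylov method applies the full operator B while the preconditioner (e.g. GAMG) is built from the assembled A alone; this is essentially the advice given later in the thread to keep reusing the preconditioner built from A and rebuild it only when it starts performing poorly. If an accurate application of (A + u v^T)^{-1} were ever needed, the Sherman-Morrison identity (A + u v^T)^{-1} = A^{-1} - (A^{-1} u v^T A^{-1}) / (1 + v^T A^{-1} u) could in principle be wrapped in a PCSHELL around solves with A, but that goes beyond what is discussed in the thread.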
From knepley at gmail.com Thu May 19 07:28:13 2016 From: knepley at gmail.com (Matthew Knepley) Date: Thu, 19 May 2016 07:28:13 -0500 Subject: [petsc-users] Question about KSP In-Reply-To: <573D6FBD.30303@medunigraz.at> References: <573D6FBD.30303@medunigraz.at> Message-ID: On Thu, May 19, 2016 at 2:48 AM, Elias Karabelas < elias.karabelas at medunigraz.at> wrote: > Dear all, > > I have a question about preconditioned solvers. So, I have a Sparsematrix, > say A, and now for some reason I would like to add some rank-one term u v^T > to that matrix. > As far as I know about Petsc, I can define the action of this matrix with > MatShell.
But is it possible to adapt the preconditioner (like a AMG) of my > KSP to handle that kind of MatShell? > I do not know how you would do that. Its possible other people have ideas. Thanks, Matt > Kind Regards > Elias > > -- > Dr Elias Karabelas > > Medical University of Graz > Institute of Biophysics > Harrachgasse 21/IV > 8010 Graz, Austria > > Phone: +43 316 380 7759 > Email: elias.karabelas at medunigraz.at > Web : http://forschung.medunigraz.at/fodok/staff?name=EliasKarabelas > > -- What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. -- Norbert Wiener -------------- next part -------------- An HTML attachment was scrubbed... URL: From elias.karabelas at medunigraz.at Thu May 19 07:43:43 2016 From: elias.karabelas at medunigraz.at (Elias Karabelas) Date: Thu, 19 May 2016 14:43:43 +0200 Subject: [petsc-users] Question about KSP In-Reply-To: References: <573D6FBD.30303@medunigraz.at> <573DB1BA.4000003@medunigraz.at> <573DB3EA.7040602@medunigraz.at> Message-ID: <573DB4FF.1060306@medunigraz.at> Well I don't bother to much about which PC is use. I just wanted to look what's possible. One can of course come up with some kind of augmented Lagrange formulation, but then you end up with a saddle point problem. On 19.05.2016 14:41, Matthew Knepley wrote: > On Thu, May 19, 2016 at 7:39 AM, Elias Karabelas > > > wrote: > > Ok maybe I should go a little bit more into detail. I have some > stokes problem in a bifurcating Y-tube. Now I would like to > enforce some prescribed flux splits at the outlets. After some > literature research i found a Nitsche method to do that. And if > you consider the additional billinear forms you end up with some > nonlocal coupling terms at the outlets (which are those vectors I > talked earlier) > > > Keep the Cc. > > I can see that you might have that formulation, but how you would > incorporate that into AMG, I have no idea. > > Matt > > > On 19.05.2016 14:31, Matthew Knepley wrote: >> On Thu, May 19, 2016 at 7:29 AM, Elias Karabelas >> > > wrote: >> >> maybe something with PCSHELL? >> >> >> No, I mean that I do not understand what algorithm you might use >> (apart from the implementation). >> >> Matt >> >> >> On 19.05.2016 14:28, Matthew Knepley wrote: >>> On Thu, May 19, 2016 at 2:48 AM, Elias Karabelas >>> >> > wrote: >>> >>> Dear all, >>> >>> I have a question about preconditioned solvers. So, I >>> have a Sparsematrix, say A, and now for some reason I >>> would like to add some rank-one term u v^T to that matrix. >>> As far as I know about Petsc, I can define the action of >>> this matrix with MatShell. But is it possible to adapt >>> the preconditioner (like a AMG) of my KSP to handle that >>> kind of MatShell? >>> >>> >>> I do not know how you would do that. Its possible other >>> people have ideas. >>> >>> Thanks, >>> >>> Matt >>> >>> Kind Regards >>> Elias >>> >>> -- >>> Dr Elias Karabelas >>> >>> Medical University of Graz >>> Institute of Biophysics >>> Harrachgasse 21/IV >>> 8010 Graz, Austria >>> >>> Phone: +43 316 380 7759 >>> Email: elias.karabelas at medunigraz.at >>> >>> Web : >>> http://forschung.medunigraz.at/fodok/staff?name=EliasKarabelas >>> >>> >>> >>> >>> -- >>> What most experimenters take for granted before they begin >>> their experiments is infinitely more interesting than any >>> results to which their experiments lead. 
>>> -- Norbert Wiener >> >> -- >> Dr Elias Karabelas >> >> Medical University of Graz >> Institute of Biophysics >> Harrachgasse 21/IV >> 8010 Graz, Austria >> >> Phone:+43 316 380 7759 >> Email:elias.karabelas at medunigraz.at >> >> Web :http://forschung.medunigraz.at/fodok/staff?name=EliasKarabelas >> >> >> >> >> -- >> What most experimenters take for granted before they begin their >> experiments is infinitely more interesting than any results to >> which their experiments lead. >> -- Norbert Wiener > > -- > Dr Elias Karabelas > > Medical University of Graz > Institute of Biophysics > Harrachgasse 21/IV > 8010 Graz, Austria > > Phone:+43 316 380 7759 > Email:elias.karabelas at medunigraz.at > Web :http://forschung.medunigraz.at/fodok/staff?name=EliasKarabelas > > > > > -- > What most experimenters take for granted before they begin their > experiments is infinitely more interesting than any results to which > their experiments lead. > -- Norbert Wiener -- Dr Elias Karabelas Medical University of Graz Institute of Biophysics Harrachgasse 21/IV 8010 Graz, Austria Phone: +43 316 380 7759 Email: elias.karabelas at medunigraz.at Web : http://forschung.medunigraz.at/fodok/staff?name=EliasKarabelas -------------- next part -------------- An HTML attachment was scrubbed... URL: From knepley at gmail.com Thu May 19 07:56:01 2016 From: knepley at gmail.com (Matthew Knepley) Date: Thu, 19 May 2016 07:56:01 -0500 Subject: [petsc-users] Question about KSP In-Reply-To: <573DB4FF.1060306@medunigraz.at> References: <573D6FBD.30303@medunigraz.at> <573DB1BA.4000003@medunigraz.at> <573DB3EA.7040602@medunigraz.at> <573DB4FF.1060306@medunigraz.at> Message-ID: On Thu, May 19, 2016 at 7:43 AM, Elias Karabelas < elias.karabelas at medunigraz.at> wrote: > Well I don't bother to much about which PC is use. I just wanted to look > what's possible. One can of course come up with some kind of augmented > Lagrange formulation, but then you end up with a saddle point problem. > We do understand PCs for saddle point problems, but I do not know of anything for A+rank 1, except updated factorizations. Matt > On 19.05.2016 14:41, Matthew Knepley wrote: > > On Thu, May 19, 2016 at 7:39 AM, Elias Karabelas < > elias.karabelas at medunigraz.at> wrote: > >> Ok maybe I should go a little bit more into detail. I have some stokes >> problem in a bifurcating Y-tube. Now I would like to enforce some >> prescribed flux splits at the outlets. After some literature research i >> found a Nitsche method to do that. And if you consider the additional >> billinear forms you end up with some nonlocal coupling terms at the outlets >> (which are those vectors I talked earlier) >> > > Keep the Cc. > > I can see that you might have that formulation, but how you would > incorporate that into AMG, I have no idea. > > Matt > > >> >> On 19.05.2016 14:31, Matthew Knepley wrote: >> >> On Thu, May 19, 2016 at 7:29 AM, Elias Karabelas < >> elias.karabelas at medunigraz.at> wrote: >> >>> maybe something with PCSHELL? >>> >> >> No, I mean that I do not understand what algorithm you might use (apart >> from the implementation). >> >> Matt >> >> >>> >>> On 19.05.2016 14:28, Matthew Knepley wrote: >>> >>> On Thu, May 19, 2016 at 2:48 AM, Elias Karabelas < >>> elias.karabelas at medunigraz.at> wrote: >>> >>>> Dear all, >>>> >>>> I have a question about preconditioned solvers. So, I have a >>>> Sparsematrix, say A, and now for some reason I would like to add some >>>> rank-one term u v^T to that matrix. 
>>>> As far as I know about Petsc, I can define the action of this matrix >>>> with MatShell. But is it possible to adapt the preconditioner (like a AMG) >>>> of my KSP to handle that kind of MatShell? >>>> >>> >>> I do not know how you would do that. Its possible other people have >>> ideas. >>> >>> Thanks, >>> >>> Matt >>> >>> >>>> Kind Regards >>>> Elias >>>> >>>> -- >>>> Dr Elias Karabelas >>>> >>>> Medical University of Graz >>>> Institute of Biophysics >>>> Harrachgasse 21/IV >>>> 8010 Graz, Austria >>>> >>>> Phone: +43 316 380 7759 <%2B43%20316%20380%207759> >>>> Email: elias.karabelas at medunigraz.at >>>> Web : >>>> http://forschung.medunigraz.at/fodok/staff?name=EliasKarabelas >>>> >>>> >>> >>> >>> -- >>> What most experimenters take for granted before they begin their >>> experiments is infinitely more interesting than any results to which their >>> experiments lead. >>> -- Norbert Wiener >>> >>> >>> -- >>> Dr Elias Karabelas >>> >>> Medical University of Graz >>> Institute of Biophysics >>> Harrachgasse 21/IV >>> 8010 Graz, Austria >>> >>> Phone: +43 316 380 7759 >>> Email: elias.karabelas at medunigraz.at >>> Web : http://forschung.medunigraz.at/fodok/staff?name=EliasKarabelas >>> >>> >> >> >> -- >> What most experimenters take for granted before they begin their >> experiments is infinitely more interesting than any results to which their >> experiments lead. >> -- Norbert Wiener >> >> >> -- >> Dr Elias Karabelas >> >> Medical University of Graz >> Institute of Biophysics >> Harrachgasse 21/IV >> 8010 Graz, Austria >> >> Phone: +43 316 380 7759 >> Email: elias.karabelas at medunigraz.at >> Web : http://forschung.medunigraz.at/fodok/staff?name=EliasKarabelas >> >> > > > -- > What most experimenters take for granted before they begin their > experiments is infinitely more interesting than any results to which their > experiments lead. > -- Norbert Wiener > > > -- > Dr Elias Karabelas > > Medical University of Graz > Institute of Biophysics > Harrachgasse 21/IV > 8010 Graz, Austria > > Phone: +43 316 380 7759 > Email: elias.karabelas at medunigraz.at > Web : http://forschung.medunigraz.at/fodok/staff?name=EliasKarabelas > > -- What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. -- Norbert Wiener -------------- next part -------------- An HTML attachment was scrubbed... URL: From s_g at berkeley.edu Thu May 19 13:07:46 2016 From: s_g at berkeley.edu (Sanjay Govindjee) Date: Thu, 19 May 2016 11:07:46 -0700 Subject: [petsc-users] GAMG Indefinite Message-ID: <58770cd6-9279-16c2-9a2f-fd459fa16cb5@berkeley.edu> I am trying to solve a very ordinary nonlinear elasticity problem using -ksp_type cg -pc_type gamg in PETSc 3.7.0, which worked fine in PETSc 3.5.3. The problem I am seeing is on my first Newton iteration, the Ax=b solve returns with and Indefinite Preconditioner error (KSPGetConvergedReason == -8): (log_view.txt output also attached) 0 KSP Residual norm 8.411630828687e-02 1 KSP Residual norm 2.852209578900e-02 NO CONVERGENCE REASON: Indefinite Preconditioner NO CONVERGENCE REASON: Indefinite Preconditioner On the next and subsequent Newton iterations, I see perfectly normal behavior and the problem converges quadratically. The results look fine. I tried the same problem with -pc_type jacobi as well as super-lu, and mumps and they all work without complaint. 
My run line for GAMG is:

-ksp_type cg -ksp_monitor -log_view -pc_type gamg -pc_gamg_type agg -pc_gamg_agg_nsmooths 1 -options_left

The code flow looks like:

! If no matrix allocation yet
if(Kmat.eq.0) then
  call MatCreate(PETSC_COMM_WORLD,Kmat,ierr)
  call MatSetSizes(Kmat,numpeq,numpeq,PETSC_DETERMINE,PETSC_DETERMINE,ierr)
  call MatSetBlockSize(Kmat,nsbk,ierr)
  call MatSetFromOptions(Kmat, ierr)
  call MatSetType(Kmat,MATAIJ,ierr)
  call MatMPIAIJSetPreallocation(Kmat,PETSC_NULL_INTEGER,mr(np(246)),PETSC_NULL_INTEGER,mr(np(247)),ierr)
  call MatSeqAIJSetPreallocation(Kmat,PETSC_NULL_INTEGER,mr(np(246)),ierr)
endif

call MatZeroEntries(Kmat,ierr)

! Code to set values in matrix

call MatAssemblyBegin(Kmat, MAT_FINAL_ASSEMBLY, ierr)
call MatAssemblyEnd(Kmat, MAT_FINAL_ASSEMBLY, ierr)
call MatSetOption(Kmat,MAT_NEW_NONZERO_LOCATIONS,PETSC_TRUE,ierr)

! If no rhs allocation yet
if(rhs.eq.0) then
  call VecCreate (PETSC_COMM_WORLD, rhs, ierr)
  call VecSetSizes (rhs, numpeq, PETSC_DECIDE, ierr)
  call VecSetFromOptions(rhs, ierr)
endif

! Code to set values in RHS

call VecAssemblyBegin(rhs, ierr)
call VecAssemblyEnd(rhs, ierr)

if(kspsol_exists) then
  call KSPDestroy(kspsol,ierr)
endif

call KSPCreate(PETSC_COMM_WORLD, kspsol ,ierr)
call KSPSetOperators(kspsol, Kmat, Kmat, ierr)
call KSPSetFromOptions(kspsol,ierr)
call KSPGetPC(kspsol, pc , ierr)

call PCSetCoordinates(pc,ndm,numpn,hr(np(43)),ierr)

call KSPSolve(kspsol, rhs, sol, ierr)
call KSPGetConvergedReason(kspsol,reason,ierr)

! update solution, go back to the top

reason is coming back as -8 on my first Ax=b solve and 2 or 3 after that (with gamg). With the other solvers it is coming back as 2 or 3 for iterative options and 4 if I use one of the direct solvers.

Any ideas on what is causing the Indefinite PC on the first iteration with GAMG?

Thanks in advance,
-sanjay

-- ----------------------------------------------- Sanjay Govindjee, PhD, PE Professor of Civil Engineering 779 Davis Hall University of California Berkeley, CA 94720-1710 Voice: +1 510 642 6060 FAX: +1 510 643 5264 s_g at berkeley.edu http://www.ce.berkeley.edu/~sanjay ----------------------------------------------- Books: Engineering Mechanics of Deformable Solids: A Presentation with Exercises http://www.oup.com/us/catalog/general/subject/Physics/MaterialsScience/?view=usa&ci=9780199651641 http://ukcatalogue.oup.com/product/9780199651641.do http://amzn.com/0199651647 Engineering Mechanics 3 (Dynamics) 2nd Edition http://www.springer.com/978-3-642-53711-0 http://amzn.com/3642537111 Engineering Mechanics 3, Supplementary Problems: Dynamics http://www.amzn.com/B00SOXN8JU ----------------------------------------------- -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- ************************************************************************************************************************ *** WIDEN YOUR WINDOW TO 120 CHARACTERS.
Use 'enscript -r -fCourier9' to print this document *** ************************************************************************************************************************ ---------------------------------------------- PETSc Performance Summary: ---------------------------------------------- /Users/sg/Feap/ver85/parfeap/feap on a intel named localhost with 2 processors, by sg Thu May 19 10:56:48 2016 Using Petsc Release Version 3.7.0, Apr, 25, 2016 Max Max/Min Avg Total Time (sec): 1.470e+01 1.00000 1.470e+01 Objects: 1.436e+03 1.00701 1.431e+03 Flops: 2.443e+07 1.12507 2.307e+07 4.615e+07 Flops/sec: 1.662e+06 1.12507 1.570e+06 3.140e+06 MPI Messages: 6.865e+02 1.00000 6.865e+02 1.373e+03 MPI Message Lengths: 4.680e+05 1.00000 6.817e+02 9.360e+05 MPI Reductions: 2.026e+03 1.00000 Flop counting convention: 1 flop = 1 real number operation of type (multiply/divide/add/subtract) e.g., VecAXPY() for real vectors of length N --> 2N flops and VecAXPY() for complex vectors of length N --> 8N flops Summary of Stages: ----- Time ------ ----- Flops ----- --- Messages --- -- Message Lengths -- -- Reductions -- Avg %Total Avg %Total counts %Total Avg %Total counts %Total 0: Main Stage: 1.4698e+01 100.0% 4.6146e+07 100.0% 1.373e+03 100.0% 6.817e+02 100.0% 2.025e+03 100.0% ------------------------------------------------------------------------------------------------------------------------ See the 'Profiling' chapter of the users' manual for details on interpreting output. Phase summary info: Count: number of times phase was executed Time and Flops: Max - maximum over all processors Ratio - ratio of maximum to minimum over all processors Mess: number of messages sent Avg. len: average message length (bytes) Reduct: number of global reductions Global: entire computation Stage: stages of a computation. Set stages with PetscLogStagePush() and PetscLogStagePop(). 
%T - percent time in this phase %F - percent flops in this phase %M - percent messages in this phase %L - percent message lengths in this phase %R - percent reductions in this phase Total Mflop/s: 10e-6 * (sum of flops over all processors)/(max time over all processors) ------------------------------------------------------------------------------------------------------------------------ Event Count Time (sec) Flops --- Global --- --- Stage --- Total Max Ratio Max Ratio Max Ratio Mess Avg len Reduct %T %F %M %L %R %T %F %M %L %R Mflop/s ------------------------------------------------------------------------------------------------------------------------ --- Event Stage 0: Main Stage MatMult 702 1.0 5.1281e-03 1.1 1.06e+07 1.1 7.4e+02 5.5e+02 0.0e+00 0 44 54 43 0 0 44 54 43 0 3930 MatMultAdd 78 1.0 1.1404e-03 5.2 2.96e+05 1.2 3.9e+01 3.1e+02 0.0e+00 0 1 3 1 0 0 1 3 1 0 469 MatMultTranspose 78 1.0 3.2711e-04 1.3 2.96e+05 1.2 3.9e+01 3.1e+02 0.0e+00 0 1 3 1 0 0 1 3 1 0 1635 MatSolve 39 0.0 3.0041e-05 0.0 5.97e+03 0.0 0.0e+00 0.0e+00 0.0e+00 0 0 0 0 0 0 0 0 0 0 199 MatSOR 578 1.0 3.9439e-03 1.1 8.20e+06 1.1 0.0e+00 0.0e+00 0.0e+00 0 33 0 0 0 0 33 0 0 0 3912 MatLUFactorSym 5 1.0 4.8161e-05 1.5 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00 0 0 0 0 0 0 0 0 0 0 0 MatLUFactorNum 5 1.0 4.6730e-05 7.3 2.24e+03 0.0 0.0e+00 0.0e+00 0.0e+00 0 0 0 0 0 0 0 0 0 0 48 MatScale 30 1.0 1.2112e-04 1.1 7.26e+04 1.1 1.0e+01 2.8e+02 0.0e+00 0 0 1 0 0 0 0 1 0 0 1124 MatResidual 78 1.0 5.8436e-04 1.1 1.16e+06 1.1 7.8e+01 5.5e+02 0.0e+00 0 5 6 5 0 0 5 6 5 0 3738 MatAssemblyBegin 230 1.0 2.0638e-03 3.1 0.00e+00 0.0 6.0e+01 1.1e+03 1.7e+02 0 0 4 7 8 0 0 4 7 8 0 MatAssemblyEnd 230 1.0 4.3290e-03 1.0 0.00e+00 0.0 1.3e+02 6.9e+01 5.4e+02 0 0 10 1 26 0 0 10 1 26 0 MatGetRow 12260 1.1 1.3771e-03 1.3 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00 0 0 0 0 0 0 0 0 0 0 0 MatGetRowIJ 5 0.0 1.1921e-05 0.0 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00 0 0 0 0 0 0 0 0 0 0 0 MatGetSubMatrix 10 1.0 2.0628e-03 1.0 0.00e+00 0.0 3.5e+01 1.4e+03 1.7e+02 0 0 3 5 8 0 0 3 5 8 0 MatGetOrdering 5 0.0 1.0109e-04 0.0 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00 0 0 0 0 0 0 0 0 0 0 0 MatCoarsen 10 1.0 4.8208e-04 1.0 0.00e+00 0.0 5.5e+01 2.9e+02 1.5e+01 0 0 4 2 1 0 0 4 2 1 0 MatZeroEntries 15 1.0 1.3089e-04 1.3 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00 0 0 0 0 0 0 0 0 0 0 0 MatAXPY 10 1.0 8.3613e-04 1.0 0.00e+00 0.0 0.0e+00 0.0e+00 2.0e+01 0 0 0 0 1 0 0 0 0 1 0 MatMatMult 10 1.0 3.6321e-03 1.0 4.16e+05 1.1 6.0e+01 7.4e+02 1.6e+02 0 2 4 5 8 0 2 4 5 8 223 MatMatMultSym 10 1.0 2.8400e-03 1.0 0.00e+00 0.0 5.0e+01 5.6e+02 1.4e+02 0 0 4 3 7 0 0 4 3 7 0 MatMatMultNum 10 1.0 7.6842e-04 1.0 4.16e+05 1.1 1.0e+01 1.7e+03 2.0e+01 0 2 1 2 1 0 2 1 2 1 1056 MatPtAP 10 1.0 7.5903e-03 1.0 1.89e+06 1.3 1.1e+02 1.4e+03 1.7e+02 0 7 8 16 8 0 7 8 16 8 447 MatPtAPSymbolic 10 1.0 5.4739e-03 1.0 0.00e+00 0.0 6.0e+01 1.6e+03 7.0e+01 0 0 4 10 3 0 0 4 10 3 0 MatPtAPNumeric 10 1.0 2.1083e-03 1.0 1.89e+06 1.3 5.0e+01 1.1e+03 1.0e+02 0 7 4 6 5 0 7 4 6 5 1609 MatTrnMatMult 5 1.0 7.2398e-03 1.0 5.69e+05 1.2 6.0e+01 2.5e+03 9.5e+01 0 2 4 16 5 0 2 4 16 5 146 MatTrnMatMultSym 5 1.0 4.4360e-03 1.0 0.00e+00 0.0 5.0e+01 1.1e+03 8.5e+01 0 0 4 6 4 0 0 4 6 4 0 MatTrnMatMultNum 5 1.0 2.7852e-03 1.0 5.69e+05 1.2 1.0e+01 9.7e+03 1.0e+01 0 2 1 10 0 0 2 1 10 0 379 MatGetLocalMat 40 1.0 5.4884e-04 1.1 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00 0 0 0 0 0 0 0 0 0 0 0 MatGetBrAoCol 30 1.0 4.6182e-04 1.0 0.00e+00 0.0 7.0e+01 1.8e+03 0.0e+00 0 0 5 13 0 0 0 5 13 0 0 VecDot 5 1.0 2.1219e-05 1.7 4.42e+03 1.0 0.0e+00 0.0e+00 5.0e+00 0 0 0 0 0 0 
0 0 0 0 415 VecMDot 200 1.0 6.1178e-04 1.6 5.42e+05 1.1 0.0e+00 0.0e+00 2.0e+02 0 2 0 0 10 0 2 0 0 10 1680 VecTDot 69 1.0 2.8133e-04 4.3 6.09e+04 1.0 0.0e+00 0.0e+00 6.9e+01 0 0 0 0 3 0 0 0 0 3 432 VecNorm 259 1.0 3.7003e-04 1.2 1.43e+05 1.1 0.0e+00 0.0e+00 2.6e+02 0 1 0 0 13 0 1 0 0 13 742 VecScale 220 1.0 5.5313e-05 1.2 5.43e+04 1.1 0.0e+00 0.0e+00 0.0e+00 0 0 0 0 0 0 0 0 0 0 1861 VecCopy 108 1.0 3.4094e-05 1.1 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00 0 0 0 0 0 0 0 0 0 0 0 VecSet 512 1.0 8.6308e-05 1.1 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00 0 0 0 0 0 0 0 0 0 0 0 VecAXPY 88 1.0 2.4319e-05 1.0 6.97e+04 1.0 0.0e+00 0.0e+00 0.0e+00 0 0 0 0 0 0 0 0 0 0 5702 VecAYPX 653 1.0 2.3270e-04 1.1 2.18e+05 1.1 0.0e+00 0.0e+00 0.0e+00 0 1 0 0 0 0 1 0 0 0 1789 VecAXPBYCZ 312 1.0 1.6689e-04 1.3 3.85e+05 1.1 0.0e+00 0.0e+00 0.0e+00 0 2 0 0 0 0 2 0 0 0 4375 VecMAXPY 220 1.0 2.4390e-04 1.1 6.42e+05 1.1 0.0e+00 0.0e+00 0.0e+00 0 3 0 0 0 0 3 0 0 0 4989 VecAssemblyBegin 115 1.0 9.1553e-04 1.0 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00 0 0 0 0 0 0 0 0 0 0 0 VecAssemblyEnd 115 1.0 1.0848e-04 1.1 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00 0 0 0 0 0 0 0 0 0 0 0 VecPointwiseMult 110 1.0 5.8174e-05 1.3 2.72e+04 1.1 0.0e+00 0.0e+00 0.0e+00 0 0 0 0 0 0 0 0 0 0 885 VecScatterBegin 983 1.0 3.4928e-04 1.1 0.00e+00 0.0 9.5e+02 5.4e+02 0.0e+00 0 0 69 54 0 0 0 69 54 0 0 VecScatterEnd 983 1.0 1.2131e-03 3.5 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00 0 0 0 0 0 0 0 0 0 0 0 VecSetRandom 10 1.0 7.6532e-05 1.1 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00 0 0 0 0 0 0 0 0 0 0 0 VecNormalize 220 1.0 4.5967e-04 1.2 1.63e+05 1.1 0.0e+00 0.0e+00 2.2e+02 0 1 0 0 11 0 1 0 0 11 672 BuildTwoSided 10 1.0 9.7036e-05 1.1 0.00e+00 0.0 5.0e+00 4.0e+00 0.0e+00 0 0 0 0 0 0 0 0 0 0 0 BuildTwoSidedF 110 1.0 6.7568e-04 1.1 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00 0 0 0 0 0 0 0 0 0 0 0 KSPGMRESOrthog 200 1.0 9.0504e-04 1.3 1.09e+06 1.1 0.0e+00 0.0e+00 2.0e+02 0 4 0 0 10 0 4 0 0 10 2273 KSPSetUp 45 1.0 4.7064e-04 1.1 0.00e+00 0.0 0.0e+00 0.0e+00 2.0e+01 0 0 0 0 1 0 0 0 0 1 0 KSPSolve 5 1.0 5.5117e-02 1.0 2.44e+07 1.1 1.4e+03 6.8e+02 2.0e+03 0100100100 98 0100100100 98 837 PCGAMGGraph_AGG 10 1.0 9.7408e-03 1.0 3.46e+04 1.1 5.0e+01 1.1e+02 2.6e+02 0 0 4 1 13 0 0 4 1 13 7 PCGAMGCoarse_AGG 10 1.0 8.4352e-03 1.0 5.69e+05 1.2 1.6e+02 1.2e+03 1.3e+02 0 2 12 21 6 0 2 12 21 6 125 PCGAMGProl_AGG 10 1.0 2.5477e-03 1.0 0.00e+00 0.0 1.2e+02 4.5e+02 3.1e+02 0 0 9 6 15 0 0 9 6 15 0 PCGAMGPOpt_AGG 10 1.0 7.8416e-03 1.0 2.62e+06 1.1 1.6e+02 6.2e+02 4.7e+02 0 11 12 11 23 0 11 12 11 23 634 GAMG: createProl 10 1.0 2.8838e-02 1.0 3.14e+06 1.1 5.0e+02 7.2e+02 1.2e+03 0 13 36 38 58 0 13 36 38 58 211 Graph 20 1.0 9.5172e-03 1.0 3.46e+04 1.1 5.0e+01 1.1e+02 2.6e+02 0 0 4 1 13 0 0 4 1 13 7 MIS/Agg 10 1.0 5.4216e-04 1.0 0.00e+00 0.0 5.5e+01 2.9e+02 1.5e+01 0 0 4 2 1 0 0 4 2 1 0 SA: col data 10 1.0 1.2393e-03 1.0 0.00e+00 0.0 7.0e+01 6.4e+02 1.7e+02 0 0 5 5 8 0 0 5 5 8 0 SA: frmProl0 10 1.0 1.0064e-03 1.0 0.00e+00 0.0 5.0e+01 1.8e+02 1.0e+02 0 0 4 1 5 0 0 4 1 5 0 SA: smooth 10 1.0 7.8394e-03 1.0 2.62e+06 1.1 1.6e+02 6.2e+02 4.7e+02 0 11 12 11 23 0 11 12 11 23 634 GAMG: partLevel 10 1.0 1.0525e-02 1.0 1.89e+06 1.3 1.6e+02 1.3e+03 4.4e+02 0 7 12 22 21 0 7 12 22 21 322 repartition 5 1.0 5.2214e-05 1.0 0.00e+00 0.0 0.0e+00 0.0e+00 3.0e+01 0 0 0 0 1 0 0 0 0 1 0 Invert-Sort 5 1.0 1.1015e-04 1.1 0.00e+00 0.0 0.0e+00 0.0e+00 2.0e+01 0 0 0 0 1 0 0 0 0 1 0 Move A 5 1.0 9.7775e-04 1.0 0.00e+00 0.0 2.5e+01 1.9e+03 9.0e+01 0 0 2 5 4 0 0 2 5 4 0 Move P 5 1.0 1.4322e-03 1.0 0.00e+00 0.0 1.0e+01 8.0e+01 9.0e+01 0 0 1 0 4 0 0 1 0 4 
0 PCSetUp 10 1.0 4.1227e-02 1.0 5.03e+06 1.1 6.6e+02 8.5e+02 1.7e+03 0 21 48 60 82 0 21 48 60 82 230 PCSetUpOnBlocks 39 1.0 2.8324e-04 2.0 2.24e+03 0.0 0.0e+00 0.0e+00 0.0e+00 0 0 0 0 0 0 0 0 0 0 8 PCApply 39 1.0 1.3067e-02 1.0 1.83e+07 1.1 6.5e+02 5.2e+02 2.1e+02 0 75 47 36 10 0 75 47 36 10 2643 SFSetGraph 10 1.0 2.5988e-05 1.0 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00 0 0 0 0 0 0 0 0 0 0 0 SFBcastBegin 35 1.0 2.0766e-04 1.0 0.00e+00 0.0 5.5e+01 2.9e+02 0.0e+00 0 0 4 2 0 0 0 4 2 0 0 SFBcastEnd 35 1.0 3.2187e-05 1.1 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00 0 0 0 0 0 0 0 0 0 0 0 ------------------------------------------------------------------------------------------------------------------------ Memory usage is given in bytes: Object Type Creations Destructions Memory Descendants' Mem. Reports information only for process 0. --- Event Stage 0: Main Stage Matrix 318 318 4176740 0. Matrix Coarsen 10 10 6360 0. Vector 714 714 2149632 0. Vector Scatter 71 71 81416 0. Index Set 217 217 177272 0. Krylov Solver 45 45 635400 0. Preconditioner 45 45 43520 0. PetscRandom 5 5 3230 0. Viewer 1 0 0 0. Star Forest Bipartite Graph 10 10 8640 0. ======================================================================================================================== Average time to get PetscTime(): 0. Average time for MPI_Barrier(): 1.90735e-07 Average time for zero size MPI_Send(): 5.00679e-06 #PETSc Option Table entries: -ksp_monitor -ksp_type cg -log_view -options_left -pc_gamg_agg_nsmooths 1 -pc_gamg_type agg -pc_type gamg #End of PETSc Option Table entries Compiled without FORTRAN kernels Compiled with full precision matrices (default) sizeof(short) 2 sizeof(int) 4 sizeof(long) 8 sizeof(void*) 8 sizeof(PetscScalar) 8 sizeof(PetscInt) 4 Configure options: --with-cc=icc --with-cxx=icpc --with-fc=ifort --download-parmetis --download-superlu_dist --download-openmpi --download-hypre --download-metis --download-mumps --download-scalapack --download-blacs --with-debugging=0 ----------------------------------------- Libraries compiled on Thu May 12 11:38:33 2016 on ucbvpn-208-160.vpn.berkeley.edu Machine characteristics: Darwin-13.4.0-x86_64-i386-64bit Using PETSc directory: /Users/sg/petsc-3.7.0/ Using PETSc arch: intel ----------------------------------------- Using C compiler: /Users/sg/petsc-3.7.0/intel/bin/mpicc -wd1572 -g -O3 ${COPTFLAGS} ${CFLAGS} Using Fortran compiler: /Users/sg/petsc-3.7.0/intel/bin/mpif90 -g -O3 ${FOPTFLAGS} ${FFLAGS} ----------------------------------------- Using include paths: -I/Users/sg/petsc-3.7.0/intel/include -I/Users/sg/petsc-3.7.0/include -I/Users/sg/petsc-3.7.0/include -I/Users/sg/petsc-3.7.0/intel/include -I/opt/X11/include ----------------------------------------- Using C linker: /Users/sg/petsc-3.7.0/intel/bin/mpicc Using Fortran linker: /Users/sg/petsc-3.7.0/intel/bin/mpif90 Using libraries: -Wl,-rpath,/Users/sg/petsc-3.7.0/intel/lib -L/Users/sg/petsc-3.7.0/intel/lib -lpetsc -Wl,-rpath,/Users/sg/petsc-3.7.0/intel/lib -L/Users/sg/petsc-3.7.0/intel/lib -lsuperlu_dist -lcmumps -ldmumps -lsmumps -lzmumps -lmumps_common -lpord -lparmetis -lmetis -lHYPRE -Wl,-rpath,/opt/intel/composer_xe_2013_sp1.3.166/compiler/lib -L/opt/intel/composer_xe_2013_sp1.3.166/compiler/lib -Wl,-rpath,/opt/intel/composer_xe_2013_sp1.3.166/ipp/lib -L/opt/intel/composer_xe_2013_sp1.3.166/ipp/lib -Wl,-rpath,/opt/intel/composer_xe_2013_sp1.3.166/mkl/lib -L/opt/intel/composer_xe_2013_sp1.3.166/mkl/lib -Wl,-rpath,/opt/intel/composer_xe_2013_sp1.3.166/tbb/lib -L/opt/intel/composer_xe_2013_sp1.3.166/tbb/lib -limf 
-lsvml -lirng -lipgo -ldecimal -lirc -Wl,-rpath,/Applications/Xcode.app/Contents/Developer/Toolchains/XcodeDefault.xctoolchain/usr/lib/clang/6.0/lib/darwin -L/Applications/Xcode.app/Contents/Developer/Toolchains/XcodeDefault.xctoolchain/usr/lib/clang/6.0/lib/darwin -lclang_rt.osx -limf -lsvml -lirng -lipgo -ldecimal -lirc -Wl,-rpath,/Applications/Xcode.app/Contents/Developer/Toolchains/XcodeDefault.xctoolchain/usr/bin/../lib/clang/6.0/lib/darwin -L/Applications/Xcode.app/Contents/Developer/Toolchains/XcodeDefault.xctoolchain/usr/bin/../lib/clang/6.0/lib/darwin -lclang_rt.osx -lscalapack -llapack -lblas -Wl,-rpath,/opt/X11/lib -L/opt/X11/lib -lX11 -lssl -lcrypto -lmpi_usempif08 -lmpi_usempi_ignore_tkr -lmpi_mpifh -lifport -lifcore -limf -lsvml -lipgo -lirc -lpthread -lclang_rt.osx -limf -lsvml -lirng -lipgo -ldecimal -lirc -lclang_rt.osx -limf -lsvml -lirng -lipgo -ldecimal -lirc -lclang_rt.osx -ldl -Wl,-rpath,/Users/sg/petsc-3.7.0/intel/lib -L/Users/sg/petsc-3.7.0/intel/lib -lmpi -Wl,-rpath,/Users/sg/petsc-3.7.0/intel/lib -L/Users/sg/petsc-3.7.0/intel/lib -Wl,-rpath,/opt/intel/composer_xe_2013_sp1.3.166/compiler/lib -L/opt/intel/composer_xe_2013_sp1.3.166/compiler/lib -Wl,-rpath,/opt/intel/composer_xe_2013_sp1.3.166/compiler/lib -L/opt/intel/composer_xe_2013_sp1.3.166/compiler/lib -Wl,-rpath,/opt/intel/composer_xe_2013_sp1.3.166/ipp/lib -L/opt/intel/composer_xe_2013_sp1.3.166/ipp/lib -Wl,-rpath,/opt/intel/composer_xe_2013_sp1.3.166/compiler/lib -L/opt/intel/composer_xe_2013_sp1.3.166/compiler/lib -Wl,-rpath,/opt/intel/composer_xe_2013_sp1.3.166/mkl/lib -L/opt/intel/composer_xe_2013_sp1.3.166/mkl/lib -Wl,-rpath,/opt/intel/composer_xe_2013_sp1.3.166/tbb/lib -L/opt/intel/composer_xe_2013_sp1.3.166/tbb/lib -Wl,-rpath,/opt/intel/composer_xe_2013_sp1.3.166/compiler/lib -L/opt/intel/composer_xe_2013_sp1.3.166/compiler/lib -Wl,-rpath,/opt/intel/composer_xe_2013_sp1.3.166/compiler/lib -L/opt/intel/composer_xe_2013_sp1.3.166/compiler/lib -limf -Wl,-rpath,/opt/intel/composer_xe_2013_sp1.3.166/compiler/lib -L/opt/intel/composer_xe_2013_sp1.3.166/compiler/lib -lsvml -Wl,-rpath,/opt/intel/composer_xe_2013_sp1.3.166/compiler/lib -L/opt/intel/composer_xe_2013_sp1.3.166/compiler/lib -lirng -Wl,-rpath,/opt/intel/composer_xe_2013_sp1.3.166/compiler/lib -L/opt/intel/composer_xe_2013_sp1.3.166/compiler/lib -lipgo -Wl,-rpath,/opt/intel/composer_xe_2013_sp1.3.166/compiler/lib -L/opt/intel/composer_xe_2013_sp1.3.166/compiler/lib -ldecimal -lc++ -lSystem -Wl,-rpath,/opt/intel/composer_xe_2013_sp1.3.166/compiler/lib -L/opt/intel/composer_xe_2013_sp1.3.166/compiler/lib -lirc -Wl,-rpath,/Applications/Xcode.app/Contents/Developer/Toolchains/XcodeDefault.xctoolchain/usr/bin/../lib/clang/6.0/lib/darwin -L/Applications/Xcode.app/Contents/Developer/Toolchains/XcodeDefault.xctoolchain/usr/bin/../lib/clang/6.0/lib/darwin -lclang_rt.osx -Wl,-rpath,/Users/sg/petsc-3.7.0/intel/lib -L/Users/sg/petsc-3.7.0/intel/lib -Wl,-rpath,/opt/intel/composer_xe_2013_sp1.3.166/compiler/lib -L/opt/intel/composer_xe_2013_sp1.3.166/compiler/lib -Wl,-rpath,/opt/intel/composer_xe_2013_sp1.3.166/compiler/lib -L/opt/intel/composer_xe_2013_sp1.3.166/compiler/lib -Wl,-rpath,/opt/intel/composer_xe_2013_sp1.3.166/ipp/lib -L/opt/intel/composer_xe_2013_sp1.3.166/ipp/lib -Wl,-rpath,/opt/intel/composer_xe_2013_sp1.3.166/compiler/lib -L/opt/intel/composer_xe_2013_sp1.3.166/compiler/lib -Wl,-rpath,/opt/intel/composer_xe_2013_sp1.3.166/mkl/lib -L/opt/intel/composer_xe_2013_sp1.3.166/mkl/lib -Wl,-rpath,/opt/intel/composer_xe_2013_sp1.3.166/tbb/lib 
-L/opt/intel/composer_xe_2013_sp1.3.166/tbb/lib -Wl,-rpath,/opt/intel/composer_xe_2013_sp1.3.166/compiler/lib -L/opt/intel/composer_xe_2013_sp1.3.166/compiler/lib -ldl ----------------------------------------- #PETSc Option Table entries: -ksp_monitor -ksp_type cg -log_view -options_left -pc_gamg_agg_nsmooths 1 -pc_gamg_type agg -pc_type gamg #End of PETSc Option Table entries There are no unused options. From bsmith at mcs.anl.gov Thu May 19 13:42:54 2016 From: bsmith at mcs.anl.gov (Barry Smith) Date: Thu, 19 May 2016 13:42:54 -0500 Subject: [petsc-users] GAMG Indefinite In-Reply-To: <58770cd6-9279-16c2-9a2f-fd459fa16cb5@berkeley.edu> References: <58770cd6-9279-16c2-9a2f-fd459fa16cb5@berkeley.edu> Message-ID: <064EC0CA-6E92-4DD3-9F0A-1012A1CCABFE@mcs.anl.gov> We see this occasionally, there is nothing in the definition of GAMG that guarantees a positive definite preconditioner even if the operator was positive definite so we don't think this is a bug in the code. We've found using a slightly stronger smoother, like one more smoothing step seems to remove the problem. Barry > On May 19, 2016, at 1:07 PM, Sanjay Govindjee wrote: > > I am trying to solve a very ordinary nonlinear elasticity problem > using -ksp_type cg -pc_type gamg in PETSc 3.7.0, which worked fine > in PETSc 3.5.3. > > The problem I am seeing is on my first Newton iteration, the Ax=b > solve returns with and Indefinite Preconditioner error (KSPGetConvergedReason == -8): > (log_view.txt output also attached) > > 0 KSP Residual norm 8.411630828687e-02 > 1 KSP Residual norm 2.852209578900e-02 > NO CONVERGENCE REASON: Indefinite Preconditioner > NO CONVERGENCE REASON: Indefinite Preconditioner > > On the next and subsequent Newton iterations, I see perfectly normal > behavior and the problem converges quadratically. The results look fine. > > I tried the same problem with -pc_type jacobi as well as super-lu, and mumps > and they all work without complaint. > > My run line for GAMG is: > -ksp_type cg -ksp_monitor -log_view -pc_type gamg -pc_gamg_type agg -pc_gamg_agg_nsmooths 1 -options_left > > The code flow looks like: > > ! If no matrix allocation yet > if(Kmat.eq.0) then > call MatCreate(PETSC_COMM_WORLD,Kmat,ierr) > call MatSetSizes(Kmat,numpeq,numpeq,PETSC_DETERMINE,PETSC_DETERMINE,ierr) > call MatSetBlockSize(Kmat,nsbk,ierr) > call MatSetFromOptions(Kmat, ierr) > call MatSetType(Kmat,MATAIJ,ierr) > call MatMPIAIJSetPreallocation(Kmat,PETSC_NULL_INTEGER,mr(np(246)),PETSC_NULL_INTEGER,mr(np(247)),ierr) > call MatSeqAIJSetPreallocation(Kmat,PETSC_NULL_INTEGER,mr(np(246)),ierr) > endif > > call MatZeroEntries(Kmat,ierr) > > ! Code to set values in matrix > > call MatAssemblyBegin(Kmat, MAT_FINAL_ASSEMBLY, ierr) > call MatAssemblyEnd(Kmat, MAT_FINAL_ASSEMBLY, ierr) > call MatSetOption(Kmat,MAT_NEW_NONZERO_LOCATIONS,PETSC_TRUE,ierr) > > ! If no rhs allocation yet > if(rhs.eq.0) then > call VecCreate (PETSC_COMM_WORLD, rhs, ierr) > call VecSetSizes (rhs, numpeq, PETSC_DECIDE, ierr) > call VecSetFromOptions(rhs, ierr) > endif > > ! Code to set values in RHS > > call VecAssemblyBegin(rhs, ierr) > call VecAssemblyEnd(rhs, ierr) > > if(kspsol_exists) then > call KSPDestroy(kspsol,ierr) > endif > > call KSPCreate(PETSC_COMM_WORLD, kspsol ,ierr) > call KSPSetOperators(kspsol, Kmat, Kmat, ierr) > call KSPSetFromOptions(kspsol,ierr) > call KSPGetPC(kspsol, pc , ierr) > > call PCSetCoordinates(pc,ndm,numpn,hr(np(43)),ierr) > > call KSPSolve(kspsol, rhs, sol, ierr) > call KSPGetConvergedReason(kspsol,reason,ierr) > > ! 
update solution, go back to the top > > reason is coming back as -8 on my first Ax=b solve and 2 or 3 after that > (with gamg). With the other solvers it is coming back as 2 or 3 for > iterative options and 4 if I use one of the direct solvers. > > Any ideas on what is causing the Indefinite PC on the first iteration with GAMG? > > Thanks in advance, > -sanjay > > -- > ----------------------------------------------- > Sanjay Govindjee, PhD, PE > Professor of Civil Engineering > > 779 Davis Hall > University of California > Berkeley, CA 94720-1710 > > Voice: +1 510 642 6060 > FAX: +1 510 643 5264 > > s_g at berkeley.edu > http://www.ce.berkeley.edu/~sanjay > > ----------------------------------------------- > > Books: > > Engineering Mechanics of Deformable > Solids: A Presentation with Exercises > > http://www.oup.com/us/catalog/general/subject/Physics/MaterialsScience/?view=usa&ci=9780199651641 > http://ukcatalogue.oup.com/product/9780199651641.do > http://amzn.com/0199651647 > > > Engineering Mechanics 3 (Dynamics) 2nd Edition > > http://www.springer.com/978-3-642-53711-0 > http://amzn.com/3642537111 > > > Engineering Mechanics 3, Supplementary Problems: Dynamics > > http://www.amzn.com/B00SOXN8JU > > > ----------------------------------------------- > > From jeffsteward at gmail.com Thu May 19 15:28:01 2016 From: jeffsteward at gmail.com (Jeff Steward) Date: Thu, 19 May 2016 13:28:01 -0700 Subject: [petsc-users] SLEPc spectrum slicing Message-ID: I have some questions regarding spectrum slicing in SLEPc, especially in the new version 3.7, as I see a line in the Change Notes that I don't quite understand ("in spectrum slicing in multi-communicator mode now it is possible to update the problem matrices directly on the sub-communicators".) 1) In light of this statement, how should I divide my Mat to best work with eps_krylovschur_partitions? Let's say I have 384 processors and eps_krylovschur_partitions=48, so there will be 8 processors in each group. Should I distribute the matrix over all 384 processors (so let's say this gives 32 rows per processor) and have the entire communicator call EPS solve, or should I (can I?) distribute the matrix over each of the 8 groups (giving say 1536 rows per processor) and have the 48 subcommunicators call EPSSolve? Am I correct in thinking that distributing over all 384 processors requires collecting and duplicating the matrix to the 48 different groups? 2) If I'm understanding it correctly, the approach for Krylov-Schur spectrum slicing described in the manual for SLEPc seems like a wasteful default method. From what I gather, regions are divided by equal distance, and different regions are bound to end up with different (and potentially vastly different) numbers of eigenpairs. I understand the user can provide their own region intervals, but wouldn't it be better for SLEPc to first compute the matrix inertias at some given first guess regions, interpolate, then fix the spectra endpoints so they contain approximately the same number in each region? An option for logarithmically spaced points rather than linearly spaced points would be helpful as well, as for the problem I am looking at the spectrum decays in this way (few large eigenvalues with an exponential decay down to many smaller eigenvalues). I require eigenpairs with eigenvalues that vary by several orders of magnitude (1e-3 to 1e3), so the linear equidistant strategy is hopeless. 
3) The example given for spectrum slicing in the user manual is mpiexec -n 20 ./ex25 -eps_interval 0.4,0.8 -eps_krylovschur_partitions 4 -st_type sinvert -st_ksp_type preonly -st_pc_type cholesky -st_pc_factor_mat_solver_package mumps -mat_mumps_icntl_13 1 which requires a direct solver. If I can compute the matrix inertias myself and come up with spectral regions and subcommunicators as described above, is there a way to efficiently use SLEPc with an iterative solver? How about with a matrix shell? (I'm getting greedy now ^_^). I would really appreciate any help on these questions. Thank you for your time. Best wishes, Jeff -------------- next part -------------- An HTML attachment was scrubbed... URL: From bsmith at mcs.anl.gov Thu May 19 16:00:45 2016 From: bsmith at mcs.anl.gov (Barry Smith) Date: Thu, 19 May 2016 16:00:45 -0500 Subject: [petsc-users] Question about KSP In-Reply-To: <573D6FBD.30303@medunigraz.at> References: <573D6FBD.30303@medunigraz.at> Message-ID: <14A5832C-10B8-46AD-A4A1-D8761204A236@mcs.anl.gov> Just reuse the old preconditioner. If you keep on making rank one changes then once the preconditioner starts performing poorly "start all over again" by building a new preconditioner from scratch. Only if this doesn't perform ok for some reason would I try anything more complicated. Barry > On May 19, 2016, at 2:48 AM, Elias Karabelas wrote: > > Dear all, > > I have a question about preconditioned solvers. So, I have a Sparsematrix, say A, and now for some reason I would like to add some rank-one term u v^T to that matrix. > As far as I know about Petsc, I can define the action of this matrix with MatShell. But is it possible to adapt the preconditioner (like a AMG) of my KSP to handle that kind of MatShell? > > Kind Regards > Elias > > -- > Dr Elias Karabelas > > Medical University of Graz > Institute of Biophysics > Harrachgasse 21/IV > 8010 Graz, Austria > > Phone: +43 316 380 7759 > Email: elias.karabelas at medunigraz.at > Web : http://forschung.medunigraz.at/fodok/staff?name=EliasKarabelas > From jroman at dsic.upv.es Thu May 19 16:19:20 2016 From: jroman at dsic.upv.es (Jose E. Roman) Date: Thu, 19 May 2016 23:19:20 +0200 Subject: [petsc-users] SLEPc spectrum slicing In-Reply-To: References: Message-ID: <127C47A7-2FF0-4E6A-AE9B-8F8CAB676B5A@dsic.upv.es> > El 19 may 2016, a las 22:28, Jeff Steward escribi?: > > I have some questions regarding spectrum slicing in SLEPc, especially in the new version 3.7, as I see a line in the Change Notes that I don't quite understand ("in spectrum slicing in multi-communicator mode now it is possible to update the problem matrices directly on the sub-communicators".) This is intended for applications that require solving a sequence of eigenproblems, where the problem matrices in a given problem are a modification of the previous ones. Instead of updating the matrices in the parent communicator it is now possible to do the update in the partitioned communicators. This avoids lots of communication required for moving data to/from the parent communicator. This is certainly an "advanced feature". > > 1) In light of this statement, how should I divide my Mat to best work with eps_krylovschur_partitions? Let's say I have 384 processors and eps_krylovschur_partitions=48, so there will be 8 processors in each group. Should I distribute the matrix over all 384 processors (so let's say this gives 32 rows per processor) and have the entire communicator call EPS solve, or should I (can I?) 
distribute the matrix over each of the 8 groups (giving say 1536 rows per processor) and have the 48 subcommunicators call EPSSolve? Am I correct in thinking that distributing over all 384 processors requires collecting and duplicating the matrix to the 48 different groups? The idea is that you create the matrix in the initial communicator (typically PETSC_COMM_WORLD) and then EPS will automatically create the partitioned sub-communicators and replicate the matrices via MatCreateRedundantMatrix(). You have to choose the number of partitions and the number of processes in the original communicator that are best suited for you application. If your matrix is relatively small but you need to compute many eigenvalues, then you can consider setting as many partitions as processes in the original communicator (hence the size of each partition is 1). But it will be necessary to communicate a lot for setting up data in the sub-communicators. The main point of sub-communicators is to workaround the limited scalability of parallel direct linear solvers (e.g. MUMPS) when one wants to use may processes. > 2) If I'm understanding it correctly, the approach for Krylov-Schur spectrum slicing described in the manual for SLEPc seems like a wasteful default method. From what I gather, regions are divided by equal distance, and different regions are bound to end up with different (and potentially vastly different) numbers of eigenpairs. I understand the user can provide their own region intervals, but wouldn't it be better for SLEPc to first compute the matrix inertias at some given first guess regions, interpolate, then fix the spectra endpoints so they contain approximately the same number in each region? An option for logarithmically spaced points rather than linearly spaced points would be helpful as well, as for the problem I am looking at the spectrum decays in this way (few large eigenvalues with an exponential decay down to many smaller eigenvalues). I require eigenpairs with eigenvalues that vary by several orders of magnitude (1e-3 to 1e3), so the linear equidistant strategy is hopeless. If you have an a priori knowledge of eigenvalue distribution then you can use EPSKrylovSchurSetSubintervals() to give hints. We do not compute inertia to get a rough estimation of the distribution because in the general case computing inertia is very costly (it requires a factorization in our implementation). The main intention of EPSKrylovSchurSetSubintervals() is also the case where a sequence of eigenproblems must be solved, so the solution of one problem provides an estimation of eigenvalue distribution for the next problem. Anyway, giving subintervals that roughly contain the same number of eigenvalues does not guarantee that the workload will be balanced, since convergence may be quite different from one subinterval to the other. > 3) The example given for spectrum slicing in the user manual is > > mpiexec -n 20 ./ex25 -eps_interval 0.4,0.8 -eps_krylovschur_partitions 4 > -st_type sinvert -st_ksp_type preonly -st_pc_type cholesky > -st_pc_factor_mat_solver_package mumps -mat_mumps_icntl_13 1 > > which requires a direct solver. If I can compute the matrix inertias myself and come up with spectral regions and subcommunicators as described above, is there a way to efficiently use SLEPc with an iterative solver? How about with a matrix shell? (I'm getting greedy now ^_^). I don't know how you can possibly compute the "exact" inertia cheaply. 
In that case yes, it would be worth using a shell matrix but the solver is probably not prepared for this case. If you want us to have a look at this possibility, send more details to slepc-maint. Jose > > I would really appreciate any help on these questions. Thank you for your time. > > Best wishes, > > Jeff From jeffsteward at gmail.com Thu May 19 16:44:41 2016 From: jeffsteward at gmail.com (Jeff Steward) Date: Thu, 19 May 2016 14:44:41 -0700 Subject: [petsc-users] SLEPc spectrum slicing In-Reply-To: <127C47A7-2FF0-4E6A-AE9B-8F8CAB676B5A@dsic.upv.es> References: <127C47A7-2FF0-4E6A-AE9B-8F8CAB676B5A@dsic.upv.es> Message-ID: Thank you very much for your quick response Jose. I really appreciate it. Your point about load balancing even with the same number of pairs in the interval is well taken. For now I can provide hints with EPSKrylovSchurSetSubintervals and that should work fine. However, my problem is reaching the upper limits of memory with plans to increase the size, so I'd like to work on an approach to spectrum slicing with iterative/shell methods for the long run. The idea for spectrum slicing as I discussed in my previous message is based on the paper of Aktulga et al (2014): https://math.berkeley.edu/~linlin/publications/ParallelEig.pdf They address the issues involved with computing the (very expensive) matrix inertia. These experiments use PARPACK and FEAST to do a parallel spectrum slicing method. Of course PARPACK and FEAST have their own issues, and SLEPc is much easier to use and more flexible, extensible, etc. I think the MSIL approach they outline might be beneficial for my relatively large eigenproblem where I need many eigenpairs. Computing the matrix inertia for such an approach is likely to be the bottleneck at least in terms of memory. How far do you think the incomplete Cholesky with MUMPS would scale? I will send a separate e-mail to slepc-maint regarding my problem. Thank you again! Best wishes, Jeff On Thu, May 19, 2016 at 2:19 PM, Jose E. Roman wrote: > > > El 19 may 2016, a las 22:28, Jeff Steward > escribi?: > > > > I have some questions regarding spectrum slicing in SLEPc, especially in > the new version 3.7, as I see a line in the Change Notes that I don't quite > understand ("in spectrum slicing in multi-communicator mode now it is > possible to update the problem matrices directly on the sub-communicators".) > > This is intended for applications that require solving a sequence of > eigenproblems, where the problem matrices in a given problem are a > modification of the previous ones. Instead of updating the matrices in the > parent communicator it is now possible to do the update in the partitioned > communicators. This avoids lots of communication required for moving data > to/from the parent communicator. This is certainly an "advanced feature". > > > > > 1) In light of this statement, how should I divide my Mat to best work > with eps_krylovschur_partitions? Let's say I have 384 processors and > eps_krylovschur_partitions=48, so there will be 8 processors in each group. > Should I distribute the matrix over all 384 processors (so let's say this > gives 32 rows per processor) and have the entire communicator call EPS > solve, or should I (can I?) distribute the matrix over each of the 8 groups > (giving say 1536 rows per processor) and have the 48 subcommunicators call > EPSSolve? Am I correct in thinking that distributing over all 384 > processors requires collecting and duplicating the matrix to the 48 > different groups? 
> > The idea is that you create the matrix in the initial communicator > (typically PETSC_COMM_WORLD) and then EPS will automatically create the > partitioned sub-communicators and replicate the matrices via > MatCreateRedundantMatrix(). You have to choose the number of partitions and > the number of processes in the original communicator that are best suited > for you application. If your matrix is relatively small but you need to > compute many eigenvalues, then you can consider setting as many partitions > as processes in the original communicator (hence the size of each partition > is 1). But it will be necessary to communicate a lot for setting up data in > the sub-communicators. The main point of sub-communicators is to workaround > the limited scalability of parallel direct linear solvers (e.g. MUMPS) when > one wants to use may processes. > > > 2) If I'm understanding it correctly, the approach for Krylov-Schur > spectrum slicing described in the manual for SLEPc seems like a wasteful > default method. From what I gather, regions are divided by equal distance, > and different regions are bound to end up with different (and potentially > vastly different) numbers of eigenpairs. I understand the user can provide > their own region intervals, but wouldn't it be better for SLEPc to first > compute the matrix inertias at some given first guess regions, interpolate, > then fix the spectra endpoints so they contain approximately the same > number in each region? An option for logarithmically spaced points rather > than linearly spaced points would be helpful as well, as for the problem I > am looking at the spectrum decays in this way (few large eigenvalues with > an exponential decay down to many smaller eigenvalues). I require > eigenpairs with eigenvalues that vary by several orders of magnitude (1e-3 > to 1e3), so the linear equidistant strategy is hopeless. > > If you have an a priori knowledge of eigenvalue distribution then you can > use EPSKrylovSchurSetSubintervals() to give hints. We do not compute > inertia to get a rough estimation of the distribution because in the > general case computing inertia is very costly (it requires a factorization > in our implementation). The main intention of > EPSKrylovSchurSetSubintervals() is also the case where a sequence of > eigenproblems must be solved, so the solution of one problem provides an > estimation of eigenvalue distribution for the next problem. > > Anyway, giving subintervals that roughly contain the same number of > eigenvalues does not guarantee that the workload will be balanced, since > convergence may be quite different from one subinterval to the other. > > > > 3) The example given for spectrum slicing in the user manual is > > > > mpiexec -n 20 ./ex25 -eps_interval 0.4,0.8 -eps_krylovschur_partitions 4 > > -st_type sinvert -st_ksp_type preonly -st_pc_type > cholesky > > -st_pc_factor_mat_solver_package mumps > -mat_mumps_icntl_13 1 > > > > which requires a direct solver. If I can compute the matrix inertias > myself and come up with spectral regions and subcommunicators as described > above, is there a way to efficiently use SLEPc with an iterative solver? > How about with a matrix shell? (I'm getting greedy now ^_^). > > I don't know how you can possibly compute the "exact" inertia cheaply. In > that case yes, it would be worth using a shell matrix but the solver is > probably not prepared for this case. If you want us to have a look at this > possibility, send more details to slepc-maint. 
> > Jose > > > > > I would really appreciate any help on these questions. Thank you for > your time. > > > > Best wishes, > > > > Jeff > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From jeffsteward at gmail.com Thu May 19 18:24:14 2016 From: jeffsteward at gmail.com (Jeff Steward) Date: Thu, 19 May 2016 16:24:14 -0700 Subject: [petsc-users] SLEPc spectrum slicing In-Reply-To: References: <127C47A7-2FF0-4E6A-AE9B-8F8CAB676B5A@dsic.upv.es> Message-ID: Hi Jose, Follow up question: is it possible to call EPSKrylovSchurSetSubintervals from fortran? I don't see it in src/eps/impls/krylov/krylovschur/ftn-auto/krylovschurf.c. Best wishes, Jeff On Thu, May 19, 2016 at 2:44 PM, Jeff Steward wrote: > Thank you very much for your quick response Jose. I really appreciate it. > Your point about load balancing even with the same number of pairs in the > interval is well taken. > > For now I can provide hints with EPSKrylovSchurSetSubintervals and that > should work fine. However, my problem is reaching the upper limits of > memory with plans to increase the size, so I'd like to work on an approach > to spectrum slicing with iterative/shell methods for the long run. > > The idea for spectrum slicing as I discussed in my previous message is > based on the paper of Aktulga et al (2014): > > https://math.berkeley.edu/~linlin/publications/ParallelEig.pdf > > They address the issues involved with computing the (very expensive) > matrix inertia. These experiments use PARPACK and FEAST to do a parallel > spectrum slicing method. Of course PARPACK and FEAST have their own issues, > and SLEPc is much easier to use and more flexible, extensible, etc. I think > the MSIL approach they outline might be beneficial for my relatively large > eigenproblem where I need many eigenpairs. > > Computing the matrix inertia for such an approach is likely to be the > bottleneck at least in terms of memory. How far do you think the incomplete > Cholesky with MUMPS would scale? > > I will send a separate e-mail to slepc-maint regarding my problem. > > Thank you again! > > Best wishes, > > Jeff > > On Thu, May 19, 2016 at 2:19 PM, Jose E. Roman wrote: > >> >> > El 19 may 2016, a las 22:28, Jeff Steward >> escribi?: >> > >> > I have some questions regarding spectrum slicing in SLEPc, especially >> in the new version 3.7, as I see a line in the Change Notes that I don't >> quite understand ("in spectrum slicing in multi-communicator mode now it is >> possible to update the problem matrices directly on the sub-communicators".) >> >> This is intended for applications that require solving a sequence of >> eigenproblems, where the problem matrices in a given problem are a >> modification of the previous ones. Instead of updating the matrices in the >> parent communicator it is now possible to do the update in the partitioned >> communicators. This avoids lots of communication required for moving data >> to/from the parent communicator. This is certainly an "advanced feature". >> >> > >> > 1) In light of this statement, how should I divide my Mat to best work >> with eps_krylovschur_partitions? Let's say I have 384 processors and >> eps_krylovschur_partitions=48, so there will be 8 processors in each group. >> Should I distribute the matrix over all 384 processors (so let's say this >> gives 32 rows per processor) and have the entire communicator call EPS >> solve, or should I (can I?) 
distribute the matrix over each of the 8 groups >> (giving say 1536 rows per processor) and have the 48 subcommunicators call >> EPSSolve? Am I correct in thinking that distributing over all 384 >> processors requires collecting and duplicating the matrix to the 48 >> different groups? >> >> The idea is that you create the matrix in the initial communicator >> (typically PETSC_COMM_WORLD) and then EPS will automatically create the >> partitioned sub-communicators and replicate the matrices via >> MatCreateRedundantMatrix(). You have to choose the number of partitions and >> the number of processes in the original communicator that are best suited >> for you application. If your matrix is relatively small but you need to >> compute many eigenvalues, then you can consider setting as many partitions >> as processes in the original communicator (hence the size of each partition >> is 1). But it will be necessary to communicate a lot for setting up data in >> the sub-communicators. The main point of sub-communicators is to workaround >> the limited scalability of parallel direct linear solvers (e.g. MUMPS) when >> one wants to use may processes. >> >> > 2) If I'm understanding it correctly, the approach for Krylov-Schur >> spectrum slicing described in the manual for SLEPc seems like a wasteful >> default method. From what I gather, regions are divided by equal distance, >> and different regions are bound to end up with different (and potentially >> vastly different) numbers of eigenpairs. I understand the user can provide >> their own region intervals, but wouldn't it be better for SLEPc to first >> compute the matrix inertias at some given first guess regions, interpolate, >> then fix the spectra endpoints so they contain approximately the same >> number in each region? An option for logarithmically spaced points rather >> than linearly spaced points would be helpful as well, as for the problem I >> am looking at the spectrum decays in this way (few large eigenvalues with >> an exponential decay down to many smaller eigenvalues). I require >> eigenpairs with eigenvalues that vary by several orders of magnitude (1e-3 >> to 1e3), so the linear equidistant strategy is hopeless. >> >> If you have an a priori knowledge of eigenvalue distribution then you can >> use EPSKrylovSchurSetSubintervals() to give hints. We do not compute >> inertia to get a rough estimation of the distribution because in the >> general case computing inertia is very costly (it requires a factorization >> in our implementation). The main intention of >> EPSKrylovSchurSetSubintervals() is also the case where a sequence of >> eigenproblems must be solved, so the solution of one problem provides an >> estimation of eigenvalue distribution for the next problem. >> >> Anyway, giving subintervals that roughly contain the same number of >> eigenvalues does not guarantee that the workload will be balanced, since >> convergence may be quite different from one subinterval to the other. >> >> >> > 3) The example given for spectrum slicing in the user manual is >> > >> > mpiexec -n 20 ./ex25 -eps_interval 0.4,0.8 -eps_krylovschur_partitions >> 4 >> > -st_type sinvert -st_ksp_type preonly -st_pc_type >> cholesky >> > -st_pc_factor_mat_solver_package mumps >> -mat_mumps_icntl_13 1 >> > >> > which requires a direct solver. If I can compute the matrix inertias >> myself and come up with spectral regions and subcommunicators as described >> above, is there a way to efficiently use SLEPc with an iterative solver? 
>> How about with a matrix shell? (I'm getting greedy now ^_^). >> >> I don't know how you can possibly compute the "exact" inertia cheaply. In >> that case yes, it would be worth using a shell matrix but the solver is >> probably not prepared for this case. If you want us to have a look at this >> possibility, send more details to slepc-maint. >> >> Jose >> >> > >> > I would really appreciate any help on these questions. Thank you for >> your time. >> > >> > Best wishes, >> > >> > Jeff >> >> > -------------- next part -------------- An HTML attachment was scrubbed... URL: From jroman at dsic.upv.es Fri May 20 08:48:47 2016 From: jroman at dsic.upv.es (Jose E. Roman) Date: Fri, 20 May 2016 15:48:47 +0200 Subject: [petsc-users] SLEPc spectrum slicing In-Reply-To: References: <127C47A7-2FF0-4E6A-AE9B-8F8CAB676B5A@dsic.upv.es> Message-ID: > El 20 may 2016, a las 1:24, Jeff Steward escribi?: > > Hi Jose, > > Follow up question: is it possible to call EPSKrylovSchurSetSubintervals from fortran? I don't see it in src/eps/impls/krylov/krylovschur/ftn-auto/krylovschurf.c. > > Best wishes, > > Jeff Added in maint: https://bitbucket.org/slepc/slepc/commits/e23eb2b From juhaj at iki.fi Fri May 20 09:30:07 2016 From: juhaj at iki.fi (Juha Jaykka) Date: Fri, 20 May 2016 15:30:07 +0100 Subject: [petsc-users] snes failures In-Reply-To: References: <2345493.9IZagzfzqj@vega> <1495692.HZxQxc1InI@vega> Message-ID: <50497974.sFAjIVxnCd@vega> > > It does not. And I should have mentioned earlier, that I tried -snes_mf, - > > snes_mf_operator, -snes_fd and -snes_fd_color already and none of those > > converges. Your suggested options result in > > For solver questions, I always need to see the result of > > -snes_view -snes_monitor -ksp_converged_reason -ksp_monitor_true_residual > -snes_converged_reason > > Turn on LU for the linear solver so it plays no role. I'm not sure what you mean by the last sentence, but I ran with -pc_type lu and the other options you suggested. Output is in http://paste.debian.net/688191/ I don't see anything wrong in it except it does not converge. :) And it settles in a weird oscillatory solution (looks almost like a sin()) whereas the real solution is 4*arctan(exp(x)). Thanks for your help. Cheers, Juha From knepley at gmail.com Fri May 20 09:47:55 2016 From: knepley at gmail.com (Matthew Knepley) Date: Fri, 20 May 2016 09:47:55 -0500 Subject: [petsc-users] snes failures In-Reply-To: <50497974.sFAjIVxnCd@vega> References: <2345493.9IZagzfzqj@vega> <1495692.HZxQxc1InI@vega> <50497974.sFAjIVxnCd@vega> Message-ID: On Fri, May 20, 2016 at 9:30 AM, Juha Jaykka wrote: > > > It does not. And I should have mentioned earlier, that I tried > -snes_mf, - > > > snes_mf_operator, -snes_fd and -snes_fd_color already and none of those > > > converges. Your suggested options result in > > > > For solver questions, I always need to see the result of > > > > -snes_view -snes_monitor -ksp_converged_reason > -ksp_monitor_true_residual > > -snes_converged_reason > > > > Turn on LU for the linear solver so it plays no role. > > I'm not sure what you mean by the last sentence, but I ran with -pc_type lu > and the other options you suggested. Output is in > http://paste.debian.net/688191/ > > I don't see anything wrong in it except it does not converge. :) And it > settles in a weird oscillatory solution (looks almost like a sin()) whereas > the real solution is 4*arctan(exp(x)). 
> Okay, so the linear solves are accurate and we think the Jacobian is correct because you ran with -snes_test, or ran using -snes_fd and it made no difference. Nonlinear Gauss-Siedel (coordinate descent) converges but slowly, and Newton does not. This usually indicates a bad guess for Newton. I would now advocate trying at least two things: 1) Grid sequencing: This is easy if you use a DMDA. You just use -snes_grid_sequence and its automatic. Since you report that smaller grids converge, this is usually enough. 2) NPC: If the above fails, I would try preconditioning Newton with GS. You can do this just using multiplicative composition, or left preconditioning. If 1 works, then its likely that FAS would work even better. Thanks, Matt > Thanks for your help. > > Cheers, > Juha > -- What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. -- Norbert Wiener -------------- next part -------------- An HTML attachment was scrubbed... URL: From alecosimo at gmail.com Fri May 20 09:57:24 2016 From: alecosimo at gmail.com (Alejandro Cosimo) Date: Fri, 20 May 2016 16:57:24 +0200 Subject: [petsc-users] Calling EPSSolve() multiple times Message-ID: Hi, I would like to know if I'm using EPSSolve() correctly in the following situation. I must solve a series of eigenvalue problems where the size of the matrix and the EPS options remain the same, but the entries of the matrix change for each problem. In pseudo-code, something like this: Mat A; /* create and set matrix */ EPSCreate(comm,&eps); /* here set EPS options */ for (i=0;i From knepley at gmail.com Fri May 20 10:03:47 2016 From: knepley at gmail.com (Matthew Knepley) Date: Fri, 20 May 2016 10:03:47 -0500 Subject: [petsc-users] Calling EPSSolve() multiple times In-Reply-To: References: Message-ID: On Fri, May 20, 2016 at 9:57 AM, Alejandro Cosimo wrote: > Hi, > > I would like to know if I'm using EPSSolve() correctly in the following > situation. I must solve a series of eigenvalue problems where the size of > the matrix and the EPS options remain the same, but the entries of the > matrix change for each problem. In pseudo-code, something like this: > > Mat A; > /* create and set matrix */ > EPSCreate(comm,&eps); > /* here set EPS options */ > for (i=0;i /* set entries of matrix A which vary for each index i of the for-loop */ > EPSSetOperators(eps,A,NULL); > EPSSolve(eps); > } > > My questions: > 1* Is it correct to call at each iteration of the loop to > EPSSetOperators() before proceeding to solve the eigenvalue problem? > Yes > 2* Would be wrong to call EPSSetOperators() before entering the loop and > to call to EPSReset() at every iteration instead of calling to > EPSSetOperators()? > Yes, reset frees all memory and destroys the implementation object > 3* A little different question, but related. Suppose a similar situation > where you have a KSP solver instead of an EPS solver. In this context, do > the answers to the previous questions apply similarly? That is, is it > correct to call at every iteration to KSPSetOperators() before calling to > KSPSolve() or KSPSetUp()? > Yes. Matt > Thanks, > Alejandro > -- What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. -- Norbert Wiener -------------- next part -------------- An HTML attachment was scrubbed... 
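
To make the pattern from this exchange concrete, here is a minimal C sketch, assuming a hypothetical fill_matrix(A,i) routine that sets (and assembles) the entries of A for the i-th problem; the matrix sizes and the number of problems are illustrative. The EPS object and its options are set up once, and only EPSSetOperators() and EPSSolve() are repeated inside the loop.

  Mat            A;
  EPS            eps;
  PetscInt       i,n = 100,nproblems = 10,nconv;   /* illustrative sizes */
  PetscErrorCode ierr;

  ierr = MatCreate(PETSC_COMM_WORLD,&A);CHKERRQ(ierr);
  ierr = MatSetSizes(A,PETSC_DECIDE,PETSC_DECIDE,n,n);CHKERRQ(ierr);
  ierr = MatSetFromOptions(A);CHKERRQ(ierr);
  ierr = MatSetUp(A);CHKERRQ(ierr);

  ierr = EPSCreate(PETSC_COMM_WORLD,&eps);CHKERRQ(ierr);
  ierr = EPSSetProblemType(eps,EPS_HEP);CHKERRQ(ierr);   /* adjust to the actual problem type */
  ierr = EPSSetFromOptions(eps);CHKERRQ(ierr);           /* EPS options fixed once */

  for (i=0;i<nproblems;i++) {
    ierr = fill_matrix(A,i);CHKERRQ(ierr);               /* hypothetical: set values, then MatAssemblyBegin/End */
    ierr = EPSSetOperators(eps,A,NULL);CHKERRQ(ierr);    /* re-set operators at every iteration */
    ierr = EPSSolve(eps);CHKERRQ(ierr);
    ierr = EPSGetConverged(eps,&nconv);CHKERRQ(ierr);    /* then EPSGetEigenpair() as needed */
  }

  ierr = EPSDestroy(&eps);CHKERRQ(ierr);
  ierr = MatDestroy(&A);CHKERRQ(ierr);

As confirmed above, the same pattern carries over to a KSP: call KSPSetOperators() followed by KSPSolve() inside the loop; no EPSReset() or KSPReset() is needed when only the matrix entries change.
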
URL: From juhaj at iki.fi Fri May 20 10:11:20 2016 From: juhaj at iki.fi (Juha Jaykka) Date: Fri, 20 May 2016 16:11:20 +0100 Subject: [petsc-users] snes failures In-Reply-To: References: <2345493.9IZagzfzqj@vega> <50497974.sFAjIVxnCd@vega> Message-ID: <13092937.ItKsL9Cyuq@vega> > not. This usually indicates a bad guess for Newton. I would now advocate Oh, did I forget to mention, that other initial conditions DO converge. The funny thing is the initial condition which does NOT converge is actually CLOSER to the solution than ones which DO. Had it been the other way around, I probably would not have thought twice of it. > 1) Grid sequencing: This is easy if you use a DMDA. You just use > -snes_grid_sequence > and its automatic. Since you report that smaller grids converge, this > is usually enough. Unfortunately, no. > 2) NPC: If the above fails, I would try preconditioning Newton with GS. > You can do this > just using multiplicative composition, or left preconditioning. I'm not sure what you mean here. Do you mean I should run normal newton line search snes with SNESSetNPC(snes_of_type_gs)? If so, does the inner snes_of_type_gs and the outer snes_of_type_newtonls get all the same routines? I guess so, but just checking. > If 1 works, then its likely that FAS would work even better. I wish it was so easy: I did try all the snes types before posting the first post. Also all KSP types. Except those that need things I don't have, of course. Cheers, Juha -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 819 bytes Desc: This is a digitally signed message part. URL: From knepley at gmail.com Fri May 20 10:14:24 2016 From: knepley at gmail.com (Matthew Knepley) Date: Fri, 20 May 2016 10:14:24 -0500 Subject: [petsc-users] snes failures In-Reply-To: <13092937.ItKsL9Cyuq@vega> References: <2345493.9IZagzfzqj@vega> <50497974.sFAjIVxnCd@vega> <13092937.ItKsL9Cyuq@vega> Message-ID: On Fri, May 20, 2016 at 10:11 AM, Juha Jaykka wrote: > > not. This usually indicates a bad guess for Newton. I would now advocate > > Oh, did I forget to mention, that other initial conditions DO converge. The > funny thing is the initial condition which does NOT converge is actually > CLOSER to the solution than ones which DO. Had it been the other way > around, I > probably would not have thought twice of it. > > > 1) Grid sequencing: This is easy if you use a DMDA. You just use > > -snes_grid_sequence > > and its automatic. Since you report that smaller grids converge, > this > > is usually enough. > > Unfortunately, no. Isn't this a 1D problem with no geometry? You should use DMDA. It would make it easier. > > > 2) NPC: If the above fails, I would try preconditioning Newton with GS. > > You can do this > > just using multiplicative composition, or left preconditioning. > > I'm not sure what you mean here. Do you mean I should run normal newton > line > search snes with SNESSetNPC(snes_of_type_gs)? > Yes. > If so, does the inner snes_of_type_gs and the outer snes_of_type_newtonls > get > all the same routines? I guess so, but just checking. The outer SNES get the same routines, and the inner one is setup automatically. > > If 1 works, then its likely that FAS would work even better. > > I wish it was so easy: I did try all the snes types before posting the > first > post. Also all KSP types. Except those that need things I don't have, of > course. > Not sure what you mean here. 
To use FAS, you can either use DMDA, or provide interpolation operators between grids. Thanks, Matt > Cheers, > Juha > -- What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. -- Norbert Wiener -------------- next part -------------- An HTML attachment was scrubbed... URL: From bsmith at mcs.anl.gov Fri May 20 13:02:30 2016 From: bsmith at mcs.anl.gov (Barry Smith) Date: Fri, 20 May 2016 13:02:30 -0500 Subject: [petsc-users] snes failures In-Reply-To: <13092937.ItKsL9Cyuq@vega> References: <2345493.9IZagzfzqj@vega> <50497974.sFAjIVxnCd@vega> <13092937.ItKsL9Cyuq@vega> Message-ID: <8B5891B3-C693-4C2D-B18B-A4C4E8063CF4@mcs.anl.gov> > On May 20, 2016, at 10:11 AM, Juha Jaykka wrote: > >> not. This usually indicates a bad guess for Newton. I would now advocate > > Oh, did I forget to mention, that other initial conditions DO converge. The > funny thing is the initial condition which does NOT converge is actually > CLOSER to the solution than ones which DO. Had it been the other way around, I > probably would not have thought twice of it. Yeah, "closeness" for convergence of Newton's method can be rather complicated. Send the code. Python though it may be. Barry > >> 1) Grid sequencing: This is easy if you use a DMDA. You just use >> -snes_grid_sequence >> and its automatic. Since you report that smaller grids converge, this >> is usually enough. > > Unfortunately, no. > >> 2) NPC: If the above fails, I would try preconditioning Newton with GS. >> You can do this >> just using multiplicative composition, or left preconditioning. > > I'm not sure what you mean here. Do you mean I should run normal newton line > search snes with SNESSetNPC(snes_of_type_gs)? > > If so, does the inner snes_of_type_gs and the outer snes_of_type_newtonls get > all the same routines? I guess so, but just checking. > >> If 1 works, then its likely that FAS would work even better. > > I wish it was so easy: I did try all the snes types before posting the first > post. Also all KSP types. Except those that need things I don't have, of > course. > > Cheers, > Juha From juhaj at iki.fi Fri May 20 17:17:09 2016 From: juhaj at iki.fi (Juha =?ISO-8859-1?Q?J=E4ykk=E4?=) Date: Fri, 20 May 2016 23:17:09 +0100 Subject: [petsc-users] snes failures In-Reply-To: References: <2345493.9IZagzfzqj@vega> <13092937.ItKsL9Cyuq@vega> Message-ID: <6529697.FJpi0XWJ1n@rigel> > > > 1) Grid sequencing: This is easy if you use a DMDA. You just use > > > -snes_grid_sequence > > > and its automatic. Since you report that smaller grids converge, > > this > > > is usually enough. > > Unfortunately, no. > Isn't this a 1D problem with no geometry? You should use DMDA. It would make > it easier. I think you misunderstood my too-short-a-comment. What I meant was unfortunately -snes_grid_sequence was not enough to make it converge. I went up to -snes_grid_sequence 10. I always use DMDA and yes, the only geometry in the problem is a straight line of points from -X to +X, where preferably X = infinity, but numerically of course not. > > I'm not sure what you mean here. Do you mean I should run normal newton > > line > > search snes with SNESSetNPC(snes_of_type_gs)? > Yes. Will do that once I get back to this next week. > > I wish it was so easy: I did try all the snes types before posting the > > first > > post. Also all KSP types. Except those that need things I don't have, of > > course. > Not sure what you mean here. 
To use FAS, you can either use DMDA, or provide > interpolation operators between grids. Yes, but FAS does not converge either. Neither does any other snes type except ngs, and changing to any other KSP type makes no difference either ? though why would it as the KSP seems to converge nicely anyway. I have also tried all snes LS types, to no avail. A correction: funnily enough, newtontr claims it converges, but it does not really. It ends up with the same "solution" as almost everything else does and thinks it is a solution (CONVERGED_SNORM_RELATIVE) whereas I know full well it is not a solution, not even close. But I always found newtontr fiddly anyway: it tends to be too trigger happy to shout "convergence" and finding the right parameters to make it not-so-trigger-happy is hard. Cheers, Juha -- ----------------------------------------------- | Juha J?ykk?, juhaj at iki.fi | | http://koti.kapsi.fi/~juhaj/ | ----------------------------------------------- -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 819 bytes Desc: This is a digitally signed message part. URL: From bsmith at mcs.anl.gov Fri May 20 17:26:07 2016 From: bsmith at mcs.anl.gov (Barry Smith) Date: Fri, 20 May 2016 17:26:07 -0500 Subject: [petsc-users] snes failures In-Reply-To: <6529697.FJpi0XWJ1n@rigel> References: <2345493.9IZagzfzqj@vega> <13092937.ItKsL9Cyuq@vega> <6529697.FJpi0XWJ1n@rigel> Message-ID: <2B8DA1EE-2232-499F-B39B-8A40FCD5ED11@mcs.anl.gov> > On May 20, 2016, at 5:17 PM, Juha J?ykk? wrote: > >>>> 1) Grid sequencing: This is easy if you use a DMDA. You just use >>>> -snes_grid_sequence >>>> and its automatic. Since you report that smaller grids converge, >>> this >>>> is usually enough. >>> Unfortunately, no. >> Isn't this a 1D problem with no geometry? You should use DMDA. It would make >> it easier. > > I think you misunderstood my too-short-a-comment. What I meant was > unfortunately -snes_grid_sequence was not enough to make it converge. I went > up to -snes_grid_sequence 10. Interesting. How much is the solution changing in each refinement? Is there a singularity in the solution at one end point, or elsewhere? If you know the form of the singularity perhaps you can subtract it out from the solution (and this hence changes the function) to get a simpler problem without a singularity that Newton works better on. Barry > > I always use DMDA and yes, the only geometry in the problem is a straight line > of points from -X to +X, where preferably X = infinity, but numerically of > course not. >>> I'm not sure what you mean here. Do you mean I should run normal newton >>> line >>> search snes with SNESSetNPC(snes_of_type_gs)? >> Yes. > > Will do that once I get back to this next week. > >>> I wish it was so easy: I did try all the snes types before posting the >>> first >>> post. Also all KSP types. Except those that need things I don't have, of >>> course. >> Not sure what you mean here. To use FAS, you can either use DMDA, or provide >> interpolation operators between grids. > > Yes, but FAS does not converge either. Neither does any other snes type except > ngs, and changing to any other KSP type makes no difference either ? though > why would it as the KSP seems to converge nicely anyway. I have also tried all > snes LS types, to no avail. > > A correction: funnily enough, newtontr claims it converges, but it does not > really. 
It ends up with the same "solution" as almost everything else does and > thinks it is a solution (CONVERGED_SNORM_RELATIVE) whereas I know full well it > is not a solution, not even close. But I always found newtontr fiddly anyway: > it tends to be too trigger happy to shout "convergence" and finding the right > parameters to make it not-so-trigger-happy is hard. > > Cheers, > Juha > > -- > ----------------------------------------------- > | Juha J?ykk?, juhaj at iki.fi | > | http://koti.kapsi.fi/~juhaj/ | > ----------------------------------------------- From knepley at gmail.com Fri May 20 19:09:18 2016 From: knepley at gmail.com (Matthew Knepley) Date: Fri, 20 May 2016 19:09:18 -0500 Subject: [petsc-users] snes failures In-Reply-To: <2B8DA1EE-2232-499F-B39B-8A40FCD5ED11@mcs.anl.gov> References: <2345493.9IZagzfzqj@vega> <13092937.ItKsL9Cyuq@vega> <6529697.FJpi0XWJ1n@rigel> <2B8DA1EE-2232-499F-B39B-8A40FCD5ED11@mcs.anl.gov> Message-ID: On Fri, May 20, 2016 at 5:26 PM, Barry Smith wrote: > > > On May 20, 2016, at 5:17 PM, Juha J?ykk? wrote: > > > >>>> 1) Grid sequencing: This is easy if you use a DMDA. You just use > >>>> -snes_grid_sequence > >>>> and its automatic. Since you report that smaller grids converge, > >>> this > >>>> is usually enough. > >>> Unfortunately, no. > >> Isn't this a 1D problem with no geometry? You should use DMDA. It would > make > >> it easier. > > > > I think you misunderstood my too-short-a-comment. What I meant was > > unfortunately -snes_grid_sequence was not enough to make it converge. I > went > > up to -snes_grid_sequence 10. > > Interesting. How much is the solution changing in each refinement? Is > there a singularity in the solution at one end point, or elsewhere? If you > know the form of the singularity perhaps you can subtract it out from the > solution (and this hence changes the function) to get a simpler problem > without a singularity that Newton works better on. This is very counter-intuitive. I would now really like to see the code. I would be very very very very surprised to see grid sequencing fail on such as smooth solution. Matt > > Barry > > > > > I always use DMDA and yes, the only geometry in the problem is a > straight line > > of points from -X to +X, where preferably X = infinity, but numerically > of > > course not. > >>> I'm not sure what you mean here. Do you mean I should run normal newton > >>> line > >>> search snes with SNESSetNPC(snes_of_type_gs)? > >> Yes. > > > > Will do that once I get back to this next week. > > > >>> I wish it was so easy: I did try all the snes types before posting the > >>> first > >>> post. Also all KSP types. Except those that need things I don't have, > of > >>> course. > >> Not sure what you mean here. To use FAS, you can either use DMDA, or > provide > >> interpolation operators between grids. > > > > Yes, but FAS does not converge either. Neither does any other snes type > except > > ngs, and changing to any other KSP type makes no difference either ? > though > > why would it as the KSP seems to converge nicely anyway. I have also > tried all > > snes LS types, to no avail. > > > > A correction: funnily enough, newtontr claims it converges, but it does > not > > really. It ends up with the same "solution" as almost everything else > does and > > thinks it is a solution (CONVERGED_SNORM_RELATIVE) whereas I know full > well it > > is not a solution, not even close. 
But I always found newtontr fiddly > anyway: > > it tends to be too trigger happy to shout "convergence" and finding the > right > > parameters to make it not-so-trigger-happy is hard. > > > > Cheers, > > Juha > > > > -- > > ----------------------------------------------- > > | Juha J?ykk?, juhaj at iki.fi | > > | http://koti.kapsi.fi/~juhaj/ | > > ----------------------------------------------- > > -- What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. -- Norbert Wiener -------------- next part -------------- An HTML attachment was scrubbed... URL: From mfadams at lbl.gov Sat May 21 14:36:00 2016 From: mfadams at lbl.gov (Mark Adams) Date: Sat, 21 May 2016 15:36:00 -0400 Subject: [petsc-users] GAMG Indefinite In-Reply-To: <064EC0CA-6E92-4DD3-9F0A-1012A1CCABFE@mcs.anl.gov> References: <58770cd6-9279-16c2-9a2f-fd459fa16cb5@berkeley.edu> <064EC0CA-6E92-4DD3-9F0A-1012A1CCABFE@mcs.anl.gov> Message-ID: Barry, this is probably the Chebyshev problem. Sanjay, this is fixed but has not yet been moved to the master branch. You can fix this now with with -ksp_chebyshev_esteig_random. This should recover v3.5 semantics. Mark On Thu, May 19, 2016 at 2:42 PM, Barry Smith wrote: > > We see this occasionally, there is nothing in the definition of GAMG > that guarantees a positive definite preconditioner even if the operator was > positive definite so we don't think this is a bug in the code. We've found > using a slightly stronger smoother, like one more smoothing step seems to > remove the problem. > > Barry > > > On May 19, 2016, at 1:07 PM, Sanjay Govindjee wrote: > > > > I am trying to solve a very ordinary nonlinear elasticity problem > > using -ksp_type cg -pc_type gamg in PETSc 3.7.0, which worked fine > > in PETSc 3.5.3. > > > > The problem I am seeing is on my first Newton iteration, the Ax=b > > solve returns with and Indefinite Preconditioner error > (KSPGetConvergedReason == -8): > > (log_view.txt output also attached) > > > > 0 KSP Residual norm 8.411630828687e-02 > > 1 KSP Residual norm 2.852209578900e-02 > > NO CONVERGENCE REASON: Indefinite Preconditioner > > NO CONVERGENCE REASON: Indefinite Preconditioner > > > > On the next and subsequent Newton iterations, I see perfectly normal > > behavior and the problem converges quadratically. The results look fine. > > > > I tried the same problem with -pc_type jacobi as well as super-lu, and > mumps > > and they all work without complaint. > > > > My run line for GAMG is: > > -ksp_type cg -ksp_monitor -log_view -pc_type gamg -pc_gamg_type agg > -pc_gamg_agg_nsmooths 1 -options_left > > > > The code flow looks like: > > > > ! If no matrix allocation yet > > if(Kmat.eq.0) then > > call MatCreate(PETSC_COMM_WORLD,Kmat,ierr) > > call > MatSetSizes(Kmat,numpeq,numpeq,PETSC_DETERMINE,PETSC_DETERMINE,ierr) > > call MatSetBlockSize(Kmat,nsbk,ierr) > > call MatSetFromOptions(Kmat, ierr) > > call MatSetType(Kmat,MATAIJ,ierr) > > call > MatMPIAIJSetPreallocation(Kmat,PETSC_NULL_INTEGER,mr(np(246)),PETSC_NULL_INTEGER,mr(np(247)),ierr) > > call > MatSeqAIJSetPreallocation(Kmat,PETSC_NULL_INTEGER,mr(np(246)),ierr) > > endif > > > > call MatZeroEntries(Kmat,ierr) > > > > ! Code to set values in matrix > > > > call MatAssemblyBegin(Kmat, MAT_FINAL_ASSEMBLY, ierr) > > call MatAssemblyEnd(Kmat, MAT_FINAL_ASSEMBLY, ierr) > > call MatSetOption(Kmat,MAT_NEW_NONZERO_LOCATIONS,PETSC_TRUE,ierr) > > > > ! 
If no rhs allocation yet > > if(rhs.eq.0) then > > call VecCreate (PETSC_COMM_WORLD, rhs, ierr) > > call VecSetSizes (rhs, numpeq, PETSC_DECIDE, ierr) > > call VecSetFromOptions(rhs, ierr) > > endif > > > > ! Code to set values in RHS > > > > call VecAssemblyBegin(rhs, ierr) > > call VecAssemblyEnd(rhs, ierr) > > > > if(kspsol_exists) then > > call KSPDestroy(kspsol,ierr) > > endif > > > > call KSPCreate(PETSC_COMM_WORLD, kspsol ,ierr) > > call KSPSetOperators(kspsol, Kmat, Kmat, ierr) > > call KSPSetFromOptions(kspsol,ierr) > > call KSPGetPC(kspsol, pc , ierr) > > > > call PCSetCoordinates(pc,ndm,numpn,hr(np(43)),ierr) > > > > call KSPSolve(kspsol, rhs, sol, ierr) > > call KSPGetConvergedReason(kspsol,reason,ierr) > > > > ! update solution, go back to the top > > > > reason is coming back as -8 on my first Ax=b solve and 2 or 3 after that > > (with gamg). With the other solvers it is coming back as 2 or 3 for > > iterative options and 4 if I use one of the direct solvers. > > > > Any ideas on what is causing the Indefinite PC on the first iteration > with GAMG? > > > > Thanks in advance, > > -sanjay > > > > -- > > ----------------------------------------------- > > Sanjay Govindjee, PhD, PE > > Professor of Civil Engineering > > > > 779 Davis Hall > > University of California > > Berkeley, CA 94720-1710 > > > > Voice: +1 510 642 6060 > > FAX: +1 510 643 5264 > > > > s_g at berkeley.edu > > http://www.ce.berkeley.edu/~sanjay > > > > ----------------------------------------------- > > > > Books: > > > > Engineering Mechanics of Deformable > > Solids: A Presentation with Exercises > > > > > http://www.oup.com/us/catalog/general/subject/Physics/MaterialsScience/?view=usa&ci=9780199651641 > > http://ukcatalogue.oup.com/product/9780199651641.do > > http://amzn.com/0199651647 > > > > > > Engineering Mechanics 3 (Dynamics) 2nd Edition > > > > http://www.springer.com/978-3-642-53711-0 > > http://amzn.com/3642537111 > > > > > > Engineering Mechanics 3, Supplementary Problems: Dynamics > > > > http://www.amzn.com/B00SOXN8JU > > > > > > ----------------------------------------------- > > > > > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From s_g at berkeley.edu Sat May 21 17:03:16 2016 From: s_g at berkeley.edu (Sanjay Govindjee) Date: Sat, 21 May 2016 15:03:16 -0700 Subject: [petsc-users] GAMG Indefinite In-Reply-To: References: <58770cd6-9279-16c2-9a2f-fd459fa16cb5@berkeley.edu> <064EC0CA-6E92-4DD3-9F0A-1012A1CCABFE@mcs.anl.gov> Message-ID: <481b05f5-19b6-4537-e2d7-f95a7dc3352c@berkeley.edu> Mark, I added the option you mentioned but it seems not to use it; -options_left reports: #PETSc Option Table entries: -ksp_chebyshev_esteig_random -ksp_monitor -ksp_type cg -log_view -options_left -pc_gamg_agg_nsmooths 1 -pc_gamg_type agg -pc_type gamg #End of PETSc Option Table entries There is one unused database option. It is: Option left: name:-ksp_chebyshev_esteig_random (no value) On 5/21/16 12:36 PM, Mark Adams wrote: > Barry, this is probably the Chebyshev problem. > > Sanjay, this is fixed but has not yet been moved to the master > branch. You can fix this now with with -ksp_chebyshev_esteig_random. > This should recover v3.5 semantics. > > Mark > > On Thu, May 19, 2016 at 2:42 PM, Barry Smith > wrote: > > > We see this occasionally, there is nothing in the definition of > GAMG that guarantees a positive definite preconditioner even if > the operator was positive definite so we don't think this is a bug > in the code. 
We've found using a slightly stronger smoother, like > one more smoothing step seems to remove the problem. > > Barry > > > On May 19, 2016, at 1:07 PM, Sanjay Govindjee > wrote: > > > > I am trying to solve a very ordinary nonlinear elasticity problem > > using -ksp_type cg -pc_type gamg in PETSc 3.7.0, which worked fine > > in PETSc 3.5.3. > > > > The problem I am seeing is on my first Newton iteration, the Ax=b > > solve returns with and Indefinite Preconditioner error > (KSPGetConvergedReason == -8): > > (log_view.txt output also attached) > > > > 0 KSP Residual norm 8.411630828687e-02 > > 1 KSP Residual norm 2.852209578900e-02 > > NO CONVERGENCE REASON: Indefinite Preconditioner > > NO CONVERGENCE REASON: Indefinite Preconditioner > > > > On the next and subsequent Newton iterations, I see perfectly normal > > behavior and the problem converges quadratically. The results > look fine. > > > > I tried the same problem with -pc_type jacobi as well as > super-lu, and mumps > > and they all work without complaint. > > > > My run line for GAMG is: > > -ksp_type cg -ksp_monitor -log_view -pc_type gamg -pc_gamg_type > agg -pc_gamg_agg_nsmooths 1 -options_left > > > > The code flow looks like: > > > > ! If no matrix allocation yet > > if(Kmat.eq.0) then > > call MatCreate(PETSC_COMM_WORLD,Kmat,ierr) > > call > MatSetSizes(Kmat,numpeq,numpeq,PETSC_DETERMINE,PETSC_DETERMINE,ierr) > > call MatSetBlockSize(Kmat,nsbk,ierr) > > call MatSetFromOptions(Kmat, ierr) > > call MatSetType(Kmat,MATAIJ,ierr) > > call > MatMPIAIJSetPreallocation(Kmat,PETSC_NULL_INTEGER,mr(np(246)),PETSC_NULL_INTEGER,mr(np(247)),ierr) > > call > MatSeqAIJSetPreallocation(Kmat,PETSC_NULL_INTEGER,mr(np(246)),ierr) > > endif > > > > call MatZeroEntries(Kmat,ierr) > > > > ! Code to set values in matrix > > > > call MatAssemblyBegin(Kmat, MAT_FINAL_ASSEMBLY, ierr) > > call MatAssemblyEnd(Kmat, MAT_FINAL_ASSEMBLY, ierr) > > call MatSetOption(Kmat,MAT_NEW_NONZERO_LOCATIONS,PETSC_TRUE,ierr) > > > > ! If no rhs allocation yet > > if(rhs.eq.0) then > > call VecCreate (PETSC_COMM_WORLD, rhs, ierr) > > call VecSetSizes (rhs, numpeq, PETSC_DECIDE, ierr) > > call VecSetFromOptions(rhs, ierr) > > endif > > > > ! Code to set values in RHS > > > > call VecAssemblyBegin(rhs, ierr) > > call VecAssemblyEnd(rhs, ierr) > > > > if(kspsol_exists) then > > call KSPDestroy(kspsol,ierr) > > endif > > > > call KSPCreate(PETSC_COMM_WORLD, kspsol ,ierr) > > call KSPSetOperators(kspsol, Kmat, Kmat, ierr) > > call KSPSetFromOptions(kspsol,ierr) > > call KSPGetPC(kspsol, pc , ierr) > > > > call PCSetCoordinates(pc,ndm,numpn,hr(np(43)),ierr) > > > > call KSPSolve(kspsol, rhs, sol, ierr) > > call KSPGetConvergedReason(kspsol,reason,ierr) > > > > ! update solution, go back to the top > > > > reason is coming back as -8 on my first Ax=b solve and 2 or 3 > after that > > (with gamg). With the other solvers it is coming back as 2 or 3 for > > iterative options and 4 if I use one of the direct solvers. > > > > Any ideas on what is causing the Indefinite PC on the first > iteration with GAMG? 
> > > > Thanks in advance, > > -sanjay > > > > -- > > ----------------------------------------------- > > Sanjay Govindjee, PhD, PE > > Professor of Civil Engineering > > > > 779 Davis Hall > > University of California > > Berkeley, CA 94720-1710 > > > > Voice: +1 510 642 6060 > > FAX: +1 510 643 5264 > > > > s_g at berkeley.edu > > http://www.ce.berkeley.edu/~sanjay > > > > > ----------------------------------------------- > > > > Books: > > > > Engineering Mechanics of Deformable > > Solids: A Presentation with Exercises > > > > > http://www.oup.com/us/catalog/general/subject/Physics/MaterialsScience/?view=usa&ci=9780199651641 > > http://ukcatalogue.oup.com/product/9780199651641.do > > http://amzn.com/0199651647 > > > > > > Engineering Mechanics 3 (Dynamics) 2nd Edition > > > > http://www.springer.com/978-3-642-53711-0 > > http://amzn.com/3642537111 > > > > > > Engineering Mechanics 3, Supplementary Problems: Dynamics > > > > http://www.amzn.com/B00SOXN8JU > > > > > > ----------------------------------------------- > > > > > > -- ----------------------------------------------- Sanjay Govindjee, PhD, PE Professor of Civil Engineering 779 Davis Hall University of California Berkeley, CA 94720-1710 Voice: +1 510 642 6060 FAX: +1 510 643 5264 s_g at berkeley.edu http://www.ce.berkeley.edu/~sanjay ----------------------------------------------- Books: Engineering Mechanics of Deformable Solids: A Presentation with Exercises http://www.oup.com/us/catalog/general/subject/Physics/MaterialsScience/?view=usa&ci=9780199651641 http://ukcatalogue.oup.com/product/9780199651641.do http://amzn.com/0199651647 Engineering Mechanics 3 (Dynamics) 2nd Edition http://www.springer.com/978-3-642-53711-0 http://amzn.com/3642537111 Engineering Mechanics 3, Supplementary Problems: Dynamics http://www.amzn.com/B00SOXN8JU ----------------------------------------------- -------------- next part -------------- An HTML attachment was scrubbed... URL: From jeffsteward at gmail.com Sat May 21 17:57:50 2016 From: jeffsteward at gmail.com (Jeff Steward) Date: Sat, 21 May 2016 15:57:50 -0700 Subject: [petsc-users] Matrix equation involving square-root Message-ID: I have a real SPD matrix A, and I would like to find x for (A + A^{1/2})x = b where A^{1/2} is the unique matrix such that A^{1/2}A^{1/2} = A. The trouble is I don't have A^{1/2} (unless I compute the eigenpairs of A which is fairly expensive), only A. Is there a computationally efficient method in PETSc to find x in my case? Ideally I would like to use an iterative method and a matrix shell for A due to memory concerns and the fact that I don't need an exact solution. Best wishes, Jeff -------------- next part -------------- An HTML attachment was scrubbed... URL: From knepley at gmail.com Sat May 21 19:55:52 2016 From: knepley at gmail.com (Matthew Knepley) Date: Sat, 21 May 2016 19:55:52 -0500 Subject: [petsc-users] Matrix equation involving square-root In-Reply-To: References: Message-ID: On Sat, May 21, 2016 at 5:57 PM, Jeff Steward wrote: > I have a real SPD matrix A, and I would like to find x for > > (A + A^{1/2})x = b > > where A^{1/2} is the unique matrix such that A^{1/2}A^{1/2} = A. The > trouble is I don't have A^{1/2} (unless I compute the eigenpairs of A which > is fairly expensive), only A. > > Is there a computationally efficient method in PETSc to find x in my case? > Ideally I would like to use an iterative method and a matrix shell for A > due to memory concerns and the fact that I don't need an exact solution. 
> There are iterative methods for the matrix square root ( https://en.wikipedia.org/wiki/Square_root_of_a_matrix), but they are for dense matrices. I do not know of a method beyond direct factorization that works for sparse matrices. If you are getting this fractional power from a continuous problem (like subdiffusion), then you should handle it with discretization rather than matrix tools. Thanks, Matt > Best wishes, > > Jeff > -- What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. -- Norbert Wiener -------------- next part -------------- An HTML attachment was scrubbed... URL: From ncerpagi at umn.edu Sun May 22 02:08:42 2016 From: ncerpagi at umn.edu (Nestor Cerpa) Date: Sun, 22 May 2016 02:08:42 -0500 Subject: [petsc-users] PetscDSSetResidual/PetscDSSetJacobian in fortran Message-ID: <0D221479-8D21-408F-B629-40E68EC65F8D@umn.edu> Hello, I am trying to use PetscDSSetResidual and PetscDSSetJacobian in a simple fortran code but they seem to be missing in the include files (petsc-master). I get this error message : Undefined symbols for architecture x86_64: "_petscdssetresidual_", referenced from: _MAIN__ in Poisson.o ld: symbol(s) not found for architecture x86_64 collect2: error: ld returned 1 exit status make: [Poisson] Error 1 (ignored) /bin/rm -f -f Poisson.o I am also wondering if there are snes examples like ex12 or ex62 in fortran? Thanks, Nestor --------------------------------------------------- --------------------------------------------------- Nestor Cerpa Gilvonio Postdoctoral research associate Department of Earth Sciences University of Minnesota 310 Pillsbury Drive SE Minnepolis, MN 55455 E-mail : ncerpagi at umn.edu -------------- next part -------------- An HTML attachment was scrubbed... URL: From knepley at gmail.com Sun May 22 07:06:09 2016 From: knepley at gmail.com (Matthew Knepley) Date: Sun, 22 May 2016 07:06:09 -0500 Subject: [petsc-users] PetscDSSetResidual/PetscDSSetJacobian in fortran In-Reply-To: <0D221479-8D21-408F-B629-40E68EC65F8D@umn.edu> References: <0D221479-8D21-408F-B629-40E68EC65F8D@umn.edu> Message-ID: On Sun, May 22, 2016 at 2:08 AM, Nestor Cerpa wrote: > Hello, > > I am trying to use PetscDSSetResidual and PetscDSSetJacobian in a simple > fortran code but they seem to be missing in the include files > (petsc-master). I get this error message : > They are not there. Taking function pointers from Fortran is complex, and I do not understand the new framework that Jed put in place to do this. It is used, for example in SNESSetFunction(). I would be happy to integrate code if you have time to implement it, but right now I am pressed for time. > *Undefined symbols for architecture x86_64:* > * "_petscdssetresidual_", referenced from:* > * _MAIN__ in Poisson.o* > *ld: symbol(s) not found for architecture x86_64* > *collect2: error: ld returned 1 exit status* > *make: [Poisson] Error 1 (ignored)* > */bin/rm -f -f Poisson.o* > > I am also wondering if there are snes examples like ex12 or ex62 in > fortran? > Everyone using Plex in Fortran has so far preferred to handle the discretization themselves. I hope that changes, but as you note I will have to get the pointwise function support in there first. 
Thanks, Matt > Thanks, > Nestor > > --------------------------------------------------- > --------------------------------------------------- > > Nestor Cerpa Gilvonio > Postdoctoral research associate > > Department of Earth Sciences > University of Minnesota > 310 Pillsbury Drive SE > Minnepolis, MN 55455 > > E-mail : ncerpagi at umn.edu > > > -- What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. -- Norbert Wiener -------------- next part -------------- An HTML attachment was scrubbed... URL: From jroman at dsic.upv.es Sun May 22 10:55:56 2016 From: jroman at dsic.upv.es (Jose E. Roman) Date: Sun, 22 May 2016 17:55:56 +0200 Subject: [petsc-users] Matrix equation involving square-root In-Reply-To: References: Message-ID: <375F3FC9-8D53-419B-83F7-69FAC50C450C@dsic.upv.es> The latest version of SLEPc has functionality for computing f(A)*v - this includes the square root. See chapter 7 of the users guide (MFN). So I guess you could use an MFNSolve within a shell matrix in a KSP for your linear system. Currently the solver is quite basic (restarted Krylov iteration). I plan to add further improvements, so I would be interested in applications that need this. If you are interested, write to my personal email. Jose > El 22 may 2016, a las 2:55, Matthew Knepley escribi?: > > On Sat, May 21, 2016 at 5:57 PM, Jeff Steward wrote: > I have a real SPD matrix A, and I would like to find x for > > (A + A^{1/2})x = b > > where A^{1/2} is the unique matrix such that A^{1/2}A^{1/2} = A. The trouble is I don't have A^{1/2} (unless I compute the eigenpairs of A which is fairly expensive), only A. > > Is there a computationally efficient method in PETSc to find x in my case? Ideally I would like to use an iterative method and a matrix shell for A due to memory concerns and the fact that I don't need an exact solution. > > There are iterative methods for the matrix square root (https://en.wikipedia.org/wiki/Square_root_of_a_matrix), but they are > for dense matrices. I do not know of a method beyond direct factorization that works for sparse matrices. > > If you are getting this fractional power from a continuous problem (like subdiffusion), then you > should handle it with discretization rather than matrix tools. > > Thanks, > > Matt > > Best wishes, > > Jeff > > > > -- > What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. > -- Norbert Wiener From mfadams at lbl.gov Sun May 22 16:29:37 2016 From: mfadams at lbl.gov (Mark Adams) Date: Sun, 22 May 2016 17:29:37 -0400 Subject: [petsc-users] GAMG Indefinite In-Reply-To: <481b05f5-19b6-4537-e2d7-f95a7dc3352c@berkeley.edu> References: <58770cd6-9279-16c2-9a2f-fd459fa16cb5@berkeley.edu> <064EC0CA-6E92-4DD3-9F0A-1012A1CCABFE@mcs.anl.gov> <481b05f5-19b6-4537-e2d7-f95a7dc3352c@berkeley.edu> Message-ID: Humm, maybe we have version mixup: src/ksp/ksp/impls/cheby/cheby.c: ierr = PetscOptionsBool("-ksp_chebyshev_esteig_random","Use random right hand side for estimate","KSPChebyshevEstEigSetUseRandom",cheb->userandom,&cheb->userandom Also, you should use CG. These other options are the defaults but CG is not: -mg_levels_esteig_ksp_type cg -mg_levels_esteig_ksp_max_it 10 -mg_levels_ksp_chebyshev_esteig 0,0.05,0,1.05 Anyway. you can also run with -info, which will be very noisy, but just grep for GAMG and send me that. 
Mark On Sat, May 21, 2016 at 6:03 PM, Sanjay Govindjee wrote: > Mark, > I added the option you mentioned but it seems not to use it; > -options_left reports: > > #PETSc Option Table entries: > -ksp_chebyshev_esteig_random > -ksp_monitor > -ksp_type cg > -log_view > -options_left > -pc_gamg_agg_nsmooths 1 > -pc_gamg_type agg > -pc_type gamg > #End of PETSc Option Table entries > There is one unused database option. It is: > Option left: name:-ksp_chebyshev_esteig_random (no value) > > > > On 5/21/16 12:36 PM, Mark Adams wrote: > > Barry, this is probably the Chebyshev problem. > > Sanjay, this is fixed but has not yet been moved to the master branch. > You can fix this now with with -ksp_chebyshev_esteig_random. This should > recover v3.5 semantics. > > Mark > > On Thu, May 19, 2016 at 2:42 PM, Barry Smith wrote: > >> >> We see this occasionally, there is nothing in the definition of GAMG >> that guarantees a positive definite preconditioner even if the operator was >> positive definite so we don't think this is a bug in the code. We've found >> using a slightly stronger smoother, like one more smoothing step seems to >> remove the problem. >> >> Barry >> >> > On May 19, 2016, at 1:07 PM, Sanjay Govindjee wrote: >> > >> > I am trying to solve a very ordinary nonlinear elasticity problem >> > using -ksp_type cg -pc_type gamg in PETSc 3.7.0, which worked fine >> > in PETSc 3.5.3. >> > >> > The problem I am seeing is on my first Newton iteration, the Ax=b >> > solve returns with and Indefinite Preconditioner error >> (KSPGetConvergedReason == -8): >> > (log_view.txt output also attached) >> > >> > 0 KSP Residual norm 8.411630828687e-02 >> > 1 KSP Residual norm 2.852209578900e-02 >> > NO CONVERGENCE REASON: Indefinite Preconditioner >> > NO CONVERGENCE REASON: Indefinite Preconditioner >> > >> > On the next and subsequent Newton iterations, I see perfectly normal >> > behavior and the problem converges quadratically. The results look >> fine. >> > >> > I tried the same problem with -pc_type jacobi as well as super-lu, and >> mumps >> > and they all work without complaint. >> > >> > My run line for GAMG is: >> > -ksp_type cg -ksp_monitor -log_view -pc_type gamg -pc_gamg_type agg >> -pc_gamg_agg_nsmooths 1 -options_left >> > >> > The code flow looks like: >> > >> > ! If no matrix allocation yet >> > if(Kmat.eq.0) then >> > call MatCreate(PETSC_COMM_WORLD,Kmat,ierr) >> > call >> MatSetSizes(Kmat,numpeq,numpeq,PETSC_DETERMINE,PETSC_DETERMINE,ierr) >> > call MatSetBlockSize(Kmat,nsbk,ierr) >> > call MatSetFromOptions(Kmat, ierr) >> > call MatSetType(Kmat,MATAIJ,ierr) >> > call >> MatMPIAIJSetPreallocation(Kmat,PETSC_NULL_INTEGER,mr(np(246)),PETSC_NULL_INTEGER,mr(np(247)),ierr) >> > call >> MatSeqAIJSetPreallocation(Kmat,PETSC_NULL_INTEGER,mr(np(246)),ierr) >> > endif >> > >> > call MatZeroEntries(Kmat,ierr) >> > >> > ! Code to set values in matrix >> > >> > call MatAssemblyBegin(Kmat, MAT_FINAL_ASSEMBLY, ierr) >> > call MatAssemblyEnd(Kmat, MAT_FINAL_ASSEMBLY, ierr) >> > call MatSetOption(Kmat,MAT_NEW_NONZERO_LOCATIONS,PETSC_TRUE,ierr) >> > >> > ! If no rhs allocation yet >> > if(rhs.eq.0) then >> > call VecCreate (PETSC_COMM_WORLD, rhs, ierr) >> > call VecSetSizes (rhs, numpeq, PETSC_DECIDE, ierr) >> > call VecSetFromOptions(rhs, ierr) >> > endif >> > >> > ! 
Code to set values in RHS >> > >> > call VecAssemblyBegin(rhs, ierr) >> > call VecAssemblyEnd(rhs, ierr) >> > >> > if(kspsol_exists) then >> > call KSPDestroy(kspsol,ierr) >> > endif >> > >> > call KSPCreate(PETSC_COMM_WORLD, kspsol ,ierr) >> > call KSPSetOperators(kspsol, Kmat, Kmat, ierr) >> > call KSPSetFromOptions(kspsol,ierr) >> > call KSPGetPC(kspsol, pc , ierr) >> > >> > call PCSetCoordinates(pc,ndm,numpn,hr(np(43)),ierr) >> > >> > call KSPSolve(kspsol, rhs, sol, ierr) >> > call KSPGetConvergedReason(kspsol,reason,ierr) >> > >> > ! update solution, go back to the top >> > >> > reason is coming back as -8 on my first Ax=b solve and 2 or 3 after that >> > (with gamg). With the other solvers it is coming back as 2 or 3 for >> > iterative options and 4 if I use one of the direct solvers. >> > >> > Any ideas on what is causing the Indefinite PC on the first iteration >> with GAMG? >> > >> > Thanks in advance, >> > -sanjay >> > >> > -- >> > ----------------------------------------------- >> > Sanjay Govindjee, PhD, PE >> > Professor of Civil Engineering >> > >> > 779 Davis Hall >> > University of California >> > Berkeley, CA 94720-1710 >> > >> > Voice: +1 510 642 6060 >> > FAX: +1 510 643 5264 >> > >> > s_g at berkeley.edu >> > http://www.ce.berkeley.edu/~sanjay >> > >> > ----------------------------------------------- >> > >> > Books: >> > >> > Engineering Mechanics of Deformable >> > Solids: A Presentation with Exercises >> > >> > >> http://www.oup.com/us/catalog/general/subject/Physics/MaterialsScience/?view=usa&ci=9780199651641 >> > http://ukcatalogue.oup.com/product/9780199651641.do >> > http://amzn.com/0199651647 >> > >> > >> > Engineering Mechanics 3 (Dynamics) 2nd Edition >> > >> > http://www.springer.com/978-3-642-53711-0 >> > http://amzn.com/3642537111 >> > >> > >> > Engineering Mechanics 3, Supplementary Problems: Dynamics >> > >> > http://www.amzn.com/B00SOXN8JU >> > >> > >> > ----------------------------------------------- >> > >> > >> >> > > -- > ----------------------------------------------- > Sanjay Govindjee, PhD, PE > Professor of Civil Engineering > > 779 Davis Hall > University of California > Berkeley, CA 94720-1710 > > Voice: +1 510 642 6060 > FAX: +1 510 643 5264s_g at berkeley.eduhttp://www.ce.berkeley.edu/~sanjay > ----------------------------------------------- > > Books: > > Engineering Mechanics of Deformable > Solids: A Presentation with Exerciseshttp://www.oup.com/us/catalog/general/subject/Physics/MaterialsScience/?view=usa&ci=9780199651641http://ukcatalogue.oup.com/product/9780199651641.dohttp://amzn.com/0199651647 > > Engineering Mechanics 3 (Dynamics) 2nd Editionhttp://www.springer.com/978-3-642-53711-0http://amzn.com/3642537111 > > Engineering Mechanics 3, Supplementary Problems: Dynamics http://www.amzn.com/B00SOXN8JU > > ----------------------------------------------- > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From s_g at berkeley.edu Sun May 22 16:38:48 2016 From: s_g at berkeley.edu (Sanjay Govindjee) Date: Sun, 22 May 2016 14:38:48 -0700 Subject: [petsc-users] GAMG Indefinite In-Reply-To: References: <58770cd6-9279-16c2-9a2f-fd459fa16cb5@berkeley.edu> <064EC0CA-6E92-4DD3-9F0A-1012A1CCABFE@mcs.anl.gov> <481b05f5-19b6-4537-e2d7-f95a7dc3352c@berkeley.edu> Message-ID: Mark, Can you give me the full option line that you want me to use? 
I currently have: -ksp_type cg -ksp_monitor -ksp_chebyshev_esteig_random -log_view -pc_type gamg -pc_gamg_type agg -pc_gamg_agg_nsmooths 1 -options_left -sanjay On 5/22/16 2:29 PM, Mark Adams wrote: > Humm, maybe we have version mixup: > > src/ksp/ksp/impls/cheby/cheby.c: ierr = > PetscOptionsBool("-ksp_chebyshev_esteig_random","Use random right hand > side for > estimate","KSPChebyshevEstEigSetUseRandom",cheb->userandom,&cheb->userandom > > Also, you should use CG. These other options are the defaults but CG > is not: > > -mg_levels_esteig_ksp_type cg > -mg_levels_esteig_ksp_max_it 10 > -mg_levels_ksp_chebyshev_esteig 0,0.05,0,1.05 > > Anyway. you can also run with -info, which will be very noisy, but > just grep for GAMG and send me that. > > Mark > > > > On Sat, May 21, 2016 at 6:03 PM, Sanjay Govindjee > wrote: > > Mark, > I added the option you mentioned but it seems not to use it; > -options_left reports: > > #PETSc Option Table entries: > -ksp_chebyshev_esteig_random > -ksp_monitor > -ksp_type cg > -log_view > -options_left > -pc_gamg_agg_nsmooths 1 > -pc_gamg_type agg > -pc_type gamg > #End of PETSc Option Table entries > There is one unused database option. It is: > Option left: name:-ksp_chebyshev_esteig_random (no value) > > > > On 5/21/16 12:36 PM, Mark Adams wrote: >> Barry, this is probably the Chebyshev problem. >> >> Sanjay, this is fixed but has not yet been moved to the master >> branch. You can fix this now with with >> -ksp_chebyshev_esteig_random. This should recover v3.5 semantics. >> >> Mark >> >> On Thu, May 19, 2016 at 2:42 PM, Barry Smith > > wrote: >> >> >> We see this occasionally, there is nothing in the >> definition of GAMG that guarantees a positive definite >> preconditioner even if the operator was positive definite so >> we don't think this is a bug in the code. We've found using a >> slightly stronger smoother, like one more smoothing step >> seems to remove the problem. >> >> Barry >> >> > On May 19, 2016, at 1:07 PM, Sanjay Govindjee >> > wrote: >> > >> > I am trying to solve a very ordinary nonlinear elasticity >> problem >> > using -ksp_type cg -pc_type gamg in PETSc 3.7.0, which >> worked fine >> > in PETSc 3.5.3. >> > >> > The problem I am seeing is on my first Newton iteration, >> the Ax=b >> > solve returns with and Indefinite Preconditioner error >> (KSPGetConvergedReason == -8): >> > (log_view.txt output also attached) >> > >> > 0 KSP Residual norm 8.411630828687e-02 >> > 1 KSP Residual norm 2.852209578900e-02 >> > NO CONVERGENCE REASON: Indefinite Preconditioner >> > NO CONVERGENCE REASON: Indefinite Preconditioner >> > >> > On the next and subsequent Newton iterations, I see >> perfectly normal >> > behavior and the problem converges quadratically. The >> results look fine. >> > >> > I tried the same problem with -pc_type jacobi as well as >> super-lu, and mumps >> > and they all work without complaint. >> > >> > My run line for GAMG is: >> > -ksp_type cg -ksp_monitor -log_view -pc_type gamg >> -pc_gamg_type agg -pc_gamg_agg_nsmooths 1 -options_left >> > >> > The code flow looks like: >> > >> > ! 
If no matrix allocation yet >> > if(Kmat.eq.0) then >> > call MatCreate(PETSC_COMM_WORLD,Kmat,ierr) >> > call >> MatSetSizes(Kmat,numpeq,numpeq,PETSC_DETERMINE,PETSC_DETERMINE,ierr) >> > call MatSetBlockSize(Kmat,nsbk,ierr) >> > call MatSetFromOptions(Kmat, ierr) >> > call MatSetType(Kmat,MATAIJ,ierr) >> > call >> MatMPIAIJSetPreallocation(Kmat,PETSC_NULL_INTEGER,mr(np(246)),PETSC_NULL_INTEGER,mr(np(247)),ierr) >> > call >> MatSeqAIJSetPreallocation(Kmat,PETSC_NULL_INTEGER,mr(np(246)),ierr) >> > endif >> > >> > call MatZeroEntries(Kmat,ierr) >> > >> > ! Code to set values in matrix >> > >> > call MatAssemblyBegin(Kmat, MAT_FINAL_ASSEMBLY, ierr) >> > call MatAssemblyEnd(Kmat, MAT_FINAL_ASSEMBLY, ierr) >> > call >> MatSetOption(Kmat,MAT_NEW_NONZERO_LOCATIONS,PETSC_TRUE,ierr) >> > >> > ! If no rhs allocation yet >> > if(rhs.eq.0) then >> > call VecCreate (PETSC_COMM_WORLD, rhs, ierr) >> > call VecSetSizes (rhs, numpeq, PETSC_DECIDE, ierr) >> > call VecSetFromOptions(rhs, ierr) >> > endif >> > >> > ! Code to set values in RHS >> > >> > call VecAssemblyBegin(rhs, ierr) >> > call VecAssemblyEnd(rhs, ierr) >> > >> > if(kspsol_exists) then >> > call KSPDestroy(kspsol,ierr) >> > endif >> > >> > call KSPCreate(PETSC_COMM_WORLD, kspsol ,ierr) >> > call KSPSetOperators(kspsol, Kmat, Kmat, ierr) >> > call KSPSetFromOptions(kspsol,ierr) >> > call KSPGetPC(kspsol, pc , ierr) >> > >> > call PCSetCoordinates(pc,ndm,numpn,hr(np(43)),ierr) >> > >> > call KSPSolve(kspsol, rhs, sol, ierr) >> > call KSPGetConvergedReason(kspsol,reason,ierr) >> > >> > ! update solution, go back to the top >> > >> > reason is coming back as -8 on my first Ax=b solve and 2 or >> 3 after that >> > (with gamg). With the other solvers it is coming back as 2 >> or 3 for >> > iterative options and 4 if I use one of the direct solvers. >> > >> > Any ideas on what is causing the Indefinite PC on the first >> iteration with GAMG? 
>> > >> > Thanks in advance, >> > -sanjay >> > >> > -- >> > ----------------------------------------------- >> > Sanjay Govindjee, PhD, PE >> > Professor of Civil Engineering >> > >> > 779 Davis Hall >> > University of California >> > Berkeley, CA 94720-1710 >> > >> > Voice: +1 510 642 6060 >> > FAX: +1 510 643 5264 >> > >> > s_g at berkeley.edu >> > http://www.ce.berkeley.edu/~sanjay >> >> > >> > ----------------------------------------------- >> > >> > Books: >> > >> > Engineering Mechanics of Deformable >> > Solids: A Presentation with Exercises >> > >> > >> http://www.oup.com/us/catalog/general/subject/Physics/MaterialsScience/?view=usa&ci=9780199651641 >> > http://ukcatalogue.oup.com/product/9780199651641.do >> > http://amzn.com/0199651647 >> > >> > >> > Engineering Mechanics 3 (Dynamics) 2nd Edition >> > >> > http://www.springer.com/978-3-642-53711-0 >> > http://amzn.com/3642537111 >> > >> > >> > Engineering Mechanics 3, Supplementary Problems: Dynamics >> > >> > http://www.amzn.com/B00SOXN8JU >> > >> > >> > ----------------------------------------------- >> > >> > >> >> > > -- > ----------------------------------------------- > Sanjay Govindjee, PhD, PE > Professor of Civil Engineering > > 779 Davis Hall > University of California > Berkeley, CA 94720-1710 > > Voice:+1 510 642 6060 > FAX:+1 510 643 5264 > s_g at berkeley.edu > http://www.ce.berkeley.edu/~sanjay > > ----------------------------------------------- > > Books: > > Engineering Mechanics of Deformable > Solids: A Presentation with Exercises > http://www.oup.com/us/catalog/general/subject/Physics/MaterialsScience/?view=usa&ci=9780199651641 > http://ukcatalogue.oup.com/product/9780199651641.do > http://amzn.com/0199651647 > > Engineering Mechanics 3 (Dynamics) 2nd Edition > http://www.springer.com/978-3-642-53711-0 > http://amzn.com/3642537111 > > Engineering Mechanics 3, Supplementary Problems: Dynamics > http://www.amzn.com/B00SOXN8JU > > ----------------------------------------------- > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From mfadams at lbl.gov Sun May 22 17:02:41 2016 From: mfadams at lbl.gov (Mark Adams) Date: Sun, 22 May 2016 18:02:41 -0400 Subject: [petsc-users] GAMG Indefinite In-Reply-To: References: <58770cd6-9279-16c2-9a2f-fd459fa16cb5@berkeley.edu> <064EC0CA-6E92-4DD3-9F0A-1012A1CCABFE@mcs.anl.gov> <481b05f5-19b6-4537-e2d7-f95a7dc3352c@berkeley.edu> Message-ID: I thought you would have this also, so add it (I assume this is 3D elasticity): -pc_gamg_square_graph 1 -mg_levels_ksp_type chebyshev -mg_levels_pc_type sor Plus what I just mentioned: -mg_levels_esteig_ksp_type cg -mg_levels_ksp_chebyshev_esteig 0,0.05,0,1.05 Just for diagnostics add: -mg_levels_esteig_ksp_max_it 50 -mg_levels_esteig_ksp_monitor_singular_value -ksp_view On Sun, May 22, 2016 at 5:38 PM, Sanjay Govindjee wrote: > Mark, > Can you give me the full option line that you want me to use? I > currently have: > > -ksp_type cg -ksp_monitor -ksp_chebyshev_esteig_random -log_view -pc_type > gamg -pc_gamg_type agg -pc_gamg_agg_nsmooths 1 -options_left > > -sanjay > > > On 5/22/16 2:29 PM, Mark Adams wrote: > > Humm, maybe we have version mixup: > > src/ksp/ksp/impls/cheby/cheby.c: ierr = > PetscOptionsBool("-ksp_chebyshev_esteig_random","Use random right hand side > for > estimate","KSPChebyshevEstEigSetUseRandom",cheb->userandom,&cheb->userandom > > Also, you should use CG. 
These other options are the defaults but CG is > not: > > -mg_levels_esteig_ksp_type cg > -mg_levels_esteig_ksp_max_it 10 > -mg_levels_ksp_chebyshev_esteig 0,0.05,0,1.05 > > Anyway. you can also run with -info, which will be very noisy, but just > grep for GAMG and send me that. > > Mark > > > > On Sat, May 21, 2016 at 6:03 PM, Sanjay Govindjee > wrote: > >> Mark, >> I added the option you mentioned but it seems not to use it; >> -options_left reports: >> >> #PETSc Option Table entries: >> -ksp_chebyshev_esteig_random >> -ksp_monitor >> -ksp_type cg >> -log_view >> -options_left >> -pc_gamg_agg_nsmooths 1 >> -pc_gamg_type agg >> -pc_type gamg >> #End of PETSc Option Table entries >> There is one unused database option. It is: >> Option left: name:-ksp_chebyshev_esteig_random (no value) >> >> >> >> On 5/21/16 12:36 PM, Mark Adams wrote: >> >> Barry, this is probably the Chebyshev problem. >> >> Sanjay, this is fixed but has not yet been moved to the master branch. >> You can fix this now with with -ksp_chebyshev_esteig_random. This should >> recover v3.5 semantics. >> >> Mark >> >> On Thu, May 19, 2016 at 2:42 PM, Barry Smith < >> bsmith at mcs.anl.gov> wrote: >> >>> >>> We see this occasionally, there is nothing in the definition of GAMG >>> that guarantees a positive definite preconditioner even if the operator was >>> positive definite so we don't think this is a bug in the code. We've found >>> using a slightly stronger smoother, like one more smoothing step seems to >>> remove the problem. >>> >>> Barry >>> >>> > On May 19, 2016, at 1:07 PM, Sanjay Govindjee >>> wrote: >>> > >>> > I am trying to solve a very ordinary nonlinear elasticity problem >>> > using -ksp_type cg -pc_type gamg in PETSc 3.7.0, which worked fine >>> > in PETSc 3.5.3. >>> > >>> > The problem I am seeing is on my first Newton iteration, the Ax=b >>> > solve returns with and Indefinite Preconditioner error >>> (KSPGetConvergedReason == -8): >>> > (log_view.txt output also attached) >>> > >>> > 0 KSP Residual norm 8.411630828687e-02 >>> > 1 KSP Residual norm 2.852209578900e-02 >>> > NO CONVERGENCE REASON: Indefinite Preconditioner >>> > NO CONVERGENCE REASON: Indefinite Preconditioner >>> > >>> > On the next and subsequent Newton iterations, I see perfectly normal >>> > behavior and the problem converges quadratically. The results look >>> fine. >>> > >>> > I tried the same problem with -pc_type jacobi as well as super-lu, and >>> mumps >>> > and they all work without complaint. >>> > >>> > My run line for GAMG is: >>> > -ksp_type cg -ksp_monitor -log_view -pc_type gamg -pc_gamg_type agg >>> -pc_gamg_agg_nsmooths 1 -options_left >>> > >>> > The code flow looks like: >>> > >>> > ! If no matrix allocation yet >>> > if(Kmat.eq.0) then >>> > call MatCreate(PETSC_COMM_WORLD,Kmat,ierr) >>> > call >>> MatSetSizes(Kmat,numpeq,numpeq,PETSC_DETERMINE,PETSC_DETERMINE,ierr) >>> > call MatSetBlockSize(Kmat,nsbk,ierr) >>> > call MatSetFromOptions(Kmat, ierr) >>> > call MatSetType(Kmat,MATAIJ,ierr) >>> > call >>> MatMPIAIJSetPreallocation(Kmat,PETSC_NULL_INTEGER,mr(np(246)),PETSC_NULL_INTEGER,mr(np(247)),ierr) >>> > call >>> MatSeqAIJSetPreallocation(Kmat,PETSC_NULL_INTEGER,mr(np(246)),ierr) >>> > endif >>> > >>> > call MatZeroEntries(Kmat,ierr) >>> > >>> > ! Code to set values in matrix >>> > >>> > call MatAssemblyBegin(Kmat, MAT_FINAL_ASSEMBLY, ierr) >>> > call MatAssemblyEnd(Kmat, MAT_FINAL_ASSEMBLY, ierr) >>> > call MatSetOption(Kmat,MAT_NEW_NONZERO_LOCATIONS,PETSC_TRUE,ierr) >>> > >>> > ! 
If no rhs allocation yet >>> > if(rhs.eq.0) then >>> > call VecCreate (PETSC_COMM_WORLD, rhs, ierr) >>> > call VecSetSizes (rhs, numpeq, PETSC_DECIDE, ierr) >>> > call VecSetFromOptions(rhs, ierr) >>> > endif >>> > >>> > ! Code to set values in RHS >>> > >>> > call VecAssemblyBegin(rhs, ierr) >>> > call VecAssemblyEnd(rhs, ierr) >>> > >>> > if(kspsol_exists) then >>> > call KSPDestroy(kspsol,ierr) >>> > endif >>> > >>> > call KSPCreate(PETSC_COMM_WORLD, kspsol ,ierr) >>> > call KSPSetOperators(kspsol, Kmat, Kmat, ierr) >>> > call KSPSetFromOptions(kspsol,ierr) >>> > call KSPGetPC(kspsol, pc , ierr) >>> > >>> > call PCSetCoordinates(pc,ndm,numpn,hr(np(43)),ierr) >>> > >>> > call KSPSolve(kspsol, rhs, sol, ierr) >>> > call KSPGetConvergedReason(kspsol,reason,ierr) >>> > >>> > ! update solution, go back to the top >>> > >>> > reason is coming back as -8 on my first Ax=b solve and 2 or 3 after >>> that >>> > (with gamg). With the other solvers it is coming back as 2 or 3 for >>> > iterative options and 4 if I use one of the direct solvers. >>> > >>> > Any ideas on what is causing the Indefinite PC on the first iteration >>> with GAMG? >>> > >>> > Thanks in advance, >>> > -sanjay >>> > >>> > -- >>> > ----------------------------------------------- >>> > Sanjay Govindjee, PhD, PE >>> > Professor of Civil Engineering >>> > >>> > 779 Davis Hall >>> > University of California >>> > Berkeley, CA 94720-1710 >>> > >>> > Voice: +1 510 642 6060 <%2B1%20510%20642%206060> >>> > FAX: +1 510 643 5264 <%2B1%20510%20643%205264> >>> > >>> > s_g at berkeley.edu >>> > http://www.ce.berkeley.edu/~sanjay >>> > >>> > ----------------------------------------------- >>> > >>> > Books: >>> > >>> > Engineering Mechanics of Deformable >>> > Solids: A Presentation with Exercises >>> > >>> > >>> http://www.oup.com/us/catalog/general/subject/Physics/MaterialsScience/?view=usa&ci=9780199651641 >>> > http://ukcatalogue.oup.com/product/9780199651641.do >>> > http://amzn.com/0199651647 >>> > >>> > >>> > Engineering Mechanics 3 (Dynamics) 2nd Edition >>> > >>> > http://www.springer.com/978-3-642-53711-0 >>> > http://amzn.com/3642537111 >>> > >>> > >>> > Engineering Mechanics 3, Supplementary Problems: Dynamics >>> > >>> > http://www.amzn.com/B00SOXN8JU >>> > >>> > >>> > ----------------------------------------------- >>> > >>> > >>> >>> >> >> -- >> ----------------------------------------------- >> Sanjay Govindjee, PhD, PE >> Professor of Civil Engineering >> >> 779 Davis Hall >> University of California >> Berkeley, CA 94720-1710 >> >> Voice: +1 510 642 6060 >> FAX: +1 510 643 5264s_g at berkeley.eduhttp://www.ce.berkeley.edu/~sanjay >> ----------------------------------------------- >> >> Books: >> >> Engineering Mechanics of Deformable >> Solids: A Presentation with Exerciseshttp://www.oup.com/us/catalog/general/subject/Physics/MaterialsScience/?view=usa&ci=9780199651641http://ukcatalogue.oup.com/product/9780199651641.dohttp://amzn.com/0199651647 >> >> Engineering Mechanics 3 (Dynamics) 2nd Editionhttp://www.springer.com/978-3-642-53711-0http://amzn.com/3642537111 >> >> Engineering Mechanics 3, Supplementary Problems: Dynamics http://www.amzn.com/B00SOXN8JU >> >> ----------------------------------------------- >> >> > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From zocca.marco at gmail.com Mon May 23 06:31:52 2016 From: zocca.marco at gmail.com (Marco Zocca) Date: Mon, 23 May 2016 13:31:52 +0200 Subject: [petsc-users] re. 
AIJ preallocation man page Message-ID: Dear all, I'm struggling with this statement: "It is recommended that you call both of the above preallocation routines [MatSeqAIJSetPreallocation and MatMPIAIJSetPreallocation] for simplicity." (source: https://www.mcs.anl.gov/petsc/petsc-current/docs/manualpages/Mat/MATAIJ.html ) I saw that, even on a single-processor communicator, MatMPIAIJSetPreallocation works fine (at least, it gives no errors, but I don't know about efficiency drops). What's the effect of the recommended double preallocation? Thank you and KR, Marco From rupp at iue.tuwien.ac.at Mon May 23 06:39:14 2016 From: rupp at iue.tuwien.ac.at (Karl Rupp) Date: Mon, 23 May 2016 13:39:14 +0200 Subject: [petsc-users] re. AIJ preallocation man page In-Reply-To: References: Message-ID: <5742EBE2.2050801@iue.tuwien.ac.at> Hi Marco, > I'm struggling with this statement: > > "It is recommended that you call both of the above preallocation > routines [MatSeqAIJSetPreallocation and MatMPIAIJSetPreallocation] for > simplicity." > > (source: https://www.mcs.anl.gov/petsc/petsc-current/docs/manualpages/Mat/MATAIJ.html > ) > > I saw that, even on a single-processor communicator, > MatMPIAIJSetPreallocation works fine (at least, it gives no errors, > but I don't know about efficiency drops). > What's the effect of the recommended double preallocation? It allows you to quickly switch between a serial run and a parallel run. If something doesn't work as expected, it's much easier to debug (if possible) on a single rank, hence MatSeqAIJSetPreallocation. However, ultimately you are looking for parallel runs, for which you need MatMPIAIJSetPreallocation. Changing your code back and forth is non-productive, hence it's recommended to call both and you get full flexibility :-) Best regards, Karli From knepley at gmail.com Mon May 23 07:04:31 2016 From: knepley at gmail.com (Matthew Knepley) Date: Mon, 23 May 2016 07:04:31 -0500 Subject: [petsc-users] re. AIJ preallocation man page In-Reply-To: <5742EBE2.2050801@iue.tuwien.ac.at> References: <5742EBE2.2050801@iue.tuwien.ac.at> Message-ID: On Mon, May 23, 2016 at 6:39 AM, Karl Rupp wrote: > Hi Marco, > > > I'm struggling with this statement: > >> >> "It is recommended that you call both of the above preallocation >> routines [MatSeqAIJSetPreallocation and MatMPIAIJSetPreallocation] for >> simplicity." >> >> (source: >> https://www.mcs.anl.gov/petsc/petsc-current/docs/manualpages/Mat/MATAIJ.html >> ) >> >> I saw that, even on a single-processor communicator, >> MatMPIAIJSetPreallocation works fine (at least, it gives no errors, >> but I don't know about efficiency drops). >> What's the effect of the recommended double preallocation? >> > > It allows you to quickly switch between a serial run and a parallel run. > If something doesn't work as expected, it's much easier to debug (if > possible) on a single rank, hence MatSeqAIJSetPreallocation. However, > ultimately you are looking for parallel runs, for which you need > MatMPIAIJSetPreallocation. Changing your code back and forth is > non-productive, hence it's recommended to call both and you get full > flexibility :-) > PETSc uses a system for dispatch that ignores functions which do not apply to that concrete type. 
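For example, a minimal sketch of that pattern (the per-row counts 10 and 5 here are made up, and error checking is omitted):

  Mat A;
  MatCreate(PETSC_COMM_WORLD, &A);
  MatSetSizes(A, PETSC_DECIDE, PETSC_DECIDE, N, N);   /* N = your global size */
  MatSetFromOptions(A);     /* resolves to SEQAIJ on one rank, MPIAIJ on several */
  /* call both; the routine that does not match the actual type is simply ignored */
  MatSeqAIJSetPreallocation(A, 10, NULL);
  MatMPIAIJSetPreallocation(A, 10, NULL, 5, NULL);
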
This allows you to call a variety of specialization functions without a) constantly checking the type b) failure where a downcast might fail Thanks, Matt > Best regards, > Karli > -- What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. -- Norbert Wiener -------------- next part -------------- An HTML attachment was scrubbed... URL: From pguhur at anl.gov Mon May 23 10:13:11 2016 From: pguhur at anl.gov (Pierre-Louis Guhur) Date: Mon, 23 May 2016 10:13:11 -0500 Subject: [petsc-users] Checkpointing-restart Message-ID: <5383154d-f986-1313-8715-573b5fd2625b@anl.gov> Hello, I would like to experiment a checkpointing-restart scheme in the context of PDEs. It would consist of checkpointing regularly key variables, and being able to restart the solver from a certain checkpoint. Could I do it with a XXXView() and XXXLoad() with used components? Have any application implemented an higher-level interface? Thanks, Pierre From knepley at gmail.com Mon May 23 10:51:11 2016 From: knepley at gmail.com (Matthew Knepley) Date: Mon, 23 May 2016 10:51:11 -0500 Subject: [petsc-users] Checkpointing-restart In-Reply-To: <5383154d-f986-1313-8715-573b5fd2625b@anl.gov> References: <5383154d-f986-1313-8715-573b5fd2625b@anl.gov> Message-ID: On Mon, May 23, 2016 at 10:13 AM, Pierre-Louis Guhur wrote: > Hello, > > I would like to experiment a checkpointing-restart scheme in the context > of PDEs. It would consist of checkpointing regularly key variables, and > being able to restart the solver from a certain checkpoint. > > Could I do it with a XXXView() and XXXLoad() with used components? > > Have any application implemented an higher-level interface? > I do this in SNES ex12. I read the prior solution in using VecLoad() https://bitbucket.org/petsc/petsc/src/dcc3cfc91e7b406f088083f8513c3f773809933c/src/snes/examples/tutorials/ex12.c?at=master&fileviewer=file-view-default#ex12.c-806 I don't do the DMLoad() here because it does not interact well with setting up the MG hierarchy, but on a single mesh you could just do that too. Matt Thanks, > > Pierre > > -- What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. -- Norbert Wiener -------------- next part -------------- An HTML attachment was scrubbed... URL: From hongzhang at anl.gov Mon May 23 11:15:50 2016 From: hongzhang at anl.gov (Hong Zhang) Date: Mon, 23 May 2016 11:15:50 -0500 Subject: [petsc-users] Checkpointing-restart In-Reply-To: <5383154d-f986-1313-8715-573b5fd2625b@anl.gov> References: <5383154d-f986-1313-8715-573b5fd2625b@anl.gov> Message-ID: <5F638595-B5CC-4BC4-AC4C-567D3419B5B3@anl.gov> Hi Pierre, Checkpointing and restarting the TS solver can be achieved easily with TSTrajectory. It is intended specifically for adjoint calculation, but I think the implementation can also be useful for resilience. In particular, you might want to start with the basic type (/src/ts/trajectory/impl/basic), which saves the solution and stage values (optional) to binary files on disk at each time step. To enable this feature, just use the command line options -ts_save_trajectory -ts_trajectory_type basic. To read the data back, you can call TSTrajectoryGet() (which is the high-level interface you asked for). Another TSTrajectory type ?memory? comes with many sophisticated checkpointing schemes using RAM (single-level) or a combination of RAM and disk (multi-level). 
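For comparison, the plain VecView()/VecLoad() route asked about above would look roughly like the following sketch (the file name is made up and error checking is omitted):

  PetscViewer viewer;
  /* write a checkpoint of the solution vector u */
  PetscViewerBinaryOpen(PETSC_COMM_WORLD, "checkpoint.bin", FILE_MODE_WRITE, &viewer);
  VecView(u, viewer);
  PetscViewerDestroy(&viewer);
  /* ... later, restart by loading the file back into a vector with the same layout ... */
  PetscViewerBinaryOpen(PETSC_COMM_WORLD, "checkpoint.bin", FILE_MODE_READ, &viewer);
  VecLoad(u, viewer);
  PetscViewerDestroy(&viewer);
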
I would suggest you to use the existing interface as much as possible because the framework has been deliberately designed and extensively tested for compatibility with other PETSc components such as TSMonitor, TSEvent and TSAdapt. Hong > On May 23, 2016, at 10:13 AM, Pierre-Louis Guhur wrote: > > Hello, > > I would like to experiment a checkpointing-restart scheme in the context of PDEs. It would consist of checkpointing regularly key variables, and being able to restart the solver from a certain checkpoint. > > Could I do it with a XXXView() and XXXLoad() with used components? > > Have any application implemented an higher-level interface? > > Thanks, > > Pierre > From s_g at berkeley.edu Mon May 23 11:31:17 2016 From: s_g at berkeley.edu (Sanjay Govindjee) Date: Mon, 23 May 2016 09:31:17 -0700 Subject: [petsc-users] GAMG Indefinite In-Reply-To: References: <58770cd6-9279-16c2-9a2f-fd459fa16cb5@berkeley.edu> <064EC0CA-6E92-4DD3-9F0A-1012A1CCABFE@mcs.anl.gov> <481b05f5-19b6-4537-e2d7-f95a7dc3352c@berkeley.edu> Message-ID: <487cb6c8-a6fd-52f3-49a8-222f2f45cc32@berkeley.edu> Mark, Yes, the problem is finite elasticity. I re-ran the problem with the options table and output shown below. It converges now based on the residual tolerance test (KSPConvergedReason == 2); note I am using Using Petsc Release Version 3.7.0, Apr, 25, 2016. Seems to work now with these options; wish I understood what they all meant! -sanjay ------ Options Table ------ -ksp_chebyshev_esteig_random -ksp_monitor -ksp_type cg -ksp_view -log_view -mg_levels_esteig_ksp_max_it 50 -mg_levels_esteig_ksp_monitor_singular_value -mg_levels_esteig_ksp_type cg -mg_levels_ksp_chebyshev_esteig 0,0.05,0,1.05 -mg_levels_ksp_type chebyshev -mg_levels_pc_type sor -options_left -pc_gamg_agg_nsmooths 1 -pc_gamg_square_graph 1 -pc_gamg_type agg -pc_type gamg #End of PETSc Option Table entries There is one unused database option. 
It is: Option left: name:-ksp_chebyshev_esteig_random (no value) ---- Output ----- Residual norm = 6.4807407E-02 1.0000000E+00 t= 0.06 0.00 Residual norm = 6.4807407E-02 1.0000000E+00 t= 2.72 0.00 0 KSP Residual norm 1.560868061707e-03 % max 1.000000000000e+00 min 1.000000000000e+00 max/min 1.000000000000e+00 1 KSP Residual norm 6.096709155146e-04 % max 5.751224548776e-01 min 5.751224548776e-01 max/min 1.000000000000e+00 2 KSP Residual norm 7.011069458089e-04 % max 8.790398458008e-01 min 1.423643814416e-01 max/min 6.174577073983e+00 3 KSP Residual norm 7.860563886831e-04 % max 9.381463130007e-01 min 6.370569326093e-02 max/min 1.472625545661e+01 4 KSP Residual norm 7.281118133903e-04 % max 9.663599650986e-01 min 3.450830062982e-02 max/min 2.800369613865e+01 5 KSP Residual norm 7.399083718116e-04 % max 9.794646011330e-01 min 2.160135448201e-02 max/min 4.534274005590e+01 6 KSP Residual norm 7.629904179692e-04 % max 9.849620569978e-01 min 1.481091947823e-02 max/min 6.650242467697e+01 7 KSP Residual norm 7.698477913710e-04 % max 9.886850579079e-01 min 1.045801989510e-02 max/min 9.453845640235e+01 8 KSP Residual norm 8.217081868349e-04 % max 9.919371306726e-01 min 7.371001382474e-03 max/min 1.345729133942e+02 9 KSP Residual norm 7.524701879786e-04 % max 9.942899758738e-01 min 5.827670079091e-03 max/min 1.706153509687e+02 10 KSP Residual norm 6.944661718672e-04 % max 1.081740430310e+00 min 5.160677620664e-03 max/min 2.096120916327e+02 11 KSP Residual norm 5.551504568073e-04 % max 1.486860759049e+00 min 4.666379506027e-03 max/min 3.186326266710e+02 12 KSP Residual norm 5.259305993307e-04 % max 1.564857844056e+00 min 4.221790647720e-03 max/min 3.706621134569e+02 13 KSP Residual norm 5.156103593875e-04 % max 1.569917781391e+00 min 3.850994097592e-03 max/min 4.076655901322e+02 14 KSP Residual norm 5.312020352146e-04 % max 1.570323151514e+00 min 3.568538377912e-03 max/min 4.400465919699e+02 15 KSP Residual norm 5.305598654979e-04 % max 1.570412447356e+00 min 3.341861320427e-03 max/min 4.699214888890e+02 16 KSP Residual norm 5.058601071413e-04 % max 1.570467828098e+00 min 3.091912672470e-03 max/min 5.079276145416e+02 17 KSP Residual norm 5.485622395473e-04 % max 1.570494963661e+00 min 2.876900954621e-03 max/min 5.458981690483e+02 18 KSP Residual norm 5.368711040867e-04 % max 1.570521333658e+00 min 2.664379213208e-03 max/min 5.894511283800e+02 19 KSP Residual norm 5.198795692341e-04 % max 1.570548096173e+00 min 2.476198885896e-03 max/min 6.342576539867e+02 20 KSP Residual norm 5.958949153283e-04 % max 1.570562153443e+00 min 2.291602609346e-03 max/min 6.853553696602e+02 21 KSP Residual norm 6.378632372927e-04 % max 1.570571141557e+00 min 2.073298632606e-03 max/min 7.575228753145e+02 22 KSP Residual norm 5.831338029614e-04 % max 1.570577774384e+00 min 1.893683789633e-03 max/min 8.293769968259e+02 23 KSP Residual norm 4.875913917209e-04 % max 1.570583697638e+00 min 1.759110933887e-03 max/min 8.928281141244e+02 24 KSP Residual norm 4.107781613610e-04 % max 1.570587819069e+00 min 1.672892328012e-03 max/min 9.388457300985e+02 25 KSP Residual norm 3.715001142988e-04 % max 1.570590462451e+00 min 1.617963853373e-03 max/min 9.707203650914e+02 26 KSP Residual norm 2.991751132378e-04 % max 1.570593709995e+00 min 1.573420350354e-03 max/min 9.982035059109e+02 27 KSP Residual norm 2.208369736689e-04 % max 1.570597161312e+00 min 1.545543181679e-03 max/min 1.016210468870e+03 28 KSP Residual norm 1.811301040805e-04 % max 1.570599147395e+00 min 1.529650294818e-03 max/min 1.026770074647e+03 29 KSP Residual norm 
1.466232980955e-04 % max 1.570599898730e+00 min 1.518818086503e-03 max/min 1.034093491964e+03 30 KSP Residual norm 1.206956326419e-04 % max 1.570600148946e+00 min 1.512437955816e-03 max/min 1.038455920063e+03 31 KSP Residual norm 8.815660316339e-05 % max 1.570600276978e+00 min 1.508452842477e-03 max/min 1.041199454667e+03 32 KSP Residual norm 8.077031357734e-05 % max 1.570600325445e+00 min 1.505888596198e-03 max/min 1.042972454543e+03 33 KSP Residual norm 8.820553812137e-05 % max 1.570600343673e+00 min 1.503328090889e-03 max/min 1.044748882956e+03 34 KSP Residual norm 9.117950808819e-05 % max 1.570600352355e+00 min 1.499931693668e-03 max/min 1.047114584608e+03 35 KSP Residual norm 1.017388214943e-04 % max 1.570600354842e+00 min 1.495477205174e-03 max/min 1.050233563847e+03 36 KSP Residual norm 8.890686455242e-05 % max 1.570600355527e+00 min 1.491021415006e-03 max/min 1.053372097624e+03 37 KSP Residual norm 6.145275695828e-05 % max 1.570600355891e+00 min 1.487712171225e-03 max/min 1.055715202355e+03 38 KSP Residual norm 4.601453163034e-05 % max 1.570600356004e+00 min 1.486196684310e-03 max/min 1.056791723858e+03 39 KSP Residual norm 3.537992820781e-05 % max 1.570600356044e+00 min 1.485426023422e-03 max/min 1.057340002989e+03 40 KSP Residual norm 2.801997681470e-05 % max 1.570600356064e+00 min 1.484937369198e-03 max/min 1.057687946066e+03 41 KSP Residual norm 2.422931135206e-05 % max 1.570600356072e+00 min 1.484576829299e-03 max/min 1.057944813010e+03 42 KSP Residual norm 2.099844940173e-05 % max 1.570600356076e+00 min 1.484295103277e-03 max/min 1.058145615794e+03 43 KSP Residual norm 1.692165408662e-05 % max 1.570600356077e+00 min 1.484073208085e-03 max/min 1.058303827278e+03 44 KSP Residual norm 1.303434704073e-05 % max 1.570600356078e+00 min 1.483945099557e-03 max/min 1.058395190325e+03 45 KSP Residual norm 1.116085143372e-05 % max 1.570600356078e+00 min 1.483862236504e-03 max/min 1.058454294098e+03 46 KSP Residual norm 1.042314767557e-05 % max 1.570600356078e+00 min 1.483776389632e-03 max/min 1.058515533104e+03 47 KSP Residual norm 8.547779619028e-06 % max 1.570600356078e+00 min 1.483710767635e-03 max/min 1.058562349440e+03 48 KSP Residual norm 5.686341715603e-06 % max 1.570600356078e+00 min 1.483675385024e-03 max/min 1.058587593979e+03 49 KSP Residual norm 3.605023485024e-06 % max 1.570600356078e+00 min 1.483660745235e-03 max/min 1.058598039425e+03 50 KSP Residual norm 2.272532759782e-06 % max 1.570600356078e+00 min 1.483654995309e-03 max/min 1.058602142037e+03 0 KSP Residual norm 5.279802306740e-03 % max 1.000000000000e+00 min 1.000000000000e+00 max/min 1.000000000000e+00 1 KSP Residual norm 3.528673026246e-03 % max 3.090782483684e-01 min 3.090782483684e-01 max/min 1.000000000000e+00 2 KSP Residual norm 3.205752762808e-03 % max 8.446523416360e-01 min 9.580457075356e-02 max/min 8.816409645096e+00 3 KSP Residual norm 2.374523626799e-03 % max 9.504624619604e-01 min 6.564631404306e-02 max/min 1.447853509851e+01 4 KSP Residual norm 1.924918345847e-03 % max 9.848656903443e-01 min 4.952831543265e-02 max/min 1.988490183325e+01 5 KSP Residual norm 6.808260964410e-04 % max 9.938458367029e-01 min 4.251109574134e-02 max/min 2.337850434978e+01 6 KSP Residual norm 3.315021446270e-04 % max 9.967726858804e-01 min 4.166488358467e-02 max/min 2.392356824554e+01 7 KSP Residual norm 1.314681668602e-04 % max 9.982316137592e-01 min 4.143867906023e-02 max/min 2.408936858987e+01 8 KSP Residual norm 4.301436610738e-05 % max 9.989590393061e-01 min 4.138618158843e-02 max/min 2.413750196238e+01 9 KSP Residual norm 
8.016910487452e-06 % max 9.994371206849e-01 min 4.138180175046e-02 max/min 2.415160960636e+01 10 KSP Residual norm 6.525723903099e-07 % max 9.997527609942e-01 min 4.138161245544e-02 max/min 2.415934763467e+01 11 KSP Residual norm 8.205412727192e-08 % max 9.998804196297e-01 min 4.138161047930e-02 max/min 2.416243370059e+01 12 KSP Residual norm 1.645897401565e-08 % max 9.999250934790e-01 min 4.138161045086e-02 max/min 2.416351327521e+01 13 KSP Residual norm 2.636218490435e-09 % max 9.999479210248e-01 min 4.138161044973e-02 max/min 2.416406491090e+01 14 KSP Residual norm 2.614816263321e-10 % max 9.999674891694e-01 min 4.138161044968e-02 max/min 2.416453778147e+01 15 KSP Residual norm 2.309894761749e-11 % max 9.999797578219e-01 min 4.138161044968e-02 max/min 2.416483425742e+01 16 KSP Residual norm 2.261461487058e-12 % max 9.999874664751e-01 min 4.138161044968e-02 max/min 2.416502053952e+01 17 KSP Residual norm 2.659598917594e-13 % max 9.999898150784e-01 min 4.138161044968e-02 max/min 2.416507729428e+01 18 KSP Residual norm 1.822011884274e-14 % max 9.999918194957e-01 min 4.138161044968e-02 max/min 2.416512573167e+01 19 KSP Residual norm 1.042398553176e-15 % max 9.999933278733e-01 min 4.138161044968e-02 max/min 2.416516218210e+01 0 KSP Residual norm 8.439161590194e-02 1 KSP Residual norm 7.614890998257e-03 2 KSP Residual norm 1.514029318872e-03 3 KSP Residual norm 3.781832295258e-04 4 KSP Residual norm 3.799411703870e-05 5 KSP Residual norm 4.799680240826e-06 6 KSP Residual norm 9.360965396987e-07 7 KSP Residual norm 1.250237476907e-07 8 KSP Residual norm 2.036465606099e-08 9 KSP Residual norm 3.993471620298e-09 10 KSP Residual norm 5.041262213944e-10 KSP Object: 2 MPI processes type: cg maximum iterations=10000, initial guess is zero tolerances: relative=1e-08, absolute=1e-16, divergence=1e+16 left preconditioning using PRECONDITIONED norm type for convergence test PC Object: 2 MPI processes type: gamg MG: type is MULTIPLICATIVE, levels=3 cycles=v Cycles per PCApply=1 Using Galerkin computed coarse grid matrices GAMG specific options Threshold for dropping small values from graph 0. AGG specific options Symmetric graph false Coarse grid solver -- level ------------------------------- KSP Object: (mg_coarse_) 2 MPI processes type: preonly maximum iterations=10000, initial guess is zero tolerances: relative=1e-05, absolute=1e-50, divergence=10000. left preconditioning using NONE norm type for convergence test PC Object: (mg_coarse_) 2 MPI processes type: bjacobi block Jacobi: number of blocks = 2 Local solve is same for all blocks, in the following KSP and PC objects: KSP Object: (mg_coarse_sub_) 1 MPI processes type: preonly maximum iterations=1, initial guess is zero tolerances: relative=1e-05, absolute=1e-50, divergence=10000. left preconditioning using NONE norm type for convergence test PC Object: (mg_coarse_sub_) 1 MPI processes type: lu LU: out-of-place factorization tolerance for zero pivot 2.22045e-14 using diagonal shift on blocks to prevent zero pivot [INBLOCKS] matrix ordering: nd factor fill ratio given 5., needed 1. 
Factored matrix follows: Mat Object: 1 MPI processes type: seqaij rows=9, cols=9, bs=3 package used to perform factorization: petsc total: nonzeros=81, allocated nonzeros=81 total number of mallocs used during MatSetValues calls =0 using I-node routines: found 2 nodes, limit used is 5 linear system matrix = precond matrix: Mat Object: 1 MPI processes type: seqaij rows=9, cols=9, bs=3 total: nonzeros=81, allocated nonzeros=81 total number of mallocs used during MatSetValues calls =0 using I-node routines: found 2 nodes, limit used is 5 linear system matrix = precond matrix: Mat Object: 2 MPI processes type: mpiaij rows=9, cols=9, bs=3 total: nonzeros=81, allocated nonzeros=81 total number of mallocs used during MatSetValues calls =0 using I-node (on process 0) routines: found 2 nodes, limit used is 5 Down solver (pre-smoother) on level 1 ------------------------------- KSP Object: (mg_levels_1_) 2 MPI processes type: chebyshev Chebyshev: eigenvalue estimates: min = 0.0499997, max = 1.04999 Chebyshev: eigenvalues estimated using cg with translations [0. 0.05; 0. 1.05] KSP Object: (mg_levels_1_esteig_) 2 MPI processes type: cg maximum iterations=50, initial guess is zero tolerances: relative=1e-12, absolute=1e-50, divergence=10000. left preconditioning using PRECONDITIONED norm type for convergence test maximum iterations=2 tolerances: relative=1e-05, absolute=1e-50, divergence=10000. left preconditioning using nonzero initial guess using NONE norm type for convergence test PC Object: (mg_levels_1_) 2 MPI processes type: sor SOR: type = local_symmetric, iterations = 1, local iterations = 1, omega = 1. linear system matrix = precond matrix: Mat Object: 2 MPI processes type: mpiaij rows=54, cols=54, bs=3 total: nonzeros=1764, allocated nonzeros=1764 total number of mallocs used during MatSetValues calls =0 using I-node (on process 0) routines: found 18 nodes, limit used is 5 Up solver (post-smoother) same as down solver (pre-smoother) Down solver (pre-smoother) on level 2 ------------------------------- KSP Object: (mg_levels_2_) 2 MPI processes type: chebyshev Chebyshev: eigenvalue estimates: min = 0.07853, max = 1.64913 Chebyshev: eigenvalues estimated using cg with translations [0. 0.05; 0. 1.05] KSP Object: (mg_levels_2_esteig_) 2 MPI processes type: cg maximum iterations=50, initial guess is zero tolerances: relative=1e-12, absolute=1e-50, divergence=10000. left preconditioning using PRECONDITIONED norm type for convergence test maximum iterations=2 tolerances: relative=1e-05, absolute=1e-50, divergence=10000. left preconditioning using nonzero initial guess using NONE norm type for convergence test PC Object: (mg_levels_2_) 2 MPI processes type: sor SOR: type = local_symmetric, iterations = 1, local iterations = 1, omega = 1. 
linear system matrix = precond matrix: Mat Object: 2 MPI processes type: mpiaij rows=882, cols=882, bs=2 total: nonzeros=26244, allocated nonzeros=26244 total number of mallocs used during MatSetValues calls =0 using I-node (on process 0) routines: found 189 nodes, limit used is 5 Up solver (post-smoother) same as down solver (pre-smoother) linear system matrix = precond matrix: Mat Object: 2 MPI processes type: mpiaij rows=882, cols=882, bs=2 total: nonzeros=26244, allocated nonzeros=26244 total number of mallocs used during MatSetValues calls =0 using I-node (on process 0) routines: found 189 nodes, limit used is 5 CONVERGENCE: Satisfied residual tolerance Iterations = 10 On 5/22/16 3:02 PM, Mark Adams wrote: > I thought you would have this also, so add it (I assume this is 3D > elasticity): > > -pc_gamg_square_graph 1 > -mg_levels_ksp_type chebyshev > -mg_levels_pc_type sor > > Plus what I just mentioned: > > -mg_levels_esteig_ksp_type cg > -mg_levels_ksp_chebyshev_esteig 0,0.05,0,1.05 > > Just for diagnostics add: > > -mg_levels_esteig_ksp_max_it 50 > -mg_levels_esteig_ksp_monitor_singular_value > -ksp_view > > > > On Sun, May 22, 2016 at 5:38 PM, Sanjay Govindjee > wrote: > > Mark, > Can you give me the full option line that you want me to use? I > currently have: > > -ksp_type cg -ksp_monitor -ksp_chebyshev_esteig_random -log_view > -pc_type gamg -pc_gamg_type agg -pc_gamg_agg_nsmooths 1 -options_left > > -sanjay > > > On 5/22/16 2:29 PM, Mark Adams wrote: >> Humm, maybe we have version mixup: >> >> src/ksp/ksp/impls/cheby/cheby.c: ierr = >> PetscOptionsBool("-ksp_chebyshev_esteig_random","Use random right >> hand side for >> estimate","KSPChebyshevEstEigSetUseRandom",cheb->userandom,&cheb->userandom >> >> Also, you should use CG. These other options are the defaults but >> CG is not: >> >> -mg_levels_esteig_ksp_type cg >> -mg_levels_esteig_ksp_max_it 10 >> -mg_levels_ksp_chebyshev_esteig 0,0.05,0,1.05 >> >> Anyway. you can also run with -info, which will be very noisy, >> but just grep for GAMG and send me that. >> >> Mark >> >> >> >> On Sat, May 21, 2016 at 6:03 PM, Sanjay Govindjee >> > wrote: >> >> Mark, >> I added the option you mentioned but it seems not to use >> it; -options_left reports: >> >> #PETSc Option Table entries: >> -ksp_chebyshev_esteig_random >> -ksp_monitor >> -ksp_type cg >> -log_view >> -options_left >> -pc_gamg_agg_nsmooths 1 >> -pc_gamg_type agg >> -pc_type gamg >> #End of PETSc Option Table entries >> There is one unused database option. It is: >> Option left: name:-ksp_chebyshev_esteig_random (no value) >> >> >> >> On 5/21/16 12:36 PM, Mark Adams wrote: >>> Barry, this is probably the Chebyshev problem. >>> >>> Sanjay, this is fixed but has not yet been moved to the >>> master branch. You can fix this now with with >>> -ksp_chebyshev_esteig_random. This should recover v3.5 >>> semantics. >>> >>> Mark >>> >>> On Thu, May 19, 2016 at 2:42 PM, Barry Smith >>> > wrote: >>> >>> >>> We see this occasionally, there is nothing in the >>> definition of GAMG that guarantees a positive definite >>> preconditioner even if the operator was positive >>> definite so we don't think this is a bug in the code. >>> We've found using a slightly stronger smoother, like one >>> more smoothing step seems to remove the problem. 
>>> >>> Barry >>> >>> > On May 19, 2016, at 1:07 PM, Sanjay Govindjee >>> > wrote: >>> > >>> > I am trying to solve a very ordinary nonlinear >>> elasticity problem >>> > using -ksp_type cg -pc_type gamg in PETSc 3.7.0, which >>> worked fine >>> > in PETSc 3.5.3. >>> > >>> > The problem I am seeing is on my first Newton >>> iteration, the Ax=b >>> > solve returns with and Indefinite Preconditioner error >>> (KSPGetConvergedReason == -8): >>> > (log_view.txt output also attached) >>> > >>> > 0 KSP Residual norm 8.411630828687e-02 >>> > 1 KSP Residual norm 2.852209578900e-02 >>> > NO CONVERGENCE REASON: Indefinite Preconditioner >>> > NO CONVERGENCE REASON: Indefinite Preconditioner >>> > >>> > On the next and subsequent Newton iterations, I see >>> perfectly normal >>> > behavior and the problem converges quadratically. The >>> results look fine. >>> > >>> > I tried the same problem with -pc_type jacobi as well >>> as super-lu, and mumps >>> > and they all work without complaint. >>> > >>> > My run line for GAMG is: >>> > -ksp_type cg -ksp_monitor -log_view -pc_type gamg >>> -pc_gamg_type agg -pc_gamg_agg_nsmooths 1 -options_left >>> > >>> > The code flow looks like: >>> > >>> > ! If no matrix allocation yet >>> > if(Kmat.eq.0) then >>> > call MatCreate(PETSC_COMM_WORLD,Kmat,ierr) >>> > call >>> MatSetSizes(Kmat,numpeq,numpeq,PETSC_DETERMINE,PETSC_DETERMINE,ierr) >>> > call MatSetBlockSize(Kmat,nsbk,ierr) >>> > call MatSetFromOptions(Kmat, ierr) >>> > call MatSetType(Kmat,MATAIJ,ierr) >>> > call >>> MatMPIAIJSetPreallocation(Kmat,PETSC_NULL_INTEGER,mr(np(246)),PETSC_NULL_INTEGER,mr(np(247)),ierr) >>> > call >>> MatSeqAIJSetPreallocation(Kmat,PETSC_NULL_INTEGER,mr(np(246)),ierr) >>> > endif >>> > >>> > call MatZeroEntries(Kmat,ierr) >>> > >>> > ! Code to set values in matrix >>> > >>> > call MatAssemblyBegin(Kmat, MAT_FINAL_ASSEMBLY, ierr) >>> > call MatAssemblyEnd(Kmat, MAT_FINAL_ASSEMBLY, ierr) >>> > call >>> MatSetOption(Kmat,MAT_NEW_NONZERO_LOCATIONS,PETSC_TRUE,ierr) >>> > >>> > ! If no rhs allocation yet >>> > if(rhs.eq.0) then >>> > call VecCreate (PETSC_COMM_WORLD, rhs, ierr) >>> > call VecSetSizes (rhs, numpeq, PETSC_DECIDE, ierr) >>> > call VecSetFromOptions(rhs, ierr) >>> > endif >>> > >>> > ! Code to set values in RHS >>> > >>> > call VecAssemblyBegin(rhs, ierr) >>> > call VecAssemblyEnd(rhs, ierr) >>> > >>> > if(kspsol_exists) then >>> > call KSPDestroy(kspsol,ierr) >>> > endif >>> > >>> > call KSPCreate(PETSC_COMM_WORLD, kspsol ,ierr) >>> > call KSPSetOperators(kspsol, Kmat, Kmat, ierr) >>> > call KSPSetFromOptions(kspsol,ierr) >>> > call KSPGetPC(kspsol, pc , ierr) >>> > >>> > call PCSetCoordinates(pc,ndm,numpn,hr(np(43)),ierr) >>> > >>> > call KSPSolve(kspsol, rhs, sol, ierr) >>> > call KSPGetConvergedReason(kspsol,reason,ierr) >>> > >>> > ! update solution, go back to the top >>> > >>> > reason is coming back as -8 on my first Ax=b solve and >>> 2 or 3 after that >>> > (with gamg). With the other solvers it is coming back >>> as 2 or 3 for >>> > iterative options and 4 if I use one of the direct >>> solvers. >>> > >>> > Any ideas on what is causing the Indefinite PC on the >>> first iteration with GAMG? 
>>> > >>> > Thanks in advance, >>> > -sanjay >>> > >>> > -- >>> > ----------------------------------------------- >>> > Sanjay Govindjee, PhD, PE >>> > Professor of Civil Engineering >>> > >>> > 779 Davis Hall >>> > University of California >>> > Berkeley, CA 94720-1710 >>> > >>> > Voice: +1 510 642 6060 >>> > FAX: +1 510 643 5264 >>> > >>> > s_g at berkeley.edu >>> > http://www.ce.berkeley.edu/~sanjay >>> >>> > >>> > ----------------------------------------------- >>> > >>> > Books: >>> > >>> > Engineering Mechanics of Deformable >>> > Solids: A Presentation with Exercises >>> > >>> > >>> http://www.oup.com/us/catalog/general/subject/Physics/MaterialsScience/?view=usa&ci=9780199651641 >>> > http://ukcatalogue.oup.com/product/9780199651641.do >>> > http://amzn.com/0199651647 >>> > >>> > >>> > Engineering Mechanics 3 (Dynamics) 2nd Edition >>> > >>> > http://www.springer.com/978-3-642-53711-0 >>> > http://amzn.com/3642537111 >>> > >>> > >>> > Engineering Mechanics 3, Supplementary Problems: Dynamics >>> > >>> > http://www.amzn.com/B00SOXN8JU >>> > >>> > >>> > ----------------------------------------------- >>> > >>> > >>> >>> >> >> -- >> ----------------------------------------------- >> Sanjay Govindjee, PhD, PE >> Professor of Civil Engineering >> >> 779 Davis Hall >> University of California >> Berkeley, CA 94720-1710 >> >> Voice:+1 510 642 6060 >> FAX:+1 510 643 5264 >> s_g at berkeley.edu >> http://www.ce.berkeley.edu/~sanjay >> >> ----------------------------------------------- >> >> Books: >> >> Engineering Mechanics of Deformable >> Solids: A Presentation with Exercises >> http://www.oup.com/us/catalog/general/subject/Physics/MaterialsScience/?view=usa&ci=9780199651641 >> http://ukcatalogue.oup.com/product/9780199651641.do >> http://amzn.com/0199651647 >> >> Engineering Mechanics 3 (Dynamics) 2nd Edition >> http://www.springer.com/978-3-642-53711-0 >> http://amzn.com/3642537111 >> >> Engineering Mechanics 3, Supplementary Problems: Dynamics >> http://www.amzn.com/B00SOXN8JU >> >> ----------------------------------------------- >> >> > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From ncerpagi at umn.edu Mon May 23 12:23:25 2016 From: ncerpagi at umn.edu (Nestor Cerpa Gilvonio) Date: Mon, 23 May 2016 12:23:25 -0500 Subject: [petsc-users] PetscDSSetResidual/PetscDSSetJacobian in fortran In-Reply-To: References: <0D221479-8D21-408F-B629-40E68EC65F8D@umn.edu> Message-ID: Thank you for the response. However, I am unsure what you mean by ?to handle discretization themselves?. Would you recommend (or is it possible) to still use PetscFE/PetscQuadrature functions and then use this data to evaluate residual and jacobian using SNESSet?(), or is it better to just use our own pieces of code for all of this ? Thanks, Nestor --------------------------------------------------- --------------------------------------------------- Nestor Cerpa Gilvonio Postdoctoral researcher Department of Earth Sciences University of Minnesota 310 Pillsbury Drive SE Minnepolis, MN 55455 E-mail : ncerpagi at umn.edu > On May 22, 2016, at 7:06 AM, Matthew Knepley wrote: > > On Sun, May 22, 2016 at 2:08 AM, Nestor Cerpa > wrote: > Hello, > > I am trying to use PetscDSSetResidual and PetscDSSetJacobian in a simple fortran code but they seem to be missing in the include files (petsc-master). I get this error message : > > They are not there. Taking function pointers from Fortran is complex, and I do not understand the new > framework that Jed put in place to do this. 
It is used, for example in SNESSetFunction(). I would be > happy to integrate code if you have time to implement it, but right now I am pressed for time. > > Undefined symbols for architecture x86_64: > "_petscdssetresidual_", referenced from: > _MAIN__ in Poisson.o > ld: symbol(s) not found for architecture x86_64 > collect2: error: ld returned 1 exit status > make: [Poisson] Error 1 (ignored) > /bin/rm -f -f Poisson.o > > I am also wondering if there are snes examples like ex12 or ex62 in fortran? > > Everyone using Plex in Fortran has so far preferred to handle the discretization themselves. I hope > that changes, but as you note I will have to get the pointwise function support in there first. > > Thanks, > > Matt > > Thanks, > Nestor > > --------------------------------------------------- > --------------------------------------------------- > > Nestor Cerpa Gilvonio > Postdoctoral research associate > > Department of Earth Sciences > University of Minnesota > 310 Pillsbury Drive SE > Minnepolis, MN 55455 > > E-mail : ncerpagi at umn.edu > > > > > > -- > What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. > -- Norbert Wiener -------------- next part -------------- An HTML attachment was scrubbed... URL: From eric.robertson at cfdrc.com Mon May 23 14:08:30 2016 From: eric.robertson at cfdrc.com (Eric D. Robertson) Date: Mon, 23 May 2016 19:08:30 +0000 Subject: [petsc-users] SVD Usage Message-ID: <1B22639EA62F9B41AF2D9DE147555DF30BA9BB6A@Mail.cfdrc.com> I want to perform an SVD on a 100000 x 10 dense matrix using slepc SVD. I am able to do this in a test program in serial and print the singular values, but in parallel I cannot. Attached are the error messages and the source. Is there anything in particular that might keep me from running this in parallel? -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- An embedded and charset-unspecified text was scrubbed... Name: svderror.txt URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: petsc_mat2.cxx Type: text/x-c Size: 1542 bytes Desc: petsc_mat2.cxx URL: From jroman at dsic.upv.es Mon May 23 14:57:39 2016 From: jroman at dsic.upv.es (Jose E. Roman) Date: Mon, 23 May 2016 21:57:39 +0200 Subject: [petsc-users] SVD Usage In-Reply-To: <1B22639EA62F9B41AF2D9DE147555DF30BA9BB6A@Mail.cfdrc.com> References: <1B22639EA62F9B41AF2D9DE147555DF30BA9BB6A@Mail.cfdrc.com> Message-ID: <78A4E1EB-80D3-499D-AF68-9CEF8E3F240B@dsic.upv.es> > El 23 may 2016, a las 21:08, Eric D. Robertson escribi?: > > I want to perform an SVD on a 100000 x 10 dense matrix using slepc SVD. I am able to do this in a test program in serial and print the singular values, but in parallel I cannot. > > Attached are the error messages and the source. Is there anything in particular that might keep me from running this in parallel? > > > I think the problem is placing MatSetRandom() before assembly. If it is placed AFTER MatAssemblyEnd() it seems to work. 
Jose From mfadams at lbl.gov Mon May 23 19:22:42 2016 From: mfadams at lbl.gov (Mark Adams) Date: Mon, 23 May 2016 20:22:42 -0400 Subject: [petsc-users] GAMG Indefinite In-Reply-To: <487cb6c8-a6fd-52f3-49a8-222f2f45cc32@berkeley.edu> References: <58770cd6-9279-16c2-9a2f-fd459fa16cb5@berkeley.edu> <064EC0CA-6E92-4DD3-9F0A-1012A1CCABFE@mcs.anl.gov> <481b05f5-19b6-4537-e2d7-f95a7dc3352c@berkeley.edu> <487cb6c8-a6fd-52f3-49a8-222f2f45cc32@berkeley.edu> Message-ID: I'm glad it worked. Its funny this problem has been in the repo since last year but in this last week it has been blowing up all over the place. In a nutshell, Chebyshev polynomials have some really nice properties and multigrid smoothers, especially in an AMG context where you don't know anything about the spectra of your (diagonally preconditioned) operator. [skip some linear algebra background]. Chebyshev needs the largest eigenvalue (lam) of your operator, which is easy to compute in theory. Two problems: 1) if you underestimate this lam you die as you have seen, and 2) there is no provable way to get lam cheaply. But in practice we can get pretty good (under)estimates of lam with Krylov methods (Power method being the simplest of these). We then increase this estimate with a safety factor like 1.05. Works but not provable. Last year we made a change that made this estimate bad in some/many cases. As far as what are you looking at, these iterations are converging on "max" (lam), at each level, eg: 35 KSP Residual norm 1.017388214943e-04 % max 1.570600354842e+00 min 1.495477205174e-03 max/min 1.050233563847e+03 ^^^^^^^^^^^^^^^^^^^^^ This should asymptot nicely to the true lam. The "bug" that we had, and you can probably see it if you go back to the bad solver parameters but use a lot of eigest iterations (ie, 50 here), results in what looks like nice convergence and then all of the sudden lam ("max") jumps up after many iterations. Yikes! This is probably more of a problem with simple test problems than the real world [skip explanation], but it is a big problem in practice as you have seen. Again, sorry for the mess, Mark On Mon, May 23, 2016 at 12:31 PM, Sanjay Govindjee wrote: > Mark, > Yes, the problem is finite elasticity. I re-ran the problem with the > options table and output shown below. It converges now > based on the residual tolerance test (KSPConvergedReason == 2); note I am > using Using Petsc Release Version 3.7.0, Apr, 25, 2016. > Seems to work now with these options; wish I understood what they all > meant! > -sanjay > > ------ Options Table ------ > > -ksp_chebyshev_esteig_random > -ksp_monitor > -ksp_type cg > -ksp_view > -log_view > -mg_levels_esteig_ksp_max_it 50 > -mg_levels_esteig_ksp_monitor_singular_value > -mg_levels_esteig_ksp_type cg > -mg_levels_ksp_chebyshev_esteig 0,0.05,0,1.05 > -mg_levels_ksp_type chebyshev > -mg_levels_pc_type sor > -options_left > -pc_gamg_agg_nsmooths 1 > -pc_gamg_square_graph 1 > -pc_gamg_type agg > -pc_type gamg > #End of PETSc Option Table entries > There is one unused database option. 
It is: > Option left: name:-ksp_chebyshev_esteig_random (no value) > > > ---- Output ----- > > > Residual norm = 6.4807407E-02 1.0000000E+00 t= 0.06 > 0.00 > Residual norm = 6.4807407E-02 1.0000000E+00 t= 2.72 > 0.00 > 0 KSP Residual norm 1.560868061707e-03 % max 1.000000000000e+00 min > 1.000000000000e+00 max/min 1.000000000000e+00 > 1 KSP Residual norm 6.096709155146e-04 % max 5.751224548776e-01 min > 5.751224548776e-01 max/min 1.000000000000e+00 > 2 KSP Residual norm 7.011069458089e-04 % max 8.790398458008e-01 min > 1.423643814416e-01 max/min 6.174577073983e+00 > 3 KSP Residual norm 7.860563886831e-04 % max 9.381463130007e-01 min > 6.370569326093e-02 max/min 1.472625545661e+01 > 4 KSP Residual norm 7.281118133903e-04 % max 9.663599650986e-01 min > 3.450830062982e-02 max/min 2.800369613865e+01 > 5 KSP Residual norm 7.399083718116e-04 % max 9.794646011330e-01 min > 2.160135448201e-02 max/min 4.534274005590e+01 > 6 KSP Residual norm 7.629904179692e-04 % max 9.849620569978e-01 min > 1.481091947823e-02 max/min 6.650242467697e+01 > 7 KSP Residual norm 7.698477913710e-04 % max 9.886850579079e-01 min > 1.045801989510e-02 max/min 9.453845640235e+01 > 8 KSP Residual norm 8.217081868349e-04 % max 9.919371306726e-01 min > 7.371001382474e-03 max/min 1.345729133942e+02 > 9 KSP Residual norm 7.524701879786e-04 % max 9.942899758738e-01 min > 5.827670079091e-03 max/min 1.706153509687e+02 > 10 KSP Residual norm 6.944661718672e-04 % max 1.081740430310e+00 min > 5.160677620664e-03 max/min 2.096120916327e+02 > 11 KSP Residual norm 5.551504568073e-04 % max 1.486860759049e+00 min > 4.666379506027e-03 max/min 3.186326266710e+02 > 12 KSP Residual norm 5.259305993307e-04 % max 1.564857844056e+00 min > 4.221790647720e-03 max/min 3.706621134569e+02 > 13 KSP Residual norm 5.156103593875e-04 % max 1.569917781391e+00 min > 3.850994097592e-03 max/min 4.076655901322e+02 > 14 KSP Residual norm 5.312020352146e-04 % max 1.570323151514e+00 min > 3.568538377912e-03 max/min 4.400465919699e+02 > 15 KSP Residual norm 5.305598654979e-04 % max 1.570412447356e+00 min > 3.341861320427e-03 max/min 4.699214888890e+02 > 16 KSP Residual norm 5.058601071413e-04 % max 1.570467828098e+00 min > 3.091912672470e-03 max/min 5.079276145416e+02 > 17 KSP Residual norm 5.485622395473e-04 % max 1.570494963661e+00 min > 2.876900954621e-03 max/min 5.458981690483e+02 > 18 KSP Residual norm 5.368711040867e-04 % max 1.570521333658e+00 min > 2.664379213208e-03 max/min 5.894511283800e+02 > 19 KSP Residual norm 5.198795692341e-04 % max 1.570548096173e+00 min > 2.476198885896e-03 max/min 6.342576539867e+02 > 20 KSP Residual norm 5.958949153283e-04 % max 1.570562153443e+00 min > 2.291602609346e-03 max/min 6.853553696602e+02 > 21 KSP Residual norm 6.378632372927e-04 % max 1.570571141557e+00 min > 2.073298632606e-03 max/min 7.575228753145e+02 > 22 KSP Residual norm 5.831338029614e-04 % max 1.570577774384e+00 min > 1.893683789633e-03 max/min 8.293769968259e+02 > 23 KSP Residual norm 4.875913917209e-04 % max 1.570583697638e+00 min > 1.759110933887e-03 max/min 8.928281141244e+02 > 24 KSP Residual norm 4.107781613610e-04 % max 1.570587819069e+00 min > 1.672892328012e-03 max/min 9.388457300985e+02 > 25 KSP Residual norm 3.715001142988e-04 % max 1.570590462451e+00 min > 1.617963853373e-03 max/min 9.707203650914e+02 > 26 KSP Residual norm 2.991751132378e-04 % max 1.570593709995e+00 min > 1.573420350354e-03 max/min 9.982035059109e+02 > 27 KSP Residual norm 2.208369736689e-04 % max 1.570597161312e+00 min > 1.545543181679e-03 max/min 1.016210468870e+03 > 28 KSP 
Residual norm 1.811301040805e-04 % max 1.570599147395e+00 min > 1.529650294818e-03 max/min 1.026770074647e+03 > 29 KSP Residual norm 1.466232980955e-04 % max 1.570599898730e+00 min > 1.518818086503e-03 max/min 1.034093491964e+03 > 30 KSP Residual norm 1.206956326419e-04 % max 1.570600148946e+00 min > 1.512437955816e-03 max/min 1.038455920063e+03 > 31 KSP Residual norm 8.815660316339e-05 % max 1.570600276978e+00 min > 1.508452842477e-03 max/min 1.041199454667e+03 > 32 KSP Residual norm 8.077031357734e-05 % max 1.570600325445e+00 min > 1.505888596198e-03 max/min 1.042972454543e+03 > 33 KSP Residual norm 8.820553812137e-05 % max 1.570600343673e+00 min > 1.503328090889e-03 max/min 1.044748882956e+03 > 34 KSP Residual norm 9.117950808819e-05 % max 1.570600352355e+00 min > 1.499931693668e-03 max/min 1.047114584608e+03 > 35 KSP Residual norm 1.017388214943e-04 % max 1.570600354842e+00 min > 1.495477205174e-03 max/min 1.050233563847e+03 > 36 KSP Residual norm 8.890686455242e-05 % max 1.570600355527e+00 min > 1.491021415006e-03 max/min 1.053372097624e+03 > 37 KSP Residual norm 6.145275695828e-05 % max 1.570600355891e+00 min > 1.487712171225e-03 max/min 1.055715202355e+03 > 38 KSP Residual norm 4.601453163034e-05 % max 1.570600356004e+00 min > 1.486196684310e-03 max/min 1.056791723858e+03 > 39 KSP Residual norm 3.537992820781e-05 % max 1.570600356044e+00 min > 1.485426023422e-03 max/min 1.057340002989e+03 > 40 KSP Residual norm 2.801997681470e-05 % max 1.570600356064e+00 min > 1.484937369198e-03 max/min 1.057687946066e+03 > 41 KSP Residual norm 2.422931135206e-05 % max 1.570600356072e+00 min > 1.484576829299e-03 max/min 1.057944813010e+03 > 42 KSP Residual norm 2.099844940173e-05 % max 1.570600356076e+00 min > 1.484295103277e-03 max/min 1.058145615794e+03 > 43 KSP Residual norm 1.692165408662e-05 % max 1.570600356077e+00 min > 1.484073208085e-03 max/min 1.058303827278e+03 > 44 KSP Residual norm 1.303434704073e-05 % max 1.570600356078e+00 min > 1.483945099557e-03 max/min 1.058395190325e+03 > 45 KSP Residual norm 1.116085143372e-05 % max 1.570600356078e+00 min > 1.483862236504e-03 max/min 1.058454294098e+03 > 46 KSP Residual norm 1.042314767557e-05 % max 1.570600356078e+00 min > 1.483776389632e-03 max/min 1.058515533104e+03 > 47 KSP Residual norm 8.547779619028e-06 % max 1.570600356078e+00 min > 1.483710767635e-03 max/min 1.058562349440e+03 > 48 KSP Residual norm 5.686341715603e-06 % max 1.570600356078e+00 min > 1.483675385024e-03 max/min 1.058587593979e+03 > 49 KSP Residual norm 3.605023485024e-06 % max 1.570600356078e+00 min > 1.483660745235e-03 max/min 1.058598039425e+03 > 50 KSP Residual norm 2.272532759782e-06 % max 1.570600356078e+00 min > 1.483654995309e-03 max/min 1.058602142037e+03 > 0 KSP Residual norm 5.279802306740e-03 % max 1.000000000000e+00 > min 1.000000000000e+00 max/min 1.000000000000e+00 > 1 KSP Residual norm 3.528673026246e-03 % max 3.090782483684e-01 > min 3.090782483684e-01 max/min 1.000000000000e+00 > 2 KSP Residual norm 3.205752762808e-03 % max 8.446523416360e-01 > min 9.580457075356e-02 max/min 8.816409645096e+00 > 3 KSP Residual norm 2.374523626799e-03 % max 9.504624619604e-01 > min 6.564631404306e-02 max/min 1.447853509851e+01 > 4 KSP Residual norm 1.924918345847e-03 % max 9.848656903443e-01 > min 4.952831543265e-02 max/min 1.988490183325e+01 > 5 KSP Residual norm 6.808260964410e-04 % max 9.938458367029e-01 > min 4.251109574134e-02 max/min 2.337850434978e+01 > 6 KSP Residual norm 3.315021446270e-04 % max 9.967726858804e-01 > min 4.166488358467e-02 max/min 2.392356824554e+01 
> 7 KSP Residual norm 1.314681668602e-04 % max 9.982316137592e-01 > min 4.143867906023e-02 max/min 2.408936858987e+01 > 8 KSP Residual norm 4.301436610738e-05 % max 9.989590393061e-01 > min 4.138618158843e-02 max/min 2.413750196238e+01 > 9 KSP Residual norm 8.016910487452e-06 % max 9.994371206849e-01 > min 4.138180175046e-02 max/min 2.415160960636e+01 > 10 KSP Residual norm 6.525723903099e-07 % max 9.997527609942e-01 > min 4.138161245544e-02 max/min 2.415934763467e+01 > 11 KSP Residual norm 8.205412727192e-08 % max 9.998804196297e-01 > min 4.138161047930e-02 max/min 2.416243370059e+01 > 12 KSP Residual norm 1.645897401565e-08 % max 9.999250934790e-01 > min 4.138161045086e-02 max/min 2.416351327521e+01 > 13 KSP Residual norm 2.636218490435e-09 % max 9.999479210248e-01 > min 4.138161044973e-02 max/min 2.416406491090e+01 > 14 KSP Residual norm 2.614816263321e-10 % max 9.999674891694e-01 > min 4.138161044968e-02 max/min 2.416453778147e+01 > 15 KSP Residual norm 2.309894761749e-11 % max 9.999797578219e-01 > min 4.138161044968e-02 max/min 2.416483425742e+01 > 16 KSP Residual norm 2.261461487058e-12 % max 9.999874664751e-01 > min 4.138161044968e-02 max/min 2.416502053952e+01 > 17 KSP Residual norm 2.659598917594e-13 % max 9.999898150784e-01 > min 4.138161044968e-02 max/min 2.416507729428e+01 > 18 KSP Residual norm 1.822011884274e-14 % max 9.999918194957e-01 > min 4.138161044968e-02 max/min 2.416512573167e+01 > 19 KSP Residual norm 1.042398553176e-15 % max 9.999933278733e-01 > min 4.138161044968e-02 max/min 2.416516218210e+01 > 0 KSP Residual norm 8.439161590194e-02 > 1 KSP Residual norm 7.614890998257e-03 > 2 KSP Residual norm 1.514029318872e-03 > 3 KSP Residual norm 3.781832295258e-04 > 4 KSP Residual norm 3.799411703870e-05 > 5 KSP Residual norm 4.799680240826e-06 > 6 KSP Residual norm 9.360965396987e-07 > 7 KSP Residual norm 1.250237476907e-07 > 8 KSP Residual norm 2.036465606099e-08 > 9 KSP Residual norm 3.993471620298e-09 > 10 KSP Residual norm 5.041262213944e-10 > KSP Object: 2 MPI processes > type: cg > maximum iterations=10000, initial guess is zero > tolerances: relative=1e-08, absolute=1e-16, divergence=1e+16 > left preconditioning > using PRECONDITIONED norm type for convergence test > PC Object: 2 MPI processes > type: gamg > MG: type is MULTIPLICATIVE, levels=3 cycles=v > Cycles per PCApply=1 > Using Galerkin computed coarse grid matrices > GAMG specific options > Threshold for dropping small values from graph 0. > AGG specific options > Symmetric graph false > Coarse grid solver -- level ------------------------------- > KSP Object: (mg_coarse_) 2 MPI processes > type: preonly > maximum iterations=10000, initial guess is zero > tolerances: relative=1e-05, absolute=1e-50, divergence=10000. > left preconditioning > using NONE norm type for convergence test > PC Object: (mg_coarse_) 2 MPI processes > type: bjacobi > block Jacobi: number of blocks = 2 > Local solve is same for all blocks, in the following KSP and PC > objects: > KSP Object: (mg_coarse_sub_) 1 MPI processes > type: preonly > maximum iterations=1, initial guess is zero > tolerances: relative=1e-05, absolute=1e-50, divergence=10000. > left preconditioning > using NONE norm type for convergence test > PC Object: (mg_coarse_sub_) 1 MPI processes > type: lu > LU: out-of-place factorization > tolerance for zero pivot 2.22045e-14 > using diagonal shift on blocks to prevent zero pivot [INBLOCKS] > matrix ordering: nd > factor fill ratio given 5., needed 1. 
> Factored matrix follows: > Mat Object: 1 MPI processes > type: seqaij > rows=9, cols=9, bs=3 > package used to perform factorization: petsc > total: nonzeros=81, allocated nonzeros=81 > total number of mallocs used during MatSetValues calls =0 > using I-node routines: found 2 nodes, limit used is 5 > linear system matrix = precond matrix: > Mat Object: 1 MPI processes > type: seqaij > rows=9, cols=9, bs=3 > total: nonzeros=81, allocated nonzeros=81 > total number of mallocs used during MatSetValues calls =0 > using I-node routines: found 2 nodes, limit used is 5 > linear system matrix = precond matrix: > Mat Object: 2 MPI processes > type: mpiaij > rows=9, cols=9, bs=3 > total: nonzeros=81, allocated nonzeros=81 > total number of mallocs used during MatSetValues calls =0 > using I-node (on process 0) routines: found 2 nodes, limit used > is 5 > Down solver (pre-smoother) on level 1 ------------------------------- > KSP Object: (mg_levels_1_) 2 MPI processes > type: chebyshev > Chebyshev: eigenvalue estimates: min = 0.0499997, max = 1.04999 > Chebyshev: eigenvalues estimated using cg with translations [0. > 0.05; 0. 1.05] > KSP Object: (mg_levels_1_esteig_) 2 MPI processes > type: cg > maximum iterations=50, initial guess is zero > tolerances: relative=1e-12, absolute=1e-50, divergence=10000. > left preconditioning > using PRECONDITIONED norm type for convergence test > maximum iterations=2 > tolerances: relative=1e-05, absolute=1e-50, divergence=10000. > left preconditioning > using nonzero initial guess > using NONE norm type for convergence test > PC Object: (mg_levels_1_) 2 MPI processes > type: sor > SOR: type = local_symmetric, iterations = 1, local iterations = 1, > omega = 1. > linear system matrix = precond matrix: > Mat Object: 2 MPI processes > type: mpiaij > rows=54, cols=54, bs=3 > total: nonzeros=1764, allocated nonzeros=1764 > total number of mallocs used during MatSetValues calls =0 > using I-node (on process 0) routines: found 18 nodes, limit used > is 5 > Up solver (post-smoother) same as down solver (pre-smoother) > Down solver (pre-smoother) on level 2 ------------------------------- > KSP Object: (mg_levels_2_) 2 MPI processes > type: chebyshev > Chebyshev: eigenvalue estimates: min = 0.07853, max = 1.64913 > Chebyshev: eigenvalues estimated using cg with translations [0. > 0.05; 0. 1.05] > KSP Object: (mg_levels_2_esteig_) 2 MPI processes > type: cg > maximum iterations=50, initial guess is zero > tolerances: relative=1e-12, absolute=1e-50, divergence=10000. > left preconditioning > using PRECONDITIONED norm type for convergence test > maximum iterations=2 > tolerances: relative=1e-05, absolute=1e-50, divergence=10000. > left preconditioning > using nonzero initial guess > using NONE norm type for convergence test > PC Object: (mg_levels_2_) 2 MPI processes > type: sor > SOR: type = local_symmetric, iterations = 1, local iterations = 1, > omega = 1. 
> linear system matrix = precond matrix: > Mat Object: 2 MPI processes > type: mpiaij > rows=882, cols=882, bs=2 > total: nonzeros=26244, allocated nonzeros=26244 > total number of mallocs used during MatSetValues calls =0 > using I-node (on process 0) routines: found 189 nodes, limit > used is 5 > Up solver (post-smoother) same as down solver (pre-smoother) > linear system matrix = precond matrix: > Mat Object: 2 MPI processes > type: mpiaij > rows=882, cols=882, bs=2 > total: nonzeros=26244, allocated nonzeros=26244 > total number of mallocs used during MatSetValues calls =0 > using I-node (on process 0) routines: found 189 nodes, limit used is > 5 > CONVERGENCE: Satisfied residual tolerance Iterations = 10 > > > > On 5/22/16 3:02 PM, Mark Adams wrote: > > I thought you would have this also, so add it (I assume this is 3D > elasticity): > > -pc_gamg_square_graph 1 > -mg_levels_ksp_type chebyshev > -mg_levels_pc_type sor > > Plus what I just mentioned: > > -mg_levels_esteig_ksp_type cg > -mg_levels_ksp_chebyshev_esteig 0,0.05,0,1.05 > > Just for diagnostics add: > > -mg_levels_esteig_ksp_max_it 50 > -mg_levels_esteig_ksp_monitor_singular_value > -ksp_view > > > > On Sun, May 22, 2016 at 5:38 PM, Sanjay Govindjee < > s_g at berkeley.edu> wrote: > >> Mark, >> Can you give me the full option line that you want me to use? I >> currently have: >> >> -ksp_type cg -ksp_monitor -ksp_chebyshev_esteig_random -log_view -pc_type >> gamg -pc_gamg_type agg -pc_gamg_agg_nsmooths 1 -options_left >> >> -sanjay >> >> >> On 5/22/16 2:29 PM, Mark Adams wrote: >> >> Humm, maybe we have version mixup: >> >> src/ksp/ksp/impls/cheby/cheby.c: ierr = >> PetscOptionsBool("-ksp_chebyshev_esteig_random","Use random right hand side >> for >> estimate","KSPChebyshevEstEigSetUseRandom",cheb->userandom,&cheb->userandom >> >> Also, you should use CG. These other options are the defaults but CG is >> not: >> >> -mg_levels_esteig_ksp_type cg >> -mg_levels_esteig_ksp_max_it 10 >> -mg_levels_ksp_chebyshev_esteig 0,0.05,0,1.05 >> >> Anyway. you can also run with -info, which will be very noisy, but just >> grep for GAMG and send me that. >> >> Mark >> >> >> >> On Sat, May 21, 2016 at 6:03 PM, Sanjay Govindjee < >> s_g at berkeley.edu> wrote: >> >>> Mark, >>> I added the option you mentioned but it seems not to use it; >>> -options_left reports: >>> >>> #PETSc Option Table entries: >>> -ksp_chebyshev_esteig_random >>> -ksp_monitor >>> -ksp_type cg >>> -log_view >>> -options_left >>> -pc_gamg_agg_nsmooths 1 >>> -pc_gamg_type agg >>> -pc_type gamg >>> #End of PETSc Option Table entries >>> There is one unused database option. It is: >>> Option left: name:-ksp_chebyshev_esteig_random (no value) >>> >>> >>> >>> On 5/21/16 12:36 PM, Mark Adams wrote: >>> >>> Barry, this is probably the Chebyshev problem. >>> >>> Sanjay, this is fixed but has not yet been moved to the master branch. >>> You can fix this now with with -ksp_chebyshev_esteig_random. This should >>> recover v3.5 semantics. >>> >>> Mark >>> >>> On Thu, May 19, 2016 at 2:42 PM, Barry Smith < >>> bsmith at mcs.anl.gov> wrote: >>> >>>> >>>> We see this occasionally, there is nothing in the definition of GAMG >>>> that guarantees a positive definite preconditioner even if the operator was >>>> positive definite so we don't think this is a bug in the code. We've found >>>> using a slightly stronger smoother, like one more smoothing step seems to >>>> remove the problem. 
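(As a concrete, hedged reading of "one more smoothing step": the -ksp_view output above shows maximum iterations=2 for the mg_levels_ KSPs, so one assumed way to try this from the options database is

   -ksp_type cg -pc_type gamg -pc_gamg_type agg -pc_gamg_agg_nsmooths 1 -mg_levels_ksp_max_it 3

i.e. three Chebyshev/SOR sweeps per level instead of the default two; -mg_levels_ksp_max_it is just the standard ksp_max_it option under the mg_levels_ prefix.)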
>>>> >>>> Barry >>>> >>>> > On May 19, 2016, at 1:07 PM, Sanjay Govindjee < >>>> s_g at berkeley.edu> wrote: >>>> > >>>> > I am trying to solve a very ordinary nonlinear elasticity problem >>>> > using -ksp_type cg -pc_type gamg in PETSc 3.7.0, which worked fine >>>> > in PETSc 3.5.3. >>>> > >>>> > The problem I am seeing is on my first Newton iteration, the Ax=b >>>> > solve returns with and Indefinite Preconditioner error >>>> (KSPGetConvergedReason == -8): >>>> > (log_view.txt output also attached) >>>> > >>>> > 0 KSP Residual norm 8.411630828687e-02 >>>> > 1 KSP Residual norm 2.852209578900e-02 >>>> > NO CONVERGENCE REASON: Indefinite Preconditioner >>>> > NO CONVERGENCE REASON: Indefinite Preconditioner >>>> > >>>> > On the next and subsequent Newton iterations, I see perfectly normal >>>> > behavior and the problem converges quadratically. The results look >>>> fine. >>>> > >>>> > I tried the same problem with -pc_type jacobi as well as super-lu, >>>> and mumps >>>> > and they all work without complaint. >>>> > >>>> > My run line for GAMG is: >>>> > -ksp_type cg -ksp_monitor -log_view -pc_type gamg -pc_gamg_type agg >>>> -pc_gamg_agg_nsmooths 1 -options_left >>>> > >>>> > The code flow looks like: >>>> > >>>> > ! If no matrix allocation yet >>>> > if(Kmat.eq.0) then >>>> > call MatCreate(PETSC_COMM_WORLD,Kmat,ierr) >>>> > call >>>> MatSetSizes(Kmat,numpeq,numpeq,PETSC_DETERMINE,PETSC_DETERMINE,ierr) >>>> > call MatSetBlockSize(Kmat,nsbk,ierr) >>>> > call MatSetFromOptions(Kmat, ierr) >>>> > call MatSetType(Kmat,MATAIJ,ierr) >>>> > call >>>> MatMPIAIJSetPreallocation(Kmat,PETSC_NULL_INTEGER,mr(np(246)),PETSC_NULL_INTEGER,mr(np(247)),ierr) >>>> > call >>>> MatSeqAIJSetPreallocation(Kmat,PETSC_NULL_INTEGER,mr(np(246)),ierr) >>>> > endif >>>> > >>>> > call MatZeroEntries(Kmat,ierr) >>>> > >>>> > ! Code to set values in matrix >>>> > >>>> > call MatAssemblyBegin(Kmat, MAT_FINAL_ASSEMBLY, ierr) >>>> > call MatAssemblyEnd(Kmat, MAT_FINAL_ASSEMBLY, ierr) >>>> > call MatSetOption(Kmat,MAT_NEW_NONZERO_LOCATIONS,PETSC_TRUE,ierr) >>>> > >>>> > ! If no rhs allocation yet >>>> > if(rhs.eq.0) then >>>> > call VecCreate (PETSC_COMM_WORLD, rhs, ierr) >>>> > call VecSetSizes (rhs, numpeq, PETSC_DECIDE, ierr) >>>> > call VecSetFromOptions(rhs, ierr) >>>> > endif >>>> > >>>> > ! Code to set values in RHS >>>> > >>>> > call VecAssemblyBegin(rhs, ierr) >>>> > call VecAssemblyEnd(rhs, ierr) >>>> > >>>> > if(kspsol_exists) then >>>> > call KSPDestroy(kspsol,ierr) >>>> > endif >>>> > >>>> > call KSPCreate(PETSC_COMM_WORLD, kspsol ,ierr) >>>> > call KSPSetOperators(kspsol, Kmat, Kmat, ierr) >>>> > call KSPSetFromOptions(kspsol,ierr) >>>> > call KSPGetPC(kspsol, pc , ierr) >>>> > >>>> > call PCSetCoordinates(pc,ndm,numpn,hr(np(43)),ierr) >>>> > >>>> > call KSPSolve(kspsol, rhs, sol, ierr) >>>> > call KSPGetConvergedReason(kspsol,reason,ierr) >>>> > >>>> > ! update solution, go back to the top >>>> > >>>> > reason is coming back as -8 on my first Ax=b solve and 2 or 3 after >>>> that >>>> > (with gamg). With the other solvers it is coming back as 2 or 3 for >>>> > iterative options and 4 if I use one of the direct solvers. >>>> > >>>> > Any ideas on what is causing the Indefinite PC on the first iteration >>>> with GAMG? 
>>>> > >>>> > Thanks in advance, >>>> > -sanjay >>>> > >>>> > -- >>>> > ----------------------------------------------- >>>> > Sanjay Govindjee, PhD, PE >>>> > Professor of Civil Engineering >>>> > >>>> > 779 Davis Hall >>>> > University of California >>>> > Berkeley, CA 94720-1710 >>>> > >>>> > Voice: +1 510 642 6060 <%2B1%20510%20642%206060> >>>> > FAX: +1 510 643 5264 <%2B1%20510%20643%205264> >>>> > >>>> > s_g at berkeley.edu >>>> > >>>> http://www.ce.berkeley.edu/~sanjay >>>> > >>>> > ----------------------------------------------- >>>> > >>>> > Books: >>>> > >>>> > Engineering Mechanics of Deformable >>>> > Solids: A Presentation with Exercises >>>> > >>>> > >>>> >>>> http://www.oup.com/us/catalog/general/subject/Physics/MaterialsScience/?view=usa&ci=9780199651641 >>>> > >>>> http://ukcatalogue.oup.com/product/9780199651641.do >>>> > http://amzn.com/0199651647 >>>> > >>>> > >>>> > Engineering Mechanics 3 (Dynamics) 2nd Edition >>>> > >>>> > >>>> http://www.springer.com/978-3-642-53711-0 >>>> > http://amzn.com/3642537111 >>>> > >>>> > >>>> > Engineering Mechanics 3, Supplementary Problems: Dynamics >>>> > >>>> > http://www.amzn.com/B00SOXN8JU >>>> > >>>> > >>>> > ----------------------------------------------- >>>> > >>>> > >>>> >>>> >>> >>> -- >>> ----------------------------------------------- >>> Sanjay Govindjee, PhD, PE >>> Professor of Civil Engineering >>> >>> 779 Davis Hall >>> University of California >>> Berkeley, CA 94720-1710 >>> >>> Voice: +1 510 642 6060 >>> FAX: +1 510 643 5264s_g at berkeley.eduhttp://www.ce.berkeley.edu/~sanjay >>> ----------------------------------------------- >>> >>> Books: >>> >>> Engineering Mechanics of Deformable >>> Solids: A Presentation with Exerciseshttp://www.oup.com/us/catalog/general/subject/Physics/MaterialsScience/?view=usa&ci=9780199651641http://ukcatalogue.oup.com/product/9780199651641.dohttp://amzn.com/0199651647 >>> >>> Engineering Mechanics 3 (Dynamics) 2nd Editionhttp://www.springer.com/978-3-642-53711-0http://amzn.com/3642537111 >>> >>> Engineering Mechanics 3, Supplementary Problems: Dynamics http://www.amzn.com/B00SOXN8JU >>> >>> ----------------------------------------------- >>> >>> >> >> > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From knepley at gmail.com Mon May 23 19:59:05 2016 From: knepley at gmail.com (Matthew Knepley) Date: Mon, 23 May 2016 19:59:05 -0500 Subject: [petsc-users] PetscDSSetResidual/PetscDSSetJacobian in fortran In-Reply-To: References: <0D221479-8D21-408F-B629-40E68EC65F8D@umn.edu> Message-ID: On Mon, May 23, 2016 at 12:23 PM, Nestor Cerpa Gilvonio wrote: > Thank you for the response. > However, I am unsure what you mean by ?to handle discretization > themselves?. Would you recommend (or is it possible) to still use > PetscFE/PetscQuadrature functions and then use this data to evaluate > residual and jacobian using SNESSet?(), or is it better to just use our own > pieces of code for all of this ? > I mean they use DMPlex to handle topolgoy/geometry and PetscSection to handle parallel data layout. However, they do not use PetscFE/FV or PetscDS, instead managing the discretization and assembly themselves. You could easily use PetscFE/Quad and your own assembly loop (its fairly easy to look at mine). I would recommend the local version SNESSetFunctionLocal(). 
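As an illustration of that do-it-yourself assembly loop, here is a minimal C sketch (a sketch only, not code from this thread; the element kernel is left as a placeholder and DMSNESSetFunctionLocal() is assumed as the C-side entry point for a local residual callback):

#include <petscsnes.h>
#include <petscdmplex.h>

/* Local residual callback: Xloc already carries ghost values, Floc receives the local residual */
static PetscErrorCode FormFunctionLocal(DM dm, Vec Xloc, Vec Floc, void *ctx)
{
  PetscInt       c, cStart, cEnd;
  PetscErrorCode ierr;

  PetscFunctionBeginUser;
  ierr = DMPlexGetHeightStratum(dm, 0, &cStart, &cEnd);CHKERRQ(ierr); /* height 0 = cells */
  for (c = cStart; c < cEnd; ++c) {
    PetscScalar *x = NULL;
    PetscInt     n;
    ierr = DMPlexVecGetClosure(dm, NULL, Xloc, c, &n, &x);CHKERRQ(ierr);
    /* ... evaluate the element residual from x with your own PetscFE/PetscQuadrature loop,
       then scatter it with DMPlexVecSetClosure(dm, NULL, Floc, c, elemF, ADD_VALUES) ... */
    ierr = DMPlexVecRestoreClosure(dm, NULL, Xloc, c, &n, &x);CHKERRQ(ierr);
  }
  PetscFunctionReturn(0);
}

/* Registration, once the DM and its PetscSection are set up:
     ierr = DMSNESSetFunctionLocal(dm, FormFunctionLocal, NULL);CHKERRQ(ierr);  */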
Thanks, Matt > Thanks, > > Nestor > > > --------------------------------------------------- > --------------------------------------------------- > > Nestor Cerpa Gilvonio > Postdoctoral researcher > > Department of Earth Sciences > University of Minnesota > 310 Pillsbury Drive SE > Minnepolis, MN 55455 > > E-mail : ncerpagi at umn.edu > > > > On May 22, 2016, at 7:06 AM, Matthew Knepley wrote: > > On Sun, May 22, 2016 at 2:08 AM, Nestor Cerpa wrote: > >> Hello, >> >> I am trying to use PetscDSSetResidual and PetscDSSetJacobian in a simple >> fortran code but they seem to be missing in the include files >> (petsc-master). I get this error message : >> > > They are not there. Taking function pointers from Fortran is complex, and > I do not understand the new > framework that Jed put in place to do this. It is used, for example in > SNESSetFunction(). I would be > happy to integrate code if you have time to implement it, but right now I > am pressed for time. > > >> *Undefined symbols for architecture x86_64:* >> * "_petscdssetresidual_", referenced from:* >> * _MAIN__ in Poisson.o* >> *ld: symbol(s) not found for architecture x86_64* >> *collect2: error: ld returned 1 exit status* >> *make: [Poisson] Error 1 (ignored)* >> */bin/rm -f -f Poisson.o* >> >> I am also wondering if there are snes examples like ex12 or ex62 in >> fortran? >> > > Everyone using Plex in Fortran has so far preferred to handle the > discretization themselves. I hope > that changes, but as you note I will have to get the pointwise function > support in there first. > > Thanks, > > Matt > > >> Thanks, >> Nestor >> >> --------------------------------------------------- >> --------------------------------------------------- >> >> Nestor Cerpa Gilvonio >> Postdoctoral research associate >> >> Department of Earth Sciences >> University of Minnesota >> 310 Pillsbury Drive SE >> Minnepolis, MN 55455 >> >> E-mail : ncerpagi at umn.edu >> >> >> > > > -- > What most experimenters take for granted before they begin their > experiments is infinitely more interesting than any results to which their > experiments lead. > -- Norbert Wiener > > > -- What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. -- Norbert Wiener -------------- next part -------------- An HTML attachment was scrubbed... URL: From rickcha at googlemail.com Tue May 24 04:32:21 2016 From: rickcha at googlemail.com (rickcha at googlemail.com) Date: Tue, 24 May 2016 11:32:21 +0200 Subject: [petsc-users] DMPlexCreateGmshFromFile seems to have problems with Physical names Message-ID: <80A4FA31-9664-47B4-9E84-8106290BAA49@googlemail.com> Hello, I am a new user of Petsc trying to make my rudimentary FEM code run on petsc for the purpose of testing coupling approaches of structural analysis with an existing multiphysics Fortran Code. For mesh generation I use Gmesh which has served me well in the past. Naturally I would like to import the mesh that I created in Gmsh as a DMPlex object but this only works when there are no Physical Names present in the .msh file. From the following conversation : http://lists.mcs.anl.gov/pipermail/petsc-users/2015-January/024101.html I deduct that it should be possible to import the Phsical Names or tags. 
However I get the following error message: [0]PETSC ERROR: --------------------- Error Message -------------------------------------------------------------- [0]PETSC ERROR: Invalid argument [0]PETSC ERROR: File is not a valid Gmsh file [0]PETSC ERROR: See http://www.mcs.anl.gov/petsc/documentation/faq.html for trouble shooting. [0]PETSC ERROR: Petsc Release Version 3.7.1, unknown I am not sure whether it is incorrect use of the routines from my side or a bug in Petsc. I attached below the code and my .msh file. Thank you in advance, M. Hartig -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: sifemc.c Type: application/octet-stream Size: 726 bytes Desc: not available URL: -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: testmesh_2D_box_quad.msh Type: application/octet-stream Size: 16699 bytes Desc: not available URL: -------------- next part -------------- An HTML attachment was scrubbed... URL: From knepley at gmail.com Tue May 24 08:07:20 2016 From: knepley at gmail.com (Matthew Knepley) Date: Tue, 24 May 2016 08:07:20 -0500 Subject: [petsc-users] DMPlexCreateGmshFromFile seems to have problems with Physical names In-Reply-To: <80A4FA31-9664-47B4-9E84-8106290BAA49@googlemail.com> References: <80A4FA31-9664-47B4-9E84-8106290BAA49@googlemail.com> Message-ID: On Tue, May 24, 2016 at 4:32 AM, wrote: > Hello, > > I am a new user of Petsc trying to make my rudimentary FEM code run on > petsc for the purpose of testing coupling approaches of structural analysis > with an existing multiphysics Fortran Code. For mesh generation I use Gmesh > which has served me well in the past. > Naturally I would like to import the mesh that I created in Gmsh as a > DMPlex object but this only works when there are no Physical Names present > in the .msh file. > From the following conversation : > http://lists.mcs.anl.gov/pipermail/petsc-users/2015-January/024101.html I > deduct that it should be possible to import the Phsical Names or tags.
> However I get the following error message: > > [0]PETSC ERROR: --------------------- Error Message > -------------------------------------------------------------- > [0]PETSC ERROR: Invalid argument > [0]PETSC ERROR: File is not a valid Gmsh file > [0]PETSC ERROR: See http://www.mcs.anl.gov/petsc/documentation/faq.html > for trouble shooting. > [0]PETSC ERROR: Petsc Release Version 3.7.1, unknown > > I am not sure whether it is incorrect use of the routines from my side or > a bug in Petsc. > I attached below the code and my .msh file. > Its my fault. We do not read that section. I will write code to read it in. However, I am not sure about integrating that information. I do not have a place for a lookup table of strings. I will update the 'next' branch when the code is finished. Thanks, Matt > Thank you in advance, > M. Hartig > > > > > -- What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. -- Norbert Wiener -------------- next part -------------- An HTML attachment was scrubbed... URL: From eric.robertson at cfdrc.com Tue May 24 09:54:02 2016 From: eric.robertson at cfdrc.com (Eric D. Robertson) Date: Tue, 24 May 2016 14:54:02 +0000 Subject: [petsc-users] SVD Usage In-Reply-To: <78A4E1EB-80D3-499D-AF68-9CEF8E3F240B@dsic.upv.es> References: <1B22639EA62F9B41AF2D9DE147555DF30BA9BB6A@Mail.cfdrc.com> <78A4E1EB-80D3-499D-AF68-9CEF8E3F240B@dsic.upv.es> Message-ID: <1B22639EA62F9B41AF2D9DE147555DF30BA9BCD5@Mail.cfdrc.com> That works. Thank you. -----Original Message----- From: Jose E. Roman [mailto:jroman at dsic.upv.es] Sent: Monday, May 23, 2016 2:58 PM To: Eric D. Robertson Cc: petsc-users at mcs.anl.gov Subject: Re: [petsc-users] SVD Usage > El 23 may 2016, a las 21:08, Eric D. Robertson escribi?: > > I want to perform an SVD on a 100000 x 10 dense matrix using slepc SVD. I am able to do this in a test program in serial and print the singular values, but in parallel I cannot. > > Attached are the error messages and the source. Is there anything in particular that might keep me from running this in parallel? > > > I think the problem is placing MatSetRandom() before assembly. If it is placed AFTER MatAssemblyEnd() it seems to work. Jose From alice.raeli at math.u-bordeaux.fr Tue May 24 11:22:39 2016 From: alice.raeli at math.u-bordeaux.fr (Alice Raeli) Date: Tue, 24 May 2016 18:22:39 +0200 Subject: [petsc-users] Troubles with VecScatter routines: inverting vectors values Message-ID: <31AA0F14-C972-4D96-9598-93ED80DAB499@math.u-bordeaux.fr> Hi All, I had the necessity to read data on another node of a Vector Petsc. I followed the structure showed in the tutorial of VecScatterCreateToAll. I call the function below 2 times with 2 different vectors vec1 and vec2. void Scatter(Vec vec, PetscScalar &value, vector NeighboursIndex, int NeighNumber){ Vec V_SEQ; VecScatter ctx; PetscScalar *vecArray; VecScatterCreateToAll(vec,&ctx,&V_SEQ); VecScatterBegin(ctx,vec,V_SEQ,INSERT_VALUES,SCATTER_FORWARD); VecScatterEnd(ctx,vec,V_SEQ,INSERT_VALUES,SCATTER_FORWARD); ierr = VecGetArray(V_SEQ,&vecArray); for(int i=0; i alice.raeli at u-bordeaux.fr -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From knepley at gmail.com Tue May 24 11:37:02 2016 From: knepley at gmail.com (Matthew Knepley) Date: Tue, 24 May 2016 11:37:02 -0500 Subject: [petsc-users] Troubles with VecScatter routines: inverting vectors values In-Reply-To: <31AA0F14-C972-4D96-9598-93ED80DAB499@math.u-bordeaux.fr> References: <31AA0F14-C972-4D96-9598-93ED80DAB499@math.u-bordeaux.fr> Message-ID: On Tue, May 24, 2016 at 11:22 AM, Alice Raeli < alice.raeli at math.u-bordeaux.fr> wrote: > Hi All, > > I had the necessity to read data on another node of a Vector Petsc. > I followed the structure showed in the tutorial of VecScatterCreateToAll. > I call the function below 2 times with 2 different vectors vec1 and vec2. > > void Scatter(Vec vec, PetscScalar &value, vector NeighboursIndex, int > NeighNumber){ > > > Vec V_SEQ; > VecScatter ctx; > PetscScalar *vecArray; > > > VecScatterCreateToAll(vec,&ctx,&V_SEQ); > VecScatterBegin(ctx,vec,V_SEQ,INSERT_VALUES,SCATTER_FORWARD); > VecScatterEnd(ctx,vec,V_SEQ,INSERT_VALUES,SCATTER_FORWARD); > ierr = VecGetArray(V_SEQ,&vecArray); > for(int i=0; i PetscInt index=NeighboursIndex[i]; > value = vecArray[index]; > > if(NeighboursIndex[0]==869 && (index==363 || index == 364 || index > ==365) ){ > cout << endl << index << " inside function " << value; > } > > } > // > ierr = VecRestoreArray(V_SEQ,&vecArray); > VecScatterDestroy(&ctx); > VecDestroy(&V_SEQ); > return; > } > > When I run my code with one node i have a result coherent but if I use 2 > nodes the vectors are inverted in values. > VecScatter just sends values. It does not process them in any way. You must have other code in your program that is altering the values. If you can send a small example, we will look at it. Thanks, Matt > I used for a test a vector of 1 and a vector of 0 and I concentrated the > output on a point that communicates with the other node. > > Can you explain me what is effectively wrong when I call the vecscatter > routines? I can?t see a reason coherent with a simple inversion of values > from 2 vectors. > > Example of output > 1 node: > first call vec1 > 364 inside function 87.797 > 363 inside function 89.0416 > 365 inside function 86.4618 > second call vec2 > 364 inside function 0.0268704 > 363 inside function 0.0322394 > 365 inside function 0.0199141 > > 2 nodes: > first call vec1 > 364 inside function 0.0268704 > 363 inside function 0.0322394 > 365 inside function 0.0199141 > second call vec2 > 364 inside function 87.797 > 363 inside function 89.0416 > 365 inside function 86.4618 > > *Many Thanks* > *Alice Raeli* > *I*nstitut de *M*ath?matiques de *B*ordeaux *(IMB)* > > alice.raeli at math.u-bordeaux1.fr > > alice.raeli at u-bordeaux.fr > -- What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. -- Norbert Wiener -------------- next part -------------- An HTML attachment was scrubbed... URL: From a.croucher at auckland.ac.nz Tue May 24 23:09:22 2016 From: a.croucher at auckland.ac.nz (Adrian Croucher) Date: Wed, 25 May 2016 16:09:22 +1200 Subject: [petsc-users] block matrix in serial In-Reply-To: References: <55F8CE54.5010707@auckland.ac.nz> <5714633E.8090709@auckland.ac.nz> Message-ID: <57452572.1060409@auckland.ac.nz> hi On 18/05/16 02:03, Matthew Knepley wrote: > > I have finally pushed what I think are the right fixes into master. > Can you take a look and let me know > if it is fixed for you? Yes, it looks like the block size is being set correctly now. Thanks very much! 
Cheers, Adrian -- Dr Adrian Croucher Senior Research Fellow Department of Engineering Science University of Auckland, New Zealand email: a.croucher at auckland.ac.nz tel: +64 (0)9 923 84611 -------------- next part -------------- An HTML attachment was scrubbed... URL: From constantin.nguyen.van at openmailbox.org Wed May 25 02:40:57 2016 From: constantin.nguyen.van at openmailbox.org (Constantin Nguyen Van) Date: Wed, 25 May 2016 09:40:57 +0200 Subject: [petsc-users] KSPSetUp with PETSc/MUMPS Message-ID: <9f84279ee5c5358fbba0a3d55c537c09@openmailbox.org> Hi, I'm a new user of PETSc and I try to use it with MUMPS functionalities to compute a nullbasis. I wrote a code where I compute 4 times the same nullbasis. It does work well when I run it with several procs but with only one processor I get an error on the 2nd iteration when KSPSetUp is called. Furthermore when it is run with a debugger ( --with-debugging=yes), it works fine with one or several processors. Have you got any idea about why it doesn't work with one processor and no debugger? Thanks. Constantin. PS: You can find the code and the files required to run it enclosed. -------------- next part -------------- A non-text attachment was scrubbed... Name: mat_c_bin.txt Type: application/octet-stream Size: 128 bytes Desc: not available URL: -------------- next part -------------- An embedded and charset-unspecified text was scrubbed... Name: mat_c_bin.txt.info URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: build_nullbasis_petsc_mumps.F90 Type: text/x-c Size: 7227 bytes Desc: not available URL: From hzhang at mcs.anl.gov Wed May 25 11:43:16 2016 From: hzhang at mcs.anl.gov (Hong) Date: Wed, 25 May 2016 11:43:16 -0500 Subject: [petsc-users] KSPSetUp with PETSc/MUMPS In-Reply-To: <9f84279ee5c5358fbba0a3d55c537c09@openmailbox.org> References: <9f84279ee5c5358fbba0a3d55c537c09@openmailbox.org> Message-ID: Constantin: I can reproduce your report, and am trying to debug it. One thing I found is memory problem. With one processor on o-build, I get $ valgrind --tool=memcheck ./ex51f ==28659== Memcheck, a memory error detector ==28659== Copyright (C) 2002-2011, and GNU GPL'd, by Julian Seward et al.
==28659== Using Valgrind-3.7.0 and LibVEX; rerun with -h for copyright info ==28659== Command: ./ex51f ==28659== BEGIN PROC 0 ITERATION 1 ==28659== Invalid write of size 4 ==28659== at 0x509B519: matload_ (in /scratch/hzhang/petsc/arch-linux-gcc-gfortran-o/lib/libpetsc.so.3.07.1) ==28659== by 0x402204: MAIN__ (in /scratch/hzhang/petsc/src/ksp/ksp/examples/tests/ex51f) ==28659== by 0x4030DC: main (in /scratch/hzhang/petsc/src/ksp/ksp/examples/tests/ex51f) ==28659== Address 0xe is not stack'd, malloc'd or (recently) free'd g-build does not show this error, plenty memory leak though. I'm further investing it... Hong Hi, > > I'm a new user of PETSc and I try to use it with MUMPS functionalities to > compute a nullbasis. > I wrote a code where I compute 4 times the same nullbasis. It does work > well when I run it with several procs but with only one processor I get an > error on the 2nd iteration when KSPSetUp is called. Furthermore when it is > run with a debugger ( --with-debugging=yes), it works fine with one or > several processors. > Have you got any idea about why it doesn't work with one processor and no > debugger? > > Thanks. > Constantin. > > PS: You can find the code and the files required to run it enclosed. > -------------- next part -------------- An HTML attachment was scrubbed... URL: From balay at mcs.anl.gov Wed May 25 11:52:02 2016 From: balay at mcs.anl.gov (Satish Balay) Date: Wed, 25 May 2016 11:52:02 -0500 Subject: [petsc-users] KSPSetUp with PETSc/MUMPS In-Reply-To: References: <9f84279ee5c5358fbba0a3d55c537c09@openmailbox.org> Message-ID: call PetscViewerBinaryOpen(PETSC_COMM_WORLD, "mat_c_bin.txt", 0, viewer, ierr) This looks buggy. It should be: call PetscViewerBinaryOpen(PETSC_COMM_WORLD, "mat_c_bin.txt", FILE_MODE_READ, viewer, ierr) Satish On Wed, 25 May 2016, Hong wrote: > Constantin: > I can reproduce your report, and am trying to debug it. > One thing I found is memory problem. With one processor on o-build, > I get > $ valgrind --tool=memcheck ./ex51f > ==28659== Memcheck, a memory error detector > ==28659== Copyright (C) 2002-2011, and GNU GPL'd, by Julian Seward et al. > ==28659== Using Valgrind-3.7.0 and LibVEX; rerun with -h for copyright info > ==28659== Command: ./ex51f > ==28659== > BEGIN PROC 0 > ITERATION 1 > ==28659== Invalid write of size 4 > ==28659== at 0x509B519: matload_ (in > /scratch/hzhang/petsc/arch-linux-gcc-gfortran-o/lib/libpetsc.so.3.07.1) > ==28659== by 0x402204: MAIN__ (in > /scratch/hzhang/petsc/src/ksp/ksp/examples/tests/ex51f) > ==28659== by 0x4030DC: main (in > /scratch/hzhang/petsc/src/ksp/ksp/examples/tests/ex51f) > ==28659== Address 0xe is not stack'd, malloc'd or (recently) free'd > > g-build does not show this error, plenty memory leak though. > > I'm further investing it... > > Hong > > Hi, > > > > I'm a new user of PETSc and I try to use it with MUMPS functionalities to > > compute a nullbasis. > > I wrote a code where I compute 4 times the same nullbasis. It does work > > well when I run it with several procs but with only one processor I get an > > error on the 2nd iteration when KSPSetUp is called. Furthermore when it is > > run with a debugger ( --with-debugging=yes), it works fine with one or > > several processors. > > Have you got any idea about why it doesn't work with one processor and no > > debugger? > > > > Thanks. > > Constantin. > > > > PS: You can find the code and the files required to run it enclosed. 
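For context, the corrected load sequence as a minimal C sketch (the Fortran calls are the same apart from the trailing ierr argument; the file name is the one from the attached test case):

  PetscViewer    viewer;
  Mat            A;
  PetscErrorCode ierr;

  ierr = PetscViewerBinaryOpen(PETSC_COMM_WORLD, "mat_c_bin.txt", FILE_MODE_READ, &viewer);CHKERRQ(ierr);
  ierr = MatCreate(PETSC_COMM_WORLD, &A);CHKERRQ(ierr);
  ierr = MatSetFromOptions(A);CHKERRQ(ierr);  /* picks up -mat_type, defaults to AIJ */
  ierr = MatLoad(A, viewer);CHKERRQ(ierr);    /* MatLoad also returns an error code (ierr in Fortran) */
  ierr = PetscViewerDestroy(&viewer);CHKERRQ(ierr);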
> > > From balay at mcs.anl.gov Wed May 25 12:06:23 2016 From: balay at mcs.anl.gov (Satish Balay) Date: Wed, 25 May 2016 12:06:23 -0500 Subject: [petsc-users] KSPSetUp with PETSc/MUMPS In-Reply-To: References: <9f84279ee5c5358fbba0a3d55c537c09@openmailbox.org> Message-ID: > > ==28659== Invalid write of size 4 > > ==28659== at 0x509B519: matload_ (in > > /scratch/hzhang/petsc/arch-linux-gcc-gfortran-o/lib/libpetsc.so.3.07.1) This refers to the buggy call to MatLoad() call MatLoad(mat_c, viewer) Should be: call MatLoad(mat_c, viewer,ierr) Satish On Wed, 25 May 2016, Satish Balay wrote: > call PetscViewerBinaryOpen(PETSC_COMM_WORLD, "mat_c_bin.txt", 0, viewer, ierr) > > This looks buggy. It should be: > > call PetscViewerBinaryOpen(PETSC_COMM_WORLD, "mat_c_bin.txt", FILE_MODE_READ, viewer, ierr) > > Satish > > On Wed, 25 May 2016, Hong wrote: > > > Constantin: > > I can reproduce your report, and am trying to debug it. > > One thing I found is memory problem. With one processor on o-build, > > I get > > $ valgrind --tool=memcheck ./ex51f > > ==28659== Memcheck, a memory error detector > > ==28659== Copyright (C) 2002-2011, and GNU GPL'd, by Julian Seward et al. > > ==28659== Using Valgrind-3.7.0 and LibVEX; rerun with -h for copyright info > > ==28659== Command: ./ex51f > > ==28659== > > BEGIN PROC 0 > > ITERATION 1 > > ==28659== Invalid write of size 4 > > ==28659== at 0x509B519: matload_ (in > > /scratch/hzhang/petsc/arch-linux-gcc-gfortran-o/lib/libpetsc.so.3.07.1) > > ==28659== by 0x402204: MAIN__ (in > > /scratch/hzhang/petsc/src/ksp/ksp/examples/tests/ex51f) > > ==28659== by 0x4030DC: main (in > > /scratch/hzhang/petsc/src/ksp/ksp/examples/tests/ex51f) > > ==28659== Address 0xe is not stack'd, malloc'd or (recently) free'd > > > > g-build does not show this error, plenty memory leak though. > > > > I'm further investing it... > > > > Hong > > > > Hi, > > > > > > I'm a new user of PETSc and I try to use it with MUMPS functionalities to > > > compute a nullbasis. > > > I wrote a code where I compute 4 times the same nullbasis. It does work > > > well when I run it with several procs but with only one processor I get an > > > error on the 2nd iteration when KSPSetUp is called. Furthermore when it is > > > run with a debugger ( --with-debugging=yes), it works fine with one or > > > several processors. > > > Have you got any idea about why it doesn't work with one processor and no > > > debugger? > > > > > > Thanks. > > > Constantin. > > > > > > PS: You can find the code and the files required to run it enclosed. > > > > > > > From bsmith at mcs.anl.gov Wed May 25 13:04:10 2016 From: bsmith at mcs.anl.gov (Barry Smith) Date: Wed, 25 May 2016 13:04:10 -0500 Subject: [petsc-users] KSPSetUp with PETSc/MUMPS In-Reply-To: References: Message-ID: <3BE03BC7-BCD6-4FA3-8B2C-C701C623EA36@mcs.anl.gov> An HTML attachment was scrubbed... URL: From eric.robertson at cfdrc.com Wed May 25 22:19:07 2016 From: eric.robertson at cfdrc.com (Eric D. Robertson) Date: Thu, 26 May 2016 03:19:07 +0000 Subject: [petsc-users] MATELEMENTAL MatSetValue( ) Message-ID: <1B22639EA62F9B41AF2D9DE147555DF30BA9BFF6@Mail.cfdrc.com> I am trying to multiply two dense matrices using the Elemental interface. 
I fill the matrix using MatSetValue( ) like below: for ( i = 0; i < Matrix.M; i++){ for ( j = 0; j < Matrix.N; j++) { PetscScalar temp = i + one + (j*three); MatSetValue(Matrix.A, i, j, temp, INSERT_VALUES); } } However, I seem to get the following error: [0]PETSC ERROR: No support for this operation for this object type [0]PETSC ERROR: Only ADD_VALUES to off-processor entry is supported But if I use ADD_VALUES, I get a different matrix depending on the number of processors used. The entries become multiplied by the number of processors. How do I reconcile this? -------------- next part -------------- An HTML attachment was scrubbed... URL: From jed at jedbrown.org Wed May 25 22:32:02 2016 From: jed at jedbrown.org (Jed Brown) Date: Wed, 25 May 2016 21:32:02 -0600 Subject: [petsc-users] MATELEMENTAL MatSetValue( ) In-Reply-To: <1B22639EA62F9B41AF2D9DE147555DF30BA9BFF6@Mail.cfdrc.com> References: <1B22639EA62F9B41AF2D9DE147555DF30BA9BFF6@Mail.cfdrc.com> Message-ID: <8737p5xznx.fsf@jedbrown.org> Use MatGetOwnershipIS() and set only the entries in your owned rows and columns. What you're doing is nonscalable anyway because every process is filling the global matrix. If you want a cheap hack for small problems on small numbers of cores, you can run your insertion loop on only rank 0. "Eric D. Robertson" writes: > I am trying to multiply two dense matrices using the Elemental interface. I fill the matrix using MatSetValue( ) like below: > > for ( i = 0; i < Matrix.M; i++){ > for ( j = 0; j < Matrix.N; j++) { > PetscScalar temp = i + one + (j*three); > MatSetValue(Matrix.A, i, j, temp, INSERT_VALUES); > > } > } > > > However, I seem to get the following error: > > [0]PETSC ERROR: No support for this operation for this object type > [0]PETSC ERROR: Only ADD_VALUES to off-processor entry is supported > > But if I use ADD_VALUES, I get a different matrix depending on the number of processors used. The entries become multiplied by the number of processors. How do I reconcile this? > > > -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 818 bytes Desc: not available URL: From constantin.nguyen.van at openmailbox.org Thu May 26 01:46:17 2016 From: constantin.nguyen.van at openmailbox.org (Constantin Nguyen Van) Date: Thu, 26 May 2016 08:46:17 +0200 Subject: [petsc-users] KSPSetUp with PETSc/MUMPS In-Reply-To: <3BE03BC7-BCD6-4FA3-8B2C-C701C623EA36@mcs.anl.gov> References: <3BE03BC7-BCD6-4FA3-8B2C-C701C623EA36@mcs.anl.gov> Message-ID: <4f7fbc5026a5412bfa1fea12a7a9296b@openmailbox.org> Thanks for all your answers. I'm sorry for the syntax mistake in MatLoad, it was done afterwards. I recompile PETSC --with-debugging=yes and run my code again. Now, I also have this strange behaviour. 
When I run the code without valgrind and with one proc, I have this error message: BEGIN PROC 0 ITERATION 1 ECHO 1 ECHO 2 INFOG(28): 2 BASIS OK 0 END PROC 0 BEGIN PROC 0 ITERATION 2 ECHO 1 ECHO 2 INFOG(28): 2 BASIS OK 0 END PROC 0 BEGIN PROC 0 ITERATION 3 ECHO 1 [0]PETSC ERROR: ------------------------------------------------------------------------ [0]PETSC ERROR: Caught signal number 11 SEGV: Segmentation Violation, probably memory access out of range [0]PETSC ERROR: Try option -start_in_debugger or -on_error_attach_debugger [0]PETSC ERROR: or see http://www.mcs.anl.gov/petsc/documentation/faq.html#valgrind [0]PETSC ERROR: or try http://valgrind.org on GNU/linux and Apple Mac OS X to find memory corruption errors [0]PETSC ERROR: likely location of problem given in stack below [0]PETSC ERROR: --------------------- Stack Frames ------------------------------------ [0]PETSC ERROR: Note: The EXACT line numbers in the stack are not available, [0]PETSC ERROR: INSTEAD the line number of the start of the function [0]PETSC ERROR: is given. [0]PETSC ERROR: [0] MatGetRowIJ_SeqAIJ_Inode_Symmetric line 69 /home/j10077/librairie/petsc-mumps/petsc-3.6.4/src/mat/impls/aij/seq/inode.c [0]PETSC ERROR: [0] MatGetRowIJ_SeqAIJ_Inode line 235 /home/j10077/librairie/petsc-mumps/petsc-3.6.4/src/mat/impls/aij/seq/inode.c [0]PETSC ERROR: [0] MatGetRowIJ line 7099 /home/j10077/librairie/petsc-mumps/petsc-3.6.4/src/mat/interface/matrix.c [0]PETSC ERROR: [0] MatGetOrdering_ND line 17 /home/j10077/librairie/petsc-mumps/petsc-3.6.4/src/mat/order/spnd.c [0]PETSC ERROR: [0] MatGetOrdering line 185 /home/j10077/librairie/petsc-mumps/petsc-3.6.4/src/mat/order/sorder.c [0]PETSC ERROR: [0] MatGetOrdering line 185 /home/j10077/librairie/petsc-mumps/petsc-3.6.4/src/mat/order/sorder.c [0]PETSC ERROR: [0] PCSetUp_LU line 99 /home/j10077/librairie/petsc-mumps/petsc-3.6.4/src/ksp/pc/impls/factor/lu/lu.c [0]PETSC ERROR: [0] PCSetUp line 945 /home/j10077/librairie/petsc-mumps/petsc-3.6.4/src/ksp/pc/interface/precon.c [0]PETSC ERROR: [0] KSPSetUp line 247 /home/j10077/librairie/petsc-mumps/petsc-3.6.4/src/ksp/ksp/interface/itfunc.c But when I run it with valgrind, it does work well. Le 2016-05-25 20:04, Barry Smith a ?crit?: > First run with valgrind > http://www.mcs.anl.gov/petsc/documentation/faq.html#valgrind > >> On May 25, 2016, at 2:35 AM, Constantin Nguyen Van >> wrote: >> >> Hi, >> >> I'm a new user of PETSc and I try to use it with MUMPS >> functionalities to compute a nullbasis. >> I wrote a code where I compute 4 times the same nullbasis. It does >> work well when I run it with several procs but with only one >> processor I get an error on the 2nd iteration when KSPSetUp is >> called. Furthermore when it is run with a debugger ( >> --with-debugging=yes), it works fine with one or several processors. >> Have you got any idea about why it doesn't work with one processor >> and no debugger? >> >> Thanks. >> Constantin. >> >> PS: You can find the code and the files required to run it enclosed. From knepley at gmail.com Thu May 26 06:44:59 2016 From: knepley at gmail.com (Matthew Knepley) Date: Thu, 26 May 2016 06:44:59 -0500 Subject: [petsc-users] KSPSetUp with PETSc/MUMPS In-Reply-To: <4f7fbc5026a5412bfa1fea12a7a9296b@openmailbox.org> References: <3BE03BC7-BCD6-4FA3-8B2C-C701C623EA36@mcs.anl.gov> <4f7fbc5026a5412bfa1fea12a7a9296b@openmailbox.org> Message-ID: Usually this means you have an uninitialized variable that is causing you to overwrite memory. Fortran is so lax in checking this, its one reason to switch to C. 
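A hedged, compiler-specific aside (assuming gfortran; these are generic debugging flags, not options suggested in this thread): rebuilding the Fortran application with run-time checking and poisoned initial values, e.g.

  gfortran -g -O0 -Wall -Wuninitialized -fcheck=all -finit-real=snan -finit-integer=-99999999 ...

makes uninitialized reals show up as NaNs and catches out-of-bounds accesses at run time instead of letting them silently corrupt memory.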
Thanks, Matt On Thu, May 26, 2016 at 1:46 AM, Constantin Nguyen Van < constantin.nguyen.van at openmailbox.org> wrote: > Thanks for all your answers. > I'm sorry for the syntax mistake in MatLoad, it was done afterwards. > > I recompile PETSC --with-debugging=yes and run my code again. > Now, I also have this strange behaviour. When I run the code without > valgrind and with one proc, I have this error message: > > BEGIN PROC 0 > ITERATION 1 > ECHO 1 > ECHO 2 > INFOG(28): 2 > BASIS OK 0 > END PROC 0 > BEGIN PROC 0 > ITERATION 2 > ECHO 1 > ECHO 2 > INFOG(28): 2 > BASIS OK 0 > END PROC 0 > BEGIN PROC 0 > ITERATION 3 > ECHO 1 > [0]PETSC ERROR: > ------------------------------------------------------------------------ > [0]PETSC ERROR: Caught signal number 11 SEGV: Segmentation Violation, > probably memory access out of range > [0]PETSC ERROR: Try option -start_in_debugger or -on_error_attach_debugger > [0]PETSC ERROR: or see > http://www.mcs.anl.gov/petsc/documentation/faq.html#valgrind > [0]PETSC ERROR: or try http://valgrind.org on GNU/linux and Apple Mac OS > X to find memory corruption errors > [0]PETSC ERROR: likely location of problem given in stack below > [0]PETSC ERROR: --------------------- Stack Frames > ------------------------------------ > [0]PETSC ERROR: Note: The EXACT line numbers in the stack are not > available, > [0]PETSC ERROR: INSTEAD the line number of the start of the function > [0]PETSC ERROR: is given. > [0]PETSC ERROR: [0] MatGetRowIJ_SeqAIJ_Inode_Symmetric line 69 > /home/j10077/librairie/petsc-mumps/petsc-3.6.4/src/mat/impls/aij/seq/inode.c > [0]PETSC ERROR: [0] MatGetRowIJ_SeqAIJ_Inode line 235 > /home/j10077/librairie/petsc-mumps/petsc-3.6.4/src/mat/impls/aij/seq/inode.c > [0]PETSC ERROR: [0] MatGetRowIJ line 7099 > /home/j10077/librairie/petsc-mumps/petsc-3.6.4/src/mat/interface/matrix.c > [0]PETSC ERROR: [0] MatGetOrdering_ND line 17 > /home/j10077/librairie/petsc-mumps/petsc-3.6.4/src/mat/order/spnd.c > [0]PETSC ERROR: [0] MatGetOrdering line 185 > /home/j10077/librairie/petsc-mumps/petsc-3.6.4/src/mat/order/sorder.c > [0]PETSC ERROR: [0] MatGetOrdering line 185 > /home/j10077/librairie/petsc-mumps/petsc-3.6.4/src/mat/order/sorder.c > [0]PETSC ERROR: [0] PCSetUp_LU line 99 > /home/j10077/librairie/petsc-mumps/petsc-3.6.4/src/ksp/pc/impls/factor/lu/lu.c > [0]PETSC ERROR: [0] PCSetUp line 945 > /home/j10077/librairie/petsc-mumps/petsc-3.6.4/src/ksp/pc/interface/precon.c > [0]PETSC ERROR: [0] KSPSetUp line 247 > /home/j10077/librairie/petsc-mumps/petsc-3.6.4/src/ksp/ksp/interface/itfunc.c > > But when I run it with valgrind, it does work well. > > Le 2016-05-25 20:04, Barry Smith a ?crit : > >> First run with valgrind >> http://www.mcs.anl.gov/petsc/documentation/faq.html#valgrind >> >> On May 25, 2016, at 2:35 AM, Constantin Nguyen Van >>> wrote: >>> >>> Hi, >>> >>> I'm a new user of PETSc and I try to use it with MUMPS >>> functionalities to compute a nullbasis. >>> I wrote a code where I compute 4 times the same nullbasis. It does >>> work well when I run it with several procs but with only one >>> processor I get an error on the 2nd iteration when KSPSetUp is >>> called. Furthermore when it is run with a debugger ( >>> --with-debugging=yes), it works fine with one or several processors. >>> Have you got any idea about why it doesn't work with one processor >>> and no debugger? >>> >>> Thanks. >>> Constantin. >>> >>> PS: You can find the code and the files required to run it enclosed. 
>>> >> -- What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. -- Norbert Wiener -------------- next part -------------- An HTML attachment was scrubbed... URL: From jed at jedbrown.org Thu May 26 10:38:20 2016 From: jed at jedbrown.org (Jed Brown) Date: Thu, 26 May 2016 09:38:20 -0600 Subject: [petsc-users] Question about KSP In-Reply-To: <573D6FBD.30303@medunigraz.at> References: <573D6FBD.30303@medunigraz.at> Message-ID: <87bn3sx21f.fsf@jedbrown.org> Elias Karabelas writes: > I have a question about preconditioned solvers. So, I have a > Sparsematrix, say A, and now for some reason I would like to add some > rank-one term u v^T to that matrix. You can use Sherman-Morrison/Woodbury, but it costs an extra preconditioner application and Krylov with the old preconditioner converges in one extra iteration, so it's a wash. You might prefer it for repeated solves though. -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 818 bytes Desc: not available URL: From balay at mcs.anl.gov Thu May 26 11:23:52 2016 From: balay at mcs.anl.gov (Satish Balay) Date: Thu, 26 May 2016 11:23:52 -0500 Subject: [petsc-users] KSPSetUp with PETSc/MUMPS In-Reply-To: References: <3BE03BC7-BCD6-4FA3-8B2C-C701C623EA36@mcs.anl.gov> <4f7fbc5026a5412bfa1fea12a7a9296b@openmailbox.org> Message-ID: Mat Object: 1 MPI processes type: mpiaij row 0: (0, 0.) (1, 0.486111) row 1: (0, 0.486111) (1, 0.) row 2: (2, 0.) (3, 0.486111) row 3: (4, 0.486111) (5, -0.486111) row 4: row 5: The matrix created is funny (empty rows at the end) - so perhaps its exposing bugs in Mat code? [is that a valid matrix for this code?] 
==21091== Use of uninitialised value of size 8 ==21091== at 0x57CA16B: MatGetRowIJ_SeqAIJ_Inode_Symmetric (inode.c:101) ==21091== by 0x57CBA1C: MatGetRowIJ_SeqAIJ_Inode (inode.c:241) ==21091== by 0x537C0B5: MatGetRowIJ (matrix.c:7274) ==21091== by 0x53072FD: MatGetOrdering_ND (spnd.c:18) ==21091== by 0x530BC39: MatGetOrdering (sorder.c:260) ==21091== by 0x530A72D: MatGetOrdering (sorder.c:202) ==21091== by 0x5DDD764: PCSetUp_LU (lu.c:124) ==21091== by 0x5EBFE60: PCSetUp (precon.c:968) ==21091== by 0x5FDA1B3: KSPSetUp (itfunc.c:390) ==21091== by 0x601C17D: kspsetup_ (itfuncf.c:252) ==21091== by 0x4028B9: MAIN__ (ex1f.F90:104) ==21091== by 0x403535: main (ex1f.F90:185) This goes away if I add: call PCFactorSetMatOrderingType(pc,MATORDERINGNATURAL,ierr) And then there is also: ==21275== Invalid read of size 8 ==21275== at 0x584DE93: MatGetBrowsOfAoCols_MPIAIJ (mpiaij.c:4734) ==21275== by 0x58970A8: MatMatMultSymbolic_MPIAIJ_MPIAIJ_nonscalable (mpimatmatmult.c:198) ==21275== by 0x5894A54: MatMatMult_MPIAIJ_MPIAIJ (mpimatmatmult.c:34) ==21275== by 0x539664E: MatMatMult (matrix.c:9510) ==21275== by 0x53B3201: matmatmult_ (matrixf.c:1157) ==21275== by 0x402FC9: MAIN__ (ex1f.F90:149) ==21275== by 0x4035B9: main (ex1f.F90:186) ==21275== Address 0xa3d20f0 is 0 bytes after a block of size 48 alloc'd ==21275== at 0x4C2DF93: memalign (vg_replace_malloc.c:858) ==21275== by 0x4FDE05E: PetscMallocAlign (mal.c:28) ==21275== by 0x5240240: VecScatterCreate (vscat.c:1220) ==21275== by 0x5857708: MatSetUpMultiply_MPIAIJ (mmaij.c:116) ==21275== by 0x581C31E: MatAssemblyEnd_MPIAIJ (mpiaij.c:747) ==21275== by 0x53680F2: MatAssemblyEnd (matrix.c:5187) ==21275== by 0x53B24D2: matassemblyend_ (matrixf.c:926) ==21275== by 0x40262C: MAIN__ (ex1f.F90:60) ==21275== by 0x4035B9: main (ex1f.F90:186) Satish ----------- $ diff build_nullbasis_petsc_mumps.F90 ex1f.F90 3,7c3 < #include < #include "petsc/finclude/petscvec.h" < #include "petsc/finclude/petscmat.h" < #include "petsc/finclude/petscpc.h" < #include "petsc/finclude/petscksp.h" --- > #include "petsc/finclude/petsc.h" 40,41c36,37 < call PetscViewerBinaryOpen(PETSC_COMM_WORLD, "mat_c_bin.txt", 0, viewer, ierr) < call MatLoad(mat_c, viewer) --- > call PetscViewerBinaryOpen(PETSC_COMM_WORLD, "mat_c_bin.txt", FILE_MODE_READ, viewer, ierr) > call MatLoad(mat_c, viewer,ierr) 75a72 > call PCFactorSetMatOrderingType(pc,MATORDERINGNATURAL,ierr) 150c147 < call MatConvert(x, MATMPIAIJ, MAT_REUSE_MATRIX, x, ierr) --- > call MatConvert(x, MATMPIAIJ, MAT_INPLACE_MATRIX, x, ierr) On Thu, 26 May 2016, Matthew Knepley wrote: > Usually this means you have an uninitialized variable that is causing you > to overwrite memory. Fortran > is so lax in checking this, its one reason to switch to C. > > Thanks, > > Matt > > On Thu, May 26, 2016 at 1:46 AM, Constantin Nguyen Van < > constantin.nguyen.van at openmailbox.org> wrote: > > > Thanks for all your answers. > > I'm sorry for the syntax mistake in MatLoad, it was done afterwards. > > > > I recompile PETSC --with-debugging=yes and run my code again. > > Now, I also have this strange behaviour. 
When I run the code without > > valgrind and with one proc, I have this error message: > > > > BEGIN PROC 0 > > ITERATION 1 > > ECHO 1 > > ECHO 2 > > INFOG(28): 2 > > BASIS OK 0 > > END PROC 0 > > BEGIN PROC 0 > > ITERATION 2 > > ECHO 1 > > ECHO 2 > > INFOG(28): 2 > > BASIS OK 0 > > END PROC 0 > > BEGIN PROC 0 > > ITERATION 3 > > ECHO 1 > > [0]PETSC ERROR: > > ------------------------------------------------------------------------ > > [0]PETSC ERROR: Caught signal number 11 SEGV: Segmentation Violation, > > probably memory access out of range > > [0]PETSC ERROR: Try option -start_in_debugger or -on_error_attach_debugger > > [0]PETSC ERROR: or see > > http://www.mcs.anl.gov/petsc/documentation/faq.html#valgrind > > [0]PETSC ERROR: or try http://valgrind.org on GNU/linux and Apple Mac OS > > X to find memory corruption errors > > [0]PETSC ERROR: likely location of problem given in stack below > > [0]PETSC ERROR: --------------------- Stack Frames > > ------------------------------------ > > [0]PETSC ERROR: Note: The EXACT line numbers in the stack are not > > available, > > [0]PETSC ERROR: INSTEAD the line number of the start of the function > > [0]PETSC ERROR: is given. > > [0]PETSC ERROR: [0] MatGetRowIJ_SeqAIJ_Inode_Symmetric line 69 > > /home/j10077/librairie/petsc-mumps/petsc-3.6.4/src/mat/impls/aij/seq/inode.c > > [0]PETSC ERROR: [0] MatGetRowIJ_SeqAIJ_Inode line 235 > > /home/j10077/librairie/petsc-mumps/petsc-3.6.4/src/mat/impls/aij/seq/inode.c > > [0]PETSC ERROR: [0] MatGetRowIJ line 7099 > > /home/j10077/librairie/petsc-mumps/petsc-3.6.4/src/mat/interface/matrix.c > > [0]PETSC ERROR: [0] MatGetOrdering_ND line 17 > > /home/j10077/librairie/petsc-mumps/petsc-3.6.4/src/mat/order/spnd.c > > [0]PETSC ERROR: [0] MatGetOrdering line 185 > > /home/j10077/librairie/petsc-mumps/petsc-3.6.4/src/mat/order/sorder.c > > [0]PETSC ERROR: [0] MatGetOrdering line 185 > > /home/j10077/librairie/petsc-mumps/petsc-3.6.4/src/mat/order/sorder.c > > [0]PETSC ERROR: [0] PCSetUp_LU line 99 > > /home/j10077/librairie/petsc-mumps/petsc-3.6.4/src/ksp/pc/impls/factor/lu/lu.c > > [0]PETSC ERROR: [0] PCSetUp line 945 > > /home/j10077/librairie/petsc-mumps/petsc-3.6.4/src/ksp/pc/interface/precon.c > > [0]PETSC ERROR: [0] KSPSetUp line 247 > > /home/j10077/librairie/petsc-mumps/petsc-3.6.4/src/ksp/ksp/interface/itfunc.c > > > > But when I run it with valgrind, it does work well. > > > > Le 2016-05-25 20:04, Barry Smith a ?crit : > > > >> First run with valgrind > >> http://www.mcs.anl.gov/petsc/documentation/faq.html#valgrind > >> > >> On May 25, 2016, at 2:35 AM, Constantin Nguyen Van > >>> wrote: > >>> > >>> Hi, > >>> > >>> I'm a new user of PETSc and I try to use it with MUMPS > >>> functionalities to compute a nullbasis. > >>> I wrote a code where I compute 4 times the same nullbasis. It does > >>> work well when I run it with several procs but with only one > >>> processor I get an error on the 2nd iteration when KSPSetUp is > >>> called. Furthermore when it is run with a debugger ( > >>> --with-debugging=yes), it works fine with one or several processors. > >>> Have you got any idea about why it doesn't work with one processor > >>> and no debugger? > >>> > >>> Thanks. > >>> Constantin. > >>> > >>> PS: You can find the code and the files required to run it enclosed. 
> >>> > >> > > > From bsmith at mcs.anl.gov Thu May 26 12:04:52 2016 From: bsmith at mcs.anl.gov (Barry Smith) Date: Thu, 26 May 2016 12:04:52 -0500 Subject: [petsc-users] KSPSetUp with PETSc/MUMPS In-Reply-To: References: <3BE03BC7-BCD6-4FA3-8B2C-C701C623EA36@mcs.anl.gov> <4f7fbc5026a5412bfa1fea12a7a9296b@openmailbox.org> Message-ID: Hong needs to run with this matrix and add appropriate error checkers in the matrix routines to detect "incomplete" matrices and likely just error out. Barry > On May 26, 2016, at 11:23 AM, Satish Balay wrote: > > Mat Object: 1 MPI processes > type: mpiaij > row 0: (0, 0.) (1, 0.486111) > row 1: (0, 0.486111) (1, 0.) > row 2: (2, 0.) (3, 0.486111) > row 3: (4, 0.486111) (5, -0.486111) > row 4: > row 5: > > The matrix created is funny (empty rows at the end) - so perhaps its > exposing bugs in Mat code? [is that a valid matrix for this code?] > > ==21091== Use of uninitialised value of size 8 > ==21091== at 0x57CA16B: MatGetRowIJ_SeqAIJ_Inode_Symmetric (inode.c:101) > ==21091== by 0x57CBA1C: MatGetRowIJ_SeqAIJ_Inode (inode.c:241) > ==21091== by 0x537C0B5: MatGetRowIJ (matrix.c:7274) > ==21091== by 0x53072FD: MatGetOrdering_ND (spnd.c:18) > ==21091== by 0x530BC39: MatGetOrdering (sorder.c:260) > ==21091== by 0x530A72D: MatGetOrdering (sorder.c:202) > ==21091== by 0x5DDD764: PCSetUp_LU (lu.c:124) > ==21091== by 0x5EBFE60: PCSetUp (precon.c:968) > ==21091== by 0x5FDA1B3: KSPSetUp (itfunc.c:390) > ==21091== by 0x601C17D: kspsetup_ (itfuncf.c:252) > ==21091== by 0x4028B9: MAIN__ (ex1f.F90:104) > ==21091== by 0x403535: main (ex1f.F90:185) > > > This goes away if I add: > > call PCFactorSetMatOrderingType(pc,MATORDERINGNATURAL,ierr) > > And then there is also: > > ==21275== Invalid read of size 8 > ==21275== at 0x584DE93: MatGetBrowsOfAoCols_MPIAIJ (mpiaij.c:4734) > ==21275== by 0x58970A8: MatMatMultSymbolic_MPIAIJ_MPIAIJ_nonscalable (mpimatmatmult.c:198) > ==21275== by 0x5894A54: MatMatMult_MPIAIJ_MPIAIJ (mpimatmatmult.c:34) > ==21275== by 0x539664E: MatMatMult (matrix.c:9510) > ==21275== by 0x53B3201: matmatmult_ (matrixf.c:1157) > ==21275== by 0x402FC9: MAIN__ (ex1f.F90:149) > ==21275== by 0x4035B9: main (ex1f.F90:186) > ==21275== Address 0xa3d20f0 is 0 bytes after a block of size 48 alloc'd > ==21275== at 0x4C2DF93: memalign (vg_replace_malloc.c:858) > ==21275== by 0x4FDE05E: PetscMallocAlign (mal.c:28) > ==21275== by 0x5240240: VecScatterCreate (vscat.c:1220) > ==21275== by 0x5857708: MatSetUpMultiply_MPIAIJ (mmaij.c:116) > ==21275== by 0x581C31E: MatAssemblyEnd_MPIAIJ (mpiaij.c:747) > ==21275== by 0x53680F2: MatAssemblyEnd (matrix.c:5187) > ==21275== by 0x53B24D2: matassemblyend_ (matrixf.c:926) > ==21275== by 0x40262C: MAIN__ (ex1f.F90:60) > ==21275== by 0x4035B9: main (ex1f.F90:186) > > > Satish > > ----------- > > $ diff build_nullbasis_petsc_mumps.F90 ex1f.F90 > 3,7c3 > < #include > < #include "petsc/finclude/petscvec.h" > < #include "petsc/finclude/petscmat.h" > < #include "petsc/finclude/petscpc.h" > < #include "petsc/finclude/petscksp.h" > --- >> #include "petsc/finclude/petsc.h" > 40,41c36,37 > < call PetscViewerBinaryOpen(PETSC_COMM_WORLD, "mat_c_bin.txt", 0, viewer, ierr) > < call MatLoad(mat_c, viewer) > --- >> call PetscViewerBinaryOpen(PETSC_COMM_WORLD, "mat_c_bin.txt", FILE_MODE_READ, viewer, ierr) >> call MatLoad(mat_c, viewer,ierr) > 75a72 >> call PCFactorSetMatOrderingType(pc,MATORDERINGNATURAL,ierr) > 150c147 > < call MatConvert(x, MATMPIAIJ, MAT_REUSE_MATRIX, x, ierr) > --- >> call MatConvert(x, MATMPIAIJ, MAT_INPLACE_MATRIX, x, ierr) > > > 
On Thu, 26 May 2016, Matthew Knepley wrote: > >> Usually this means you have an uninitialized variable that is causing you >> to overwrite memory. Fortran >> is so lax in checking this, its one reason to switch to C. >> >> Thanks, >> >> Matt >> >> On Thu, May 26, 2016 at 1:46 AM, Constantin Nguyen Van < >> constantin.nguyen.van at openmailbox.org> wrote: >> >>> Thanks for all your answers. >>> I'm sorry for the syntax mistake in MatLoad, it was done afterwards. >>> >>> I recompile PETSC --with-debugging=yes and run my code again. >>> Now, I also have this strange behaviour. When I run the code without >>> valgrind and with one proc, I have this error message: >>> >>> BEGIN PROC 0 >>> ITERATION 1 >>> ECHO 1 >>> ECHO 2 >>> INFOG(28): 2 >>> BASIS OK 0 >>> END PROC 0 >>> BEGIN PROC 0 >>> ITERATION 2 >>> ECHO 1 >>> ECHO 2 >>> INFOG(28): 2 >>> BASIS OK 0 >>> END PROC 0 >>> BEGIN PROC 0 >>> ITERATION 3 >>> ECHO 1 >>> [0]PETSC ERROR: >>> ------------------------------------------------------------------------ >>> [0]PETSC ERROR: Caught signal number 11 SEGV: Segmentation Violation, >>> probably memory access out of range >>> [0]PETSC ERROR: Try option -start_in_debugger or -on_error_attach_debugger >>> [0]PETSC ERROR: or see >>> http://www.mcs.anl.gov/petsc/documentation/faq.html#valgrind >>> [0]PETSC ERROR: or try http://valgrind.org on GNU/linux and Apple Mac OS >>> X to find memory corruption errors >>> [0]PETSC ERROR: likely location of problem given in stack below >>> [0]PETSC ERROR: --------------------- Stack Frames >>> ------------------------------------ >>> [0]PETSC ERROR: Note: The EXACT line numbers in the stack are not >>> available, >>> [0]PETSC ERROR: INSTEAD the line number of the start of the function >>> [0]PETSC ERROR: is given. >>> [0]PETSC ERROR: [0] MatGetRowIJ_SeqAIJ_Inode_Symmetric line 69 >>> /home/j10077/librairie/petsc-mumps/petsc-3.6.4/src/mat/impls/aij/seq/inode.c >>> [0]PETSC ERROR: [0] MatGetRowIJ_SeqAIJ_Inode line 235 >>> /home/j10077/librairie/petsc-mumps/petsc-3.6.4/src/mat/impls/aij/seq/inode.c >>> [0]PETSC ERROR: [0] MatGetRowIJ line 7099 >>> /home/j10077/librairie/petsc-mumps/petsc-3.6.4/src/mat/interface/matrix.c >>> [0]PETSC ERROR: [0] MatGetOrdering_ND line 17 >>> /home/j10077/librairie/petsc-mumps/petsc-3.6.4/src/mat/order/spnd.c >>> [0]PETSC ERROR: [0] MatGetOrdering line 185 >>> /home/j10077/librairie/petsc-mumps/petsc-3.6.4/src/mat/order/sorder.c >>> [0]PETSC ERROR: [0] MatGetOrdering line 185 >>> /home/j10077/librairie/petsc-mumps/petsc-3.6.4/src/mat/order/sorder.c >>> [0]PETSC ERROR: [0] PCSetUp_LU line 99 >>> /home/j10077/librairie/petsc-mumps/petsc-3.6.4/src/ksp/pc/impls/factor/lu/lu.c >>> [0]PETSC ERROR: [0] PCSetUp line 945 >>> /home/j10077/librairie/petsc-mumps/petsc-3.6.4/src/ksp/pc/interface/precon.c >>> [0]PETSC ERROR: [0] KSPSetUp line 247 >>> /home/j10077/librairie/petsc-mumps/petsc-3.6.4/src/ksp/ksp/interface/itfunc.c >>> >>> But when I run it with valgrind, it does work well. >>> >>> Le 2016-05-25 20:04, Barry Smith a ?crit : >>> >>>> First run with valgrind >>>> http://www.mcs.anl.gov/petsc/documentation/faq.html#valgrind >>>> >>>> On May 25, 2016, at 2:35 AM, Constantin Nguyen Van >>>>> wrote: >>>>> >>>>> Hi, >>>>> >>>>> I'm a new user of PETSc and I try to use it with MUMPS >>>>> functionalities to compute a nullbasis. >>>>> I wrote a code where I compute 4 times the same nullbasis. It does >>>>> work well when I run it with several procs but with only one >>>>> processor I get an error on the 2nd iteration when KSPSetUp is >>>>> called. 
Furthermore when it is run with a debugger ( >>>>> --with-debugging=yes), it works fine with one or several processors. >>>>> Have you got any idea about why it doesn't work with one processor >>>>> and no debugger? >>>>> >>>>> Thanks. >>>>> Constantin. >>>>> >>>>> PS: You can find the code and the files required to run it enclosed. >>>>> >>>> >> >> From hzhang at mcs.anl.gov Thu May 26 13:52:10 2016 From: hzhang at mcs.anl.gov (Hong) Date: Thu, 26 May 2016 13:52:10 -0500 Subject: [petsc-users] KSPSetUp with PETSc/MUMPS In-Reply-To: References: <3BE03BC7-BCD6-4FA3-8B2C-C701C623EA36@mcs.anl.gov> <4f7fbc5026a5412bfa1fea12a7a9296b@openmailbox.org> Message-ID: I'll investigate this - had a day off since yesterday. Hong On Thu, May 26, 2016 at 12:04 PM, Barry Smith wrote: > > Hong needs to run with this matrix and add appropriate error checkers in > the matrix routines to detect "incomplete" matrices and likely just error > out. > > Barry > > > On May 26, 2016, at 11:23 AM, Satish Balay wrote: > > > > Mat Object: 1 MPI processes > > type: mpiaij > > row 0: (0, 0.) (1, 0.486111) > > row 1: (0, 0.486111) (1, 0.) > > row 2: (2, 0.) (3, 0.486111) > > row 3: (4, 0.486111) (5, -0.486111) > > row 4: > > row 5: > > > > The matrix created is funny (empty rows at the end) - so perhaps its > > exposing bugs in Mat code? [is that a valid matrix for this code?] > > > > ==21091== Use of uninitialised value of size 8 > > ==21091== at 0x57CA16B: MatGetRowIJ_SeqAIJ_Inode_Symmetric > (inode.c:101) > > ==21091== by 0x57CBA1C: MatGetRowIJ_SeqAIJ_Inode (inode.c:241) > > ==21091== by 0x537C0B5: MatGetRowIJ (matrix.c:7274) > > ==21091== by 0x53072FD: MatGetOrdering_ND (spnd.c:18) > > ==21091== by 0x530BC39: MatGetOrdering (sorder.c:260) > > ==21091== by 0x530A72D: MatGetOrdering (sorder.c:202) > > ==21091== by 0x5DDD764: PCSetUp_LU (lu.c:124) > > ==21091== by 0x5EBFE60: PCSetUp (precon.c:968) > > ==21091== by 0x5FDA1B3: KSPSetUp (itfunc.c:390) > > ==21091== by 0x601C17D: kspsetup_ (itfuncf.c:252) > > ==21091== by 0x4028B9: MAIN__ (ex1f.F90:104) > > ==21091== by 0x403535: main (ex1f.F90:185) > > > > > > This goes away if I add: > > > > call PCFactorSetMatOrderingType(pc,MATORDERINGNATURAL,ierr) > > > > And then there is also: > > > > ==21275== Invalid read of size 8 > > ==21275== at 0x584DE93: MatGetBrowsOfAoCols_MPIAIJ (mpiaij.c:4734) > > ==21275== by 0x58970A8: MatMatMultSymbolic_MPIAIJ_MPIAIJ_nonscalable > (mpimatmatmult.c:198) > > ==21275== by 0x5894A54: MatMatMult_MPIAIJ_MPIAIJ (mpimatmatmult.c:34) > > ==21275== by 0x539664E: MatMatMult (matrix.c:9510) > > ==21275== by 0x53B3201: matmatmult_ (matrixf.c:1157) > > ==21275== by 0x402FC9: MAIN__ (ex1f.F90:149) > > ==21275== by 0x4035B9: main (ex1f.F90:186) > > ==21275== Address 0xa3d20f0 is 0 bytes after a block of size 48 alloc'd > > ==21275== at 0x4C2DF93: memalign (vg_replace_malloc.c:858) > > ==21275== by 0x4FDE05E: PetscMallocAlign (mal.c:28) > > ==21275== by 0x5240240: VecScatterCreate (vscat.c:1220) > > ==21275== by 0x5857708: MatSetUpMultiply_MPIAIJ (mmaij.c:116) > > ==21275== by 0x581C31E: MatAssemblyEnd_MPIAIJ (mpiaij.c:747) > > ==21275== by 0x53680F2: MatAssemblyEnd (matrix.c:5187) > > ==21275== by 0x53B24D2: matassemblyend_ (matrixf.c:926) > > ==21275== by 0x40262C: MAIN__ (ex1f.F90:60) > > ==21275== by 0x4035B9: main (ex1f.F90:186) > > > > > > Satish > > > > ----------- > > > > $ diff build_nullbasis_petsc_mumps.F90 ex1f.F90 > > 3,7c3 > > < #include > > < #include "petsc/finclude/petscvec.h" > > < #include "petsc/finclude/petscmat.h" > > < #include 
"petsc/finclude/petscpc.h" > > < #include "petsc/finclude/petscksp.h" > > --- > >> #include "petsc/finclude/petsc.h" > > 40,41c36,37 > > < call PetscViewerBinaryOpen(PETSC_COMM_WORLD, "mat_c_bin.txt", 0, > viewer, ierr) > > < call MatLoad(mat_c, viewer) > > --- > >> call PetscViewerBinaryOpen(PETSC_COMM_WORLD, "mat_c_bin.txt", > FILE_MODE_READ, viewer, ierr) > >> call MatLoad(mat_c, viewer,ierr) > > 75a72 > >> call PCFactorSetMatOrderingType(pc,MATORDERINGNATURAL,ierr) > > 150c147 > > < call MatConvert(x, MATMPIAIJ, MAT_REUSE_MATRIX, x, ierr) > > --- > >> call MatConvert(x, MATMPIAIJ, MAT_INPLACE_MATRIX, x, ierr) > > > > > > On Thu, 26 May 2016, Matthew Knepley wrote: > > > >> Usually this means you have an uninitialized variable that is causing > you > >> to overwrite memory. Fortran > >> is so lax in checking this, its one reason to switch to C. > >> > >> Thanks, > >> > >> Matt > >> > >> On Thu, May 26, 2016 at 1:46 AM, Constantin Nguyen Van < > >> constantin.nguyen.van at openmailbox.org> wrote: > >> > >>> Thanks for all your answers. > >>> I'm sorry for the syntax mistake in MatLoad, it was done afterwards. > >>> > >>> I recompile PETSC --with-debugging=yes and run my code again. > >>> Now, I also have this strange behaviour. When I run the code without > >>> valgrind and with one proc, I have this error message: > >>> > >>> BEGIN PROC 0 > >>> ITERATION 1 > >>> ECHO 1 > >>> ECHO 2 > >>> INFOG(28): 2 > >>> BASIS OK 0 > >>> END PROC 0 > >>> BEGIN PROC 0 > >>> ITERATION 2 > >>> ECHO 1 > >>> ECHO 2 > >>> INFOG(28): 2 > >>> BASIS OK 0 > >>> END PROC 0 > >>> BEGIN PROC 0 > >>> ITERATION 3 > >>> ECHO 1 > >>> [0]PETSC ERROR: > >>> > ------------------------------------------------------------------------ > >>> [0]PETSC ERROR: Caught signal number 11 SEGV: Segmentation Violation, > >>> probably memory access out of range > >>> [0]PETSC ERROR: Try option -start_in_debugger or > -on_error_attach_debugger > >>> [0]PETSC ERROR: or see > >>> http://www.mcs.anl.gov/petsc/documentation/faq.html#valgrind > >>> [0]PETSC ERROR: or try http://valgrind.org on GNU/linux and Apple Mac > OS > >>> X to find memory corruption errors > >>> [0]PETSC ERROR: likely location of problem given in stack below > >>> [0]PETSC ERROR: --------------------- Stack Frames > >>> ------------------------------------ > >>> [0]PETSC ERROR: Note: The EXACT line numbers in the stack are not > >>> available, > >>> [0]PETSC ERROR: INSTEAD the line number of the start of the > function > >>> [0]PETSC ERROR: is given. 
> >>> [0]PETSC ERROR: [0] MatGetRowIJ_SeqAIJ_Inode_Symmetric line 69 > >>> > /home/j10077/librairie/petsc-mumps/petsc-3.6.4/src/mat/impls/aij/seq/inode.c > >>> [0]PETSC ERROR: [0] MatGetRowIJ_SeqAIJ_Inode line 235 > >>> > /home/j10077/librairie/petsc-mumps/petsc-3.6.4/src/mat/impls/aij/seq/inode.c > >>> [0]PETSC ERROR: [0] MatGetRowIJ line 7099 > >>> > /home/j10077/librairie/petsc-mumps/petsc-3.6.4/src/mat/interface/matrix.c > >>> [0]PETSC ERROR: [0] MatGetOrdering_ND line 17 > >>> /home/j10077/librairie/petsc-mumps/petsc-3.6.4/src/mat/order/spnd.c > >>> [0]PETSC ERROR: [0] MatGetOrdering line 185 > >>> /home/j10077/librairie/petsc-mumps/petsc-3.6.4/src/mat/order/sorder.c > >>> [0]PETSC ERROR: [0] MatGetOrdering line 185 > >>> /home/j10077/librairie/petsc-mumps/petsc-3.6.4/src/mat/order/sorder.c > >>> [0]PETSC ERROR: [0] PCSetUp_LU line 99 > >>> > /home/j10077/librairie/petsc-mumps/petsc-3.6.4/src/ksp/pc/impls/factor/lu/lu.c > >>> [0]PETSC ERROR: [0] PCSetUp line 945 > >>> > /home/j10077/librairie/petsc-mumps/petsc-3.6.4/src/ksp/pc/interface/precon.c > >>> [0]PETSC ERROR: [0] KSPSetUp line 247 > >>> > /home/j10077/librairie/petsc-mumps/petsc-3.6.4/src/ksp/ksp/interface/itfunc.c > >>> > >>> But when I run it with valgrind, it does work well. > >>> > >>> Le 2016-05-25 20:04, Barry Smith a ?crit : > >>> > >>>> First run with valgrind > >>>> http://www.mcs.anl.gov/petsc/documentation/faq.html#valgrind > >>>> > >>>> On May 25, 2016, at 2:35 AM, Constantin Nguyen Van > >>>>> wrote: > >>>>> > >>>>> Hi, > >>>>> > >>>>> I'm a new user of PETSc and I try to use it with MUMPS > >>>>> functionalities to compute a nullbasis. > >>>>> I wrote a code where I compute 4 times the same nullbasis. It does > >>>>> work well when I run it with several procs but with only one > >>>>> processor I get an error on the 2nd iteration when KSPSetUp is > >>>>> called. Furthermore when it is run with a debugger ( > >>>>> --with-debugging=yes), it works fine with one or several processors. > >>>>> Have you got any idea about why it doesn't work with one processor > >>>>> and no debugger? > >>>>> > >>>>> Thanks. > >>>>> Constantin. > >>>>> > >>>>> PS: You can find the code and the files required to run it enclosed. > >>>>> > >>>> > >> > >> > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From balay at mcs.anl.gov Thu May 26 15:05:09 2016 From: balay at mcs.anl.gov (Satish Balay) Date: Thu, 26 May 2016 15:05:09 -0500 Subject: [petsc-users] KSPSetUp with PETSc/MUMPS In-Reply-To: References: <3BE03BC7-BCD6-4FA3-8B2C-C701C623EA36@mcs.anl.gov> <4f7fbc5026a5412bfa1fea12a7a9296b@openmailbox.org> Message-ID: Well looks like MatGetBrowsOfAoCols_MPIAIJ() issue is primarily setting some local variables with uninitialzed data [thats primarily set/used for parallel commumication]. So valgrind flags it - but I don't think it gets used later on. [perhaps most of the code should be skipped for a sequential run..] The primary issue here is MatGetRowIJ_SeqAIJ_Inode_Symmetric() called by MatGetOrdering_ND(). The workarround is to not use ND with: call PCFactorSetMatOrderingType(pc,MATORDERINGNATURAL,ierr) But I think the following might be the fix [have to recheck].. 
The test code works with this change [with the default ND] diff --git a/src/mat/impls/aij/seq/inode.c b/src/mat/impls/aij/seq/inode.c index 9af404e..49f76ce 100644 --- a/src/mat/impls/aij/seq/inode.c +++ b/src/mat/impls/aij/seq/inode.c @@ -97,6 +97,7 @@ static PetscErrorCode MatGetRowIJ_SeqAIJ_Inode_Symmetric(Mat A,const PetscInt *i j = aj + ai[row] + ishift; jmax = aj + ai[row+1] + ishift; + if (j==jmax) continue; /* empty row */ col = *j++ + ishift; i2 = tvc[col]; while (i2 I'll investigate this - had a day off since yesterday. > Hong > > On Thu, May 26, 2016 at 12:04 PM, Barry Smith wrote: > > > > > Hong needs to run with this matrix and add appropriate error checkers in > > the matrix routines to detect "incomplete" matrices and likely just error > > out. > > > > Barry > > > > > On May 26, 2016, at 11:23 AM, Satish Balay wrote: > > > > > > Mat Object: 1 MPI processes > > > type: mpiaij > > > row 0: (0, 0.) (1, 0.486111) > > > row 1: (0, 0.486111) (1, 0.) > > > row 2: (2, 0.) (3, 0.486111) > > > row 3: (4, 0.486111) (5, -0.486111) > > > row 4: > > > row 5: > > > > > > The matrix created is funny (empty rows at the end) - so perhaps its > > > exposing bugs in Mat code? [is that a valid matrix for this code?] > > > > > > ==21091== Use of uninitialised value of size 8 > > > ==21091== at 0x57CA16B: MatGetRowIJ_SeqAIJ_Inode_Symmetric > > (inode.c:101) > > > ==21091== by 0x57CBA1C: MatGetRowIJ_SeqAIJ_Inode (inode.c:241) > > > ==21091== by 0x537C0B5: MatGetRowIJ (matrix.c:7274) > > > ==21091== by 0x53072FD: MatGetOrdering_ND (spnd.c:18) > > > ==21091== by 0x530BC39: MatGetOrdering (sorder.c:260) > > > ==21091== by 0x530A72D: MatGetOrdering (sorder.c:202) > > > ==21091== by 0x5DDD764: PCSetUp_LU (lu.c:124) > > > ==21091== by 0x5EBFE60: PCSetUp (precon.c:968) > > > ==21091== by 0x5FDA1B3: KSPSetUp (itfunc.c:390) > > > ==21091== by 0x601C17D: kspsetup_ (itfuncf.c:252) > > > ==21091== by 0x4028B9: MAIN__ (ex1f.F90:104) > > > ==21091== by 0x403535: main (ex1f.F90:185) > > > > > > > > > This goes away if I add: > > > > > > call PCFactorSetMatOrderingType(pc,MATORDERINGNATURAL,ierr) > > > > > > And then there is also: > > > > > > ==21275== Invalid read of size 8 > > > ==21275== at 0x584DE93: MatGetBrowsOfAoCols_MPIAIJ (mpiaij.c:4734) > > > ==21275== by 0x58970A8: MatMatMultSymbolic_MPIAIJ_MPIAIJ_nonscalable > > (mpimatmatmult.c:198) > > > ==21275== by 0x5894A54: MatMatMult_MPIAIJ_MPIAIJ (mpimatmatmult.c:34) > > > ==21275== by 0x539664E: MatMatMult (matrix.c:9510) > > > ==21275== by 0x53B3201: matmatmult_ (matrixf.c:1157) > > > ==21275== by 0x402FC9: MAIN__ (ex1f.F90:149) > > > ==21275== by 0x4035B9: main (ex1f.F90:186) > > > ==21275== Address 0xa3d20f0 is 0 bytes after a block of size 48 alloc'd > > > ==21275== at 0x4C2DF93: memalign (vg_replace_malloc.c:858) > > > ==21275== by 0x4FDE05E: PetscMallocAlign (mal.c:28) > > > ==21275== by 0x5240240: VecScatterCreate (vscat.c:1220) > > > ==21275== by 0x5857708: MatSetUpMultiply_MPIAIJ (mmaij.c:116) > > > ==21275== by 0x581C31E: MatAssemblyEnd_MPIAIJ (mpiaij.c:747) > > > ==21275== by 0x53680F2: MatAssemblyEnd (matrix.c:5187) > > > ==21275== by 0x53B24D2: matassemblyend_ (matrixf.c:926) > > > ==21275== by 0x40262C: MAIN__ (ex1f.F90:60) > > > ==21275== by 0x4035B9: main (ex1f.F90:186) > > > > > > > > > Satish > > > > > > ----------- > > > > > > $ diff build_nullbasis_petsc_mumps.F90 ex1f.F90 > > > 3,7c3 > > > < #include > > > < #include "petsc/finclude/petscvec.h" > > > < #include "petsc/finclude/petscmat.h" > > > < #include "petsc/finclude/petscpc.h" > > > 
< #include "petsc/finclude/petscksp.h" > > > --- > > >> #include "petsc/finclude/petsc.h" > > > 40,41c36,37 > > > < call PetscViewerBinaryOpen(PETSC_COMM_WORLD, "mat_c_bin.txt", 0, > > viewer, ierr) > > > < call MatLoad(mat_c, viewer) > > > --- > > >> call PetscViewerBinaryOpen(PETSC_COMM_WORLD, "mat_c_bin.txt", > > FILE_MODE_READ, viewer, ierr) > > >> call MatLoad(mat_c, viewer,ierr) > > > 75a72 > > >> call PCFactorSetMatOrderingType(pc,MATORDERINGNATURAL,ierr) > > > 150c147 > > > < call MatConvert(x, MATMPIAIJ, MAT_REUSE_MATRIX, x, ierr) > > > --- > > >> call MatConvert(x, MATMPIAIJ, MAT_INPLACE_MATRIX, x, ierr) > > > > > > > > > On Thu, 26 May 2016, Matthew Knepley wrote: > > > > > >> Usually this means you have an uninitialized variable that is causing > > you > > >> to overwrite memory. Fortran > > >> is so lax in checking this, its one reason to switch to C. > > >> > > >> Thanks, > > >> > > >> Matt > > >> > > >> On Thu, May 26, 2016 at 1:46 AM, Constantin Nguyen Van < > > >> constantin.nguyen.van at openmailbox.org> wrote: > > >> > > >>> Thanks for all your answers. > > >>> I'm sorry for the syntax mistake in MatLoad, it was done afterwards. > > >>> > > >>> I recompile PETSC --with-debugging=yes and run my code again. > > >>> Now, I also have this strange behaviour. When I run the code without > > >>> valgrind and with one proc, I have this error message: > > >>> > > >>> BEGIN PROC 0 > > >>> ITERATION 1 > > >>> ECHO 1 > > >>> ECHO 2 > > >>> INFOG(28): 2 > > >>> BASIS OK 0 > > >>> END PROC 0 > > >>> BEGIN PROC 0 > > >>> ITERATION 2 > > >>> ECHO 1 > > >>> ECHO 2 > > >>> INFOG(28): 2 > > >>> BASIS OK 0 > > >>> END PROC 0 > > >>> BEGIN PROC 0 > > >>> ITERATION 3 > > >>> ECHO 1 > > >>> [0]PETSC ERROR: > > >>> > > ------------------------------------------------------------------------ > > >>> [0]PETSC ERROR: Caught signal number 11 SEGV: Segmentation Violation, > > >>> probably memory access out of range > > >>> [0]PETSC ERROR: Try option -start_in_debugger or > > -on_error_attach_debugger > > >>> [0]PETSC ERROR: or see > > >>> http://www.mcs.anl.gov/petsc/documentation/faq.html#valgrind > > >>> [0]PETSC ERROR: or try http://valgrind.org on GNU/linux and Apple Mac > > OS > > >>> X to find memory corruption errors > > >>> [0]PETSC ERROR: likely location of problem given in stack below > > >>> [0]PETSC ERROR: --------------------- Stack Frames > > >>> ------------------------------------ > > >>> [0]PETSC ERROR: Note: The EXACT line numbers in the stack are not > > >>> available, > > >>> [0]PETSC ERROR: INSTEAD the line number of the start of the > > function > > >>> [0]PETSC ERROR: is given. 
> > >>> [0]PETSC ERROR: [0] MatGetRowIJ_SeqAIJ_Inode_Symmetric line 69 > > >>> > > /home/j10077/librairie/petsc-mumps/petsc-3.6.4/src/mat/impls/aij/seq/inode.c > > >>> [0]PETSC ERROR: [0] MatGetRowIJ_SeqAIJ_Inode line 235 > > >>> > > /home/j10077/librairie/petsc-mumps/petsc-3.6.4/src/mat/impls/aij/seq/inode.c > > >>> [0]PETSC ERROR: [0] MatGetRowIJ line 7099 > > >>> > > /home/j10077/librairie/petsc-mumps/petsc-3.6.4/src/mat/interface/matrix.c > > >>> [0]PETSC ERROR: [0] MatGetOrdering_ND line 17 > > >>> /home/j10077/librairie/petsc-mumps/petsc-3.6.4/src/mat/order/spnd.c > > >>> [0]PETSC ERROR: [0] MatGetOrdering line 185 > > >>> /home/j10077/librairie/petsc-mumps/petsc-3.6.4/src/mat/order/sorder.c > > >>> [0]PETSC ERROR: [0] MatGetOrdering line 185 > > >>> /home/j10077/librairie/petsc-mumps/petsc-3.6.4/src/mat/order/sorder.c > > >>> [0]PETSC ERROR: [0] PCSetUp_LU line 99 > > >>> > > /home/j10077/librairie/petsc-mumps/petsc-3.6.4/src/ksp/pc/impls/factor/lu/lu.c > > >>> [0]PETSC ERROR: [0] PCSetUp line 945 > > >>> > > /home/j10077/librairie/petsc-mumps/petsc-3.6.4/src/ksp/pc/interface/precon.c > > >>> [0]PETSC ERROR: [0] KSPSetUp line 247 > > >>> > > /home/j10077/librairie/petsc-mumps/petsc-3.6.4/src/ksp/ksp/interface/itfunc.c > > >>> > > >>> But when I run it with valgrind, it does work well. > > >>> > > >>> Le 2016-05-25 20:04, Barry Smith a ?crit : > > >>> > > >>>> First run with valgrind > > >>>> http://www.mcs.anl.gov/petsc/documentation/faq.html#valgrind > > >>>> > > >>>> On May 25, 2016, at 2:35 AM, Constantin Nguyen Van > > >>>>> wrote: > > >>>>> > > >>>>> Hi, > > >>>>> > > >>>>> I'm a new user of PETSc and I try to use it with MUMPS > > >>>>> functionalities to compute a nullbasis. > > >>>>> I wrote a code where I compute 4 times the same nullbasis. It does > > >>>>> work well when I run it with several procs but with only one > > >>>>> processor I get an error on the 2nd iteration when KSPSetUp is > > >>>>> called. Furthermore when it is run with a debugger ( > > >>>>> --with-debugging=yes), it works fine with one or several processors. > > >>>>> Have you got any idea about why it doesn't work with one processor > > >>>>> and no debugger? > > >>>>> > > >>>>> Thanks. > > >>>>> Constantin. > > >>>>> > > >>>>> PS: You can find the code and the files required to run it enclosed. > > >>>>> > > >>>> > > >> > > >> > > > > > From hzhang at mcs.anl.gov Thu May 26 17:15:05 2016 From: hzhang at mcs.anl.gov (Hong) Date: Thu, 26 May 2016 17:15:05 -0500 Subject: [petsc-users] KSPSetUp with PETSc/MUMPS In-Reply-To: References: <3BE03BC7-BCD6-4FA3-8B2C-C701C623EA36@mcs.anl.gov> <4f7fbc5026a5412bfa1fea12a7a9296b@openmailbox.org> Message-ID: Satish found a problem in using inode routines. In addition, user code has bugs. I modified build_nullbasis_petsc_mumps.F90 into ex51f.F90 (attached) which works well with option '-mat_no_inode'. ex51f.F90 differs from build_nullbasis_petsc_mumps.F90 in 1) use MATAIJ/MATDENSE instead of MATMPIAIJ/MATMPIDENSE MATAIJ wraps MATSEQAIJ and MATMPIAIJ. 2) MatConvert(x, MATMPIAIJ, MAT_REUSE_MATRIX, x,ierr) -> MatConvert(x, MATMPIAIJ, MAT_INPLACE_MATRIX, x,ierr) see http://www.mcs.anl.gov/petsc/petsc-current/docs/manualpages/Mat/MatConvert.html Hong On Thu, May 26, 2016 at 3:05 PM, Satish Balay wrote: > Well looks like MatGetBrowsOfAoCols_MPIAIJ() issue is primarily > setting some local variables with uninitialzed data [thats primarily > set/used for parallel commumication]. So valgrind flags it - but I > don't think it gets used later on. 
> > [perhaps most of the code should be skipped for a sequential run..] > > The primary issue here is MatGetRowIJ_SeqAIJ_Inode_Symmetric() called > by MatGetOrdering_ND(). > > The workarround is to not use ND with: > call PCFactorSetMatOrderingType(pc,MATORDERINGNATURAL,ierr) > > But I think the following might be the fix [have to recheck].. The > test code works with this change [with the default ND] > > diff --git a/src/mat/impls/aij/seq/inode.c b/src/mat/impls/aij/seq/inode.c > index 9af404e..49f76ce 100644 > --- a/src/mat/impls/aij/seq/inode.c > +++ b/src/mat/impls/aij/seq/inode.c > @@ -97,6 +97,7 @@ static PetscErrorCode > MatGetRowIJ_SeqAIJ_Inode_Symmetric(Mat A,const PetscInt *i > > j = aj + ai[row] + ishift; > jmax = aj + ai[row+1] + ishift; > + if (j==jmax) continue; /* empty row */ > col = *j++ + ishift; > i2 = tvc[col]; > while (i2 2.[-xx-------],off-diagonal elemets */ > @@ -125,6 +126,7 @@ static PetscErrorCode > MatGetRowIJ_SeqAIJ_Inode_Symmetric(Mat A,const PetscInt *i > for (i1=0,row=0; i1 j = aj + ai[row] + ishift; > jmax = aj + ai[row+1] + ishift; > + if (j==jmax) continue; /* empty row */ > col = *j++ + ishift; > i2 = tvc[col]; > while (i2 > Satish > > On Thu, 26 May 2016, Hong wrote: > > > I'll investigate this - had a day off since yesterday. > > Hong > > > > On Thu, May 26, 2016 at 12:04 PM, Barry Smith > wrote: > > > > > > > > Hong needs to run with this matrix and add appropriate error > checkers in > > > the matrix routines to detect "incomplete" matrices and likely just > error > > > out. > > > > > > Barry > > > > > > > On May 26, 2016, at 11:23 AM, Satish Balay > wrote: > > > > > > > > Mat Object: 1 MPI processes > > > > type: mpiaij > > > > row 0: (0, 0.) (1, 0.486111) > > > > row 1: (0, 0.486111) (1, 0.) > > > > row 2: (2, 0.) (3, 0.486111) > > > > row 3: (4, 0.486111) (5, -0.486111) > > > > row 4: > > > > row 5: > > > > > > > > The matrix created is funny (empty rows at the end) - so perhaps its > > > > exposing bugs in Mat code? [is that a valid matrix for this code?] 
> > > > > > > > ==21091== Use of uninitialised value of size 8 > > > > ==21091== at 0x57CA16B: MatGetRowIJ_SeqAIJ_Inode_Symmetric > > > (inode.c:101) > > > > ==21091== by 0x57CBA1C: MatGetRowIJ_SeqAIJ_Inode (inode.c:241) > > > > ==21091== by 0x537C0B5: MatGetRowIJ (matrix.c:7274) > > > > ==21091== by 0x53072FD: MatGetOrdering_ND (spnd.c:18) > > > > ==21091== by 0x530BC39: MatGetOrdering (sorder.c:260) > > > > ==21091== by 0x530A72D: MatGetOrdering (sorder.c:202) > > > > ==21091== by 0x5DDD764: PCSetUp_LU (lu.c:124) > > > > ==21091== by 0x5EBFE60: PCSetUp (precon.c:968) > > > > ==21091== by 0x5FDA1B3: KSPSetUp (itfunc.c:390) > > > > ==21091== by 0x601C17D: kspsetup_ (itfuncf.c:252) > > > > ==21091== by 0x4028B9: MAIN__ (ex1f.F90:104) > > > > ==21091== by 0x403535: main (ex1f.F90:185) > > > > > > > > > > > > This goes away if I add: > > > > > > > > call PCFactorSetMatOrderingType(pc,MATORDERINGNATURAL,ierr) > > > > > > > > And then there is also: > > > > > > > > ==21275== Invalid read of size 8 > > > > ==21275== at 0x584DE93: MatGetBrowsOfAoCols_MPIAIJ (mpiaij.c:4734) > > > > ==21275== by 0x58970A8: > MatMatMultSymbolic_MPIAIJ_MPIAIJ_nonscalable > > > (mpimatmatmult.c:198) > > > > ==21275== by 0x5894A54: MatMatMult_MPIAIJ_MPIAIJ > (mpimatmatmult.c:34) > > > > ==21275== by 0x539664E: MatMatMult (matrix.c:9510) > > > > ==21275== by 0x53B3201: matmatmult_ (matrixf.c:1157) > > > > ==21275== by 0x402FC9: MAIN__ (ex1f.F90:149) > > > > ==21275== by 0x4035B9: main (ex1f.F90:186) > > > > ==21275== Address 0xa3d20f0 is 0 bytes after a block of size 48 > alloc'd > > > > ==21275== at 0x4C2DF93: memalign (vg_replace_malloc.c:858) > > > > ==21275== by 0x4FDE05E: PetscMallocAlign (mal.c:28) > > > > ==21275== by 0x5240240: VecScatterCreate (vscat.c:1220) > > > > ==21275== by 0x5857708: MatSetUpMultiply_MPIAIJ (mmaij.c:116) > > > > ==21275== by 0x581C31E: MatAssemblyEnd_MPIAIJ (mpiaij.c:747) > > > > ==21275== by 0x53680F2: MatAssemblyEnd (matrix.c:5187) > > > > ==21275== by 0x53B24D2: matassemblyend_ (matrixf.c:926) > > > > ==21275== by 0x40262C: MAIN__ (ex1f.F90:60) > > > > ==21275== by 0x4035B9: main (ex1f.F90:186) > > > > > > > > > > > > Satish > > > > > > > > ----------- > > > > > > > > $ diff build_nullbasis_petsc_mumps.F90 ex1f.F90 > > > > 3,7c3 > > > > < #include > > > > < #include "petsc/finclude/petscvec.h" > > > > < #include "petsc/finclude/petscmat.h" > > > > < #include "petsc/finclude/petscpc.h" > > > > < #include "petsc/finclude/petscksp.h" > > > > --- > > > >> #include "petsc/finclude/petsc.h" > > > > 40,41c36,37 > > > > < call PetscViewerBinaryOpen(PETSC_COMM_WORLD, "mat_c_bin.txt", 0, > > > viewer, ierr) > > > > < call MatLoad(mat_c, viewer) > > > > --- > > > >> call PetscViewerBinaryOpen(PETSC_COMM_WORLD, "mat_c_bin.txt", > > > FILE_MODE_READ, viewer, ierr) > > > >> call MatLoad(mat_c, viewer,ierr) > > > > 75a72 > > > >> call PCFactorSetMatOrderingType(pc,MATORDERINGNATURAL,ierr) > > > > 150c147 > > > > < call MatConvert(x, MATMPIAIJ, MAT_REUSE_MATRIX, x, ierr) > > > > --- > > > >> call MatConvert(x, MATMPIAIJ, MAT_INPLACE_MATRIX, x, ierr) > > > > > > > > > > > > On Thu, 26 May 2016, Matthew Knepley wrote: > > > > > > > >> Usually this means you have an uninitialized variable that is > causing > > > you > > > >> to overwrite memory. Fortran > > > >> is so lax in checking this, its one reason to switch to C. 
> > > >> > > > >> Thanks, > > > >> > > > >> Matt > > > >> > > > >> On Thu, May 26, 2016 at 1:46 AM, Constantin Nguyen Van < > > > >> constantin.nguyen.van at openmailbox.org> wrote: > > > >> > > > >>> Thanks for all your answers. > > > >>> I'm sorry for the syntax mistake in MatLoad, it was done > afterwards. > > > >>> > > > >>> I recompile PETSC --with-debugging=yes and run my code again. > > > >>> Now, I also have this strange behaviour. When I run the code > without > > > >>> valgrind and with one proc, I have this error message: > > > >>> > > > >>> BEGIN PROC 0 > > > >>> ITERATION 1 > > > >>> ECHO 1 > > > >>> ECHO 2 > > > >>> INFOG(28): 2 > > > >>> BASIS OK 0 > > > >>> END PROC 0 > > > >>> BEGIN PROC 0 > > > >>> ITERATION 2 > > > >>> ECHO 1 > > > >>> ECHO 2 > > > >>> INFOG(28): 2 > > > >>> BASIS OK 0 > > > >>> END PROC 0 > > > >>> BEGIN PROC 0 > > > >>> ITERATION 3 > > > >>> ECHO 1 > > > >>> [0]PETSC ERROR: > > > >>> > > > > ------------------------------------------------------------------------ > > > >>> [0]PETSC ERROR: Caught signal number 11 SEGV: Segmentation > Violation, > > > >>> probably memory access out of range > > > >>> [0]PETSC ERROR: Try option -start_in_debugger or > > > -on_error_attach_debugger > > > >>> [0]PETSC ERROR: or see > > > >>> http://www.mcs.anl.gov/petsc/documentation/faq.html#valgrind > > > >>> [0]PETSC ERROR: or try http://valgrind.org on GNU/linux and Apple > Mac > > > OS > > > >>> X to find memory corruption errors > > > >>> [0]PETSC ERROR: likely location of problem given in stack below > > > >>> [0]PETSC ERROR: --------------------- Stack Frames > > > >>> ------------------------------------ > > > >>> [0]PETSC ERROR: Note: The EXACT line numbers in the stack are not > > > >>> available, > > > >>> [0]PETSC ERROR: INSTEAD the line number of the start of the > > > function > > > >>> [0]PETSC ERROR: is given. > > > >>> [0]PETSC ERROR: [0] MatGetRowIJ_SeqAIJ_Inode_Symmetric line 69 > > > >>> > > > > /home/j10077/librairie/petsc-mumps/petsc-3.6.4/src/mat/impls/aij/seq/inode.c > > > >>> [0]PETSC ERROR: [0] MatGetRowIJ_SeqAIJ_Inode line 235 > > > >>> > > > > /home/j10077/librairie/petsc-mumps/petsc-3.6.4/src/mat/impls/aij/seq/inode.c > > > >>> [0]PETSC ERROR: [0] MatGetRowIJ line 7099 > > > >>> > > > > /home/j10077/librairie/petsc-mumps/petsc-3.6.4/src/mat/interface/matrix.c > > > >>> [0]PETSC ERROR: [0] MatGetOrdering_ND line 17 > > > >>> /home/j10077/librairie/petsc-mumps/petsc-3.6.4/src/mat/order/spnd.c > > > >>> [0]PETSC ERROR: [0] MatGetOrdering line 185 > > > >>> > /home/j10077/librairie/petsc-mumps/petsc-3.6.4/src/mat/order/sorder.c > > > >>> [0]PETSC ERROR: [0] MatGetOrdering line 185 > > > >>> > /home/j10077/librairie/petsc-mumps/petsc-3.6.4/src/mat/order/sorder.c > > > >>> [0]PETSC ERROR: [0] PCSetUp_LU line 99 > > > >>> > > > > /home/j10077/librairie/petsc-mumps/petsc-3.6.4/src/ksp/pc/impls/factor/lu/lu.c > > > >>> [0]PETSC ERROR: [0] PCSetUp line 945 > > > >>> > > > > /home/j10077/librairie/petsc-mumps/petsc-3.6.4/src/ksp/pc/interface/precon.c > > > >>> [0]PETSC ERROR: [0] KSPSetUp line 247 > > > >>> > > > > /home/j10077/librairie/petsc-mumps/petsc-3.6.4/src/ksp/ksp/interface/itfunc.c > > > >>> > > > >>> But when I run it with valgrind, it does work well. 
> > > >>> > > > >>> Le 2016-05-25 20:04, Barry Smith a ?crit : > > > >>> > > > >>>> First run with valgrind > > > >>>> http://www.mcs.anl.gov/petsc/documentation/faq.html#valgrind > > > >>>> > > > >>>> On May 25, 2016, at 2:35 AM, Constantin Nguyen Van > > > >>>>> wrote: > > > >>>>> > > > >>>>> Hi, > > > >>>>> > > > >>>>> I'm a new user of PETSc and I try to use it with MUMPS > > > >>>>> functionalities to compute a nullbasis. > > > >>>>> I wrote a code where I compute 4 times the same nullbasis. It > does > > > >>>>> work well when I run it with several procs but with only one > > > >>>>> processor I get an error on the 2nd iteration when KSPSetUp is > > > >>>>> called. Furthermore when it is run with a debugger ( > > > >>>>> --with-debugging=yes), it works fine with one or several > processors. > > > >>>>> Have you got any idea about why it doesn't work with one > processor > > > >>>>> and no debugger? > > > >>>>> > > > >>>>> Thanks. > > > >>>>> Constantin. > > > >>>>> > > > >>>>> PS: You can find the code and the files required to run it > enclosed. > > > >>>>> > > > >>>> > > > >> > > > >> > > > > > > > > > -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: ex51f.F90 Type: application/octet-stream Size: 7307 bytes Desc: not available URL: From jed at jedbrown.org Thu May 26 22:54:33 2016 From: jed at jedbrown.org (Jed Brown) Date: Thu, 26 May 2016 21:54:33 -0600 Subject: [petsc-users] MATELEMENTAL MatSetValue( ) In-Reply-To: <1B22639EA62F9B41AF2D9DE147555DF30BA9C12B@Mail.cfdrc.com> References: <1B22639EA62F9B41AF2D9DE147555DF30BA9BFF6@Mail.cfdrc.com> <8737p5xznx.fsf@jedbrown.org> <1B22639EA62F9B41AF2D9DE147555DF30BA9C12B@Mail.cfdrc.com> Message-ID: <87iny0updy.fsf@jedbrown.org> Please always use "reply-all" so that your messages go to the list. This is standard mailing list etiquette. It is important to preserve threading for people who find this discussion later and so that we do not waste our time re-answering the same questions that have already been answered in private side-conversations. You'll likely get an answer faster that way too. "Eric D. Robertson" writes: > Hi Jed, > > I actually tried your suggestion (which is in-line with ex38.c). Running that example actually seems to produce similar behavior. The 'C' matrix actually seems to grow in size with the number of processors. Is there something wrong here, or am I misinterpreting the purpose of the example? That test (NOT tutorial) uses ierr = MatSetSizes(C,m,n,PETSC_DECIDE,PETSC_DECIDE);CHKERRQ(ierr); which keeps the local sizes constant. You should be creating the matrix size that is semantically meaningful to you and setting the owned rows as identified by MatGetOwnershipIS(). > Eric Robertson > Research Engineer, BET > CFD Research Corporation > 701 McMillian Way NW, Suite D > Huntsville, AL 35806 > Ph: 256-726-4912 > Email: eric.robertson at cfdrc.com > > > ________________________________________ > From: Jed Brown [jed at jedbrown.org] > Sent: Wednesday, May 25, 2016 10:32 PM > To: Eric D. Robertson; petsc-users at mcs.anl.gov > Subject: Re: [petsc-users] MATELEMENTAL MatSetValue( ) > > Use MatGetOwnershipIS() and set only the entries in your owned rows and > columns. What you're doing is nonscalable anyway because every process > is filling the global matrix. If you want a cheap hack for small > problems on small numbers of cores, you can run your insertion loop on > only rank 0. > > "Eric D. 
Robertson" writes: > >> I am trying to multiply two dense matrices using the Elemental interface. I fill the matrix using MatSetValue( ) like below: >> >> for ( i = 0; i < Matrix.M; i++){ >> for ( j = 0; j < Matrix.N; j++) { >> PetscScalar temp = i + one + (j*three); >> MatSetValue(Matrix.A, i, j, temp, INSERT_VALUES); >> >> } >> } >> >> >> However, I seem to get the following error: >> >> [0]PETSC ERROR: No support for this operation for this object type >> [0]PETSC ERROR: Only ADD_VALUES to off-processor entry is supported >> >> But if I use ADD_VALUES, I get a different matrix depending on the number of processors used. The entries become multiplied by the number of processors. How do I reconcile this? >> >> >> > > static char help[] = "Test interface of Elemental. \n\n"; > > #include > > #undef __FUNCT__ > #define __FUNCT__ "main" > int main(int argc,char **args) > { > Mat C,Caij; > PetscInt i,j,m = 5,n,nrows,ncols; > const PetscInt *rows,*cols; > IS isrows,iscols; > PetscErrorCode ierr; > PetscBool flg,Test_MatMatMult=PETSC_FALSE,mats_view=PETSC_FALSE; > PetscScalar *v; > PetscMPIInt rank,size; > > PetscInitialize(&argc,&args,(char*)0,help); > ierr = MPI_Comm_rank(PETSC_COMM_WORLD,&rank);CHKERRQ(ierr); > ierr = MPI_Comm_size(PETSC_COMM_WORLD,&size);CHKERRQ(ierr); > ierr = PetscOptionsHasName(NULL,"-mats_view",&mats_view);CHKERRQ(ierr); > > /* Get local block or element size*/ > ierr = PetscOptionsGetInt(NULL,"-m",&m,NULL);CHKERRQ(ierr); > n = m; > ierr = PetscOptionsGetInt(NULL,"-n",&n,NULL);CHKERRQ(ierr); > > ierr = MatCreate(PETSC_COMM_WORLD,&C);CHKERRQ(ierr); > ierr = MatSetSizes(C,m,n,PETSC_DECIDE,PETSC_DECIDE);CHKERRQ(ierr); > ierr = MatSetType(C,MATELEMENTAL);CHKERRQ(ierr); > ierr = MatSetFromOptions(C);CHKERRQ(ierr); > ierr = MatSetUp(C);CHKERRQ(ierr); > > ierr = PetscOptionsHasName(NULL,"-row_oriented",&flg);CHKERRQ(ierr); > if (flg) {ierr = MatSetOption(C,MAT_ROW_ORIENTED,PETSC_TRUE);CHKERRQ(ierr);} > ierr = MatGetOwnershipIS(C,&isrows,&iscols);CHKERRQ(ierr); > ierr = PetscOptionsHasName(NULL,"-Cexp_view_ownership",&flg);CHKERRQ(ierr); > if (flg) { /* View ownership of explicit C */ > IS tmp; > ierr = PetscPrintf(PETSC_COMM_WORLD,"Ownership of explicit C:\n");CHKERRQ(ierr); > ierr = PetscPrintf(PETSC_COMM_WORLD,"Row index set:\n");CHKERRQ(ierr); > ierr = ISOnComm(isrows,PETSC_COMM_WORLD,PETSC_USE_POINTER,&tmp);CHKERRQ(ierr); > ierr = ISView(tmp,PETSC_VIEWER_STDOUT_WORLD);CHKERRQ(ierr); > ierr = ISDestroy(&tmp);CHKERRQ(ierr); > ierr = PetscPrintf(PETSC_COMM_WORLD,"Column index set:\n");CHKERRQ(ierr); > ierr = ISOnComm(iscols,PETSC_COMM_WORLD,PETSC_USE_POINTER,&tmp);CHKERRQ(ierr); > ierr = ISView(tmp,PETSC_VIEWER_STDOUT_WORLD);CHKERRQ(ierr); > ierr = ISDestroy(&tmp);CHKERRQ(ierr); > } > > /* Set local matrix entries */ > ierr = ISGetLocalSize(isrows,&nrows);CHKERRQ(ierr); > ierr = ISGetIndices(isrows,&rows);CHKERRQ(ierr); > ierr = ISGetLocalSize(iscols,&ncols);CHKERRQ(ierr); > ierr = ISGetIndices(iscols,&cols);CHKERRQ(ierr); > ierr = PetscMalloc1(nrows*ncols,&v);CHKERRQ(ierr); > for (i=0; i for (j=0; j /*v[i*ncols+j] = (PetscReal)(rank);*/ > v[i*ncols+j] = (PetscReal)(rank*10000+100*rows[i]+cols[j]); > } > } > ierr = MatSetValues(C,nrows,rows,ncols,cols,v,INSERT_VALUES);CHKERRQ(ierr); > ierr = ISRestoreIndices(isrows,&rows);CHKERRQ(ierr); > ierr = ISRestoreIndices(iscols,&cols);CHKERRQ(ierr); > ierr = MatAssemblyBegin(C,MAT_FINAL_ASSEMBLY);CHKERRQ(ierr); > ierr = MatAssemblyEnd(C,MAT_FINAL_ASSEMBLY);CHKERRQ(ierr); > > /* Test MatView() */ > if (mats_view) { > ierr = 
MatView(C,PETSC_VIEWER_STDOUT_WORLD);CHKERRQ(ierr); > } > > /* Set unowned matrix entries - add subdiagonals and diagonals from proc[0] */ > if (!rank) { > PetscInt M,N,cols[2]; > ierr = MatGetSize(C,&M,&N);CHKERRQ(ierr); > for (i=0; i cols[0] = i; v[0] = i + 0.5; > cols[1] = i-1; v[1] = 0.5; > if (i) { > ierr = MatSetValues(C,1,&i,2,cols,v,ADD_VALUES);CHKERRQ(ierr); > } else { > ierr = MatSetValues(C,1,&i,1,&i,v,ADD_VALUES);CHKERRQ(ierr); > } > } > } > ierr = MatAssemblyBegin(C,MAT_FINAL_ASSEMBLY);CHKERRQ(ierr); > ierr = MatAssemblyEnd(C,MAT_FINAL_ASSEMBLY);CHKERRQ(ierr); > > /* Test MatMult() */ > ierr = MatComputeExplicitOperator(C,&Caij);CHKERRQ(ierr); > ierr = MatMultEqual(C,Caij,5,&flg);CHKERRQ(ierr); > if (!flg) SETERRQ(PETSC_COMM_SELF,PETSC_ERR_ARG_NOTSAMETYPE,"C != Caij. MatMultEqual() fails"); > ierr = MatMultTransposeEqual(C,Caij,5,&flg);CHKERRQ(ierr); > if (!flg) SETERRQ(PETSC_COMM_SELF,PETSC_ERR_ARG_NOTSAMETYPE,"C != Caij. MatMultTransposeEqual() fails"); > > /* Test MatMultAdd() and MatMultTransposeAddEqual() */ > ierr = MatMultAddEqual(C,Caij,5,&flg);CHKERRQ(ierr); > if (!flg) SETERRQ(PETSC_COMM_SELF,PETSC_ERR_ARG_NOTSAMETYPE,"C != Caij. MatMultAddEqual() fails"); > ierr = MatMultTransposeAddEqual(C,Caij,5,&flg);CHKERRQ(ierr); > if (!flg) SETERRQ(PETSC_COMM_SELF,PETSC_ERR_ARG_NOTSAMETYPE,"C != Caij. MatMultTransposeAddEqual() fails"); > > /* Test MatMatMult() */ > ierr = PetscOptionsHasName(NULL,"-test_matmatmult",&Test_MatMatMult);CHKERRQ(ierr); > if (Test_MatMatMult) { > Mat CCelem,CCaij; > ierr = MatMatMult(C,C,MAT_INITIAL_MATRIX,PETSC_DEFAULT,&CCelem);CHKERRQ(ierr); > ierr = MatMatMult(Caij,Caij,MAT_INITIAL_MATRIX,PETSC_DEFAULT,&CCaij);CHKERRQ(ierr); > ierr = MatMultEqual(CCelem,CCaij,5,&flg);CHKERRQ(ierr); > if (!flg) SETERRQ(PETSC_COMM_SELF,PETSC_ERR_ARG_NOTSAMETYPE,"CCelem != CCaij. MatMatMult() fails"); > ierr = MatDestroy(&CCaij);CHKERRQ(ierr); > ierr = MatDestroy(&CCelem);CHKERRQ(ierr); > } > > ierr = PetscFree(v);CHKERRQ(ierr); > ierr = ISDestroy(&isrows);CHKERRQ(ierr); > ierr = ISDestroy(&iscols);CHKERRQ(ierr); > ierr = MatDestroy(&C);CHKERRQ(ierr); > ierr = MatDestroy(&Caij);CHKERRQ(ierr); > ierr = PetscFinalize(); > return 0; > } -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 818 bytes Desc: not available URL: From hzhang at mcs.anl.gov Fri May 27 09:55:24 2016 From: hzhang at mcs.anl.gov (Hong) Date: Fri, 27 May 2016 09:55:24 -0500 Subject: [petsc-users] KSPSetUp with PETSc/MUMPS In-Reply-To: References: <3BE03BC7-BCD6-4FA3-8B2C-C701C623EA36@mcs.anl.gov> <4f7fbc5026a5412bfa1fea12a7a9296b@openmailbox.org> Message-ID: Satish, I tested your fix on ex51f.F90 (modified from build_nullbasis_petsc_mumps.F90) --it gives clean results with valgrind. Shall you patch it to petsc-maint? I also like add ex51f.F90 (contributed by Constantin) to petsc/src/ksp/ksp/examples/tests/. Hong On Thu, May 26, 2016 at 5:15 PM, Hong wrote: > Satish found a problem in using inode routines. > > In addition, user code has bugs. I modified > build_nullbasis_petsc_mumps.F90 into ex51f.F90 (attached) > which works well with option '-mat_no_inode'. > > ex51f.F90 differs from build_nullbasis_petsc_mumps.F90 in > 1) use MATAIJ/MATDENSE instead of MATMPIAIJ/MATMPIDENSE > MATAIJ wraps MATSEQAIJ and MATMPIAIJ. 
> > 2) > MatConvert(x, MATMPIAIJ, MAT_REUSE_MATRIX, x,ierr) > -> > MatConvert(x, MATMPIAIJ, MAT_INPLACE_MATRIX, x,ierr) > see > http://www.mcs.anl.gov/petsc/petsc-current/docs/manualpages/Mat/MatConvert.html > > Hong > > On Thu, May 26, 2016 at 3:05 PM, Satish Balay wrote: > >> Well looks like MatGetBrowsOfAoCols_MPIAIJ() issue is primarily >> setting some local variables with uninitialzed data [thats primarily >> set/used for parallel commumication]. So valgrind flags it - but I >> don't think it gets used later on. >> >> [perhaps most of the code should be skipped for a sequential run..] >> >> The primary issue here is MatGetRowIJ_SeqAIJ_Inode_Symmetric() called >> by MatGetOrdering_ND(). >> >> The workarround is to not use ND with: >> call PCFactorSetMatOrderingType(pc,MATORDERINGNATURAL,ierr) >> >> But I think the following might be the fix [have to recheck].. The >> test code works with this change [with the default ND] >> >> diff --git a/src/mat/impls/aij/seq/inode.c b/src/mat/impls/aij/seq/inode.c >> index 9af404e..49f76ce 100644 >> --- a/src/mat/impls/aij/seq/inode.c >> +++ b/src/mat/impls/aij/seq/inode.c >> @@ -97,6 +97,7 @@ static PetscErrorCode >> MatGetRowIJ_SeqAIJ_Inode_Symmetric(Mat A,const PetscInt *i >> >> j = aj + ai[row] + ishift; >> jmax = aj + ai[row+1] + ishift; >> + if (j==jmax) continue; /* empty row */ >> col = *j++ + ishift; >> i2 = tvc[col]; >> while (i2> 2.[-xx-------],off-diagonal elemets */ >> @@ -125,6 +126,7 @@ static PetscErrorCode >> MatGetRowIJ_SeqAIJ_Inode_Symmetric(Mat A,const PetscInt *i >> for (i1=0,row=0; i1> j = aj + ai[row] + ishift; >> jmax = aj + ai[row+1] + ishift; >> + if (j==jmax) continue; /* empty row */ >> col = *j++ + ishift; >> i2 = tvc[col]; >> while (i2> >> Satish >> >> On Thu, 26 May 2016, Hong wrote: >> >> > I'll investigate this - had a day off since yesterday. >> > Hong >> > >> > On Thu, May 26, 2016 at 12:04 PM, Barry Smith >> wrote: >> > >> > > >> > > Hong needs to run with this matrix and add appropriate error >> checkers in >> > > the matrix routines to detect "incomplete" matrices and likely just >> error >> > > out. >> > > >> > > Barry >> > > >> > > > On May 26, 2016, at 11:23 AM, Satish Balay >> wrote: >> > > > >> > > > Mat Object: 1 MPI processes >> > > > type: mpiaij >> > > > row 0: (0, 0.) (1, 0.486111) >> > > > row 1: (0, 0.486111) (1, 0.) >> > > > row 2: (2, 0.) (3, 0.486111) >> > > > row 3: (4, 0.486111) (5, -0.486111) >> > > > row 4: >> > > > row 5: >> > > > >> > > > The matrix created is funny (empty rows at the end) - so perhaps its >> > > > exposing bugs in Mat code? [is that a valid matrix for this code?] 
>> > > > >> > > > ==21091== Use of uninitialised value of size 8 >> > > > ==21091== at 0x57CA16B: MatGetRowIJ_SeqAIJ_Inode_Symmetric >> > > (inode.c:101) >> > > > ==21091== by 0x57CBA1C: MatGetRowIJ_SeqAIJ_Inode (inode.c:241) >> > > > ==21091== by 0x537C0B5: MatGetRowIJ (matrix.c:7274) >> > > > ==21091== by 0x53072FD: MatGetOrdering_ND (spnd.c:18) >> > > > ==21091== by 0x530BC39: MatGetOrdering (sorder.c:260) >> > > > ==21091== by 0x530A72D: MatGetOrdering (sorder.c:202) >> > > > ==21091== by 0x5DDD764: PCSetUp_LU (lu.c:124) >> > > > ==21091== by 0x5EBFE60: PCSetUp (precon.c:968) >> > > > ==21091== by 0x5FDA1B3: KSPSetUp (itfunc.c:390) >> > > > ==21091== by 0x601C17D: kspsetup_ (itfuncf.c:252) >> > > > ==21091== by 0x4028B9: MAIN__ (ex1f.F90:104) >> > > > ==21091== by 0x403535: main (ex1f.F90:185) >> > > > >> > > > >> > > > This goes away if I add: >> > > > >> > > > call PCFactorSetMatOrderingType(pc,MATORDERINGNATURAL,ierr) >> > > > >> > > > And then there is also: >> > > > >> > > > ==21275== Invalid read of size 8 >> > > > ==21275== at 0x584DE93: MatGetBrowsOfAoCols_MPIAIJ >> (mpiaij.c:4734) >> > > > ==21275== by 0x58970A8: >> MatMatMultSymbolic_MPIAIJ_MPIAIJ_nonscalable >> > > (mpimatmatmult.c:198) >> > > > ==21275== by 0x5894A54: MatMatMult_MPIAIJ_MPIAIJ >> (mpimatmatmult.c:34) >> > > > ==21275== by 0x539664E: MatMatMult (matrix.c:9510) >> > > > ==21275== by 0x53B3201: matmatmult_ (matrixf.c:1157) >> > > > ==21275== by 0x402FC9: MAIN__ (ex1f.F90:149) >> > > > ==21275== by 0x4035B9: main (ex1f.F90:186) >> > > > ==21275== Address 0xa3d20f0 is 0 bytes after a block of size 48 >> alloc'd >> > > > ==21275== at 0x4C2DF93: memalign (vg_replace_malloc.c:858) >> > > > ==21275== by 0x4FDE05E: PetscMallocAlign (mal.c:28) >> > > > ==21275== by 0x5240240: VecScatterCreate (vscat.c:1220) >> > > > ==21275== by 0x5857708: MatSetUpMultiply_MPIAIJ (mmaij.c:116) >> > > > ==21275== by 0x581C31E: MatAssemblyEnd_MPIAIJ (mpiaij.c:747) >> > > > ==21275== by 0x53680F2: MatAssemblyEnd (matrix.c:5187) >> > > > ==21275== by 0x53B24D2: matassemblyend_ (matrixf.c:926) >> > > > ==21275== by 0x40262C: MAIN__ (ex1f.F90:60) >> > > > ==21275== by 0x4035B9: main (ex1f.F90:186) >> > > > >> > > > >> > > > Satish >> > > > >> > > > ----------- >> > > > >> > > > $ diff build_nullbasis_petsc_mumps.F90 ex1f.F90 >> > > > 3,7c3 >> > > > < #include >> > > > < #include "petsc/finclude/petscvec.h" >> > > > < #include "petsc/finclude/petscmat.h" >> > > > < #include "petsc/finclude/petscpc.h" >> > > > < #include "petsc/finclude/petscksp.h" >> > > > --- >> > > >> #include "petsc/finclude/petsc.h" >> > > > 40,41c36,37 >> > > > < call PetscViewerBinaryOpen(PETSC_COMM_WORLD, "mat_c_bin.txt", >> 0, >> > > viewer, ierr) >> > > > < call MatLoad(mat_c, viewer) >> > > > --- >> > > >> call PetscViewerBinaryOpen(PETSC_COMM_WORLD, "mat_c_bin.txt", >> > > FILE_MODE_READ, viewer, ierr) >> > > >> call MatLoad(mat_c, viewer,ierr) >> > > > 75a72 >> > > >> call PCFactorSetMatOrderingType(pc,MATORDERINGNATURAL,ierr) >> > > > 150c147 >> > > > < call MatConvert(x, MATMPIAIJ, MAT_REUSE_MATRIX, x, ierr) >> > > > --- >> > > >> call MatConvert(x, MATMPIAIJ, MAT_INPLACE_MATRIX, x, ierr) >> > > > >> > > > >> > > > On Thu, 26 May 2016, Matthew Knepley wrote: >> > > > >> > > >> Usually this means you have an uninitialized variable that is >> causing >> > > you >> > > >> to overwrite memory. Fortran >> > > >> is so lax in checking this, its one reason to switch to C. 
>> > > >> >> > > >> Thanks, >> > > >> >> > > >> Matt >> > > >> >> > > >> On Thu, May 26, 2016 at 1:46 AM, Constantin Nguyen Van < >> > > >> constantin.nguyen.van at openmailbox.org> wrote: >> > > >> >> > > >>> Thanks for all your answers. >> > > >>> I'm sorry for the syntax mistake in MatLoad, it was done >> afterwards. >> > > >>> >> > > >>> I recompile PETSC --with-debugging=yes and run my code again. >> > > >>> Now, I also have this strange behaviour. When I run the code >> without >> > > >>> valgrind and with one proc, I have this error message: >> > > >>> >> > > >>> BEGIN PROC 0 >> > > >>> ITERATION 1 >> > > >>> ECHO 1 >> > > >>> ECHO 2 >> > > >>> INFOG(28): 2 >> > > >>> BASIS OK 0 >> > > >>> END PROC 0 >> > > >>> BEGIN PROC 0 >> > > >>> ITERATION 2 >> > > >>> ECHO 1 >> > > >>> ECHO 2 >> > > >>> INFOG(28): 2 >> > > >>> BASIS OK 0 >> > > >>> END PROC 0 >> > > >>> BEGIN PROC 0 >> > > >>> ITERATION 3 >> > > >>> ECHO 1 >> > > >>> [0]PETSC ERROR: >> > > >>> >> > > >> ------------------------------------------------------------------------ >> > > >>> [0]PETSC ERROR: Caught signal number 11 SEGV: Segmentation >> Violation, >> > > >>> probably memory access out of range >> > > >>> [0]PETSC ERROR: Try option -start_in_debugger or >> > > -on_error_attach_debugger >> > > >>> [0]PETSC ERROR: or see >> > > >>> http://www.mcs.anl.gov/petsc/documentation/faq.html#valgrind >> > > >>> [0]PETSC ERROR: or try http://valgrind.org on GNU/linux and >> Apple Mac >> > > OS >> > > >>> X to find memory corruption errors >> > > >>> [0]PETSC ERROR: likely location of problem given in stack below >> > > >>> [0]PETSC ERROR: --------------------- Stack Frames >> > > >>> ------------------------------------ >> > > >>> [0]PETSC ERROR: Note: The EXACT line numbers in the stack are not >> > > >>> available, >> > > >>> [0]PETSC ERROR: INSTEAD the line number of the start of the >> > > function >> > > >>> [0]PETSC ERROR: is given. >> > > >>> [0]PETSC ERROR: [0] MatGetRowIJ_SeqAIJ_Inode_Symmetric line 69 >> > > >>> >> > > >> /home/j10077/librairie/petsc-mumps/petsc-3.6.4/src/mat/impls/aij/seq/inode.c >> > > >>> [0]PETSC ERROR: [0] MatGetRowIJ_SeqAIJ_Inode line 235 >> > > >>> >> > > >> /home/j10077/librairie/petsc-mumps/petsc-3.6.4/src/mat/impls/aij/seq/inode.c >> > > >>> [0]PETSC ERROR: [0] MatGetRowIJ line 7099 >> > > >>> >> > > >> /home/j10077/librairie/petsc-mumps/petsc-3.6.4/src/mat/interface/matrix.c >> > > >>> [0]PETSC ERROR: [0] MatGetOrdering_ND line 17 >> > > >>> >> /home/j10077/librairie/petsc-mumps/petsc-3.6.4/src/mat/order/spnd.c >> > > >>> [0]PETSC ERROR: [0] MatGetOrdering line 185 >> > > >>> >> /home/j10077/librairie/petsc-mumps/petsc-3.6.4/src/mat/order/sorder.c >> > > >>> [0]PETSC ERROR: [0] MatGetOrdering line 185 >> > > >>> >> /home/j10077/librairie/petsc-mumps/petsc-3.6.4/src/mat/order/sorder.c >> > > >>> [0]PETSC ERROR: [0] PCSetUp_LU line 99 >> > > >>> >> > > >> /home/j10077/librairie/petsc-mumps/petsc-3.6.4/src/ksp/pc/impls/factor/lu/lu.c >> > > >>> [0]PETSC ERROR: [0] PCSetUp line 945 >> > > >>> >> > > >> /home/j10077/librairie/petsc-mumps/petsc-3.6.4/src/ksp/pc/interface/precon.c >> > > >>> [0]PETSC ERROR: [0] KSPSetUp line 247 >> > > >>> >> > > >> /home/j10077/librairie/petsc-mumps/petsc-3.6.4/src/ksp/ksp/interface/itfunc.c >> > > >>> >> > > >>> But when I run it with valgrind, it does work well. 
>> > > >>> >> > > >>> Le 2016-05-25 20:04, Barry Smith a ?crit : >> > > >>> >> > > >>>> First run with valgrind >> > > >>>> http://www.mcs.anl.gov/petsc/documentation/faq.html#valgrind >> > > >>>> >> > > >>>> On May 25, 2016, at 2:35 AM, Constantin Nguyen Van >> > > >>>>> wrote: >> > > >>>>> >> > > >>>>> Hi, >> > > >>>>> >> > > >>>>> I'm a new user of PETSc and I try to use it with MUMPS >> > > >>>>> functionalities to compute a nullbasis. >> > > >>>>> I wrote a code where I compute 4 times the same nullbasis. It >> does >> > > >>>>> work well when I run it with several procs but with only one >> > > >>>>> processor I get an error on the 2nd iteration when KSPSetUp is >> > > >>>>> called. Furthermore when it is run with a debugger ( >> > > >>>>> --with-debugging=yes), it works fine with one or several >> processors. >> > > >>>>> Have you got any idea about why it doesn't work with one >> processor >> > > >>>>> and no debugger? >> > > >>>>> >> > > >>>>> Thanks. >> > > >>>>> Constantin. >> > > >>>>> >> > > >>>>> PS: You can find the code and the files required to run it >> enclosed. >> > > >>>>> >> > > >>>> >> > > >> >> > > >> >> > > >> > > >> > >> > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From balay at mcs.anl.gov Fri May 27 10:08:38 2016 From: balay at mcs.anl.gov (Satish Balay) Date: Fri, 27 May 2016 10:08:38 -0500 Subject: [petsc-users] KSPSetUp with PETSc/MUMPS In-Reply-To: References: <3BE03BC7-BCD6-4FA3-8B2C-C701C623EA36@mcs.anl.gov> <4f7fbc5026a5412bfa1fea12a7a9296b@openmailbox.org> Message-ID: Hong, My fix is already in next [and will merge to maint now]. https://bitbucket.org/petsc/petsc/commits/83fed2ed878cb731bb04364f986d423ef53d20e6 I was hoping you would check the issue with valgrind messages on MatGetBrowsOfAoCols_MPIAIJ() [As mentioned in my earlier mail - its probably setting some local variables with uninitialized data for sequential runs - and perhaps can be fixed by not doing that..] And I've indicated my changes to Constantin's code in my earlier e-mail. [I don't see some of my changes in your diff..] Satish On Fri, 27 May 2016, Hong wrote: > Satish, > I tested your fix on ex51f.F90 (modified from > build_nullbasis_petsc_mumps.F90) --it gives clean results with valgrind. > > Shall you patch it to petsc-maint? > > I also like add ex51f.F90 (contributed by Constantin) > to petsc/src/ksp/ksp/examples/tests/. > > Hong > > > On Thu, May 26, 2016 at 5:15 PM, Hong wrote: > > > Satish found a problem in using inode routines. > > > > In addition, user code has bugs. I modified > > build_nullbasis_petsc_mumps.F90 into ex51f.F90 (attached) > > which works well with option '-mat_no_inode'. > > > > ex51f.F90 differs from build_nullbasis_petsc_mumps.F90 in > > 1) use MATAIJ/MATDENSE instead of MATMPIAIJ/MATMPIDENSE > > MATAIJ wraps MATSEQAIJ and MATMPIAIJ. > > > > 2) > > MatConvert(x, MATMPIAIJ, MAT_REUSE_MATRIX, x,ierr) > > -> > > MatConvert(x, MATMPIAIJ, MAT_INPLACE_MATRIX, x,ierr) > > see > > http://www.mcs.anl.gov/petsc/petsc-current/docs/manualpages/Mat/MatConvert.html > > > > Hong > > > > On Thu, May 26, 2016 at 3:05 PM, Satish Balay wrote: > > > >> Well looks like MatGetBrowsOfAoCols_MPIAIJ() issue is primarily > >> setting some local variables with uninitialzed data [thats primarily > >> set/used for parallel commumication]. So valgrind flags it - but I > >> don't think it gets used later on. > >> > >> [perhaps most of the code should be skipped for a sequential run..] 
> >> > >> The primary issue here is MatGetRowIJ_SeqAIJ_Inode_Symmetric() called > >> by MatGetOrdering_ND(). > >> > >> The workarround is to not use ND with: > >> call PCFactorSetMatOrderingType(pc,MATORDERINGNATURAL,ierr) > >> > >> But I think the following might be the fix [have to recheck].. The > >> test code works with this change [with the default ND] > >> > >> diff --git a/src/mat/impls/aij/seq/inode.c b/src/mat/impls/aij/seq/inode.c > >> index 9af404e..49f76ce 100644 > >> --- a/src/mat/impls/aij/seq/inode.c > >> +++ b/src/mat/impls/aij/seq/inode.c > >> @@ -97,6 +97,7 @@ static PetscErrorCode > >> MatGetRowIJ_SeqAIJ_Inode_Symmetric(Mat A,const PetscInt *i > >> > >> j = aj + ai[row] + ishift; > >> jmax = aj + ai[row+1] + ishift; > >> + if (j==jmax) continue; /* empty row */ > >> col = *j++ + ishift; > >> i2 = tvc[col]; > >> while (i2 >> 2.[-xx-------],off-diagonal elemets */ > >> @@ -125,6 +126,7 @@ static PetscErrorCode > >> MatGetRowIJ_SeqAIJ_Inode_Symmetric(Mat A,const PetscInt *i > >> for (i1=0,row=0; i1 >> j = aj + ai[row] + ishift; > >> jmax = aj + ai[row+1] + ishift; > >> + if (j==jmax) continue; /* empty row */ > >> col = *j++ + ishift; > >> i2 = tvc[col]; > >> while (i2 >> > >> Satish > >> > >> On Thu, 26 May 2016, Hong wrote: > >> > >> > I'll investigate this - had a day off since yesterday. > >> > Hong > >> > > >> > On Thu, May 26, 2016 at 12:04 PM, Barry Smith > >> wrote: > >> > > >> > > > >> > > Hong needs to run with this matrix and add appropriate error > >> checkers in > >> > > the matrix routines to detect "incomplete" matrices and likely just > >> error > >> > > out. > >> > > > >> > > Barry > >> > > > >> > > > On May 26, 2016, at 11:23 AM, Satish Balay > >> wrote: > >> > > > > >> > > > Mat Object: 1 MPI processes > >> > > > type: mpiaij > >> > > > row 0: (0, 0.) (1, 0.486111) > >> > > > row 1: (0, 0.486111) (1, 0.) > >> > > > row 2: (2, 0.) (3, 0.486111) > >> > > > row 3: (4, 0.486111) (5, -0.486111) > >> > > > row 4: > >> > > > row 5: > >> > > > > >> > > > The matrix created is funny (empty rows at the end) - so perhaps its > >> > > > exposing bugs in Mat code? [is that a valid matrix for this code?] 
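For reference, the ordering workaround described earlier in this message, written out as a minimal C sketch (the thread itself uses the equivalent Fortran call; the helper name here is invented and error handling is reduced to CHKERRQ):

#include <petscksp.h>

/* Avoid the crashing ND path by selecting the natural ordering for the
   LU factorization, as suggested above.  This mirrors the Fortran call
   PCFactorSetMatOrderingType(pc,MATORDERINGNATURAL,ierr) quoted in the
   thread; UseNaturalOrdering is just an illustrative wrapper name. */
PetscErrorCode UseNaturalOrdering(KSP ksp)
{
  PC             pc;
  PetscErrorCode ierr;

  PetscFunctionBeginUser;
  ierr = KSPGetPC(ksp,&pc);CHKERRQ(ierr);
  ierr = PCSetType(pc,PCLU);CHKERRQ(ierr);
  ierr = PCFactorSetMatOrderingType(pc,MATORDERINGNATURAL);CHKERRQ(ierr);
  PetscFunctionReturn(0);
}

The same choice can also be made at run time with -pc_factor_mat_ordering_type natural, and inodes can be disabled entirely with the -mat_no_inode option mentioned elsewhere in the thread.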
> >> > > > > >> > > > ==21091== Use of uninitialised value of size 8 > >> > > > ==21091== at 0x57CA16B: MatGetRowIJ_SeqAIJ_Inode_Symmetric > >> > > (inode.c:101) > >> > > > ==21091== by 0x57CBA1C: MatGetRowIJ_SeqAIJ_Inode (inode.c:241) > >> > > > ==21091== by 0x537C0B5: MatGetRowIJ (matrix.c:7274) > >> > > > ==21091== by 0x53072FD: MatGetOrdering_ND (spnd.c:18) > >> > > > ==21091== by 0x530BC39: MatGetOrdering (sorder.c:260) > >> > > > ==21091== by 0x530A72D: MatGetOrdering (sorder.c:202) > >> > > > ==21091== by 0x5DDD764: PCSetUp_LU (lu.c:124) > >> > > > ==21091== by 0x5EBFE60: PCSetUp (precon.c:968) > >> > > > ==21091== by 0x5FDA1B3: KSPSetUp (itfunc.c:390) > >> > > > ==21091== by 0x601C17D: kspsetup_ (itfuncf.c:252) > >> > > > ==21091== by 0x4028B9: MAIN__ (ex1f.F90:104) > >> > > > ==21091== by 0x403535: main (ex1f.F90:185) > >> > > > > >> > > > > >> > > > This goes away if I add: > >> > > > > >> > > > call PCFactorSetMatOrderingType(pc,MATORDERINGNATURAL,ierr) > >> > > > > >> > > > And then there is also: > >> > > > > >> > > > ==21275== Invalid read of size 8 > >> > > > ==21275== at 0x584DE93: MatGetBrowsOfAoCols_MPIAIJ > >> (mpiaij.c:4734) > >> > > > ==21275== by 0x58970A8: > >> MatMatMultSymbolic_MPIAIJ_MPIAIJ_nonscalable > >> > > (mpimatmatmult.c:198) > >> > > > ==21275== by 0x5894A54: MatMatMult_MPIAIJ_MPIAIJ > >> (mpimatmatmult.c:34) > >> > > > ==21275== by 0x539664E: MatMatMult (matrix.c:9510) > >> > > > ==21275== by 0x53B3201: matmatmult_ (matrixf.c:1157) > >> > > > ==21275== by 0x402FC9: MAIN__ (ex1f.F90:149) > >> > > > ==21275== by 0x4035B9: main (ex1f.F90:186) > >> > > > ==21275== Address 0xa3d20f0 is 0 bytes after a block of size 48 > >> alloc'd > >> > > > ==21275== at 0x4C2DF93: memalign (vg_replace_malloc.c:858) > >> > > > ==21275== by 0x4FDE05E: PetscMallocAlign (mal.c:28) > >> > > > ==21275== by 0x5240240: VecScatterCreate (vscat.c:1220) > >> > > > ==21275== by 0x5857708: MatSetUpMultiply_MPIAIJ (mmaij.c:116) > >> > > > ==21275== by 0x581C31E: MatAssemblyEnd_MPIAIJ (mpiaij.c:747) > >> > > > ==21275== by 0x53680F2: MatAssemblyEnd (matrix.c:5187) > >> > > > ==21275== by 0x53B24D2: matassemblyend_ (matrixf.c:926) > >> > > > ==21275== by 0x40262C: MAIN__ (ex1f.F90:60) > >> > > > ==21275== by 0x4035B9: main (ex1f.F90:186) > >> > > > > >> > > > > >> > > > Satish > >> > > > > >> > > > ----------- > >> > > > > >> > > > $ diff build_nullbasis_petsc_mumps.F90 ex1f.F90 > >> > > > 3,7c3 > >> > > > < #include > >> > > > < #include "petsc/finclude/petscvec.h" > >> > > > < #include "petsc/finclude/petscmat.h" > >> > > > < #include "petsc/finclude/petscpc.h" > >> > > > < #include "petsc/finclude/petscksp.h" > >> > > > --- > >> > > >> #include "petsc/finclude/petsc.h" > >> > > > 40,41c36,37 > >> > > > < call PetscViewerBinaryOpen(PETSC_COMM_WORLD, "mat_c_bin.txt", > >> 0, > >> > > viewer, ierr) > >> > > > < call MatLoad(mat_c, viewer) > >> > > > --- > >> > > >> call PetscViewerBinaryOpen(PETSC_COMM_WORLD, "mat_c_bin.txt", > >> > > FILE_MODE_READ, viewer, ierr) > >> > > >> call MatLoad(mat_c, viewer,ierr) > >> > > > 75a72 > >> > > >> call PCFactorSetMatOrderingType(pc,MATORDERINGNATURAL,ierr) > >> > > > 150c147 > >> > > > < call MatConvert(x, MATMPIAIJ, MAT_REUSE_MATRIX, x, ierr) > >> > > > --- > >> > > >> call MatConvert(x, MATMPIAIJ, MAT_INPLACE_MATRIX, x, ierr) > >> > > > > >> > > > > >> > > > On Thu, 26 May 2016, Matthew Knepley wrote: > >> > > > > >> > > >> Usually this means you have an uninitialized variable that is > >> causing > >> > > you > >> > > >> to overwrite memory. 
Fortran > >> > > >> is so lax in checking this, its one reason to switch to C. > >> > > >> > >> > > >> Thanks, > >> > > >> > >> > > >> Matt > >> > > >> > >> > > >> On Thu, May 26, 2016 at 1:46 AM, Constantin Nguyen Van < > >> > > >> constantin.nguyen.van at openmailbox.org> wrote: > >> > > >> > >> > > >>> Thanks for all your answers. > >> > > >>> I'm sorry for the syntax mistake in MatLoad, it was done > >> afterwards. > >> > > >>> > >> > > >>> I recompile PETSC --with-debugging=yes and run my code again. > >> > > >>> Now, I also have this strange behaviour. When I run the code > >> without > >> > > >>> valgrind and with one proc, I have this error message: > >> > > >>> > >> > > >>> BEGIN PROC 0 > >> > > >>> ITERATION 1 > >> > > >>> ECHO 1 > >> > > >>> ECHO 2 > >> > > >>> INFOG(28): 2 > >> > > >>> BASIS OK 0 > >> > > >>> END PROC 0 > >> > > >>> BEGIN PROC 0 > >> > > >>> ITERATION 2 > >> > > >>> ECHO 1 > >> > > >>> ECHO 2 > >> > > >>> INFOG(28): 2 > >> > > >>> BASIS OK 0 > >> > > >>> END PROC 0 > >> > > >>> BEGIN PROC 0 > >> > > >>> ITERATION 3 > >> > > >>> ECHO 1 > >> > > >>> [0]PETSC ERROR: > >> > > >>> > >> > > > >> ------------------------------------------------------------------------ > >> > > >>> [0]PETSC ERROR: Caught signal number 11 SEGV: Segmentation > >> Violation, > >> > > >>> probably memory access out of range > >> > > >>> [0]PETSC ERROR: Try option -start_in_debugger or > >> > > -on_error_attach_debugger > >> > > >>> [0]PETSC ERROR: or see > >> > > >>> http://www.mcs.anl.gov/petsc/documentation/faq.html#valgrind > >> > > >>> [0]PETSC ERROR: or try http://valgrind.org on GNU/linux and > >> Apple Mac > >> > > OS > >> > > >>> X to find memory corruption errors > >> > > >>> [0]PETSC ERROR: likely location of problem given in stack below > >> > > >>> [0]PETSC ERROR: --------------------- Stack Frames > >> > > >>> ------------------------------------ > >> > > >>> [0]PETSC ERROR: Note: The EXACT line numbers in the stack are not > >> > > >>> available, > >> > > >>> [0]PETSC ERROR: INSTEAD the line number of the start of the > >> > > function > >> > > >>> [0]PETSC ERROR: is given. 
> >> > > >>> [0]PETSC ERROR: [0] MatGetRowIJ_SeqAIJ_Inode_Symmetric line 69 > >> > > >>> > >> > > > >> /home/j10077/librairie/petsc-mumps/petsc-3.6.4/src/mat/impls/aij/seq/inode.c > >> > > >>> [0]PETSC ERROR: [0] MatGetRowIJ_SeqAIJ_Inode line 235 > >> > > >>> > >> > > > >> /home/j10077/librairie/petsc-mumps/petsc-3.6.4/src/mat/impls/aij/seq/inode.c > >> > > >>> [0]PETSC ERROR: [0] MatGetRowIJ line 7099 > >> > > >>> > >> > > > >> /home/j10077/librairie/petsc-mumps/petsc-3.6.4/src/mat/interface/matrix.c > >> > > >>> [0]PETSC ERROR: [0] MatGetOrdering_ND line 17 > >> > > >>> > >> /home/j10077/librairie/petsc-mumps/petsc-3.6.4/src/mat/order/spnd.c > >> > > >>> [0]PETSC ERROR: [0] MatGetOrdering line 185 > >> > > >>> > >> /home/j10077/librairie/petsc-mumps/petsc-3.6.4/src/mat/order/sorder.c > >> > > >>> [0]PETSC ERROR: [0] MatGetOrdering line 185 > >> > > >>> > >> /home/j10077/librairie/petsc-mumps/petsc-3.6.4/src/mat/order/sorder.c > >> > > >>> [0]PETSC ERROR: [0] PCSetUp_LU line 99 > >> > > >>> > >> > > > >> /home/j10077/librairie/petsc-mumps/petsc-3.6.4/src/ksp/pc/impls/factor/lu/lu.c > >> > > >>> [0]PETSC ERROR: [0] PCSetUp line 945 > >> > > >>> > >> > > > >> /home/j10077/librairie/petsc-mumps/petsc-3.6.4/src/ksp/pc/interface/precon.c > >> > > >>> [0]PETSC ERROR: [0] KSPSetUp line 247 > >> > > >>> > >> > > > >> /home/j10077/librairie/petsc-mumps/petsc-3.6.4/src/ksp/ksp/interface/itfunc.c > >> > > >>> > >> > > >>> But when I run it with valgrind, it does work well. > >> > > >>> > >> > > >>> Le 2016-05-25 20:04, Barry Smith a ?crit : > >> > > >>> > >> > > >>>> First run with valgrind > >> > > >>>> http://www.mcs.anl.gov/petsc/documentation/faq.html#valgrind > >> > > >>>> > >> > > >>>> On May 25, 2016, at 2:35 AM, Constantin Nguyen Van > >> > > >>>>> wrote: > >> > > >>>>> > >> > > >>>>> Hi, > >> > > >>>>> > >> > > >>>>> I'm a new user of PETSc and I try to use it with MUMPS > >> > > >>>>> functionalities to compute a nullbasis. > >> > > >>>>> I wrote a code where I compute 4 times the same nullbasis. It > >> does > >> > > >>>>> work well when I run it with several procs but with only one > >> > > >>>>> processor I get an error on the 2nd iteration when KSPSetUp is > >> > > >>>>> called. Furthermore when it is run with a debugger ( > >> > > >>>>> --with-debugging=yes), it works fine with one or several > >> processors. > >> > > >>>>> Have you got any idea about why it doesn't work with one > >> processor > >> > > >>>>> and no debugger? > >> > > >>>>> > >> > > >>>>> Thanks. > >> > > >>>>> Constantin. > >> > > >>>>> > >> > > >>>>> PS: You can find the code and the files required to run it > >> enclosed. > >> > > >>>>> > >> > > >>>> > >> > > >> > >> > > >> > >> > > > >> > > > >> > > >> > > > > > From elbueler at alaska.edu Fri May 27 14:34:53 2016 From: elbueler at alaska.edu (Ed Bueler) Date: Fri, 27 May 2016 11:34:53 -0800 Subject: [petsc-users] use of VecGetArray() with dof>1 (block size > 1) Message-ID: Dear PETSc -- This is an "am I using it correctly" question. Probably the API has the current design because of something I am missing. First, a quote from the PETSc manual which I fully understand; it works great and gives literate code (to the extent possible...): """ The recommended approach for multi-component PDEs is to declare a struct representing the fields defined at each node of the grid, e.g. typedef struct { PetscScalar u,v,omega,temperature; } Node; and write residual evaluation using Node **f,**u; DMDAVecGetArray(DM da,Vec local,&u); DMDAVecGetArray(DM da,Vec global,&f); ... 
f[i][j].omega = ... ... DMDAVecRestoreArray(DM da,Vec local,&u); DMDAVecRestoreArray(DM da,Vec global,&f); """ Now the three questions: 1) The third argument to DMDAVec{Get,Restore}Array() is of type "void *". It makes the above convenient. But the third argument of the unstructured version Vec{Get,Restore}Array() is of type "PetscScalar **", which means that in an unstructured case, with the same Node struct, I would write "VecGetArray(DM da,Vec local,(PetscScalar **)&u);" to get the same functionality. Why is it this way? More specifically, why not have the argument to VecGetArray() be of type "void *"? 2) Given that the "recommended approach" above works just fine, why do DMDAVec{Get,Restore}ArrayDOF() exist? (I.e. is there something I am missing about C indexing?) 3) There are parts of the PETSc API that refer to "dof" and parts that refer to "block size". Is this a systematic distinction with an underlying reason? It seems "block size" is more generic, but also it seems that it could replace "dof" everywhere. Thanks for addressing silly questions. Ed -- Ed Bueler Dept of Math and Stat and Geophysical Institute University of Alaska Fairbanks Fairbanks, AK 99775-6660 301C Chapman and 410D Elvey 907 474-7693 and 907 474-7199 (fax 907 474-5394) -------------- next part -------------- An HTML attachment was scrubbed... URL: From jed at jedbrown.org Fri May 27 14:51:58 2016 From: jed at jedbrown.org (Jed Brown) Date: Fri, 27 May 2016 13:51:58 -0600 Subject: [petsc-users] use of VecGetArray() with dof>1 (block size > 1) In-Reply-To: References: Message-ID: <874m9juvmp.fsf@jedbrown.org> Ed Bueler writes: > 1) The third argument to DMDAVec{Get,Restore}Array() is of type "void *". > It makes the above convenient. But the third argument of the unstructured > version Vec{Get,Restore}Array() is of type "PetscScalar **", which means > that in an unstructured case, with the same Node struct, I would write > "VecGetArray(DM da,Vec local,(PetscScalar **)&u);" > to get the same functionality. Why is it this way? More specifically, why > not have the argument to VecGetArray() be of type "void *"? My philosophy is that unstructured indexing needs more information anyway, so you wouldn't typically use the generic VecGetArray() anyway. For example, we use DMPlexPointLocalRead() to access the arrays. > 2) Given that the "recommended approach" above works just fine, why > do DMDAVec{Get,Restore}ArrayDOF() exist? (I.e. is there something I am > missing about C indexing?) The struct approach is nice if the number of components is known at compile time, but if the number of variables is dynamic (e.g., species in a reacting flow) then it can't be used. The DOF variants are convenient for that. > 3) There are parts of the PETSc API that refer to "dof" and parts that > refer to "block size". Is this a systematic distinction with an underlying > reason? It seems "block size" is more generic, but also it seems that it > could replace "dof" everywhere. I think that's correct. -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 818 bytes Desc: not available URL: From dave.mayhem23 at gmail.com Fri May 27 15:06:23 2016 From: dave.mayhem23 at gmail.com (Dave May) Date: Fri, 27 May 2016 21:06:23 +0100 Subject: [petsc-users] use of VecGetArray() with dof>1 (block size > 1) In-Reply-To: References: Message-ID: On 27 May 2016 at 20:34, Ed Bueler wrote: > Dear PETSc -- > > This is an "am I using it correctly" question. 
Probably the API has the > current design because of something I am missing. > > First, a quote from the PETSc manual which I fully understand; it works > great and gives literate code (to the extent possible...): > > """ > The recommended approach for multi-component PDEs is to declare a struct > representing the fields defined > at each node of the grid, e.g. > > typedef struct { > PetscScalar u,v,omega,temperature; > } Node; > > and write residual evaluation using > > Node **f,**u; > DMDAVecGetArray(DM da,Vec local,&u); > DMDAVecGetArray(DM da,Vec global,&f); > ... > f[i][j].omega = ... > Note that here the indexing should be f[ *j* ][ *i* ].omega > ... > DMDAVecRestoreArray(DM da,Vec local,&u); > DMDAVecRestoreArray(DM da,Vec global,&f); > """ > > Now the three questions: > > 1) The third argument to DMDAVec{Get,Restore}Array() is of type "void > *". It makes the above convenient. But the third argument of the > unstructured version Vec{Get,Restore}Array() is of type "PetscScalar **", > which means that in an unstructured case, with the same Node struct, I > would write > "VecGetArray(DM da,Vec local,(PetscScalar **)&u);" > to get the same functionality. Why is it this way? More specifically, > why not have the argument to VecGetArray() be of type "void *"? > I would say the reason why the last arg to VecGetArray() is not void* is because it is intended to give you direct access to the pointer associated with the entries within the vector - these are also PetscScalar's > > 2) Given that the "recommended approach" > I don't believe it is ever recommended anywhere to do the following: VecGetArray(DM da,Vec local,(PetscScalar**)&u) Trying to trick the compile with such a cast is just begging for memory corruption to occur. above works just fine, why do DMDAVec{Get,Restore}ArrayDOF() exist? (I.e. > is there something I am missing about C indexing?) > As an additional point, DMDAVec{Get,Restore}ArrayDOF() return void *array so that the same API will work for 1D, 2D and 3D DMDA's which would require PetscScalar **data, PetscScalar ***data, PetscScalar ****data respectively. Cheers, Dave > 3) There are parts of the PETSc API that refer to "dof" and parts that > refer to "block size". Is this a systematic distinction with an underlying > reason? It seems "block size" is more generic, but also it seems that it > could replace "dof" everywhere. > > Thanks for addressing silly questions. > > Ed > > > -- > Ed Bueler > Dept of Math and Stat and Geophysical Institute > University of Alaska Fairbanks > Fairbanks, AK 99775-6660 > 301C Chapman and 410D Elvey > 907 474-7693 and 907 474-7199 (fax 907 474-5394) > -------------- next part -------------- An HTML attachment was scrubbed... URL: From elbueler at alaska.edu Fri May 27 15:24:19 2016 From: elbueler at alaska.edu (Ed Bueler) Date: Fri, 27 May 2016 12:24:19 -0800 Subject: [petsc-users] use of VecGetArray() with dof>1 (block size > 1) In-Reply-To: References: Message-ID: Dave -- Perhaps you should re-read my questions. My points were missed. But you motivated me to look slightly deeper; see below. > I would say the reason why the last arg to VecGetArray() is not void* is because > it is intended to give you direct access to the pointer associated with the entries > within the vector - these are also PetscScalar's Yes, that is fine. But DMDAVecGetArray() does it the other way, using "void *" instead. The question was, why does the PETSc API have two different approaches. 
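For concreteness, here is a minimal C sketch of the two access paths being compared, assuming a 2D DMDA created with dof = 4 to match the Node struct from the manual excerpt; the function name and the particular assignments are illustrative only:

#include <petscdmda.h>

typedef struct {
  PetscScalar u,v,omega,temperature;
} Node;

/* Minimal sketch: the same global vector accessed both ways.
   Assumes da is a 2D DMDA created with dof = 4, matching Node. */
PetscErrorCode AccessBothWays(DM da,Vec X)
{
  Node           **x;     /* DMDA-aware view, indexed x[j][i] */
  PetscScalar    *xraw;   /* flat view from VecGetArray()     */
  PetscInt       xs,ys,xm,ym,i,j,k,n;
  PetscErrorCode ierr;

  PetscFunctionBeginUser;
  /* Discretization-aware access: the third argument is void*, so no cast. */
  ierr = DMDAVecGetArray(da,X,&x);CHKERRQ(ierr);
  ierr = DMDAGetCorners(da,&xs,&ys,NULL,&xm,&ym,NULL);CHKERRQ(ierr);
  for (j=ys; j<ys+ym; j++) {
    for (i=xs; i<xs+xm; i++) x[j][i].omega = 0.0;   /* note [j][i], not [i][j] */
  }
  ierr = DMDAVecRestoreArray(da,X,&x);CHKERRQ(ierr);

  /* Plain access: VecGetArray() is typed PetscScalar**, so either work with
     the flat interlaced array directly (as below) or cast a Node* pointer
     as described in this thread: VecGetArray(X,(PetscScalar**)&nodes);    */
  ierr = VecGetArray(X,&xraw);CHKERRQ(ierr);
  ierr = VecGetLocalSize(X,&n);CHKERRQ(ierr);
  for (k=0; k<n/4; k++) xraw[4*k+2] = 0.0;          /* the same omega entries */
  ierr = VecRestoreArray(X,&xraw);CHKERRQ(ierr);
  PetscFunctionReturn(0);
}

The flat loop works because the DMDA stores the components interlaced, so omega is every fourth entry starting at offset 2.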
Jed's "more information" answer about unstructured grids is a bit vague, but it must be the one that motivates the difference. > I don't believe it is ever recommended anywhere to do the following: > VecGetArray(DM da,Vec local,(PetscScalar**)&u) I do not recommend it; I am annoyed that I need it! My point was that this awkward construct allowed the "recommended approach"---which is a quote from the PETSc manual and not my own recommendation---to proceed with VecGetArray(). Whereas this awkward construct is not needed in user code which uses DMDAVecGetArray(). > Trying to trick the compile with such a cast is just begging for memory > corruption to occur. Maybe so, but you do it all the time with PETSc structured grids. In fact, the third line of the implementation of DMDAVecGetArray() is " VecGetArray(vec,(PetscScalar**)array); " That is, in the "recommended approach", the cast occurs inside DMDAVecGetArray(). The same cast is simply exposed to the user if they want to get a struct-valued pointer from VecGetArray(). Ed On Fri, May 27, 2016 at 12:06 PM, Dave May wrote: > > > On 27 May 2016 at 20:34, Ed Bueler wrote: > >> Dear PETSc -- >> >> This is an "am I using it correctly" question. Probably the API has the >> current design because of something I am missing. >> >> First, a quote from the PETSc manual which I fully understand; it works >> great and gives literate code (to the extent possible...): >> >> """ >> The recommended approach for multi-component PDEs is to declare a struct >> representing the fields defined >> at each node of the grid, e.g. >> >> typedef struct { >> PetscScalar u,v,omega,temperature; >> } Node; >> >> and write residual evaluation using >> >> Node **f,**u; >> DMDAVecGetArray(DM da,Vec local,&u); >> DMDAVecGetArray(DM da,Vec global,&f); >> ... >> f[i][j].omega = ... >> > > Note that here the indexing should be > f[ *j* ][ *i* ].omega > > > >> ... >> DMDAVecRestoreArray(DM da,Vec local,&u); >> DMDAVecRestoreArray(DM da,Vec global,&f); >> """ >> >> Now the three questions: >> >> 1) The third argument to DMDAVec{Get,Restore}Array() is of type "void >> *". It makes the above convenient. But the third argument of the >> unstructured version Vec{Get,Restore}Array() is of type "PetscScalar **", >> which means that in an unstructured case, with the same Node struct, I >> would write >> "VecGetArray(DM da,Vec local,(PetscScalar **)&u);" >> to get the same functionality. Why is it this way? More specifically, >> why not have the argument to VecGetArray() be of type "void *"? >> > > I would say the reason why the last arg to VecGetArray() is not void* is > because it is intended to give you direct access to the pointer associated > with the entries within the vector - these are also PetscScalar's > > >> >> 2) Given that the "recommended approach" >> > > I don't believe it is ever recommended anywhere to do the following: > VecGetArray(DM da,Vec local,(PetscScalar**)&u) > > Trying to trick the compile with such a cast is just begging for memory > corruption to occur. > > above works just fine, why do DMDAVec{Get,Restore}ArrayDOF() exist? (I.e. >> is there something I am missing about C indexing?) >> > > As an additional point, DMDAVec{Get,Restore}ArrayDOF() return > > void *array > > so that the same API will work for 1D, 2D and 3D DMDA's which would > require PetscScalar **data, PetscScalar ***data, PetscScalar ****data > respectively. > > > Cheers, > Dave > > >> 3) There are parts of the PETSc API that refer to "dof" and parts that >> refer to "block size". 
Is this a systematic distinction with an underlying >> reason? It seems "block size" is more generic, but also it seems that it >> could replace "dof" everywhere. >> >> Thanks for addressing silly questions. >> >> Ed >> >> >> -- >> Ed Bueler >> Dept of Math and Stat and Geophysical Institute >> University of Alaska Fairbanks >> Fairbanks, AK 99775-6660 >> 301C Chapman and 410D Elvey >> 907 474-7693 and 907 474-7199 (fax 907 474-5394) >> > > -- Ed Bueler Dept of Math and Stat and Geophysical Institute University of Alaska Fairbanks Fairbanks, AK 99775-6660 301C Chapman and 410D Elvey 907 474-7693 and 907 474-7199 (fax 907 474-5394) -------------- next part -------------- An HTML attachment was scrubbed... URL: From dave.mayhem23 at gmail.com Fri May 27 17:47:56 2016 From: dave.mayhem23 at gmail.com (Dave May) Date: Fri, 27 May 2016 23:47:56 +0100 Subject: [petsc-users] use of VecGetArray() with dof>1 (block size > 1) In-Reply-To: References: Message-ID: On 27 May 2016 at 21:24, Ed Bueler wrote: > Dave -- > > Perhaps you should re-read my questions. > Actually - maybe we got our wires crossed from the beginning. I'm going back to the original email as I missed something. >> """ >> The recommended approach for multi-component PDEs is to declare a struct >> representing the fields defined >> at each node of the grid, e.g. >> >> typedef struct { >> PetscScalar u,v,omega,temperature; >> } Node; >> >> and write residual evaluation using >> >> Node **f,**u; >> DMDAVecGetArray(DM da,Vec local,&u); >> DMDAVecGetArray(DM da,Vec global,&f); >> ... >> >> 1) The third argument to DMDAVec{Get,Restore}Array() is of type "void *". >> It makes the above convenient. But the third argument of the unstructured >> version Vec{Get,Restore}Array() is of type "PetscScalar **", which means >> that in an unstructured case, with the same Node struct, I would write >> "VecGetArray(DM da,Vec local,(PetscScalar **)&u);" >> to get the same functionality. Why is it this way? More specifically, >> why not have the argument to VecGetArray() be of type "void *"? >> > > Is the quoted text "VecGetArray(DM da,Vec local,(PetscScalar **)&u);" really what you meant? Sorry I didn't spot this on the first read, but probably you meant something else as VecGetArray() only takes two args (Vec,PetscScalar**). This code Node **u; VecGetArray(Vec local,(PetscScalar**)&u); would not be correct, neither would Node ***u; VecGetArray(Vec local,(PetscScalar**)&u); if the DMDA was defined in 3d. > > I would say the reason why the last arg to VecGetArray() is not void* is > because it is intended to give you direct access to the pointer associated > with the entries within the vector - these are also PetscScalar's > > >> >> 2) Given that the "recommended approach" >> > > I don't believe it is ever recommended anywhere to do the following: > VecGetArray(DM da,Vec local,(PetscScalar**)&u) > > Trying to trick the compile with such a cast is just begging for memory > corruption to occur. > > above works just fine, why do DMDAVec{Get,Restore}ArrayDOF() exist? (I.e. >> is there something I am missing about C indexing?) >> > > As an additional point, DMDAVec{Get,Restore}ArrayDOF() return > > void *array > > so that the same API will work for 1D, 2D and 3D DMDA's which would > require PetscScalar **data, PetscScalar ***data, PetscScalar ****data > respectively. > > > Cheers, > Dave > > >> 3) There are parts of the PETSc API that refer to "dof" and parts that >> refer to "block size". Is this a systematic distinction with an underlying >> reason? 
It seems "block size" is more generic, but also it seems that it >> could replace "dof" everywhere. >> >> Thanks for addressing silly questions. >> >> Ed >> >> >> -- >> Ed Bueler >> Dept of Math and Stat and Geophysical Institute >> University of Alaska Fairbanks >> Fairbanks, AK 99775-6660 >> 301C Chapman and 410D Elvey >> 907 474-7693 and 907 474-7199 (fax 907 474-5394) >> > > -- Ed Bueler Dept of Math and Stat and Geophysical Institute University of Alaska Fairbanks Fairbanks, AK 99775-6660 301C Chapman and 410D Elvey 907 474-7693 and 907 474-7199 (fax 907 474-5394) -------------- next part -------------- An HTML attachment was scrubbed... URL: From elbueler at alaska.edu Fri May 27 18:18:16 2016 From: elbueler at alaska.edu (Ed Bueler) Date: Fri, 27 May 2016 15:18:16 -0800 Subject: [petsc-users] use of VecGetArray() with dof>1 (block size > 1) In-Reply-To: References: Message-ID: Dave -- You are right that I had a cut-paste-edit error in my VecGetArray() invocation in the email. Sorry about that. I should of cut and pasted from my functioning, and DMDA-free, code. In this context I meant Vec v; ... // create or load the Vec Node *u; VecGetArray(v,(PetscScalar**)&u); This *is* correct, and it works fine in the right context, without memory leaks. I am *not* using a DMDA in this case. At all. My original quote from the PETSc User Manual should be read *as is*, however. It *does* refer to DMDAVecGetArray(). And you are right that *it* has an error: should be [j][i] not [i][j]. And DMDAVecGetArray() does a cast like the above, but internally. Again, my original question was about distinquishing/explaining the different types of the returned pointers from DMDAVecGetArray() and VecGetArray(). Ed On Fri, May 27, 2016 at 2:47 PM, Dave May wrote: > > > On 27 May 2016 at 21:24, Ed Bueler wrote: > >> Dave -- >> >> Perhaps you should re-read my questions. >> > > Actually - maybe we got our wires crossed from the beginning. > I'm going back to the original email as I missed something. > > >>> """ >>> The recommended approach for multi-component PDEs is to declare a struct >>> representing the fields defined >>> at each node of the grid, e.g. >>> >>> typedef struct { >>> PetscScalar u,v,omega,temperature; >>> } Node; >>> >>> and write residual evaluation using >>> >>> Node **f,**u; >>> DMDAVecGetArray(DM da,Vec local,&u); >>> DMDAVecGetArray(DM da,Vec global,&f); >>> ... >>> >>> > > 1) The third argument to DMDAVec{Get,Restore}Array() is of type "void >>> *". It makes the above convenient. But the third argument of the >>> unstructured version Vec{Get,Restore}Array() is of type "PetscScalar **", >>> which means that in an unstructured case, with the same Node struct, I >>> would write >>> "VecGetArray(DM da,Vec local,(PetscScalar **)&u);" >>> to get the same functionality. Why is it this way? More specifically, >>> why not have the argument to VecGetArray() be of type "void *"? >>> >> >> > Is the quoted text > "VecGetArray(DM da,Vec local,(PetscScalar **)&u);" > really what you meant? > > Sorry I didn't spot this on the first read, but probably you meant > something else as VecGetArray() only takes two args (Vec,PetscScalar**). > > This code > > Node **u; > VecGetArray(Vec local,(PetscScalar**)&u); > > would not be correct, neither would > > Node ***u; > VecGetArray(Vec local,(PetscScalar**)&u); > if the DMDA was defined in 3d. 
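A short, self-contained sketch of the DMDA-free pattern Ed describes above; the block size of 4 and the helper name are assumptions made for illustration, and the cast is exactly the one under discussion:

#include <petscvec.h>

typedef struct {
  PetscScalar u,v,omega,temperature;
} Node;

/* DMDA-free sketch: a Vec whose local size is a multiple of 4, viewed as an
   array of Node via the explicit cast.  Names and sizes are illustrative. */
PetscErrorCode TouchNodes(Vec v)
{
  Node           *u;
  PetscInt       n,k;
  PetscErrorCode ierr;

  PetscFunctionBeginUser;
  ierr = VecGetLocalSize(v,&n);CHKERRQ(ierr);              /* assumes n % 4 == 0 */
  ierr = VecGetArray(v,(PetscScalar**)&u);CHKERRQ(ierr);   /* the cast in question */
  for (k=0; k<n/4; k++) u[k].omega = 0.0;
  ierr = VecRestoreArray(v,(PetscScalar**)&u);CHKERRQ(ierr);
  PetscFunctionReturn(0);
}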
> > > >> >> I would say the reason why the last arg to VecGetArray() is not void* is >> because it is intended to give you direct access to the pointer associated >> with the entries within the vector - these are also PetscScalar's >> >> >>> >>> 2) Given that the "recommended approach" >>> >> >> I don't believe it is ever recommended anywhere to do the following: >> VecGetArray(DM da,Vec local,(PetscScalar**)&u) >> >> Trying to trick the compile with such a cast is just begging for memory >> corruption to occur. >> >> above works just fine, why do DMDAVec{Get,Restore}ArrayDOF() exist? >>> (I.e. is there something I am missing about C indexing?) >>> >> >> As an additional point, DMDAVec{Get,Restore}ArrayDOF() return >> >> void *array >> >> so that the same API will work for 1D, 2D and 3D DMDA's which would >> require PetscScalar **data, PetscScalar ***data, PetscScalar ****data >> respectively. >> >> >> Cheers, >> Dave >> >> >>> 3) There are parts of the PETSc API that refer to "dof" and parts that >>> refer to "block size". Is this a systematic distinction with an underlying >>> reason? It seems "block size" is more generic, but also it seems that it >>> could replace "dof" everywhere. >>> >>> Thanks for addressing silly questions. >>> >>> Ed >>> >>> >>> -- >>> Ed Bueler >>> Dept of Math and Stat and Geophysical Institute >>> University of Alaska Fairbanks >>> Fairbanks, AK 99775-6660 >>> 301C Chapman and 410D Elvey >>> 907 474-7693 and 907 474-7199 (fax 907 474-5394) >>> >> >> > > > -- > Ed Bueler > Dept of Math and Stat and Geophysical Institute > University of Alaska Fairbanks > Fairbanks, AK 99775-6660 > 301C Chapman and 410D Elvey > 907 474-7693 and 907 474-7199 (fax 907 474-5394) > > -- Ed Bueler Dept of Math and Stat and Geophysical Institute University of Alaska Fairbanks Fairbanks, AK 99775-6660 301C Chapman and 410D Elvey 907 474-7693 and 907 474-7199 (fax 907 474-5394) -------------- next part -------------- An HTML attachment was scrubbed... URL: From gomer at stanford.edu Fri May 27 18:34:21 2016 From: gomer at stanford.edu (Paul Urbanczyk) Date: Fri, 27 May 2016 16:34:21 -0700 Subject: [petsc-users] Error with MatView on multiple processors Message-ID: Hello, I'm having trouble with the MatView function drawing the matrix structure(s) when I execute my code on multiple processors. When I run on a single processor, the code runs fine, and the graphics window displays cleanly. When I run with multiple processors, I get error messages (see below). The matrices are constructed with DMCreateMatrix(da, &A_matrix). I then set the values with MatSetValuesStencil(A_matrix,1,&row,2,col_A,value_A,INSERT_VALUES). Finally, I call MatAssemblyBegin(A_matrix,MAT_FINAL_ASSEMBLY) and MatAssemblyEnd(A_matrix,MAT_FINAL_ASSEMBLY). I also test that the matrices are assembled with MatAssembled(A_matrix, &is_assembled_bool), and it appears they are successfully assembled. Any help/advice is greatly appreciated. Thanks in advance! -Paul Urbanczyk [0]PETSC ERROR: --------------------- Error Message -------------------------------------------------------------- [0]PETSC ERROR: Invalid argument [0]PETSC ERROR: Wrong type of object: Parameter # 1 [0]PETSC ERROR: See http://www.mcs.anl.gov/petsc/documentation/faq.html for trouble shooting. 
[0]PETSC ERROR: Petsc Release Version 3.7.1, unknown [0]PETSC ERROR: ./urbanSCFD on a arch-linux2-c-debug named prometheus by gomer Fri May 27 16:29:01 2016 [0]PETSC ERROR: Configure options --with-cc=mpicc --with-cxx=mpicxx --with-fc=mpif90 [0]PETSC ERROR: #1 AOApplicationToPetsc() line 267 in /home/gomer/local/petsc/src/vec/is/ao/interface/ao.c [0]PETSC ERROR: #2 MatView_MPI_DA() line 557 in /home/gomer/local/petsc/src/dm/impls/da/fdda.c [0]PETSC ERROR: #3 MatView() line 901 in /home/gomer/local/petsc/src/mat/interface/matrix.c [1]PETSC ERROR: --------------------- Error Message -------------------------------------------------------------- [1]PETSC ERROR: Invalid argument [1]PETSC ERROR: Wrong type of object: Parameter # 1 [1]PETSC ERROR: See http://www.mcs.anl.gov/petsc/documentation/faq.html for trouble shooting. [1]PETSC ERROR: Petsc Release Version 3.7.1, unknown [1]PETSC ERROR: ./urbanSCFD on a arch-linux2-c-debug named prometheus by gomer Fri May 27 16:29:01 2016 [1]PETSC ERROR: Configure options --with-cc=mpicc --with-cxx=mpicxx --with-fc=mpif90 [1]PETSC ERROR: #1 AOApplicationToPetsc() line 267 in /home/gomer/local/petsc/src/vec/is/ao/interface/ao.c [1]PETSC ERROR: #2 MatView_MPI_DA() line 557 in /home/gomer/local/petsc/src/dm/impls/da/fdda.c [1]PETSC ERROR: #3 MatView() line 901 in /home/gomer/local/petsc/src/mat/interface/matrix.c [0]PETSC ERROR: --------------------- Error Message -------------------------------------------------------------- [0]PETSC ERROR: Invalid argument [1]PETSC ERROR: --------------------- Error Message -------------------------------------------------------------- [1]PETSC ERROR: Invalid argument [1]PETSC ERROR: Wrong type of object: Parameter # 1 [1]PETSC ERROR: See http://www.mcs.anl.gov/petsc/documentation/faq.html for trouble shooting. [1]PETSC ERROR: Petsc Release Version 3.7.1, unknown [1]PETSC ERROR: ./urbanSCFD on a arch-linux2-c-debug named prometheus by gomer Fri May 27 16:29:01 2016 [1]PETSC ERROR: Configure options --with-cc=mpicc --with-cxx=mpicxx --with-fc=mpif90 [1]PETSC ERROR: #4 AOApplicationToPetsc() line 267 in /home/gomer/local/petsc/src/vec/is/ao/interface/ao.c [1]PETSC ERROR: #5 MatView_MPI_DA() line 557 in /home/gomer/local/petsc/src/dm/impls/da/fdda.c [1]PETSC ERROR: #6 MatView() line 901 in /home/gomer/local/petsc/src/mat/interface/matrix.c [0]PETSC ERROR: Wrong type of object: Parameter # 1 [0]PETSC ERROR: See http://www.mcs.anl.gov/petsc/documentation/faq.html for trouble shooting. [0]PETSC ERROR: Petsc Release Version 3.7.1, unknown [0]PETSC ERROR: ./urbanSCFD on a arch-linux2-c-debug named prometheus by gomer Fri May 27 16:29:01 2016 [0]PETSC ERROR: Configure options --with-cc=mpicc --with-cxx=mpicxx --with-fc=mpif90 [0]PETSC ERROR: #4 AOApplicationToPetsc() line 267 in /home/gomer/local/petsc/src/vec/is/ao/interface/ao.c [0]PETSC ERROR: #5 MatView_MPI_DA() line 557 in /home/gomer/local/petsc/src/dm/impls/da/fdda.c [0]PETSC ERROR: #6 MatView() line 901 in /home/gomer/local/petsc/src/mat/interface/matrix.c From dave.mayhem23 at gmail.com Fri May 27 18:44:46 2016 From: dave.mayhem23 at gmail.com (Dave May) Date: Sat, 28 May 2016 00:44:46 +0100 Subject: [petsc-users] use of VecGetArray() with dof>1 (block size > 1) In-Reply-To: References: Message-ID: Hi Ed, On 28 May 2016 at 00:18, Ed Bueler wrote: > Dave -- > > You are right that I had a cut-paste-edit error in my VecGetArray() > invocation in the email. Sorry about that. I should of cut and pasted > from my functioning, and DMDA-free, code. 
> > In this context I meant > > Vec v; > ... // create or load the Vec > Node *u; > VecGetArray(v,(PetscScalar**)&u); > > This *is* correct, and it works fine in the right context, without memory > leaks. I am *not* using a DMDA in this case. At all. > Ah okay. > > My original quote from the PETSc User Manual should be read *as is*, > however. It *does* refer to DMDAVecGetArray(). And you are right that > *it* has an error: should be [j][i] not [i][j]. > Oh crap, that indexing error is in the manual (multiple times)! Yikes. > And DMDAVecGetArray() does a cast like the above, but internally. > > Again, my original question was about distinquishing/explaining the > different types of the returned pointers from DMDAVecGetArray() and > VecGetArray(). > Okay. VecGetArray() knows nothing about any DM. It only provides access to the underlying entries in the vector which are PetscScalar's. So the last arg is naturally PetscScalar**. DMDAVecGet{Array,ArrayDOF}() exploits the DMDA ijk structure. DMDAVecGet{Array,ArrayDOF}() exist solely for the convenience of the user who wants to write an FD code, and wishes to express the discrete operators with multi-dimensional arrays, e.g. which can be indexed like x[j+1][i-1]. DMDAVecGetArray() maps entries from a Vec to a users data type, which can be indexed with the dimension of the DMDA (ijk). Since the dimension of the DMDA can be 1,2,3 and the blocksize (e.g. number of members in your user struct) is defined by the user - the last arg must be void*. DMDAVecGetArrayDOF() maps entries from a Vec to multi-dimensional array indexed by the dimension of the DMDA AND the number of fields (defined via the block size). Since the dimension of the DMDA can be 1,2 or 3, again, the last arg must be void* unless a separate API is introduced for 1D, 2D and 3D. Why do both exist? Well, Jed provided one reasons - the block size may be a runtime choice and thus the definition of the struct cannot be changed at runtime. Another reason could be the user just doesn't think it is useful to attach names (i.e. though members in a struct) to their DOFs - hence they want DMDAVecGetArrayDOF(). This could arise if you used the DMDA to describe a DG discretization. The DOFs would then just represent coefficients associated with your basis function. Maybe the user just prefers to write out loops. I see DMDAVec{Get,Restore}XXX as tools to help the user. The user can pick the API they prefer. I use the DMDA, but I always use plain old VecGetArray() and obtain the result in a variable declared as PetscScalar*. I don't bother with mapping the entries into a struct. I see no advantage to this in terms of code clarity. I prefer not to use DMDAVecGetArray() and DMDAVecGetArrayDOF() as these methods allocate additional memory. In my opinion, the line you refer to from the manual regarding multi-component PDEs should only applied in the context of usage with the DMDA. Others may disagree. I hope I finally helped answered your question. Cheers, Dave > > Ed > > > On Fri, May 27, 2016 at 2:47 PM, Dave May wrote: > >> >> >> On 27 May 2016 at 21:24, Ed Bueler wrote: >> >>> Dave -- >>> >>> Perhaps you should re-read my questions. >>> >> >> Actually - maybe we got our wires crossed from the beginning. >> I'm going back to the original email as I missed something. >> >> >>>> """ >>>> The recommended approach for multi-component PDEs is to declare a >>>> struct representing the fields defined >>>> at each node of the grid, e.g. 
>>>> >>>> typedef struct { >>>> PetscScalar u,v,omega,temperature; >>>> } Node; >>>> >>>> and write residual evaluation using >>>> >>>> Node **f,**u; >>>> DMDAVecGetArray(DM da,Vec local,&u); >>>> DMDAVecGetArray(DM da,Vec global,&f); >>>> ... >>>> >>>> >> >> 1) The third argument to DMDAVec{Get,Restore}Array() is of type "void >>>> *". It makes the above convenient. But the third argument of the >>>> unstructured version Vec{Get,Restore}Array() is of type "PetscScalar **", >>>> which means that in an unstructured case, with the same Node struct, I >>>> would write >>>> "VecGetArray(DM da,Vec local,(PetscScalar **)&u);" >>>> to get the same functionality. Why is it this way? More specifically, >>>> why not have the argument to VecGetArray() be of type "void *"? >>>> >>> >>> >> Is the quoted text >> "VecGetArray(DM da,Vec local,(PetscScalar **)&u);" >> really what you meant? >> >> Sorry I didn't spot this on the first read, but probably you meant >> something else as VecGetArray() only takes two args (Vec,PetscScalar**). >> >> This code >> >> Node **u; >> VecGetArray(Vec local,(PetscScalar**)&u); >> >> would not be correct, neither would >> >> Node ***u; >> VecGetArray(Vec local,(PetscScalar**)&u); >> if the DMDA was defined in 3d. >> >> >> >>> >>> I would say the reason why the last arg to VecGetArray() is not void* is >>> because it is intended to give you direct access to the pointer associated >>> with the entries within the vector - these are also PetscScalar's >>> >>> >>>> >>>> 2) Given that the "recommended approach" >>>> >>> >>> I don't believe it is ever recommended anywhere to do the following: >>> VecGetArray(DM da,Vec local,(PetscScalar**)&u) >>> >>> Trying to trick the compile with such a cast is just begging for memory >>> corruption to occur. >>> >>> above works just fine, why do DMDAVec{Get,Restore}ArrayDOF() exist? >>>> (I.e. is there something I am missing about C indexing?) >>>> >>> >>> As an additional point, DMDAVec{Get,Restore}ArrayDOF() return >>> >>> void *array >>> >>> so that the same API will work for 1D, 2D and 3D DMDA's which would >>> require PetscScalar **data, PetscScalar ***data, PetscScalar ****data >>> respectively. >>> >>> >>> Cheers, >>> Dave >>> >>> >>>> 3) There are parts of the PETSc API that refer to "dof" and parts that >>>> refer to "block size". Is this a systematic distinction with an underlying >>>> reason? It seems "block size" is more generic, but also it seems that it >>>> could replace "dof" everywhere. >>>> >>>> Thanks for addressing silly questions. >>>> >>>> Ed >>>> >>>> >>>> -- >>>> Ed Bueler >>>> Dept of Math and Stat and Geophysical Institute >>>> University of Alaska Fairbanks >>>> Fairbanks, AK 99775-6660 >>>> 301C Chapman and 410D Elvey >>>> 907 474-7693 and 907 474-7199 (fax 907 474-5394) >>>> >>> >>> >> >> >> -- >> Ed Bueler >> Dept of Math and Stat and Geophysical Institute >> University of Alaska Fairbanks >> Fairbanks, AK 99775-6660 >> 301C Chapman and 410D Elvey >> 907 474-7693 and 907 474-7199 (fax 907 474-5394) >> >> > > > -- > Ed Bueler > Dept of Math and Stat and Geophysical Institute > University of Alaska Fairbanks > Fairbanks, AK 99775-6660 > 301C Chapman and 410D Elvey > 907 474-7693 and 907 474-7199 (fax 907 474-5394) > -------------- next part -------------- An HTML attachment was scrubbed... 
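A minimal sketch of the DOF variant described above, for the case where the number of components is only known at run time; the 2D assumption and the function name are illustrative:

#include <petscdmda.h>

/* With a run-time number of components there is no compile-time struct, so
   the components appear as an extra array index.  For a 2D DMDA the array
   is PetscScalar***, indexed x[j][i][c]. */
PetscErrorCode ZeroAllComponents(DM da,Vec X)
{
  PetscScalar    ***x;
  PetscInt       xs,ys,xm,ym,dof,i,j,c;
  PetscErrorCode ierr;

  PetscFunctionBeginUser;
  ierr = DMDAGetInfo(da,NULL,NULL,NULL,NULL,NULL,NULL,NULL,&dof,NULL,NULL,NULL,NULL,NULL);CHKERRQ(ierr);
  ierr = DMDAGetCorners(da,&xs,&ys,NULL,&xm,&ym,NULL);CHKERRQ(ierr);
  ierr = DMDAVecGetArrayDOF(da,X,&x);CHKERRQ(ierr);
  for (j=ys; j<ys+ym; j++) {
    for (i=xs; i<xs+xm; i++) {
      for (c=0; c<dof; c++) x[j][i][c] = 0.0;
    }
  }
  ierr = DMDAVecRestoreArrayDOF(da,X,&x);CHKERRQ(ierr);
  PetscFunctionReturn(0);
}

In 3D the same calls yield a PetscScalar**** indexed x[k][j][i][c], which is why the last argument is typed void*.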
URL: From knepley at gmail.com Fri May 27 19:44:57 2016 From: knepley at gmail.com (Matthew Knepley) Date: Fri, 27 May 2016 19:44:57 -0500 Subject: [petsc-users] Error with MatView on multiple processors In-Reply-To: References: Message-ID: On Fri, May 27, 2016 at 6:34 PM, Paul Urbanczyk wrote: > Hello, > > I'm having trouble with the MatView function drawing the matrix > structure(s) when I execute my code on multiple processors. > > When I run on a single processor, the code runs fine, and the graphics > window displays cleanly. > Lets start with an example. I do this cd $PETSC_DIR/src/snes/examples/tutorials make ex19 Make sure it runs ./ex19 -snes_monitor Make sure it runs in parallel (maybe you need $PETSC_DIR/$PETSC_ARCH/bin/mpiexec) mpiexec -n 2 ./ex19 -snes_monitor Make sure it can draw mpiexec -n 2 ./ex19 -snes_monitor -mat_view draw -draw_pause 1 This runs fine for me. Can you try it? Thanks, Matt When I run with multiple processors, I get error messages (see below). > > The matrices are constructed with DMCreateMatrix(da, &A_matrix). > > I then set the values with > MatSetValuesStencil(A_matrix,1,&row,2,col_A,value_A,INSERT_VALUES). > > Finally, I call MatAssemblyBegin(A_matrix,MAT_FINAL_ASSEMBLY) and > MatAssemblyEnd(A_matrix,MAT_FINAL_ASSEMBLY). > > I also test that the matrices are assembled with MatAssembled(A_matrix, > &is_assembled_bool), and it appears they are successfully assembled. > > Any help/advice is greatly appreciated. > > Thanks in advance! > > -Paul Urbanczyk > > [0]PETSC ERROR: --------------------- Error Message > -------------------------------------------------------------- > [0]PETSC ERROR: Invalid argument > [0]PETSC ERROR: Wrong type of object: Parameter # 1 > [0]PETSC ERROR: See http://www.mcs.anl.gov/petsc/documentation/faq.html > for trouble shooting. > [0]PETSC ERROR: Petsc Release Version 3.7.1, unknown > [0]PETSC ERROR: ./urbanSCFD on a arch-linux2-c-debug named prometheus by > gomer Fri May 27 16:29:01 2016 > [0]PETSC ERROR: Configure options --with-cc=mpicc --with-cxx=mpicxx > --with-fc=mpif90 > [0]PETSC ERROR: #1 AOApplicationToPetsc() line 267 in > /home/gomer/local/petsc/src/vec/is/ao/interface/ao.c > [0]PETSC ERROR: #2 MatView_MPI_DA() line 557 in > /home/gomer/local/petsc/src/dm/impls/da/fdda.c > [0]PETSC ERROR: #3 MatView() line 901 in > /home/gomer/local/petsc/src/mat/interface/matrix.c > [1]PETSC ERROR: --------------------- Error Message > -------------------------------------------------------------- > [1]PETSC ERROR: Invalid argument > [1]PETSC ERROR: Wrong type of object: Parameter # 1 > [1]PETSC ERROR: See http://www.mcs.anl.gov/petsc/documentation/faq.html > for trouble shooting. 
> [1]PETSC ERROR: Petsc Release Version 3.7.1, unknown > [1]PETSC ERROR: ./urbanSCFD on a arch-linux2-c-debug named prometheus by > gomer Fri May 27 16:29:01 2016 > [1]PETSC ERROR: Configure options --with-cc=mpicc --with-cxx=mpicxx > --with-fc=mpif90 > [1]PETSC ERROR: #1 AOApplicationToPetsc() line 267 in > /home/gomer/local/petsc/src/vec/is/ao/interface/ao.c > [1]PETSC ERROR: #2 MatView_MPI_DA() line 557 in > /home/gomer/local/petsc/src/dm/impls/da/fdda.c > [1]PETSC ERROR: #3 MatView() line 901 in > /home/gomer/local/petsc/src/mat/interface/matrix.c > [0]PETSC ERROR: --------------------- Error Message > -------------------------------------------------------------- > [0]PETSC ERROR: Invalid argument > [1]PETSC ERROR: --------------------- Error Message > -------------------------------------------------------------- > [1]PETSC ERROR: Invalid argument > [1]PETSC ERROR: Wrong type of object: Parameter # 1 > [1]PETSC ERROR: See http://www.mcs.anl.gov/petsc/documentation/faq.html > for trouble shooting. > [1]PETSC ERROR: Petsc Release Version 3.7.1, unknown > [1]PETSC ERROR: ./urbanSCFD on a arch-linux2-c-debug named prometheus by > gomer Fri May 27 16:29:01 2016 > [1]PETSC ERROR: Configure options --with-cc=mpicc --with-cxx=mpicxx > --with-fc=mpif90 > [1]PETSC ERROR: #4 AOApplicationToPetsc() line 267 in > /home/gomer/local/petsc/src/vec/is/ao/interface/ao.c > [1]PETSC ERROR: #5 MatView_MPI_DA() line 557 in > /home/gomer/local/petsc/src/dm/impls/da/fdda.c > [1]PETSC ERROR: #6 MatView() line 901 in > /home/gomer/local/petsc/src/mat/interface/matrix.c > [0]PETSC ERROR: Wrong type of object: Parameter # 1 > [0]PETSC ERROR: See http://www.mcs.anl.gov/petsc/documentation/faq.html > for trouble shooting. > [0]PETSC ERROR: Petsc Release Version 3.7.1, unknown > [0]PETSC ERROR: ./urbanSCFD on a arch-linux2-c-debug named prometheus by > gomer Fri May 27 16:29:01 2016 > [0]PETSC ERROR: Configure options --with-cc=mpicc --with-cxx=mpicxx > --with-fc=mpif90 > [0]PETSC ERROR: #4 AOApplicationToPetsc() line 267 in > /home/gomer/local/petsc/src/vec/is/ao/interface/ao.c > [0]PETSC ERROR: #5 MatView_MPI_DA() line 557 in > /home/gomer/local/petsc/src/dm/impls/da/fdda.c > [0]PETSC ERROR: #6 MatView() line 901 in > /home/gomer/local/petsc/src/mat/interface/matrix.c > > > -- What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. -- Norbert Wiener -------------- next part -------------- An HTML attachment was scrubbed... URL: From gomer at stanford.edu Fri May 27 20:38:58 2016 From: gomer at stanford.edu (Paul Urbanczyk) Date: Fri, 27 May 2016 18:38:58 -0700 Subject: [petsc-users] Error with MatView on multiple processors In-Reply-To: References: Message-ID: <95704256-96f8-e5e3-1d60-1af39c209fea@stanford.edu> On 05/27/2016 05:44 PM, Matthew Knepley wrote: > On Fri, May 27, 2016 at 6:34 PM, Paul Urbanczyk > wrote: > > Hello, > > I'm having trouble with the MatView function drawing the matrix > structure(s) when I execute my code on multiple processors. > > When I run on a single processor, the code runs fine, and the > graphics window displays cleanly. > > > Lets start with an example. 
I do this > > cd $PETSC_DIR/src/snes/examples/tutorials > make ex19 > > Make sure it runs > > ./ex19 -snes_monitor > > Make sure it runs in parallel (maybe you need > $PETSC_DIR/$PETSC_ARCH/bin/mpiexec) > > mpiexec -n 2 ./ex19 -snes_monitor > > Make sure it can draw > > mpiexec -n 2 ./ex19 -snes_monitor -mat_view draw -draw_pause 1 > > This runs fine for me. Can you try it? > > Thanks, > > Matt Hello Matt, Yes, this example seems to run just fine. How should I proceed? -Paul > > When I run with multiple processors, I get error messages (see below). > > The matrices are constructed with DMCreateMatrix(da, &A_matrix). > > I then set the values with > MatSetValuesStencil(A_matrix,1,&row,2,col_A,value_A,INSERT_VALUES). > > Finally, I call MatAssemblyBegin(A_matrix,MAT_FINAL_ASSEMBLY) and > MatAssemblyEnd(A_matrix,MAT_FINAL_ASSEMBLY). > > I also test that the matrices are assembled with > MatAssembled(A_matrix, &is_assembled_bool), and it appears they > are successfully assembled. > > Any help/advice is greatly appreciated. > > Thanks in advance! > > -Paul Urbanczyk > > [0]PETSC ERROR: --------------------- Error Message > -------------------------------------------------------------- > [0]PETSC ERROR: Invalid argument > [0]PETSC ERROR: Wrong type of object: Parameter # 1 > [0]PETSC ERROR: See > http://www.mcs.anl.gov/petsc/documentation/faq.html for trouble > shooting. > [0]PETSC ERROR: Petsc Release Version 3.7.1, unknown > [0]PETSC ERROR: ./urbanSCFD on a arch-linux2-c-debug named > prometheus by gomer Fri May 27 16:29:01 2016 > [0]PETSC ERROR: Configure options --with-cc=mpicc > --with-cxx=mpicxx --with-fc=mpif90 > [0]PETSC ERROR: #1 AOApplicationToPetsc() line 267 in > /home/gomer/local/petsc/src/vec/is/ao/interface/ao.c > [0]PETSC ERROR: #2 MatView_MPI_DA() line 557 in > /home/gomer/local/petsc/src/dm/impls/da/fdda.c > [0]PETSC ERROR: #3 MatView() line 901 in > /home/gomer/local/petsc/src/mat/interface/matrix.c > [1]PETSC ERROR: --------------------- Error Message > -------------------------------------------------------------- > [1]PETSC ERROR: Invalid argument > [1]PETSC ERROR: Wrong type of object: Parameter # 1 > [1]PETSC ERROR: See > http://www.mcs.anl.gov/petsc/documentation/faq.html for trouble > shooting. > [1]PETSC ERROR: Petsc Release Version 3.7.1, unknown > [1]PETSC ERROR: ./urbanSCFD on a arch-linux2-c-debug named > prometheus by gomer Fri May 27 16:29:01 2016 > [1]PETSC ERROR: Configure options --with-cc=mpicc > --with-cxx=mpicxx --with-fc=mpif90 > [1]PETSC ERROR: #1 AOApplicationToPetsc() line 267 in > /home/gomer/local/petsc/src/vec/is/ao/interface/ao.c > [1]PETSC ERROR: #2 MatView_MPI_DA() line 557 in > /home/gomer/local/petsc/src/dm/impls/da/fdda.c > [1]PETSC ERROR: #3 MatView() line 901 in > /home/gomer/local/petsc/src/mat/interface/matrix.c > [0]PETSC ERROR: --------------------- Error Message > -------------------------------------------------------------- > [0]PETSC ERROR: Invalid argument > [1]PETSC ERROR: --------------------- Error Message > -------------------------------------------------------------- > [1]PETSC ERROR: Invalid argument > [1]PETSC ERROR: Wrong type of object: Parameter # 1 > [1]PETSC ERROR: See > http://www.mcs.anl.gov/petsc/documentation/faq.html for trouble > shooting. 
> [1]PETSC ERROR: Petsc Release Version 3.7.1, unknown > [1]PETSC ERROR: ./urbanSCFD on a arch-linux2-c-debug named > prometheus by gomer Fri May 27 16:29:01 2016 > [1]PETSC ERROR: Configure options --with-cc=mpicc > --with-cxx=mpicxx --with-fc=mpif90 > [1]PETSC ERROR: #4 AOApplicationToPetsc() line 267 in > /home/gomer/local/petsc/src/vec/is/ao/interface/ao.c > [1]PETSC ERROR: #5 MatView_MPI_DA() line 557 in > /home/gomer/local/petsc/src/dm/impls/da/fdda.c > [1]PETSC ERROR: #6 MatView() line 901 in > /home/gomer/local/petsc/src/mat/interface/matrix.c > [0]PETSC ERROR: Wrong type of object: Parameter # 1 > [0]PETSC ERROR: See > http://www.mcs.anl.gov/petsc/documentation/faq.html for trouble > shooting. > [0]PETSC ERROR: Petsc Release Version 3.7.1, unknown > [0]PETSC ERROR: ./urbanSCFD on a arch-linux2-c-debug named > prometheus by gomer Fri May 27 16:29:01 2016 > [0]PETSC ERROR: Configure options --with-cc=mpicc > --with-cxx=mpicxx --with-fc=mpif90 > [0]PETSC ERROR: #4 AOApplicationToPetsc() line 267 in > /home/gomer/local/petsc/src/vec/is/ao/interface/ao.c > [0]PETSC ERROR: #5 MatView_MPI_DA() line 557 in > /home/gomer/local/petsc/src/dm/impls/da/fdda.c > [0]PETSC ERROR: #6 MatView() line 901 in > /home/gomer/local/petsc/src/mat/interface/matrix.c > > > > > > -- > What most experimenters take for granted before they begin their > experiments is infinitely more interesting than any results to which > their experiments lead. > -- Norbert Wiener -------------- next part -------------- An HTML attachment was scrubbed... URL: From knepley at gmail.com Fri May 27 20:43:49 2016 From: knepley at gmail.com (Matthew Knepley) Date: Fri, 27 May 2016 20:43:49 -0500 Subject: [petsc-users] Error with MatView on multiple processors In-Reply-To: <95704256-96f8-e5e3-1d60-1af39c209fea@stanford.edu> References: <95704256-96f8-e5e3-1d60-1af39c209fea@stanford.edu> Message-ID: On Fri, May 27, 2016 at 8:38 PM, Paul Urbanczyk wrote: > On 05/27/2016 05:44 PM, Matthew Knepley wrote: > > On Fri, May 27, 2016 at 6:34 PM, Paul Urbanczyk > wrote: > >> Hello, >> >> I'm having trouble with the MatView function drawing the matrix >> structure(s) when I execute my code on multiple processors. >> >> When I run on a single processor, the code runs fine, and the graphics >> window displays cleanly. >> > > Lets start with an example. I do this > > cd $PETSC_DIR/src/snes/examples/tutorials > make ex19 > > Make sure it runs > > ./ex19 -snes_monitor > > Make sure it runs in parallel (maybe you need > $PETSC_DIR/$PETSC_ARCH/bin/mpiexec) > > mpiexec -n 2 ./ex19 -snes_monitor > > Make sure it can draw > > mpiexec -n 2 ./ex19 -snes_monitor -mat_view draw -draw_pause 1 > > This runs fine for me. Can you try it? > > Thanks, > > Matt > > Hello Matt, > > Yes, this example seems to run just fine. How should I proceed? > I am not sure what you are doing in your code. Can you try and change one of these examples to do something like what you do? SNEX ex19 seems like it does things mostly the way you describe. Matt > > -Paul > > > When I run with multiple processors, I get error messages (see below). >> >> The matrices are constructed with DMCreateMatrix(da, &A_matrix). >> >> I then set the values with >> MatSetValuesStencil(A_matrix,1,&row,2,col_A,value_A,INSERT_VALUES). >> >> Finally, I call MatAssemblyBegin(A_matrix,MAT_FINAL_ASSEMBLY) and >> MatAssemblyEnd(A_matrix,MAT_FINAL_ASSEMBLY). >> >> I also test that the matrices are assembled with MatAssembled(A_matrix, >> &is_assembled_bool), and it appears they are successfully assembled. 
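To make the sequence quoted above easier to compare against the ex19 pattern, here is a minimal C sketch of it; the stencil pattern, the values, and the assumption of a 2D DMDA with dof = 1, stencil width 1, and periodic boundaries in i are placeholders, not the actual code being discussed:

#include <petscdmda.h>

/* Sketch of the described sequence: DMCreateMatrix -> MatSetValuesStencil
   with one row and two columns -> assembly -> MatView with a draw viewer.
   Assumes da is a 2D DMDA, dof = 1, stencil width 1, periodic in i so that
   column i+1 always maps to a legal global column. */
PetscErrorCode BuildAndViewMatrix(DM da)
{
  Mat            A;
  MatStencil     row,col[2];
  PetscScalar    v[2];
  PetscInt       xs,ys,xm,ym,i,j;
  PetscErrorCode ierr;

  PetscFunctionBeginUser;
  ierr = DMCreateMatrix(da,&A);CHKERRQ(ierr);
  ierr = DMDAGetCorners(da,&xs,&ys,NULL,&xm,&ym,NULL);CHKERRQ(ierr);
  for (j=ys; j<ys+ym; j++) {
    for (i=xs; i<xs+xm; i++) {
      row.i = i; row.j = j; row.c = 0;
      col[0].i = i;   col[0].j = j; col[0].c = 0; v[0] =  2.0;  /* diagonal      */
      col[1].i = i+1; col[1].j = j; col[1].c = 0; v[1] = -1.0;  /* east neighbor */
      ierr = MatSetValuesStencil(A,1,&row,2,col,v,INSERT_VALUES);CHKERRQ(ierr);
    }
  }
  ierr = MatAssemblyBegin(A,MAT_FINAL_ASSEMBLY);CHKERRQ(ierr);
  ierr = MatAssemblyEnd(A,MAT_FINAL_ASSEMBLY);CHKERRQ(ierr);
  ierr = MatView(A,PETSC_VIEWER_DRAW_WORLD);CHKERRQ(ierr);      /* like -mat_view draw */
  ierr = MatDestroy(&A);CHKERRQ(ierr);
  PetscFunctionReturn(0);
}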
>> >> Any help/advice is greatly appreciated. >> >> Thanks in advance! >> >> -Paul Urbanczyk >> >> [0]PETSC ERROR: --------------------- Error Message >> -------------------------------------------------------------- >> [0]PETSC ERROR: Invalid argument >> [0]PETSC ERROR: Wrong type of object: Parameter # 1 >> [0]PETSC ERROR: See http://www.mcs.anl.gov/petsc/documentation/faq.html >> for trouble shooting. >> [0]PETSC ERROR: Petsc Release Version 3.7.1, unknown >> [0]PETSC ERROR: ./urbanSCFD on a arch-linux2-c-debug named prometheus by >> gomer Fri May 27 16:29:01 2016 >> [0]PETSC ERROR: Configure options --with-cc=mpicc --with-cxx=mpicxx >> --with-fc=mpif90 >> [0]PETSC ERROR: #1 AOApplicationToPetsc() line 267 in >> /home/gomer/local/petsc/src/vec/is/ao/interface/ao.c >> [0]PETSC ERROR: #2 MatView_MPI_DA() line 557 in >> /home/gomer/local/petsc/src/dm/impls/da/fdda.c >> [0]PETSC ERROR: #3 MatView() line 901 in >> /home/gomer/local/petsc/src/mat/interface/matrix.c >> [1]PETSC ERROR: --------------------- Error Message >> -------------------------------------------------------------- >> [1]PETSC ERROR: Invalid argument >> [1]PETSC ERROR: Wrong type of object: Parameter # 1 >> [1]PETSC ERROR: See http://www.mcs.anl.gov/petsc/documentation/faq.html >> for trouble shooting. >> [1]PETSC ERROR: Petsc Release Version 3.7.1, unknown >> [1]PETSC ERROR: ./urbanSCFD on a arch-linux2-c-debug named prometheus by >> gomer Fri May 27 16:29:01 2016 >> [1]PETSC ERROR: Configure options --with-cc=mpicc --with-cxx=mpicxx >> --with-fc=mpif90 >> [1]PETSC ERROR: #1 AOApplicationToPetsc() line 267 in >> /home/gomer/local/petsc/src/vec/is/ao/interface/ao.c >> [1]PETSC ERROR: #2 MatView_MPI_DA() line 557 in >> /home/gomer/local/petsc/src/dm/impls/da/fdda.c >> [1]PETSC ERROR: #3 MatView() line 901 in >> /home/gomer/local/petsc/src/mat/interface/matrix.c >> [0]PETSC ERROR: --------------------- Error Message >> -------------------------------------------------------------- >> [0]PETSC ERROR: Invalid argument >> [1]PETSC ERROR: --------------------- Error Message >> -------------------------------------------------------------- >> [1]PETSC ERROR: Invalid argument >> [1]PETSC ERROR: Wrong type of object: Parameter # 1 >> [1]PETSC ERROR: See http://www.mcs.anl.gov/petsc/documentation/faq.html >> for trouble shooting. >> [1]PETSC ERROR: Petsc Release Version 3.7.1, unknown >> [1]PETSC ERROR: ./urbanSCFD on a arch-linux2-c-debug named prometheus by >> gomer Fri May 27 16:29:01 2016 >> [1]PETSC ERROR: Configure options --with-cc=mpicc --with-cxx=mpicxx >> --with-fc=mpif90 >> [1]PETSC ERROR: #4 AOApplicationToPetsc() line 267 in >> /home/gomer/local/petsc/src/vec/is/ao/interface/ao.c >> [1]PETSC ERROR: #5 MatView_MPI_DA() line 557 in >> /home/gomer/local/petsc/src/dm/impls/da/fdda.c >> [1]PETSC ERROR: #6 MatView() line 901 in >> /home/gomer/local/petsc/src/mat/interface/matrix.c >> [0]PETSC ERROR: Wrong type of object: Parameter # 1 >> [0]PETSC ERROR: See http://www.mcs.anl.gov/petsc/documentation/faq.html >> for trouble shooting. 
>> [0]PETSC ERROR: Petsc Release Version 3.7.1, unknown >> [0]PETSC ERROR: ./urbanSCFD on a arch-linux2-c-debug named prometheus by >> gomer Fri May 27 16:29:01 2016 >> [0]PETSC ERROR: Configure options --with-cc=mpicc --with-cxx=mpicxx >> --with-fc=mpif90 >> [0]PETSC ERROR: #4 AOApplicationToPetsc() line 267 in >> /home/gomer/local/petsc/src/vec/is/ao/interface/ao.c >> [0]PETSC ERROR: #5 MatView_MPI_DA() line 557 in >> /home/gomer/local/petsc/src/dm/impls/da/fdda.c >> [0]PETSC ERROR: #6 MatView() line 901 in >> /home/gomer/local/petsc/src/mat/interface/matrix.c >> >> >> > > > -- > What most experimenters take for granted before they begin their > experiments is infinitely more interesting than any results to which their > experiments lead. > -- Norbert Wiener > > > -- What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. -- Norbert Wiener -------------- next part -------------- An HTML attachment was scrubbed... URL: From jed at jedbrown.org Fri May 27 21:59:14 2016 From: jed at jedbrown.org (Jed Brown) Date: Fri, 27 May 2016 20:59:14 -0600 Subject: [petsc-users] use of VecGetArray() with dof>1 (block size > 1) In-Reply-To: References: Message-ID: <87y46uubul.fsf@jedbrown.org> Ed Bueler writes: > Jed's "more information" answer about unstructured grids is a bit vague, > but it must be the one that motivates the difference. VecGetArray is most commonly used for algebraic operations that know nothing about the geometry/discretization. DM* functions are normally used to access vectors in a discretization-aware way, such as DMDA with {k,j,i} indexing or DMPlex with indexing by topological "points". If you're doing something unstructured without a DM, you probably have some analogous data structure and I would suggest making a Vec accessor in the language of that discretization. If you really think the flat array of structs that you're getting from VecGetArray (with an awkward cast) is the best interface for your unstructured discretization, I would still suggest making that trivial wrapper. -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 818 bytes Desc: not available URL: From gomer at stanford.edu Sat May 28 11:44:20 2016 From: gomer at stanford.edu (Paul Urbanczyk) Date: Sat, 28 May 2016 09:44:20 -0700 Subject: [petsc-users] Error with MatView on multiple processors In-Reply-To: References: <95704256-96f8-e5e3-1d60-1af39c209fea@stanford.edu> Message-ID: Matt, Thank you for your response. I'm going to go back to an earlier version of my code that worked to see if I may have changed something. If not, then I'll come up with a sandbox case to fiddle with. I'll post back if I get stuck again. -Paul On 05/27/2016 06:43 PM, Matthew Knepley wrote: > On Fri, May 27, 2016 at 8:38 PM, Paul Urbanczyk > wrote: > > On 05/27/2016 05:44 PM, Matthew Knepley wrote: > >> On Fri, May 27, 2016 at 6:34 PM, Paul Urbanczyk >> > wrote: >> >> Hello, >> >> I'm having trouble with the MatView function drawing the >> matrix structure(s) when I execute my code on multiple >> processors. >> >> When I run on a single processor, the code runs fine, and the >> graphics window displays cleanly. >> >> >> Lets start with an example. 
I do this >> >> cd $PETSC_DIR/src/snes/examples/tutorials >> make ex19 >> >> Make sure it runs >> >> ./ex19 -snes_monitor >> >> Make sure it runs in parallel (maybe you need >> $PETSC_DIR/$PETSC_ARCH/bin/mpiexec) >> >> mpiexec -n 2 ./ex19 -snes_monitor >> >> Make sure it can draw >> >> mpiexec -n 2 ./ex19 -snes_monitor -mat_view draw -draw_pause 1 >> >> This runs fine for me. Can you try it? >> >> Thanks, >> >> Matt > Hello Matt, > > Yes, this example seems to run just fine. How should I proceed? > > > I am not sure what you are doing in your code. Can you try and change > one of these examples > to do something like what you do? SNEX ex19 seems like it does things > mostly the way you > describe. > > Matt > > > -Paul >> >> When I run with multiple processors, I get error messages >> (see below). >> >> The matrices are constructed with DMCreateMatrix(da, &A_matrix). >> >> I then set the values with >> MatSetValuesStencil(A_matrix,1,&row,2,col_A,value_A,INSERT_VALUES). >> >> Finally, I call MatAssemblyBegin(A_matrix,MAT_FINAL_ASSEMBLY) >> and MatAssemblyEnd(A_matrix,MAT_FINAL_ASSEMBLY). >> >> I also test that the matrices are assembled with >> MatAssembled(A_matrix, &is_assembled_bool), and it appears >> they are successfully assembled. >> >> Any help/advice is greatly appreciated. >> >> Thanks in advance! >> >> -Paul Urbanczyk >> >> [0]PETSC ERROR: --------------------- Error Message >> -------------------------------------------------------------- >> [0]PETSC ERROR: Invalid argument >> [0]PETSC ERROR: Wrong type of object: Parameter # 1 >> [0]PETSC ERROR: See >> http://www.mcs.anl.gov/petsc/documentation/faq.html for >> trouble shooting. >> [0]PETSC ERROR: Petsc Release Version 3.7.1, unknown >> [0]PETSC ERROR: ./urbanSCFD on a arch-linux2-c-debug named >> prometheus by gomer Fri May 27 16:29:01 2016 >> [0]PETSC ERROR: Configure options --with-cc=mpicc >> --with-cxx=mpicxx --with-fc=mpif90 >> [0]PETSC ERROR: #1 AOApplicationToPetsc() line 267 in >> /home/gomer/local/petsc/src/vec/is/ao/interface/ao.c >> [0]PETSC ERROR: #2 MatView_MPI_DA() line 557 in >> /home/gomer/local/petsc/src/dm/impls/da/fdda.c >> [0]PETSC ERROR: #3 MatView() line 901 in >> /home/gomer/local/petsc/src/mat/interface/matrix.c >> [1]PETSC ERROR: --------------------- Error Message >> -------------------------------------------------------------- >> [1]PETSC ERROR: Invalid argument >> [1]PETSC ERROR: Wrong type of object: Parameter # 1 >> [1]PETSC ERROR: See >> http://www.mcs.anl.gov/petsc/documentation/faq.html for >> trouble shooting. 
>> [1]PETSC ERROR: Petsc Release Version 3.7.1, unknown >> [1]PETSC ERROR: ./urbanSCFD on a arch-linux2-c-debug named >> prometheus by gomer Fri May 27 16:29:01 2016 >> [1]PETSC ERROR: Configure options --with-cc=mpicc >> --with-cxx=mpicxx --with-fc=mpif90 >> [1]PETSC ERROR: #1 AOApplicationToPetsc() line 267 in >> /home/gomer/local/petsc/src/vec/is/ao/interface/ao.c >> [1]PETSC ERROR: #2 MatView_MPI_DA() line 557 in >> /home/gomer/local/petsc/src/dm/impls/da/fdda.c >> [1]PETSC ERROR: #3 MatView() line 901 in >> /home/gomer/local/petsc/src/mat/interface/matrix.c >> [0]PETSC ERROR: --------------------- Error Message >> -------------------------------------------------------------- >> [0]PETSC ERROR: Invalid argument >> [1]PETSC ERROR: --------------------- Error Message >> -------------------------------------------------------------- >> [1]PETSC ERROR: Invalid argument >> [1]PETSC ERROR: Wrong type of object: Parameter # 1 >> [1]PETSC ERROR: See >> http://www.mcs.anl.gov/petsc/documentation/faq.html for >> trouble shooting. >> [1]PETSC ERROR: Petsc Release Version 3.7.1, unknown >> [1]PETSC ERROR: ./urbanSCFD on a arch-linux2-c-debug named >> prometheus by gomer Fri May 27 16:29:01 2016 >> [1]PETSC ERROR: Configure options --with-cc=mpicc >> --with-cxx=mpicxx --with-fc=mpif90 >> [1]PETSC ERROR: #4 AOApplicationToPetsc() line 267 in >> /home/gomer/local/petsc/src/vec/is/ao/interface/ao.c >> [1]PETSC ERROR: #5 MatView_MPI_DA() line 557 in >> /home/gomer/local/petsc/src/dm/impls/da/fdda.c >> [1]PETSC ERROR: #6 MatView() line 901 in >> /home/gomer/local/petsc/src/mat/interface/matrix.c >> [0]PETSC ERROR: Wrong type of object: Parameter # 1 >> [0]PETSC ERROR: See >> http://www.mcs.anl.gov/petsc/documentation/faq.html for >> trouble shooting. >> [0]PETSC ERROR: Petsc Release Version 3.7.1, unknown >> [0]PETSC ERROR: ./urbanSCFD on a arch-linux2-c-debug named >> prometheus by gomer Fri May 27 16:29:01 2016 >> [0]PETSC ERROR: Configure options --with-cc=mpicc >> --with-cxx=mpicxx --with-fc=mpif90 >> [0]PETSC ERROR: #4 AOApplicationToPetsc() line 267 in >> /home/gomer/local/petsc/src/vec/is/ao/interface/ao.c >> [0]PETSC ERROR: #5 MatView_MPI_DA() line 557 in >> /home/gomer/local/petsc/src/dm/impls/da/fdda.c >> [0]PETSC ERROR: #6 MatView() line 901 in >> /home/gomer/local/petsc/src/mat/interface/matrix.c >> >> >> >> >> >> -- >> What most experimenters take for granted before they begin their >> experiments is infinitely more interesting than any results to >> which their experiments lead. >> -- Norbert Wiener > > > > > -- > What most experimenters take for granted before they begin their > experiments is infinitely more interesting than any results to which > their experiments lead. > -- Norbert Wiener -------------- next part -------------- An HTML attachment was scrubbed... URL: From elbueler at alaska.edu Sat May 28 13:08:30 2016 From: elbueler at alaska.edu (Ed Bueler) Date: Sat, 28 May 2016 10:08:30 -0800 Subject: [petsc-users] use of VecGetArray() with dof>1 (block size > 1) In-Reply-To: <87y46uubul.fsf@jedbrown.org> References: <87y46uubul.fsf@jedbrown.org> Message-ID: Jed -- > If you're doing something unstructured without a DM, you probably have some > analogous data structure ... Quite so. > ... and I would suggest making a Vec accessor in the language of that discretization. 
> If you really think the flat array of structs that you're getting from VecGetArray (with
> an awkward cast) is the best interface for your unstructured discretization, I would
> still suggest making that trivial wrapper.

Sounds good.  Given that this is exactly the implementation of DMDAVecGetArray(),
i.e. wrapper-with-awkward-cast, I'll do it.

So I guess I finally understand the principle: VecGetArray() is roughly-speaking
intended to be wrapped, so it has a "raw" interface returning a type-specific pointer
for the underlying PetscScalar array.  You were saying this all along ...

Ed

On Fri, May 27, 2016 at 6:59 PM, Jed Brown wrote:

> Ed Bueler writes:
> > Jed's "more information" answer about unstructured grids is a bit vague,
> > but it must be the one that motivates the difference.
>
> VecGetArray is most commonly used for algebraic operations that know
> nothing about the geometry/discretization.  DM* functions are normally
> used to access vectors in a discretization-aware way, such as DMDA with
> {k,j,i} indexing or DMPlex with indexing by topological "points".  If
> you're doing something unstructured without a DM, you probably have some
> analogous data structure and I would suggest making a Vec accessor in
> the language of that discretization.  If you really think the flat array
> of structs that you're getting from VecGetArray (with an awkward cast)
> is the best interface for your unstructured discretization, I would
> still suggest making that trivial wrapper.
>

-- 
Ed Bueler
Dept of Math and Stat and Geophysical Institute
University of Alaska Fairbanks
Fairbanks, AK 99775-6660
301C Chapman and 410D Elvey
907 474-7693 and 907 474-7199  (fax 907 474-5394)
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From zonexo at gmail.com  Tue May 31 23:12:43 2016
From: zonexo at gmail.com (TAY wee-beng)
Date: Wed, 1 Jun 2016 12:12:43 +0800
Subject: [petsc-users] Error with PETSc on K computer
Message-ID: 

Hi,

I'm trying to run my MPI CFD code on Japan's K computer. My code runs fine
as long as I do not use the PETSc DMDAVecGetArrayF90 subroutine. As soon as
it is called, e.g.

call DMDAVecGetArrayF90(da_u,u_local,u_array,ierr)

I get the error below. I have no problem with my code on other clusters using
the new Intel compilers. I used to have problems with DM when using the old
Intel compilers. Now on the K computer, I'm using Fujitsu's Fortran compiler.
How can I troubleshoot?

Btw, I also tested the ex13f90 example and it didn't work either. The error is
below.
My code error:

 size_x,size_y,size_z 76x130x136
 total grid size = 1343680
 recommended cores (50k / core) = 26.87360000000000
 0
 1
 1
[3]PETSC ERROR: [1]PETSC ERROR: ------------------------------------------------------------------------
[1]PETSC ERROR: Caught signal number 11 SEGV: Segmentation Violation, probably memory access out of range
[1]PETSC ERROR: Try option -start_in_debugger or -on_error_attach_debugger
[1]PETSC ERROR: or see http://www.mcs.anl.gov/petsc/documentation/faq.html#valgrind
[1]PETSC ERROR: or try http://valgrind.org on GNU/linux and Apple Mac OS X to find memory corruption errors
[1]PETSC ERROR: likely location of problem given in stack below
[1]PETSC ERROR: --------------------- Stack Frames ------------------------------------
[1]PETSC ERROR: Note: The EXACT line numbers in the stack are not available,
[1]PETSC ERROR: INSTEAD the line number of the start of the function
[1]PETSC ERROR: is given.
[1]PETSC ERROR: [1] F90Array3dCreate line 244 /.global/volume2/home/hp150306/t00196/source/petsc-3.6.3/src/sys/f90-src/f90_cwrap.c
 1
------------------------------------------------------------------------
[3]PETSC ERROR: Caught signal number 11 SEGV: Segmentation Violation, probably memory access out of range
[3]PETSC ERROR: Try option -start_in_debugger or -on_error_attach_debugger
[3]PETSC ERROR: or see http://www.mcs.anl.gov/petsc/documentation/faq.html#valgrind
[3]PETSC ERROR: or try http://valgrind.org on GNU/linux and Apple Mac OS X to find memory corruption errors
[3]PETSC ERROR: likely location of problem given in stack below
[3]PETSC ERROR: --------------------- Stack Frames ------------------------------------
[0]PETSC ERROR: ------------------------------------------------------------------------
[0]PETSC ERROR: Caught signal number 11 SEGV: Segmentation Violation, probably memory access out of range
[0]PETSC ERROR: Try option -start_in_debugger or -on_error_attach_debugger
[0]PETSC ERROR: or see http://www.mcs.anl.gov/petsc/documentation/faq.html#valgrind
[0]PETSC ERROR: or try http://valgrind.org on GNU/linux and Apple Mac OS X to find memory corruption errors
[0]PETSC ERROR: likely location of problem given in stack below
[0]PETSC ERROR: --------------------- Stack Frames ------------------------------------
[0]PETSC ERROR: Note: The EXACT line numbers in the stack are not available,
[0]PETSC ERROR: INSTEAD the line number of the start of the function
[0]PETSC ERROR: is given.
[0]PETSC ERROR: [0] F90Array3dCreate line 244 /.global/volume2/home/hp150306/t00196/source/petsc-3.6.3/src/sys/f90-src/f90_cwrap.c
[0]PETSC ERROR: --------------------- Error Message ----------------------------------------- 1
[2]PETSC ERROR: ------------------------------------------------------------------------
[2]PETSC ERROR: Caught signal number 11 SEGV: Segmentation Violation, probably memory access out of range
[2]PETSC ERROR: Try option -start_in_debugger or -on_error_attach_debugger
[2]PETSC ERROR: or see http://www.mcs.anl.gov/petsc/documentation/faq.html#valgrind
[2]PETSC ERROR: or try http://valgrind.org on GNU/linux and Apple Mac OS X to find memory corruption errors
[2]PETSC ERROR: likely location of problem given in stack below
[2]PETSC ERROR: --------------------- Stack Frames ------------------------------------
[2]PETSC ERROR: Note: The EXACT line numbers in the stack are not available,
[2]PETSC ERROR: INSTEAD the line number of the start of the function
[2]PETSC ERROR: is given.
[2]PETSC ERROR: [2] F90Array3dCreate line 244 /.global/volume2/home/hp150306/t00196/source/petsc-3.6.3/src/sys/f90-src/f90_cwrap.c
[2]PETSC ERROR: --------------------- Error Message -----------------------------------------[3]PETSC ERROR: Note: The EXACT line numbers in the stack are not available,
[3]PETSC ERROR: INSTEAD the line number of the start of the function
[3]PETSC ERROR: is given.
[3]PETSC ERROR: [3] F90Array3dCreate line 244 /.global/volume2/home/hp150306/t00196/source/petsc-3.6.3/src/sys/f90-src/f90_cwrap.c
[3]PETSC ERROR: --------------------- Error Message --------------------------------------------------------------
[3]PETSC ERROR: Signal received
[3]PETSC ERROR: See http://www.mcs.anl.gov/petsc/documentation/faq.html for trouble shooting.
[3]PETSC ERROR: Petsc Release Version 3.6.3, Dec, 03, 2015
[3]PETSC ERROR: ./a-debug.out on a petsc-3.6.3_debug named b04-036 by Unknown Wed Jun 1 12:54:34 2016
[3]PETSC ERROR: Configure options --with-cc=mpifcc --with-cxx=mpiFCC --with-fc=mpifrt --with-64-bit-pointers=1 --CC=mpifcc --CFLAGS="-Xg -O0" --CXX=mpiFCC --CXXFLAGS="-Xg -O0" --FC=mpifrt --FFLAGS="-X9 -O0" --LD_SHARED= --LDDFLAGS= --with-openmp=1 --with-mpiexec=mpiexec --known-endian=big --with-shared----------------------
[0]PETSC ERROR: Signal received
[0]PETSC ERROR: See http://www.mcs.anl.gov/petsc/documentation/faq.html for trouble shooting.
[0]PETSC ERROR: Petsc Release Version 3.6.3, Dec, 03, 2015
[0]PETSC ERROR: ./a-debug.out on a petsc-3.6.3_debug named b04-036 by Unknown Wed Jun 1 12:54:34 2016
[0]PETSC ERROR: Configure options --with-cc=mpifcc --with-cxx=mpiFCC --with-fc=mpifrt --with-64-bit-pointers=1 --CC=mpifcc --CFLAGS="-Xg -O0" --CXX=mpiFCC --CXXFLAGS="-Xg -O0" --FC=mpifrt --FFLAGS="-X9 -O0" --LD_SHARED= --LDDFLAGS= --with-openmp=1 --with-mpiexec=mpiexec --known-endian=big --with-shared-libraries=0 --with-blas-lapack-lib=-SSL2 --with-scalapack-lib=-SCALAPACK --prefix=/home/hp150306/t00196/lib/petsc-3.6.3_debug --with-fortran-interfaces=1 --with-debugging=1 --useThreads=0 --with-hypre=1 --with-hypre-dir=/home/hp150306/t00196/lib/hypre-2.10.0b-p4
[0]PETSC ERROR: #1 User provided function() line 0 in unknown file
--------------------------------------------------------------------------
[m---------------------
[2]PETSC ERROR: Signal received
[2]PETSC ERROR: See http://www.mcs.anl.gov/petsc/documentation/faq.html for trouble shooting.
[2]PETSC ERROR: Petsc Release Version 3.6.3, Dec, 03, 2015
[2]PETSC ERROR: ./a-debug.out on a petsc-3.6.3_debug named b04-036 by Unknown Wed Jun 1 12:54:34 2016
[2]PETSC ERROR: Configure options --with-cc=mpifcc --with-cxx=mpiFCC --with-fc=mpifrt --with-64-bit-pointers=1 --CC=mpifcc --CFLAGS="-Xg -O0" --CXX=mpiFCC --CXXFLAGS="-Xg -O0" --FC=mpifrt --FFLAGS="-X9 -O0" --LD_SHARED= --LDDFLAGS= --with-openmp=1 --with-mpiexec=mpiexec --known-endian=big --with-shared-libraries=0 --with-blas-lapack-lib=-SSL2 --with-scalapack-lib=-SCALAPACK --prefix=/home/hp150306/t00196/lib/petsc-3.6.3_debug --with-fortran-interfaces=1 --with-debugging=1 --useThreads=0 --with-hypre=1 --with-hypre-dir=/home/hp150306/t00196/lib/hypre-2.10.0b-p4
[2]PETSC ERROR: #1 User provided function() line 0 in unknown file
--------------------------------------------------------------------------
[m[1]PETSC ERROR: --------------------- Error Message --------------------------------------------------------------
[1]PETSC ERROR: Signal received
[1]PETSC ERROR: See http://www.mcs.anl.gov/petsc/documentation/faq.html for trouble shooting.
[1]PETSC ERROR: Petsc Release Version 3.6.3, Dec, 03, 2015
[1]PETSC ERROR: ./a-debug.out on a petsc-3.6.3_debug named b04-036 by Unknown Wed Jun 1 12:54:34 2016
[1]PETSC ERROR: Configure options --with-cc=mpifcc --with-cxx=mpiFCC --with-fc=mpifrt --with-64-bit-pointers=1 --CC=mpifcc --CFLAGS="-Xg -O0" --CXX=mpiFCC --CXXFLAGS="-Xg -O0" --FC=mpifrt --FFLAGS="-X9 -O0" --LD_SHARED= --LDDFLAGS= --with-openmp=1 --with-mpiexec=mpiexec --known-endian=big --with-shared-libraries=0 --with-blas-lapack-lib=-SSL2 --with-scalapack-lib=-SCALAPACK --prefix=/home/hp150306/t00196/lib/petsc-3.6.3_debug --with-fortran-interfaces=1 --with-debugging=1 --useThreads=0 --with-hypre=1 --with-hypre-dir=/home/hp150306/t00196/lib/hypre-2.10.0b-p4
[1]PETSC ERROR: #1 User provided function() line 0 ilibraries=0 --with-blas-lapack-lib=-SSL2 --with-scalapack-lib=-SCALAPACK --prefix=/home/hp150306/t00196/lib/petsc-3.6.3_debug --with-fortran-interfaces=1 --with-debugging=1 --useThreads=0 --with-hypre=1 --with-hypre-dir=/home/hp150306/t00196/lib/hypre-2.10.0b-p4
[3]PETSC ERROR: #1 User provided function() line 0 in unknown file
--------------------------------------------------------------------------
[mpi::mpi-api::mpi-abort]
MPI_ABORT was invoked on rank 3 in communicator MPI_COMM_WORLD
with errorcode 59.

NOTE: invoking MPI_ABORT causes Open MPI to kill all MPI processes.
You may or may not see output from other processes, depending on
exactly when Open MPI kills them.
--------------------------------------------------------------------------
[b04-036:28416] /opt/FJSVtclang/GM-1.2.0-20/lib64/libmpi.so.0(orte_errmgr_base_error_abort+0x84) [0xffffffff11360404]
[b04-036:28416] /opt/FJSVtclang/GM-1.2.0-20/lib64/libmpi.so.0(ompi_mpi_abort+0x51c) [0xffffffff1110391c]
[b04-036:28416] /opt/FJSVtclang/GM-1.2.0-2pi::mpi-api::mpi-abort]
MPI_ABORT was invoked on rank 2 in communicator MPI_COMM_WORLD
with errorcode 59.

ex13f90 error:

[t00196 at b04-036 tutorials]$ mpiexec -np 2 ./ex13f90
jwe1050i-w The hardware barrier couldn't be used and continues processing using the software barrier.
taken to (standard) corrective action, execution continuing.
jwe1050i-w The hardware barrier couldn't be used and continues processing using the software barrier.
taken to (standard) corrective action, execution continuing.
 Hi! We're solving van der Pol using 2 processes.

 t x1 x2
[1]PETSC ERROR: ------------------------------------------------------------------------
[1]PETSC ERROR: Caught signal number 10 BUS: Bus Error, possibly illegal memory access
[1]PETSC ERROR: Try option -start_in_debugger or -on_error_attach_debugger
[0]PETSC ERROR: ------------------------------------------------------------------------
[0]PETSC ERROR: Caught signal number 10 BUS: Bus Error, possibly illegal memory access
[0]PETSC ERROR: Try option -start_in_debugger or -on_error_attach_debugger
[0]PETSC ERROR: or see http://www.mcs.anl.gov/petsc/documentation/faq.html#valgrind
[0]PETSC ERROR: or try http://valgrind.org on GNU/linux and Apple Mac OS X to find memory corruption errors
[1]PETSC ERROR: or see http://www.mcs.anl.gov/petsc/documentation/faq.html#valgrind
[1]PETSC ERROR: or try http://valgrind.org on GNU/linux and Apple Mac OS X to find memory corruption errors
[1]PETSC ERROR: likely location of problem given in stack below
[1]PETSC ERROR: --------------------- Stack Frames ------------------------------------
[1]PETSC ERROR: Note: The EXACT line numbers in the stack are not available,
[1]PETSC ERROR: INSTEAD the line number of the start of the function
[1]PETSC ERROR: is given.
[1]PETSC ERROR: [1] F90Array4dCreate line 337 /.global/volume2/home/hp150306/t00196/source/petsc-3.6.3/src/sys/f90-src/f90_cwrap.c
[0]PETSC ERROR: likely location of problem given in stack below
[0]PETSC ERROR: --------------------- Stack Frames ------------------------------------
[0]PETSC ERROR: Note: The EXACT line numbers in the stack are not available,
[0]PETSC ERROR: INSTEAD the line number of the start of the function
[0]PETSC ERROR: is given.
[0]PETSC ERROR: [0] F90Array4dCreate line 337 /.global/volume2/home/hp150306/t00196/source/petsc-3.6.3/src/sys/f90-src/f90_cwrap.c
[1]PETSC ERROR: --------------------- Error Message --------------------------------------------------------------
[1]PETSC ERROR: Signal received
[1]PETSC ERROR: See http://www.mcs.anl.gov/petsc/documentation/faq.html for trouble shooting.
[1]PETSC ERROR: Petsc Release Version 3.6.3, Dec, 03, 2015
[1]PETSC ERROR: ./ex13f90 on a petsc-3.6.3_debug named b04-036 by Unknown Wed Jun 1 13:04:34 2016
[1]PETSC ERROR: Configure options --with-cc=mpifcc --with-cxx=mpiFCC --with-fc=mpifrt --with-64-bit-pointers=1 --CC=mpifcc --CFLAGS="-Xg -O0" --CXX=mpiFCC --CXXFLAGS="-Xg -O0" --FC=mpifrt --FFLAGS="-X9 -O0" --LD_SHARED= --LDDFLAGS= --with-openmp=1 --with-mpiexec=mpiexec --known-endian=big --with-shared-libraries=0 --with-blas-lapack-lib=-SSL2 --with-scalapack-lib=-SCALAPACK --prefix=/home/hp150306/t00196/lib/petsc-3.6.3_debug --with-fortran-interfaces=1 --with-debugging=1 --useThreads=0 --with-hypre=1 --with-hyp

-- 
Thank you

Yours sincerely,

TAY wee-beng
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From balay at mcs.anl.gov  Tue May 31 23:21:50 2016
From: balay at mcs.anl.gov (Satish Balay)
Date: Tue, 31 May 2016 23:21:50 -0500
Subject: [petsc-users] Error with PETSc on K computer
In-Reply-To: 
References: 
Message-ID: 

Do PETSc examples using VecGetArrayF90() work?
say src/vec/vec/examples/tutorials/ex4f90.F

Satish

On Tue, 31 May 2016, TAY wee-beng wrote:

> Hi,
>
> I'm trying to run my MPI CFD code on Japan's K computer. My code runs fine
> as long as I do not use the PETSc DMDAVecGetArrayF90 subroutine. As soon as
> it is called, e.g.
>
> call DMDAVecGetArrayF90(da_u,u_local,u_array,ierr)
>
> I get the error below. I have no problem with my code on other clusters using
> the new Intel compilers. I used to have problems with DM when using the old
> Intel compilers. Now on the K computer, I'm using Fujitsu's Fortran compiler.
> How can I troubleshoot?
>
> Btw, I also tested the ex13f90 example and it didn't work either. The error is
> below.
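
For anyone hitting the same thing, the stand-alone VecGetArrayF90 check Satish is
suggesting looks roughly like the sketch below. This is not the actual ex4f90.F;
the include paths follow the petsc-3.6/3.7 layout and the vector size and values
are arbitrary.

  program test_vecgetarrayf90
    implicit none
#include <petsc/finclude/petscsys.h>
#include <petsc/finclude/petscvec.h>
#include <petsc/finclude/petscvec.h90>
    Vec                  x
    PetscScalar, pointer :: xx(:)
    PetscErrorCode       ierr
    PetscInt             i, n

    n = 8
    call PetscInitialize(PETSC_NULL_CHARACTER, ierr)
    call VecCreate(PETSC_COMM_WORLD, x, ierr)
    call VecSetSizes(x, PETSC_DECIDE, n, ierr)
    call VecSetFromOptions(x, ierr)

    ! Get a Fortran pointer to the local part of the vector, fill it, give it back
    call VecGetArrayF90(x, xx, ierr)
    do i = 1, size(xx)
       xx(i) = 10.0*i
    end do
    call VecRestoreArrayF90(x, xx, ierr)

    call VecView(x, PETSC_VIEWER_STDOUT_WORLD, ierr)
    call VecDestroy(x, ierr)
    call PetscFinalize(ierr)
  end program test_vecgetarrayf90

If even a test this small fails with the Fujitsu compiler, the problem is likely in
the Fortran array-descriptor interface (the F90Array*dCreate routines seen in the
traces above) rather than in the CFD code itself.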