From sean.null at gmail.com Sun Jul 1 20:22:03 2012 From: sean.null at gmail.com (Din Zhao) Date: Mon, 2 Jul 2012 02:22:03 +0100 Subject: [petsc-users] Is there a corresponding function in petsc4py to VecMPISetFhost Message-ID: And in general, does any function in petsc have a counterpart in petsc4py? Thanks. From jedbrown at mcs.anl.gov Sun Jul 1 20:55:53 2012 From: jedbrown at mcs.anl.gov (Jed Brown) Date: Sun, 1 Jul 2012 17:55:53 -0800 Subject: [petsc-users] Is there a corresponding function in petsc4py to VecMPISetFhost In-Reply-To: References: Message-ID: No, but you can use Vec.createGhost() On Sun, Jul 1, 2012 at 5:22 PM, Din Zhao wrote: > And in general, does any function in petsc have a counterpart in petsc4py? > Most do, but the projects are maintained somewhat separately so not every one make it into petsc4py. Also, sometimes a more "pythonic" interface is adopted by petsc4py, so the calls don't line up exactly. -------------- next part -------------- An HTML attachment was scrubbed... URL: From bsmith at mcs.anl.gov Sun Jul 1 22:10:40 2012 From: bsmith at mcs.anl.gov (Barry Smith) Date: Sun, 1 Jul 2012 22:10:40 -0500 Subject: [petsc-users] Is there a corresponding function in petsc4py to VecMPISetFhost In-Reply-To: References: Message-ID: On Jul 1, 2012, at 8:55 PM, Jed Brown wrote: > No, but you can use Vec.createGhost() > > On Sun, Jul 1, 2012 at 5:22 PM, Din Zhao wrote: > And in general, does any function in petsc have a counterpart in petsc4py? > > Most do, but the projects are maintained somewhat separately so not every one make it into petsc4py. Also, sometimes a more "pythonic" interface is adopted by petsc4py, so the calls don't line up exactly. We do intend to have the same functionality in both, so if something is a bit different on the pythonic side we still want it to have the same functionality. It is easy for us to miss something so please do report to petsc-maint at mcs.anl.gov missing python functionality. Barry From jedbrown at mcs.anl.gov Mon Jul 2 03:28:14 2012 From: jedbrown at mcs.anl.gov (Jed Brown) Date: Mon, 2 Jul 2012 00:28:14 -0800 Subject: [petsc-users] Is there a corresponding function in petsc4py to VecMPISetFhost In-Reply-To: References: Message-ID: On Sun, Jul 1, 2012 at 7:10 PM, Barry Smith wrote: > > On Jul 1, 2012, at 8:55 PM, Jed Brown wrote: > > > No, but you can use Vec.createGhost() > > > > On Sun, Jul 1, 2012 at 5:22 PM, Din Zhao wrote: > > And in general, does any function in petsc have a counterpart in > petsc4py? > > > > Most do, but the projects are maintained somewhat separately so not > every one make it into petsc4py. Also, sometimes a more "pythonic" > interface is adopted by petsc4py, so the calls don't line up exactly. > > We do intend to have the same functionality in both, so if something is > a bit different on the pythonic side we still want it to have the same > functionality. > Here is Vec.setMPIGhost() http://code.google.com/p/petsc4py/source/detail?r=25b193708e130337ead16adf6c0a7851b1e90f0b This patch includes a test and a change to handling of the local form. Lisandro, is it okay to depend on the 'with' statement here? Since the old code never called VecGhostRestoreLocalForm(), I consider it to have been non-compliant. Would you prefer to handle this in a different way? http://code.google.com/p/petsc4py/source/detail?r=1f45f9ab5d8df8695bc3e0c6c662658911f519fb -------------- next part -------------- An HTML attachment was scrubbed... 
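For readers following the ghosted-vector discussion above, here is a minimal petsc4py sketch of the workflow Jed describes: create a ghosted vector, scatter owned values into the ghost slots, and read them through the local form (the 'with'-based local form is the handling his patch refers to). The ghost-index choice, the size tuple and the enum names are assumptions for illustration, not taken from the thread, and exact signatures may differ between petsc4py releases.

from petsc4py import PETSc

comm = PETSc.COMM_WORLD
rank = comm.getRank()
nproc = comm.getSize()
nloc = 4                                   # owned entries per process

# illustrative choice: each rank keeps a ghost copy of the first entry
# owned by the next rank (on one process this is just a locally owned index)
ghosts = [((rank + 1) % nproc) * nloc]

v = PETSc.Vec().createGhost(ghosts, size=(nloc, PETSc.DECIDE), comm=comm)
v.set(float(rank))                         # fill the owned part
v.assemble()

# update the ghost slots with the values held by their owning ranks
v.ghostUpdate(addv=PETSc.InsertMode.INSERT, mode=PETSc.ScatterMode.FORWARD)

# the local form exposes owned entries followed by the ghost entries; using it
# as a context manager restores the form automatically when the block exits
with v.localForm() as loc:
    print(rank, loc.getArray())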
URL: From sean.null at gmail.com Mon Jul 2 04:42:53 2012 From: sean.null at gmail.com (Din Zhao) Date: Mon, 2 Jul 2012 10:42:53 +0100 Subject: [petsc-users] Is there a corresponding function in petsc4py to VecMPISetFhost In-Reply-To: References: Message-ID: <4FEDD605-5404-4F49-83D5-5DA57A56434F@gmail.com> Thank you all. On Jul 2, 2012, at 2:55 AM, Jed Brown wrote: > No, but you can use Vec.createGhost() > > On Sun, Jul 1, 2012 at 5:22 PM, Din Zhao wrote: > And in general, does any function in petsc have a counterpart in petsc4py? > > Most do, but the projects are maintained somewhat separately so not every one make it into petsc4py. Also, sometimes a more "pythonic" interface is adopted by petsc4py, so the calls don't line up exactly. -------------- next part -------------- An HTML attachment was scrubbed... URL: From zonexo at gmail.com Mon Jul 2 04:58:23 2012 From: zonexo at gmail.com (TAY wee-beng) Date: Mon, 02 Jul 2012 11:58:23 +0200 Subject: [petsc-users] Getting 2d array with updated ghost values from DM global vector Message-ID: <4FF170BF.3060303@gmail.com> Hi, I have used DMDACreate2d for my code and then use: call DMLocalToGlobalBegin(da,b_rhs_semi_local,INSERT_VALUES,b_rhs_semi_global,ierr) call DMLocalToGlobalEnd(da,b_rhs_semi_local,INSERT_VALUES,b_rhs_semi_global,ierr) to construct the global DM vector b_rhs_semi_global Now I want to get the values with ghost values in a 2d array locally which is declared as: real(8), allocatable :: array2d(:,:) I guess I should use DMDAGetGhostCorners to get the corressponding indices and allocate it. But what should I do next? How can I use something like VecGetArrayF90 to get to the pointer to access the local vector? I can't use DMDAVecGetArrayF90/DMDAVecRestoreArrayF90 since I'm using intel fortran and they can't work. I can't use gfortran at the moment since I've problems with HYPRE with gfortran in 3D. Thanks -- Yours sincerely, TAY wee-beng From knepley at gmail.com Mon Jul 2 07:49:28 2012 From: knepley at gmail.com (Matthew Knepley) Date: Mon, 2 Jul 2012 06:49:28 -0600 Subject: [petsc-users] Getting 2d array with updated ghost values from DM global vector In-Reply-To: <4FF170BF.3060303@gmail.com> References: <4FF170BF.3060303@gmail.com> Message-ID: On Mon, Jul 2, 2012 at 3:58 AM, TAY wee-beng wrote: > Hi, > > I have used DMDACreate2d for my code and then use: > > call DMLocalToGlobalBegin(da,b_rhs_**semi_local,INSERT_VALUES,b_** > rhs_semi_global,ierr) > > call DMLocalToGlobalEnd(da,b_rhs_**semi_local,INSERT_VALUES,b_** > rhs_semi_global,ierr) > > to construct the global DM vector b_rhs_semi_global > > Now I want to get the values with ghost values in a 2d array locally which > is declared as: > > real(8), allocatable :: array2d(:,:) > > I guess I should use DMDAGetGhostCorners to get the corressponding indices > and allocate it. But what should I do next? How can I use something like > VecGetArrayF90 to get to the pointer to access the local vector? > > I can't use DMDAVecGetArrayF90/**DMDAVecRestoreArrayF90 since I'm using > intel fortran and they can't work. I can't use gfortran at the moment since > I've problems with HYPRE with gfortran in 3D. > Are you certain of this? That used to be true, but the current version should work for any F90. Matt > Thanks > > -- > Yours sincerely, > > TAY wee-beng > > -- What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. 
-- Norbert Wiener -------------- next part -------------- An HTML attachment was scrubbed... URL: From sean.null at gmail.com Mon Jul 2 08:25:13 2012 From: sean.null at gmail.com (Xin Zhao) Date: Mon, 2 Jul 2012 14:25:13 +0100 Subject: [petsc-users] Is there a corresponding function in petsc4py to VecMPISetFhost In-Reply-To: <4FEDD605-5404-4F49-83D5-5DA57A56434F@gmail.com> References: <4FEDD605-5404-4F49-83D5-5DA57A56434F@gmail.com> Message-ID: Hi Jed, I've add the code to petsc4py and then I typed in terminal: python setup.py build python setup.py install --user But I still can not find the method Vec.setMPIGhost(). I know the question is stupid, but I really need your help. Regards, Xin On Mon, Jul 2, 2012 at 10:42 AM, Din Zhao wrote: > Thank you all. > > > > On Jul 2, 2012, at 2:55 AM, Jed Brown wrote: > > No, but you can use Vec.createGhost() > > On Sun, Jul 1, 2012 at 5:22 PM, Din Zhao wrote: > >> And in general, does any function in petsc have a counterpart in petsc4py? >> > > Most do, but the projects are maintained somewhat separately so not every > one make it into petsc4py. Also, sometimes a more "pythonic" interface is > adopted by petsc4py, so the calls don't line up exactly. > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From aron.ahmadia at kaust.edu.sa Mon Jul 2 08:38:57 2012 From: aron.ahmadia at kaust.edu.sa (Aron Ahmadia) Date: Mon, 2 Jul 2012 16:38:57 +0300 Subject: [petsc-users] Is there a corresponding function in petsc4py to VecMPISetFhost In-Reply-To: References: <4FEDD605-5404-4F49-83D5-5DA57A56434F@gmail.com> Message-ID: Xin, Unless you have modified your PYTHONPATH variable or your site-packages settings, this command: python setup.py install --user will not actually install petsc4py in a location visible to your Python installation. Unfortunately, the Python procedures and documentation for "user-local" installations are fairly poor in comparison to the rest of Python's friendliness. I rely on virtualenv to set up an environment that I can just "pip install" into. I recommend using virtualenv, as properly using it can save you a lot of headaches when developing in Python. See this answer on StackExchange to understand how to get set up with virtualenv: http://stackoverflow.com/a/5177027/122022 Once you have created a virtualenv, you can then cd into the directory of code you'd like to use and type: pip install -e . This allows you to modify the code and have those changes reflected when you reload the virtual environment's Python interpreter. If you just need to install once, you can just do: pip install . or python setup.py install And the package will be installed into the virtual environment, though you will need to re-install if you make any changes. Hope this helps, Aron On Mon, Jul 2, 2012 at 4:25 PM, Xin Zhao wrote: > Hi Jed, > > I've add the code to petsc4py and then > I typed in terminal: > python setup.py build > python setup.py install --user > > > But I still can not find the method Vec.setMPIGhost(). > > I know the question is stupid, but I really need your help. > > > Regards, > Xin > > > On Mon, Jul 2, 2012 at 10:42 AM, Din Zhao wrote: > >> Thank you all. >> >> >> >> On Jul 2, 2012, at 2:55 AM, Jed Brown wrote: >> >> No, but you can use Vec.createGhost() >> >> On Sun, Jul 1, 2012 at 5:22 PM, Din Zhao wrote: >> >>> And in general, does any function in petsc have a counterpart in >>> petsc4py? >>> >> >> Most do, but the projects are maintained somewhat separately so not every >> one make it into petsc4py. 
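Aron's point about the --user install not being on the import path can be checked directly from the interpreter. The snippet below is just standard introspection; setMPIGhost is the method name from Jed's patch, and nothing else here comes from the thread.

import petsc4py
print(petsc4py.__file__)                            # which copy of petsc4py is actually imported
print(getattr(petsc4py, '__version__', 'unknown'))

from petsc4py import PETSc
# False here means the interpreter is still picking up the old build on sys.path
print(hasattr(PETSc.Vec, 'setMPIGhost'))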
Also, sometimes a more "pythonic" interface is >> adopted by petsc4py, so the calls don't line up exactly. >> >> > -- ------------------------------ This message and its contents, including attachments are intended solely for the original recipient. If you are not the intended recipient or have received this message in error, please notify me immediately and delete this message from your computer system. Any unauthorized use or distribution is prohibited. Please consider the environment before printing this email. -------------- next part -------------- An HTML attachment was scrubbed... URL: From zonexo at gmail.com Mon Jul 2 08:41:17 2012 From: zonexo at gmail.com (TAY wee-beng) Date: Mon, 02 Jul 2012 15:41:17 +0200 Subject: [petsc-users] Getting 2d array with updated ghost values from DM global vector In-Reply-To: References: <4FF170BF.3060303@gmail.com> Message-ID: <4FF1A4FD.8040003@gmail.com> On 2/7/2012 2:49 PM, Matthew Knepley wrote: > On Mon, Jul 2, 2012 at 3:58 AM, TAY wee-beng > wrote: > > Hi, > > I have used DMDACreate2d for my code and then use: > > call > DMLocalToGlobalBegin(da,b_rhs_semi_local,INSERT_VALUES,b_rhs_semi_global,ierr) > > call > DMLocalToGlobalEnd(da,b_rhs_semi_local,INSERT_VALUES,b_rhs_semi_global,ierr) > > to construct the global DM vector b_rhs_semi_global > > Now I want to get the values with ghost values in a 2d array > locally which is declared as: > > real(8), allocatable :: array2d(:,:) > > I guess I should use DMDAGetGhostCorners to get the corressponding > indices and allocate it. But what should I do next? How can I use > something like VecGetArrayF90 to get to the pointer to access the > local vector? > > I can't use DMDAVecGetArrayF90/DMDAVecRestoreArrayF90 since I'm > using intel fortran and they can't work. I can't use gfortran at > the moment since I've problems with HYPRE with gfortran in 3D. > > > Are you certain of this? That used to be true, but the current version > should work for any F90. > > Matt I just tested 3.3-p1 and it still doesn't work (example ex11f90 in dm). Is there a chance petsc-dev can work? > > Thanks > > -- > Yours sincerely, > > TAY wee-beng > > > > > -- > What most experimenters take for granted before they begin their > experiments is infinitely more interesting than any results to which > their experiments lead. > -- Norbert Wiener -------------- next part -------------- An HTML attachment was scrubbed... URL: From bourdin at lsu.edu Mon Jul 2 10:10:00 2012 From: bourdin at lsu.edu (Blaise Bourdin) Date: Mon, 2 Jul 2012 19:10:00 +0400 Subject: [petsc-users] Getting 2d array with updated ghost values from DM global vector In-Reply-To: <4FF1A4FD.8040003@gmail.com> References: <4FF170BF.3060303@gmail.com> <4FF1A4FD.8040003@gmail.com> Message-ID: <54E13E1A-BB29-47FD-B671-3BC3E5B6329D@lsu.edu> Hi, There appears to be a bug in DMDAVecRestoreArrayF90. It is probably only triggered when the intel compilers. gfortran and intel seem to have very different internal implementations of fortran90 allocatable arrays. Developers, can you check if the attached patch makes sense? It will not fix the case of a 3d da with dof>1 since F90Array4dAccess is not implemented. 
Other than that, it seems to fix ex11f90 under linux and mac OS Blaise On Jul 2, 2012, at 5:41 PM, TAY wee-beng wrote: > On 2/7/2012 2:49 PM, Matthew Knepley wrote: >> On Mon, Jul 2, 2012 at 3:58 AM, TAY wee-beng wrote: >> Hi, >> >> I have used DMDACreate2d for my code and then use: >> >> call DMLocalToGlobalBegin(da,b_rhs_semi_local,INSERT_VALUES,b_rhs_semi_global,ierr) >> >> call DMLocalToGlobalEnd(da,b_rhs_semi_local,INSERT_VALUES,b_rhs_semi_global,ierr) >> >> to construct the global DM vector b_rhs_semi_global >> >> Now I want to get the values with ghost values in a 2d array locally which is declared as: >> >> real(8), allocatable :: array2d(:,:) >> >> I guess I should use DMDAGetGhostCorners to get the corressponding indices and allocate it. But what should I do next? How can I use something like VecGetArrayF90 to get to the pointer to access the local vector? >> >> I can't use DMDAVecGetArrayF90/DMDAVecRestoreArrayF90 since I'm using intel fortran and they can't work. I can't use gfortran at the moment since I've problems with HYPRE with gfortran in 3D. >> >> Are you certain of this? That used to be true, but the current version should work for any F90. >> >> Matt > > I just tested 3.3-p1 and it still doesn't work (example ex11f90 in dm). Is there a chance petsc-dev can work? >> >> Thanks >> >> -- >> Yours sincerely, >> >> TAY wee-beng >> >> >> >> >> -- >> What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. >> -- Norbert Wiener > > -- Department of Mathematics and Center for Computation & Technology Louisiana State University, Baton Rouge, LA 70803, USA Tel. +1 (225) 578 1612, Fax +1 (225) 578 4276 http://www.math.lsu.edu/~bourdin -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: DMDAVecRestoreArrayF90.patch Type: application/octet-stream Size: 2211 bytes Desc: not available URL: -------------- next part -------------- An HTML attachment was scrubbed... URL: From bsmith at mcs.anl.gov Mon Jul 2 19:10:40 2012 From: bsmith at mcs.anl.gov (Barry Smith) Date: Mon, 2 Jul 2012 19:10:40 -0500 Subject: [petsc-users] [petsc-dev] Getting 2d array with updated ghost values from DM global vector In-Reply-To: <54E13E1A-BB29-47FD-B671-3BC3E5B6329D@lsu.edu> References: <4FF170BF.3060303@gmail.com> <4FF1A4FD.8040003@gmail.com> <54E13E1A-BB29-47FD-B671-3BC3E5B6329D@lsu.edu> Message-ID: <106872A9-6A6F-41F7-9291-1F1389A0597C@mcs.anl.gov> Blaise, I don't understand why the patch does anything: - *ierr = VecRestoreArray(*v,0);if (*ierr) return; + PetscScalar *fa; + *ierr = F90Array1dAccess(a,PETSC_SCALAR,(void**)&fa PETSC_F90_2PTR_PARAM(ptrd)); + *ierr = VecRestoreArray(*v,&fa);if (*ierr) return; *ierr = F90Array1dDestroy(&a,PETSC_SCALAR PETSC_F90_2PTR_PARAM(ptrd)); All that passing &fa into VecRestoreArray() does is cause fa to be zeroed. Why would that have any affect on anything? Thanks Barry On Jul 2, 2012, at 10:10 AM, Blaise Bourdin wrote: > Hi, > > There appears to be a bug in DMDAVecRestoreArrayF90. It is probably only triggered when the intel compilers. gfortran and intel seem to have very different internal implementations of fortran90 allocatable arrays. > > Developers, can you check if the attached patch makes sense? It will not fix the case of a 3d da with dof>1 since F90Array4dAccess is not implemented. 
Other than that, it seems to fix ex11f90 under linux and mac OS > > > > Blaise > > > > On Jul 2, 2012, at 5:41 PM, TAY wee-beng wrote: > >> On 2/7/2012 2:49 PM, Matthew Knepley wrote: >>> On Mon, Jul 2, 2012 at 3:58 AM, TAY wee-beng wrote: >>> Hi, >>> >>> I have used DMDACreate2d for my code and then use: >>> >>> call DMLocalToGlobalBegin(da,b_rhs_semi_local,INSERT_VALUES,b_rhs_semi_global,ierr) >>> >>> call DMLocalToGlobalEnd(da,b_rhs_semi_local,INSERT_VALUES,b_rhs_semi_global,ierr) >>> >>> to construct the global DM vector b_rhs_semi_global >>> >>> Now I want to get the values with ghost values in a 2d array locally which is declared as: >>> >>> real(8), allocatable :: array2d(:,:) >>> >>> I guess I should use DMDAGetGhostCorners to get the corressponding indices and allocate it. But what should I do next? How can I use something like VecGetArrayF90 to get to the pointer to access the local vector? >>> >>> I can't use DMDAVecGetArrayF90/DMDAVecRestoreArrayF90 since I'm using intel fortran and they can't work. I can't use gfortran at the moment since I've problems with HYPRE with gfortran in 3D. >>> >>> Are you certain of this? That used to be true, but the current version should work for any F90. >>> >>> Matt >> >> I just tested 3.3-p1 and it still doesn't work (example ex11f90 in dm). Is there a chance petsc-dev can work? >>> >>> Thanks >>> >>> -- >>> Yours sincerely, >>> >>> TAY wee-beng >>> >>> >>> >>> >>> -- >>> What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. >>> -- Norbert Wiener >> >> > > -- > Department of Mathematics and Center for Computation & Technology > Louisiana State University, Baton Rouge, LA 70803, USA > Tel. +1 (225) 578 1612, Fax +1 (225) 578 4276 http://www.math.lsu.edu/~bourdin > > > > > > > From bourdin at lsu.edu Tue Jul 3 02:52:52 2012 From: bourdin at lsu.edu (Blaise Bourdin) Date: Tue, 3 Jul 2012 11:52:52 +0400 Subject: [petsc-users] [petsc-dev] Getting 2d array with updated ghost values from DM global vector In-Reply-To: <106872A9-6A6F-41F7-9291-1F1389A0597C@mcs.anl.gov> References: <4FF170BF.3060303@gmail.com> <4FF1A4FD.8040003@gmail.com> <54E13E1A-BB29-47FD-B671-3BC3E5B6329D@lsu.edu> <106872A9-6A6F-41F7-9291-1F1389A0597C@mcs.anl.gov> Message-ID: On Jul 3, 2012, at 4:10 AM, Barry Smith wrote: > > Blaise, > > I don't understand why the patch does anything: > > - *ierr = VecRestoreArray(*v,0);if (*ierr) return; > + PetscScalar *fa; > + *ierr = F90Array1dAccess(a,PETSC_SCALAR,(void**)&fa PETSC_F90_2PTR_PARAM(ptrd)); > + *ierr = VecRestoreArray(*v,&fa);if (*ierr) return; > *ierr = F90Array1dDestroy(&a,PETSC_SCALAR PETSC_F90_2PTR_PARAM(ptrd)); > > All that passing &fa into VecRestoreArray() does is cause fa to be zeroed. Why would that have any affect on anything? Not sure either, I quite don't understand this code, but I noticed that the logic of VecRestoreArrayF90 was different from that of DMDAVecRestoreArrayF90 src/vec/vec/interface/f90-custom/zvectorf90.c:33 PetscScalar *fa; *__ierr = F90Array1dAccess(ptr,PETSC_SCALAR,(void**)&fa PETSC_F90_2PTR_PARAM(ptrd));if (*__ierr) return; *__ierr = F90Array1dDestroy(ptr,PETSC_SCALAR PETSC_F90_2PTR_PARAM(ptrd));if (*__ierr) return; *__ierr = VecRestoreArray(*x,&fa); Why aren't the calls to F90Array1dAccess and F90Array1dDestroy necessary in the context of DMDAVecGetArrayF90? 
Blaise > > Thanks > > Barry > > On Jul 2, 2012, at 10:10 AM, Blaise Bourdin wrote: > >> Hi, >> >> There appears to be a bug in DMDAVecRestoreArrayF90. It is probably only triggered when the intel compilers. gfortran and intel seem to have very different internal implementations of fortran90 allocatable arrays. >> >> Developers, can you check if the attached patch makes sense? It will not fix the case of a 3d da with dof>1 since F90Array4dAccess is not implemented. Other than that, it seems to fix ex11f90 under linux and mac OS >> >> >> >> Blaise >> >> >> >> On Jul 2, 2012, at 5:41 PM, TAY wee-beng wrote: >> >>> On 2/7/2012 2:49 PM, Matthew Knepley wrote: >>>> On Mon, Jul 2, 2012 at 3:58 AM, TAY wee-beng wrote: >>>> Hi, >>>> >>>> I have used DMDACreate2d for my code and then use: >>>> >>>> call DMLocalToGlobalBegin(da,b_rhs_semi_local,INSERT_VALUES,b_rhs_semi_global,ierr) >>>> >>>> call DMLocalToGlobalEnd(da,b_rhs_semi_local,INSERT_VALUES,b_rhs_semi_global,ierr) >>>> >>>> to construct the global DM vector b_rhs_semi_global >>>> >>>> Now I want to get the values with ghost values in a 2d array locally which is declared as: >>>> >>>> real(8), allocatable :: array2d(:,:) >>>> >>>> I guess I should use DMDAGetGhostCorners to get the corressponding indices and allocate it. But what should I do next? How can I use something like VecGetArrayF90 to get to the pointer to access the local vector? >>>> >>>> I can't use DMDAVecGetArrayF90/DMDAVecRestoreArrayF90 since I'm using intel fortran and they can't work. I can't use gfortran at the moment since I've problems with HYPRE with gfortran in 3D. >>>> >>>> Are you certain of this? That used to be true, but the current version should work for any F90. >>>> >>>> Matt >>> >>> I just tested 3.3-p1 and it still doesn't work (example ex11f90 in dm). Is there a chance petsc-dev can work? >>>> >>>> Thanks >>>> >>>> -- >>>> Yours sincerely, >>>> >>>> TAY wee-beng >>>> >>>> >>>> >>>> >>>> -- >>>> What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. >>>> -- Norbert Wiener >>> >>> >> >> -- >> Department of Mathematics and Center for Computation & Technology >> Louisiana State University, Baton Rouge, LA 70803, USA >> Tel. +1 (225) 578 1612, Fax +1 (225) 578 4276 http://www.math.lsu.edu/~bourdin >> >> >> >> >> >> >> > -- Department of Mathematics and Center for Computation & Technology Louisiana State University, Baton Rouge, LA 70803, USA Tel. 
+1 (225) 578 1612, Fax +1 (225) 578 4276 http://www.math.lsu.edu/~bourdin From bourdin at lsu.edu Tue Jul 3 03:08:55 2012 From: bourdin at lsu.edu (Blaise Bourdin) Date: Tue, 3 Jul 2012 12:08:55 +0400 Subject: [petsc-users] [petsc-dev] Getting 2d array with updated ghost values from DM global vector In-Reply-To: <106872A9-6A6F-41F7-9291-1F1389A0597C@mcs.anl.gov> References: <4FF170BF.3060303@gmail.com> <4FF1A4FD.8040003@gmail.com> <54E13E1A-BB29-47FD-B671-3BC3E5B6329D@lsu.edu> <106872A9-6A6F-41F7-9291-1F1389A0597C@mcs.anl.gov> Message-ID: <0DB46F4E-EB0D-4300-B188-EBCF454EC99C@lsu.edu> On Jul 3, 2012, at 4:10 AM, Barry Smith wrote: > > Blaise, > > I don't understand why the patch does anything: > > - *ierr = VecRestoreArray(*v,0);if (*ierr) return; > + PetscScalar *fa; > + *ierr = F90Array1dAccess(a,PETSC_SCALAR,(void**)&fa PETSC_F90_2PTR_PARAM(ptrd)); > + *ierr = VecRestoreArray(*v,&fa);if (*ierr) return; > *ierr = F90Array1dDestroy(&a,PETSC_SCALAR PETSC_F90_2PTR_PARAM(ptrd)); > > All that passing &fa into VecRestoreArray() does is cause fa to be zeroed. Why would that have any affect on anything? Not sure either, I quite don't understand this code, but I noticed that the logic of VecRestoreArrayF90 was different from that of DMDAVecRestoreArrayF90 src/vec/vec/interface/f90-custom/zvectorf90.c:33 PetscScalar *fa; *__ierr = F90Array1dAccess(ptr,PETSC_SCALAR,(void**)&fa PETSC_F90_2PTR_PARAM(ptrd));if (*__ierr) return; *__ierr = F90Array1dDestroy(ptr,PETSC_SCALAR PETSC_F90_2PTR_PARAM(ptrd));if (*__ierr) return; *__ierr = VecRestoreArray(*x,&fa); Why aren't the calls to F90Array1dAccess and F90Array1dDestroy necessary in the context of DMDAVecGetArrayF90? Blaise > > Thanks > > Barry > > On Jul 2, 2012, at 10:10 AM, Blaise Bourdin wrote: > >> Hi, >> >> There appears to be a bug in DMDAVecRestoreArrayF90. It is probably only triggered when the intel compilers. gfortran and intel seem to have very different internal implementations of fortran90 allocatable arrays. >> >> Developers, can you check if the attached patch makes sense? It will not fix the case of a 3d da with dof>1 since F90Array4dAccess is not implemented. Other than that, it seems to fix ex11f90 under linux and mac OS >> >> >> >> Blaise >> >> >> >> On Jul 2, 2012, at 5:41 PM, TAY wee-beng wrote: >> >>> On 2/7/2012 2:49 PM, Matthew Knepley wrote: >>>> On Mon, Jul 2, 2012 at 3:58 AM, TAY wee-beng wrote: >>>> Hi, >>>> >>>> I have used DMDACreate2d for my code and then use: >>>> >>>> call DMLocalToGlobalBegin(da,b_rhs_semi_local,INSERT_VALUES,b_rhs_semi_global,ierr) >>>> >>>> call DMLocalToGlobalEnd(da,b_rhs_semi_local,INSERT_VALUES,b_rhs_semi_global,ierr) >>>> >>>> to construct the global DM vector b_rhs_semi_global >>>> >>>> Now I want to get the values with ghost values in a 2d array locally which is declared as: >>>> >>>> real(8), allocatable :: array2d(:,:) >>>> >>>> I guess I should use DMDAGetGhostCorners to get the corressponding indices and allocate it. But what should I do next? How can I use something like VecGetArrayF90 to get to the pointer to access the local vector? >>>> >>>> I can't use DMDAVecGetArrayF90/DMDAVecRestoreArrayF90 since I'm using intel fortran and they can't work. I can't use gfortran at the moment since I've problems with HYPRE with gfortran in 3D. >>>> >>>> Are you certain of this? That used to be true, but the current version should work for any F90. >>>> >>>> Matt >>> >>> I just tested 3.3-p1 and it still doesn't work (example ex11f90 in dm). Is there a chance petsc-dev can work? 
>>>> >>>> Thanks >>>> >>>> -- >>>> Yours sincerely, >>>> >>>> TAY wee-beng >>>> >>>> >>>> >>>> >>>> -- >>>> What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. >>>> -- Norbert Wiener >>> >>> >> >> -- >> Department of Mathematics and Center for Computation & Technology >> Louisiana State University, Baton Rouge, LA 70803, USA >> Tel. +1 (225) 578 1612, Fax +1 (225) 578 4276 http://www.math.lsu.edu/~bourdin >> >> >> >> >> >> >> > -- Department of Mathematics and Center for Computation & Technology Louisiana State University, Baton Rouge, LA 70803, USA Tel. +1 (225) 578 1612, Fax +1 (225) 578 4276 http://www.math.lsu.edu/~bourdin From gianmail at gmail.com Tue Jul 3 06:39:38 2012 From: gianmail at gmail.com (Gianluca Meneghello) Date: Tue, 3 Jul 2012 11:39:38 +0000 Subject: [petsc-users] -mat_superlu_lwork Message-ID: Dear all, I am trying to use superlu as solver for a large, sparse matrix, and I would like to use -mat_superlu_lwork to speed up the computation. The problem I encounter is that my lwork size I would need is greated than the maximum value for an object of type long int and is not correctly read by the code. Is there a workaround to that? Does using another solver (mumps maybe?) solve this problem and, if so, which option should I use? Thanks in advance Gianluca -- "[Je pense que] l'homme est un monde qui vaut des fois les mondes et que les plus ardentes ambitions sont celles qui ont eu l'orgueil de l'Anonymat" -- Non omnibus, sed mihi et tibi Amedeo Modigliani From C.Klaij at marin.nl Tue Jul 3 07:18:57 2012 From: C.Klaij at marin.nl (Klaij, Christiaan) Date: Tue, 3 Jul 2012 12:18:57 +0000 Subject: [petsc-users] understanding MatNullSpaceTest Message-ID: I'm trying to understand the use of null spaces. Whatever I do, it always seem to pass the null space test. Could you please tell me what's wrong with this example (c++, petsc-3.3-p1): $ cat nullsp.cc // test null space check #include int main(int argc, char **argv) { PetscErrorCode ierr; PetscInt row,start,end; PetscScalar val[1]; PetscReal norm; Mat A; Vec x,y; MatNullSpace nullsp; PetscBool isNull; ierr = PetscInitialize(&argc, &argv, PETSC_NULL, PETSC_NULL); CHKERRQ(ierr); // diagonal matrix ierr = MatCreate(PETSC_COMM_WORLD,&A); CHKERRQ(ierr); ierr = MatSetSizes(A,PETSC_DECIDE,PETSC_DECIDE,24,24); CHKERRQ(ierr); ierr = MatSetType(A,MATMPIAIJ); CHKERRQ(ierr); ierr = MatMPIAIJSetPreallocation(A,1,PETSC_NULL,1,PETSC_NULL); CHKERRQ(ierr); ierr = MatGetOwnershipRange(A,&start,&end); CHKERRQ(ierr); for (row=start; row 2N flops and VecAXPY() for complex vectors of length N --> 8N flops Summary of Stages: ----- Time ------ ----- Flops ----- --- Messages --- -- Message Lengths -- -- Reductions -- Avg %Total Avg %Total counts %Total Avg %Total counts %Total 0: Main Stage: 4.7846e-03 99.8% 1.4400e+02 100.0% 0.000e+00 0.0% 0.000e+00 0.0% 4.000e+01 97.6% ------------------------------------------------------------------------------------------------------------------------ See the 'Profiling' chapter of the users' manual for details on interpreting output. Phase summary info: Count: number of times phase was executed Time and Flops: Max - maximum over all processors Ratio - ratio of maximum to minimum over all processors Mess: number of messages sent Avg. len: average message length Reduct: number of global reductions Global: entire computation Stage: stages of a computation. 
Set stages with PetscLogStagePush() and PetscLogStagePop(). %T - percent time in this phase %f - percent flops in this phase %M - percent messages in this phase %L - percent message lengths in this phase %R - percent reductions in this phase Total Mflop/s: 10e-6 * (sum of flops over all processors)/(max time over all processors) ------------------------------------------------------------------------------------------------------------------------ ########################################################## # # # WARNING!!! # # # # This code was compiled with a debugging option, # # To get timing results run ./configure # # using --with-debugging=no, the performance will # # be generally two or three times faster. # # # ########################################################## Event Count Time (sec) Flops --- Global --- --- Stage --- Total Max Ratio Max Ratio Max Ratio Mess Avg len Reduct %T %f %M %L %R %T %f %M %L %R Mflop/s ------------------------------------------------------------------------------------------------------------------------ --- Event Stage 0: Main Stage MatMult 1 1.0 4.0054e-05 1.1 1.20e+01 1.0 0.0e+00 0.0e+00 0.0e+00 1 17 0 0 0 1 17 0 0 0 1 MatAssemblyBegin 1 1.0 3.7909e-05 1.0 0.00e+00 0.0 0.0e+00 0.0e+00 2.0e+00 1 0 0 0 5 1 0 0 0 5 0 MatAssemblyEnd 1 1.0 4.3201e-04 1.0 0.00e+00 0.0 0.0e+00 0.0e+00 1.9e+01 9 0 0 0 46 9 0 0 0 48 0 VecNorm 2 1.0 2.9700e-03 1.0 4.80e+01 1.0 0.0e+00 0.0e+00 2.0e+00 62 67 0 0 5 62 67 0 0 5 0 VecSet 1 1.0 5.0068e-06 1.0 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00 0 0 0 0 0 0 0 0 0 0 0 VecScatterBegin 2 1.0 7.8678e-06 1.1 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00 0 0 0 0 0 0 0 0 0 0 0 VecScatterEnd 2 1.0 4.0531e-06 1.0 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00 0 0 0 0 0 0 0 0 0 0 0 VecSetRandom 1 1.0 9.0599e-06 1.1 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00 0 0 0 0 0 0 0 0 0 0 0 ------------------------------------------------------------------------------------------------------------------------ Memory usage is given in bytes: Object Type Creations Destructions Memory Descendants' Mem. Reports information only for process 0. 
--- Event Stage 0: Main Stage Matrix 3 0 0 0 Matrix Null Space 1 0 0 0 Vector 5 1 1504 0 Vector Scatter 1 0 0 0 Index Set 2 2 1496 0 PetscRandom 1 1 616 0 Viewer 1 0 0 0 ======================================================================================================================== Average time to get PetscTime(): 9.53674e-08 Average time for MPI_Barrier(): 4.29153e-07 Average time for zero size MPI_Send(): 8.58307e-06 #PETSc Option Table entries: -log_summary #End of PETSc Option Table entries Compiled without FORTRAN kernels Compiled with full precision matrices (default) sizeof(short) 2 sizeof(int) 4 sizeof(long) 8 sizeof(void*) 8 sizeof(PetscScalar) 8 sizeof(PetscInt) 4 Configure run at: Wed Jun 20 12:08:20 2012 Configure options: --with-mpi-dir=/opt/refresco/libraries_cklaij/openmpi-1.4.5 --with-clanguage=c++ --with-x=1 --with-debugging=1 --with-hypre-include=/opt/refresco/libraries_cklaij/hypre-2.7.0b/include --with-hypre-lib=/opt/refresco/libraries_cklaij/hypre-2.7.0b/lib/libHYPRE.a --with-ml-include=/opt/refresco/libraries_cklaij/ml-6.2/include --with-ml-lib=/opt/refresco/libraries_cklaij/ml-6.2/lib/libml.a --with-blas-lapack-dir=/opt/intel/mkl ----------------------------------------- Libraries compiled on Wed Jun 20 12:08:20 2012 on lin0133 Machine characteristics: Linux-2.6.32-41-generic-x86_64-with-Ubuntu-10.04-lucid Using PETSc directory: /home/CKlaij/ReFRESCO/Dev/trunk/Libs/build/petsc-3.3-p1 Using PETSc arch: linux_64bit_debug ----------------------------------------- Using C compiler: /opt/refresco/libraries_cklaij/openmpi-1.4.5/bin/mpicxx -wd1572 -g ${COPTFLAGS} ${CFLAGS} Using Fortran compiler: /opt/refresco/libraries_cklaij/openmpi-1.4.5/bin/mpif90 -g ${FOPTFLAGS} ${FFLAGS} ----------------------------------------- Using include paths: -I/home/CKlaij/ReFRESCO/Dev/trunk/Libs/build/petsc-3.3-p1/linux_64bit_debug/include -I/home/CKlaij/ReFRESCO/Dev/trunk/Libs/build/petsc-3.3-p1/include -I/home/CKlaij/ReFRESCO/Dev/trunk/Libs/build/petsc-3.3-p1/include -I/home/CKlaij/ReFRESCO/Dev/trunk/Libs/build/petsc-3.3-p1/linux_64bit_debug/include -I/opt/refresco/libraries_cklaij/hypre-2.7.0b/include -I/opt/refresco/libraries_cklaij/ml-6.2/include -I/opt/refresco/libraries_cklaij/openmpi-1.4.5/include ----------------------------------------- Using C linker: /opt/refresco/libraries_cklaij/openmpi-1.4.5/bin/mpicxx Using Fortran linker: /opt/refresco/libraries_cklaij/openmpi-1.4.5/bin/mpif90 Using libraries: -Wl,-rpath,/home/CKlaij/ReFRESCO/Dev/trunk/Libs/build/petsc-3.3-p1/linux_64bit_debug/lib -L/home/CKlaij/ReFRESCO/Dev/trunk/Libs/build/petsc-3.3-p1/linux_64bit_debug/lib -lpetsc -lX11 -lpthread -Wl,-rpath,/opt/refresco/libraries_cklaij/hypre-2.7.0b/lib -L/opt/refresco/libraries_cklaij/hypre-2.7.0b/lib -lHYPRE -Wl,-rpath,/opt/refresco/libraries_cklaij/ml-6.2/lib -L/opt/refresco/libraries_cklaij/ml-6.2/lib -lml -Wl,-rpath,/opt/intel/mkl -L/opt/intel/mkl -lmkl_intel_lp64 -lmkl_intel_thread -lmkl_core -liomp5 -lpthread -L/opt/refresco/libraries_cklaij/openmpi-1.4.5/lib -L/opt/intel/composer_xe_2011_sp1.9.293/compiler/lib/intel64 -L/opt/intel/composer_xe_2011_sp1.9.293/ipp/lib/intel64 -L/opt/intel/composer_xe_2011_sp1.9.293/mkl/lib/intel64 -L/opt/intel/composer_xe_2011_sp1.9.293/tbb/lib/intel64/cc4.1.0_libc2.4_kernel2.6.16.21 -L/usr/lib/gcc/x86_64-linux-gnu/4.4.3 -L/usr/lib/x86_64-linux-gnu -lmpi_f90 -lmpi_f77 -lifport -lifcore -lm -lm -lmpi_cxx -ldl -lmpi -lopen-rte -lopen-pal -lnsl -lutil -limf -lsvml -lipgo -ldecimal -lcilkrts -lstdc++ -lgcc_s -lirc -lpthread -lirc_s -ldl 
----------------------------------------- dr. ir. Christiaan Klaij CFD Researcher Research & Development E mailto:C.Klaij at marin.nl T +31 317 49 33 44 MARIN 2, Haagsteeg, P.O. Box 28, 6700 AA Wageningen, The Netherlands T +31 317 49 39 11, F +31 317 49 32 45, I www.marin.nl From hzhang at mcs.anl.gov Tue Jul 3 10:53:19 2012 From: hzhang at mcs.anl.gov (Hong Zhang) Date: Tue, 3 Jul 2012 10:53:19 -0500 Subject: [petsc-users] -mat_superlu_lwork In-Reply-To: References: Message-ID: Gianluca: Do you mean 64-bit support for superlu? I do not understand your request, thus forward your request to superlu developer. Hong Dear all, > > I am trying to use superlu as solver for a large, sparse matrix, and I > would like to use -mat_superlu_lwork to speed up the computation. The > problem I encounter is that my lwork size I would need is greated than > the maximum value for an object of type long int and is not correctly > read by the code. > > Is there a workaround to that? Does using another solver (mumps > maybe?) solve this problem and, if so, which option should I use? > > Thanks in advance > > Gianluca > > -- > "[Je pense que] l'homme est un monde qui vaut des fois les mondes et > que les plus ardentes ambitions sont celles qui ont eu l'orgueil de > l'Anonymat" -- Non omnibus, sed mihi et tibi > Amedeo Modigliani > -------------- next part -------------- An HTML attachment was scrubbed... URL: From gianmail at gmail.com Tue Jul 3 11:20:45 2012 From: gianmail at gmail.com (Gianluca Meneghello) Date: Tue, 3 Jul 2012 16:20:45 +0000 Subject: [petsc-users] -mat_superlu_lwork In-Reply-To: References: Message-ID: Hong, thanks for your answer. I do not know if my problem request 64 bit support. My understanding is that I can preallocate the working array for superlu with the PETSc database options -mat_superlu_lwork. Following PETSc manual, this has to be specified in bytes. >From my point of view, the problem is that I cannot pass to PETSc an integer greater than 2.147.483.647 (that is, LONG_MAX), thus I am limited to allocating roughly 2 GB of memory. If my factored matrix is bigger, I have to rely on the mallocs/copy. Let me know if it is not clear yet. Gianlu On 3 July 2012 15:53, Hong Zhang wrote: > Gianluca: > > Do you mean 64-bit support for superlu? > I do not understand your request, thus forward your request to superlu > developer. > > Hong > >> Dear all, >> >> I am trying to use superlu as solver for a large, sparse matrix, and I >> would like to use -mat_superlu_lwork to speed up the computation. The >> problem I encounter is that my lwork size I would need is greated than >> the maximum value for an object of type long int and is not correctly >> read by the code. >> >> Is there a workaround to that? Does using another solver (mumps >> maybe?) solve this problem and, if so, which option should I use? >> >> Thanks in advance >> >> Gianluca >> >> -- >> "[Je pense que] l'homme est un monde qui vaut des fois les mondes et >> que les plus ardentes ambitions sont celles qui ont eu l'orgueil de >> l'Anonymat" -- Non omnibus, sed mihi et tibi >> Amedeo Modigliani > > -- "[Je pense que] l'homme est un monde qui vaut des fois les mondes et que les plus ardentes ambitions sont celles qui ont eu l'orgueil de l'Anonymat" -- Non omnibus, sed mihi et tibi Amedeo Modigliani From balay at mcs.anl.gov Tue Jul 3 11:32:21 2012 From: balay at mcs.anl.gov (Satish Balay) Date: Tue, 3 Jul 2012 11:32:21 -0500 (CDT) Subject: [petsc-users] -mat_superlu_lwork In-Reply-To: References: Message-ID: Actually its INT_MAX. 
>From superlu's dgssvx.c >>>>>>> * lwork (input) int * Specifies the size of work array in bytes. void dgssvx(superlu_options_t *options, SuperMatrix *A, int *perm_c, int *perm_r, int *etree, char *equed, double *R, double *C, SuperMatrix *L, SuperMatrix *U, void *work, int lwork, SuperMatrix *B, SuperMatrix *X, double *recip_pivot_growth, double *rcond, double *ferr, double *berr, mem_usage_t *mem_usage, SuperLUStat_t *stat, int *info ) <<<<<< So the code is expecting an int - and PETSc passes in this option as int. Sure - 'int' here is limiting the malloc to 2GB - and causing you problems. Satish On Tue, 3 Jul 2012, Gianluca Meneghello wrote: > Hong, > > thanks for your answer. I do not know if my problem request 64 bit support. > > My understanding is that I can preallocate the working array for > superlu with the PETSc database options -mat_superlu_lwork. Following > PETSc manual, this has to be specified in bytes. > > From my point of view, the problem is that I cannot pass to PETSc an > integer greater than 2.147.483.647 (that is, LONG_MAX), thus I am > limited to allocating roughly 2 GB of memory. If my factored matrix is > bigger, I have to rely on the mallocs/copy. > > Let me know if it is not clear yet. > > Gianlu > > > > > > On 3 July 2012 15:53, Hong Zhang wrote: > > Gianluca: > > > > Do you mean 64-bit support for superlu? > > I do not understand your request, thus forward your request to superlu > > developer. > > > > Hong > > > >> Dear all, > >> > >> I am trying to use superlu as solver for a large, sparse matrix, and I > >> would like to use -mat_superlu_lwork to speed up the computation. The > >> problem I encounter is that my lwork size I would need is greated than > >> the maximum value for an object of type long int and is not correctly > >> read by the code. > >> > >> Is there a workaround to that? Does using another solver (mumps > >> maybe?) solve this problem and, if so, which option should I use? > >> > >> Thanks in advance > >> > >> Gianluca > >> > >> -- > >> "[Je pense que] l'homme est un monde qui vaut des fois les mondes et > >> que les plus ardentes ambitions sont celles qui ont eu l'orgueil de > >> l'Anonymat" -- Non omnibus, sed mihi et tibi > >> Amedeo Modigliani > > > > > > > > From hzhang at mcs.anl.gov Tue Jul 3 11:45:14 2012 From: hzhang at mcs.anl.gov (Hong Zhang) Date: Tue, 3 Jul 2012 11:45:14 -0500 Subject: [petsc-users] -mat_superlu_lwork In-Reply-To: References: Message-ID: Gianluca: Checking Superlu source code SuperLU_4.3/SRC/dgssvx.c, this input format is required by superlu. Hong Hong, > > thanks for your answer. I do not know if my problem request 64 bit support. > > My understanding is that I can preallocate the working array for > superlu with the PETSc database options -mat_superlu_lwork. Following > PETSc manual, this has to be specified in bytes. > > From my point of view, the problem is that I cannot pass to PETSc an > integer greater than 2.147.483.647 (that is, LONG_MAX), thus I am > limited to allocating roughly 2 GB of memory. If my factored matrix is > bigger, I have to rely on the mallocs/copy. > > Let me know if it is not clear yet. > > Gianlu > > > > > > On 3 July 2012 15:53, Hong Zhang wrote: > > Gianluca: > > > > Do you mean 64-bit support for superlu? > > I do not understand your request, thus forward your request to superlu > > developer. > > > > Hong > > > >> Dear all, > >> > >> I am trying to use superlu as solver for a large, sparse matrix, and I > >> would like to use -mat_superlu_lwork to speed up the computation. 
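For reference, the 2 GB ceiling Satish points out is just the largest value a C int can hold when lwork is counted in bytes:

# lwork is a C int counting bytes, so an explicit request tops out at INT_MAX bytes
INT_MAX = 2**31 - 1
print(INT_MAX)              # 2147483647
print(INT_MAX / 2.0**30)    # roughly 2.0 GiB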
The > >> problem I encounter is that my lwork size I would need is greated than > >> the maximum value for an object of type long int and is not correctly > >> read by the code. > >> > >> Is there a workaround to that? Does using another solver (mumps > >> maybe?) solve this problem and, if so, which option should I use? > >> > >> Thanks in advance > >> > >> Gianluca > >> > >> -- > >> "[Je pense que] l'homme est un monde qui vaut des fois les mondes et > >> que les plus ardentes ambitions sont celles qui ont eu l'orgueil de > >> l'Anonymat" -- Non omnibus, sed mihi et tibi > >> Amedeo Modigliani > > > > > > > > -- > "[Je pense que] l'homme est un monde qui vaut des fois les mondes et > que les plus ardentes ambitions sont celles qui ont eu l'orgueil de > l'Anonymat" -- Non omnibus, sed mihi et tibi > Amedeo Modigliani > -------------- next part -------------- An HTML attachment was scrubbed... URL: From xsli at lbl.gov Tue Jul 3 12:18:38 2012 From: xsli at lbl.gov (Xiaoye S. Li) Date: Tue, 3 Jul 2012 10:18:38 -0700 Subject: [petsc-users] -mat_superlu_lwork In-Reply-To: References: Message-ID: Yes, that limits your preallocated work array to be 2GB. Alternatively, you can set lwork=0, rely on superlu to do memory allocation for you. Then you can use more than 2GB memory. Sherry Li On Tue, Jul 3, 2012 at 9:32 AM, Satish Balay wrote: > Actually its INT_MAX. > > From superlu's dgssvx.c > >>>>>>>> > ?* lwork ? (input) int > ?* ? ? ? ? Specifies the size of work array in bytes. > > void > dgssvx(superlu_options_t *options, SuperMatrix *A, int *perm_c, int *perm_r, > ? ? ? ?int *etree, char *equed, double *R, double *C, > ? ? ? ?SuperMatrix *L, SuperMatrix *U, void *work, int lwork, > ? ? ? ?SuperMatrix *B, SuperMatrix *X, double *recip_pivot_growth, > ? ? ? ?double *rcond, double *ferr, double *berr, > ? ? ? ?mem_usage_t *mem_usage, SuperLUStat_t *stat, int *info ) > <<<<<< > > So the code is expecting an int - and PETSc passes in this option as int. > > Sure - 'int' here is limiting the malloc to 2GB - and causing you problems. > > Satish > > On Tue, 3 Jul 2012, Gianluca Meneghello wrote: > >> Hong, >> >> thanks for your answer. I do not know if my problem request 64 bit support. >> >> My understanding is that I can preallocate the working array for >> superlu with the PETSc database options -mat_superlu_lwork. Following >> PETSc manual, this has to be specified in bytes. >> >> From my point of view, the problem is that I cannot pass to PETSc an >> integer greater than 2.147.483.647 (that is, LONG_MAX), thus I am >> limited to allocating roughly 2 GB of memory. If my factored matrix is >> bigger, I have to rely on the mallocs/copy. >> >> Let me know if it is not clear yet. >> >> Gianlu >> >> >> >> >> >> On 3 July 2012 15:53, Hong Zhang wrote: >> > Gianluca: >> > >> > Do you mean 64-bit support for superlu? >> > I do not understand your request, thus forward your request to superlu >> > developer. >> > >> > Hong >> > >> >> Dear all, >> >> >> >> I am trying to use superlu as solver for a large, sparse matrix, and I >> >> would like to use -mat_superlu_lwork to speed up the computation. The >> >> problem I encounter is that my lwork size I would need is greated than >> >> the maximum value for an object of type long int and is not correctly >> >> read by the code. >> >> >> >> Is there a workaround to that? Does using another solver (mumps >> >> maybe?) solve this problem and, if so, which option should I use? 
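Sherry's suggestion (leave lwork at its default of 0 and let SuperLU manage its own work space) needs no change on the PETSc side beyond selecting SuperLU as the factorization package. A hedged petsc4py sketch is below; it assumes PETSc was configured with SuperLU, builds a small stand-in tridiagonal system in place of the large sparse matrix from the question, and uses the petsc-3.3-era option name -pc_factor_mat_solver_package. Run it on one process, since this is the sequential SuperLU.

from petsc4py import PETSc

n = 100
A = PETSc.Mat().createAIJ([n, n], nnz=3)   # small stand-in for the real matrix
rstart, rend = A.getOwnershipRange()
for i in range(rstart, rend):
    A[i, i] = 2.0
    if i > 0:
        A[i, i - 1] = -1.0
    if i < n - 1:
        A[i, i + 1] = -1.0
A.assemble()

x = PETSc.Vec().createMPI(n)
b = x.duplicate()
b.set(1.0)

opts = PETSc.Options()
opts['pc_type'] = 'lu'
opts['pc_factor_mat_solver_package'] = 'superlu'
# leave -mat_superlu_lwork unset: the default lwork=0 lets SuperLU allocate
# (and grow) its own work space, as recommended above

ksp = PETSc.KSP().create()
ksp.setOperators(A)
ksp.setFromOptions()
ksp.solve(b, x)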
>> >> >> >> Thanks in advance >> >> >> >> Gianluca >> >> >> >> -- >> >> "[Je pense que] l'homme est un monde qui vaut des fois les mondes et >> >> que les plus ardentes ambitions sont celles qui ont eu l'orgueil de >> >> l'Anonymat" -- Non omnibus, sed mihi et tibi >> >> Amedeo Modigliani >> > >> > >> >> >> >> > From gianmail at gmail.com Tue Jul 3 12:20:00 2012 From: gianmail at gmail.com (Gianluca Meneghello) Date: Tue, 3 Jul 2012 17:20:00 +0000 Subject: [petsc-users] -mat_superlu_lwork In-Reply-To: References: Message-ID: Dear all, thanks for your assistance. I will then continue with superlu's memory allocation! Thanks again Gianluca On 3 July 2012 17:18, Xiaoye S. Li wrote: > Yes, that limits your preallocated work array to be 2GB. > Alternatively, you can set lwork=0, rely on superlu to do memory > allocation for you. Then you can use more than 2GB memory. > > Sherry Li > > > On Tue, Jul 3, 2012 at 9:32 AM, Satish Balay wrote: >> Actually its INT_MAX. >> >> From superlu's dgssvx.c >> >>>>>>>>> >> * lwork (input) int >> * Specifies the size of work array in bytes. >> >> void >> dgssvx(superlu_options_t *options, SuperMatrix *A, int *perm_c, int *perm_r, >> int *etree, char *equed, double *R, double *C, >> SuperMatrix *L, SuperMatrix *U, void *work, int lwork, >> SuperMatrix *B, SuperMatrix *X, double *recip_pivot_growth, >> double *rcond, double *ferr, double *berr, >> mem_usage_t *mem_usage, SuperLUStat_t *stat, int *info ) >> <<<<<< >> >> So the code is expecting an int - and PETSc passes in this option as int. >> >> Sure - 'int' here is limiting the malloc to 2GB - and causing you problems. >> >> Satish >> >> On Tue, 3 Jul 2012, Gianluca Meneghello wrote: >> >>> Hong, >>> >>> thanks for your answer. I do not know if my problem request 64 bit support. >>> >>> My understanding is that I can preallocate the working array for >>> superlu with the PETSc database options -mat_superlu_lwork. Following >>> PETSc manual, this has to be specified in bytes. >>> >>> From my point of view, the problem is that I cannot pass to PETSc an >>> integer greater than 2.147.483.647 (that is, LONG_MAX), thus I am >>> limited to allocating roughly 2 GB of memory. If my factored matrix is >>> bigger, I have to rely on the mallocs/copy. >>> >>> Let me know if it is not clear yet. >>> >>> Gianlu >>> >>> >>> >>> >>> >>> On 3 July 2012 15:53, Hong Zhang wrote: >>> > Gianluca: >>> > >>> > Do you mean 64-bit support for superlu? >>> > I do not understand your request, thus forward your request to superlu >>> > developer. >>> > >>> > Hong >>> > >>> >> Dear all, >>> >> >>> >> I am trying to use superlu as solver for a large, sparse matrix, and I >>> >> would like to use -mat_superlu_lwork to speed up the computation. The >>> >> problem I encounter is that my lwork size I would need is greated than >>> >> the maximum value for an object of type long int and is not correctly >>> >> read by the code. >>> >> >>> >> Is there a workaround to that? Does using another solver (mumps >>> >> maybe?) solve this problem and, if so, which option should I use? 
>>> >> >>> >> Thanks in advance >>> >> >>> >> Gianluca >>> >> >>> >> -- >>> >> "[Je pense que] l'homme est un monde qui vaut des fois les mondes et >>> >> que les plus ardentes ambitions sont celles qui ont eu l'orgueil de >>> >> l'Anonymat" -- Non omnibus, sed mihi et tibi >>> >> Amedeo Modigliani >>> > >>> > >>> >>> >>> >>> >> -- "[Je pense que] l'homme est un monde qui vaut des fois les mondes et que les plus ardentes ambitions sont celles qui ont eu l'orgueil de l'Anonymat" -- Non omnibus, sed mihi et tibi Amedeo Modigliani From bsmith at mcs.anl.gov Tue Jul 3 12:47:06 2012 From: bsmith at mcs.anl.gov (Barry Smith) Date: Tue, 3 Jul 2012 12:47:06 -0500 Subject: [petsc-users] [petsc-dev] Getting 2d array with updated ghost values from DM global vector In-Reply-To: <0DB46F4E-EB0D-4300-B188-EBCF454EC99C@lsu.edu> References: <4FF170BF.3060303@gmail.com> <4FF1A4FD.8040003@gmail.com> <54E13E1A-BB29-47FD-B671-3BC3E5B6329D@lsu.edu> <106872A9-6A6F-41F7-9291-1F1389A0597C@mcs.anl.gov> <0DB46F4E-EB0D-4300-B188-EBCF454EC99C@lsu.edu> Message-ID: <3CD4EA3A-30CD-45A6-AEC2-5DC1C928D443@mcs.anl.gov> On Jul 3, 2012, at 3:08 AM, Blaise Bourdin wrote: > > On Jul 3, 2012, at 4:10 AM, Barry Smith wrote: > >> >> Blaise, >> >> I don't understand why the patch does anything: >> >> - *ierr = VecRestoreArray(*v,0);if (*ierr) return; >> + PetscScalar *fa; >> + *ierr = F90Array1dAccess(a,PETSC_SCALAR,(void**)&fa PETSC_F90_2PTR_PARAM(ptrd)); >> + *ierr = VecRestoreArray(*v,&fa);if (*ierr) return; >> *ierr = F90Array1dDestroy(&a,PETSC_SCALAR PETSC_F90_2PTR_PARAM(ptrd)); >> >> All that passing &fa into VecRestoreArray() does is cause fa to be zeroed. Why would that have any affect on anything? > > > Not sure either, I quite don't understand this code, but I noticed that the logic of VecRestoreArrayF90 was different from that of DMDAVecRestoreArrayF90 > > src/vec/vec/interface/f90-custom/zvectorf90.c:33 > PetscScalar *fa; > *__ierr = F90Array1dAccess(ptr,PETSC_SCALAR,(void**)&fa PETSC_F90_2PTR_PARAM(ptrd));if (*__ierr) return; > *__ierr = F90Array1dDestroy(ptr,PETSC_SCALAR PETSC_F90_2PTR_PARAM(ptrd));if (*__ierr) return; It could be the above line is important; but the Accesser and restore array are not. I'll have Satish apply the patch. Thanks Barry > *__ierr = VecRestoreArray(*x,&fa); > > Why aren't the calls to F90Array1dAccess and F90Array1dDestroy necessary in the context of DMDAVecGetArrayF90? > > Blaise > > > >> >> Thanks >> >> Barry >> >> On Jul 2, 2012, at 10:10 AM, Blaise Bourdin wrote: >> >>> Hi, >>> >>> There appears to be a bug in DMDAVecRestoreArrayF90. It is probably only triggered when the intel compilers. gfortran and intel seem to have very different internal implementations of fortran90 allocatable arrays. >>> >>> Developers, can you check if the attached patch makes sense? It will not fix the case of a 3d da with dof>1 since F90Array4dAccess is not implemented. 
Other than that, it seems to fix ex11f90 under linux and mac OS >>> >>> >>> >>> Blaise >>> >>> >>> >>> On Jul 2, 2012, at 5:41 PM, TAY wee-beng wrote: >>> >>>> On 2/7/2012 2:49 PM, Matthew Knepley wrote: >>>>> On Mon, Jul 2, 2012 at 3:58 AM, TAY wee-beng wrote: >>>>> Hi, >>>>> >>>>> I have used DMDACreate2d for my code and then use: >>>>> >>>>> call DMLocalToGlobalBegin(da,b_rhs_semi_local,INSERT_VALUES,b_rhs_semi_global,ierr) >>>>> >>>>> call DMLocalToGlobalEnd(da,b_rhs_semi_local,INSERT_VALUES,b_rhs_semi_global,ierr) >>>>> >>>>> to construct the global DM vector b_rhs_semi_global >>>>> >>>>> Now I want to get the values with ghost values in a 2d array locally which is declared as: >>>>> >>>>> real(8), allocatable :: array2d(:,:) >>>>> >>>>> I guess I should use DMDAGetGhostCorners to get the corressponding indices and allocate it. But what should I do next? How can I use something like VecGetArrayF90 to get to the pointer to access the local vector? >>>>> >>>>> I can't use DMDAVecGetArrayF90/DMDAVecRestoreArrayF90 since I'm using intel fortran and they can't work. I can't use gfortran at the moment since I've problems with HYPRE with gfortran in 3D. >>>>> >>>>> Are you certain of this? That used to be true, but the current version should work for any F90. >>>>> >>>>> Matt >>>> >>>> I just tested 3.3-p1 and it still doesn't work (example ex11f90 in dm). Is there a chance petsc-dev can work? >>>>> >>>>> Thanks >>>>> >>>>> -- >>>>> Yours sincerely, >>>>> >>>>> TAY wee-beng >>>>> >>>>> >>>>> >>>>> >>>>> -- >>>>> What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. >>>>> -- Norbert Wiener >>>> >>>> >>> >>> -- >>> Department of Mathematics and Center for Computation & Technology >>> Louisiana State University, Baton Rouge, LA 70803, USA >>> Tel. +1 (225) 578 1612, Fax +1 (225) 578 4276 http://www.math.lsu.edu/~bourdin >>> >>> >>> >>> >>> >>> >>> >> > > -- > Department of Mathematics and Center for Computation & Technology > Louisiana State University, Baton Rouge, LA 70803, USA > Tel. +1 (225) 578 1612, Fax +1 (225) 578 4276 http://www.math.lsu.edu/~bourdin > > > > > > > From knepley at gmail.com Tue Jul 3 12:51:52 2012 From: knepley at gmail.com (Matthew Knepley) Date: Tue, 3 Jul 2012 11:51:52 -0600 Subject: [petsc-users] understanding MatNullSpaceTest In-Reply-To: References: Message-ID: On Tue, Jul 3, 2012 at 6:18 AM, Klaij, Christiaan wrote: > I'm trying to understand the use of null spaces. Whatever I do, it > always seem to pass the null space test. Could you please tell me > what's wrong with this example (c++, petsc-3.3-p1): > Yes, there was a bug in 3.3. I pushed a fix which should go out in the next patch, and its in petsc-dev. 
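Klaij's reproducer is quoted just below; for reference, the check that MatNullSpaceTest is supposed to perform can also be spelled out by hand, which makes the buggy "passed" result easy to spot. The petsc4py sketch here mirrors that C++ code; the NullSpace.create/test names are an assumption about the Python wrapper, while the manual |Ax| check uses only basic Mat/Vec calls.

from petsc4py import PETSc

# same setup as the reproducer: a 24x24 diagonal (identity) matrix
n = 24
A = PETSc.Mat().createAIJ([n, n], nnz=1)
rstart, rend = A.getOwnershipRange()
for i in range(rstart, rend):
    A[i, i] = 1.0
A.assemble()

x = PETSc.Vec().createMPI(n)
x.setRandom()
x.normalize()                 # null-space basis vectors are expected to be normalized

# what the test is meant to verify: A*x should vanish for a true null-space vector
y = x.duplicate()
A.mult(x, y)
print('|Ax| =', y.norm())     # clearly nonzero here, so the test should report a failure

# the wrapped test itself (assumed petsc4py API); with the fix it should print False
nsp = PETSc.NullSpace().create(constant=False, vectors=[x])
print('is null space:', nsp.test(A))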
Thanks, Matt > $ cat nullsp.cc > // test null space check > > #include > > int main(int argc, char **argv) { > > PetscErrorCode ierr; > PetscInt row,start,end; > PetscScalar val[1]; > PetscReal norm; > Mat A; > Vec x,y; > MatNullSpace nullsp; > PetscBool isNull; > > ierr = PetscInitialize(&argc, &argv, PETSC_NULL, PETSC_NULL); > CHKERRQ(ierr); > > // diagonal matrix > ierr = MatCreate(PETSC_COMM_WORLD,&A); CHKERRQ(ierr); > ierr = MatSetSizes(A,PETSC_DECIDE,PETSC_DECIDE,24,24); CHKERRQ(ierr); > ierr = MatSetType(A,MATMPIAIJ); CHKERRQ(ierr); > ierr = MatMPIAIJSetPreallocation(A,1,PETSC_NULL,1,PETSC_NULL); > CHKERRQ(ierr); > ierr = MatGetOwnershipRange(A,&start,&end); CHKERRQ(ierr); > for (row=start; row val[0] = 1.0; > ierr = MatSetValues(A,1,&row,1,&row,val,INSERT_VALUES); CHKERRQ(ierr); > } > ierr = MatAssemblyBegin(A,MAT_FINAL_ASSEMBLY); CHKERRQ(ierr); > ierr = MatAssemblyEnd(A,MAT_FINAL_ASSEMBLY); CHKERRQ(ierr); > > // random vector > ierr = VecCreate(PETSC_COMM_WORLD,&x); CHKERRQ(ierr); > ierr = VecSetSizes(x,PETSC_DECIDE,24); CHKERRQ(ierr); > ierr = VecSetType(x,VECMPI); CHKERRQ(ierr); > ierr = VecSetRandom(x,PETSC_NULL); CHKERRQ(ierr); > > // null space > ierr = MatNullSpaceCreate(PETSC_COMM_WORLD,PETSC_FALSE,1,&x,&nullsp); > CHKERRQ(ierr); > ierr = MatNullSpaceTest(nullsp,A,&isNull); CHKERRQ(ierr); > if (isNull==PETSC_TRUE) { > ierr = PetscPrintf(PETSC_COMM_WORLD,"null space check passed\n"); > } > else { > ierr = PetscPrintf(PETSC_COMM_WORLD,"null space check failed\n"); > } > > // check null space > ierr = VecDuplicate(x,&y); CHKERRQ(ierr); > ierr = MatMult(A,x,y); CHKERRQ(ierr); > ierr = VecNorm(y,NORM_2,&norm); CHKERRQ(ierr); > ierr = PetscPrintf(PETSC_COMM_WORLD,"|Ax| = %G\n",norm); CHKERRQ(ierr); > > ierr = PetscFinalize(); CHKERRQ(ierr); > > return 0; > } > > $ mpiexec -n 2 ./nullsp -log_summary > null space check passed > |Ax| = 2.51613 > > ************************************************************************************************************************ > *** WIDEN YOUR WINDOW TO 120 CHARACTERS. 
Use 'enscript -r > -fCourier9' to print this document *** > > ************************************************************************************************************************ > > ---------------------------------------------- PETSc Performance Summary: > ---------------------------------------------- > > ./nullsp on a linux_64b named lin0133 with 2 processors, by cklaij Tue Jul > 3 14:03:34 2012 > Using Petsc Release Version 3.3.0, Patch 1, Fri Jun 15 09:30:49 CDT 2012 > > Max Max/Min Avg Total > Time (sec): 4.794e-03 1.00000 4.794e-03 > Objects: 1.400e+01 1.00000 1.400e+01 > Flops: 7.200e+01 1.00000 7.200e+01 1.440e+02 > Flops/sec: 1.502e+04 1.00000 1.502e+04 3.004e+04 > Memory: 5.936e+04 1.00000 1.187e+05 > MPI Messages: 0.000e+00 0.00000 0.000e+00 0.000e+00 > MPI Message Lengths: 0.000e+00 0.00000 0.000e+00 0.000e+00 > MPI Reductions: 4.100e+01 1.00000 > > Flop counting convention: 1 flop = 1 real number operation of type > (multiply/divide/add/subtract) > e.g., VecAXPY() for real vectors of length N > --> 2N flops > and VecAXPY() for complex vectors of length N > --> 8N flops > > Summary of Stages: ----- Time ------ ----- Flops ----- --- Messages > --- -- Message Lengths -- -- Reductions -- > Avg %Total Avg %Total counts > %Total Avg %Total counts %Total > 0: Main Stage: 4.7846e-03 99.8% 1.4400e+02 100.0% 0.000e+00 > 0.0% 0.000e+00 0.0% 4.000e+01 97.6% > > > ------------------------------------------------------------------------------------------------------------------------ > See the 'Profiling' chapter of the users' manual for details on > interpreting output. > Phase summary info: > Count: number of times phase was executed > Time and Flops: Max - maximum over all processors > Ratio - ratio of maximum to minimum over all processors > Mess: number of messages sent > Avg. len: average message length > Reduct: number of global reductions > Global: entire computation > Stage: stages of a computation. Set stages with PetscLogStagePush() and > PetscLogStagePop(). > %T - percent time in this phase %f - percent flops in this > phase > %M - percent messages in this phase %L - percent message lengths > in this phase > %R - percent reductions in this phase > Total Mflop/s: 10e-6 * (sum of flops over all processors)/(max time > over all processors) > > ------------------------------------------------------------------------------------------------------------------------ > > > ########################################################## > # # > # WARNING!!! # > # # > # This code was compiled with a debugging option, # > # To get timing results run ./configure # > # using --with-debugging=no, the performance will # > # be generally two or three times faster. 
# > # # > ########################################################## > > > Event Count Time (sec) Flops > --- Global --- --- Stage --- Total > Max Ratio Max Ratio Max Ratio Mess Avg len > Reduct %T %f %M %L %R %T %f %M %L %R Mflop/s > > ------------------------------------------------------------------------------------------------------------------------ > > --- Event Stage 0: Main Stage > > MatMult 1 1.0 4.0054e-05 1.1 1.20e+01 1.0 0.0e+00 0.0e+00 > 0.0e+00 1 17 0 0 0 1 17 0 0 0 1 > MatAssemblyBegin 1 1.0 3.7909e-05 1.0 0.00e+00 0.0 0.0e+00 0.0e+00 > 2.0e+00 1 0 0 0 5 1 0 0 0 5 0 > MatAssemblyEnd 1 1.0 4.3201e-04 1.0 0.00e+00 0.0 0.0e+00 0.0e+00 > 1.9e+01 9 0 0 0 46 9 0 0 0 48 0 > VecNorm 2 1.0 2.9700e-03 1.0 4.80e+01 1.0 0.0e+00 0.0e+00 > 2.0e+00 62 67 0 0 5 62 67 0 0 5 0 > VecSet 1 1.0 5.0068e-06 1.0 0.00e+00 0.0 0.0e+00 0.0e+00 > 0.0e+00 0 0 0 0 0 0 0 0 0 0 0 > VecScatterBegin 2 1.0 7.8678e-06 1.1 0.00e+00 0.0 0.0e+00 0.0e+00 > 0.0e+00 0 0 0 0 0 0 0 0 0 0 0 > VecScatterEnd 2 1.0 4.0531e-06 1.0 0.00e+00 0.0 0.0e+00 0.0e+00 > 0.0e+00 0 0 0 0 0 0 0 0 0 0 0 > VecSetRandom 1 1.0 9.0599e-06 1.1 0.00e+00 0.0 0.0e+00 0.0e+00 > 0.0e+00 0 0 0 0 0 0 0 0 0 0 0 > > ------------------------------------------------------------------------------------------------------------------------ > > Memory usage is given in bytes: > > Object Type Creations Destructions Memory Descendants' Mem. > Reports information only for process 0. > > --- Event Stage 0: Main Stage > > Matrix 3 0 0 0 > Matrix Null Space 1 0 0 0 > Vector 5 1 1504 0 > Vector Scatter 1 0 0 0 > Index Set 2 2 1496 0 > PetscRandom 1 1 616 0 > Viewer 1 0 0 0 > > ======================================================================================================================== > Average time to get PetscTime(): 9.53674e-08 > Average time for MPI_Barrier(): 4.29153e-07 > Average time for zero size MPI_Send(): 8.58307e-06 > #PETSc Option Table entries: > -log_summary > #End of PETSc Option Table entries > Compiled without FORTRAN kernels > Compiled with full precision matrices (default) > sizeof(short) 2 sizeof(int) 4 sizeof(long) 8 sizeof(void*) 8 > sizeof(PetscScalar) 8 sizeof(PetscInt) 4 > Configure run at: Wed Jun 20 12:08:20 2012 > Configure options: > --with-mpi-dir=/opt/refresco/libraries_cklaij/openmpi-1.4.5 > --with-clanguage=c++ --with-x=1 --with-debugging=1 > --with-hypre-include=/opt/refresco/libraries_cklaij/hypre-2.7.0b/include > --with-hypre-lib=/opt/refresco/libraries_cklaij/hypre-2.7.0b/lib/libHYPRE.a > --with-ml-include=/opt/refresco/libraries_cklaij/ml-6.2/include > --with-ml-lib=/opt/refresco/libraries_cklaij/ml-6.2/lib/libml.a > --with-blas-lapack-dir=/opt/intel/mkl > ----------------------------------------- > Libraries compiled on Wed Jun 20 12:08:20 2012 on lin0133 > Machine characteristics: > Linux-2.6.32-41-generic-x86_64-with-Ubuntu-10.04-lucid > Using PETSc directory: > /home/CKlaij/ReFRESCO/Dev/trunk/Libs/build/petsc-3.3-p1 > Using PETSc arch: linux_64bit_debug > ----------------------------------------- > > Using C compiler: /opt/refresco/libraries_cklaij/openmpi-1.4.5/bin/mpicxx > -wd1572 -g ${COPTFLAGS} ${CFLAGS} > Using Fortran compiler: > /opt/refresco/libraries_cklaij/openmpi-1.4.5/bin/mpif90 -g ${FOPTFLAGS} > ${FFLAGS} > ----------------------------------------- > > Using include paths: > -I/home/CKlaij/ReFRESCO/Dev/trunk/Libs/build/petsc-3.3-p1/linux_64bit_debug/include > -I/home/CKlaij/ReFRESCO/Dev/trunk/Libs/build/petsc-3.3-p1/include > -I/home/CKlaij/ReFRESCO/Dev/trunk/Libs/build/petsc-3.3-p1/include > 
-I/home/CKlaij/ReFRESCO/Dev/trunk/Libs/build/petsc-3.3-p1/linux_64bit_debug/include > -I/opt/refresco/libraries_cklaij/hypre-2.7.0b/include > -I/opt/refresco/libraries_cklaij/ml-6.2/include > -I/opt/refresco/libraries_cklaij/openmpi-1.4.5/include > ----------------------------------------- > > Using C linker: /opt/refresco/libraries_cklaij/openmpi-1.4.5/bin/mpicxx > Using Fortran linker: > /opt/refresco/libraries_cklaij/openmpi-1.4.5/bin/mpif90 > Using libraries: > -Wl,-rpath,/home/CKlaij/ReFRESCO/Dev/trunk/Libs/build/petsc-3.3-p1/linux_64bit_debug/lib > -L/home/CKlaij/ReFRESCO/Dev/trunk/Libs/build/petsc-3.3-p1/linux_64bit_debug/lib > -lpetsc -lX11 -lpthread > -Wl,-rpath,/opt/refresco/libraries_cklaij/hypre-2.7.0b/lib > -L/opt/refresco/libraries_cklaij/hypre-2.7.0b/lib -lHYPRE > -Wl,-rpath,/opt/refresco/libraries_cklaij/ml-6.2/lib > -L/opt/refresco/libraries_cklaij/ml-6.2/lib -lml -Wl,-rpath,/opt/intel/mkl > -L/opt/intel/mkl -lmkl_intel_lp64 -lmkl_intel_thread -lmkl_core -liomp5 > -lpthread -L/opt/refresco/libraries_cklaij/openmpi-1.4.5/lib > -L/opt/intel/composer_xe_2011_sp1.9.293/compiler/lib/intel64 > -L/opt/intel/composer_xe_2011_sp1.9.293/ipp/lib/intel64 > -L/opt/intel/composer_xe_2011_sp1.9.293/mkl/lib/intel64 > -L/opt/intel/composer_xe_2011_sp1.9.293/tbb/lib/intel64/cc4.1.0_libc2.4_kernel2.6.16.21 > -L/usr/lib/gcc/x86_64-linux-gnu/4.4.3 -L/usr/lib/x86_64-linux-gnu -lmpi_f90 > -lmpi_f77 -lifport -lifcore -lm -lm -lmpi_cxx -ldl -lmpi -lopen-rte > -lopen-pal -lnsl -lutil -limf -lsvml -lipgo -ldecimal -lcilkrts -lstdc++ > -lgcc_s -lirc -lpthread -lirc_s -ldl > ----------------------------------------- > > > dr. ir. Christiaan Klaij > CFD Researcher > Research & Development > E mailto:C.Klaij at marin.nl > T +31 317 49 33 44 > > MARIN > 2, Haagsteeg, P.O. Box 28, 6700 AA Wageningen, The Netherlands > T +31 317 49 39 11, F +31 317 49 32 45, I www.marin.nl > > -- What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. -- Norbert Wiener -------------- next part -------------- An HTML attachment was scrubbed... URL: From balay at mcs.anl.gov Tue Jul 3 13:23:28 2012 From: balay at mcs.anl.gov (Satish Balay) Date: Tue, 3 Jul 2012 13:23:28 -0500 (CDT) Subject: [petsc-users] [petsc-dev] Getting 2d array with updated ghost values from DM global vector In-Reply-To: <3CD4EA3A-30CD-45A6-AEC2-5DC1C928D443@mcs.anl.gov> References: <4FF170BF.3060303@gmail.com> <4FF1A4FD.8040003@gmail.com> <54E13E1A-BB29-47FD-B671-3BC3E5B6329D@lsu.edu> <106872A9-6A6F-41F7-9291-1F1389A0597C@mcs.anl.gov> <0DB46F4E-EB0D-4300-B188-EBCF454EC99C@lsu.edu> <3CD4EA3A-30CD-45A6-AEC2-5DC1C928D443@mcs.anl.gov> Message-ID: On Tue, 3 Jul 2012, Barry Smith wrote: > > On Jul 3, 2012, at 3:08 AM, Blaise Bourdin wrote: > > > > > On Jul 3, 2012, at 4:10 AM, Barry Smith wrote: > > > >> > >> Blaise, > >> > >> I don't understand why the patch does anything: > >> > >> - *ierr = VecRestoreArray(*v,0);if (*ierr) return; > >> + PetscScalar *fa; > >> + *ierr = F90Array1dAccess(a,PETSC_SCALAR,(void**)&fa PETSC_F90_2PTR_PARAM(ptrd)); > >> + *ierr = VecRestoreArray(*v,&fa);if (*ierr) return; > >> *ierr = F90Array1dDestroy(&a,PETSC_SCALAR PETSC_F90_2PTR_PARAM(ptrd)); > >> > >> All that passing &fa into VecRestoreArray() does is cause fa to be zeroed. Why would that have any affect on anything? 
> > > > > > Not sure either, I quite don't understand this code, but I noticed that the logic of VecRestoreArrayF90 was different from that of DMDAVecRestoreArrayF90 > > > > src/vec/vec/interface/f90-custom/zvectorf90.c:33 > > PetscScalar *fa; > > *__ierr = F90Array1dAccess(ptr,PETSC_SCALAR,(void**)&fa PETSC_F90_2PTR_PARAM(ptrd));if (*__ierr) return; > > *__ierr = F90Array1dDestroy(ptr,PETSC_SCALAR PETSC_F90_2PTR_PARAM(ptrd));if (*__ierr) return; > > It could be the above line is important; but the Accesser and restore array are not. > > I'll have Satish apply the patch. pushed to petsc-3.3 [petsc-dev will get this update] Satish From hgbk2008 at gmail.com Wed Jul 4 01:48:36 2012 From: hgbk2008 at gmail.com (hbui) Date: Wed, 04 Jul 2012 08:48:36 +0200 Subject: [petsc-users] integrate petsc to java Message-ID: <4FF3E744.3060600@gmail.com> Hi I have a finite element program written in Java to perform analysis in structural application. I want to integrate petsc into my program to exploit the distributed parallel solver (with various preconditioners). How could i do that? i think about using swig to generate Java interface but i'm not clear on which should be the starting point. Regards, Giang Bui From renzhengyong at gmail.com Wed Jul 4 05:24:38 2012 From: renzhengyong at gmail.com (RenZhengYong) Date: Wed, 4 Jul 2012 12:24:38 +0200 Subject: [petsc-users] about parallel preconditioned matrix-free gmres In-Reply-To: References: Message-ID: Hi, Matt, Thanks a lot for your suggestions. In the following two subroutines, (1) int mat_vec_product_interface_problem(Mat A, Vec X, Vec Y) for matrix-free GMRES solver (2) int preconditioner_mat_vec(PC pc,Vec X,Vec Y) for shell preconditioner I use the VecScatter() and VecGetArray() to successfully call the parallel GMRES solver to solve my problem. Thanks a lot as usual. Zhengyong On Sat, Jun 30, 2012 at 1:30 PM, Matthew Knepley wrote: > On Fri, Jun 29, 2012 at 7:25 PM, RenZhengYong wrote: > >> Dear Petscs, >> >> Use the uniprocessor complex-value based version petsc, I recently >> successfully make a FETI_DP domain >> decomposition approach working for 3D electromagnetic induction (earth) >> problem. The number of iteration of >> the interface problem seems to be scalable with regard to the number of >> sub-domains. >> >> To do this, I had two subroutines for petsc >> >> (1) int mat_vec_product_interface_problem(Mat A, Vec X, Vec Y) for >> matrix-free GMRES solver >> (2) int preconditioner_mat_vec(PC pc,Vec X,Vec Y) for shell >> preconditioner. >> >> Now, I want to solve the interface problem by paralleled GMRES solver so >> that I can solve real large-scale problems. Could you please tell me the >> easiest way to accomplish it. Which specific data structures of petsc >> should be used. I have been using Petsc for 3.5 years, I really want to >> have a try the real MPI-based Petsc. >> > > 1) The solver logic should be parallel already since it only uses calls to > Vec or Mat functions. The problems will be > in building data structures. > > 2) It looks like your two items above are the things to be parallelized > > 3) Decide how to partition the problem > > 4) Use VecScatter() to communicate data along the interface of your > partitions > > I don't think we can give better advice than that without more specific > questions. Note that there is > a current effort to put BDDCinto PETSc. You can see it in petsc-dev, as > PCBDDC. > > Thanks, > > Matt > > >> Thanks in advance. 
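The VecScatter() pattern suggested in point 4 above can be sketched as follows (a rough sketch only; nifc and ifc_global are placeholders for the number of interface unknowns a process needs and their global indices):

#include <petscvec.h>

/* Sketch: gather the interface entries of a distributed vector x into a
   local (sequential) work vector on each process. */
PetscErrorCode GatherInterface(Vec x, PetscInt nifc, const PetscInt ifc_global[], Vec *xifc, VecScatter *scat)
{
  IS             is_from, is_to;
  PetscErrorCode ierr;

  ierr = VecCreateSeq(PETSC_COMM_SELF,nifc,xifc); CHKERRQ(ierr);
  ierr = ISCreateGeneral(PETSC_COMM_SELF,nifc,ifc_global,PETSC_COPY_VALUES,&is_from); CHKERRQ(ierr);
  ierr = ISCreateStride(PETSC_COMM_SELF,nifc,0,1,&is_to); CHKERRQ(ierr);
  ierr = VecScatterCreate(x,is_from,*xifc,is_to,scat); CHKERRQ(ierr);
  ierr = ISDestroy(&is_from); CHKERRQ(ierr);
  ierr = ISDestroy(&is_to); CHKERRQ(ierr);
  /* The scatter can be reused for every matrix-vector product: */
  ierr = VecScatterBegin(*scat,x,*xifc,INSERT_VALUES,SCATTER_FORWARD); CHKERRQ(ierr);
  ierr = VecScatterEnd(*scat,x,*xifc,INSERT_VALUES,SCATTER_FORWARD); CHKERRQ(ierr);
  return 0;
}

VecGetArray() on the local vector then exposes the interface values, which matches the VecScatter()/VecGetArray() combination reported above.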
>> Have a nice weekeed >> Zhengyong >> >> >> -- >> Zhengyong Ren >> AUG Group, Institute of Geophysics >> Department of Geosciences, ETH Zurich >> NO H 47 Sonneggstrasse 5 >> CH-8092, Z?rich, Switzerland >> Tel: +41 44 633 37561 >> e-mail: zhengyong.ren at aug.ig.erdw.ethz.ch >> Gmail: renzhengyong at gmail.com >> > > > > -- > What most experimenters take for granted before they begin their > experiments is infinitely more interesting than any results to which their > experiments lead. > -- Norbert Wiener > -- Zhengyong Ren AUG Group, Institute of Geophysics Department of Geosciences, ETH Zurich NO H 47 Sonneggstrasse 5 CH-8092, Z?rich, Switzerland Tel: +41 44 633 37561 e-mail: zhengyong.ren at aug.ig.erdw.ethz.ch Gmail: renzhengyong at gmail.com -------------- next part -------------- An HTML attachment was scrubbed... URL: From margreet.nool at gmail.com Wed Jul 4 06:33:36 2012 From: margreet.nool at gmail.com (Margreet Nool) Date: Wed, 04 Jul 2012 13:33:36 +0200 Subject: [petsc-users] Efficient reuse of vectors and matrices Message-ID: <4FF42A10.6010203@cwi.nl> Dear list, How can I reuse vectors and matrices efficiently? Our program uses a lot of global matrices and vectors and I want to reduce the amount of memory. Are there PETSC functions, which check whether a Mat or Vec is already created or destroyed? Our programs have been written in Fortran90. The program performs a time stepping process, and I want to know what the best way is to create a global vector only once. As an example, a global vector Vec y is created as a result of a matrix-vector product A . x = y, by calling the PETSC function MatMult. After the first call the Vec Y has already been created. 1. Do we have to store the result into a temporary vector temp_vec and copy its values into Vec y and destroy temp_vec? 2. Do we have to destroy Vec y before a call of MatMult and store the result directly in Vec y? 3. Does there exist a better solution? Thanks in advance, Margreet Nool -- Margreet Nool CWI - Centrum Wiskunde & Informatica Science Park 123, 1098 XG Amsterdam P.O. Box 94079 1090 GB Amsterdam -------------------- tel: +31 20 592 4120 Room, M131 From knepley at gmail.com Wed Jul 4 07:55:05 2012 From: knepley at gmail.com (Matthew Knepley) Date: Wed, 4 Jul 2012 06:55:05 -0600 Subject: [petsc-users] integrate petsc to java In-Reply-To: <4FF3E744.3060600@gmail.com> References: <4FF3E744.3060600@gmail.com> Message-ID: On Wed, Jul 4, 2012 at 12:48 AM, hbui wrote: > Hi > > I have a finite element program written in Java to perform analysis in > structural application. I want to integrate petsc into my program to > exploit the distributed parallel solver (with various preconditioners). How > could i do that? i think about using swig to generate Java interface but > i'm not clear on which should be the starting point. > The first step is passing arrays of doubles between PETSc and Java. This will allow you to fill up Vec and Mat objects. Once that is done, you only need to wrap a few solvers functions. Matt > Regards, > Giang Bui > > -- What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. -- Norbert Wiener -------------- next part -------------- An HTML attachment was scrubbed... 
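As a purely hypothetical illustration of the "passing arrays of doubles" step (none of this comes from jPETSc or any existing binding): a JNI glue function that copies a Java double[] into a PETSc Vec whose handle is kept on the Java side as a long. The class name PetscVec and the method name are invented for the sketch.

#include <jni.h>
#include <stdint.h>
#include <petscvec.h>

JNIEXPORT void JNICALL Java_PetscVec_setValues(JNIEnv *env, jobject obj, jlong vecHandle, jdoubleArray values)
{
  Vec      x = (Vec)(intptr_t)vecHandle;  /* handle created on the C side, passed to Java as a long */
  jsize    n = (*env)->GetArrayLength(env, values);
  jdouble *a = (*env)->GetDoubleArrayElements(env, values, NULL);
  jsize    i;

  /* Assumes a sequential Vec (or that this process owns entries 0..n-1);
     PETSc error checking is omitted to keep the sketch short. */
  for (i = 0; i < n; i++) VecSetValue(x,(PetscInt)i,(PetscScalar)a[i],INSERT_VALUES);
  VecAssemblyBegin(x);
  VecAssemblyEnd(x);
  (*env)->ReleaseDoubleArrayElements(env, values, a, JNI_ABORT);
}

The Java side would declare a matching native method, e.g. native void setValues(long vec, double[] values), and load the shared library containing this glue.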
URL: From knepley at gmail.com Wed Jul 4 08:28:48 2012 From: knepley at gmail.com (Matthew Knepley) Date: Wed, 4 Jul 2012 07:28:48 -0600 Subject: [petsc-users] Efficient reuse of vectors and matrices In-Reply-To: <4FF42A10.6010203@cwi.nl> References: <4FF42A10.6010203@cwi.nl> Message-ID: On Wed, Jul 4, 2012 at 5:33 AM, Margreet Nool wrote: > Dear list, > > How can I reuse vectors and matrices efficiently? > > Our program uses a lot of global matrices and vectors and I want to reduce > the amount of memory. Are there PETSC functions, which check whether a Mat > or Vec is already created or destroyed? Our programs have been written in > Fortran90. > In C, you can initialize the pointer to PETSC_NULL, and then check whether it has been changed (created). However, I do not yet understand why you can't just create all the Vec and Mat structures up front. That is what I usually do. > The program performs a time stepping process, and I want to know what the > best way is to create a global vector only once. As an example, a global > vector Vec y is created as a result of a matrix-vector product A . x = y, > by calling the PETSC function MatMult. After the first call the Vec Y has > already been created. > To clarify, MatMult() does not create anything. It takes in 2 Vec and 1 Mat which have already been created. Matt > 1. Do we have to store the result into a temporary vector temp_vec and > copy its values into Vec y and destroy temp_vec? > 2. Do we have to destroy Vec y before a call of MatMult and store the > result directly in Vec y? > 3. Does there exist a better solution? > > Thanks in advance, > Margreet Nool > > -- > Margreet Nool > CWI - Centrum Wiskunde & Informatica > Science Park 123, > 1098 XG Amsterdam > P.O. Box 94079 > 1090 GB Amsterdam > -------------------- > tel: +31 20 592 4120 > Room, M131 > > -- What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. -- Norbert Wiener -------------- next part -------------- An HTML attachment was scrubbed... URL: From cirilloalberto at gmail.com Wed Jul 4 09:17:39 2012 From: cirilloalberto at gmail.com (Alberto Cirillo) Date: Wed, 4 Jul 2012 16:17:39 +0200 Subject: [petsc-users] [PETSc-TAO] Question about memory Message-ID: Hi, I have a question about memory management in CUDA environment, using TAO toolkit. When i create a vector with VECSEQCUSP, it's placed on GPU memory, transfered to CPU memory and returned back when operations are done (function evaluation is made by CPU). If I have a VECSEQCUSP vector with dimension that causes the exhaustion of GPU's memory, there is a possibility to swap this vector on CPU's memory in order to avoid "bad alloc" error? For example: if GPU memory is 4GB and CPU memory is 8GB and my working vector is 6GB, PETSc doesn't have a method to manage automatically the vector avoiding the crash? Thanks Alberto -------------- next part -------------- An HTML attachment was scrubbed... URL: From knepley at gmail.com Wed Jul 4 09:23:02 2012 From: knepley at gmail.com (Matthew Knepley) Date: Wed, 4 Jul 2012 08:23:02 -0600 Subject: [petsc-users] [PETSc-TAO] Question about memory In-Reply-To: References: Message-ID: On Wed, Jul 4, 2012 at 8:17 AM, Alberto Cirillo wrote: > Hi, > > I have a question about memory management in CUDA environment, using TAO > toolkit. 
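For the time-stepping reuse question above, a minimal sketch of the create-once pattern (names are placeholders; the problem size is assumed fixed over the run):

#include <petsc.h>

/* Sketch: create the work vector once, reuse it every step.
   MatMult() only fills y; it never allocates a new vector. */
PetscErrorCode TimeLoop(Mat A, Vec x, PetscInt nsteps)
{
  Vec            y;
  PetscInt       step;
  PetscErrorCode ierr;

  ierr = VecDuplicate(x,&y); CHKERRQ(ierr);   /* created once, outside the loop */
  for (step = 0; step < nsteps; step++) {
    ierr = MatMult(A,x,y); CHKERRQ(ierr);     /* y = A x, overwriting y in place */
    /* ... update x from y ... */
  }
  ierr = VecDestroy(&y); CHKERRQ(ierr);       /* destroyed once, at the end */
  return 0;
}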
> When i create a vector with VECSEQCUSP, it's placed on GPU memory, > transfered to CPU memory and returned back when operations are done > (function evaluation is made by CPU). > If I have a VECSEQCUSP vector with dimension that causes the exhaustion of > GPU's memory, there is a possibility to swap this vector on CPU's memory in > order to avoid "bad alloc" error? > For example: if GPU memory is 4GB and CPU memory is 8GB and my working > vector is 6GB, PETSc doesn't have a method to manage automatically the > vector avoiding the crash? > No. We would recommend that you run in parallel on multiple machines. Matt > Thanks > Alberto > -- What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. -- Norbert Wiener -------------- next part -------------- An HTML attachment was scrubbed... URL: From bsmith at mcs.anl.gov Wed Jul 4 11:58:08 2012 From: bsmith at mcs.anl.gov (Barry Smith) Date: Wed, 4 Jul 2012 11:58:08 -0500 Subject: [petsc-users] integrate petsc to java In-Reply-To: References: <4FF3E744.3060600@gmail.com> Message-ID: Someone has done this for you: http://jpetsctao.zwoggel.net/ good luck. Barry On Jul 4, 2012, at 7:55 AM, Matthew Knepley wrote: > On Wed, Jul 4, 2012 at 12:48 AM, hbui wrote: > Hi > > I have a finite element program written in Java to perform analysis in structural application. I want to integrate petsc into my program to exploit the distributed parallel solver (with various preconditioners). How could i do that? i think about using swig to generate Java interface but i'm not clear on which should be the starting point. > > The first step is passing arrays of doubles between PETSc and Java. This will allow you to fill up Vec and Mat objects. > Once that is done, you only need to wrap a few solvers functions. > > Matt > > Regards, > Giang Bui > > > > > -- > What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. > -- Norbert Wiener From miguel.fosas at gmail.com Thu Jul 5 04:29:21 2012 From: miguel.fosas at gmail.com (Miguel Fosas) Date: Thu, 5 Jul 2012 11:29:21 +0200 Subject: [petsc-users] SVD residual estimates with SLEPc Message-ID: Hi everyone, I am using SLEPc to compute singular values from a shell matrix A (whose matrix-vector product is very costly to evaluate). After the singular value's are computed, I would like to obtain the error. For this purpose, SLEPc provides the function SVDComputeResidualNorms. However, this function needs to apply twice the operator A, which makes the calculation almost as twice as long (for my case). I know that the estimates that are printed by the monitors (-svd_monitor_all) are pretty good. Is there a way to retrieve the value of such estimates from the internal structures of SLEPc? Thanks, Miguel -------------- next part -------------- An HTML attachment was scrubbed... URL: From jroman at dsic.upv.es Thu Jul 5 04:35:24 2012 From: jroman at dsic.upv.es (Jose E. Roman) Date: Thu, 5 Jul 2012 11:35:24 +0200 Subject: [petsc-users] SVD residual estimates with SLEPc In-Reply-To: References: Message-ID: <1A67C2D6-3D56-4A73-B8B5-3ADB1AA89E61@dsic.upv.es> El 05/07/2012, a las 11:29, Miguel Fosas escribi?: > Hi everyone, > > I am using SLEPc to compute singular values from a shell matrix A (whose matrix-vector product is very costly to evaluate). After the singular value's are computed, I would like to obtain the error. 
For this purpose, SLEPc provides the function SVDComputeResidualNorms. However, this function needs to apply twice the operator A, which makes the calculation almost as twice as long (for my case). > > I know that the estimates that are printed by the monitors (-svd_monitor_all) are pretty good. Is there a way to retrieve the value of such estimates from the internal structures of SLEPc? > > Thanks, > > Miguel This is available for EPS, with EPSGetErrorEstimate(), but not for SVD. If you use an EPS for the SVD solver (that is, SVDCROSS or SVDCYCLIC) then you should be able to extract the EPS object with SVDGetEPS() and use the above function. Jose From hanangul12 at yahoo.co.uk Thu Jul 5 08:34:31 2012 From: hanangul12 at yahoo.co.uk (Abdul Hanan Sheikh) Date: Thu, 5 Jul 2012 14:34:31 +0100 (BST) Subject: [petsc-users] Adapting MatMult and PCMG functions in matrix-free method. Message-ID: <1341495271.4863.YahooMailNeo@web28704.mail.ir2.yahoo.com> Dear developers and users, Summer greetings. We have few question listen below:? 1. The first question is about adapting " MatMult " function in matrix-free method. We intend to incorporate a KSP context inside "MatMult" . The immediate question is how to provide more than one matrices as input.? Is this idea of incorporating a KSP context inside "MatMult" function workable ? Does it make any confrontation with philosophy of development of Petsc. ? 2. An other advance level feedback is needed. ?Re-implementing PCMG function {mg.c } will lead any violation of philosophy of Petsc-development ?? 3. Which one of the above both is more elegant and feasible to work on ? Thanking in anticipation,? Abdul -------------- next part -------------- An HTML attachment was scrubbed... URL: From knepley at gmail.com Thu Jul 5 08:45:06 2012 From: knepley at gmail.com (Matthew Knepley) Date: Thu, 5 Jul 2012 07:45:06 -0600 Subject: [petsc-users] Adapting MatMult and PCMG functions in matrix-free method. In-Reply-To: <1341495271.4863.YahooMailNeo@web28704.mail.ir2.yahoo.com> References: <1341495271.4863.YahooMailNeo@web28704.mail.ir2.yahoo.com> Message-ID: On Thu, Jul 5, 2012 at 7:34 AM, Abdul Hanan Sheikh wrote: > Dear developers and users, > Summer greetings. > We have few question listen below: > > 1. > The first question is about adapting " MatMult " function in matrix-free > method. > We intend to incorporate a KSP context inside "MatMult" . The immediate > question is how to > provide more than one matrices as input. > You provide extra data through the context for the MATSHELL > Is this idea of incorporating a KSP context inside "MatMult" function > workable ? Does it make any confrontation > with philosophy of development of Petsc. ? > I am not sure you want this. Do you think PCKSP can do what you want? There is not enough information here to help us answer. > 2. > An other advance level feedback is needed. > Re-implementing PCMG function { mg.c } will lead any violation of > philosophy of Petsc-development ?? > Again, there is not enough information. Can you do what you want by just replacing the monitors? Thanks, Matt > 3. > Which one of the above both is more elegant and feasible to work on ? > > > Thanking in anticipation, > Abdul > > > -- What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. -- Norbert Wiener -------------- next part -------------- An HTML attachment was scrubbed... 
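A minimal sketch of the MATSHELL-with-context idea, for the case where every application of the operator should compute y = M^{-1} A x with the solve against M done approximately by an inner KSP; the struct layout and function names are illustrative only:

#include <petscksp.h>

/* Sketch: a shell matrix whose "multiply" applies y = M^{-1} A x. */
typedef struct {
  Mat A;      /* the original operator */
  KSP inner;  /* solver configured with M, e.g. the shifted Laplacian */
  Vec work;   /* scratch vector holding A x */
} PrecondOpCtx;

static PetscErrorCode PrecondOpMult(Mat S, Vec x, Vec y)
{
  PrecondOpCtx   *ctx;
  PetscErrorCode  ierr;

  ierr = MatShellGetContext(S,(void**)&ctx); CHKERRQ(ierr);
  ierr = MatMult(ctx->A,x,ctx->work); CHKERRQ(ierr);       /* work = A x       */
  ierr = KSPSolve(ctx->inner,ctx->work,y); CHKERRQ(ierr);  /* y ~= M^{-1} work */
  return 0;
}

/* Creation; ctx->A and ctx->inner are assumed to be set up already. */
PetscErrorCode PrecondOpCreate(PrecondOpCtx *ctx, Mat *S)
{
  PetscInt       m,n,Mglob,Nglob;
  PetscErrorCode ierr;

  ierr = MatGetLocalSize(ctx->A,&m,&n); CHKERRQ(ierr);
  ierr = MatGetSize(ctx->A,&Mglob,&Nglob); CHKERRQ(ierr);
  ierr = MatGetVecs(ctx->A,PETSC_NULL,&ctx->work); CHKERRQ(ierr); /* MatCreateVecs() in later releases */
  ierr = MatCreateShell(PETSC_COMM_WORLD,m,n,Mglob,Nglob,(void*)ctx,S); CHKERRQ(ierr); /* communicator assumed to be PETSC_COMM_WORLD */
  ierr = MatShellSetOperation(*S,MATOP_MULT,(void(*)(void))PrecondOpMult); CHKERRQ(ierr);
  return 0;
}

The shell matrix S is what gets handed to the outer, matrix-free Krylov solver via KSPSetOperators(); the KSP stored in the context carries M and its own preconditioner.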
URL: From domenico_lahaye at yahoo.com Thu Jul 5 11:40:51 2012 From: domenico_lahaye at yahoo.com (domenico lahaye) Date: Thu, 5 Jul 2012 09:40:51 -0700 (PDT) Subject: [petsc-users] Adapting MatMult and PCMG functions in matrix-free method. In-Reply-To: <1341496330.95236.YahooMailNeo@web28703.mail.ir2.yahoo.com> References: <1341495271.4863.YahooMailNeo@web28704.mail.ir2.yahoo.com> <1341496330.95236.YahooMailNeo@web28703.mail.ir2.yahoo.com> Message-ID: <1341506451.85850.YahooMailNeo@web125506.mail.ne1.yahoo.com> Dear PETSc developers, ? Thank you for your support to Abdul. ?? We are in the process of developing a multilevel Krylov solver for the Helmholtz equation. Abdul has implemented Algorithm I for this purpose. We next would like to implement Algorithm II. Algorithm || amounts to replacing every occurrence of the system matrix $A$ in Algorithm I by $M^{-1} A$. This replacement should occur on all levels. We thought of two ways to realize this replacement. 1) We thought of adopting a matrix-free approach, and to plug in the operation with? $M^{-1}$ there. This would require a ksp context inside MatMult.? ? We wonder whether this a approach is feasible to take. 2) The other approach would by to implement a customized pcmg preconditioner that we can adapt to our needs. This could be a more elegant approach, at the cost of doing more work. Is the assumption that the second approach is more elegant correct and would you be able to give advice on how to tackle this approach? ? Kind wishes, Domenico. ________________________________ ----- Forwarded Message ----- >From: Matthew Knepley >To: Abdul Hanan Sheikh ; PETSc users list >Sent: Thursday, 5 July 2012, 15:45 >Subject: Re: [petsc-users] Adapting MatMult and PCMG functions in matrix-free method. > > >On Thu, Jul 5, 2012 at 7:34 AM, Abdul Hanan Sheikh wrote: > >Dear developers and users, >>Summer greetings. >>We have few question listen below:? >> >> >>1. >> >>The first question is about adapting " MatMult " function in matrix-free method. >>We intend to incorporate a KSP context inside "MatMult" . The immediate question is how to >>provide more than one matrices as input.? > > >You provide extra data through the context for the MATSHELL >? >Is this idea of incorporating a KSP context inside "MatMult" function workable ? Does it make any confrontation >> >>with philosophy of development of Petsc. ?? > > >I am not sure you want this. Do you think PCKSP can do what you want? There is not enough information here to help us answer. >? >2. >> >>An other advance level feedback is needed. >> >>?Re-implementing PCMG function {mg.c } will lead any violation of philosophy of Petsc-development ?? > > >Again, there is not enough information. Can you do what you want by just replacing the monitors? > > >? Thanks, > > >? ? ?Matt >? >3. >> >>Which one of the above both is more elegant and feasible to work on ? >> >> >> >> >> >>Thanking in anticipation,?Abdul >> >> >> >> >> > > > >-- >What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. >-- Norbert Wiener > > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From jedbrown at mcs.anl.gov Thu Jul 5 12:47:03 2012 From: jedbrown at mcs.anl.gov (Jed Brown) Date: Thu, 5 Jul 2012 09:47:03 -0800 Subject: [petsc-users] Adapting MatMult and PCMG functions in matrix-free method. 
In-Reply-To: <1341506451.85850.YahooMailNeo@web125506.mail.ne1.yahoo.com> References: <1341495271.4863.YahooMailNeo@web28704.mail.ir2.yahoo.com> <1341496330.95236.YahooMailNeo@web28703.mail.ir2.yahoo.com> <1341506451.85850.YahooMailNeo@web125506.mail.ne1.yahoo.com> Message-ID: Can you reference a paper or some notes on the algorithm? On Thu, Jul 5, 2012 at 8:40 AM, domenico lahaye wrote: > Dear PETSc developers, > > Thank you for your support to Abdul. > > We are in the process of developing a multilevel > Krylov solver for the Helmholtz equation. Abdul > has implemented Algorithm I for this purpose. > We next would like to implement Algorithm II. > Algorithm || amounts to replacing every occurrence > of the system matrix $A$ in Algorithm I by > $M^{-1} A$. This replacement should occur on all > levels. We thought of two ways to realize this > replacement. > > 1) We thought of adopting a matrix-free approach, > and to plug in the operation with $M^{-1}$ there. > This would require a ksp context inside MatMult. > We wonder whether this a approach is feasible to > take. > > 2) The other approach would by to implement a > customized pcmg preconditioner that we can adapt > to our needs. This could be a more elegant approach, > at the cost of doing more work. Is the assumption > that the second approach is more elegant correct > and would you be able to give advice on how to tackle > this approach? > > Kind wishes, Domenico. > > > ------------------------------ > ** > > > ----- Forwarded Message ----- > *From:* Matthew Knepley > *To:* Abdul Hanan Sheikh ; PETSc users list < > petsc-users at mcs.anl.gov> > *Sent:* Thursday, 5 July 2012, 15:45 > *Subject:* Re: [petsc-users] Adapting MatMult and PCMG functions in > matrix-free method. > > On Thu, Jul 5, 2012 at 7:34 AM, Abdul Hanan Sheikh > wrote: > > Dear developers and users, > Summer greetings. > We have few question listen below: > > 1. > The first question is about adapting " MatMult " function in matrix-free > method. > We intend to incorporate a KSP context inside "MatMult" . The immediate > question is how to > provide more than one matrices as input. > > > You provide extra data through the context for the MATSHELL > > > Is this idea of incorporating a KSP context inside "MatMult" function > workable ? Does it make any confrontation > with philosophy of development of Petsc. ? > > > I am not sure you want this. Do you think PCKSP can do what you want? > There is not enough information here to help us answer. > > > 2. > An other advance level feedback is needed. > Re-implementing PCMG function { mg.c } will lead any violation of > philosophy of Petsc-development ?? > > > Again, there is not enough information. Can you do what you want by just > replacing the monitors? > > Thanks, > > Matt > > > 3. > Which one of the above both is more elegant and feasible to work on ? > > > Thanking in anticipation, > Abdul > > > > > > -- > What most experimenters take for granted before they begin their > experiments is infinitely more interesting than any results to which their > experiments lead. > -- Norbert Wiener > > > > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From bsmith at mcs.anl.gov Thu Jul 5 13:37:25 2012 From: bsmith at mcs.anl.gov (Barry Smith) Date: Thu, 5 Jul 2012 13:37:25 -0500 Subject: [petsc-users] Adapting MatMult and PCMG functions in matrix-free method. 
In-Reply-To: <1341506451.85850.YahooMailNeo@web125506.mail.ne1.yahoo.com> References: <1341495271.4863.YahooMailNeo@web28704.mail.ir2.yahoo.com> <1341496330.95236.YahooMailNeo@web28703.mail.ir2.yahoo.com> <1341506451.85850.YahooMailNeo@web125506.mail.ne1.yahoo.com> Message-ID: On Jul 5, 2012, at 11:40 AM, domenico lahaye wrote: > Dear PETSc developers, > > Thank you for your support to Abdul. > > We are in the process of developing a multilevel > Krylov solver for the Helmholtz equation. Abdul > has implemented Algorithm I for this purpose. > We next would like to implement Algorithm II. > Algorithm || amounts to replacing every occurrence > of the system matrix $A$ in Algorithm I by > $M^{-1} A$. This replacement should occur on all > levels. We thought of two ways to realize this > replacement. > > 1) We thought of adopting a matrix-free approach, > and to plug in the operation with $M^{-1}$ there. > This would require a ksp context inside MatMult. > We wonder whether this a approach is feasible to > take. If you truly are simply replacing a multiply by A everywhere with a approx. multiply by M^{-1}A then 1 is the easier AND more elegant approach. Simply create a MATSHELL and put in its context a struct continuing the A and the KSP used so approximately solve the M^{-1}A. > > 2) The other approach would by to implement a > customized pcmg preconditioner that we can adapt > to our needs. This could be a more elegant approach, > at the cost of doing more work. Is the assumption > that the second approach is more elegant correct > and would you be able to give advice on how to tackle > this approach? > > Kind wishes, Domenico. > > > > > ----- Forwarded Message ----- > From: Matthew Knepley > To: Abdul Hanan Sheikh ; PETSc users list > Sent: Thursday, 5 July 2012, 15:45 > Subject: Re: [petsc-users] Adapting MatMult and PCMG functions in matrix-free method. > > On Thu, Jul 5, 2012 at 7:34 AM, Abdul Hanan Sheikh wrote: > Dear developers and users, > Summer greetings. > We have few question listen below: > > 1. > The first question is about adapting " MatMult " function in matrix-free method. > We intend to incorporate a KSP context inside "MatMult" . The immediate question is how to > provide more than one matrices as input. > > You provide extra data through the context for the MATSHELL > > Is this idea of incorporating a KSP context inside "MatMult" function workable ? Does it make any confrontation > with philosophy of development of Petsc. ? > > I am not sure you want this. Do you think PCKSP can do what you want? There is not enough information here to help us answer. > > 2. > An other advance level feedback is needed. > Re-implementing PCMG function { mg.c } will lead any violation of philosophy of Petsc-development ?? > > Again, there is not enough information. Can you do what you want by just replacing the monitors? > > Thanks, > > Matt > > 3. > Which one of the above both is more elegant and feasible to work on ? > > > Thanking in anticipation, > Abdul > > > > > > -- > What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. > -- Norbert Wiener > > > > From domenico_lahaye at yahoo.com Thu Jul 5 13:38:26 2012 From: domenico_lahaye at yahoo.com (domenico lahaye) Date: Thu, 5 Jul 2012 11:38:26 -0700 (PDT) Subject: [petsc-users] Adapting MatMult and PCMG functions in matrix-free method. 
In-Reply-To: References: <1341495271.4863.YahooMailNeo@web28704.mail.ir2.yahoo.com> <1341496330.95236.YahooMailNeo@web28703.mail.ir2.yahoo.com> <1341506451.85850.YahooMailNeo@web125506.mail.ne1.yahoo.com> Message-ID: <1341513506.35776.YahooMailNeo@web125506.mail.ne1.yahoo.com> Hi Jed,? ??Thank you for your reply. The Algorithm II is described in the paper? @ARTICLE{yoginabben1, ??author = {Erlangga, Y.A. and R. Nabben}, ??title = {On a multilevel {K}rylov Method for the {H}elmholtz Equation preconditioned by Shifted {L}aplacian}, ??journal = {Electronic Transaction on Num. Analysis (ETNA)}, ??year = {2008}, ??volume = {31}, ??pages = {403--424}, ?} ??Kind wishes, Domenico. ________________________________ From: Jed Brown To: domenico lahaye ; PETSc users list Cc: Abdul Hanan Sheikh Sent: Thursday, July 5, 2012 7:47 PM Subject: Re: [petsc-users] Adapting MatMult and PCMG functions in matrix-free method. Can you reference a paper or some notes on the algorithm? On Thu, Jul 5, 2012 at 8:40 AM, domenico lahaye wrote: Dear PETSc developers, > >? Thank you for your support to Abdul. > > > >?? We are in the process of developing a multilevel > >Krylov solver for the Helmholtz equation. Abdul > >has implemented Algorithm I for this purpose. > >We next would like to implement Algorithm II. > >Algorithm || amounts to replacing every occurrence > >of the system matrix $A$ in Algorithm I by > >$M^{-1} A$. This replacement should occur on all > >levels. We thought of two ways to realize this > >replacement. > > > >1) We thought of adopting a matrix-free approach, > >and to plug in the operation with? $M^{-1}$ there. > >This would require a ksp context inside MatMult.? ? > >We wonder whether this a approach is feasible to > >take. > > > >2) The other approach would by to implement a > >customized pcmg preconditioner that we can adapt > >to our needs. This could be a more elegant approach, > >at the cost of doing more work. Is the assumption > >that the second approach is more elegant correct > >and would you be able to give advice on how to tackle > >this approach? > > > >? Kind wishes, Domenico. > > > > > > >________________________________ > > > > > >----- Forwarded Message ----- >>From: Matthew Knepley >>To: Abdul Hanan Sheikh ; PETSc users list >>Sent: Thursday, 5 July 2012, 15:45 >>Subject: Re: [petsc-users] Adapting MatMult and PCMG functions in matrix-free method. >> >> >>On Thu, Jul 5, 2012 at 7:34 AM, Abdul Hanan Sheikh wrote: >> >>Dear developers and users, >>>Summer greetings. >>>We have few question listen below:? >>> >>> >>>1. >>> >>>The first question is about adapting " MatMult " function in matrix-free method. >>>We intend to incorporate a KSP context inside "MatMult" . The immediate question is how to >>>provide more than one matrices as input.? >> >> >>You provide extra data through the context for the MATSHELL >>? >>Is this idea of incorporating a KSP context inside "MatMult" function workable ? Does it make any confrontation >>> >>>with philosophy of development of Petsc. ?? >> >> >>I am not sure you want this. Do you think PCKSP can do what you want? There is not enough information here to help us answer. >>? >>2. >>> >>>An other advance level feedback is needed. >>> >>>?Re-implementing PCMG function {mg.c } will lead any violation of philosophy of Petsc-development ?? >> >> >>Again, there is not enough information. Can you do what you want by just replacing the monitors? >> >> >>? Thanks, >> >> >>? ? ?Matt >>? >>3. 
>>> >>>Which one of the above both is more elegant and feasible to work on ? >>> >>> >>> >>> >>> >>>Thanking in anticipation,?Abdul >>> >>> >>> >>> >>> >> >> >> >>-- >>What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. >>-- Norbert Wiener >> >> >> > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From domenico_lahaye at yahoo.com Thu Jul 5 13:41:19 2012 From: domenico_lahaye at yahoo.com (domenico lahaye) Date: Thu, 5 Jul 2012 11:41:19 -0700 (PDT) Subject: [petsc-users] Adapting MatMult and PCMG functions in matrix-free method. In-Reply-To: References: <1341495271.4863.YahooMailNeo@web28704.mail.ir2.yahoo.com> <1341496330.95236.YahooMailNeo@web28703.mail.ir2.yahoo.com> <1341506451.85850.YahooMailNeo@web125506.mail.ne1.yahoo.com> Message-ID: <1341513679.58337.YahooMailNeo@web125503.mail.ne1.yahoo.com> Got that. Thx, Domenico. ----- Original Message ----- From: Barry Smith To: domenico lahaye ; PETSc users list Cc: Abdul Hanan Sheikh Sent: Thursday, July 5, 2012 8:37 PM Subject: Re: [petsc-users] Adapting MatMult and PCMG functions in matrix-free method. On Jul 5, 2012, at 11:40 AM, domenico lahaye wrote: > Dear PETSc developers, > >? Thank you for your support to Abdul. > >? ? We are in the process of developing a multilevel > Krylov solver for the Helmholtz equation. Abdul > has implemented Algorithm I for this purpose. > We next would like to implement Algorithm II. > Algorithm || amounts to replacing every occurrence > of the system matrix $A$ in Algorithm I by > $M^{-1} A$. This replacement should occur on all > levels. We thought of two ways to realize this > replacement. > > 1) We thought of adopting a matrix-free approach, > and to plug in the operation with? $M^{-1}$ there. > This would require a ksp context inside MatMult.? ? > We wonder whether this a approach is feasible to > take. ? If you truly are simply replacing a multiply by A everywhere with a approx. multiply by M^{-1}A then 1 is the easier AND more elegant approach. Simply create a MATSHELL and put in its context a struct continuing the A and the KSP used so approximately solve the M^{-1}A. > > 2) The other approach would by to implement a > customized pcmg preconditioner that we can adapt > to our needs. This could be a more elegant approach, > at the cost of doing more work. Is the assumption > that the second approach is more elegant correct > and would you be able to give advice on how to tackle > this approach? ? ? > >? Kind wishes, Domenico. > > > > > ----- Forwarded Message ----- > From: Matthew Knepley > To: Abdul Hanan Sheikh ; PETSc users list > Sent: Thursday, 5 July 2012, 15:45 > Subject: Re: [petsc-users] Adapting MatMult and PCMG functions in matrix-free method. > > On Thu, Jul 5, 2012 at 7:34 AM, Abdul Hanan Sheikh wrote: > Dear developers and users, > Summer greetings. > We have few question listen below: > > 1. > The first question is about adapting " MatMult " function in matrix-free method. > We intend to incorporate a KSP context inside "MatMult" . The immediate question is how to > provide more than one matrices as input. > > You provide extra data through the context for the MATSHELL >? > Is this idea of incorporating a KSP context inside "MatMult" function workable ? Does it make any confrontation > with philosophy of development of Petsc. ? > > I am not sure you want this. Do you think PCKSP can do what you want? 
There is not enough information here to help us answer. >? > 2. > An other advance level feedback is needed. >? Re-implementing PCMG function { mg.c } will lead any violation of philosophy of Petsc-development ?? > > Again, there is not enough information. Can you do what you want by just replacing the monitors? > >? Thanks, > >? ? ? Matt >? > 3. > Which one of the above both is more elegant and feasible to work on ? > > > Thanking in anticipation, > Abdul > > > > > > -- > What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. > -- Norbert Wiener > > > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From hgbk2008 at gmail.com Thu Jul 5 16:02:47 2012 From: hgbk2008 at gmail.com (hbui) Date: Thu, 05 Jul 2012 23:02:47 +0200 Subject: [petsc-users] error compiling on MPI_Win Message-ID: <4FF600F7.1090109@gmail.com> Hi When compiling a small petsc example, i got an error as such: -------------------------------------------------------------------------------------- In file included from /opt/petsc/petsc-dev/include/petscis.h:8:0, from /opt/petsc/petsc-dev/include/petscvec.h:9, from /opt/petsc/petsc-dev/include/petscmat.h:6, from /opt/petsc/petsc-dev/include/petscdm.h:6, from /opt/petsc/petsc-dev/include/petscpc.h:6, from /opt/petsc/petsc-dev/include/petscksp.h:6, from ../PETScSolver.h:14, from ../PETScSolver.cpp:8: /opt/petsc/petsc-dev/include/petscsf.h:69:119: error: 'MPI_Win' has not been declared /opt/petsc/petsc-dev/include/petscsf.h:70:80: error: 'MPI_Win' has not been declared /opt/petsc/petsc-dev/include/petscsf.h:71:105: error: 'MPI_Win' has not been declared In file included from /opt/petsc/petsc-dev/include/petscvec.h:482:0, from /opt/petsc/petsc-dev/include/petscmat.h:6, from /opt/petsc/petsc-dev/include/petscdm.h:6, from /opt/petsc/petsc-dev/include/petscpc.h:6, from /opt/petsc/petsc-dev/include/petscksp.h:6, from ../PETScSolver.h:14, from ../PETScSolver.cpp:8: /opt/petsc/petsc-dev/include/petsc-private/vecimpl.h:498:3: error: 'MPI_Win' does not name a type -------------------------------------------------------------------------------------- I search over the mailing list and see a post which related to this error but i didn't quite understand the answer and the user has also discontinued the thread. http://lists.mcs.anl.gov/pipermail/petsc-dev/2011-November/006227.html I tried to trace back the definition of MPI_Win struct and see as below: #ifndef PETSC_HAVE_MPI_WIN_CREATE #define PETSC_HAVE_MPI_WIN_CREATE 1 #endif #if !defined(PETSC_HAVE_MPI_WIN_CREATE) /* The intent here is to be able to compile even without a complete MPI. */ typedef struct MPI_Win_MISSING *MPI_Win; #endif For some reason the ./configure define PETSC_HAVE_MPI_WIN_CREATE and compiler skip the MPI_Win definition. However, i also don't know where the struct MPI_Win_MISSING is defined. Please advise the method to resolve this compilation error. My system is Ubuntu 12.04, Gcc 4.6.3. 
Regards, Giang Bui From knepley at gmail.com Thu Jul 5 16:09:48 2012 From: knepley at gmail.com (Matthew Knepley) Date: Thu, 5 Jul 2012 15:09:48 -0600 Subject: [petsc-users] error compiling on MPI_Win In-Reply-To: <4FF600F7.1090109@gmail.com> References: <4FF600F7.1090109@gmail.com> Message-ID: 1) Make sure you have updated BuildSystem: cd $PETSC_DIR/config/BuildSystem hg pull -u 2) Reconfigure and rebuild 3) If its still broken, send configure.log to petsc-maint at mcs.anl.gov Matt On Thu, Jul 5, 2012 at 3:02 PM, hbui wrote: > Hi > > When compiling a small petsc example, i got an error as such: > > ------------------------------**------------------------------** > -------------------------- > In file included from /opt/petsc/petsc-dev/include/**petscis.h:8:0, > from /opt/petsc/petsc-dev/include/**petscvec.h:9, > from /opt/petsc/petsc-dev/include/**petscmat.h:6, > from /opt/petsc/petsc-dev/include/**petscdm.h:6, > from /opt/petsc/petsc-dev/include/**petscpc.h:6, > from /opt/petsc/petsc-dev/include/**petscksp.h:6, > from ../PETScSolver.h:14, > from ../PETScSolver.cpp:8: > /opt/petsc/petsc-dev/include/**petscsf.h:69:119: error: 'MPI_Win' has not > been declared > /opt/petsc/petsc-dev/include/**petscsf.h:70:80: error: 'MPI_Win' has not > been declared > /opt/petsc/petsc-dev/include/**petscsf.h:71:105: error: 'MPI_Win' has not > been declared > In file included from /opt/petsc/petsc-dev/include/**petscvec.h:482:0, > from /opt/petsc/petsc-dev/include/**petscmat.h:6, > from /opt/petsc/petsc-dev/include/**petscdm.h:6, > from /opt/petsc/petsc-dev/include/**petscpc.h:6, > from /opt/petsc/petsc-dev/include/**petscksp.h:6, > from ../PETScSolver.h:14, > from ../PETScSolver.cpp:8: > /opt/petsc/petsc-dev/include/**petsc-private/vecimpl.h:498:3: error: > 'MPI_Win' does not name a type > ------------------------------**------------------------------** > -------------------------- > > I search over the mailing list and see a post which related to this error > but i didn't quite understand the answer and the user has also discontinued > the thread. > http://lists.mcs.anl.gov/**pipermail/petsc-dev/2011-**November/006227.html > > I tried to trace back the definition of MPI_Win struct and see as below: > > #ifndef PETSC_HAVE_MPI_WIN_CREATE > #define PETSC_HAVE_MPI_WIN_CREATE 1 > #endif > > #if !defined(PETSC_HAVE_MPI_WIN_**CREATE) /* The intent here is to be > able to compile even without a complete MPI. */ > typedef struct MPI_Win_MISSING *MPI_Win; > #endif > > For some reason the ./configure define PETSC_HAVE_MPI_WIN_CREATE and > compiler skip the MPI_Win definition. However, i also don't know where the > struct MPI_Win_MISSING is defined. > > Please advise the method to resolve this compilation error. > > My system is Ubuntu 12.04, Gcc 4.6.3. > > Regards, > Giang Bui > > -- What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. -- Norbert Wiener -------------- next part -------------- An HTML attachment was scrubbed... URL: From jedbrown at mcs.anl.gov Thu Jul 5 16:10:28 2012 From: jedbrown at mcs.anl.gov (Jed Brown) Date: Thu, 5 Jul 2012 13:10:28 -0800 Subject: [petsc-users] error compiling on MPI_Win In-Reply-To: <4FF600F7.1090109@gmail.com> References: <4FF600F7.1090109@gmail.com> Message-ID: On Thu, Jul 5, 2012 at 1:02 PM, hbui wrote: > Hi > > When compiling a small petsc example, i got an error as such: > > > Thanks for investigating. Can you send the full command used to compile this example? 
> > For some reason the ./configure define PETSC_HAVE_MPI_WIN_CREATE and > compiler skip the MPI_Win definition. and please send configure.log to petsc-maint at mcs.anl.gov > However, i also don't know where the struct MPI_Win_MISSING is defined. > It's not defined, but it's also not dereferenced. -------------- next part -------------- An HTML attachment was scrubbed... URL: From jedbrown at mcs.anl.gov Thu Jul 5 16:41:11 2012 From: jedbrown at mcs.anl.gov (Jed Brown) Date: Thu, 5 Jul 2012 13:41:11 -0800 Subject: [petsc-users] Adapting MatMult and PCMG functions in matrix-free method. In-Reply-To: <1341513506.35776.YahooMailNeo@web125506.mail.ne1.yahoo.com> References: <1341495271.4863.YahooMailNeo@web28704.mail.ir2.yahoo.com> <1341496330.95236.YahooMailNeo@web28703.mail.ir2.yahoo.com> <1341506451.85850.YahooMailNeo@web125506.mail.ne1.yahoo.com> <1341513506.35776.YahooMailNeo@web125506.mail.ne1.yahoo.com> Message-ID: On Thu, Jul 5, 2012 at 10:38 AM, domenico lahaye wrote: > Hi Jed, > > Thank you for your reply. The Algorithm II is described in the paper > > @ARTICLE{yoginabben1, > author = {Erlangga, Y.A. and R. Nabben}, > title = {On a multilevel {K}rylov Method for the {H}elmholtz Equation > preconditioned > by Shifted {L}aplacian}, > journal = {Electronic Transaction on Num. Analysis (ETNA)}, > year = {2008}, > volume = {31}, > pages = {403--424}, > } > As I interpret this method, you have custom interpolation operators and a Krylov smoother that is itself a multigrid cycle with the shifted operator. In PETSc parlance, you have two matrices, the operator A and the preconditioning matrix M. Here M would be the shifted matrix and smoothing would involve MG cycles with coarsened approximations of M. I would start with the PCMG interface, use a Krylov method as the smoother, and perhaps use PCComposite or PCShell as the preconditioner for the Krylov smoother. Eventually the preconditioner for your Krylov smoother will call the "other" MG cycle (a standard method applied to M). Note that this method involves an enormous number of synchronization points as well as high operator complexity, so you may find the cost to be quite high even though the iteration count is not bad. > > Kind wishes, Domenico. > > ------------------------------ > *From:* Jed Brown > *To:* domenico lahaye ; PETSc users list < > petsc-users at mcs.anl.gov> > *Cc:* Abdul Hanan Sheikh > *Sent:* Thursday, July 5, 2012 7:47 PM > > *Subject:* Re: [petsc-users] Adapting MatMult and PCMG functions in > matrix-free method. > > Can you reference a paper or some notes on the algorithm? > > On Thu, Jul 5, 2012 at 8:40 AM, domenico lahaye > wrote: > > Dear PETSc developers, > > Thank you for your support to Abdul. > > We are in the process of developing a multilevel > Krylov solver for the Helmholtz equation. Abdul > has implemented Algorithm I for this purpose. > We next would like to implement Algorithm II. > Algorithm || amounts to replacing every occurrence > of the system matrix $A$ in Algorithm I by > $M^{-1} A$. This replacement should occur on all > levels. We thought of two ways to realize this > replacement. > > 1) We thought of adopting a matrix-free approach, > and to plug in the operation with $M^{-1}$ there. > This would require a ksp context inside MatMult. > We wonder whether this a approach is feasible to > take. > > 2) The other approach would by to implement a > customized pcmg preconditioner that we can adapt > to our needs. This could be a more elegant approach, > at the cost of doing more work. 
Is the assumption > that the second approach is more elegant correct > and would you be able to give advice on how to tackle > this approach? > > Kind wishes, Domenico. > > > ------------------------------ > ** > > > ----- Forwarded Message ----- > *From:* Matthew Knepley > *To:* Abdul Hanan Sheikh ; PETSc users list < > petsc-users at mcs.anl.gov> > *Sent:* Thursday, 5 July 2012, 15:45 > *Subject:* Re: [petsc-users] Adapting MatMult and PCMG functions in > matrix-free method. > > On Thu, Jul 5, 2012 at 7:34 AM, Abdul Hanan Sheikh > wrote: > > Dear developers and users, > Summer greetings. > We have few question listen below: > > 1. > The first question is about adapting " MatMult " function in matrix-free > method. > We intend to incorporate a KSP context inside "MatMult" . The immediate > question is how to > provide more than one matrices as input. > > > You provide extra data through the context for the MATSHELL > > > Is this idea of incorporating a KSP context inside "MatMult" function > workable ? Does it make any confrontation > with philosophy of development of Petsc. ? > > > I am not sure you want this. Do you think PCKSP can do what you want? > There is not enough information here to help us answer. > > > 2. > An other advance level feedback is needed. > Re-implementing PCMG function { mg.c } will lead any violation of > philosophy of Petsc-development ?? > > > Again, there is not enough information. Can you do what you want by just > replacing the monitors? > > Thanks, > > Matt > > > 3. > Which one of the above both is more elegant and feasible to work on ? > > > Thanking in anticipation, > Abdul > > > > > > -- > What most experimenters take for granted before they begin their > experiments is infinitely more interesting than any results to which their > experiments lead. > -- Norbert Wiener > > > > > > > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From hgbk2008 at gmail.com Thu Jul 5 16:42:18 2012 From: hgbk2008 at gmail.com (hbui) Date: Thu, 05 Jul 2012 23:42:18 +0200 Subject: [petsc-users] error compiling on MPI_Win In-Reply-To: References: <4FF600F7.1090109@gmail.com> Message-ID: <4FF60A3A.4070304@gmail.com> On 07/05/2012 11:10 PM, Jed Brown wrote: > On Thu, Jul 5, 2012 at 1:02 PM, hbui > wrote: > > Hi > > When compiling a small petsc example, i got an error as such: > > > > Thanks for investigating. Can you send the full command used to > compile this example? > > > For some reason the ./configure define PETSC_HAVE_MPI_WIN_CREATE > and compiler skip the MPI_Win definition. > > > and please send configure.log to petsc-maint at mcs.anl.gov > > > However, i also don't know where the struct MPI_Win_MISSING is > defined. > > > It's not defined, but it's also not dereferenced. Thanks for reply, the compiler command is: g++ -I/opt/petsc/petsc-dev/include -I/opt/petsc/petsc-dev/include/mpiuni -O3 -Wall -c -fmessage-length=0 -MMD -MP -MF"PETScSolver.d" -MT"PETScSolver.d" -o "PETScSolver.o" "../PETScSolver.cpp" I have used the latest BuildSystem (hg pull -u) before compiling. I already sent the configure.log to petsc-maint at mcs.anl.gov Regads, Giang Bui -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- An embedded message was scrubbed... 
From: Matthew Knepley Subject: Re: [petsc-users] error compiling on MPI_Win Date: Thu, 5 Jul 2012 15:09:48 -0600 Size: 11826 URL: From jedbrown at mcs.anl.gov Thu Jul 5 16:52:13 2012 From: jedbrown at mcs.anl.gov (Jed Brown) Date: Thu, 5 Jul 2012 13:52:13 -0800 Subject: [petsc-users] error compiling on MPI_Win In-Reply-To: <4FF60A3A.4070304@gmail.com> References: <4FF600F7.1090109@gmail.com> <4FF60A3A.4070304@gmail.com> Message-ID: On Thu, Jul 5, 2012 at 1:42 PM, hbui wrote: > Thanks for reply, the compiler command is: > g++ -I/opt/petsc/petsc-dev/include -I/opt/petsc/petsc-dev/include/mpiuni > -O3 -Wall -c -fmessage-length=0 -MMD -MP -MF"PETScSolver.d" > -MT"PETScSolver.d" -o "PETScSolver.o" "../PETScSolver.cpp" > You are using the wrong compiler and including the mpiuni directory. Compilers: C Compiler: mpicc -fPIC -Wall -Wwrite-strings -Wno-strict-aliasing -Wno-unknown-pragmas -O Fortran Compiler: mpif90 -fPIC -Wall -Wno-unused-variable -Wno-unused-dummy-argument -O Linkers: Shared linker: mpicc -shared -fPIC -Wall -Wwrite-strings -Wno-strict-aliasing -Wno-unknown-pragmas -O Dynamic linker: mpicc -shared -fPIC -Wall -Wwrite-strings -Wno-strict-aliasing -Wno-unknown-pragmas -O MPI: Includes: -I/usr/lib/openmpi/include -I/usr/lib/openmpi/include/openmpi Use a PETSc makefile or get the compiler and include paths from PETSc. This mpicc appears to be a valid MPI-2 implementation so everything should work with it. > > I have used the latest BuildSystem (hg pull -u) before compiling. I > already sent the configure.log to petsc-maint at mcs.anl.gov > -------------- next part -------------- An HTML attachment was scrubbed... URL: From hgbk2008 at gmail.com Thu Jul 5 17:07:31 2012 From: hgbk2008 at gmail.com (hbui) Date: Fri, 06 Jul 2012 00:07:31 +0200 Subject: [petsc-users] [petsc-maint #122877] configure.log of Re: error compiling on MPI_Win In-Reply-To: References: <4FF60A27.1080407@gmail.com> <4FF60C48.4000902@gmail.com> <4FF60DD1.5050503@gmail.com> <4FF60DD1.5050503@gmail.com> Message-ID: <4FF61023.5070503@gmail.com> On 07/05/2012 11:59 PM, Jed Brown wrote: > On Thu, Jul 5, 2012 at 1:57 PM, hbui > wrote: > > > As Jed point out, i should use mpicc instead of g++. I will rewrite > makefile and try to compile again. > > > Yes, use mpicc/mpicxx and be sure to include the PETSc directories > given by the current PETSC_ARCH (those won't include that mpiuni > directory). Thank you very much, the compilation was fine after i changed to mpicc. Giang Bui -------------- next part -------------- An HTML attachment was scrubbed... URL: From bsmith at mcs.anl.gov Thu Jul 5 19:54:02 2012 From: bsmith at mcs.anl.gov (Barry Smith) Date: Thu, 5 Jul 2012 19:54:02 -0500 Subject: [petsc-users] Adapting MatMult and PCMG functions in matrix-free method. In-Reply-To: References: <1341495271.4863.YahooMailNeo@web28704.mail.ir2.yahoo.com> <1341496330.95236.YahooMailNeo@web28703.mail.ir2.yahoo.com> <1341506451.85850.YahooMailNeo@web125506.mail.ne1.yahoo.com> <1341513506.35776.YahooMailNeo@web125506.mail.ne1.yahoo.com> Message-ID: <52A4BC2B-E9E0-4EE2-9531-0DEFFAC7EE86@mcs.anl.gov> On Jul 5, 2012, at 4:41 PM, Jed Brown wrote: > On Thu, Jul 5, 2012 at 10:38 AM, domenico lahaye wrote: > Hi Jed, > > Thank you for your reply. The Algorithm II is described in the paper > > @ARTICLE{yoginabben1, > author = {Erlangga, Y.A. and R. Nabben}, > title = {On a multilevel {K}rylov Method for the {H}elmholtz Equation preconditioned > by Shifted {L}aplacian}, > journal = {Electronic Transaction on Num. 
Analysis (ETNA)}, > year = {2008}, > volume = {31}, > pages = {403--424}, > } > > As I interpret this method, you have custom interpolation operators and a Krylov smoother that is itself a multigrid cycle with the shifted operator. In PETSc parlance, you have two matrices, the operator A and the preconditioning matrix M. Here M would be the shifted matrix and smoothing would involve MG cycles with coarsened approximations of M. I would start with the PCMG interface, use a Krylov method as the smoother, and perhaps use PCComposite or PCShell as the preconditioner for the Krylov smoother. Eventually the preconditioner for your Krylov smoother will call the "other" MG cycle (a standard method applied to M). Note this shouldn't require "hacking" or manually modifying the PCMG code that currently exists. It would just involve clever use of a different PCMG inside the PC of the original PCMG. Barry > > Note that this method involves an enormous number of synchronization points as well as high operator complexity, so you may find the cost to be quite high even though the iteration count is not bad. > > > Kind wishes, Domenico. > > From: Jed Brown > To: domenico lahaye ; PETSc users list > Cc: Abdul Hanan Sheikh > Sent: Thursday, July 5, 2012 7:47 PM > > Subject: Re: [petsc-users] Adapting MatMult and PCMG functions in matrix-free method. > > Can you reference a paper or some notes on the algorithm? > > On Thu, Jul 5, 2012 at 8:40 AM, domenico lahaye wrote: > Dear PETSc developers, > > Thank you for your support to Abdul. > > We are in the process of developing a multilevel > Krylov solver for the Helmholtz equation. Abdul > has implemented Algorithm I for this purpose. > We next would like to implement Algorithm II. > Algorithm || amounts to replacing every occurrence > of the system matrix $A$ in Algorithm I by > $M^{-1} A$. This replacement should occur on all > levels. We thought of two ways to realize this > replacement. > > 1) We thought of adopting a matrix-free approach, > and to plug in the operation with $M^{-1}$ there. > This would require a ksp context inside MatMult. > We wonder whether this a approach is feasible to > take. > > 2) The other approach would by to implement a > customized pcmg preconditioner that we can adapt > to our needs. This could be a more elegant approach, > at the cost of doing more work. Is the assumption > that the second approach is more elegant correct > and would you be able to give advice on how to tackle > this approach? > > Kind wishes, Domenico. > > > > > ----- Forwarded Message ----- > From: Matthew Knepley > To: Abdul Hanan Sheikh ; PETSc users list > Sent: Thursday, 5 July 2012, 15:45 > Subject: Re: [petsc-users] Adapting MatMult and PCMG functions in matrix-free method. > > On Thu, Jul 5, 2012 at 7:34 AM, Abdul Hanan Sheikh wrote: > Dear developers and users, > Summer greetings. > We have few question listen below: > > 1. > The first question is about adapting " MatMult " function in matrix-free method. > We intend to incorporate a KSP context inside "MatMult" . The immediate question is how to > provide more than one matrices as input. > > You provide extra data through the context for the MATSHELL > > Is this idea of incorporating a KSP context inside "MatMult" function workable ? Does it make any confrontation > with philosophy of development of Petsc. ? > > I am not sure you want this. Do you think PCKSP can do what you want? There is not enough information here to help us answer. > > 2. > An other advance level feedback is needed. 
> Re-implementing PCMG function { mg.c } will lead any violation of philosophy of Petsc-development ?? > > Again, there is not enough information. Can you do what you want by just replacing the monitors? > > Thanks, > > Matt > > 3. > Which one of the above both is more elegant and feasible to work on ? > > > Thanking in anticipation, > Abdul > > > > > > -- > What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. > -- Norbert Wiener > > > > > > > > From domenico_lahaye at yahoo.com Fri Jul 6 00:00:55 2012 From: domenico_lahaye at yahoo.com (domenico lahaye) Date: Thu, 5 Jul 2012 22:00:55 -0700 (PDT) Subject: [petsc-users] Adapting MatMult and PCMG functions in matrix-free method. In-Reply-To: <52A4BC2B-E9E0-4EE2-9531-0DEFFAC7EE86@mcs.anl.gov> References: <1341495271.4863.YahooMailNeo@web28704.mail.ir2.yahoo.com> <1341496330.95236.YahooMailNeo@web28703.mail.ir2.yahoo.com> <1341506451.85850.YahooMailNeo@web125506.mail.ne1.yahoo.com> <1341513506.35776.YahooMailNeo@web125506.mail.ne1.yahoo.com> <52A4BC2B-E9E0-4EE2-9531-0DEFFAC7EE86@mcs.anl.gov> Message-ID: <1341550855.29881.YahooMailNeo@web125506.mail.ne1.yahoo.com> Thank you for the additional feedback.? What you suggest was my first guess. I did not? see a way however to define the restriction operator? as a product of two operators, in casu the operators? A and M. This then lead to my question of defining? the approximate solve with M inside the MatMult? routine.? Does this make sense?? Domenico.? ----- Original Message ----- From: Barry Smith To: PETSc users list Cc: domenico lahaye ; Abdul Hanan Sheikh Sent: Friday, July 6, 2012 2:54 AM Subject: Re: [petsc-users] Adapting MatMult and PCMG functions in matrix-free method. On Jul 5, 2012, at 4:41 PM, Jed Brown wrote: > On Thu, Jul 5, 2012 at 10:38 AM, domenico lahaye wrote: > Hi Jed, > >? Thank you for your reply. The Algorithm II is described in the paper > > @ARTICLE{yoginabben1, >? author = {Erlangga, Y.A. and R. Nabben}, >? title = {On a multilevel {K}rylov Method for the {H}elmholtz Equation preconditioned > ??? by Shifted {L}aplacian}, >? journal = {Electronic Transaction on Num. Analysis (ETNA)}, >? year = {2008}, >? volume = {31}, >? pages = {403--424}, >? } > > As I interpret this method, you have custom interpolation operators and a Krylov smoother that is itself a multigrid cycle with the shifted operator. In PETSc parlance, you have two matrices, the operator A and the preconditioning matrix M. Here M would be the shifted matrix and smoothing would involve MG cycles with coarsened approximations of M. I would start with the PCMG interface, use a Krylov method as the smoother, and perhaps use PCComposite or PCShell as the preconditioner for the Krylov smoother. Eventually the preconditioner for your Krylov smoother will call the "other" MG cycle (a standard method applied to M). ? Note this shouldn't require "hacking" or manually modifying the PCMG code that currently exists. It would just involve clever use of a different PCMG inside the PC of the original PCMG. ? Barry > > Note that this method involves an enormous number of synchronization points as well as high operator complexity, so you may find the cost to be quite high even though the iteration count is not bad. >? > >? Kind wishes, Domenico. 
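For readers following this exchange, here is a hedged skeleton of the setup Jed and Barry describe above: an outer PCMG on A whose level smoothers are Krylov methods, each preconditioned by a shell that applies a multigrid cycle on the shifted operator M. This is an editorial sketch, not code from the thread; SetupMLKrylov, ApplyShiftedLaplaceMG and the interp array are illustrative names, the calling sequences are the petsc-3.3 ones, and the per-level operators and the coarse solve still have to be supplied by the user.

/* Editorial sketch (assumptions as stated above); ApplyShiftedLaplaceMG is a
   hypothetical user routine that applies a V-cycle on the shifted operator M. */
#include <petscksp.h>

extern PetscErrorCode ApplyShiftedLaplaceMG(PC,Vec,Vec);

PetscErrorCode SetupMLKrylov(KSP outer,Mat A,PetscInt nlevels,Mat interp[])
{
  PC             pcmg;
  PetscInt       l;
  PetscErrorCode ierr;

  PetscFunctionBegin;
  ierr = KSPSetType(outer,KSPFGMRES);CHKERRQ(ierr);               /* flexible outer Krylov */
  ierr = KSPSetOperators(outer,A,A,DIFFERENT_NONZERO_PATTERN);CHKERRQ(ierr);
  ierr = KSPGetPC(outer,&pcmg);CHKERRQ(ierr);
  ierr = PCSetType(pcmg,PCMG);CHKERRQ(ierr);
  ierr = PCMGSetLevels(pcmg,nlevels,NULL);CHKERRQ(ierr);
  for (l=1; l<nlevels; l++) {
    KSP smoother;
    PC  inner;
    ierr = PCMGSetInterpolation(pcmg,l,interp[l]);CHKERRQ(ierr);  /* custom grid transfer */
    ierr = PCMGGetSmoother(pcmg,l,&smoother);CHKERRQ(ierr);
    ierr = KSPSetType(smoother,KSPGMRES);CHKERRQ(ierr);           /* Krylov smoother on level l */
    ierr = KSPGetPC(smoother,&inner);CHKERRQ(ierr);
    ierr = PCSetType(inner,PCSHELL);CHKERRQ(ierr);                /* could equally be another PCMG */
    ierr = PCShellSetApply(inner,ApplyShiftedLaplaceMG);CHKERRQ(ierr);
  }
  /* per-level operators go on each smoother with KSPSetOperators();
     the level-0 coarse solve is configured via PCMGGetCoarseSolve() */
  PetscFunctionReturn(0);
}

Once such a hierarchy exists, much of this can also be driven from the options database, e.g. -ksp_type fgmres -pc_type mg -mg_levels_ksp_type gmres, with only the shell (or inner PCMG) attached in code.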
> > From: Jed Brown > To: domenico lahaye ; PETSc users list > Cc: Abdul Hanan Sheikh > Sent: Thursday, July 5, 2012 7:47 PM > > Subject: Re: [petsc-users] Adapting MatMult and PCMG functions in matrix-free method. > > Can you reference a paper or some notes on the algorithm? > > On Thu, Jul 5, 2012 at 8:40 AM, domenico lahaye wrote: > Dear PETSc developers, > >? Thank you for your support to Abdul. > >? ? We are in the process of developing a multilevel > Krylov solver for the Helmholtz equation. Abdul > has implemented Algorithm I for this purpose. > We next would like to implement Algorithm II. > Algorithm || amounts to replacing every occurrence > of the system matrix $A$ in Algorithm I by > $M^{-1} A$. This replacement should occur on all > levels. We thought of two ways to realize this > replacement. > > 1) We thought of adopting a matrix-free approach, > and to plug in the operation with? $M^{-1}$ there. > This would require a ksp context inside MatMult.? ? > We wonder whether this a approach is feasible to > take. > > 2) The other approach would by to implement a > customized pcmg preconditioner that we can adapt > to our needs. This could be a more elegant approach, > at the cost of doing more work. Is the assumption > that the second approach is more elegant correct > and would you be able to give advice on how to tackle > this approach? > >? Kind wishes, Domenico. > > > > > ----- Forwarded Message ----- > From: Matthew Knepley > To: Abdul Hanan Sheikh ; PETSc users list > Sent: Thursday, 5 July 2012, 15:45 > Subject: Re: [petsc-users] Adapting MatMult and PCMG functions in matrix-free method. > > On Thu, Jul 5, 2012 at 7:34 AM, Abdul Hanan Sheikh wrote: > Dear developers and users, > Summer greetings. > We have few question listen below: > > 1. > The first question is about adapting " MatMult " function in matrix-free method. > We intend to incorporate a KSP context inside "MatMult" . The immediate question is how to > provide more than one matrices as input. > > You provide extra data through the context for the MATSHELL >? > Is this idea of incorporating a KSP context inside "MatMult" function workable ? Does it make any confrontation > with philosophy of development of Petsc. ? > > I am not sure you want this. Do you think PCKSP can do what you want? There is not enough information here to help us answer. >? > 2. > An other advance level feedback is needed. >? Re-implementing PCMG function { mg.c } will lead any violation of philosophy of Petsc-development ?? > > Again, there is not enough information. Can you do what you want by just replacing the monitors? > >? Thanks, > >? ? ? Matt >? > 3. > Which one of the above both is more elegant and feasible to work on ? > > > Thanking in anticipation, > Abdul > > > > > > -- > What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. > -- Norbert Wiener > > > > > > > > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From zonexo at gmail.com Fri Jul 6 09:02:58 2012 From: zonexo at gmail.com (TAY wee-beng) Date: Fri, 06 Jul 2012 09:02:58 -0500 Subject: [petsc-users] [petsc-dev] Getting 2d array with updated ghost values from DM global vector In-Reply-To: References: <4FF170BF.3060303@gmail.com> <4FF1A4FD.8040003@gmail.com> <54E13E1A-BB29-47FD-B671-3BC3E5B6329D@lsu.edu> <106872A9-6A6F-41F7-9291-1F1389A0597C@mcs.anl.gov> <0DB46F4E-EB0D-4300-B188-EBCF454EC99C@lsu.edu> <3CD4EA3A-30CD-45A6-AEC2-5DC1C928D443@mcs.anl.gov> Message-ID: <4FF6F011.8060003@gmail.com> On 3/7/2012 1:23 PM, Satish Balay wrote: > On Tue, 3 Jul 2012, Barry Smith wrote: > >> On Jul 3, 2012, at 3:08 AM, Blaise Bourdin wrote: >> >>> On Jul 3, 2012, at 4:10 AM, Barry Smith wrote: >>> >>>> Blaise, >>>> >>>> I don't understand why the patch does anything: >>>> >>>> - *ierr = VecRestoreArray(*v,0);if (*ierr) return; >>>> + PetscScalar *fa; >>>> + *ierr = F90Array1dAccess(a,PETSC_SCALAR,(void**)&fa PETSC_F90_2PTR_PARAM(ptrd)); >>>> + *ierr = VecRestoreArray(*v,&fa);if (*ierr) return; >>>> *ierr = F90Array1dDestroy(&a,PETSC_SCALAR PETSC_F90_2PTR_PARAM(ptrd)); >>>> >>>> All that passing &fa into VecRestoreArray() does is cause fa to be zeroed. Why would that have any affect on anything? >>> >>> Not sure either, I quite don't understand this code, but I noticed that the logic of VecRestoreArrayF90 was different from that of DMDAVecRestoreArrayF90 >>> >>> src/vec/vec/interface/f90-custom/zvectorf90.c:33 >>> PetscScalar *fa; >>> *__ierr = F90Array1dAccess(ptr,PETSC_SCALAR,(void**)&fa PETSC_F90_2PTR_PARAM(ptrd));if (*__ierr) return; >>> *__ierr = F90Array1dDestroy(ptr,PETSC_SCALAR PETSC_F90_2PTR_PARAM(ptrd));if (*__ierr) return; >> It could be the above line is important; but the Accesser and restore array are not. >> >> I'll have Satish apply the patch. > pushed to petsc-3.3 [petsc-dev will get this update] > > Satish Hi, I just tested with the latest petsc-dev but it doesn't work in intel linux for ex11f90. Has the patch been applied? Also, is there any chance of it working under 3d with multiple dof since that's what I'm using and I have other problems with gfortran. Lastly, if the patch is applied, it works with 3d da with 1 dof? Is that right? 
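As a point of reference for this thread, here is a minimal sketch of the access pattern being asked about, for a 3-D DMDA with dof > 1. It assumes the patched DMDAVecGetArrayF90 behaviour discussed in the replies, with the component index as the first array dimension starting at 0 and the remaining indices being the global indices from DMDAGetGhostCorners; the include list and the name FillLocalForm are illustrative and should be matched to your own petsc-3.3 build.

  subroutine FillLocalForm(da, lv, ierr)
  implicit none
#include "finclude/petscsys.h"
#include "finclude/petscvec.h"
#include "finclude/petscdm.h"
#include "finclude/petscdmda.h"
#include "finclude/petscdmda.h90"
  DM             da
  Vec            lv
  PetscErrorCode ierr
  PetscScalar, pointer :: x(:,:,:,:)
  PetscInt       gxs, gys, gzs, gxm, gym, gzm, i, j, k

  ! ghosted corners, so the ghost entries of the local vector are reachable
  call DMDAGetGhostCorners(da, gxs, gys, gzs, gxm, gym, gzm, ierr)
  call DMDAVecGetArrayF90(da, lv, x, ierr)
  do k = gzs, gzs+gzm-1
     do j = gys, gys+gym-1
        do i = gxs, gxs+gxm-1
           x(0,i,j,k) = 1.0   ! first component; dof index assumed 0-based
           x(1,i,j,k) = 2.0   ! second component
        end do
     end do
  end do
  call DMDAVecRestoreArrayF90(da, lv, x, ierr)
  end subroutine FillLocalForm

The same pointer shape, with the component index first, is what the 2-D dof=2 code later in this thread relies on.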
From bourdin at lsu.edu Fri Jul 6 06:24:54 2012 From: bourdin at lsu.edu (Blaise Bourdin) Date: Fri, 6 Jul 2012 15:24:54 +0400 Subject: [petsc-users] [petsc-dev] Getting 2d array with updated ghost values from DM global vector In-Reply-To: <4FF6F011.8060003@gmail.com> References: <4FF170BF.3060303@gmail.com> <4FF1A4FD.8040003@gmail.com> <54E13E1A-BB29-47FD-B671-3BC3E5B6329D@lsu.edu> <106872A9-6A6F-41F7-9291-1F1389A0597C@mcs.anl.gov> <0DB46F4E-EB0D-4300-B188-EBCF454EC99C@lsu.edu> <3CD4EA3A-30CD-45A6-AEC2-5DC1C928D443@mcs.anl.gov> <4FF6F011.8060003@gmail.com> Message-ID: <24B6C15D-EAB7-4105-A8BC-F30C79389CFC@lsu.edu> On Jul 6, 2012, at 6:02 PM, TAY wee-beng wrote: > On 3/7/2012 1:23 PM, Satish Balay wrote: >> On Tue, 3 Jul 2012, Barry Smith wrote: >> >>> On Jul 3, 2012, at 3:08 AM, Blaise Bourdin wrote: >>> >>>> On Jul 3, 2012, at 4:10 AM, Barry Smith wrote: >>>> >>>>> Blaise, >>>>> >>>>> I don't understand why the patch does anything: >>>>> >>>>> - *ierr = VecRestoreArray(*v,0);if (*ierr) return; >>>>> + PetscScalar *fa; >>>>> + *ierr = F90Array1dAccess(a,PETSC_SCALAR,(void**)&fa PETSC_F90_2PTR_PARAM(ptrd)); >>>>> + *ierr = VecRestoreArray(*v,&fa);if (*ierr) return; >>>>> *ierr = F90Array1dDestroy(&a,PETSC_SCALAR PETSC_F90_2PTR_PARAM(ptrd)); >>>>> >>>>> All that passing &fa into VecRestoreArray() does is cause fa to be zeroed. Why would that have any affect on anything? >>>> >>>> Not sure either, I quite don't understand this code, but I noticed that the logic of VecRestoreArrayF90 was different from that of DMDAVecRestoreArrayF90 >>>> >>>> src/vec/vec/interface/f90-custom/zvectorf90.c:33 >>>> PetscScalar *fa; >>>> *__ierr = F90Array1dAccess(ptr,PETSC_SCALAR,(void**)&fa PETSC_F90_2PTR_PARAM(ptrd));if (*__ierr) return; >>>> *__ierr = F90Array1dDestroy(ptr,PETSC_SCALAR PETSC_F90_2PTR_PARAM(ptrd));if (*__ierr) return; >>> It could be the above line is important; but the Accesser and restore array are not. >>> >>> I'll have Satish apply the patch. >> pushed to petsc-3.3 [petsc-dev will get this update] >> >> Satish > Hi, > > I just tested with the latest petsc-dev but it doesn't work in intel linux for ex11f90. Has the patch been applied? Try to clone petsc-3.3 from the mercurial repository http://petsc.cs.iit.edu/petsc/releases/petsc-3.3/ It looks like the first patch has not made its way to the tarball or petsc-dev yet. You can also apply the patch manually: cd $PETSC_DIR patch -p1 < DMDAVecGetArrayF90.patch > Also, is there any chance of it working under 3d with multiple dof since that's what I'm using and I have other problems with gfortran. Lastly, if the patch is applied, it works with 3d da with 1 dof? Is that right? I am sending another patch that should take care of the 3d case with >1 dof to the developers list. Blaise -- Department of Mathematics and Center for Computation & Technology Louisiana State University, Baton Rouge, LA 70803, USA Tel. +1 (225) 578 1612, Fax +1 (225) 578 4276 http://www.math.lsu.edu/~bourdin From B.Sanderse at cwi.nl Fri Jul 6 08:58:25 2012 From: B.Sanderse at cwi.nl (Benjamin Sanderse) Date: Fri, 6 Jul 2012 15:58:25 +0200 Subject: [petsc-users] list of vectors and matrices Message-ID: <79C77875-E911-4615-B36F-5F64477FF97D@cwi.nl> Dear all, I would like to get as output a list of all vectors and matrices (if possible with their size and memory usage) that are in memory at a certain moment while the code is running. Is this possible? 
Regards, Benjamin From knepley at gmail.com Fri Jul 6 09:04:30 2012 From: knepley at gmail.com (Matthew Knepley) Date: Fri, 6 Jul 2012 08:04:30 -0600 Subject: [petsc-users] list of vectors and matrices In-Reply-To: <79C77875-E911-4615-B36F-5F64477FF97D@cwi.nl> References: <79C77875-E911-4615-B36F-5F64477FF97D@cwi.nl> Message-ID: On Fri, Jul 6, 2012 at 7:58 AM, Benjamin Sanderse wrote: > Dear all, > > I would like to get as output a list of all vectors and matrices (if > possible with their size and memory usage) that are in memory at a certain > moment while the code is running. Is this possible? > We do not keep global lists of all objects. You could modify the logging structures to do what you want. Matt > Regards, > > Benjamin -- What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. -- Norbert Wiener -------------- next part -------------- An HTML attachment was scrubbed... URL: From bsmith at mcs.anl.gov Fri Jul 6 10:46:39 2012 From: bsmith at mcs.anl.gov (Barry Smith) Date: Fri, 6 Jul 2012 10:46:39 -0500 Subject: [petsc-users] Adapting MatMult and PCMG functions in matrix-free method. In-Reply-To: <1341550855.29881.YahooMailNeo@web125506.mail.ne1.yahoo.com> References: <1341495271.4863.YahooMailNeo@web28704.mail.ir2.yahoo.com> <1341496330.95236.YahooMailNeo@web28703.mail.ir2.yahoo.com> <1341506451.85850.YahooMailNeo@web125506.mail.ne1.yahoo.com> <1341513506.35776.YahooMailNeo@web125506.mail.ne1.yahoo.com> <52A4BC2B-E9E0-4EE2-9531-0DEFFAC7EE86@mcs.anl.gov> <1341550855.29881.YahooMailNeo@web125506.mail.ne1.yahoo.com> Message-ID: On Jul 6, 2012, at 12:00 AM, domenico lahaye wrote: > Thank you for the additional feedback. > > What you suggest was my first guess. I did not > see a way however to define the restriction operator > as a product of two operators, in casu the operators > A and M. This then lead to my question of defining > the approximate solve with M inside the MatMult > routine. > MATSHELL is the way to construct custom MatMults. If that is your question. Barry > Does this make sense? > > Domenico. > > ----- Original Message ----- > From: Barry Smith > To: PETSc users list > Cc: domenico lahaye ; Abdul Hanan Sheikh > Sent: Friday, July 6, 2012 2:54 AM > Subject: Re: [petsc-users] Adapting MatMult and PCMG functions in matrix-free method. > > > On Jul 5, 2012, at 4:41 PM, Jed Brown wrote: > > > On Thu, Jul 5, 2012 at 10:38 AM, domenico lahaye wrote: > > Hi Jed, > > > > Thank you for your reply. The Algorithm II is described in the paper > > > > @ARTICLE{yoginabben1, > > author = {Erlangga, Y.A. and R. Nabben}, > > title = {On a multilevel {K}rylov Method for the {H}elmholtz Equation preconditioned > > by Shifted {L}aplacian}, > > journal = {Electronic Transaction on Num. Analysis (ETNA)}, > > year = {2008}, > > volume = {31}, > > pages = {403--424}, > > } > > > > As I interpret this method, you have custom interpolation operators and a Krylov smoother that is itself a multigrid cycle with the shifted operator. In PETSc parlance, you have two matrices, the operator A and the preconditioning matrix M. Here M would be the shifted matrix and smoothing would involve MG cycles with coarsened approximations of M. I would start with the PCMG interface, use a Krylov method as the smoother, and perhaps use PCComposite or PCShell as the preconditioner for the Krylov smoother. 
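A sketch of what "a KSP context inside MatMult" can look like with MATSHELL, as Barry indicates above: the shell's multiply applies y = M^{-1} A x by calling MatMult with A and then KSPSolve with a solver for M held in the shell context. This is an editorial sketch with petsc-3.3 calling sequences; ShellCtx, ShellMult and CreateMinvA are illustrative names, not PETSc API.

/* Editorial sketch (assumptions as stated above). */
#include <petscksp.h>

typedef struct {
  Mat A;     /* Helmholtz operator */
  KSP kspM;  /* solver (e.g. multigrid) for the shifted operator M */
  Vec work;
} ShellCtx;

static PetscErrorCode ShellMult(Mat S,Vec x,Vec y)
{
  ShellCtx       *ctx;
  PetscErrorCode ierr;

  PetscFunctionBegin;
  ierr = MatShellGetContext(S,(void**)&ctx);CHKERRQ(ierr);
  ierr = MatMult(ctx->A,x,ctx->work);CHKERRQ(ierr);      /* work = A x        */
  ierr = KSPSolve(ctx->kspM,ctx->work,y);CHKERRQ(ierr);  /* y    = M^{-1} A x */
  PetscFunctionReturn(0);
}

PetscErrorCode CreateMinvA(Mat A,KSP kspM,Mat *S)
{
  ShellCtx       *ctx;
  MPI_Comm       comm;
  PetscInt       m,n,Mg,Ng;
  PetscErrorCode ierr;

  PetscFunctionBegin;
  ierr = PetscMalloc(sizeof(ShellCtx),&ctx);CHKERRQ(ierr);
  ctx->A = A; ctx->kspM = kspM;
  ierr = MatGetVecs(A,NULL,&ctx->work);CHKERRQ(ierr);    /* work vector sized like A x */
  ierr = MatGetLocalSize(A,&m,&n);CHKERRQ(ierr);
  ierr = MatGetSize(A,&Mg,&Ng);CHKERRQ(ierr);
  ierr = PetscObjectGetComm((PetscObject)A,&comm);CHKERRQ(ierr);
  ierr = MatCreateShell(comm,m,n,Mg,Ng,(void*)ctx,S);CHKERRQ(ierr);
  ierr = MatShellSetOperation(*S,MATOP_MULT,(void(*)(void))ShellMult);CHKERRQ(ierr);
  PetscFunctionReturn(0);
}

Every multiply then costs an inner solve with M, which is where Jed's warning about synchronization points and operator complexity applies; PCKSP, which Matt mentions earlier in the thread, is a preconditioner-side alternative worth comparing.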
Eventually the preconditioner for your Krylov smoother will call the "other" MG cycle (a standard method applied to M). > > Note this shouldn't require "hacking" or manually modifying the PCMG code that currently exists. It would just involve clever use of a different PCMG inside the PC of the original PCMG. > > Barry > > > > > Note that this method involves an enormous number of synchronization points as well as high operator complexity, so you may find the cost to be quite high even though the iteration count is not bad. > > > > > > Kind wishes, Domenico. > > > > From: Jed Brown > > To: domenico lahaye ; PETSc users list > > Cc: Abdul Hanan Sheikh > > Sent: Thursday, July 5, 2012 7:47 PM > > > > Subject: Re: [petsc-users] Adapting MatMult and PCMG functions in matrix-free method. > > > > Can you reference a paper or some notes on the algorithm? > > > > On Thu, Jul 5, 2012 at 8:40 AM, domenico lahaye wrote: > > Dear PETSc developers, > > > > Thank you for your support to Abdul. > > > > We are in the process of developing a multilevel > > Krylov solver for the Helmholtz equation. Abdul > > has implemented Algorithm I for this purpose. > > We next would like to implement Algorithm II. > > Algorithm || amounts to replacing every occurrence > > of the system matrix $A$ in Algorithm I by > > $M^{-1} A$. This replacement should occur on all > > levels. We thought of two ways to realize this > > replacement. > > > > 1) We thought of adopting a matrix-free approach, > > and to plug in the operation with $M^{-1}$ there. > > This would require a ksp context inside MatMult. > > We wonder whether this a approach is feasible to > > take. > > > > 2) The other approach would by to implement a > > customized pcmg preconditioner that we can adapt > > to our needs. This could be a more elegant approach, > > at the cost of doing more work. Is the assumption > > that the second approach is more elegant correct > > and would you be able to give advice on how to tackle > > this approach? > > > > Kind wishes, Domenico. > > > > > > > > > > ----- Forwarded Message ----- > > From: Matthew Knepley > > To: Abdul Hanan Sheikh ; PETSc users list > > Sent: Thursday, 5 July 2012, 15:45 > > Subject: Re: [petsc-users] Adapting MatMult and PCMG functions in matrix-free method. > > > > On Thu, Jul 5, 2012 at 7:34 AM, Abdul Hanan Sheikh wrote: > > Dear developers and users, > > Summer greetings. > > We have few question listen below: > > > > 1. > > The first question is about adapting " MatMult " function in matrix-free method. > > We intend to incorporate a KSP context inside "MatMult" . The immediate question is how to > > provide more than one matrices as input. > > > > You provide extra data through the context for the MATSHELL > > > > Is this idea of incorporating a KSP context inside "MatMult" function workable ? Does it make any confrontation > > with philosophy of development of Petsc. ? > > > > I am not sure you want this. Do you think PCKSP can do what you want? There is not enough information here to help us answer. > > > > 2. > > An other advance level feedback is needed. > > Re-implementing PCMG function { mg.c } will lead any violation of philosophy of Petsc-development ?? > > > > Again, there is not enough information. Can you do what you want by just replacing the monitors? > > > > Thanks, > > > > Matt > > > > 3. > > Which one of the above both is more elegant and feasible to work on ? 
> > > > > > Thanking in anticipation, > > Abdul > > > > > > > > > > > > -- > > What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. > > -- Norbert Wiener > > > > > > > > > > > > > > > > > From domenico_lahaye at yahoo.com Fri Jul 6 10:54:34 2012 From: domenico_lahaye at yahoo.com (domenico lahaye) Date: Fri, 6 Jul 2012 08:54:34 -0700 (PDT) Subject: [petsc-users] Adapting MatMult and PCMG functions in matrix-free method. In-Reply-To: References: <1341495271.4863.YahooMailNeo@web28704.mail.ir2.yahoo.com> <1341496330.95236.YahooMailNeo@web28703.mail.ir2.yahoo.com> <1341506451.85850.YahooMailNeo@web125506.mail.ne1.yahoo.com> <1341513506.35776.YahooMailNeo@web125506.mail.ne1.yahoo.com> <52A4BC2B-E9E0-4EE2-9531-0DEFFAC7EE86@mcs.anl.gov> <1341550855.29881.YahooMailNeo@web125506.mail.ne1.yahoo.com> Message-ID: <1341590074.13151.YahooMailNeo@web125506.mail.ne1.yahoo.com> Barry, ? We will look into it and let you now in case other questions arise. ? Have a nice weekend, Domenico. ________________________________ From: Barry Smith To: domenico lahaye Cc: PETSc users list ; Abdul Hanan Sheikh Sent: Friday, July 6, 2012 5:46 PM Subject: Re: [petsc-users] Adapting MatMult and PCMG functions in matrix-free method. ? On Jul 6, 2012, at 12:00 AM, domenico lahaye wrote: > Thank you for the additional feedback. > > What you suggest was my first guess. I did not > see a way however to define the restriction operator > as a product of two operators, in casu the operators > A and M. This then lead to my question of defining > the approximate solve with M inside the MatMult > routine. > ? ? MATSHELL is the way to construct custom MatMults.? If that is your question. ? ? Barry > Does this make sense? > > Domenico. > > ----- Original Message ----- > From: Barry Smith > To: PETSc users list > Cc: domenico lahaye ; Abdul Hanan Sheikh > Sent: Friday, July 6, 2012 2:54 AM > Subject: Re: [petsc-users] Adapting MatMult and PCMG functions in matrix-free method. > > > On Jul 5, 2012, at 4:41 PM, Jed Brown wrote: > > > On Thu, Jul 5, 2012 at 10:38 AM, domenico lahaye wrote: > > Hi Jed, > > > >? Thank you for your reply. The Algorithm II is described in the paper > > > > @ARTICLE{yoginabben1, > >? author = {Erlangga, Y.A. and R. Nabben}, > >? title = {On a multilevel {K}rylov Method for the {H}elmholtz Equation preconditioned > >? ? by Shifted {L}aplacian}, > >? journal = {Electronic Transaction on Num. Analysis (ETNA)}, > >? year = {2008}, > >? volume = {31}, > >? pages = {403--424}, > >? } > > > > As I interpret this method, you have custom interpolation operators and a Krylov smoother that is itself a multigrid cycle with the shifted operator. In PETSc parlance, you have two matrices, the operator A and the preconditioning matrix M. Here M would be the shifted matrix and smoothing would involve MG cycles with coarsened approximations of M. I would start with the PCMG interface, use a Krylov method as the smoother, and perhaps use PCComposite or PCShell as the preconditioner for the Krylov smoother. Eventually the preconditioner for your Krylov smoother will call the "other" MG cycle (a standard method applied to M). > >? Note this shouldn't require "hacking" or manually modifying the PCMG code that currently exists. It would just involve clever use of a different PCMG inside the PC of the original PCMG. > >? 
Barry > > > > > Note that this method involves an enormous number of synchronization points as well as high operator complexity, so you may find the cost to be quite high even though the iteration count is not bad. > >? > > > >? Kind wishes, Domenico. > > > > From: Jed Brown > > To: domenico lahaye ; PETSc users list > > Cc: Abdul Hanan Sheikh > > Sent: Thursday, July 5, 2012 7:47 PM > > > > Subject: Re: [petsc-users] Adapting MatMult and PCMG functions in matrix-free method. > > > > Can you reference a paper or some notes on the algorithm? > > > > On Thu, Jul 5, 2012 at 8:40 AM, domenico lahaye wrote: > > Dear PETSc developers, > > > >? Thank you for your support to Abdul. > > > >? ? We are in the process of developing a multilevel > > Krylov solver for the Helmholtz equation. Abdul > > has implemented Algorithm I for this purpose. > > We next would like to implement Algorithm II. > > Algorithm || amounts to replacing every occurrence > > of the system matrix $A$ in Algorithm I by > > $M^{-1} A$. This replacement should occur on all > > levels. We thought of two ways to realize this > > replacement. > > > > 1) We thought of adopting a matrix-free approach, > > and to plug in the operation with? $M^{-1}$ there. > > This would require a ksp context inside MatMult.? ? > > We wonder whether this a approach is feasible to > > take. > > > > 2) The other approach would by to implement a > > customized pcmg preconditioner that we can adapt > > to our needs. This could be a more elegant approach, > > at the cost of doing more work. Is the assumption > > that the second approach is more elegant correct > > and would you be able to give advice on how to tackle > > this approach? > > > >? Kind wishes, Domenico. > > > > > > > > > > ----- Forwarded Message ----- > > From: Matthew Knepley > > To: Abdul Hanan Sheikh ; PETSc users list > > Sent: Thursday, 5 July 2012, 15:45 > > Subject: Re: [petsc-users] Adapting MatMult and PCMG functions in matrix-free method. > > > > On Thu, Jul 5, 2012 at 7:34 AM, Abdul Hanan Sheikh wrote: > > Dear developers and users, > > Summer greetings. > > We have few question listen below: > > > > 1. > > The first question is about adapting " MatMult " function in matrix-free method. > > We intend to incorporate a KSP context inside "MatMult" . The immediate question is how to > > provide more than one matrices as input. > > > > You provide extra data through the context for the MATSHELL > >? > > Is this idea of incorporating a KSP context inside "MatMult" function workable ? Does it make any confrontation > > with philosophy of development of Petsc. ? > > > > I am not sure you want this. Do you think PCKSP can do what you want? There is not enough information here to help us answer. > >? > > 2. > > An other advance level feedback is needed. > >? Re-implementing PCMG function { mg.c } will lead any violation of philosophy of Petsc-development ?? > > > > Again, there is not enough information. Can you do what you want by just replacing the monitors? > > > >? Thanks, > > > >? ? ? Matt > >? > > 3. > > Which one of the above both is more elegant and feasible to work on ? > > > > > > Thanking in anticipation, > > Abdul > > > > > > > > > > > > -- > > What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. > > -- Norbert Wiener > > > > > > > > > > > > > > > > > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From zonexo at gmail.com Sat Jul 7 10:36:08 2012 From: zonexo at gmail.com (TAY wee-beng) Date: Sat, 07 Jul 2012 17:36:08 +0200 Subject: [petsc-users] [petsc-dev] Getting 2d array with updated ghost values from DM global vector In-Reply-To: <24B6C15D-EAB7-4105-A8BC-F30C79389CFC@lsu.edu> References: <4FF170BF.3060303@gmail.com> <4FF1A4FD.8040003@gmail.com> <54E13E1A-BB29-47FD-B671-3BC3E5B6329D@lsu.edu> <106872A9-6A6F-41F7-9291-1F1389A0597C@mcs.anl.gov> <0DB46F4E-EB0D-4300-B188-EBCF454EC99C@lsu.edu> <3CD4EA3A-30CD-45A6-AEC2-5DC1C928D443@mcs.anl.gov> <4FF6F011.8060003@gmail.com> <24B6C15D-EAB7-4105-A8BC-F30C79389CFC@lsu.edu> Message-ID: <4FF85768.1090508@gmail.com> Thanks to everyone which helped to fix the bug. Let me restate my problem. I have used DMDACreate2d (dof = 2) for my code and solve the equation. Using /call KSPSolve(ksp_semi,b_rhs_semi_global,velocity_global,ierr) call DMGlobalToLocalBegin(da,velocity_global,INSERT_VALUES,velocity_local,ierr) call DMGlobalToLocalEnd(da,velocity_global,INSERT_VALUES,velocity_local,ierr) call VecGetArrayF90(velocity_local,p_velocity,ierr) do j = start_ij(2),end_ij(2) do i = start_ij(1),end_ij(1) du(i,j)=p_velocity(ij) dv(i,j)=p_velocity(ij+1) ij=ij+2 end do end do/ /call VecRestoreArrayF90(velocity_local,p_velocity,ierr)/ The above du,dv correspond to the region without the ghost cell. I would like to have the ghost cell values as well. I tried using: /PetscScalar,pointer :: velocity_array(:,:,:) call DMDAVecGetArrayF90(da,velocity_local,velocity_array,ierr) call DMDAVecRestoreArrayF90(da,velocity_local,velocity_array,ierr) / I checked by looking at the values at : velocity_array(0,34-1,47-1:48-1) -> 0 is for 1st dof, -1 becos array starts from 0, 47-48 is where the grid divides the But the values obtained is wrong. So how should I do it? Yours sincerely, TAY wee-beng On 6/7/2012 1:24 PM, Blaise Bourdin wrote: > On Jul 6, 2012, at 6:02 PM, TAY wee-beng wrote: > >> On 3/7/2012 1:23 PM, Satish Balay wrote: >>> On Tue, 3 Jul 2012, Barry Smith wrote: >>> >>>> On Jul 3, 2012, at 3:08 AM, Blaise Bourdin wrote: >>>> >>>>> On Jul 3, 2012, at 4:10 AM, Barry Smith wrote: >>>>> >>>>>> Blaise, >>>>>> >>>>>> I don't understand why the patch does anything: >>>>>> >>>>>> - *ierr = VecRestoreArray(*v,0);if (*ierr) return; >>>>>> + PetscScalar *fa; >>>>>> + *ierr = F90Array1dAccess(a,PETSC_SCALAR,(void**)&fa PETSC_F90_2PTR_PARAM(ptrd)); >>>>>> + *ierr = VecRestoreArray(*v,&fa);if (*ierr) return; >>>>>> *ierr = F90Array1dDestroy(&a,PETSC_SCALAR PETSC_F90_2PTR_PARAM(ptrd)); >>>>>> >>>>>> All that passing &fa into VecRestoreArray() does is cause fa to be zeroed. Why would that have any affect on anything? >>>>> Not sure either, I quite don't understand this code, but I noticed that the logic of VecRestoreArrayF90 was different from that of DMDAVecRestoreArrayF90 >>>>> >>>>> src/vec/vec/interface/f90-custom/zvectorf90.c:33 >>>>> PetscScalar *fa; >>>>> *__ierr = F90Array1dAccess(ptr,PETSC_SCALAR,(void**)&fa PETSC_F90_2PTR_PARAM(ptrd));if (*__ierr) return; >>>>> *__ierr = F90Array1dDestroy(ptr,PETSC_SCALAR PETSC_F90_2PTR_PARAM(ptrd));if (*__ierr) return; >>>> It could be the above line is important; but the Accesser and restore array are not. >>>> >>>> I'll have Satish apply the patch. >>> pushed to petsc-3.3 [petsc-dev will get this update] >>> >>> Satish >> Hi, >> >> I just tested with the latest petsc-dev but it doesn't work in intel linux for ex11f90. Has the patch been applied? 
> Try to clone petsc-3.3 from the mercurial repositoryhttp://petsc.cs.iit.edu/petsc/releases/petsc-3.3/ It looks like the first patch has not made its way to the tarball or petsc-dev yet. > You can also apply the patch manually: > cd $PETSC_DIR > patch -p1 < DMDAVecGetArrayF90.patch > >> Also, is there any chance of it working under 3d with multiple dof since that's what I'm using and I have other problems with gfortran. Lastly, if the patch is applied, it works with 3d da with 1 dof? Is that right? > I am sending another patch that should take care of the 3d case with >1 dof to the developers list. > > Blaise > -------------- next part -------------- An HTML attachment was scrubbed... URL: From w_ang_temp at 163.com Sat Jul 7 11:00:11 2012 From: w_ang_temp at 163.com (w_ang_temp) Date: Sun, 8 Jul 2012 00:00:11 +0800 (CST) Subject: [petsc-users] About DIVERGED_ITS Message-ID: <4e50cca5.8135.138622b73ae.Coremail.w_ang_temp@163.com> Hello, I am a little puzzled that I get the right result while the converged reason says that 'Linear solve did not converge due to DIVERGED_ITS iterations 10000'. This infomation means that the iterations reach the maximum iterations. But the result is right now. So why says 'did not converge'? Can I think that the result is right and can be used? Thanks. Jim -------------- next part -------------- An HTML attachment was scrubbed... URL: From knepley at gmail.com Sat Jul 7 11:03:21 2012 From: knepley at gmail.com (Matthew Knepley) Date: Sat, 7 Jul 2012 10:03:21 -0600 Subject: [petsc-users] About DIVERGED_ITS In-Reply-To: <4e50cca5.8135.138622b73ae.Coremail.w_ang_temp@163.com> References: <4e50cca5.8135.138622b73ae.Coremail.w_ang_temp@163.com> Message-ID: On Sat, Jul 7, 2012 at 10:00 AM, w_ang_temp wrote: > Hello, > > I am a little puzzled that I get the right result while the converged > reason says that 'Linear solve did not > > converge due to DIVERGED_ITS iterations 10000'. This infomation means that > the iterations reach the maximum > > iterations. But the result is right now. So why says 'did not converge'? > Can I think that the result is right and > > can be used? > Obviously, your definition of "right" is not the same as the convergence tolerances you are using. Matt > > Thanks. > > Jim > > > -- What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. -- Norbert Wiener -------------- next part -------------- An HTML attachment was scrubbed... URL: From w_ang_temp at 163.com Sat Jul 7 11:15:25 2012 From: w_ang_temp at 163.com (w_ang_temp) Date: Sun, 8 Jul 2012 00:15:25 +0800 (CST) Subject: [petsc-users] About DIVERGED_ITS In-Reply-To: References: <4e50cca5.8135.138622b73ae.Coremail.w_ang_temp@163.com> Message-ID: <25928393.81f6.138623965c2.Coremail.w_ang_temp@163.com> Maybe it is a problem of mathematical concept. I compare the result with the true result which is computed and validated by other tools. I think it is right if I get the same result. ? 2012-07-08 00:03:21?"Matthew Knepley" ??? On Sat, Jul 7, 2012 at 10:00 AM, w_ang_temp wrote: Hello, I am a little puzzled that I get the right result while the converged reason says that 'Linear solve did not converge due to DIVERGED_ITS iterations 10000'. This infomation means that the iterations reach the maximum iterations. But the result is right now. So why says 'did not converge'? Can I think that the result is right and can be used? 
Obviously, your definition of "right" is not the same as the convergence tolerances you are using. Matt Thanks. Jim -- What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. -- Norbert Wiener -------------- next part -------------- An HTML attachment was scrubbed... URL: From mark.adams at columbia.edu Sat Jul 7 11:28:54 2012 From: mark.adams at columbia.edu (Mark F. Adams) Date: Sat, 7 Jul 2012 12:28:54 -0400 Subject: [petsc-users] About DIVERGED_ITS In-Reply-To: <25928393.81f6.138623965c2.Coremail.w_ang_temp@163.com> References: <4e50cca5.8135.138622b73ae.Coremail.w_ang_temp@163.com> <25928393.81f6.138623965c2.Coremail.w_ang_temp@163.com> Message-ID: <84D88821-E153-4718-B2C8-0FD148060A50@columbia.edu> It sounds like your -ksp_rtol is too small. Experiment with looser tolerances until your solution is not "correct" to see how much accuracy you want. On Jul 7, 2012, at 12:15 PM, w_ang_temp wrote: > Maybe it is a problem of mathematical concept. I compare the result with the true result which is > > computed and validated by other tools. I think it is right if I get the same result. > > ? 2012-07-08 00:03:21?"Matthew Knepley" ??? > On Sat, Jul 7, 2012 at 10:00 AM, w_ang_temp wrote: > Hello, > > I am a little puzzled that I get the right result while the converged reason says that 'Linear solve did not > > converge due to DIVERGED_ITS iterations 10000'. This infomation means that the iterations reach the maximum > > iterations. But the result is right now. So why says 'did not converge'? Can I think that the result is right and > > can be used? > > Obviously, your definition of "right" is not the same as the convergence tolerances you are using. > > Matt > > > Thanks. > > Jim > > > > > > -- > What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. > -- Norbert Wiener > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From sean.null at gmail.com Sat Jul 7 11:58:24 2012 From: sean.null at gmail.com (Xin Zhao) Date: Sat, 7 Jul 2012 17:58:24 +0100 Subject: [petsc-users] Problem with Mat.setpreallocationNNZ in petsc4py Message-ID: Dear all, I generate a matrix L by DA =PETSc.DA().create(...some...) L = DA.create() Then I want to preallocate memory for L L.setPreallocationNNZ((3,2)) This works when for mpiexec -np 1 but it gives the error message below when mpiexec -np 4 [3] MatAnyAIJSetPreallocation() line 311 in petsc4py-1.2/src/include/custom.h [3] Operation done in wrong order [3] matrix is already preallocated How to solve this? Thanks in advance. Cheers, Xin -------------- next part -------------- An HTML attachment was scrubbed... URL: From knepley at gmail.com Sat Jul 7 12:01:15 2012 From: knepley at gmail.com (Matthew Knepley) Date: Sat, 7 Jul 2012 11:01:15 -0600 Subject: [petsc-users] Problem with Mat.setpreallocationNNZ in petsc4py In-Reply-To: References: Message-ID: On Sat, Jul 7, 2012 at 10:58 AM, Xin Zhao wrote: > Dear all, > > I generate a matrix L by > DA =PETSc.DA().create(...some...) > L = DA.create() > Is this createMatrix()? The matrix returned from a DA is already preallocated. 
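To illustrate the point about DMDA matrices in petsc4py: the matrix returned by the DA needs no preallocation call at all before values are inserted. The sketch below uses illustrative sizes and values, only touches diagonal entries so it stays inside any stencil pattern, and createMat()/createMatrix() should be whichever spelling your petsc4py version provides.

# Editorial sketch (assumptions as stated above).
from petsc4py import PETSc

da = PETSc.DA().create([8, 8], comm=PETSc.COMM_WORLD)  # illustrative 2-D grid
L = da.createMat()   # already preallocated for the DA's stencil

# No setPreallocationNNZ() anywhere: just insert and assemble.
rstart, rend = L.getOwnershipRange()
for row in range(rstart, rend):   # diagonal entries are always in the pattern
    L.setValue(row, row, 4.0)
L.assemble()

This runs unchanged under mpiexec -np 1 or -np 4; nothing needs to be guarded by a rank test.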
Matt > Then I want to preallocate memory for L > L.setPreallocationNNZ((3,2)) > This works when for mpiexec -np 1 > but it gives the error message below when mpiexec -np 4 > > [3] MatAnyAIJSetPreallocation() line 311 in > petsc4py-1.2/src/include/custom.h > [3] Operation done in wrong order > [3] matrix is already preallocated > > How to solve this? > > Thanks in advance. > > Cheers, > Xin > -- What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. -- Norbert Wiener -------------- next part -------------- An HTML attachment was scrubbed... URL: From sean.null at gmail.com Sat Jul 7 12:02:50 2012 From: sean.null at gmail.com (Xin Zhao) Date: Sat, 7 Jul 2012 18:02:50 +0100 Subject: [petsc-users] Problem with Mat.setpreallocationNNZ in petsc4py In-Reply-To: References: Message-ID: yeap...sorry... L=DA.createMat() then L.setPreallocationNNZ((3,2)) On Sat, Jul 7, 2012 at 6:01 PM, Matthew Knepley wrote: > On Sat, Jul 7, 2012 at 10:58 AM, Xin Zhao wrote: > >> Dear all, >> >> I generate a matrix L by >> DA =PETSc.DA().create(...some...) >> L = DA.create() >> > > Is this createMatrix()? The matrix returned from a DA is already > preallocated. > > Matt > > >> Then I want to preallocate memory for L >> L.setPreallocationNNZ((3,2)) >> This works when for mpiexec -np 1 >> but it gives the error message below when mpiexec -np 4 >> >> [3] MatAnyAIJSetPreallocation() line 311 in >> petsc4py-1.2/src/include/custom.h >> [3] Operation done in wrong order >> [3] matrix is already preallocated >> >> How to solve this? >> >> Thanks in advance. >> >> Cheers, >> Xin >> > > > > -- > What most experimenters take for granted before they begin their > experiments is infinitely more interesting than any results to which their > experiments lead. > -- Norbert Wiener > -------------- next part -------------- An HTML attachment was scrubbed... URL: From knepley at gmail.com Sat Jul 7 12:04:38 2012 From: knepley at gmail.com (Matthew Knepley) Date: Sat, 7 Jul 2012 11:04:38 -0600 Subject: [petsc-users] Problem with Mat.setpreallocationNNZ in petsc4py In-Reply-To: References: Message-ID: On Sat, Jul 7, 2012 at 11:02 AM, Xin Zhao wrote: > yeap...sorry... > L=DA.createMat() > > then > L.setPreallocationNNZ((3,2)) > As I said, its already preallocated. Matt > On Sat, Jul 7, 2012 at 6:01 PM, Matthew Knepley wrote: > >> On Sat, Jul 7, 2012 at 10:58 AM, Xin Zhao wrote: >> >>> Dear all, >>> >>> I generate a matrix L by >>> DA =PETSc.DA().create(...some...) >>> L = DA.create() >>> >> >> Is this createMatrix()? The matrix returned from a DA is already >> preallocated. >> >> Matt >> >> >>> Then I want to preallocate memory for L >>> L.setPreallocationNNZ((3,2)) >>> This works when for mpiexec -np 1 >>> but it gives the error message below when mpiexec -np 4 >>> >>> [3] MatAnyAIJSetPreallocation() line 311 in >>> petsc4py-1.2/src/include/custom.h >>> [3] Operation done in wrong order >>> [3] matrix is already preallocated >>> >>> How to solve this? >>> >>> Thanks in advance. >>> >>> Cheers, >>> Xin >>> >> >> >> >> -- >> What most experimenters take for granted before they begin their >> experiments is infinitely more interesting than any results to which their >> experiments lead. >> -- Norbert Wiener >> > > -- What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. 
-- Norbert Wiener -------------- next part -------------- An HTML attachment was scrubbed... URL: From sean.null at gmail.com Sat Jul 7 12:08:15 2012 From: sean.null at gmail.com (Xin Zhao) Date: Sat, 7 Jul 2012 18:08:15 +0100 Subject: [petsc-users] Problem with Mat.setpreallocationNNZ in petsc4py In-Reply-To: References: Message-ID: So if it is written as if PETSc.COMM_WORLD.get_Rank() == 0: L.setPreallocationNNZ((3,2)) will it achieve what I intend to do? On Sat, Jul 7, 2012 at 6:04 PM, Matthew Knepley wrote: > On Sat, Jul 7, 2012 at 11:02 AM, Xin Zhao wrote: > >> yeap...sorry... >> L=DA.createMat() >> >> then >> L.setPreallocationNNZ((3,2)) >> > > As I said, its already preallocated. > > Matt > > >> On Sat, Jul 7, 2012 at 6:01 PM, Matthew Knepley wrote: >> >>> On Sat, Jul 7, 2012 at 10:58 AM, Xin Zhao wrote: >>> >>>> Dear all, >>>> >>>> I generate a matrix L by >>>> DA =PETSc.DA().create(...some...) >>>> L = DA.create() >>>> >>> >>> Is this createMatrix()? The matrix returned from a DA is already >>> preallocated. >>> >>> Matt >>> >>> >>>> Then I want to preallocate memory for L >>>> L.setPreallocationNNZ((3,2)) >>>> This works when for mpiexec -np 1 >>>> but it gives the error message below when mpiexec -np 4 >>>> >>>> [3] MatAnyAIJSetPreallocation() line 311 in >>>> petsc4py-1.2/src/include/custom.h >>>> [3] Operation done in wrong order >>>> [3] matrix is already preallocated >>>> >>>> How to solve this? >>>> >>>> Thanks in advance. >>>> >>>> Cheers, >>>> Xin >>>> >>> >>> >>> >>> -- >>> What most experimenters take for granted before they begin their >>> experiments is infinitely more interesting than any results to which their >>> experiments lead. >>> -- Norbert Wiener >>> >> >> > > > -- > What most experimenters take for granted before they begin their > experiments is infinitely more interesting than any results to which their > experiments lead. > -- Norbert Wiener > -------------- next part -------------- An HTML attachment was scrubbed... URL: From knepley at gmail.com Sat Jul 7 12:17:38 2012 From: knepley at gmail.com (Matthew Knepley) Date: Sat, 7 Jul 2012 11:17:38 -0600 Subject: [petsc-users] Problem with Mat.setpreallocationNNZ in petsc4py In-Reply-To: References: Message-ID: On Sat, Jul 7, 2012 at 11:08 AM, Xin Zhao wrote: > So if it is written as > if PETSc.COMM_WORLD.get_Rank() == 0: > L.setPreallocationNNZ((3,2)) > > will it achieve what I intend to do? > No. No no no. You do not have to call setPreallocation(). That is why I said (twice) that the matrix is already preallocated. If something is "already preallocated", it does not have to be allocated again. Matt > > > On Sat, Jul 7, 2012 at 6:04 PM, Matthew Knepley wrote: > >> On Sat, Jul 7, 2012 at 11:02 AM, Xin Zhao wrote: >> >>> yeap...sorry... >>> L=DA.createMat() >>> >>> then >>> L.setPreallocationNNZ((3,2)) >>> >> >> As I said, its already preallocated. >> >> Matt >> >> >>> On Sat, Jul 7, 2012 at 6:01 PM, Matthew Knepley wrote: >>> >>>> On Sat, Jul 7, 2012 at 10:58 AM, Xin Zhao wrote: >>>> >>>>> Dear all, >>>>> >>>>> I generate a matrix L by >>>>> DA =PETSc.DA().create(...some...) >>>>> L = DA.create() >>>>> >>>> >>>> Is this createMatrix()? The matrix returned from a DA is already >>>> preallocated. 
>>>> >>>> Matt >>>> >>>> >>>>> Then I want to preallocate memory for L >>>>> L.setPreallocationNNZ((3,2)) >>>>> This works when for mpiexec -np 1 >>>>> but it gives the error message below when mpiexec -np 4 >>>>> >>>>> [3] MatAnyAIJSetPreallocation() line 311 in >>>>> petsc4py-1.2/src/include/custom.h >>>>> [3] Operation done in wrong order >>>>> [3] matrix is already preallocated >>>>> >>>>> How to solve this? >>>>> >>>>> Thanks in advance. >>>>> >>>>> Cheers, >>>>> Xin >>>>> >>>> >>>> >>>> >>>> -- >>>> What most experimenters take for granted before they begin their >>>> experiments is infinitely more interesting than any results to which their >>>> experiments lead. >>>> -- Norbert Wiener >>>> >>> >>> >> >> >> -- >> What most experimenters take for granted before they begin their >> experiments is infinitely more interesting than any results to which their >> experiments lead. >> -- Norbert Wiener >> > > -- What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. -- Norbert Wiener -------------- next part -------------- An HTML attachment was scrubbed... URL: From sean.null at gmail.com Sat Jul 7 13:16:30 2012 From: sean.null at gmail.com (Xin Zhao) Date: Sat, 7 Jul 2012 19:16:30 +0100 Subject: [petsc-users] Problem with Mat.setpreallocationNNZ in petsc4py In-Reply-To: References: Message-ID: Thanks. Than Than Thanks. On Sat, Jul 7, 2012 at 6:17 PM, Matthew Knepley wrote: > On Sat, Jul 7, 2012 at 11:08 AM, Xin Zhao wrote: > >> So if it is written as >> if PETSc.COMM_WORLD.get_Rank() == 0: >> L.setPreallocationNNZ((3,2)) >> >> will it achieve what I intend to do? >> > > No. No no no. You do not have to call setPreallocation(). That is why I > said (twice) that > the matrix is already preallocated. If something is "already > preallocated", it does not > have to be allocated again. > > Matt > > >> >> >> On Sat, Jul 7, 2012 at 6:04 PM, Matthew Knepley wrote: >> >>> On Sat, Jul 7, 2012 at 11:02 AM, Xin Zhao wrote: >>> >>>> yeap...sorry... >>>> L=DA.createMat() >>>> >>>> then >>>> L.setPreallocationNNZ((3,2)) >>>> >>> >>> As I said, its already preallocated. >>> >>> Matt >>> >>> >>>> On Sat, Jul 7, 2012 at 6:01 PM, Matthew Knepley wrote: >>>> >>>>> On Sat, Jul 7, 2012 at 10:58 AM, Xin Zhao wrote: >>>>> >>>>>> Dear all, >>>>>> >>>>>> I generate a matrix L by >>>>>> DA =PETSc.DA().create(...some...) >>>>>> L = DA.create() >>>>>> >>>>> >>>>> Is this createMatrix()? The matrix returned from a DA is already >>>>> preallocated. >>>>> >>>>> Matt >>>>> >>>>> >>>>>> Then I want to preallocate memory for L >>>>>> L.setPreallocationNNZ((3,2)) >>>>>> This works when for mpiexec -np 1 >>>>>> but it gives the error message below when mpiexec -np 4 >>>>>> >>>>>> [3] MatAnyAIJSetPreallocation() line 311 in >>>>>> petsc4py-1.2/src/include/custom.h >>>>>> [3] Operation done in wrong order >>>>>> [3] matrix is already preallocated >>>>>> >>>>>> How to solve this? >>>>>> >>>>>> Thanks in advance. >>>>>> >>>>>> Cheers, >>>>>> Xin >>>>>> >>>>> >>>>> >>>>> >>>>> -- >>>>> What most experimenters take for granted before they begin their >>>>> experiments is infinitely more interesting than any results to which their >>>>> experiments lead. >>>>> -- Norbert Wiener >>>>> >>>> >>>> >>> >>> >>> -- >>> What most experimenters take for granted before they begin their >>> experiments is infinitely more interesting than any results to which their >>> experiments lead. 
>>> -- Norbert Wiener >>> >> >> > > > -- > What most experimenters take for granted before they begin their > experiments is infinitely more interesting than any results to which their > experiments lead. > -- Norbert Wiener > -------------- next part -------------- An HTML attachment was scrubbed... URL: From pwu at mymail.mines.edu Sat Jul 7 15:58:02 2012 From: pwu at mymail.mines.edu (Panruo Wu) Date: Sat, 7 Jul 2012 14:58:02 -0600 Subject: [petsc-users] mutiple DA Message-ID: Hello, If I create 2 DAs with (almost) identical parameters except DA name and dof like: call DMDACreate2d(PETSC_COMM_WORLD, DMDA_BOUNDARY_GHOSTED, DMDA_BOUNDARY_GHOSTED, & stype, M, N, m, n, dof1, s & PETSC_NULL_INTEGER, PETSC_NULL_INTEGER, & da1, ierr) call DMDACreate2d(PETSC_COMM_WORLD, DMDA_BOUNDARY_GHOSTED, DMDA_BOUNDARY_GHOSTED, & stype, M, N, m, n, dof2, s & PETSC_NULL_INTEGER, PETSC_NULL_INTEGER, & da2, ierr) my question is, will the two DAs have the same distribution scheme? Specifically, will the DMDAGetCorners() give the same results when querying da1 & da2? Thanks, Panruo Wu From bsmith at mcs.anl.gov Sat Jul 7 16:10:55 2012 From: bsmith at mcs.anl.gov (Barry Smith) Date: Sat, 7 Jul 2012 16:10:55 -0500 Subject: [petsc-users] mutiple DA In-Reply-To: References: Message-ID: <5F7CFFE0-E3AB-4384-89E9-CC8A559FA269@mcs.anl.gov> So long as you have the same boundary types and the same array sizes in the i and j direction they give the same distribution. Barry On Jul 7, 2012, at 3:58 PM, Panruo Wu wrote: > Hello, > > If I create 2 DAs with (almost) identical parameters except DA name > and dof like: > > call DMDACreate2d(PETSC_COMM_WORLD, DMDA_BOUNDARY_GHOSTED, > DMDA_BOUNDARY_GHOSTED, & > stype, M, N, m, n, dof1, s & > PETSC_NULL_INTEGER, PETSC_NULL_INTEGER, & > da1, ierr) > > > > call DMDACreate2d(PETSC_COMM_WORLD, DMDA_BOUNDARY_GHOSTED, > DMDA_BOUNDARY_GHOSTED, & > stype, M, N, m, n, dof2, s & > PETSC_NULL_INTEGER, PETSC_NULL_INTEGER, & > da2, ierr) > > > my question is, will the two DAs have the same distribution scheme? > Specifically, > will the DMDAGetCorners() give the same results when querying da1 & da2? > > Thanks, > Panruo Wu From B.Sanderse at cwi.nl Mon Jul 9 07:53:56 2012 From: B.Sanderse at cwi.nl (Benjamin Sanderse) Date: Mon, 09 Jul 2012 14:53:56 +0200 (CEST) Subject: [petsc-users] random vector In-Reply-To: <67a61786-6fdd-47a5-bd73-3cc7e40c66ac@zembox02.zaas.igi.nl> Message-ID: Hello all, I am trying to solve a Poisson equation several times with random right-hand side vectors in order to do parallel scalability tests. Here is part of the code that I use to generate a random vector: PetscRandom :: rctx ... do I = 1,n call PetscRandomCreate(PETSC_COMM_WORLD,rctx,ierr); CHKERRQ(ierr) call VecSetRandom(f,rctx,ierr); CHKERRQ(ierr) call PetscRandomDestroy(rctx,ierr); CHKERRQ(ierr) call VecView(f,PETSC_VIEWER_STDOUT_WORLD,ierr); CHKERRQ(ierr) call Poisson end do It appears that f does not change during the execution of the do-loop. In fact its value is even always the same for I=1 when I run the code several times. Apparently I am missing something. Can anybody help? Regards, Benjamin From C.Klaij at marin.nl Mon Jul 9 08:50:44 2012 From: C.Klaij at marin.nl (Klaij, Christiaan) Date: Mon, 9 Jul 2012 13:50:44 +0000 Subject: [petsc-users] vecview problem Message-ID: I'm having a segmentation fault with vecview in fortran90 with this program. Am I doing something wrong? 
$ cat bug.F90 module bug use petscksp implicit none #include "finclude/petsckspdef.h" PetscErrorCode, public :: ierr Vec, private :: x public bugSubroutine contains subroutine bugSubroutine() call setVector() call VecView(x,PETSC_VIEWER_DEFAULT,ierr); CHKERRQ(ierr) end subroutine bugSubroutine subroutine setVector() call VecCreate(PETSC_COMM_WORLD,x,ierr); CHKERRQ(ierr) call VecSetSizes(x,PETSC_DECIDE,5,ierr); CHKERRQ(ierr) call VecSetType(x,VECMPI,ierr); CHKERRQ(ierr) call VecView(x,PETSC_VIEWER_DEFAULT,ierr); CHKERRQ(ierr) end subroutine setVector end module bug program testBug use bug use petscksp implicit none #include "finclude/petsckspdef.h" call PetscInitialize(PETSC_NULL_CHARACTER,ierr) call bugSubroutine(); call PetscFinalize(ierr) end program testBug $ mpiexec -n 1 ./bug Vector Object: 1 MPI processes type: mpi Process [0] 0 0 0 0 0 [0]PETSC ERROR: ------------------------------------------------------------------------ [0]PETSC ERROR: Caught signal number 11 SEGV: Segmentation Violation, probably memory access out of range [0]PETSC ERROR: Try option -start_in_debugger or -on_error_attach_debugger [0]PETSC ERROR: or see http://www.mcs.anl.gov/petsc/documentation/faq.html#valgrind[0]PETSC ERROR: or try http://valgrind.org on GNU/linux and Apple Mac OS X to find memory corruption errors [0]PETSC ERROR: likely location of problem given in stack below [0]PETSC ERROR: --------------------- Stack Frames ------------------------------------ [0]PETSC ERROR: Note: The EXACT line numbers in the stack are not available, [0]PETSC ERROR: INSTEAD the line number of the start of the function [0]PETSC ERROR: is given. [0]PETSC ERROR: [0] VecView line 747 /home/CKlaij/ReFRESCO/Dev/trunk/Libs/build/petsc-3.3-p1/src/vec/vec/interface/vector.c [0]PETSC ERROR: --------------------- Error Message ------------------------------------ [0]PETSC ERROR: Signal received! [0]PETSC ERROR: ------------------------------------------------------------------------ [0]PETSC ERROR: Petsc Release Version 3.3.0, Patch 1, Fri Jun 15 09:30:49 CDT 2012 [0]PETSC ERROR: See docs/changes/index.html for recent updates. [0]PETSC ERROR: See docs/faq.html for hints about trouble shooting. [0]PETSC ERROR: See docs/index.html for manual pages. [0]PETSC ERROR: ------------------------------------------------------------------------ [0]PETSC ERROR: ./bug on a linux_64b named lin0133 by cklaij Mon Jul 9 15:46:34 2012 [0]PETSC ERROR: Libraries linked from /home/CKlaij/ReFRESCO/Dev/trunk/Libs/build/petsc-3.3-p1/linux_64bit_debug/lib [0]PETSC ERROR: Configure run at Wed Jun 20 12:08:20 2012 [0]PETSC ERROR: Configure options --with-mpi-dir=/opt/refresco/libraries_cklaij/openmpi-1.4.5 --with-clanguage=c++ --with-x=1 --with-debugging=1 --with-hypre-include=/opt/refresco/libraries_cklaij/hypre-2.7.0b/include --with-hypre-lib=/opt/refresco/libraries_cklaij/hypre-2.7.0b/lib/libHYPRE.a --with-ml-include=/opt/refresco/libraries_cklaij/ml-6.2/include --with-ml-lib=/opt/refresco/libraries_cklaij/ml-6.2/lib/libml.a --with-blas-lapack-dir=/opt/intel/mkl [0]PETSC ERROR: ------------------------------------------------------------------------ [0]PETSC ERROR: User provided function() line 0 in unknown directory unknown file -------------------------------------------------------------------------- MPI_ABORT was invoked on rank 0 in communicator MPI_COMM_WORLD with errorcode 59. NOTE: invoking MPI_ABORT causes Open MPI to kill all MPI processes. You may or may not see output from other processes, depending on exactly when Open MPI kills them. 
-------------------------------------------------------------------------- (gdb) run Starting program: /home/CKlaij/Programming/PETSc/Laplace/bug [Thread debugging using libthread_db enabled] Vector Object: 1 MPI processes type: mpi Process [0] 0 0 0 0 0 Program received signal SIGSEGV, Segmentation fault. 0x000000000043f7ee in VecView (vec=0xd8b1f0, viewer=0x69706d00000000) at /home/CKlaij/ReFRESCO/Dev/trunk/Libs/build/petsc-3.3-p1/src/vec/vec/interface/vector.c:753 753 PetscValidHeaderSpecific(viewer,PETSC_VIEWER_CLASSID,2); (gdb) bt #0 0x000000000043f7ee in VecView (vec=0xd8b1f0, viewer=0x69706d00000000) at /home/CKlaij/ReFRESCO/Dev/trunk/Libs/build/petsc-3.3-p1/src/vec/vec/interface/vector.c:753 #1 0x00000000004319e8 in vecview_ (x=0xb56e20, vin=0x822788, ierr=0xb56e38) at /home/CKlaij/ReFRESCO/Dev/trunk/Libs/build/petsc-3.3-p1/src/vec/vec/interface/ftn-custom/zvectorf.c:56 #2 0x000000000042c936 in bug_mp_bugsubroutine_ () #3 0x000000000042cafb in testbug () at bug.F90:37 dr. ir. Christiaan Klaij CFD Researcher Research & Development E mailto:C.Klaij at marin.nl T +31 317 49 33 44 MARIN 2, Haagsteeg, P.O. Box 28, 6700 AA Wageningen, The Netherlands T +31 317 49 39 11, F +31 317 49 32 45, I www.marin.nl From bsmith at mcs.anl.gov Mon Jul 9 08:51:18 2012 From: bsmith at mcs.anl.gov (Barry Smith) Date: Mon, 9 Jul 2012 08:51:18 -0500 Subject: [petsc-users] random vector In-Reply-To: References: Message-ID: <2A26FCA7-9FD0-48CE-A8A5-CDBF565CE963@mcs.anl.gov> Create and destroy the random context OUTSIDE of the loop. Each time you create it it is using the same seed hence giving the same values. Note that it is also intentional that if you run the code twice you get the same values each time you run it to help write and debug codes. If you want different values each time you run it you need to call PetscRandomSetSeed() then PetscRandomSeed() after creating the context Barry On Jul 9, 2012, at 7:53 AM, Benjamin Sanderse wrote: > Hello all, > > I am trying to solve a Poisson equation several times with random right-hand side vectors in order to do parallel scalability tests. > Here is part of the code that I use to generate a random vector: > > > PetscRandom :: rctx > > ... > > do I = 1,n > > > call PetscRandomCreate(PETSC_COMM_WORLD,rctx,ierr); CHKERRQ(ierr) > call VecSetRandom(f,rctx,ierr); CHKERRQ(ierr) > call PetscRandomDestroy(rctx,ierr); CHKERRQ(ierr) > > call VecView(f,PETSC_VIEWER_STDOUT_WORLD,ierr); CHKERRQ(ierr) > > call Poisson > > end do > > > It appears that f does not change during the execution of the do-loop. In fact its value is even always the same for I=1 when I run the code several times. Apparently I am missing something. Can anybody help? > > Regards, > > > Benjamin From hzhang at mcs.anl.gov Mon Jul 9 08:51:32 2012 From: hzhang at mcs.anl.gov (Hong Zhang) Date: Mon, 9 Jul 2012 08:51:32 -0500 Subject: [petsc-users] random vector In-Reply-To: References: <67a61786-6fdd-47a5-bd73-3cc7e40c66ac@zembox02.zaas.igi.nl> Message-ID: Benjamin: > > > I am trying to solve a Poisson equation several times with random > right-hand side vectors in order to do parallel scalability tests. > Here is part of the code that I use to generate a random vector: > This is intended. You can use option ' -random_seed ' to change it. See http://www.mcs.anl.gov/petsc/petsc-current/docs/manualpages/Sys/PetscRandomGetSeed.html Hong > > > PetscRandom :: rctx > > ... 
> > do I = 1,n > > > call PetscRandomCreate(PETSC_COMM_WORLD,rctx,ierr); CHKERRQ(ierr) > call VecSetRandom(f,rctx,ierr); CHKERRQ(ierr) > call PetscRandomDestroy(rctx,ierr); CHKERRQ(ierr) > > call VecView(f,PETSC_VIEWER_STDOUT_WORLD,ierr); CHKERRQ(ierr) > > call Poisson > > end do > > > It appears that f does not change during the execution of the do-loop. In > fact its value is even always the same for I=1 when I run the code several > times. Apparently I am missing something. Can anybody help? > > Regards, > > > Benjamin > -------------- next part -------------- An HTML attachment was scrubbed... URL: From bsmith at mcs.anl.gov Mon Jul 9 09:01:13 2012 From: bsmith at mcs.anl.gov (Barry Smith) Date: Mon, 9 Jul 2012 09:01:13 -0500 Subject: [petsc-users] vecview problem In-Reply-To: References: Message-ID: <843F88F3-12FF-4FA1-AE8C-088D2DD99BD8@mcs.anl.gov> There is no such viewer as PETSC_VIEWER_DEFAULT (you just got lucky it didn't crash in the first call). Maybe you want PETSC_VIEWER_STDOUT_WORLD? Barry PETSC_VIEWER_DEFAULT is for setting the particular format a viewer uses. On Jul 9, 2012, at 8:50 AM, Klaij, Christiaan wrote: > I'm having a segmentation fault with vecview in fortran90 > with this program. Am I doing something wrong? > > $ cat bug.F90 > module bug > > use petscksp > implicit none > #include "finclude/petsckspdef.h" > > PetscErrorCode, public :: ierr > Vec, private :: x > > public bugSubroutine > > contains > > subroutine bugSubroutine() > call setVector() > call VecView(x,PETSC_VIEWER_DEFAULT,ierr); CHKERRQ(ierr) > end subroutine bugSubroutine > > subroutine setVector() > call VecCreate(PETSC_COMM_WORLD,x,ierr); CHKERRQ(ierr) > call VecSetSizes(x,PETSC_DECIDE,5,ierr); CHKERRQ(ierr) > call VecSetType(x,VECMPI,ierr); CHKERRQ(ierr) > call VecView(x,PETSC_VIEWER_DEFAULT,ierr); CHKERRQ(ierr) > end subroutine setVector > > end module bug > > program testBug > > use bug > use petscksp > implicit none > #include "finclude/petsckspdef.h" > > call PetscInitialize(PETSC_NULL_CHARACTER,ierr) > call bugSubroutine(); > call PetscFinalize(ierr) > > end program testBug > > $ mpiexec -n 1 ./bug > Vector Object: 1 MPI processes > type: mpi > Process [0] > 0 > 0 > 0 > 0 > 0 > [0]PETSC ERROR: ------------------------------------------------------------------------ > [0]PETSC ERROR: Caught signal number 11 SEGV: Segmentation Violation, probably memory access out of range > [0]PETSC ERROR: Try option -start_in_debugger or -on_error_attach_debugger > [0]PETSC ERROR: or see http://www.mcs.anl.gov/petsc/documentation/faq.html#valgrind[0]PETSC ERROR: or try http://valgrind.org on GNU/linux and Apple Mac OS X to find memory corruption errors > [0]PETSC ERROR: likely location of problem given in stack below > [0]PETSC ERROR: --------------------- Stack Frames ------------------------------------ > [0]PETSC ERROR: Note: The EXACT line numbers in the stack are not available, > [0]PETSC ERROR: INSTEAD the line number of the start of the function > [0]PETSC ERROR: is given. > [0]PETSC ERROR: [0] VecView line 747 /home/CKlaij/ReFRESCO/Dev/trunk/Libs/build/petsc-3.3-p1/src/vec/vec/interface/vector.c > [0]PETSC ERROR: --------------------- Error Message ------------------------------------ > [0]PETSC ERROR: Signal received! > [0]PETSC ERROR: ------------------------------------------------------------------------ > [0]PETSC ERROR: Petsc Release Version 3.3.0, Patch 1, Fri Jun 15 09:30:49 CDT 2012 > [0]PETSC ERROR: See docs/changes/index.html for recent updates. 
> [0]PETSC ERROR: See docs/faq.html for hints about trouble shooting. > [0]PETSC ERROR: See docs/index.html for manual pages. > [0]PETSC ERROR: ------------------------------------------------------------------------ > [0]PETSC ERROR: ./bug on a linux_64b named lin0133 by cklaij Mon Jul 9 15:46:34 2012 > [0]PETSC ERROR: Libraries linked from /home/CKlaij/ReFRESCO/Dev/trunk/Libs/build/petsc-3.3-p1/linux_64bit_debug/lib > [0]PETSC ERROR: Configure run at Wed Jun 20 12:08:20 2012 > [0]PETSC ERROR: Configure options --with-mpi-dir=/opt/refresco/libraries_cklaij/openmpi-1.4.5 --with-clanguage=c++ --with-x=1 --with-debugging=1 --with-hypre-include=/opt/refresco/libraries_cklaij/hypre-2.7.0b/include --with-hypre-lib=/opt/refresco/libraries_cklaij/hypre-2.7.0b/lib/libHYPRE.a --with-ml-include=/opt/refresco/libraries_cklaij/ml-6.2/include --with-ml-lib=/opt/refresco/libraries_cklaij/ml-6.2/lib/libml.a --with-blas-lapack-dir=/opt/intel/mkl > [0]PETSC ERROR: ------------------------------------------------------------------------ > [0]PETSC ERROR: User provided function() line 0 in unknown directory unknown file > -------------------------------------------------------------------------- > MPI_ABORT was invoked on rank 0 in communicator MPI_COMM_WORLD > with errorcode 59. > > NOTE: invoking MPI_ABORT causes Open MPI to kill all MPI processes. > You may or may not see output from other processes, depending on > exactly when Open MPI kills them. > -------------------------------------------------------------------------- > > (gdb) run > Starting program: /home/CKlaij/Programming/PETSc/Laplace/bug > [Thread debugging using libthread_db enabled] > Vector Object: 1 MPI processes > type: mpi > Process [0] > 0 > 0 > 0 > 0 > 0 > > Program received signal SIGSEGV, Segmentation fault. > 0x000000000043f7ee in VecView (vec=0xd8b1f0, viewer=0x69706d00000000) > at /home/CKlaij/ReFRESCO/Dev/trunk/Libs/build/petsc-3.3-p1/src/vec/vec/interface/vector.c:753 > 753 PetscValidHeaderSpecific(viewer,PETSC_VIEWER_CLASSID,2); > (gdb) bt > #0 0x000000000043f7ee in VecView (vec=0xd8b1f0, viewer=0x69706d00000000) > at /home/CKlaij/ReFRESCO/Dev/trunk/Libs/build/petsc-3.3-p1/src/vec/vec/interface/vector.c:753 > #1 0x00000000004319e8 in vecview_ (x=0xb56e20, vin=0x822788, ierr=0xb56e38) > at /home/CKlaij/ReFRESCO/Dev/trunk/Libs/build/petsc-3.3-p1/src/vec/vec/interface/ftn-custom/zvectorf.c:56 > #2 0x000000000042c936 in bug_mp_bugsubroutine_ () > #3 0x000000000042cafb in testbug () at bug.F90:37 > > > dr. ir. Christiaan Klaij > CFD Researcher > Research & Development > E mailto:C.Klaij at marin.nl > T +31 317 49 33 44 > > MARIN > 2, Haagsteeg, P.O. Box 28, 6700 AA Wageningen, The Netherlands > T +31 317 49 39 11, F +31 317 49 32 45, I www.marin.nl > From B.Sanderse at cwi.nl Mon Jul 9 09:03:15 2012 From: B.Sanderse at cwi.nl (Benjamin Sanderse) Date: Mon, 09 Jul 2012 16:03:15 +0200 (CEST) Subject: [petsc-users] random vector In-Reply-To: <2A26FCA7-9FD0-48CE-A8A5-CDBF565CE963@mcs.anl.gov> Message-ID: <104ea3af-c7ac-4c72-b5fc-3945db74d2e1@zembox02.zaas.igi.nl> Thanks a lot. I need to have the PetscRandomCreate inside the loop, so I will use the RandomSetSeed. However, when running the code below I get the following error. 
PetscRandom :: rctx call PetscRandomCreate(PETSC_COMM_WORLD,rctx,ierr); CHKERRQ(ierr) call PetscRandomSetSeed(rctx,I,ierr); CHKERRQ(ierr) call PetscRandomSeed(rctx,ierr); CHKERRQ(ierr) call VecSetRandom(f,rctx,ierr); CHKERRQ(ierr) call PetscRandomDestroy(rctx,ierr); CHKERRQ(ierr) In PetscRandomSetSeed, I is an integer. [0]PETSC ERROR: --------------------- Error Message ------------------------------------ [0]PETSC ERROR: Object is in wrong state! [0]PETSC ERROR: PetscRandom object's type is not set: Argument # 1! [0]PETSC ERROR: ------------------------------------------------------------------------ [0]PETSC ERROR: Petsc Release Version 3.2.0, Patch 6, Wed Jan 11 09:28:45 CST 2012 [0]PETSC ERROR: See docs/changes/index.html for recent updates. [0]PETSC ERROR: See docs/faq.html for hints about trouble shooting. [0]PETSC ERROR: See docs/index.html for manual pages. [0]PETSC ERROR: ------------------------------------------------------------------------ [0]PETSC ERROR: bin/navier-stokes on a arch-linu named slippy.sen.cwi.nl by sanderse Mon Jul 9 15:58:16 2012 [0]PETSC ERROR: Libraries linked from /export/scratch1/sanderse/software/petsc-3.2-p6-debug/arch-linux2-c-opt/lib [0]PETSC ERROR: Configure run at Wed Feb 22 18:04:02 2012 [0]PETSC ERROR: Configure options --download-mpich=1 --with-shared-libraries --download-f-blas-lapack=1 --with-fc=gfortran --with-cxx=g++ --download-hypre --with-hdf5 --download-hdf5 --with-cc=gcc [0]PETSC ERROR: ------------------------------------------------------------------------ [0]PETSC ERROR: PetscRandomSeed() line 431 in /export/scratch1/sanderse/software/petsc-3.2-p6-debug/src/sys/random/interface/randomc.c application called MPI_Abort(MPI_COMM_WORLD, 73) - process 0 [cli_0]: aborting job: application called MPI_Abort(MPI_COMM_WORLD, 73) - process 0 Maybe something has changed from 3.2p6 to 3.3? I do not see what is wrong with the PetscRandom object. Benjamin ----- Original Message ----- From: "Barry Smith" To: "PETSc users list" Sent: Monday, July 9, 2012 3:51:18 PM Subject: Re: [petsc-users] random vector Create and destroy the random context OUTSIDE of the loop. Each time you create it it is using the same seed hence giving the same values. Note that it is also intentional that if you run the code twice you get the same values each time you run it to help write and debug codes. If you want different values each time you run it you need to call PetscRandomSetSeed() then PetscRandomSeed() after creating the context Barry On Jul 9, 2012, at 7:53 AM, Benjamin Sanderse wrote: > Hello all, > > I am trying to solve a Poisson equation several times with random right-hand side vectors in order to do parallel scalability tests. > Here is part of the code that I use to generate a random vector: > > > PetscRandom :: rctx > > ... > > do I = 1,n > > > call PetscRandomCreate(PETSC_COMM_WORLD,rctx,ierr); CHKERRQ(ierr) > call VecSetRandom(f,rctx,ierr); CHKERRQ(ierr) > call PetscRandomDestroy(rctx,ierr); CHKERRQ(ierr) > > call VecView(f,PETSC_VIEWER_STDOUT_WORLD,ierr); CHKERRQ(ierr) > > call Poisson > > end do > > > It appears that f does not change during the execution of the do-loop. In fact its value is even always the same for I=1 when I run the code several times. Apparently I am missing something. Can anybody help? 
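A minimal sketch of the pattern this thread converges on: the PetscRandom context needs a type before PetscRandomSeed() is called. Here the type is set through the options database (PetscRandomSetFromOptions() together with -random_type rand on the command line, the combination reported to work later in the thread); the vector f, the loop bound n and the Poisson call are taken from the code above, and seeding with the loop counter I is only an illustration -- any seed that changes between runs or iterations will do. Treat this as an untested sketch, not an official recommendation:

      PetscRandom :: rctx

      do I = 1,n
         call PetscRandomCreate(PETSC_COMM_WORLD,rctx,ierr); CHKERRQ(ierr)
         ! give the context a type before seeding it,
         ! e.g. with -random_type rand on the command line
         call PetscRandomSetFromOptions(rctx,ierr); CHKERRQ(ierr)
         ! vary the seed, otherwise every f comes out the same
         call PetscRandomSetSeed(rctx,I,ierr); CHKERRQ(ierr)
         call PetscRandomSeed(rctx,ierr); CHKERRQ(ierr)
         call VecSetRandom(f,rctx,ierr); CHKERRQ(ierr)
         call PetscRandomDestroy(rctx,ierr); CHKERRQ(ierr)

         call Poisson
      end do

As Barry suggests above, creating the context once outside the loop and only re-seeding it inside (PetscRandomSetSeed() followed by PetscRandomSeed()) should work just as well.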
> > Regards, > > > Benjamin From bsmith at mcs.anl.gov Mon Jul 9 09:08:08 2012 From: bsmith at mcs.anl.gov (Barry Smith) Date: Mon, 9 Jul 2012 09:08:08 -0500 Subject: [petsc-users] random vector In-Reply-To: <104ea3af-c7ac-4c72-b5fc-3945db74d2e1@zembox02.zaas.igi.nl> References: <104ea3af-c7ac-4c72-b5fc-3945db74d2e1@zembox02.zaas.igi.nl> Message-ID: <9511CF17-24EA-4801-9F5E-03C88DFF8940@mcs.anl.gov> Please update to using petsc-3.3 On Jul 9, 2012, at 9:03 AM, Benjamin Sanderse wrote: > Thanks a lot. I need to have the PetscRandomCreate inside the loop, Why? > so I will use the RandomSetSeed. > However, when running the code below I get the following error. > > PetscRandom :: rctx > > call PetscRandomCreate(PETSC_COMM_WORLD,rctx,ierr); CHKERRQ(ierr) call PetscRandomSetType(rctx,PETSCRAND,ierr); > call PetscRandomSetSeed(rctx,I,ierr); CHKERRQ(ierr) > call PetscRandomSeed(rctx,ierr); CHKERRQ(ierr) > call VecSetRandom(f,rctx,ierr); CHKERRQ(ierr) > call PetscRandomDestroy(rctx,ierr); CHKERRQ(ierr) > > In PetscRandomSetSeed, I is an integer. > > [0]PETSC ERROR: --------------------- Error Message ------------------------------------ > [0]PETSC ERROR: Object is in wrong state! > [0]PETSC ERROR: PetscRandom object's type is not set: Argument # 1! > [0]PETSC ERROR: ------------------------------------------------------------------------ > [0]PETSC ERROR: Petsc Release Version 3.2.0, Patch 6, Wed Jan 11 09:28:45 CST 2012 > [0]PETSC ERROR: See docs/changes/index.html for recent updates. > [0]PETSC ERROR: See docs/faq.html for hints about trouble shooting. > [0]PETSC ERROR: See docs/index.html for manual pages. > [0]PETSC ERROR: ------------------------------------------------------------------------ > [0]PETSC ERROR: bin/navier-stokes on a arch-linu named slippy.sen.cwi.nl by sanderse Mon Jul 9 15:58:16 2012 > [0]PETSC ERROR: Libraries linked from /export/scratch1/sanderse/software/petsc-3.2-p6-debug/arch-linux2-c-opt/lib > [0]PETSC ERROR: Configure run at Wed Feb 22 18:04:02 2012 > [0]PETSC ERROR: Configure options --download-mpich=1 --with-shared-libraries --download-f-blas-lapack=1 --with-fc=gfortran --with-cxx=g++ --download-hypre --with-hdf5 --download-hdf5 --with-cc=gcc > [0]PETSC ERROR: ------------------------------------------------------------------------ > [0]PETSC ERROR: PetscRandomSeed() line 431 in /export/scratch1/sanderse/software/petsc-3.2-p6-debug/src/sys/random/interface/randomc.c > application called MPI_Abort(MPI_COMM_WORLD, 73) - process 0 > [cli_0]: aborting job: > application called MPI_Abort(MPI_COMM_WORLD, 73) - process 0 > > > Maybe something has changed from 3.2p6 to 3.3? I do not see what is wrong with the PetscRandom object. > > Benjamin > > > ----- Original Message ----- > From: "Barry Smith" > To: "PETSc users list" > Sent: Monday, July 9, 2012 3:51:18 PM > Subject: Re: [petsc-users] random vector > > > Create and destroy the random context OUTSIDE of the loop. Each time you create it it is using the same seed hence giving the same values. > > Note that it is also intentional that if you run the code twice you get the same values each time you run it to help write and debug codes. If you want different values each time you run it you need to call PetscRandomSetSeed() then PetscRandomSeed() after creating the context > > > Barry > > On Jul 9, 2012, at 7:53 AM, Benjamin Sanderse wrote: > >> Hello all, >> >> I am trying to solve a Poisson equation several times with random right-hand side vectors in order to do parallel scalability tests. 
>> Here is part of the code that I use to generate a random vector: >> >> >> PetscRandom :: rctx >> >> ... >> >> do I = 1,n >> >> >> call PetscRandomCreate(PETSC_COMM_WORLD,rctx,ierr); CHKERRQ(ierr) >> call VecSetRandom(f,rctx,ierr); CHKERRQ(ierr) >> call PetscRandomDestroy(rctx,ierr); CHKERRQ(ierr) >> >> call VecView(f,PETSC_VIEWER_STDOUT_WORLD,ierr); CHKERRQ(ierr) >> >> call Poisson >> >> end do >> >> >> It appears that f does not change during the execution of the do-loop. In fact its value is even always the same for I=1 when I run the code several times. Apparently I am missing something. Can anybody help? >> >> Regards, >> >> >> Benjamin > From B.Sanderse at cwi.nl Mon Jul 9 09:13:10 2012 From: B.Sanderse at cwi.nl (Benjamin Sanderse) Date: Mon, 09 Jul 2012 16:13:10 +0200 (CEST) Subject: [petsc-users] random vector In-Reply-To: <9511CF17-24EA-4801-9F5E-03C88DFF8940@mcs.anl.gov> Message-ID: <400354d4-d113-4591-b997-1351beb6361f@zembox02.zaas.igi.nl> ----- Original Message ----- From: "Barry Smith" To: "PETSc users list" Sent: Monday, July 9, 2012 4:08:08 PM Subject: Re: [petsc-users] random vector Please update to using petsc-3.3 On Jul 9, 2012, at 9:03 AM, Benjamin Sanderse wrote: > Thanks a lot. I need to have the PetscRandomCreate inside the loop, Why? Just a small convenience; the program is in fact written as do I=1,N call solve_poisson end do and the subroutine solve_poisson has the calls listed below; I didn't want to pass the PetscRandom object into the subroutine call. Benjamin > so I will use the RandomSetSeed. > However, when running the code below I get the following error. > > PetscRandom :: rctx > > call PetscRandomCreate(PETSC_COMM_WORLD,rctx,ierr); CHKERRQ(ierr) call PetscRandomSetType(rctx,PETSCRAND,ierr); > call PetscRandomSetSeed(rctx,I,ierr); CHKERRQ(ierr) > call PetscRandomSeed(rctx,ierr); CHKERRQ(ierr) > call VecSetRandom(f,rctx,ierr); CHKERRQ(ierr) > call PetscRandomDestroy(rctx,ierr); CHKERRQ(ierr) > > In PetscRandomSetSeed, I is an integer. > > [0]PETSC ERROR: --------------------- Error Message ------------------------------------ > [0]PETSC ERROR: Object is in wrong state! > [0]PETSC ERROR: PetscRandom object's type is not set: Argument # 1! > [0]PETSC ERROR: ------------------------------------------------------------------------ > [0]PETSC ERROR: Petsc Release Version 3.2.0, Patch 6, Wed Jan 11 09:28:45 CST 2012 > [0]PETSC ERROR: See docs/changes/index.html for recent updates. > [0]PETSC ERROR: See docs/faq.html for hints about trouble shooting. > [0]PETSC ERROR: See docs/index.html for manual pages. 
> [0]PETSC ERROR: ------------------------------------------------------------------------ > [0]PETSC ERROR: bin/navier-stokes on a arch-linu named slippy.sen.cwi.nl by sanderse Mon Jul 9 15:58:16 2012 > [0]PETSC ERROR: Libraries linked from /export/scratch1/sanderse/software/petsc-3.2-p6-debug/arch-linux2-c-opt/lib > [0]PETSC ERROR: Configure run at Wed Feb 22 18:04:02 2012 > [0]PETSC ERROR: Configure options --download-mpich=1 --with-shared-libraries --download-f-blas-lapack=1 --with-fc=gfortran --with-cxx=g++ --download-hypre --with-hdf5 --download-hdf5 --with-cc=gcc > [0]PETSC ERROR: ------------------------------------------------------------------------ > [0]PETSC ERROR: PetscRandomSeed() line 431 in /export/scratch1/sanderse/software/petsc-3.2-p6-debug/src/sys/random/interface/randomc.c > application called MPI_Abort(MPI_COMM_WORLD, 73) - process 0 > [cli_0]: aborting job: > application called MPI_Abort(MPI_COMM_WORLD, 73) - process 0 > > > Maybe something has changed from 3.2p6 to 3.3? I do not see what is wrong with the PetscRandom object. > > Benjamin > > > ----- Original Message ----- > From: "Barry Smith" > To: "PETSc users list" > Sent: Monday, July 9, 2012 3:51:18 PM > Subject: Re: [petsc-users] random vector > > > Create and destroy the random context OUTSIDE of the loop. Each time you create it it is using the same seed hence giving the same values. > > Note that it is also intentional that if you run the code twice you get the same values each time you run it to help write and debug codes. If you want different values each time you run it you need to call PetscRandomSetSeed() then PetscRandomSeed() after creating the context > > > Barry > > On Jul 9, 2012, at 7:53 AM, Benjamin Sanderse wrote: > >> Hello all, >> >> I am trying to solve a Poisson equation several times with random right-hand side vectors in order to do parallel scalability tests. >> Here is part of the code that I use to generate a random vector: >> >> >> PetscRandom :: rctx >> >> ... >> >> do I = 1,n >> >> >> call PetscRandomCreate(PETSC_COMM_WORLD,rctx,ierr); CHKERRQ(ierr) >> call VecSetRandom(f,rctx,ierr); CHKERRQ(ierr) >> call PetscRandomDestroy(rctx,ierr); CHKERRQ(ierr) >> >> call VecView(f,PETSC_VIEWER_STDOUT_WORLD,ierr); CHKERRQ(ierr) >> >> call Poisson >> >> end do >> >> >> It appears that f does not change during the execution of the do-loop. In fact its value is even always the same for I=1 when I run the code several times. Apparently I am missing something. Can anybody help? >> >> Regards, >> >> >> Benjamin > From C.Klaij at marin.nl Mon Jul 9 09:55:55 2012 From: C.Klaij at marin.nl (Klaij, Christiaan) Date: Mon, 9 Jul 2012 14:55:55 +0000 Subject: [petsc-users] vecview problem Message-ID: > There is no such viewer as PETSC_VIEWER_DEFAULT (you just got > lucky it didn't crash in the first call). Maybe you want > PETSC_VIEWER_STDOUT_WORLD? > > Barry > > PETSC_VIEWER_DEFAULT is for setting the particular format a > viewer uses. Yes that's it, thanks! (should I really get a segmentation fault when running this in debug mode? A message "no such viewer" would be nicer.) dr. ir. Christiaan Klaij CFD Researcher Research & Development E mailto:C.Klaij at marin.nl T +31 317 49 33 44 MARIN 2, Haagsteeg, P.O. 
Box 28, 6700 AA Wageningen, The Netherlands T +31 317 49 39 11, F +31 317 49 32 45, I www.marin.nl From john.mousel at gmail.com Mon Jul 9 09:57:42 2012 From: john.mousel at gmail.com (John Mousel) Date: Mon, 9 Jul 2012 09:57:42 -0500 Subject: [petsc-users] MPICH error in KSPSolve Message-ID: I'm running on Kraken and am currently working with 4320 cores. I get the following error in KSPSolve. [2711]: (/ptmp/ulib/mpt/nightly/5.3/120211/mpich2/src/mpid/cray/src/adi/ptldev.c:2046) PtlMEInsert failed with error : PTL_NO_SPACE MHV_exe: /ptmp/ulib/mpt/nightly/5.3/120211/mpich2/src/mpid/cray/src/adi/ptldev.c:2046: MPIDI_CRAY_ptldev_desc_pkt: Assertion `0' failed. forrtl: error (76): Abort trap signal Image PC Routine Line Source MHV_exe 00000000014758CB Unknown Unknown Unknown MHV_exe 000000000182ED43 Unknown Unknown Unknown MHV_exe 0000000001829460 Unknown Unknown Unknown MHV_exe 00000000017EDE3E Unknown Unknown Unknown MHV_exe 00000000017B3FE6 Unknown Unknown Unknown MHV_exe 00000000017B3738 Unknown Unknown Unknown MHV_exe 00000000017B2B12 Unknown Unknown Unknown MHV_exe 00000000017B428F Unknown Unknown Unknown MHV_exe 000000000177FCE1 Unknown Unknown Unknown MHV_exe 0000000001590A43 Unknown Unknown Unknown MHV_exe 00000000014F909B Unknown Unknown Unknown MHV_exe 00000000014FF53B Unknown Unknown Unknown MHV_exe 00000000014A4E25 Unknown Unknown Unknown MHV_exe 0000000001487D57 Unknown Unknown Unknown MHV_exe 000000000147F726 Unknown Unknown Unknown MHV_exe 000000000137A8D3 Unknown Unknown Unknown MHV_exe 0000000000E97BF2 Unknown Unknown Unknown MHV_exe 000000000098EAF1 Unknown Unknown Unknown MHV_exe 0000000000989C20 Unknown Unknown Unknown MHV_exe 000000000097A9C2 Unknown Unknown Unknown MHV_exe 000000000082FF2D axbsolve_ 539 PetscObjectsOperations.F90 This is somewhere in KSPSolve. Is there an MPICH environment variable that needs tweaking? I couldn't really find much on this particular error. The solver is BiCGStab with Hypre as a preconditioner. -ksp_type bcgsl -pc_type hypre -pc_hypre_type boomeramg -ksp_monitor Thanks, John -------------- next part -------------- An HTML attachment was scrubbed... URL: From B.Sanderse at cwi.nl Mon Jul 9 10:21:28 2012 From: B.Sanderse at cwi.nl (Benjamin Sanderse) Date: Mon, 09 Jul 2012 17:21:28 +0200 (CEST) Subject: [petsc-users] random vector In-Reply-To: <9511CF17-24EA-4801-9F5E-03C88DFF8940@mcs.anl.gov> Message-ID: <6d08ec25-33d8-46ff-8db6-9175a3271ff0@zembox02.zaas.igi.nl> So I updated to 3.3, but still get the same error: Any ideas? [0]PETSC ERROR: --------------------- Error Message ------------------------------------ [0]PETSC ERROR: Object is in wrong state! [0]PETSC ERROR: PetscRandom object's type is not set: Argument # 1! [0]PETSC ERROR: ------------------------------------------------------------------------ [0]PETSC ERROR: Petsc Release Version 3.3.0, Patch 1, Fri Jun 15 09:30:49 CDT 2012 [0]PETSC ERROR: See docs/changes/index.html for recent updates. [0]PETSC ERROR: See docs/faq.html for hints about trouble shooting. [0]PETSC ERROR: See docs/index.html for manual pages. 
[0]PETSC ERROR: ------------------------------------------------------------------------ [0]PETSC ERROR: bin/navier-stokes on a arch-linu named slippy.sen.cwi.nl by sanderse Mon Jul 9 17:18:39 2012 [0]PETSC ERROR: Libraries linked from /export/scratch1/sanderse/software/petsc-3.3-p1_debug/arch-linux2-c-opt/lib [0]PETSC ERROR: Configure run at Mon Jul 9 16:58:35 2012 [0]PETSC ERROR: Configure options --download-mpich=1 --with-shared-libraries --download-f-blas-lapack=1 --with-fc=gfortran --with-cxx=g++ --download-hypre --with-hdf5 --download-hdf5 --with-cc=gcc [0]PETSC ERROR: ------------------------------------------------------------------------ [0]PETSC ERROR: PetscRandomSeed() line 431 in /export/scratch1/sanderse/software/petsc-3.3-p1_debug/src/sys/random/interface/randomc.c application called MPI_Abort(MPI_COMM_WORLD, 73) - process 0 [cli_0]: aborting job: application called MPI_Abort(MPI_COMM_WORLD, 73) - process 0 ----- Original Message ----- From: "Barry Smith" To: "PETSc users list" Sent: Monday, July 9, 2012 4:08:08 PM Subject: Re: [petsc-users] random vector Please update to using petsc-3.3 On Jul 9, 2012, at 9:03 AM, Benjamin Sanderse wrote: > Thanks a lot. I need to have the PetscRandomCreate inside the loop, Why? > so I will use the RandomSetSeed. > However, when running the code below I get the following error. > > PetscRandom :: rctx > > call PetscRandomCreate(PETSC_COMM_WORLD,rctx,ierr); CHKERRQ(ierr) call PetscRandomSetType(rctx,PETSCRAND,ierr); > call PetscRandomSetSeed(rctx,I,ierr); CHKERRQ(ierr) > call PetscRandomSeed(rctx,ierr); CHKERRQ(ierr) > call VecSetRandom(f,rctx,ierr); CHKERRQ(ierr) > call PetscRandomDestroy(rctx,ierr); CHKERRQ(ierr) > > In PetscRandomSetSeed, I is an integer. > > [0]PETSC ERROR: --------------------- Error Message ------------------------------------ > [0]PETSC ERROR: Object is in wrong state! > [0]PETSC ERROR: PetscRandom object's type is not set: Argument # 1! > [0]PETSC ERROR: ------------------------------------------------------------------------ > [0]PETSC ERROR: Petsc Release Version 3.2.0, Patch 6, Wed Jan 11 09:28:45 CST 2012 > [0]PETSC ERROR: See docs/changes/index.html for recent updates. > [0]PETSC ERROR: See docs/faq.html for hints about trouble shooting. > [0]PETSC ERROR: See docs/index.html for manual pages. > [0]PETSC ERROR: ------------------------------------------------------------------------ > [0]PETSC ERROR: bin/navier-stokes on a arch-linu named slippy.sen.cwi.nl by sanderse Mon Jul 9 15:58:16 2012 > [0]PETSC ERROR: Libraries linked from /export/scratch1/sanderse/software/petsc-3.2-p6-debug/arch-linux2-c-opt/lib > [0]PETSC ERROR: Configure run at Wed Feb 22 18:04:02 2012 > [0]PETSC ERROR: Configure options --download-mpich=1 --with-shared-libraries --download-f-blas-lapack=1 --with-fc=gfortran --with-cxx=g++ --download-hypre --with-hdf5 --download-hdf5 --with-cc=gcc > [0]PETSC ERROR: ------------------------------------------------------------------------ > [0]PETSC ERROR: PetscRandomSeed() line 431 in /export/scratch1/sanderse/software/petsc-3.2-p6-debug/src/sys/random/interface/randomc.c > application called MPI_Abort(MPI_COMM_WORLD, 73) - process 0 > [cli_0]: aborting job: > application called MPI_Abort(MPI_COMM_WORLD, 73) - process 0 > > > Maybe something has changed from 3.2p6 to 3.3? I do not see what is wrong with the PetscRandom object. 
> > Benjamin > > > ----- Original Message ----- > From: "Barry Smith" > To: "PETSc users list" > Sent: Monday, July 9, 2012 3:51:18 PM > Subject: Re: [petsc-users] random vector > > > Create and destroy the random context OUTSIDE of the loop. Each time you create it it is using the same seed hence giving the same values. > > Note that it is also intentional that if you run the code twice you get the same values each time you run it to help write and debug codes. If you want different values each time you run it you need to call PetscRandomSetSeed() then PetscRandomSeed() after creating the context > > > Barry > > On Jul 9, 2012, at 7:53 AM, Benjamin Sanderse wrote: > >> Hello all, >> >> I am trying to solve a Poisson equation several times with random right-hand side vectors in order to do parallel scalability tests. >> Here is part of the code that I use to generate a random vector: >> >> >> PetscRandom :: rctx >> >> ... >> >> do I = 1,n >> >> >> call PetscRandomCreate(PETSC_COMM_WORLD,rctx,ierr); CHKERRQ(ierr) >> call VecSetRandom(f,rctx,ierr); CHKERRQ(ierr) >> call PetscRandomDestroy(rctx,ierr); CHKERRQ(ierr) >> >> call VecView(f,PETSC_VIEWER_STDOUT_WORLD,ierr); CHKERRQ(ierr) >> >> call Poisson >> >> end do >> >> >> It appears that f does not change during the execution of the do-loop. In fact its value is even always the same for I=1 when I run the code several times. Apparently I am missing something. Can anybody help? >> >> Regards, >> >> >> Benjamin > From mark.adams at columbia.edu Mon Jul 9 10:40:35 2012 From: mark.adams at columbia.edu (Mark F. Adams) Date: Mon, 9 Jul 2012 11:40:35 -0400 Subject: [petsc-users] MPICH error in KSPSolve In-Reply-To: References: Message-ID: <571F44E9-941A-46C4-8109-ED97943206B5@columbia.edu> Google PTL_NO_SPACE and you will find some NERSC presentations on how to go about fixing this. (I have run into these problems years ago but forget the issues) Also, I would try running with a Jacobi solver to see if that fixes the problem. If so then you might try -pc_type gamg -pc_gamg_agg_nsmooths 1 -pc_gamg_type agg This is a built in AMG solver so perhaps it plays nicer with resources ... Mark On Jul 9, 2012, at 10:57 AM, John Mousel wrote: > I'm running on Kraken and am currently working with 4320 cores. I get the following error in KSPSolve. > > [2711]: (/ptmp/ulib/mpt/nightly/5.3/120211/mpich2/src/mpid/cray/src/adi/ptldev.c:2046) PtlMEInsert failed with error : PTL_NO_SPACE > MHV_exe: /ptmp/ulib/mpt/nightly/5.3/120211/mpich2/src/mpid/cray/src/adi/ptldev.c:2046: MPIDI_CRAY_ptldev_desc_pkt: Assertion `0' failed. 
> forrtl: error (76): Abort trap signal > Image PC Routine Line Source > MHV_exe 00000000014758CB Unknown Unknown Unknown > MHV_exe 000000000182ED43 Unknown Unknown Unknown > MHV_exe 0000000001829460 Unknown Unknown Unknown > MHV_exe 00000000017EDE3E Unknown Unknown Unknown > MHV_exe 00000000017B3FE6 Unknown Unknown Unknown > MHV_exe 00000000017B3738 Unknown Unknown Unknown > MHV_exe 00000000017B2B12 Unknown Unknown Unknown > MHV_exe 00000000017B428F Unknown Unknown Unknown > MHV_exe 000000000177FCE1 Unknown Unknown Unknown > MHV_exe 0000000001590A43 Unknown Unknown Unknown > MHV_exe 00000000014F909B Unknown Unknown Unknown > MHV_exe 00000000014FF53B Unknown Unknown Unknown > MHV_exe 00000000014A4E25 Unknown Unknown Unknown > MHV_exe 0000000001487D57 Unknown Unknown Unknown > MHV_exe 000000000147F726 Unknown Unknown Unknown > MHV_exe 000000000137A8D3 Unknown Unknown Unknown > MHV_exe 0000000000E97BF2 Unknown Unknown Unknown > MHV_exe 000000000098EAF1 Unknown Unknown Unknown > MHV_exe 0000000000989C20 Unknown Unknown Unknown > MHV_exe 000000000097A9C2 Unknown Unknown Unknown > MHV_exe 000000000082FF2D axbsolve_ 539 PetscObjectsOperations.F90 > > This is somewhere in KSPSolve. Is there an MPICH environment variable that needs tweaking? I couldn't really find much on this particular error. > The solver is BiCGStab with Hypre as a preconditioner. > > -ksp_type bcgsl -pc_type hypre -pc_hypre_type boomeramg -ksp_monitor > > Thanks, > > John From john.mousel at gmail.com Mon Jul 9 10:58:08 2012 From: john.mousel at gmail.com (John Mousel) Date: Mon, 9 Jul 2012 10:58:08 -0500 Subject: [petsc-users] MPICH error in KSPSolve In-Reply-To: <571F44E9-941A-46C4-8109-ED97943206B5@columbia.edu> References: <571F44E9-941A-46C4-8109-ED97943206B5@columbia.edu> Message-ID: Getting rid of the Hypre option seemed to be the trick. On Mon, Jul 9, 2012 at 10:40 AM, Mark F. Adams wrote: > Google PTL_NO_SPACE and you will find some NERSC presentations on how to > go about fixing this. (I have run into these problems years ago but forget > the issues) > > Also, I would try running with a Jacobi solver to see if that fixes the > problem. If so then you might try > > -pc_type gamg > -pc_gamg_agg_nsmooths 1 > -pc_gamg_type agg > > This is a built in AMG solver so perhaps it plays nicer with resources ... > > Mark > > On Jul 9, 2012, at 10:57 AM, John Mousel wrote: > > > I'm running on Kraken and am currently working with 4320 cores. I get > the following error in KSPSolve. > > > > [2711]: > (/ptmp/ulib/mpt/nightly/5.3/120211/mpich2/src/mpid/cray/src/adi/ptldev.c:2046) > PtlMEInsert failed with error : PTL_NO_SPACE > > MHV_exe: > /ptmp/ulib/mpt/nightly/5.3/120211/mpich2/src/mpid/cray/src/adi/ptldev.c:2046: > MPIDI_CRAY_ptldev_desc_pkt: Assertion `0' failed. 
> > forrtl: error (76): Abort trap signal > > Image PC Routine Line > Source > > MHV_exe 00000000014758CB Unknown Unknown > Unknown > > MHV_exe 000000000182ED43 Unknown Unknown > Unknown > > MHV_exe 0000000001829460 Unknown Unknown > Unknown > > MHV_exe 00000000017EDE3E Unknown Unknown > Unknown > > MHV_exe 00000000017B3FE6 Unknown Unknown > Unknown > > MHV_exe 00000000017B3738 Unknown Unknown > Unknown > > MHV_exe 00000000017B2B12 Unknown Unknown > Unknown > > MHV_exe 00000000017B428F Unknown Unknown > Unknown > > MHV_exe 000000000177FCE1 Unknown Unknown > Unknown > > MHV_exe 0000000001590A43 Unknown Unknown > Unknown > > MHV_exe 00000000014F909B Unknown Unknown > Unknown > > MHV_exe 00000000014FF53B Unknown Unknown > Unknown > > MHV_exe 00000000014A4E25 Unknown Unknown > Unknown > > MHV_exe 0000000001487D57 Unknown Unknown > Unknown > > MHV_exe 000000000147F726 Unknown Unknown > Unknown > > MHV_exe 000000000137A8D3 Unknown Unknown > Unknown > > MHV_exe 0000000000E97BF2 Unknown Unknown > Unknown > > MHV_exe 000000000098EAF1 Unknown Unknown > Unknown > > MHV_exe 0000000000989C20 Unknown Unknown > Unknown > > MHV_exe 000000000097A9C2 Unknown Unknown > Unknown > > MHV_exe 000000000082FF2D axbsolve_ 539 > PetscObjectsOperations.F90 > > > > This is somewhere in KSPSolve. Is there an MPICH environment variable > that needs tweaking? I couldn't really find much on this particular error. > > The solver is BiCGStab with Hypre as a preconditioner. > > > > -ksp_type bcgsl -pc_type hypre -pc_hypre_type boomeramg -ksp_monitor > > > > Thanks, > > > > John > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From john.mousel at gmail.com Mon Jul 9 11:17:11 2012 From: john.mousel at gmail.com (John Mousel) Date: Mon, 9 Jul 2012 11:17:11 -0500 Subject: [petsc-users] MPICH error in KSPSolve In-Reply-To: References: <571F44E9-941A-46C4-8109-ED97943206B5@columbia.edu> Message-ID: Mark, I still haven't had much luck getting GAMG to work consistently for my Poisson problem. ML seems to work nicely on low core counts, but I have a problem where I can get long thin portions of grid on some processors instead of nice block like chunks at high core counts, which leads to a pretty tough time for ML. John On Mon, Jul 9, 2012 at 10:58 AM, John Mousel wrote: > Getting rid of the Hypre option seemed to be the trick. > > On Mon, Jul 9, 2012 at 10:40 AM, Mark F. Adams wrote: > >> Google PTL_NO_SPACE and you will find some NERSC presentations on how to >> go about fixing this. (I have run into these problems years ago but forget >> the issues) >> >> Also, I would try running with a Jacobi solver to see if that fixes the >> problem. If so then you might try >> >> -pc_type gamg >> -pc_gamg_agg_nsmooths 1 >> -pc_gamg_type agg >> >> This is a built in AMG solver so perhaps it plays nicer with resources ... >> >> Mark >> >> On Jul 9, 2012, at 10:57 AM, John Mousel wrote: >> >> > I'm running on Kraken and am currently working with 4320 cores. I get >> the following error in KSPSolve. >> > >> > [2711]: >> (/ptmp/ulib/mpt/nightly/5.3/120211/mpich2/src/mpid/cray/src/adi/ptldev.c:2046) >> PtlMEInsert failed with error : PTL_NO_SPACE >> > MHV_exe: >> /ptmp/ulib/mpt/nightly/5.3/120211/mpich2/src/mpid/cray/src/adi/ptldev.c:2046: >> MPIDI_CRAY_ptldev_desc_pkt: Assertion `0' failed. 
>> > forrtl: error (76): Abort trap signal >> > Image PC Routine Line >> Source >> > MHV_exe 00000000014758CB Unknown Unknown >> Unknown >> > MHV_exe 000000000182ED43 Unknown Unknown >> Unknown >> > MHV_exe 0000000001829460 Unknown Unknown >> Unknown >> > MHV_exe 00000000017EDE3E Unknown Unknown >> Unknown >> > MHV_exe 00000000017B3FE6 Unknown Unknown >> Unknown >> > MHV_exe 00000000017B3738 Unknown Unknown >> Unknown >> > MHV_exe 00000000017B2B12 Unknown Unknown >> Unknown >> > MHV_exe 00000000017B428F Unknown Unknown >> Unknown >> > MHV_exe 000000000177FCE1 Unknown Unknown >> Unknown >> > MHV_exe 0000000001590A43 Unknown Unknown >> Unknown >> > MHV_exe 00000000014F909B Unknown Unknown >> Unknown >> > MHV_exe 00000000014FF53B Unknown Unknown >> Unknown >> > MHV_exe 00000000014A4E25 Unknown Unknown >> Unknown >> > MHV_exe 0000000001487D57 Unknown Unknown >> Unknown >> > MHV_exe 000000000147F726 Unknown Unknown >> Unknown >> > MHV_exe 000000000137A8D3 Unknown Unknown >> Unknown >> > MHV_exe 0000000000E97BF2 Unknown Unknown >> Unknown >> > MHV_exe 000000000098EAF1 Unknown Unknown >> Unknown >> > MHV_exe 0000000000989C20 Unknown Unknown >> Unknown >> > MHV_exe 000000000097A9C2 Unknown Unknown >> Unknown >> > MHV_exe 000000000082FF2D axbsolve_ 539 >> PetscObjectsOperations.F90 >> > >> > This is somewhere in KSPSolve. Is there an MPICH environment variable >> that needs tweaking? I couldn't really find much on this particular error. >> > The solver is BiCGStab with Hypre as a preconditioner. >> > >> > -ksp_type bcgsl -pc_type hypre -pc_hypre_type boomeramg -ksp_monitor >> > >> > Thanks, >> > >> > John >> >> > -------------- next part -------------- An HTML attachment was scrubbed... URL: From mark.adams at columbia.edu Mon Jul 9 11:30:26 2012 From: mark.adams at columbia.edu (Mark F. Adams) Date: Mon, 9 Jul 2012 12:30:26 -0400 Subject: [petsc-users] MPICH error in KSPSolve In-Reply-To: References: <571F44E9-941A-46C4-8109-ED97943206B5@columbia.edu> Message-ID: What problems are you having again with GAMG? Are you problems unsymmetric? ML has several coarsening strategies available and I think the default does aggregation locally and does not aggregate across processor subdomains. If you have poorly shaped domains then you want to use a global coarsening method (these are not expensive). Mark On Jul 9, 2012, at 12:17 PM, John Mousel wrote: > Mark, > > I still haven't had much luck getting GAMG to work consistently for my Poisson problem. ML seems to work nicely on low core counts, but I have a problem where I can get long thin portions of grid on some processors instead of nice block like chunks at high core counts, which leads to a pretty tough time for ML. > > John > > On Mon, Jul 9, 2012 at 10:58 AM, John Mousel wrote: > Getting rid of the Hypre option seemed to be the trick. > > On Mon, Jul 9, 2012 at 10:40 AM, Mark F. Adams wrote: > Google PTL_NO_SPACE and you will find some NERSC presentations on how to go about fixing this. (I have run into these problems years ago but forget the issues) > > Also, I would try running with a Jacobi solver to see if that fixes the problem. If so then you might try > > -pc_type gamg > -pc_gamg_agg_nsmooths 1 > -pc_gamg_type agg > > This is a built in AMG solver so perhaps it plays nicer with resources ... > > Mark > > On Jul 9, 2012, at 10:57 AM, John Mousel wrote: > > > I'm running on Kraken and am currently working with 4320 cores. I get the following error in KSPSolve. 
> > > > [2711]: (/ptmp/ulib/mpt/nightly/5.3/120211/mpich2/src/mpid/cray/src/adi/ptldev.c:2046) PtlMEInsert failed with error : PTL_NO_SPACE > > MHV_exe: /ptmp/ulib/mpt/nightly/5.3/120211/mpich2/src/mpid/cray/src/adi/ptldev.c:2046: MPIDI_CRAY_ptldev_desc_pkt: Assertion `0' failed. > > forrtl: error (76): Abort trap signal > > Image PC Routine Line Source > > MHV_exe 00000000014758CB Unknown Unknown Unknown > > MHV_exe 000000000182ED43 Unknown Unknown Unknown > > MHV_exe 0000000001829460 Unknown Unknown Unknown > > MHV_exe 00000000017EDE3E Unknown Unknown Unknown > > MHV_exe 00000000017B3FE6 Unknown Unknown Unknown > > MHV_exe 00000000017B3738 Unknown Unknown Unknown > > MHV_exe 00000000017B2B12 Unknown Unknown Unknown > > MHV_exe 00000000017B428F Unknown Unknown Unknown > > MHV_exe 000000000177FCE1 Unknown Unknown Unknown > > MHV_exe 0000000001590A43 Unknown Unknown Unknown > > MHV_exe 00000000014F909B Unknown Unknown Unknown > > MHV_exe 00000000014FF53B Unknown Unknown Unknown > > MHV_exe 00000000014A4E25 Unknown Unknown Unknown > > MHV_exe 0000000001487D57 Unknown Unknown Unknown > > MHV_exe 000000000147F726 Unknown Unknown Unknown > > MHV_exe 000000000137A8D3 Unknown Unknown Unknown > > MHV_exe 0000000000E97BF2 Unknown Unknown Unknown > > MHV_exe 000000000098EAF1 Unknown Unknown Unknown > > MHV_exe 0000000000989C20 Unknown Unknown Unknown > > MHV_exe 000000000097A9C2 Unknown Unknown Unknown > > MHV_exe 000000000082FF2D axbsolve_ 539 PetscObjectsOperations.F90 > > > > This is somewhere in KSPSolve. Is there an MPICH environment variable that needs tweaking? I couldn't really find much on this particular error. > > The solver is BiCGStab with Hypre as a preconditioner. > > > > -ksp_type bcgsl -pc_type hypre -pc_hypre_type boomeramg -ksp_monitor > > > > Thanks, > > > > John > > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From john.mousel at gmail.com Mon Jul 9 11:39:34 2012 From: john.mousel at gmail.com (John Mousel) Date: Mon, 9 Jul 2012 11:39:34 -0500 Subject: [petsc-users] MPICH error in KSPSolve In-Reply-To: References: <571F44E9-941A-46C4-8109-ED97943206B5@columbia.edu> Message-ID: Mark, The problem is indeed non-symmetric. We went back and forth in March about this problem. I think we ended up concluding that the coarse size couldn't get too small or the null-space presented problems. When I did get it to work, I tried to scale it up, and on my local university cluster, it seemed to just hang when the core counts got above something like 16 cores. I don't really trust that machine though. It's new and has been plagued by hardware incompatability issues since day 1. I could re-examine this on Kraken. Also, what option are you talking about with ML. I thought I had tried all the -pc_ml_CoarsenScheme options, but I could be wrong. John On Mon, Jul 9, 2012 at 11:30 AM, Mark F. Adams wrote: > What problems are you having again with GAMG? Are you problems > unsymmetric? > > ML has several coarsening strategies available and I think the default > does aggregation locally and does not aggregate across processor > subdomains. If you have poorly shaped domains then you want to use a > global coarsening method (these are not expensive). > > Mark > > On Jul 9, 2012, at 12:17 PM, John Mousel wrote: > > Mark, > > I still haven't had much luck getting GAMG to work consistently for my > Poisson problem. 
ML seems to work nicely on low core counts, but I have a > problem where I can get long thin portions of grid on some processors > instead of nice block like chunks at high core counts, which leads to a > pretty tough time for ML. > > John > > On Mon, Jul 9, 2012 at 10:58 AM, John Mousel wrote: > >> Getting rid of the Hypre option seemed to be the trick. >> >> On Mon, Jul 9, 2012 at 10:40 AM, Mark F. Adams wrote: >> >>> Google PTL_NO_SPACE and you will find some NERSC presentations on how to >>> go about fixing this. (I have run into these problems years ago but forget >>> the issues) >>> >>> Also, I would try running with a Jacobi solver to see if that fixes the >>> problem. If so then you might try >>> >>> -pc_type gamg >>> -pc_gamg_agg_nsmooths 1 >>> -pc_gamg_type agg >>> >>> This is a built in AMG solver so perhaps it plays nicer with resources >>> ... >>> >>> Mark >>> >>> On Jul 9, 2012, at 10:57 AM, John Mousel wrote: >>> >>> > I'm running on Kraken and am currently working with 4320 cores. I get >>> the following error in KSPSolve. >>> > >>> > [2711]: >>> (/ptmp/ulib/mpt/nightly/5.3/120211/mpich2/src/mpid/cray/src/adi/ptldev.c:2046) >>> PtlMEInsert failed with error : PTL_NO_SPACE >>> > MHV_exe: >>> /ptmp/ulib/mpt/nightly/5.3/120211/mpich2/src/mpid/cray/src/adi/ptldev.c:2046: >>> MPIDI_CRAY_ptldev_desc_pkt: Assertion `0' failed. >>> > forrtl: error (76): Abort trap signal >>> > Image PC Routine Line >>> Source >>> > MHV_exe 00000000014758CB Unknown Unknown >>> Unknown >>> > MHV_exe 000000000182ED43 Unknown Unknown >>> Unknown >>> > MHV_exe 0000000001829460 Unknown Unknown >>> Unknown >>> > MHV_exe 00000000017EDE3E Unknown Unknown >>> Unknown >>> > MHV_exe 00000000017B3FE6 Unknown Unknown >>> Unknown >>> > MHV_exe 00000000017B3738 Unknown Unknown >>> Unknown >>> > MHV_exe 00000000017B2B12 Unknown Unknown >>> Unknown >>> > MHV_exe 00000000017B428F Unknown Unknown >>> Unknown >>> > MHV_exe 000000000177FCE1 Unknown Unknown >>> Unknown >>> > MHV_exe 0000000001590A43 Unknown Unknown >>> Unknown >>> > MHV_exe 00000000014F909B Unknown Unknown >>> Unknown >>> > MHV_exe 00000000014FF53B Unknown Unknown >>> Unknown >>> > MHV_exe 00000000014A4E25 Unknown Unknown >>> Unknown >>> > MHV_exe 0000000001487D57 Unknown Unknown >>> Unknown >>> > MHV_exe 000000000147F726 Unknown Unknown >>> Unknown >>> > MHV_exe 000000000137A8D3 Unknown Unknown >>> Unknown >>> > MHV_exe 0000000000E97BF2 Unknown Unknown >>> Unknown >>> > MHV_exe 000000000098EAF1 Unknown Unknown >>> Unknown >>> > MHV_exe 0000000000989C20 Unknown Unknown >>> Unknown >>> > MHV_exe 000000000097A9C2 Unknown Unknown >>> Unknown >>> > MHV_exe 000000000082FF2D axbsolve_ 539 >>> PetscObjectsOperations.F90 >>> > >>> > This is somewhere in KSPSolve. Is there an MPICH environment variable >>> that needs tweaking? I couldn't really find much on this particular error. >>> > The solver is BiCGStab with Hypre as a preconditioner. >>> > >>> > -ksp_type bcgsl -pc_type hypre -pc_hypre_type boomeramg -ksp_monitor >>> > >>> > Thanks, >>> > >>> > John >>> >>> >> > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From mark.adams at columbia.edu Mon Jul 9 11:52:54 2012 From: mark.adams at columbia.edu (Mark F. 
Adams) Date: Mon, 9 Jul 2012 12:52:54 -0400 Subject: [petsc-users] MPICH error in KSPSolve In-Reply-To: References: <571F44E9-941A-46C4-8109-ED97943206B5@columbia.edu> Message-ID: <8445905D-EC09-4557-B9D0-3FE2100AFA2B@columbia.edu> On Jul 9, 2012, at 12:39 PM, John Mousel wrote: > Mark, > > The problem is indeed non-symmetric. We went back and forth in March about this problem. I think we ended up concluding that the coarse size couldn't get too small or the null-space presented problems. Oh its singular. I forget what the issues were but an iterative coarse grid solver should be fine for singular problems, perhaps with null space cleaning if the kernel is sneaking in. Actually there is an SVD coarse grid solver: -mg_coarse_pc_type svd That is the most robust. > When I did get it to work, I tried to scale it up, and on my local university cluster, it seemed to just hang when the core counts got above something like 16 cores. I don't really trust that machine though. That's the machine. GAMG does have some issues but I've not seen it hang. > It's new and has been plagued by hardware incompatability issues since day 1. I could re-examine this on Kraken. Also, what option are you talking about with ML. I thought I had tried all the -pc_ml_CoarsenScheme options, but I could be wrong. This sounds like the right one. I try to be careful in my solvers to be invariant to subdomain shapes and sizes and I think Ray Tuminaro (ML developer) at least has options that should be careful about this also. But I don't know much about what they are deploying these days. Mark > > John > > > > On Mon, Jul 9, 2012 at 11:30 AM, Mark F. Adams wrote: > What problems are you having again with GAMG? Are you problems unsymmetric? > > ML has several coarsening strategies available and I think the default does aggregation locally and does not aggregate across processor subdomains. If you have poorly shaped domains then you want to use a global coarsening method (these are not expensive). > > Mark > > On Jul 9, 2012, at 12:17 PM, John Mousel wrote: > >> Mark, >> >> I still haven't had much luck getting GAMG to work consistently for my Poisson problem. ML seems to work nicely on low core counts, but I have a problem where I can get long thin portions of grid on some processors instead of nice block like chunks at high core counts, which leads to a pretty tough time for ML. >> >> John >> >> On Mon, Jul 9, 2012 at 10:58 AM, John Mousel wrote: >> Getting rid of the Hypre option seemed to be the trick. >> >> On Mon, Jul 9, 2012 at 10:40 AM, Mark F. Adams wrote: >> Google PTL_NO_SPACE and you will find some NERSC presentations on how to go about fixing this. (I have run into these problems years ago but forget the issues) >> >> Also, I would try running with a Jacobi solver to see if that fixes the problem. If so then you might try >> >> -pc_type gamg >> -pc_gamg_agg_nsmooths 1 >> -pc_gamg_type agg >> >> This is a built in AMG solver so perhaps it plays nicer with resources ... >> >> Mark >> >> On Jul 9, 2012, at 10:57 AM, John Mousel wrote: >> >> > I'm running on Kraken and am currently working with 4320 cores. I get the following error in KSPSolve. >> > >> > [2711]: (/ptmp/ulib/mpt/nightly/5.3/120211/mpich2/src/mpid/cray/src/adi/ptldev.c:2046) PtlMEInsert failed with error : PTL_NO_SPACE >> > MHV_exe: /ptmp/ulib/mpt/nightly/5.3/120211/mpich2/src/mpid/cray/src/adi/ptldev.c:2046: MPIDI_CRAY_ptldev_desc_pkt: Assertion `0' failed. 
>> > forrtl: error (76): Abort trap signal >> > Image PC Routine Line Source >> > MHV_exe 00000000014758CB Unknown Unknown Unknown >> > MHV_exe 000000000182ED43 Unknown Unknown Unknown >> > MHV_exe 0000000001829460 Unknown Unknown Unknown >> > MHV_exe 00000000017EDE3E Unknown Unknown Unknown >> > MHV_exe 00000000017B3FE6 Unknown Unknown Unknown >> > MHV_exe 00000000017B3738 Unknown Unknown Unknown >> > MHV_exe 00000000017B2B12 Unknown Unknown Unknown >> > MHV_exe 00000000017B428F Unknown Unknown Unknown >> > MHV_exe 000000000177FCE1 Unknown Unknown Unknown >> > MHV_exe 0000000001590A43 Unknown Unknown Unknown >> > MHV_exe 00000000014F909B Unknown Unknown Unknown >> > MHV_exe 00000000014FF53B Unknown Unknown Unknown >> > MHV_exe 00000000014A4E25 Unknown Unknown Unknown >> > MHV_exe 0000000001487D57 Unknown Unknown Unknown >> > MHV_exe 000000000147F726 Unknown Unknown Unknown >> > MHV_exe 000000000137A8D3 Unknown Unknown Unknown >> > MHV_exe 0000000000E97BF2 Unknown Unknown Unknown >> > MHV_exe 000000000098EAF1 Unknown Unknown Unknown >> > MHV_exe 0000000000989C20 Unknown Unknown Unknown >> > MHV_exe 000000000097A9C2 Unknown Unknown Unknown >> > MHV_exe 000000000082FF2D axbsolve_ 539 PetscObjectsOperations.F90 >> > >> > This is somewhere in KSPSolve. Is there an MPICH environment variable that needs tweaking? I couldn't really find much on this particular error. >> > The solver is BiCGStab with Hypre as a preconditioner. >> > >> > -ksp_type bcgsl -pc_type hypre -pc_hypre_type boomeramg -ksp_monitor >> > >> > Thanks, >> > >> > John >> >> >> > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From bsmith at mcs.anl.gov Mon Jul 9 12:57:56 2012 From: bsmith at mcs.anl.gov (Barry Smith) Date: Mon, 9 Jul 2012 12:57:56 -0500 Subject: [petsc-users] random vector In-Reply-To: <6d08ec25-33d8-46ff-8db6-9175a3271ff0@zembox02.zaas.igi.nl> References: <6d08ec25-33d8-46ff-8db6-9175a3271ff0@zembox02.zaas.igi.nl> Message-ID: <3AB21CB2-9098-4EFB-BE1C-FDFA7E81CE08@mcs.anl.gov> Did you call PetscRandomSetType(rctx,PETSCRAND,ierr); ? On Jul 9, 2012, at 10:21 AM, Benjamin Sanderse wrote: > So I updated to 3.3, but still get the same error: > > Any ideas? > > [0]PETSC ERROR: --------------------- Error Message ------------------------------------ > [0]PETSC ERROR: Object is in wrong state! > [0]PETSC ERROR: PetscRandom object's type is not set: Argument # 1! > [0]PETSC ERROR: ------------------------------------------------------------------------ > [0]PETSC ERROR: Petsc Release Version 3.3.0, Patch 1, Fri Jun 15 09:30:49 CDT 2012 > [0]PETSC ERROR: See docs/changes/index.html for recent updates. > [0]PETSC ERROR: See docs/faq.html for hints about trouble shooting. > [0]PETSC ERROR: See docs/index.html for manual pages. 
> [0]PETSC ERROR: ------------------------------------------------------------------------ > [0]PETSC ERROR: bin/navier-stokes on a arch-linu named slippy.sen.cwi.nl by sanderse Mon Jul 9 17:18:39 2012 > [0]PETSC ERROR: Libraries linked from /export/scratch1/sanderse/software/petsc-3.3-p1_debug/arch-linux2-c-opt/lib > [0]PETSC ERROR: Configure run at Mon Jul 9 16:58:35 2012 > [0]PETSC ERROR: Configure options --download-mpich=1 --with-shared-libraries --download-f-blas-lapack=1 --with-fc=gfortran --with-cxx=g++ --download-hypre --with-hdf5 --download-hdf5 --with-cc=gcc > [0]PETSC ERROR: ------------------------------------------------------------------------ > [0]PETSC ERROR: PetscRandomSeed() line 431 in /export/scratch1/sanderse/software/petsc-3.3-p1_debug/src/sys/random/interface/randomc.c > application called MPI_Abort(MPI_COMM_WORLD, 73) - process 0 > [cli_0]: aborting job: > application called MPI_Abort(MPI_COMM_WORLD, 73) - process 0 > > > ----- Original Message ----- > From: "Barry Smith" > To: "PETSc users list" > Sent: Monday, July 9, 2012 4:08:08 PM > Subject: Re: [petsc-users] random vector > > > Please update to using petsc-3.3 > > On Jul 9, 2012, at 9:03 AM, Benjamin Sanderse wrote: > >> Thanks a lot. I need to have the PetscRandomCreate inside the loop, > > Why? > >> so I will use the RandomSetSeed. >> However, when running the code below I get the following error. >> >> PetscRandom :: rctx >> >> call PetscRandomCreate(PETSC_COMM_WORLD,rctx,ierr); CHKERRQ(ierr) > > call PetscRandomSetType(rctx,PETSCRAND,ierr); > >> call PetscRandomSetSeed(rctx,I,ierr); CHKERRQ(ierr) >> call PetscRandomSeed(rctx,ierr); CHKERRQ(ierr) >> call VecSetRandom(f,rctx,ierr); CHKERRQ(ierr) >> call PetscRandomDestroy(rctx,ierr); CHKERRQ(ierr) >> >> In PetscRandomSetSeed, I is an integer. >> >> [0]PETSC ERROR: --------------------- Error Message ------------------------------------ >> [0]PETSC ERROR: Object is in wrong state! >> [0]PETSC ERROR: PetscRandom object's type is not set: Argument # 1! >> [0]PETSC ERROR: ------------------------------------------------------------------------ >> [0]PETSC ERROR: Petsc Release Version 3.2.0, Patch 6, Wed Jan 11 09:28:45 CST 2012 >> [0]PETSC ERROR: See docs/changes/index.html for recent updates. >> [0]PETSC ERROR: See docs/faq.html for hints about trouble shooting. >> [0]PETSC ERROR: See docs/index.html for manual pages. >> [0]PETSC ERROR: ------------------------------------------------------------------------ >> [0]PETSC ERROR: bin/navier-stokes on a arch-linu named slippy.sen.cwi.nl by sanderse Mon Jul 9 15:58:16 2012 >> [0]PETSC ERROR: Libraries linked from /export/scratch1/sanderse/software/petsc-3.2-p6-debug/arch-linux2-c-opt/lib >> [0]PETSC ERROR: Configure run at Wed Feb 22 18:04:02 2012 >> [0]PETSC ERROR: Configure options --download-mpich=1 --with-shared-libraries --download-f-blas-lapack=1 --with-fc=gfortran --with-cxx=g++ --download-hypre --with-hdf5 --download-hdf5 --with-cc=gcc >> [0]PETSC ERROR: ------------------------------------------------------------------------ >> [0]PETSC ERROR: PetscRandomSeed() line 431 in /export/scratch1/sanderse/software/petsc-3.2-p6-debug/src/sys/random/interface/randomc.c >> application called MPI_Abort(MPI_COMM_WORLD, 73) - process 0 >> [cli_0]: aborting job: >> application called MPI_Abort(MPI_COMM_WORLD, 73) - process 0 >> >> >> Maybe something has changed from 3.2p6 to 3.3? I do not see what is wrong with the PetscRandom object. 
>> >> Benjamin >> >> >> ----- Original Message ----- >> From: "Barry Smith" >> To: "PETSc users list" >> Sent: Monday, July 9, 2012 3:51:18 PM >> Subject: Re: [petsc-users] random vector >> >> >> Create and destroy the random context OUTSIDE of the loop. Each time you create it it is using the same seed hence giving the same values. >> >> Note that it is also intentional that if you run the code twice you get the same values each time you run it to help write and debug codes. If you want different values each time you run it you need to call PetscRandomSetSeed() then PetscRandomSeed() after creating the context >> >> >> Barry >> >> On Jul 9, 2012, at 7:53 AM, Benjamin Sanderse wrote: >> >>> Hello all, >>> >>> I am trying to solve a Poisson equation several times with random right-hand side vectors in order to do parallel scalability tests. >>> Here is part of the code that I use to generate a random vector: >>> >>> >>> PetscRandom :: rctx >>> >>> ... >>> >>> do I = 1,n >>> >>> >>> call PetscRandomCreate(PETSC_COMM_WORLD,rctx,ierr); CHKERRQ(ierr) >>> call VecSetRandom(f,rctx,ierr); CHKERRQ(ierr) >>> call PetscRandomDestroy(rctx,ierr); CHKERRQ(ierr) >>> >>> call VecView(f,PETSC_VIEWER_STDOUT_WORLD,ierr); CHKERRQ(ierr) >>> >>> call Poisson >>> >>> end do >>> >>> >>> It appears that f does not change during the execution of the do-loop. In fact its value is even always the same for I=1 when I run the code several times. Apparently I am missing something. Can anybody help? >>> >>> Regards, >>> >>> >>> Benjamin >> > From bsmith at mcs.anl.gov Mon Jul 9 13:04:11 2012 From: bsmith at mcs.anl.gov (Barry Smith) Date: Mon, 9 Jul 2012 13:04:11 -0500 Subject: [petsc-users] vecview problem In-Reply-To: References: Message-ID: On Jul 9, 2012, at 9:55 AM, Klaij, Christiaan wrote: >> There is no such viewer as PETSC_VIEWER_DEFAULT (you just got >> lucky it didn't crash in the first call). Maybe you want >> PETSC_VIEWER_STDOUT_WORLD? >> >> Barry >> >> PETSC_VIEWER_DEFAULT is for setting the particular format a >> viewer uses. > > Yes that's it, thanks! (should I really get a segmentation fault > when running this in debug mode? A message "no such viewer" would > be nicer.) The reason it cannot give a nice error message is FORTRAN :-). PETSC_VIEWER_DEFAULT is an integer value while the PETSC_VIEWER_STDOUT_WORLD etc in FORTRAN are special integer values cast to pointers, the compiler doesn't understand this and our conversion routine cannot do a runtime check since the integer values overlap. Sorry Barry > > > dr. ir. Christiaan Klaij > CFD Researcher > Research & Development > E mailto:C.Klaij at marin.nl > T +31 317 49 33 44 > > MARIN > 2, Haagsteeg, P.O. Box 28, 6700 AA Wageningen, The Netherlands > T +31 317 49 39 11, F +31 317 49 32 45, I www.marin.nl > From jwiens at sfu.ca Mon Jul 9 14:11:22 2012 From: jwiens at sfu.ca (Jeff Wiens) Date: Mon, 9 Jul 2012 12:11:22 -0700 Subject: [petsc-users] Outputting Petsc data to Visit Message-ID: I am storing a multidimensional solution to a pde in a petsc vector using a distributed array. I would like to output this vector at different time-steps so that it can visualized in VisIt. I want to know the recommended way of doing this in petsc 3.2-p7. I assumed that the HDF5 viewer would be the most appropriate option. However, I am running into several obstacles when using it: - I can't get PetscViewerHDF5SetTimestep/PetscViewerHDF5IncrementTimestep/PetscViewerHDF5PushGroup to work. 
- I can't find a way to include the spatial coordinates into the hdf5 file (and for it to be used by visit). - It is not immediate obvious to me how a vector field should be saved so it is understood as a vector field in visit. First, I would like to know how other users are outputting petsc data for VisIt. Secondly, for users using HDF5, has anyone run into similar issues. Jeff From B.Sanderse at cwi.nl Mon Jul 9 14:13:43 2012 From: B.Sanderse at cwi.nl (Benjamin Sanderse) Date: Mon, 9 Jul 2012 21:13:43 +0200 Subject: [petsc-users] random vector In-Reply-To: <3AB21CB2-9098-4EFB-BE1C-FDFA7E81CE08@mcs.anl.gov> References: <6d08ec25-33d8-46ff-8db6-9175a3271ff0@zembox02.zaas.igi.nl> <3AB21CB2-9098-4EFB-BE1C-FDFA7E81CE08@mcs.anl.gov> Message-ID: With that call I get: /export/scratch1/sanderse/Programming/windfarm/svn.cwi.nl/branches/benjamin/navier-stokes/fortran/3D/src/time_RK.F:202: undefined reference to `petscrandomsettype_' Fortunately, I tried the following. I added -random_type rand to the execution line, and used call PetscRandomSetFromOptions(rctx,ierr); CHKERRQ(ierr) That works fine. Benjamin Op 9 jul 2012, om 19:57 heeft Barry Smith het volgende geschreven: > > Did you call PetscRandomSetType(rctx,PETSCRAND,ierr); ? > > > On Jul 9, 2012, at 10:21 AM, Benjamin Sanderse wrote: > >> So I updated to 3.3, but still get the same error: >> >> Any ideas? >> >> [0]PETSC ERROR: --------------------- Error Message ------------------------------------ >> [0]PETSC ERROR: Object is in wrong state! >> [0]PETSC ERROR: PetscRandom object's type is not set: Argument # 1! >> [0]PETSC ERROR: ------------------------------------------------------------------------ >> [0]PETSC ERROR: Petsc Release Version 3.3.0, Patch 1, Fri Jun 15 09:30:49 CDT 2012 >> [0]PETSC ERROR: See docs/changes/index.html for recent updates. >> [0]PETSC ERROR: See docs/faq.html for hints about trouble shooting. >> [0]PETSC ERROR: See docs/index.html for manual pages. >> [0]PETSC ERROR: ------------------------------------------------------------------------ >> [0]PETSC ERROR: bin/navier-stokes on a arch-linu named slippy.sen.cwi.nl by sanderse Mon Jul 9 17:18:39 2012 >> [0]PETSC ERROR: Libraries linked from /export/scratch1/sanderse/software/petsc-3.3-p1_debug/arch-linux2-c-opt/lib >> [0]PETSC ERROR: Configure run at Mon Jul 9 16:58:35 2012 >> [0]PETSC ERROR: Configure options --download-mpich=1 --with-shared-libraries --download-f-blas-lapack=1 --with-fc=gfortran --with-cxx=g++ --download-hypre --with-hdf5 --download-hdf5 --with-cc=gcc >> [0]PETSC ERROR: ------------------------------------------------------------------------ >> [0]PETSC ERROR: PetscRandomSeed() line 431 in /export/scratch1/sanderse/software/petsc-3.3-p1_debug/src/sys/random/interface/randomc.c >> application called MPI_Abort(MPI_COMM_WORLD, 73) - process 0 >> [cli_0]: aborting job: >> application called MPI_Abort(MPI_COMM_WORLD, 73) - process 0 >> >> >> ----- Original Message ----- >> From: "Barry Smith" >> To: "PETSc users list" >> Sent: Monday, July 9, 2012 4:08:08 PM >> Subject: Re: [petsc-users] random vector >> >> >> Please update to using petsc-3.3 >> >> On Jul 9, 2012, at 9:03 AM, Benjamin Sanderse wrote: >> >>> Thanks a lot. I need to have the PetscRandomCreate inside the loop, >> >> Why? >> >>> so I will use the RandomSetSeed. >>> However, when running the code below I get the following error. 
>>> >>> PetscRandom :: rctx >>> >>> call PetscRandomCreate(PETSC_COMM_WORLD,rctx,ierr); CHKERRQ(ierr) >> >> call PetscRandomSetType(rctx,PETSCRAND,ierr); >> >>> call PetscRandomSetSeed(rctx,I,ierr); CHKERRQ(ierr) >>> call PetscRandomSeed(rctx,ierr); CHKERRQ(ierr) >>> call VecSetRandom(f,rctx,ierr); CHKERRQ(ierr) >>> call PetscRandomDestroy(rctx,ierr); CHKERRQ(ierr) >>> >>> In PetscRandomSetSeed, I is an integer. >>> >>> [0]PETSC ERROR: --------------------- Error Message ------------------------------------ >>> [0]PETSC ERROR: Object is in wrong state! >>> [0]PETSC ERROR: PetscRandom object's type is not set: Argument # 1! >>> [0]PETSC ERROR: ------------------------------------------------------------------------ >>> [0]PETSC ERROR: Petsc Release Version 3.2.0, Patch 6, Wed Jan 11 09:28:45 CST 2012 >>> [0]PETSC ERROR: See docs/changes/index.html for recent updates. >>> [0]PETSC ERROR: See docs/faq.html for hints about trouble shooting. >>> [0]PETSC ERROR: See docs/index.html for manual pages. >>> [0]PETSC ERROR: ------------------------------------------------------------------------ >>> [0]PETSC ERROR: bin/navier-stokes on a arch-linu named slippy.sen.cwi.nl by sanderse Mon Jul 9 15:58:16 2012 >>> [0]PETSC ERROR: Libraries linked from /export/scratch1/sanderse/software/petsc-3.2-p6-debug/arch-linux2-c-opt/lib >>> [0]PETSC ERROR: Configure run at Wed Feb 22 18:04:02 2012 >>> [0]PETSC ERROR: Configure options --download-mpich=1 --with-shared-libraries --download-f-blas-lapack=1 --with-fc=gfortran --with-cxx=g++ --download-hypre --with-hdf5 --download-hdf5 --with-cc=gcc >>> [0]PETSC ERROR: ------------------------------------------------------------------------ >>> [0]PETSC ERROR: PetscRandomSeed() line 431 in /export/scratch1/sanderse/software/petsc-3.2-p6-debug/src/sys/random/interface/randomc.c >>> application called MPI_Abort(MPI_COMM_WORLD, 73) - process 0 >>> [cli_0]: aborting job: >>> application called MPI_Abort(MPI_COMM_WORLD, 73) - process 0 >>> >>> >>> Maybe something has changed from 3.2p6 to 3.3? I do not see what is wrong with the PetscRandom object. >>> >>> Benjamin >>> >>> >>> ----- Original Message ----- >>> From: "Barry Smith" >>> To: "PETSc users list" >>> Sent: Monday, July 9, 2012 3:51:18 PM >>> Subject: Re: [petsc-users] random vector >>> >>> >>> Create and destroy the random context OUTSIDE of the loop. Each time you create it it is using the same seed hence giving the same values. >>> >>> Note that it is also intentional that if you run the code twice you get the same values each time you run it to help write and debug codes. If you want different values each time you run it you need to call PetscRandomSetSeed() then PetscRandomSeed() after creating the context >>> >>> >>> Barry >>> >>> On Jul 9, 2012, at 7:53 AM, Benjamin Sanderse wrote: >>> >>>> Hello all, >>>> >>>> I am trying to solve a Poisson equation several times with random right-hand side vectors in order to do parallel scalability tests. >>>> Here is part of the code that I use to generate a random vector: >>>> >>>> >>>> PetscRandom :: rctx >>>> >>>> ... >>>> >>>> do I = 1,n >>>> >>>> >>>> call PetscRandomCreate(PETSC_COMM_WORLD,rctx,ierr); CHKERRQ(ierr) >>>> call VecSetRandom(f,rctx,ierr); CHKERRQ(ierr) >>>> call PetscRandomDestroy(rctx,ierr); CHKERRQ(ierr) >>>> >>>> call VecView(f,PETSC_VIEWER_STDOUT_WORLD,ierr); CHKERRQ(ierr) >>>> >>>> call Poisson >>>> >>>> end do >>>> >>>> >>>> It appears that f does not change during the execution of the do-loop. 
In fact its value is even always the same for I=1 when I run the code several times. Apparently I am missing something. Can anybody help? >>>> >>>> Regards, >>>> >>>> >>>> Benjamin >>> >> > -- Ir. B. Sanderse Centrum Wiskunde en Informatica Science Park 123 1098 XG Amsterdam t: +31 20 592 4161 e: sanderse at cwi.nl From cjm2176 at columbia.edu Mon Jul 9 14:23:01 2012 From: cjm2176 at columbia.edu (Colin McAuliffe) Date: Mon, 09 Jul 2012 15:23:01 -0400 Subject: [petsc-users] Scaling GMRES residual In-Reply-To: References: <20120626183128.c88up6ytgk8gowso@cubmail.cc.columbia.edu> Message-ID: <20120709152301.jcd2flsds8ckkws8@cubmail.cc.columbia.edu> Ok thanks, I can see how introducing units to the residual is necessary, but when would it be necessary to introduce units to the state? I would imagine this would only be needed if a numerically differentiated jacobian was being used. Thanks Colin Quoting Jed Brown : > On Tue, Jun 26, 2012 at 2:31 PM, Colin McAuliffe wrote: > >> Hello, >> >> I would like to use GMRES for a monolithic solution of a multiphysics >> problem. There are drastically different units in the problem and so the >> residual norm is dominated by the terms with the largest units. Is there a >> way to tell petsc to scale the residual before taking the norm to avoid >> this? >> > > You can "fix" it with preconditioning, but then you can only use the > preconditioned residual. You should really fix the units when formulating > the problem. (This does not require formal nondimensionalization, just > introduced units so that state and residuals are of order 1.) > -- Colin McAuliffe PhD Candidate Columbia University Department of Civil Engineering and Engineering Mechanics From john.mousel at gmail.com Mon Jul 9 14:41:31 2012 From: john.mousel at gmail.com (John Mousel) Date: Mon, 9 Jul 2012 14:41:31 -0500 Subject: [petsc-users] MPICH error in KSPSolve In-Reply-To: <8445905D-EC09-4557-B9D0-3FE2100AFA2B@columbia.edu> References: <571F44E9-941A-46C4-8109-ED97943206B5@columbia.edu> <8445905D-EC09-4557-B9D0-3FE2100AFA2B@columbia.edu> Message-ID: Can you clarify what you mean by null-space cleaning. I just run SOR on the coarse grid. On Mon, Jul 9, 2012 at 11:52 AM, Mark F. Adams wrote: > > On Jul 9, 2012, at 12:39 PM, John Mousel wrote: > > Mark, > > The problem is indeed non-symmetric. We went back and forth in March about > this problem. I think we ended up concluding that the coarse size couldn't > get too small or the null-space presented problems. > > > Oh its singular. I forget what the issues were but an iterative coarse > grid solver should be fine for singular problems, perhaps with null space > cleaning if the kernel is sneaking in. Actually there is an SVD coarse > grid solver: > > -mg_coarse_pc_type svd > > That is the most robust. > > When I did get it to work, I tried to scale it up, and on my local > university cluster, it seemed to just hang when the core counts got above > something like 16 cores. I don't really trust that machine though. > > > That's the machine. GAMG does have some issues but I've not seen it hang. > > It's new and has been plagued by hardware incompatability issues since day > 1. I could re-examine this on Kraken. Also, what option are you talking > about with ML. I thought I had tried all the -pc_ml_CoarsenScheme options, > but I could be wrong. > > > This sounds like the right one. 
I try to be careful in my solvers to be > invariant to subdomain shapes and sizes and I think Ray Tuminaro (ML > developer) at least has options that should be careful about this also. > But I don't know much about what they are deploying these days. > > Mark > > > John > > > > On Mon, Jul 9, 2012 at 11:30 AM, Mark F. Adams wrote: > >> What problems are you having again with GAMG? Are you problems >> unsymmetric? >> >> ML has several coarsening strategies available and I think the default >> does aggregation locally and does not aggregate across processor >> subdomains. If you have poorly shaped domains then you want to use a >> global coarsening method (these are not expensive). >> >> Mark >> >> On Jul 9, 2012, at 12:17 PM, John Mousel wrote: >> >> Mark, >> >> I still haven't had much luck getting GAMG to work consistently for my >> Poisson problem. ML seems to work nicely on low core counts, but I have a >> problem where I can get long thin portions of grid on some processors >> instead of nice block like chunks at high core counts, which leads to a >> pretty tough time for ML. >> >> John >> >> On Mon, Jul 9, 2012 at 10:58 AM, John Mousel wrote: >> >>> Getting rid of the Hypre option seemed to be the trick. >>> >>> On Mon, Jul 9, 2012 at 10:40 AM, Mark F. Adams wrote: >>> >>>> Google PTL_NO_SPACE and you will find some NERSC presentations on how >>>> to go about fixing this. (I have run into these problems years ago but >>>> forget the issues) >>>> >>>> Also, I would try running with a Jacobi solver to see if that fixes the >>>> problem. If so then you might try >>>> >>>> -pc_type gamg >>>> -pc_gamg_agg_nsmooths 1 >>>> -pc_gamg_type agg >>>> >>>> This is a built in AMG solver so perhaps it plays nicer with resources >>>> ... >>>> >>>> Mark >>>> >>>> On Jul 9, 2012, at 10:57 AM, John Mousel wrote: >>>> >>>> > I'm running on Kraken and am currently working with 4320 cores. I get >>>> the following error in KSPSolve. >>>> > >>>> > [2711]: >>>> (/ptmp/ulib/mpt/nightly/5.3/120211/mpich2/src/mpid/cray/src/adi/ptldev.c:2046) >>>> PtlMEInsert failed with error : PTL_NO_SPACE >>>> > MHV_exe: >>>> /ptmp/ulib/mpt/nightly/5.3/120211/mpich2/src/mpid/cray/src/adi/ptldev.c:2046: >>>> MPIDI_CRAY_ptldev_desc_pkt: Assertion `0' failed. 
>>>> > forrtl: error (76): Abort trap signal >>>> > Image PC Routine Line >>>> Source >>>> > MHV_exe 00000000014758CB Unknown Unknown >>>> Unknown >>>> > MHV_exe 000000000182ED43 Unknown Unknown >>>> Unknown >>>> > MHV_exe 0000000001829460 Unknown Unknown >>>> Unknown >>>> > MHV_exe 00000000017EDE3E Unknown Unknown >>>> Unknown >>>> > MHV_exe 00000000017B3FE6 Unknown Unknown >>>> Unknown >>>> > MHV_exe 00000000017B3738 Unknown Unknown >>>> Unknown >>>> > MHV_exe 00000000017B2B12 Unknown Unknown >>>> Unknown >>>> > MHV_exe 00000000017B428F Unknown Unknown >>>> Unknown >>>> > MHV_exe 000000000177FCE1 Unknown Unknown >>>> Unknown >>>> > MHV_exe 0000000001590A43 Unknown Unknown >>>> Unknown >>>> > MHV_exe 00000000014F909B Unknown Unknown >>>> Unknown >>>> > MHV_exe 00000000014FF53B Unknown Unknown >>>> Unknown >>>> > MHV_exe 00000000014A4E25 Unknown Unknown >>>> Unknown >>>> > MHV_exe 0000000001487D57 Unknown Unknown >>>> Unknown >>>> > MHV_exe 000000000147F726 Unknown Unknown >>>> Unknown >>>> > MHV_exe 000000000137A8D3 Unknown Unknown >>>> Unknown >>>> > MHV_exe 0000000000E97BF2 Unknown Unknown >>>> Unknown >>>> > MHV_exe 000000000098EAF1 Unknown Unknown >>>> Unknown >>>> > MHV_exe 0000000000989C20 Unknown Unknown >>>> Unknown >>>> > MHV_exe 000000000097A9C2 Unknown Unknown >>>> Unknown >>>> > MHV_exe 000000000082FF2D axbsolve_ 539 >>>> PetscObjectsOperations.F90 >>>> > >>>> > This is somewhere in KSPSolve. Is there an MPICH environment variable >>>> that needs tweaking? I couldn't really find much on this particular error. >>>> > The solver is BiCGStab with Hypre as a preconditioner. >>>> > >>>> > -ksp_type bcgsl -pc_type hypre -pc_hypre_type boomeramg -ksp_monitor >>>> > >>>> > Thanks, >>>> > >>>> > John >>>> >>>> >>> >> >> > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From jedbrown at mcs.anl.gov Mon Jul 9 14:53:14 2012 From: jedbrown at mcs.anl.gov (Jed Brown) Date: Mon, 9 Jul 2012 11:53:14 -0800 Subject: [petsc-users] Scaling GMRES residual In-Reply-To: <20120709152301.jcd2flsds8ckkws8@cubmail.cc.columbia.edu> References: <20120626183128.c88up6ytgk8gowso@cubmail.cc.columbia.edu> <20120709152301.jcd2flsds8ckkws8@cubmail.cc.columbia.edu> Message-ID: On Mon, Jul 9, 2012 at 11:23 AM, Colin McAuliffe wrote: > Ok thanks, I can see how introducing units to the residual is necessary, > but when would it be necessary to introduce units to the state? I would > imagine this would only be needed if a numerically differentiated jacobian > was being used. > It depends if you care about symmetry and if you want the step tolerance to be meaningful. It can also affect scaling of boundary conditions. > > Thanks > Colin > > > Quoting Jed Brown : > > On Tue, Jun 26, 2012 at 2:31 PM, Colin McAuliffe > >wrote: >> >> Hello, >>> >>> I would like to use GMRES for a monolithic solution of a multiphysics >>> problem. There are drastically different units in the problem and so the >>> residual norm is dominated by the terms with the largest units. Is there >>> a >>> way to tell petsc to scale the residual before taking the norm to avoid >>> this? >>> >>> >> You can "fix" it with preconditioning, but then you can only use the >> preconditioned residual. You should really fix the units when formulating >> the problem. (This does not require formal nondimensionalization, just >> introduced units so that state and residuals are of order 1.) 
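Spelled out with made-up scales (none of the symbols below come from the thread), the suggestion reads: if the state holds, say, a pressure p and a temperature T with characteristic sizes p_0 and T_0, solve for the rescaled quantities

    \tilde{p} = p / p_0, \qquad \tilde{T} = T / T_0, \qquad \tilde{F}_i(\tilde{u}) = F_i(u) / F_{0,i},

where F_{0,i} is a typical magnitude of the i-th residual block. With the scales chosen so that all entries are O(1), the unpreconditioned residual norm weights the blocks evenly and the GMRES stopping test is meaningful without any extra scaling inside the solver.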
>> >> > > > -- > Colin McAuliffe > PhD Candidate > Columbia University > Department of Civil Engineering and Engineering Mechanics > -------------- next part -------------- An HTML attachment was scrubbed... URL: From john.mousel at gmail.com Mon Jul 9 15:31:26 2012 From: john.mousel at gmail.com (John Mousel) Date: Mon, 9 Jul 2012 15:31:26 -0500 Subject: [petsc-users] MPICH error in KSPSolve In-Reply-To: References: <571F44E9-941A-46C4-8109-ED97943206B5@columbia.edu> <8445905D-EC09-4557-B9D0-3FE2100AFA2B@columbia.edu> Message-ID: Mark, I just tried the following options on Kraken on 1200 cores: -pres_ksp_type bcgsl -pres_pc_type gamg -pres_pc_gamg_type agg -pres_pc_gamg_agg_nsmooths 1 -pres_pc_gamg_threshold 0.05 -pres_mg_levels_ksp_type richardson -pres_mg_levels_pc_type sor -pres_mg_coarse_ksp_typ e richardson -pres_mg_coarse_pc_type sor -pres_mg_coarse_pc_sor_its 4 It hung at [0]PCSetData_AGG bs=1 MM=10672 for nearly 15 minutes. I take it this is not normal. John On Mon, Jul 9, 2012 at 2:41 PM, John Mousel wrote: > Can you clarify what you mean by null-space cleaning. I just run SOR on > the coarse grid. > > > > > On Mon, Jul 9, 2012 at 11:52 AM, Mark F. Adams wrote: > >> >> On Jul 9, 2012, at 12:39 PM, John Mousel wrote: >> >> Mark, >> >> The problem is indeed non-symmetric. We went back and forth in March >> about this problem. I think we ended up concluding that the coarse size >> couldn't get too small or the null-space presented problems. >> >> >> Oh its singular. I forget what the issues were but an iterative coarse >> grid solver should be fine for singular problems, perhaps with null space >> cleaning if the kernel is sneaking in. Actually there is an SVD coarse >> grid solver: >> >> -mg_coarse_pc_type svd >> >> That is the most robust. >> >> When I did get it to work, I tried to scale it up, and on my local >> university cluster, it seemed to just hang when the core counts got above >> something like 16 cores. I don't really trust that machine though. >> >> >> That's the machine. GAMG does have some issues but I've not seen it hang. >> >> It's new and has been plagued by hardware incompatability issues since >> day 1. I could re-examine this on Kraken. Also, what option are you talking >> about with ML. I thought I had tried all the -pc_ml_CoarsenScheme options, >> but I could be wrong. >> >> >> This sounds like the right one. I try to be careful in my solvers to be >> invariant to subdomain shapes and sizes and I think Ray Tuminaro (ML >> developer) at least has options that should be careful about this also. >> But I don't know much about what they are deploying these days. >> >> Mark >> >> >> John >> >> >> >> On Mon, Jul 9, 2012 at 11:30 AM, Mark F. Adams wrote: >> >>> What problems are you having again with GAMG? Are you problems >>> unsymmetric? >>> >>> ML has several coarsening strategies available and I think the default >>> does aggregation locally and does not aggregate across processor >>> subdomains. If you have poorly shaped domains then you want to use a >>> global coarsening method (these are not expensive). >>> >>> Mark >>> >>> On Jul 9, 2012, at 12:17 PM, John Mousel wrote: >>> >>> Mark, >>> >>> I still haven't had much luck getting GAMG to work consistently for my >>> Poisson problem. ML seems to work nicely on low core counts, but I have a >>> problem where I can get long thin portions of grid on some processors >>> instead of nice block like chunks at high core counts, which leads to a >>> pretty tough time for ML. 
>>> >>> John >>> >>> On Mon, Jul 9, 2012 at 10:58 AM, John Mousel wrote: >>> >>>> Getting rid of the Hypre option seemed to be the trick. >>>> >>>> On Mon, Jul 9, 2012 at 10:40 AM, Mark F. Adams >>> > wrote: >>>> >>>>> Google PTL_NO_SPACE and you will find some NERSC presentations on how >>>>> to go about fixing this. (I have run into these problems years ago but >>>>> forget the issues) >>>>> >>>>> Also, I would try running with a Jacobi solver to see if that fixes >>>>> the problem. If so then you might try >>>>> >>>>> -pc_type gamg >>>>> -pc_gamg_agg_nsmooths 1 >>>>> -pc_gamg_type agg >>>>> >>>>> This is a built in AMG solver so perhaps it plays nicer with resources >>>>> ... >>>>> >>>>> Mark >>>>> >>>>> On Jul 9, 2012, at 10:57 AM, John Mousel wrote: >>>>> >>>>> > I'm running on Kraken and am currently working with 4320 cores. I >>>>> get the following error in KSPSolve. >>>>> > >>>>> > [2711]: >>>>> (/ptmp/ulib/mpt/nightly/5.3/120211/mpich2/src/mpid/cray/src/adi/ptldev.c:2046) >>>>> PtlMEInsert failed with error : PTL_NO_SPACE >>>>> > MHV_exe: >>>>> /ptmp/ulib/mpt/nightly/5.3/120211/mpich2/src/mpid/cray/src/adi/ptldev.c:2046: >>>>> MPIDI_CRAY_ptldev_desc_pkt: Assertion `0' failed. >>>>> > forrtl: error (76): Abort trap signal >>>>> > Image PC Routine Line >>>>> Source >>>>> > MHV_exe 00000000014758CB Unknown Unknown >>>>> Unknown >>>>> > MHV_exe 000000000182ED43 Unknown Unknown >>>>> Unknown >>>>> > MHV_exe 0000000001829460 Unknown Unknown >>>>> Unknown >>>>> > MHV_exe 00000000017EDE3E Unknown Unknown >>>>> Unknown >>>>> > MHV_exe 00000000017B3FE6 Unknown Unknown >>>>> Unknown >>>>> > MHV_exe 00000000017B3738 Unknown Unknown >>>>> Unknown >>>>> > MHV_exe 00000000017B2B12 Unknown Unknown >>>>> Unknown >>>>> > MHV_exe 00000000017B428F Unknown Unknown >>>>> Unknown >>>>> > MHV_exe 000000000177FCE1 Unknown Unknown >>>>> Unknown >>>>> > MHV_exe 0000000001590A43 Unknown Unknown >>>>> Unknown >>>>> > MHV_exe 00000000014F909B Unknown Unknown >>>>> Unknown >>>>> > MHV_exe 00000000014FF53B Unknown Unknown >>>>> Unknown >>>>> > MHV_exe 00000000014A4E25 Unknown Unknown >>>>> Unknown >>>>> > MHV_exe 0000000001487D57 Unknown Unknown >>>>> Unknown >>>>> > MHV_exe 000000000147F726 Unknown Unknown >>>>> Unknown >>>>> > MHV_exe 000000000137A8D3 Unknown Unknown >>>>> Unknown >>>>> > MHV_exe 0000000000E97BF2 Unknown Unknown >>>>> Unknown >>>>> > MHV_exe 000000000098EAF1 Unknown Unknown >>>>> Unknown >>>>> > MHV_exe 0000000000989C20 Unknown Unknown >>>>> Unknown >>>>> > MHV_exe 000000000097A9C2 Unknown Unknown >>>>> Unknown >>>>> > MHV_exe 000000000082FF2D axbsolve_ 539 >>>>> PetscObjectsOperations.F90 >>>>> > >>>>> > This is somewhere in KSPSolve. Is there an MPICH environment >>>>> variable that needs tweaking? I couldn't really find much on this >>>>> particular error. >>>>> > The solver is BiCGStab with Hypre as a preconditioner. >>>>> > >>>>> > -ksp_type bcgsl -pc_type hypre -pc_hypre_type boomeramg -ksp_monitor >>>>> > >>>>> > Thanks, >>>>> > >>>>> > John >>>>> >>>>> >>>> >>> >>> >> >> > -------------- next part -------------- An HTML attachment was scrubbed... URL: From jedbrown at mcs.anl.gov Mon Jul 9 17:35:49 2012 From: jedbrown at mcs.anl.gov (Jed Brown) Date: Mon, 9 Jul 2012 17:35:49 -0500 Subject: [petsc-users] Outputting Petsc data to Visit In-Reply-To: References: Message-ID: On Mon, Jul 9, 2012 at 2:11 PM, Jeff Wiens wrote: > I am storing a multidimensional solution to a pde in a petsc vector > using a distributed array. 
I would like to output this vector at > different time-steps so that it can visualized in VisIt. I want to > know the recommended way of doing this in petsc 3.2-p7. > Please upgrade to petsc-3.3, then use VecView() and PETSCVIEWERVTK. The file name should end in .vts. > > I assumed that the HDF5 viewer would be the most appropriate option. > HDF5 is like a file system. You can put anything there with any structure at all, therefore packages like VisIt can't read it without extra descriptive files (e.g. XDMF) or a custom viewer. > However, I am running into several obstacles when using it: > > - I can't get > PetscViewerHDF5SetTimestep/PetscViewerHDF5IncrementTimestep/PetscViewerHDF5PushGroup > to work. > We need symptoms. > - I can't find a way to include the spatial coordinates into the hdf5 > file (and for it to be used by visit). > Did you use DMDASetCoordinates? That is used by the VTK viewer. > - It is not immediate obvious to me how a vector field should be saved > so it is understood as a vector field in visit. > Make it a vector using the Expression editor in VisIt. > > First, I would like to know how other users are outputting petsc data > for VisIt. Secondly, for users using HDF5, has anyone run into similar > issues. > > Jeff > -------------- next part -------------- An HTML attachment was scrubbed... URL: From bsmith at mcs.anl.gov Mon Jul 9 18:06:19 2012 From: bsmith at mcs.anl.gov (Barry Smith) Date: Mon, 9 Jul 2012 18:06:19 -0500 Subject: [petsc-users] random vector In-Reply-To: References: <6d08ec25-33d8-46ff-8db6-9175a3271ff0@zembox02.zaas.igi.nl> <3AB21CB2-9098-4EFB-BE1C-FDFA7E81CE08@mcs.anl.gov> Message-ID: Benjamin, On Jul 9, 2012, at 2:13 PM, Benjamin Sanderse wrote: > With that call I get: > > /export/scratch1/sanderse/Programming/windfarm/svn.cwi.nl/branches/benjamin/navier-stokes/fortran/3D/src/time_RK.F:202: undefined reference to `petscrandomsettype_' Thanks. This is our fault. Satish, Please add the Fortran interface for petscrandomsettype_ by cloning one of the other set types etc. for petsc 3.3 Thanks Barry > > Fortunately, I tried the following. I added > > -random_type rand > > to the execution line, and used > > call PetscRandomSetFromOptions(rctx,ierr); CHKERRQ(ierr) > > That works fine. > > Benjamin > > > Op 9 jul 2012, om 19:57 heeft Barry Smith het volgende geschreven: > >> >> Did you call PetscRandomSetType(rctx,PETSCRAND,ierr); ? >> >> >> On Jul 9, 2012, at 10:21 AM, Benjamin Sanderse wrote: >> >>> So I updated to 3.3, but still get the same error: >>> >>> Any ideas? >>> >>> [0]PETSC ERROR: --------------------- Error Message ------------------------------------ >>> [0]PETSC ERROR: Object is in wrong state! >>> [0]PETSC ERROR: PetscRandom object's type is not set: Argument # 1! >>> [0]PETSC ERROR: ------------------------------------------------------------------------ >>> [0]PETSC ERROR: Petsc Release Version 3.3.0, Patch 1, Fri Jun 15 09:30:49 CDT 2012 >>> [0]PETSC ERROR: See docs/changes/index.html for recent updates. >>> [0]PETSC ERROR: See docs/faq.html for hints about trouble shooting. >>> [0]PETSC ERROR: See docs/index.html for manual pages. 
>>> [0]PETSC ERROR: ------------------------------------------------------------------------ >>> [0]PETSC ERROR: bin/navier-stokes on a arch-linu named slippy.sen.cwi.nl by sanderse Mon Jul 9 17:18:39 2012 >>> [0]PETSC ERROR: Libraries linked from /export/scratch1/sanderse/software/petsc-3.3-p1_debug/arch-linux2-c-opt/lib >>> [0]PETSC ERROR: Configure run at Mon Jul 9 16:58:35 2012 >>> [0]PETSC ERROR: Configure options --download-mpich=1 --with-shared-libraries --download-f-blas-lapack=1 --with-fc=gfortran --with-cxx=g++ --download-hypre --with-hdf5 --download-hdf5 --with-cc=gcc >>> [0]PETSC ERROR: ------------------------------------------------------------------------ >>> [0]PETSC ERROR: PetscRandomSeed() line 431 in /export/scratch1/sanderse/software/petsc-3.3-p1_debug/src/sys/random/interface/randomc.c >>> application called MPI_Abort(MPI_COMM_WORLD, 73) - process 0 >>> [cli_0]: aborting job: >>> application called MPI_Abort(MPI_COMM_WORLD, 73) - process 0 >>> >>> >>> ----- Original Message ----- >>> From: "Barry Smith" >>> To: "PETSc users list" >>> Sent: Monday, July 9, 2012 4:08:08 PM >>> Subject: Re: [petsc-users] random vector >>> >>> >>> Please update to using petsc-3.3 >>> >>> On Jul 9, 2012, at 9:03 AM, Benjamin Sanderse wrote: >>> >>>> Thanks a lot. I need to have the PetscRandomCreate inside the loop, >>> >>> Why? >>> >>>> so I will use the RandomSetSeed. >>>> However, when running the code below I get the following error. >>>> >>>> PetscRandom :: rctx >>>> >>>> call PetscRandomCreate(PETSC_COMM_WORLD,rctx,ierr); CHKERRQ(ierr) >>> >>> call PetscRandomSetType(rctx,PETSCRAND,ierr); >>> >>>> call PetscRandomSetSeed(rctx,I,ierr); CHKERRQ(ierr) >>>> call PetscRandomSeed(rctx,ierr); CHKERRQ(ierr) >>>> call VecSetRandom(f,rctx,ierr); CHKERRQ(ierr) >>>> call PetscRandomDestroy(rctx,ierr); CHKERRQ(ierr) >>>> >>>> In PetscRandomSetSeed, I is an integer. >>>> >>>> [0]PETSC ERROR: --------------------- Error Message ------------------------------------ >>>> [0]PETSC ERROR: Object is in wrong state! >>>> [0]PETSC ERROR: PetscRandom object's type is not set: Argument # 1! >>>> [0]PETSC ERROR: ------------------------------------------------------------------------ >>>> [0]PETSC ERROR: Petsc Release Version 3.2.0, Patch 6, Wed Jan 11 09:28:45 CST 2012 >>>> [0]PETSC ERROR: See docs/changes/index.html for recent updates. >>>> [0]PETSC ERROR: See docs/faq.html for hints about trouble shooting. >>>> [0]PETSC ERROR: See docs/index.html for manual pages. >>>> [0]PETSC ERROR: ------------------------------------------------------------------------ >>>> [0]PETSC ERROR: bin/navier-stokes on a arch-linu named slippy.sen.cwi.nl by sanderse Mon Jul 9 15:58:16 2012 >>>> [0]PETSC ERROR: Libraries linked from /export/scratch1/sanderse/software/petsc-3.2-p6-debug/arch-linux2-c-opt/lib >>>> [0]PETSC ERROR: Configure run at Wed Feb 22 18:04:02 2012 >>>> [0]PETSC ERROR: Configure options --download-mpich=1 --with-shared-libraries --download-f-blas-lapack=1 --with-fc=gfortran --with-cxx=g++ --download-hypre --with-hdf5 --download-hdf5 --with-cc=gcc >>>> [0]PETSC ERROR: ------------------------------------------------------------------------ >>>> [0]PETSC ERROR: PetscRandomSeed() line 431 in /export/scratch1/sanderse/software/petsc-3.2-p6-debug/src/sys/random/interface/randomc.c >>>> application called MPI_Abort(MPI_COMM_WORLD, 73) - process 0 >>>> [cli_0]: aborting job: >>>> application called MPI_Abort(MPI_COMM_WORLD, 73) - process 0 >>>> >>>> >>>> Maybe something has changed from 3.2p6 to 3.3? 
I do not see what is wrong with the PetscRandom object. >>>> >>>> Benjamin >>>> >>>> >>>> ----- Original Message ----- >>>> From: "Barry Smith" >>>> To: "PETSc users list" >>>> Sent: Monday, July 9, 2012 3:51:18 PM >>>> Subject: Re: [petsc-users] random vector >>>> >>>> >>>> Create and destroy the random context OUTSIDE of the loop. Each time you create it it is using the same seed hence giving the same values. >>>> >>>> Note that it is also intentional that if you run the code twice you get the same values each time you run it to help write and debug codes. If you want different values each time you run it you need to call PetscRandomSetSeed() then PetscRandomSeed() after creating the context >>>> >>>> >>>> Barry >>>> >>>> On Jul 9, 2012, at 7:53 AM, Benjamin Sanderse wrote: >>>> >>>>> Hello all, >>>>> >>>>> I am trying to solve a Poisson equation several times with random right-hand side vectors in order to do parallel scalability tests. >>>>> Here is part of the code that I use to generate a random vector: >>>>> >>>>> >>>>> PetscRandom :: rctx >>>>> >>>>> ... >>>>> >>>>> do I = 1,n >>>>> >>>>> >>>>> call PetscRandomCreate(PETSC_COMM_WORLD,rctx,ierr); CHKERRQ(ierr) >>>>> call VecSetRandom(f,rctx,ierr); CHKERRQ(ierr) >>>>> call PetscRandomDestroy(rctx,ierr); CHKERRQ(ierr) >>>>> >>>>> call VecView(f,PETSC_VIEWER_STDOUT_WORLD,ierr); CHKERRQ(ierr) >>>>> >>>>> call Poisson >>>>> >>>>> end do >>>>> >>>>> >>>>> It appears that f does not change during the execution of the do-loop. In fact its value is even always the same for I=1 when I run the code several times. Apparently I am missing something. Can anybody help? >>>>> >>>>> Regards, >>>>> >>>>> >>>>> Benjamin >>>> >>> >> > > -- > Ir. B. Sanderse > > Centrum Wiskunde en Informatica > Science Park 123 > 1098 XG Amsterdam > > t: +31 20 592 4161 > e: sanderse at cwi.nl > From mark.adams at columbia.edu Mon Jul 9 18:42:58 2012 From: mark.adams at columbia.edu (Mark F. Adams) Date: Mon, 9 Jul 2012 19:42:58 -0400 Subject: [petsc-users] MPICH error in KSPSolve In-Reply-To: References: <571F44E9-941A-46C4-8109-ED97943206B5@columbia.edu> <8445905D-EC09-4557-B9D0-3FE2100AFA2B@columbia.edu> Message-ID: On Jul 9, 2012, at 4:31 PM, John Mousel wrote: > Mark, > > I just tried the following options on Kraken on 1200 cores: > > -pres_ksp_type bcgsl -pres_pc_type gamg -pres_pc_gamg_type agg -pres_pc_gamg_agg_nsmooths 1 -pres_pc_gamg_threshold 0.05 -pres_mg_levels_ksp_type richardson -pres_mg_levels_pc_type sor -pres_mg_coarse_ksp_typ > e richardson -pres_mg_coarse_pc_type sor -pres_mg_coarse_pc_sor_its 4 > > It hung at > > [0]PCSetData_AGG bs=1 MM=10672 Humm, I don't see that print statement in my code anymore. What version of PETSc are you using? This is/was at the very beginning of the code. Are you using '-pc_gamg_sym_graph true'? This has been the default in some versions so if you do not have this it does not mean you are not using it. You should use this parameter. My experience is the code give an internal error, and exits gracefully, if this is wrong but it could manifest itself as a hang (the parallel graph algorithms get confused if they do not have a symmetric graph) Are your problems structurally unsymetric? If not then you can use '-pc_gamg_threshold -1.' and it should work even without '-pc_gamg_sym_graph true'. Unfortunately this uses MatTranspose which is very slow for some reason. I saw it take about 1 minute with 8,000 vertices per processor on a Cray XE6 recently. 
So if you are running with very large processor subdomains this could explain the 15 minutes. We need to fix this soon. Mark > > for nearly 15 minutes. I take it this is not normal. > > John > > > > On Mon, Jul 9, 2012 at 2:41 PM, John Mousel wrote: > Can you clarify what you mean by null-space cleaning. I just run SOR on the coarse grid. > > > > > On Mon, Jul 9, 2012 at 11:52 AM, Mark F. Adams wrote: > > On Jul 9, 2012, at 12:39 PM, John Mousel wrote: > >> Mark, >> >> The problem is indeed non-symmetric. We went back and forth in March about this problem. I think we ended up concluding that the coarse size couldn't get too small or the null-space presented problems. > > Oh its singular. I forget what the issues were but an iterative coarse grid solver should be fine for singular problems, perhaps with null space cleaning if the kernel is sneaking in. Actually there is an SVD coarse grid solver: > > -mg_coarse_pc_type svd > > That is the most robust. > >> When I did get it to work, I tried to scale it up, and on my local university cluster, it seemed to just hang when the core counts got above something like 16 cores. I don't really trust that machine though. > > That's the machine. GAMG does have some issues but I've not seen it hang. > >> It's new and has been plagued by hardware incompatability issues since day 1. I could re-examine this on Kraken. Also, what option are you talking about with ML. I thought I had tried all the -pc_ml_CoarsenScheme options, but I could be wrong. > > This sounds like the right one. I try to be careful in my solvers to be invariant to subdomain shapes and sizes and I think Ray Tuminaro (ML developer) at least has options that should be careful about this also. But I don't know much about what they are deploying these days. > > Mark > >> >> John >> >> >> >> On Mon, Jul 9, 2012 at 11:30 AM, Mark F. Adams wrote: >> What problems are you having again with GAMG? Are you problems unsymmetric? >> >> ML has several coarsening strategies available and I think the default does aggregation locally and does not aggregate across processor subdomains. If you have poorly shaped domains then you want to use a global coarsening method (these are not expensive). >> >> Mark >> >> On Jul 9, 2012, at 12:17 PM, John Mousel wrote: >> >>> Mark, >>> >>> I still haven't had much luck getting GAMG to work consistently for my Poisson problem. ML seems to work nicely on low core counts, but I have a problem where I can get long thin portions of grid on some processors instead of nice block like chunks at high core counts, which leads to a pretty tough time for ML. >>> >>> John >>> >>> On Mon, Jul 9, 2012 at 10:58 AM, John Mousel wrote: >>> Getting rid of the Hypre option seemed to be the trick. >>> >>> On Mon, Jul 9, 2012 at 10:40 AM, Mark F. Adams wrote: >>> Google PTL_NO_SPACE and you will find some NERSC presentations on how to go about fixing this. (I have run into these problems years ago but forget the issues) >>> >>> Also, I would try running with a Jacobi solver to see if that fixes the problem. If so then you might try >>> >>> -pc_type gamg >>> -pc_gamg_agg_nsmooths 1 >>> -pc_gamg_type agg >>> >>> This is a built in AMG solver so perhaps it plays nicer with resources ... >>> >>> Mark >>> >>> On Jul 9, 2012, at 10:57 AM, John Mousel wrote: >>> >>> > I'm running on Kraken and am currently working with 4320 cores. I get the following error in KSPSolve. 
>>> > >>> > [2711]: (/ptmp/ulib/mpt/nightly/5.3/120211/mpich2/src/mpid/cray/src/adi/ptldev.c:2046) PtlMEInsert failed with error : PTL_NO_SPACE >>> > MHV_exe: /ptmp/ulib/mpt/nightly/5.3/120211/mpich2/src/mpid/cray/src/adi/ptldev.c:2046: MPIDI_CRAY_ptldev_desc_pkt: Assertion `0' failed. >>> > forrtl: error (76): Abort trap signal >>> > Image PC Routine Line Source >>> > MHV_exe 00000000014758CB Unknown Unknown Unknown >>> > MHV_exe 000000000182ED43 Unknown Unknown Unknown >>> > MHV_exe 0000000001829460 Unknown Unknown Unknown >>> > MHV_exe 00000000017EDE3E Unknown Unknown Unknown >>> > MHV_exe 00000000017B3FE6 Unknown Unknown Unknown >>> > MHV_exe 00000000017B3738 Unknown Unknown Unknown >>> > MHV_exe 00000000017B2B12 Unknown Unknown Unknown >>> > MHV_exe 00000000017B428F Unknown Unknown Unknown >>> > MHV_exe 000000000177FCE1 Unknown Unknown Unknown >>> > MHV_exe 0000000001590A43 Unknown Unknown Unknown >>> > MHV_exe 00000000014F909B Unknown Unknown Unknown >>> > MHV_exe 00000000014FF53B Unknown Unknown Unknown >>> > MHV_exe 00000000014A4E25 Unknown Unknown Unknown >>> > MHV_exe 0000000001487D57 Unknown Unknown Unknown >>> > MHV_exe 000000000147F726 Unknown Unknown Unknown >>> > MHV_exe 000000000137A8D3 Unknown Unknown Unknown >>> > MHV_exe 0000000000E97BF2 Unknown Unknown Unknown >>> > MHV_exe 000000000098EAF1 Unknown Unknown Unknown >>> > MHV_exe 0000000000989C20 Unknown Unknown Unknown >>> > MHV_exe 000000000097A9C2 Unknown Unknown Unknown >>> > MHV_exe 000000000082FF2D axbsolve_ 539 PetscObjectsOperations.F90 >>> > >>> > This is somewhere in KSPSolve. Is there an MPICH environment variable that needs tweaking? I couldn't really find much on this particular error. >>> > The solver is BiCGStab with Hypre as a preconditioner. >>> > >>> > -ksp_type bcgsl -pc_type hypre -pc_hypre_type boomeramg -ksp_monitor >>> > >>> > Thanks, >>> > >>> > John >>> >>> >>> >> >> > > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From zonexo at gmail.com Tue Jul 10 05:39:41 2012 From: zonexo at gmail.com (TAY wee-beng) Date: Tue, 10 Jul 2012 12:39:41 +0200 Subject: [petsc-users] Declaring struct to represent field for dof > 1 for DM in Fortran Message-ID: <4FFC066D.7060307@gmail.com> Hi, I read in the manual in page 50 that it's recommended to declare struct to represent field for dof > 1 for DM. I'm using Fortran and for testing, I use dof = 1 and write as: /type field //PetscScalar//u (or real(8) :: u) end type field type(field), pointer :: field_u(:,:)/ When I tried to use : /call DMDAVecGetArrayF90(da,x_local,field_u,ierr)/ I got the error : There is no matching specific subroutine for this generic subroutine call. [DMDAVECGETARRAYF90] The da, x_local has been defined with the specific DM routines. It worked if I use : /PetscScalar,pointer :: array(:,:) and call DMDAVecGetArrayF90(da,x_local,array,ierr)/ May I know what did I do wrong? -- Yours sincerely, TAY wee-beng -------------- next part -------------- An HTML attachment was scrubbed... URL: From knepley at gmail.com Tue Jul 10 07:07:57 2012 From: knepley at gmail.com (Matthew Knepley) Date: Tue, 10 Jul 2012 07:07:57 -0500 Subject: [petsc-users] Declaring struct to represent field for dof > 1 for DM in Fortran In-Reply-To: <4FFC066D.7060307@gmail.com> References: <4FFC066D.7060307@gmail.com> Message-ID: On Tue, Jul 10, 2012 at 5:39 AM, TAY wee-beng wrote: > Hi, > > I read in the manual in page 50 that it's recommended to declare struct to > represent field for dof > 1 for DM. > We mean C struct. 
C makes it easy (just use a pointer type cast). Fortran makes it hard unfortunately. Matt > I'm using Fortran and for testing, I use dof = 1 and write as: > > *type field > > **PetscScalar** u (or real(8) :: u) > > end type field > > type(field), pointer :: field_u(:,:)* > > When I tried to use : > > *call DMDAVecGetArrayF90(da,x_local,field_u,ierr)* > > I got the error : There is no matching specific subroutine for this > generic subroutine call. [DMDAVECGETARRAYF90] > > The da, x_local has been defined with the specific DM routines. It worked > if I use : > > *PetscScalar,pointer :: array(:,:) and > > call DMDAVecGetArrayF90(da,x_local,array,ierr)* > > May I know what did I do wrong? > > > -- > Yours sincerely, > > TAY wee-beng > > -- What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. -- Norbert Wiener -------------- next part -------------- An HTML attachment was scrubbed... URL: From B.Sanderse at cwi.nl Tue Jul 10 08:22:01 2012 From: B.Sanderse at cwi.nl (Benjamin Sanderse) Date: Tue, 10 Jul 2012 15:22:01 +0200 Subject: [petsc-users] Poisson equation with CG and BoomerAMG Message-ID: <690DB264-41D4-443D-86EE-28509BA719BB@cwi.nl> Hello all, I am solving a Poisson equation with Neumann BC on a structured grid (arising from an incompressible Navier-Stokes problem). Although my mesh is structured, the matrix is 'given' so I am using AMG instead of geometric multigrid, for the moment. To solve the Poisson equation I use CG with preconditioning provided by BoomerAMG, using standard options. I have run my problem for different grid sizes and number of processors, but I am confused regarding the parallel scalability. Attached are some timing results that give the average time spent on solving the Poisson equation. As you can see, when going from 1 to 2 processors, the scaling is very good, even for the case of 200^3 grid points (8 million). For larger number of processors this quickly deteriorates. The cluster I am running on has 8 cores per node and 24GB memory per node. Can someone comment on these results? Is this what I should expect? Some additional information: - I set the NullSpace of the matrix explicitly with MatNullSpaceCreate - The set-up of the problem is not included in the timing results. The set-up is not efficient yet (use of MatRow, for example), and there is some code cleanup to do (too many matrices and vectors), but I think this should not affect the performance of the Poisson solve. - log_summary and ksp_monitor are attached. Thanks a lot, Benjamin -------------- next part -------------- A non-text attachment was scrubbed... Name: log_summary Type: application/octet-stream Size: 10868 bytes Desc: not available URL: -------------- next part -------------- -------------- next part -------------- A non-text attachment was scrubbed... Name: timing Type: application/octet-stream Size: 170 bytes Desc: not available URL: -------------- next part -------------- -------------- next part -------------- A non-text attachment was scrubbed... Name: ksp_view Type: application/octet-stream Size: 1777 bytes Desc: not available URL: -------------- next part -------------- -- Ir. B. 
Sanderse Centrum Wiskunde en Informatica Science Park 123 1098 XG Amsterdam t: +31 20 592 4161 e: sanderse at cwi.nl From bsmith at mcs.anl.gov Tue Jul 10 08:30:14 2012 From: bsmith at mcs.anl.gov (Barry Smith) Date: Tue, 10 Jul 2012 08:30:14 -0500 Subject: [petsc-users] Poisson equation with CG and BoomerAMG In-Reply-To: <690DB264-41D4-443D-86EE-28509BA719BB@cwi.nl> References: <690DB264-41D4-443D-86EE-28509BA719BB@cwi.nl> Message-ID: http://www.mcs.anl.gov/petsc/documentation/faq.html#computers likely you will not benefit from using more than 4 or at most 6 cores at of the 8 for each node. This is a memory hardware limitation. Barry On Jul 10, 2012, at 8:22 AM, Benjamin Sanderse wrote: > Hello all, > > I am solving a Poisson equation with Neumann BC on a structured grid (arising from an incompressible Navier-Stokes problem). Although my mesh is structured, the matrix is 'given' so I am using AMG instead of geometric multigrid, for the moment. > To solve the Poisson equation I use CG with preconditioning provided by BoomerAMG, using standard options. I have run my problem for different grid sizes and number of processors, but I am confused regarding the parallel scalability. Attached are some timing results that give the average time spent on solving the Poisson equation. As you can see, when going from 1 to 2 processors, the scaling is very good, even for the case of 200^3 grid points (8 million). For larger number of processors this quickly deteriorates. The cluster I am running on has 8 cores per node and 24GB memory per node. > Can someone comment on these results? Is this what I should expect? > > Some additional information: > - I set the NullSpace of the matrix explicitly with MatNullSpaceCreate > - The set-up of the problem is not included in the timing results. The set-up is not efficient yet (use of MatRow, for example), and there is some code cleanup to do (too many matrices and vectors), but I think this should not affect the performance of the Poisson solve. > - log_summary and ksp_monitor are attached. > > Thanks a lot, > > Benjamin > > > > > > > > -- > Ir. B. Sanderse > > Centrum Wiskunde en Informatica > Science Park 123 > 1098 XG Amsterdam > > t: +31 20 592 4161 > e: sanderse at cwi.nl > From mark.adams at columbia.edu Tue Jul 10 08:31:05 2012 From: mark.adams at columbia.edu (Mark F. Adams) Date: Tue, 10 Jul 2012 09:31:05 -0400 Subject: [petsc-users] MPICH error in KSPSolve In-Reply-To: References: <571F44E9-941A-46C4-8109-ED97943206B5@columbia.edu> <8445905D-EC09-4557-B9D0-3FE2100AFA2B@columbia.edu> Message-ID: <8A302503-2DDA-4952-A003-BB2EDDF145FA@columbia.edu> On Jul 9, 2012, at 3:41 PM, John Mousel wrote: > Can you clarify what you mean by null-space cleaning. I just run SOR on the coarse grid. > http://www.mcs.anl.gov/petsc/petsc-current/docs/manualpages/KSP/KSPSetNullSpace.html#KSPSetNullSpace > > > On Mon, Jul 9, 2012 at 11:52 AM, Mark F. Adams wrote: > > On Jul 9, 2012, at 12:39 PM, John Mousel wrote: > >> Mark, >> >> The problem is indeed non-symmetric. We went back and forth in March about this problem. I think we ended up concluding that the coarse size couldn't get too small or the null-space presented problems. > > Oh its singular. I forget what the issues were but an iterative coarse grid solver should be fine for singular problems, perhaps with null space cleaning if the kernel is sneaking in. Actually there is an SVD coarse grid solver: > > -mg_coarse_pc_type svd > > That is the most robust. 
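Collecting the options suggested at different points in this thread into one place, an untested starting point for this singular, non-symmetric pressure Poisson problem might be (the -pres_ prefix is the one used in the runs above, and composed names such as -pres_pc_gamg_sym_graph assume the usual PETSc prefixing of the suboptions discussed earlier):

    -pres_ksp_type bcgsl
    -pres_pc_type gamg -pres_pc_gamg_type agg -pres_pc_gamg_agg_nsmooths 1
    -pres_pc_gamg_sym_graph true
    -pres_mg_levels_ksp_type richardson -pres_mg_levels_pc_type sor
    -pres_mg_coarse_pc_type svd

with the SVD coarse solve addressing the null space and -pres_pc_gamg_sym_graph true addressing the structurally unsymmetric graph.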
> >> When I did get it to work, I tried to scale it up, and on my local university cluster, it seemed to just hang when the core counts got above something like 16 cores. I don't really trust that machine though. > > That's the machine. GAMG does have some issues but I've not seen it hang. > >> It's new and has been plagued by hardware incompatability issues since day 1. I could re-examine this on Kraken. Also, what option are you talking about with ML. I thought I had tried all the -pc_ml_CoarsenScheme options, but I could be wrong. > > This sounds like the right one. I try to be careful in my solvers to be invariant to subdomain shapes and sizes and I think Ray Tuminaro (ML developer) at least has options that should be careful about this also. But I don't know much about what they are deploying these days. > > Mark > >> >> John >> >> >> >> On Mon, Jul 9, 2012 at 11:30 AM, Mark F. Adams wrote: >> What problems are you having again with GAMG? Are you problems unsymmetric? >> >> ML has several coarsening strategies available and I think the default does aggregation locally and does not aggregate across processor subdomains. If you have poorly shaped domains then you want to use a global coarsening method (these are not expensive). >> >> Mark >> >> On Jul 9, 2012, at 12:17 PM, John Mousel wrote: >> >>> Mark, >>> >>> I still haven't had much luck getting GAMG to work consistently for my Poisson problem. ML seems to work nicely on low core counts, but I have a problem where I can get long thin portions of grid on some processors instead of nice block like chunks at high core counts, which leads to a pretty tough time for ML. >>> >>> John >>> >>> On Mon, Jul 9, 2012 at 10:58 AM, John Mousel wrote: >>> Getting rid of the Hypre option seemed to be the trick. >>> >>> On Mon, Jul 9, 2012 at 10:40 AM, Mark F. Adams wrote: >>> Google PTL_NO_SPACE and you will find some NERSC presentations on how to go about fixing this. (I have run into these problems years ago but forget the issues) >>> >>> Also, I would try running with a Jacobi solver to see if that fixes the problem. If so then you might try >>> >>> -pc_type gamg >>> -pc_gamg_agg_nsmooths 1 >>> -pc_gamg_type agg >>> >>> This is a built in AMG solver so perhaps it plays nicer with resources ... >>> >>> Mark >>> >>> On Jul 9, 2012, at 10:57 AM, John Mousel wrote: >>> >>> > I'm running on Kraken and am currently working with 4320 cores. I get the following error in KSPSolve. >>> > >>> > [2711]: (/ptmp/ulib/mpt/nightly/5.3/120211/mpich2/src/mpid/cray/src/adi/ptldev.c:2046) PtlMEInsert failed with error : PTL_NO_SPACE >>> > MHV_exe: /ptmp/ulib/mpt/nightly/5.3/120211/mpich2/src/mpid/cray/src/adi/ptldev.c:2046: MPIDI_CRAY_ptldev_desc_pkt: Assertion `0' failed. 
>>> > forrtl: error (76): Abort trap signal >>> > Image PC Routine Line Source >>> > MHV_exe 00000000014758CB Unknown Unknown Unknown >>> > MHV_exe 000000000182ED43 Unknown Unknown Unknown >>> > MHV_exe 0000000001829460 Unknown Unknown Unknown >>> > MHV_exe 00000000017EDE3E Unknown Unknown Unknown >>> > MHV_exe 00000000017B3FE6 Unknown Unknown Unknown >>> > MHV_exe 00000000017B3738 Unknown Unknown Unknown >>> > MHV_exe 00000000017B2B12 Unknown Unknown Unknown >>> > MHV_exe 00000000017B428F Unknown Unknown Unknown >>> > MHV_exe 000000000177FCE1 Unknown Unknown Unknown >>> > MHV_exe 0000000001590A43 Unknown Unknown Unknown >>> > MHV_exe 00000000014F909B Unknown Unknown Unknown >>> > MHV_exe 00000000014FF53B Unknown Unknown Unknown >>> > MHV_exe 00000000014A4E25 Unknown Unknown Unknown >>> > MHV_exe 0000000001487D57 Unknown Unknown Unknown >>> > MHV_exe 000000000147F726 Unknown Unknown Unknown >>> > MHV_exe 000000000137A8D3 Unknown Unknown Unknown >>> > MHV_exe 0000000000E97BF2 Unknown Unknown Unknown >>> > MHV_exe 000000000098EAF1 Unknown Unknown Unknown >>> > MHV_exe 0000000000989C20 Unknown Unknown Unknown >>> > MHV_exe 000000000097A9C2 Unknown Unknown Unknown >>> > MHV_exe 000000000082FF2D axbsolve_ 539 PetscObjectsOperations.F90 >>> > >>> > This is somewhere in KSPSolve. Is there an MPICH environment variable that needs tweaking? I couldn't really find much on this particular error. >>> > The solver is BiCGStab with Hypre as a preconditioner. >>> > >>> > -ksp_type bcgsl -pc_type hypre -pc_hypre_type boomeramg -ksp_monitor >>> > >>> > Thanks, >>> > >>> > John >>> >>> >>> >> >> > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From knepley at gmail.com Tue Jul 10 08:56:16 2012 From: knepley at gmail.com (Matthew Knepley) Date: Tue, 10 Jul 2012 08:56:16 -0500 Subject: [petsc-users] Poisson equation with CG and BoomerAMG In-Reply-To: References: <690DB264-41D4-443D-86EE-28509BA719BB@cwi.nl> Message-ID: On Tue, Jul 10, 2012 at 8:30 AM, Barry Smith wrote: > > http://www.mcs.anl.gov/petsc/documentation/faq.html#computers > > likely you will not benefit from using more than 4 or at most 6 cores at > of the 8 for each node. This is a memory hardware limitation. > The easiest way to get an idea of the limit is to look at the scaling of VecAXPY. The performance will scale beautifully until it hits the memory bandwidth bottleneck. Matt > Barry > > On Jul 10, 2012, at 8:22 AM, Benjamin Sanderse wrote: > > > Hello all, > > > > I am solving a Poisson equation with Neumann BC on a structured grid > (arising from an incompressible Navier-Stokes problem). Although my mesh is > structured, the matrix is 'given' so I am using AMG instead of geometric > multigrid, for the moment. > > To solve the Poisson equation I use CG with preconditioning provided by > BoomerAMG, using standard options. I have run my problem for different grid > sizes and number of processors, but I am confused regarding the parallel > scalability. Attached are some timing results that give the average time > spent on solving the Poisson equation. As you can see, when going from 1 to > 2 processors, the scaling is very good, even for the case of 200^3 grid > points (8 million). For larger number of processors this quickly > deteriorates. The cluster I am running on has 8 cores per node and 24GB > memory per node. > > Can someone comment on these results? Is this what I should expect? 
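(To make the VecAXPY check suggested above concrete, one minimal and untested way to do it: time a loop like the one below on 1, 2, 4 and 8 cores of a single node and compare the MFlop/s reported for VecAXPY by -log_summary; x and y stand for any two equally sized parallel vectors that already exist. The core count at which the rate per node stops increasing marks the memory-bandwidth limit, and the Poisson solve will stop scaling at roughly the same point.)

      PetscScalar    :: one
      PetscInt       :: k
      PetscErrorCode :: ierr

      one = 1.0
      do k = 1,100
         call VecAXPY(y,one,x,ierr); CHKERRQ(ierr)
      end do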
> > > > Some additional information: > > - I set the NullSpace of the matrix explicitly with MatNullSpaceCreate > > - The set-up of the problem is not included in the timing results. The > set-up is not efficient yet (use of MatRow, for example), and there is some > code cleanup to do (too many matrices and vectors), but I think this should > not affect the performance of the Poisson solve. > > - log_summary and ksp_monitor are attached. > > > > Thanks a lot, > > > > Benjamin > > > > > > > > > > > > > > > > -- > > Ir. B. Sanderse > > > > Centrum Wiskunde en Informatica > > Science Park 123 > > 1098 XG Amsterdam > > > > t: +31 20 592 4161 > > e: sanderse at cwi.nl > > > > -- What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. -- Norbert Wiener
-------------- next part -------------- An HTML attachment was scrubbed... URL:
From t.hisch at gmail.com Tue Jul 10 09:44:33 2012 From: t.hisch at gmail.com (Thomas Hisch) Date: Tue, 10 Jul 2012 16:44:33 +0200 Subject: [petsc-users] FFT Matrix Examples/Tests: Compiletime error Message-ID: Hello list! I tried to test one of the FFT examples in src/mat/examples/tests/ by typing "make ex148" in this directory. Unfortunately the compilation failed: mpicxx -o ex148.o -c -Wall -Wwrite-strings -Wno-strict-aliasing -Wno-unknown-pragmas -O3 -fPIC -I/home/thomas/local/src/petsc-3.2-p6/include -I/home/thomas/local/src/petsc-3.2-p6/arch-linux2-cxx-release/include -I/usr/lib/openmpi/include -I/usr/lib/openmpi/include/openmpi -D__INSDIR__=src/mat/examples/tests/ ex148.c ex148.c: In function 'PetscInt main(PetscInt, char**)': ex148.c:45:37: error: 'InputTransformFFT' was not declared in this scope ex148.c:54:39: error: 'OutputTransformFFT' was not declared in this scope make: [ex148.o] Error 1 (ignored) All the other FFT examples seem to use these two Transformation functions as well. Any ideas where these functions are defined ? Regards, Thomas
From w_ang_temp at 163.com Tue Jul 10 09:51:29 2012 From: w_ang_temp at 163.com (w_ang_temp) Date: Tue, 10 Jul 2012 22:51:29 +0800 (CST) Subject: [petsc-users] About DIVERGED_ITS In-Reply-To: <84D88821-E153-4718-B2C8-0FD148060A50@columbia.edu> References: <4e50cca5.8135.138622b73ae.Coremail.w_ang_temp@163.com> <25928393.81f6.138623965c2.Coremail.w_ang_temp@163.com> <84D88821-E153-4718-B2C8-0FD148060A50@columbia.edu> Message-ID: <10174c6c.2608e.138715fa3b9.Coremail.w_ang_temp@163.com> In my opinion, convergence in PETSc is decided by rtol, atol and dtol. The diverged reason just shows that the solve did not satisfy that rule within the iteration limit. The "right" result may differ from the true result in the last few decimal places (I mean that they may agree to four decimal places but not to more). Is that right? >At 2012-07-08 00:28:54,"Mark F. Adams" wrote: >It sounds like your -ksp_rtol is too small. Experiment with looser tolerances until your solution is not "correct" to see >how much accuracy you want. >On Jul 7, 2012, at 12:15 PM, w_ang_temp wrote: > Maybe it is a problem of mathematical concept. I compare the result with the true result which is >computed and validated by other tools. I think it is right if I get the same result. >>At 2012-07-08 00:03:21,"Matthew Knepley" wrote:
>>On Sat, Jul 7, 2012 at 10:00 AM, w_ang_temp wrote: >>Hello, >> I am a little puzzled that I get the right result while the converged reason says that 'Linear solve >>did not >>converge due to DIVERGED_ITS iterations 10000'. This infomation means that the iterations >reach >the maximum >>iterations. But the result is right now. So why says 'did not converge'? Can I think that the result is >>right and >>can be used? >>Obviously, your definition of "right" is not the same as the convergence tolerances you are using. >> Matt >> Thanks. >> Jim -- >What most experimenters take for granted before they begin their experiments is infinitely more >interesting than any results to which their experiments lead. >-- Norbert Wiener -------------- next part -------------- An HTML attachment was scrubbed... URL: From hzhang at mcs.anl.gov Tue Jul 10 10:31:24 2012 From: hzhang at mcs.anl.gov (Hong Zhang) Date: Tue, 10 Jul 2012 10:31:24 -0500 Subject: [petsc-users] FFT Matrix Examples/Tests: Compiletime error In-Reply-To: References: Message-ID: Thomas: Please update to petsc-3.3, and build it with FFTW. ex148.c was rewritten using FFTW. Hong > Hello list! > > I tried to test one of the FFT examples in src/mat/examples/tests/ by > typing "make ex148" in this directory. Unfortunately the compilation > failed: > > mpicxx -o ex148.o -c -Wall -Wwrite-strings -Wno-strict-aliasing > -Wno-unknown-pragmas -O3 -fPIC > -I/home/thomas/local/src/petsc-3.2-p6/include > -I/home/thomas/local/src/petsc-3.2-p6/arch-linux2-cxx-release/include > -I/usr/lib/openmpi/include -I/usr/lib/openmpi/include/openmpi > -D__INSDIR__=src/mat/examples/tests/ ex148.c > ex148.c: In function ?PetscInt main(PetscInt, char**)?: > ex148.c:45:37: error: ?InputTransformFFT? was not declared in this scope > ex148.c:54:39: error: ?OutputTransformFFT? was not declared in this scope > make: [ex148.o] Error 1 (ignored) > > All the other FFT examples seem to use these two Transformation > functions as well. Any ideas where these functions are defined ? > > Regards, > Thomas > -------------- next part -------------- An HTML attachment was scrubbed... URL: From t.hisch at gmail.com Tue Jul 10 10:37:08 2012 From: t.hisch at gmail.com (Thomas Hisch) Date: Tue, 10 Jul 2012 17:37:08 +0200 Subject: [petsc-users] FFT Matrix Examples/Tests: Compiletime error In-Reply-To: References: Message-ID: Hey, thx for your quick response! Is petsc-3.3 compatible with the current slepc-3.2 ?? On Tue, Jul 10, 2012 at 5:31 PM, Hong Zhang wrote: > Thomas: > Please update to petsc-3.3, and build it with FFTW. > ex148.c was rewritten using FFTW. > > Hong > >> Hello list! >> >> I tried to test one of the FFT examples in src/mat/examples/tests/ by >> typing "make ex148" in this directory. Unfortunately the compilation >> failed: >> >> mpicxx -o ex148.o -c -Wall -Wwrite-strings -Wno-strict-aliasing >> -Wno-unknown-pragmas -O3 -fPIC >> -I/home/thomas/local/src/petsc-3.2-p6/include >> -I/home/thomas/local/src/petsc-3.2-p6/arch-linux2-cxx-release/include >> -I/usr/lib/openmpi/include -I/usr/lib/openmpi/include/openmpi >> -D__INSDIR__=src/mat/examples/tests/ ex148.c >> ex148.c: In function ?PetscInt main(PetscInt, char**)?: >> ex148.c:45:37: error: ?InputTransformFFT? was not declared in this scope >> ex148.c:54:39: error: ?OutputTransformFFT? was not declared in this scope >> make: [ex148.o] Error 1 (ignored) >> >> All the other FFT examples seem to use these two Transformation >> functions as well. Any ideas where these functions are defined ? 
>> >> Regards, >> Thomas > > From jedbrown at mcs.anl.gov Tue Jul 10 10:39:29 2012 From: jedbrown at mcs.anl.gov (Jed Brown) Date: Tue, 10 Jul 2012 10:39:29 -0500 Subject: [petsc-users] FFT Matrix Examples/Tests: Compiletime error In-Reply-To: References: Message-ID: On Tue, Jul 10, 2012 at 10:37 AM, Thomas Hisch wrote: > Hey, > > thx for your quick response! Is petsc-3.3 compatible with the current > slepc-3.2 ?? > No, but you can use slepc-dev. http://www.grycap.upv.es/slepc/download/download.htm > > On Tue, Jul 10, 2012 at 5:31 PM, Hong Zhang wrote: > > Thomas: > > Please update to petsc-3.3, and build it with FFTW. > > ex148.c was rewritten using FFTW. > > > > Hong > > > >> Hello list! > >> > >> I tried to test one of the FFT examples in src/mat/examples/tests/ by > >> typing "make ex148" in this directory. Unfortunately the compilation > >> failed: > >> > >> mpicxx -o ex148.o -c -Wall -Wwrite-strings -Wno-strict-aliasing > >> -Wno-unknown-pragmas -O3 -fPIC > >> -I/home/thomas/local/src/petsc-3.2-p6/include > >> -I/home/thomas/local/src/petsc-3.2-p6/arch-linux2-cxx-release/include > >> -I/usr/lib/openmpi/include -I/usr/lib/openmpi/include/openmpi > >> -D__INSDIR__=src/mat/examples/tests/ ex148.c > >> ex148.c: In function ?PetscInt main(PetscInt, char**)?: > >> ex148.c:45:37: error: ?InputTransformFFT? was not declared in this scope > >> ex148.c:54:39: error: ?OutputTransformFFT? was not declared in this > scope > >> make: [ex148.o] Error 1 (ignored) > >> > >> All the other FFT examples seem to use these two Transformation > >> functions as well. Any ideas where these functions are defined ? > >> > >> Regards, > >> Thomas > > > > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From t.hisch at gmail.com Tue Jul 10 10:55:27 2012 From: t.hisch at gmail.com (Thomas Hisch) Date: Tue, 10 Jul 2012 17:55:27 +0200 Subject: [petsc-users] FFT Matrix Examples/Tests: Compiletime error In-Reply-To: References: Message-ID: On Tue, Jul 10, 2012 at 5:31 PM, Hong Zhang wrote: > Thomas: > Please update to petsc-3.3, and build it with FFTW. > ex148.c was rewritten using FFTW. No, there is no diff between the ex148.c file in petsc-3.2-p6 and petsc-3.3-p1. With the latest petsc-3.3-p1 I still get the same error message. Regards Thomas > > Hong > >> Hello list! >> >> I tried to test one of the FFT examples in src/mat/examples/tests/ by >> typing "make ex148" in this directory. Unfortunately the compilation >> failed: >> >> mpicxx -o ex148.o -c -Wall -Wwrite-strings -Wno-strict-aliasing >> -Wno-unknown-pragmas -O3 -fPIC >> -I/home/thomas/local/src/petsc-3.2-p6/include >> -I/home/thomas/local/src/petsc-3.2-p6/arch-linux2-cxx-release/include >> -I/usr/lib/openmpi/include -I/usr/lib/openmpi/include/openmpi >> -D__INSDIR__=src/mat/examples/tests/ ex148.c >> ex148.c: In function ?PetscInt main(PetscInt, char**)?: >> ex148.c:45:37: error: ?InputTransformFFT? was not declared in this scope >> ex148.c:54:39: error: ?OutputTransformFFT? was not declared in this scope >> make: [ex148.o] Error 1 (ignored) >> >> All the other FFT examples seem to use these two Transformation >> functions as well. Any ideas where these functions are defined ? 
>> >> Regards, >> Thomas > > From zonexo at gmail.com Tue Jul 10 11:03:27 2012 From: zonexo at gmail.com (TAY wee-beng) Date: Tue, 10 Jul 2012 18:03:27 +0200 Subject: [petsc-users] Declaring struct to represent field for dof > 1 for DM in Fortran In-Reply-To: References: <4FFC066D.7060307@gmail.com> Message-ID: <4FFC524F.7050709@gmail.com> Yours sincerely, TAY wee-beng On 10/7/2012 2:07 PM, Matthew Knepley wrote: > On Tue, Jul 10, 2012 at 5:39 AM, TAY wee-beng > wrote: > > Hi, > > I read in the manual in page 50 that it's recommended to declare > struct to represent field for dof > 1 for DM. > > > We mean C struct. C makes it easy (just use a pointer type cast). > Fortran makes it hard unfortunately. > > Matt Ok, I'll try to use another mtd. Btw, if I declare: /PetscScalar,pointer :: array2(:,:,:) with DMDACreate2d using dof = 2, call DMDAVecGetArrayF90(da,x_local,array2,ierr) access array2 .... call DMDAVecRestoreArrayF90(da,x_local,array2,ierr)/ How is the memory for "array2" allocated ? Is it allocated all the time, or only between the DMDAVecGetArrayF90 and DMDAVecRestoreArrayF90? Also, can I "reuse" array2? For e.g., now for y_local: /call DMDAVecGetArrayF90(da,y_local,array2,ierr) access array2 ..../ / call DMDAVecRestoreArrayF90(da,y_local,array2,ierr)/ Thank you! > > I'm using Fortran and for testing, I use dof = 1 and write as: > > /type field > > //PetscScalar//u (or real(8) :: u) > > end type field > > type(field), pointer :: field_u(:,:)/ > > When I tried to use : > > /call DMDAVecGetArrayF90(da,x_local,field_u,ierr)/ > > I got the error : There is no matching specific subroutine for > this generic subroutine call. [DMDAVECGETARRAYF90] > > The da, x_local has been defined with the specific DM routines. It > worked if I use : > > /PetscScalar,pointer :: array(:,:) and > > call DMDAVecGetArrayF90(da,x_local,array,ierr)/ > > May I know what did I do wrong? > > > -- > Yours sincerely, > > TAY wee-beng > > > > > -- > What most experimenters take for granted before they begin their > experiments is infinitely more interesting than any results to which > their experiments lead. > -- Norbert Wiener -------------- next part -------------- An HTML attachment was scrubbed... URL: From knepley at gmail.com Tue Jul 10 11:05:33 2012 From: knepley at gmail.com (Matthew Knepley) Date: Tue, 10 Jul 2012 11:05:33 -0500 Subject: [petsc-users] Declaring struct to represent field for dof > 1 for DM in Fortran In-Reply-To: <4FFC524F.7050709@gmail.com> References: <4FFC066D.7060307@gmail.com> <4FFC524F.7050709@gmail.com> Message-ID: On Tue, Jul 10, 2012 at 11:03 AM, TAY wee-beng wrote: > > Yours sincerely, > > TAY wee-beng > > On 10/7/2012 2:07 PM, Matthew Knepley wrote: > > On Tue, Jul 10, 2012 at 5:39 AM, TAY wee-beng wrote: > >> Hi, >> >> I read in the manual in page 50 that it's recommended to declare struct >> to represent field for dof > 1 for DM. >> > > We mean C struct. C makes it easy (just use a pointer type cast). > Fortran makes it hard unfortunately. > > Matt > > Ok, I'll try to use another mtd. > > Btw, if I declare: > > *PetscScalar,pointer :: array2(:,:,:) > > with DMDACreate2d using dof = 2, > > call DMDAVecGetArrayF90(da,x_local,array2,ierr) > > access array2 .... > > call DMDAVecRestoreArrayF90(da,x_local,array2,ierr)* > > How is the memory for "array2" allocated ? Is it allocated all the time, > or only between the DMDAVecGetArrayF90 and DMDAVecRestoreArrayF90? > > Also, can I "reuse" array2? 
For e.g., now for y_local: > > *call DMDAVecGetArrayF90(da,y_local,array2,ierr) > > access array2 ....* * > > call DMDAVecRestoreArrayF90(da,y_local,array2,ierr)* > The right thing to do here is to implement DMDAVecGetArratDOFF90(). Matt > Thank you! > > > >> I'm using Fortran and for testing, I use dof = 1 and write as: >> >> *type field >> >> **PetscScalar** u (or real(8) :: u) >> >> end type field >> >> type(field), pointer :: field_u(:,:)* >> >> When I tried to use : >> >> *call DMDAVecGetArrayF90(da,x_local,field_u,ierr)* >> >> I got the error : There is no matching specific subroutine for this >> generic subroutine call. [DMDAVECGETARRAYF90] >> >> The da, x_local has been defined with the specific DM routines. It worked >> if I use : >> >> *PetscScalar,pointer :: array(:,:) and >> >> call DMDAVecGetArrayF90(da,x_local,array,ierr)* >> >> May I know what did I do wrong? >> >> >> -- >> Yours sincerely, >> >> TAY wee-beng >> >> > > > -- > What most experimenters take for granted before they begin their > experiments is infinitely more interesting than any results to which their > experiments lead. > -- Norbert Wiener > > > > -- What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. -- Norbert Wiener -------------- next part -------------- An HTML attachment was scrubbed... URL: From ajay.rawat83 at gmail.com Tue Jul 10 11:44:52 2012 From: ajay.rawat83 at gmail.com (ajay.rawat83 at gmail.com) Date: Tue, 10 Jul 2012 16:44:52 +0000 Subject: [petsc-users] FFT Matrix Examples/Tests: Compiletime error Message-ID: <4ffc5b66.05bc0e0a.0a2f.ffff9d0e@mx.google.com> No, try to use petsc-dev with slepc-dev -----Original message----- From: Thomas Hisch Sent: 10/07/2012, 9:07 pm To: PETSc users list Subject: Re: [petsc-users] FFT Matrix Examples/Tests: Compiletime error Hey, thx for your quick response! Is petsc-3.3 compatible with the current slepc-3.2 ?? On Tue, Jul 10, 2012 at 5:31 PM, Hong Zhang wrote: > Thomas: > Please update to petsc-3.3, and build it with FFTW. > ex148.c was rewritten using FFTW. > > Hong > >> Hello list! >> >> I tried to test one of the FFT examples in src/mat/examples/tests/ by >> typing "make ex148" in this directory. Unfortunately the compilation >> failed: >> >> mpicxx -o ex148.o -c -Wall -Wwrite-strings -Wno-strict-aliasing >> -Wno-unknown-pragmas -O3 -fPIC >> -I/home/thomas/local/src/petsc-3.2-p6/include >> -I/home/thomas/local/src/petsc-3.2-p6/arch-linux2-cxx-release/include >> -I/usr/lib/openmpi/include -I/usr/lib/openmpi/include/openmpi >> -D__INSDIR__=src/mat/examples/tests/ ex148.c >> ex148.c: In function ?PetscInt main(PetscInt, char**)?: >> ex148.c:45:37: error: ?InputTransformFFT? was not declared in this scope >> ex148.c:54:39: error: ?OutputTransformFFT? was not declared in this scope >> make: [ex148.o] Error 1 (ignored) >> >> All the other FFT examples seem to use these two Transformation >> functions as well. Any ideas where these functions are defined ? >> >> Regards, >> Thomas > > From t.hisch at gmail.com Tue Jul 10 11:59:55 2012 From: t.hisch at gmail.com (Thomas Hisch) Date: Tue, 10 Jul 2012 18:59:55 +0200 Subject: [petsc-users] FFT Matrix Examples/Tests: Compiletime error In-Reply-To: <4ffc5b66.05bc0e0a.0a2f.ffff9d0e@mx.google.com> References: <4ffc5b66.05bc0e0a.0a2f.ffff9d0e@mx.google.com> Message-ID: Thx for the hint. 
Should PETSc-dev in principle work with gcc-4.7, because I get the following error while building petsc: ----------------------------------------- Using C/C++ compile: mpicxx -c -Wall -Wwrite-strings -Wno-strict-aliasing -Wno-unknown-pragmas -O3 -fPIC -I/home/thomas/local/src/petsc-dev/include -I/home/thomas/local/src/petsc-dev/arch-linux2-cxx-release/include -I/usr/lib/openmpi/include -I/usr/lib/openmpi/include/openmpi -D__INSDIR__=./ Using Fortran compile: mpif90 -c -fPIC -Wall -Wno-unused-variable -Wno-unused-dummy-argument -I/home/thomas/local/src/petsc-dev/include -I/home/thomas/local/src/petsc-dev/arch-linux2-cxx-release/include -I/usr/lib/openmpi/include -I/usr/lib/openmpi/include/openmpi ----------------------------------------- Using C/C++ linker: mpicxx Using C/C++ flags: -Wall -Wwrite-strings -Wno-strict-aliasing -Wno-unknown-pragmas -O3 Using Fortran linker: mpif90 Using Fortran flags: -fPIC -Wall -Wno-unused-variable -Wno-unused-dummy-argument ----------------------------------------- Using libraries: -Wl,-rpath,/home/thomas/local/src/petsc-dev/arch-linux2-cxx-release/lib -L/home/thomas/local/src/petsc-dev/arch-linux2-cxx-release/lib -lpetsc -lX11 -Wl,-rpath,/home/thomas/local/src/petsc-dev/arch-linux2-cxx-release/lib -lfftw3_mpi -lfftw3 -lpthread -Wl,-rpath,/opt/intel/composer_xe_2011_sp1.11.339/mkl/lib/intel64 -L/opt/intel/composer_xe_2011_sp1.11.339/mkl/lib/intel64 -llapack -lblas -Wl,-rpath,/usr/lib/openmpi/lib -L/usr/lib/openmpi/lib -Wl,-rpath,/usr/lib/gcc/x86_64-linux-gnu/4.7 -L/usr/lib/gcc/x86_64-linux-gnu/4.7 -Wl,-rpath,/usr/lib/x86_64-linux-gnu -L/usr/lib/x86_64-linux-gnu -Wl,-rpath,/lib/x86_64-linux-gnu -L/lib/x86_64-linux-gnu -Wl,-rpath,/opt/intel/composer_xe_2011_sp1.11.339/compiler/lib/intel64 -L/opt/intel/composer_xe_2011_sp1.11.339/compiler/lib/intel64 -Wl,-rpath,/opt/intel/composer_xe_2011_sp1.10.319/compiler/lib/intel64 -L/opt/intel/composer_xe_2011_sp1.10.319/compiler/lib/intel64 -Wl,-rpath,/opt/intel/composer_xe_2011_sp1.10.319/ipp/lib/intel64 -L/opt/intel/composer_xe_2011_sp1.10.319/ipp/lib/intel64 -Wl,-rpath,/opt/intel/composer_xe_2011_sp1.10.319/mkl/lib/intel64 -L/opt/intel/composer_xe_2011_sp1.10.319/mkl/lib/intel64 -Wl,-rpath,/opt/intel/composer_xe_2011_sp1.10.319/tbb/lib/intel64/cc4.1.0_libc2.4_kernel2.6.16.21 -L/opt/intel/composer_xe_2011_sp1.10.319/tbb/lib/intel64/cc4.1.0_libc2.4_kernel2.6.16.21 -lmpi_f90 -lmpi_f77 -lgfortran -lm -lgfortran -lm -lgfortran -lm -lm -lquadmath -lm -lmpi_cxx -lstdc++ -ldl -lmpi -lopen-rte -lopen-pal -lnsl -lutil -lgcc_s -lpthread -ldl ------------------------------------------ Using mpiexec: mpiexec ========================================== Building PETSc using CMake with 5 build threads ========================================== Re-run cmake file: Makefile older than: ../CMakeLists.txt -- Configuring done -- Generating done -- Build files have been written to: /home/thomas/local/src/petsc-dev/arch-linux2-cxx-release Scanning dependencies of target petsc [ 0%] Building Fortran object CMakeFiles/petsc.dir/src/sys/f90-mod/petscsysmod.F.o [ 0%] Building Fortran object CMakeFiles/petsc.dir/src/vec/f90-mod/petscvecmod.F.o [ 1%] Building Fortran object CMakeFiles/petsc.dir/src/mat/f90-mod/petscmatmod.F.o [ 1%] Building Fortran object CMakeFiles/petsc.dir/src/dm/f90-mod/petscdmmod.F.o [ 1%] Building Fortran object CMakeFiles/petsc.dir/src/ksp/f90-mod/petsckspmod.F.o [ 1%] Building Fortran object CMakeFiles/petsc.dir/src/snes/f90-mod/petscsnesmod.F.o [ 2%] [ 2%] [ 2%] [ 2%] [ 2%] Building Fortran object 
CMakeFiles/petsc.dir/src/ts/f90-mod/petsctsmod.F.o Building CXX object CMakeFiles/petsc.dir/src/sys/verbose/verboseinfo.c.o Building CXX object CMakeFiles/petsc.dir/src/sys/viewer/interface/view.c.o Building CXX object CMakeFiles/petsc.dir/src/sys/viewer/interface/viewregall.c.o Building CXX object CMakeFiles/petsc.dir/src/sys/viewer/interface/flush.c.o [ 2%] In file included from /home/thomas/local/src/petsc-dev/include/petscsys.h:1536:0, from /home/thomas/local/src/petsc-dev/src/sys/verbose/verboseinfo.c:6: /home/thomas/local/src/petsc-dev/include/petsc-private/petscimpl.h: In function ?PetscBool PetscCheckPointer(const void*, PetscDataType)?: /home/thomas/local/src/petsc-dev/include/petsc-private/petscimpl.h:197:60: error: no matching function for call to ?std::complex::complex(volatile PetscScalar&)? /home/thomas/local/src/petsc-dev/include/petsc-private/petscimpl.h:197:60: note: candidates are: In file included from /home/thomas/local/src/petsc-dev/include/petscmath.h:60:0, from /home/thomas/local/src/petsc-dev/include/petscsys.h:337, from /home/thomas/local/src/petsc-dev/src/sys/verbose/verboseinfo.c:6: /usr/include/c++/4.7/complex:1205:26: note: std::complex::complex(const std::complex&) /usr/include/c++/4.7/complex:1205:26: note: no known conversion for argument 1 from ?volatile PetscScalar {aka volatile std::complex}? to ?const std::complex&? /usr/include/c++/4.7/complex:1195:26: note: std::complex::complex(double, double) /usr/include/c++/4.7/complex:1195:26: note: no known conversion for argument 1 from ?volatile PetscScalar {aka volatile std::complex}? to ?double? /usr/include/c++/4.7/complex:1193:26: note: std::complex::complex(std::complex::_ComplexT) /usr/include/c++/4.7/complex:1193:26: note: no known conversion for argument 1 from ?volatile PetscScalar {aka volatile std::complex}? to ?std::complex::_ComplexT {aka __complex__ double}? /usr/include/c++/4.7/complex:1188:12: note: std::complex::complex(const std::complex&) /usr/include/c++/4.7/complex:1188:12: note: no known conversion for argument 1 from ?volatile PetscScalar {aka volatile std::complex}? to ?const std::complex&? In file included from /home/thomas/local/src/petsc-dev/include/petscsys.h:1536:0, from /home/thomas/local/src/petsc-dev/include/petsc-private/viewerimpl.h:5, from /home/thomas/local/src/petsc-dev/src/sys/viewer/interface/flush.c:2: /home/thomas/local/src/petsc-dev/include/petsc-private/petscimpl.h: In function ?PetscBool PetscCheckPointer(const void*, PetscDataType)?: /home/thomas/local/src/petsc-dev/include/petsc-private/petscimpl.h:197:60: error: no matching function for call to ?std::complex::complex(volatile PetscScalar&)? /home/thomas/local/src/petsc-dev/include/petsc-private/petscimpl.h:197:60: note: candidates are: In file included from /home/thomas/local/src/petsc-dev/include/petscmath.h:60:0, from /home/thomas/local/src/petsc-dev/include/petscsys.h:337, from /home/thomas/local/src/petsc-dev/include/petsc-private/viewerimpl.h:5, from /home/thomas/local/src/petsc-dev/src/sys/viewer/interface/flush.c:2: /usr/include/c++/4.7/complex:1205:26: note: std::complex::complex(const std::complex&) /usr/include/c++/4.7/complex:1205:26: note: no known conversion for argument 1 from ?volatile PetscScalar {aka volatile std::complex}? to ?const std::complex&? /usr/include/c++/4.7/complex:1195:26: note: std::complex::complex(double, double) /usr/include/c++/4.7/complex:1195:26: note: no known conversion for argument 1 from ?volatile PetscScalar {aka volatile std::complex}? to ?double? 
/usr/include/c++/4.7/complex:1193:26: note: std::complex::complex(std::complex::_ComplexT) /usr/include/c++/4.7/complex:1193:26: note: no known conversion for argument 1 from ?volatile PetscScalar {aka volatile std::complex}? to ?std::complex::_ComplexT {aka __complex__ double}? /usr/include/c++/4.7/complex:1188:12: note: std::complex::complex(const std::complex&) /usr/include/c++/4.7/complex:1188:12: note: no known conversion for argument 1 from ?volatile PetscScalar {aka volatile std::complex}? to ?const std::complex&? Building CXX object CMakeFiles/petsc.dir/src/sys/viewer/interface/viewreg.c.o In file included from /home/thomas/local/src/petsc-dev/include/petscsys.h:1536:0, from /home/thomas/local/src/petsc-dev/include/petsc-private/viewerimpl.h:5, from /home/thomas/local/src/petsc-dev/src/sys/viewer/interface/viewregall.c:2: /home/thomas/local/src/petsc-dev/include/petsc-private/petscimpl.h: In function ?PetscBool PetscCheckPointer(const void*, PetscDataType)?: /home/thomas/local/src/petsc-dev/include/petsc-private/petscimpl.h:197:60: error: no matching function for call to ?std::complex::complex(volatile PetscScalar&)? /home/thomas/local/src/petsc-dev/include/petsc-private/petscimpl.h:197:60: note: candidates are: In file included from /home/thomas/local/src/petsc-dev/include/petscmath.h:60:0, from /home/thomas/local/src/petsc-dev/include/petscsys.h:337, from /home/thomas/local/src/petsc-dev/include/petsc-private/viewerimpl.h:5, from /home/thomas/local/src/petsc-dev/src/sys/viewer/interface/viewregall.c:2: /usr/include/c++/4.7/complex:1205:26: note: std::complex::complex(const std::complex&) /usr/include/c++/4.7/complex:1205:26: note: no known conversion for argument 1 from ?volatile PetscScalar {aka volatile std::complex}? to ?const std::complex&? /usr/include/c++/4.7/complex:1195:26: note: std::complex::complex(double, double) /usr/include/c++/4.7/complex:1195:26: note: no known conversion for argument 1 from ?volatile PetscScalar {aka volatile std::complex}? to ?double? /usr/include/c++/4.7/complex:1193:26: note: std::complex::complex(std::complex::_ComplexT) /usr/include/c++/4.7/complex:1193:26: note: no known conversion for argument 1 from ?volatile PetscScalar {aka volatile std::complex}? to ?std::complex::_ComplexT {aka __complex__ double}? /usr/include/c++/4.7/complex:1188:12: note: std::complex::complex(const std::complex&) /usr/include/c++/4.7/complex:1188:12: note: no known conversion for argument 1 from ?volatile PetscScalar {aka volatile std::complex}? to ?const std::complex&? ...... I called configure with: ./configure --with-c++-support=1 --with-scalar-type=complex --with-x11=0 --with-clanguage=cxx --with-blas-lapack-dir=/opt/intel/composer_xe_2011_sp1.11.339/mkl/lib/intel64 CXXOPTFLAGS="-O3" COPTFLAGS="-O3" FOPTFLAGS="-03" --with-shared-libraries=1 --with-debugging=0 --download-fftw=1 Regards Thomas On Tue, Jul 10, 2012 at 6:44 PM, ajay.rawat83 at gmail.com wrote: > No, try to use petsc-dev with slepc-dev > -----Original message----- > From: Thomas Hisch > Sent: 10/07/2012, 9:07 pm > To: PETSc users list > Subject: Re: [petsc-users] FFT Matrix Examples/Tests: Compiletime error > > > Hey, > > thx for your quick response! Is petsc-3.3 compatible with the current > slepc-3.2 ?? > > On Tue, Jul 10, 2012 at 5:31 PM, Hong Zhang wrote: >> Thomas: >> Please update to petsc-3.3, and build it with FFTW. >> ex148.c was rewritten using FFTW. >> >> Hong >> >>> Hello list! 
>>> >>> I tried to test one of the FFT examples in src/mat/examples/tests/ by >>> typing "make ex148" in this directory. Unfortunately the compilation >>> failed: >>> >>> mpicxx -o ex148.o -c -Wall -Wwrite-strings -Wno-strict-aliasing >>> -Wno-unknown-pragmas -O3 -fPIC >>> -I/home/thomas/local/src/petsc-3.2-p6/include >>> -I/home/thomas/local/src/petsc-3.2-p6/arch-linux2-cxx-release/include >>> -I/usr/lib/openmpi/include -I/usr/lib/openmpi/include/openmpi >>> -D__INSDIR__=src/mat/examples/tests/ ex148.c >>> ex148.c: In function ?PetscInt main(PetscInt, char**)?: >>> ex148.c:45:37: error: ?InputTransformFFT? was not declared in this scope >>> ex148.c:54:39: error: ?OutputTransformFFT? was not declared in this scope >>> make: [ex148.o] Error 1 (ignored) >>> >>> All the other FFT examples seem to use these two Transformation >>> functions as well. Any ideas where these functions are defined ? >>> >>> Regards, >>> Thomas >> >> > From knepley at gmail.com Tue Jul 10 12:04:03 2012 From: knepley at gmail.com (Matthew Knepley) Date: Tue, 10 Jul 2012 12:04:03 -0500 Subject: [petsc-users] [petsc-dev] FFT Matrix Examples/Tests: Compiletime error In-Reply-To: References: <4ffc5b66.05bc0e0a.0a2f.ffff9d0e@mx.google.com> Message-ID: On Tue, Jul 10, 2012 at 11:59 AM, Thomas Hisch wrote: > Thx for the hint. > > Should PETSc-dev in principle work with gcc-4.7, because I get the > following error while building petsc: > Jed is fixing that now. C++ compilers are extraordinarily dumb, and cannot cast a 'volatile std::complex' to 'std::complex'. Should be ready soon. Matt > ----------------------------------------- > Using C/C++ compile: mpicxx -c -Wall -Wwrite-strings > -Wno-strict-aliasing -Wno-unknown-pragmas -O3 -fPIC > -I/home/thomas/local/src/petsc-dev/include > -I/home/thomas/local/src/petsc-dev/arch-linux2-cxx-release/include > -I/usr/lib/openmpi/include -I/usr/lib/openmpi/include/openmpi > -D__INSDIR__=./ > Using Fortran compile: mpif90 -c -fPIC -Wall -Wno-unused-variable > -Wno-unused-dummy-argument > -I/home/thomas/local/src/petsc-dev/include > -I/home/thomas/local/src/petsc-dev/arch-linux2-cxx-release/include > -I/usr/lib/openmpi/include -I/usr/lib/openmpi/include/openmpi > ----------------------------------------- > Using C/C++ linker: mpicxx > Using C/C++ flags: -Wall -Wwrite-strings -Wno-strict-aliasing > -Wno-unknown-pragmas -O3 > Using Fortran linker: mpif90 > Using Fortran flags: -fPIC -Wall -Wno-unused-variable > -Wno-unused-dummy-argument > ----------------------------------------- > Using libraries: > -Wl,-rpath,/home/thomas/local/src/petsc-dev/arch-linux2-cxx-release/lib > -L/home/thomas/local/src/petsc-dev/arch-linux2-cxx-release/lib > -lpetsc -lX11 > -Wl,-rpath,/home/thomas/local/src/petsc-dev/arch-linux2-cxx-release/lib > -lfftw3_mpi -lfftw3 -lpthread > -Wl,-rpath,/opt/intel/composer_xe_2011_sp1.11.339/mkl/lib/intel64 > -L/opt/intel/composer_xe_2011_sp1.11.339/mkl/lib/intel64 -llapack > -lblas -Wl,-rpath,/usr/lib/openmpi/lib -L/usr/lib/openmpi/lib > -Wl,-rpath,/usr/lib/gcc/x86_64-linux-gnu/4.7 > -L/usr/lib/gcc/x86_64-linux-gnu/4.7 > -Wl,-rpath,/usr/lib/x86_64-linux-gnu -L/usr/lib/x86_64-linux-gnu > -Wl,-rpath,/lib/x86_64-linux-gnu -L/lib/x86_64-linux-gnu > -Wl,-rpath,/opt/intel/composer_xe_2011_sp1.11.339/compiler/lib/intel64 > -L/opt/intel/composer_xe_2011_sp1.11.339/compiler/lib/intel64 > -Wl,-rpath,/opt/intel/composer_xe_2011_sp1.10.319/compiler/lib/intel64 > -L/opt/intel/composer_xe_2011_sp1.10.319/compiler/lib/intel64 > 
-Wl,-rpath,/opt/intel/composer_xe_2011_sp1.10.319/ipp/lib/intel64 > -L/opt/intel/composer_xe_2011_sp1.10.319/ipp/lib/intel64 > -Wl,-rpath,/opt/intel/composer_xe_2011_sp1.10.319/mkl/lib/intel64 > -L/opt/intel/composer_xe_2011_sp1.10.319/mkl/lib/intel64 > > -Wl,-rpath,/opt/intel/composer_xe_2011_sp1.10.319/tbb/lib/intel64/cc4.1.0_libc2.4_kernel2.6.16.21 > > -L/opt/intel/composer_xe_2011_sp1.10.319/tbb/lib/intel64/cc4.1.0_libc2.4_kernel2.6.16.21 > -lmpi_f90 -lmpi_f77 -lgfortran -lm -lgfortran -lm -lgfortran -lm -lm > -lquadmath -lm -lmpi_cxx -lstdc++ -ldl -lmpi -lopen-rte -lopen-pal > -lnsl -lutil -lgcc_s -lpthread -ldl > ------------------------------------------ > Using mpiexec: mpiexec > ========================================== > Building PETSc using CMake with 5 build threads > ========================================== > Re-run cmake file: Makefile older than: ../CMakeLists.txt > -- Configuring done > -- Generating done > -- Build files have been written to: > /home/thomas/local/src/petsc-dev/arch-linux2-cxx-release > Scanning dependencies of target petsc > [ 0%] Building Fortran object > CMakeFiles/petsc.dir/src/sys/f90-mod/petscsysmod.F.o > [ 0%] Building Fortran object > CMakeFiles/petsc.dir/src/vec/f90-mod/petscvecmod.F.o > [ 1%] Building Fortran object > CMakeFiles/petsc.dir/src/mat/f90-mod/petscmatmod.F.o > [ 1%] Building Fortran object > CMakeFiles/petsc.dir/src/dm/f90-mod/petscdmmod.F.o > [ 1%] Building Fortran object > CMakeFiles/petsc.dir/src/ksp/f90-mod/petsckspmod.F.o > [ 1%] Building Fortran object > CMakeFiles/petsc.dir/src/snes/f90-mod/petscsnesmod.F.o > [ 2%] [ 2%] [ 2%] [ 2%] [ 2%] Building Fortran object > CMakeFiles/petsc.dir/src/ts/f90-mod/petsctsmod.F.o > Building CXX object CMakeFiles/petsc.dir/src/sys/verbose/verboseinfo.c.o > Building CXX object CMakeFiles/petsc.dir/src/sys/viewer/interface/view.c.o > Building CXX object > CMakeFiles/petsc.dir/src/sys/viewer/interface/viewregall.c.o > Building CXX object CMakeFiles/petsc.dir/src/sys/viewer/interface/flush.c.o > [ 2%] In file included from > /home/thomas/local/src/petsc-dev/include/petscsys.h:1536:0, > from > /home/thomas/local/src/petsc-dev/src/sys/verbose/verboseinfo.c:6: > /home/thomas/local/src/petsc-dev/include/petsc-private/petscimpl.h: In > function ?PetscBool PetscCheckPointer(const void*, PetscDataType)?: > /home/thomas/local/src/petsc-dev/include/petsc-private/petscimpl.h:197:60: > error: no matching function for call to > ?std::complex::complex(volatile PetscScalar&)? > /home/thomas/local/src/petsc-dev/include/petsc-private/petscimpl.h:197:60: > note: candidates are: > In file included from > /home/thomas/local/src/petsc-dev/include/petscmath.h:60:0, > from > /home/thomas/local/src/petsc-dev/include/petscsys.h:337, > from > /home/thomas/local/src/petsc-dev/src/sys/verbose/verboseinfo.c:6: > /usr/include/c++/4.7/complex:1205:26: note: > std::complex::complex(const std::complex&) > /usr/include/c++/4.7/complex:1205:26: note: no known conversion for > argument 1 from ?volatile PetscScalar {aka volatile > std::complex}? to ?const std::complex&? > /usr/include/c++/4.7/complex:1195:26: note: > std::complex::complex(double, double) > /usr/include/c++/4.7/complex:1195:26: note: no known conversion for > argument 1 from ?volatile PetscScalar {aka volatile > std::complex}? to ?double? 
> /usr/include/c++/4.7/complex:1193:26: note: > std::complex::complex(std::complex::_ComplexT) > /usr/include/c++/4.7/complex:1193:26: note: no known conversion for > argument 1 from ?volatile PetscScalar {aka volatile > std::complex}? to ?std::complex::_ComplexT {aka > __complex__ double}? > /usr/include/c++/4.7/complex:1188:12: note: > std::complex::complex(const std::complex&) > /usr/include/c++/4.7/complex:1188:12: note: no known conversion for > argument 1 from ?volatile PetscScalar {aka volatile > std::complex}? to ?const std::complex&? > In file included from > /home/thomas/local/src/petsc-dev/include/petscsys.h:1536:0, > from > /home/thomas/local/src/petsc-dev/include/petsc-private/viewerimpl.h:5, > from > /home/thomas/local/src/petsc-dev/src/sys/viewer/interface/flush.c:2: > /home/thomas/local/src/petsc-dev/include/petsc-private/petscimpl.h: In > function ?PetscBool PetscCheckPointer(const void*, PetscDataType)?: > /home/thomas/local/src/petsc-dev/include/petsc-private/petscimpl.h:197:60: > error: no matching function for call to > ?std::complex::complex(volatile PetscScalar&)? > /home/thomas/local/src/petsc-dev/include/petsc-private/petscimpl.h:197:60: > note: candidates are: > In file included from > /home/thomas/local/src/petsc-dev/include/petscmath.h:60:0, > from > /home/thomas/local/src/petsc-dev/include/petscsys.h:337, > from > /home/thomas/local/src/petsc-dev/include/petsc-private/viewerimpl.h:5, > from > /home/thomas/local/src/petsc-dev/src/sys/viewer/interface/flush.c:2: > /usr/include/c++/4.7/complex:1205:26: note: > std::complex::complex(const std::complex&) > /usr/include/c++/4.7/complex:1205:26: note: no known conversion for > argument 1 from ?volatile PetscScalar {aka volatile > std::complex}? to ?const std::complex&? > /usr/include/c++/4.7/complex:1195:26: note: > std::complex::complex(double, double) > /usr/include/c++/4.7/complex:1195:26: note: no known conversion for > argument 1 from ?volatile PetscScalar {aka volatile > std::complex}? to ?double? > /usr/include/c++/4.7/complex:1193:26: note: > std::complex::complex(std::complex::_ComplexT) > /usr/include/c++/4.7/complex:1193:26: note: no known conversion for > argument 1 from ?volatile PetscScalar {aka volatile > std::complex}? to ?std::complex::_ComplexT {aka > __complex__ double}? > /usr/include/c++/4.7/complex:1188:12: note: > std::complex::complex(const std::complex&) > /usr/include/c++/4.7/complex:1188:12: note: no known conversion for > argument 1 from ?volatile PetscScalar {aka volatile > std::complex}? to ?const std::complex&? > Building CXX object > CMakeFiles/petsc.dir/src/sys/viewer/interface/viewreg.c.o > In file included from > /home/thomas/local/src/petsc-dev/include/petscsys.h:1536:0, > from > /home/thomas/local/src/petsc-dev/include/petsc-private/viewerimpl.h:5, > from > /home/thomas/local/src/petsc-dev/src/sys/viewer/interface/viewregall.c:2: > /home/thomas/local/src/petsc-dev/include/petsc-private/petscimpl.h: In > function ?PetscBool PetscCheckPointer(const void*, PetscDataType)?: > /home/thomas/local/src/petsc-dev/include/petsc-private/petscimpl.h:197:60: > error: no matching function for call to > ?std::complex::complex(volatile PetscScalar&)? 
> /home/thomas/local/src/petsc-dev/include/petsc-private/petscimpl.h:197:60: > note: candidates are: > In file included from > /home/thomas/local/src/petsc-dev/include/petscmath.h:60:0, > from > /home/thomas/local/src/petsc-dev/include/petscsys.h:337, > from > /home/thomas/local/src/petsc-dev/include/petsc-private/viewerimpl.h:5, > from > /home/thomas/local/src/petsc-dev/src/sys/viewer/interface/viewregall.c:2: > /usr/include/c++/4.7/complex:1205:26: note: > std::complex::complex(const std::complex&) > /usr/include/c++/4.7/complex:1205:26: note: no known conversion for > argument 1 from ?volatile PetscScalar {aka volatile > std::complex}? to ?const std::complex&? > /usr/include/c++/4.7/complex:1195:26: note: > std::complex::complex(double, double) > /usr/include/c++/4.7/complex:1195:26: note: no known conversion for > argument 1 from ?volatile PetscScalar {aka volatile > std::complex}? to ?double? > /usr/include/c++/4.7/complex:1193:26: note: > std::complex::complex(std::complex::_ComplexT) > /usr/include/c++/4.7/complex:1193:26: note: no known conversion for > argument 1 from ?volatile PetscScalar {aka volatile > std::complex}? to ?std::complex::_ComplexT {aka > __complex__ double}? > /usr/include/c++/4.7/complex:1188:12: note: > std::complex::complex(const std::complex&) > /usr/include/c++/4.7/complex:1188:12: note: no known conversion for > argument 1 from ?volatile PetscScalar {aka volatile > std::complex}? to ?const std::complex&? > ...... > > I called configure with: > > ./configure --with-c++-support=1 --with-scalar-type=complex > --with-x11=0 --with-clanguage=cxx > > --with-blas-lapack-dir=/opt/intel/composer_xe_2011_sp1.11.339/mkl/lib/intel64 > CXXOPTFLAGS="-O3" COPTFLAGS="-O3" FOPTFLAGS="-03" > --with-shared-libraries=1 --with-debugging=0 --download-fftw=1 > > Regards > Thomas > > > On Tue, Jul 10, 2012 at 6:44 PM, ajay.rawat83 at gmail.com > wrote: > > No, try to use petsc-dev with slepc-dev > > -----Original message----- > > From: Thomas Hisch > > Sent: 10/07/2012, 9:07 pm > > To: PETSc users list > > Subject: Re: [petsc-users] FFT Matrix Examples/Tests: Compiletime error > > > > > > Hey, > > > > thx for your quick response! Is petsc-3.3 compatible with the current > > slepc-3.2 ?? > > > > On Tue, Jul 10, 2012 at 5:31 PM, Hong Zhang wrote: > >> Thomas: > >> Please update to petsc-3.3, and build it with FFTW. > >> ex148.c was rewritten using FFTW. > >> > >> Hong > >> > >>> Hello list! > >>> > >>> I tried to test one of the FFT examples in src/mat/examples/tests/ by > >>> typing "make ex148" in this directory. Unfortunately the compilation > >>> failed: > >>> > >>> mpicxx -o ex148.o -c -Wall -Wwrite-strings -Wno-strict-aliasing > >>> -Wno-unknown-pragmas -O3 -fPIC > >>> -I/home/thomas/local/src/petsc-3.2-p6/include > >>> -I/home/thomas/local/src/petsc-3.2-p6/arch-linux2-cxx-release/include > >>> -I/usr/lib/openmpi/include -I/usr/lib/openmpi/include/openmpi > >>> -D__INSDIR__=src/mat/examples/tests/ ex148.c > >>> ex148.c: In function ?PetscInt main(PetscInt, char**)?: > >>> ex148.c:45:37: error: ?InputTransformFFT? was not declared in this > scope > >>> ex148.c:54:39: error: ?OutputTransformFFT? was not declared in this > scope > >>> make: [ex148.o] Error 1 (ignored) > >>> > >>> All the other FFT examples seem to use these two Transformation > >>> functions as well. Any ideas where these functions are defined ? 
> >>> > >>> Regards, > >>> Thomas > >> > >> > > > -- What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. -- Norbert Wiener -------------- next part -------------- An HTML attachment was scrubbed... URL: From jedbrown at mcs.anl.gov Tue Jul 10 12:06:38 2012 From: jedbrown at mcs.anl.gov (Jed Brown) Date: Tue, 10 Jul 2012 12:06:38 -0500 Subject: [petsc-users] [petsc-dev] FFT Matrix Examples/Tests: Compiletime error In-Reply-To: References: <4ffc5b66.05bc0e0a.0a2f.ffff9d0e@mx.google.com> Message-ID: On Tue, Jul 10, 2012 at 12:04 PM, Matthew Knepley wrote: > On Tue, Jul 10, 2012 at 11:59 AM, Thomas Hisch wrote: > >> Thx for the hint. >> >> Should PETSc-dev in principle work with gcc-4.7, because I get the >> following error while building petsc: >> > > Jed is fixing that now. C++ compilers are extraordinarily dumb, and cannot > cast > a 'volatile std::complex' to 'std::complex'. Should be ready soon. > http://petsc.cs.iit.edu/petsc/petsc-dev/rev/664ec55a8ab3 > > Matt > > >> ----------------------------------------- >> Using C/C++ compile: mpicxx -c -Wall -Wwrite-strings >> -Wno-strict-aliasing -Wno-unknown-pragmas -O3 -fPIC >> -I/home/thomas/local/src/petsc-dev/include >> -I/home/thomas/local/src/petsc-dev/arch-linux2-cxx-release/include >> -I/usr/lib/openmpi/include -I/usr/lib/openmpi/include/openmpi >> -D__INSDIR__=./ >> Using Fortran compile: mpif90 -c -fPIC -Wall -Wno-unused-variable >> -Wno-unused-dummy-argument >> -I/home/thomas/local/src/petsc-dev/include >> -I/home/thomas/local/src/petsc-dev/arch-linux2-cxx-release/include >> -I/usr/lib/openmpi/include -I/usr/lib/openmpi/include/openmpi >> ----------------------------------------- >> Using C/C++ linker: mpicxx >> Using C/C++ flags: -Wall -Wwrite-strings -Wno-strict-aliasing >> -Wno-unknown-pragmas -O3 >> Using Fortran linker: mpif90 >> Using Fortran flags: -fPIC -Wall -Wno-unused-variable >> -Wno-unused-dummy-argument >> ----------------------------------------- >> Using libraries: >> -Wl,-rpath,/home/thomas/local/src/petsc-dev/arch-linux2-cxx-release/lib >> -L/home/thomas/local/src/petsc-dev/arch-linux2-cxx-release/lib >> -lpetsc -lX11 >> -Wl,-rpath,/home/thomas/local/src/petsc-dev/arch-linux2-cxx-release/lib >> -lfftw3_mpi -lfftw3 -lpthread >> -Wl,-rpath,/opt/intel/composer_xe_2011_sp1.11.339/mkl/lib/intel64 >> -L/opt/intel/composer_xe_2011_sp1.11.339/mkl/lib/intel64 -llapack >> -lblas -Wl,-rpath,/usr/lib/openmpi/lib -L/usr/lib/openmpi/lib >> -Wl,-rpath,/usr/lib/gcc/x86_64-linux-gnu/4.7 >> -L/usr/lib/gcc/x86_64-linux-gnu/4.7 >> -Wl,-rpath,/usr/lib/x86_64-linux-gnu -L/usr/lib/x86_64-linux-gnu >> -Wl,-rpath,/lib/x86_64-linux-gnu -L/lib/x86_64-linux-gnu >> -Wl,-rpath,/opt/intel/composer_xe_2011_sp1.11.339/compiler/lib/intel64 >> -L/opt/intel/composer_xe_2011_sp1.11.339/compiler/lib/intel64 >> -Wl,-rpath,/opt/intel/composer_xe_2011_sp1.10.319/compiler/lib/intel64 >> -L/opt/intel/composer_xe_2011_sp1.10.319/compiler/lib/intel64 >> -Wl,-rpath,/opt/intel/composer_xe_2011_sp1.10.319/ipp/lib/intel64 >> -L/opt/intel/composer_xe_2011_sp1.10.319/ipp/lib/intel64 >> -Wl,-rpath,/opt/intel/composer_xe_2011_sp1.10.319/mkl/lib/intel64 >> -L/opt/intel/composer_xe_2011_sp1.10.319/mkl/lib/intel64 >> >> -Wl,-rpath,/opt/intel/composer_xe_2011_sp1.10.319/tbb/lib/intel64/cc4.1.0_libc2.4_kernel2.6.16.21 >> >> -L/opt/intel/composer_xe_2011_sp1.10.319/tbb/lib/intel64/cc4.1.0_libc2.4_kernel2.6.16.21 >> -lmpi_f90 -lmpi_f77 -lgfortran -lm -lgfortran -lm 
-lgfortran -lm -lm >> -lquadmath -lm -lmpi_cxx -lstdc++ -ldl -lmpi -lopen-rte -lopen-pal >> -lnsl -lutil -lgcc_s -lpthread -ldl >> ------------------------------------------ >> Using mpiexec: mpiexec >> ========================================== >> Building PETSc using CMake with 5 build threads >> ========================================== >> Re-run cmake file: Makefile older than: ../CMakeLists.txt >> -- Configuring done >> -- Generating done >> -- Build files have been written to: >> /home/thomas/local/src/petsc-dev/arch-linux2-cxx-release >> Scanning dependencies of target petsc >> [ 0%] Building Fortran object >> CMakeFiles/petsc.dir/src/sys/f90-mod/petscsysmod.F.o >> [ 0%] Building Fortran object >> CMakeFiles/petsc.dir/src/vec/f90-mod/petscvecmod.F.o >> [ 1%] Building Fortran object >> CMakeFiles/petsc.dir/src/mat/f90-mod/petscmatmod.F.o >> [ 1%] Building Fortran object >> CMakeFiles/petsc.dir/src/dm/f90-mod/petscdmmod.F.o >> [ 1%] Building Fortran object >> CMakeFiles/petsc.dir/src/ksp/f90-mod/petsckspmod.F.o >> [ 1%] Building Fortran object >> CMakeFiles/petsc.dir/src/snes/f90-mod/petscsnesmod.F.o >> [ 2%] [ 2%] [ 2%] [ 2%] [ 2%] Building Fortran object >> CMakeFiles/petsc.dir/src/ts/f90-mod/petsctsmod.F.o >> Building CXX object CMakeFiles/petsc.dir/src/sys/verbose/verboseinfo.c.o >> Building CXX object CMakeFiles/petsc.dir/src/sys/viewer/interface/view.c.o >> Building CXX object >> CMakeFiles/petsc.dir/src/sys/viewer/interface/viewregall.c.o >> Building CXX object >> CMakeFiles/petsc.dir/src/sys/viewer/interface/flush.c.o >> [ 2%] In file included from >> /home/thomas/local/src/petsc-dev/include/petscsys.h:1536:0, >> from >> /home/thomas/local/src/petsc-dev/src/sys/verbose/verboseinfo.c:6: >> /home/thomas/local/src/petsc-dev/include/petsc-private/petscimpl.h: In >> function ?PetscBool PetscCheckPointer(const void*, PetscDataType)?: >> /home/thomas/local/src/petsc-dev/include/petsc-private/petscimpl.h:197:60: >> error: no matching function for call to >> ?std::complex::complex(volatile PetscScalar&)? >> /home/thomas/local/src/petsc-dev/include/petsc-private/petscimpl.h:197:60: >> note: candidates are: >> In file included from >> /home/thomas/local/src/petsc-dev/include/petscmath.h:60:0, >> from >> /home/thomas/local/src/petsc-dev/include/petscsys.h:337, >> from >> /home/thomas/local/src/petsc-dev/src/sys/verbose/verboseinfo.c:6: >> /usr/include/c++/4.7/complex:1205:26: note: >> std::complex::complex(const std::complex&) >> /usr/include/c++/4.7/complex:1205:26: note: no known conversion for >> argument 1 from ?volatile PetscScalar {aka volatile >> std::complex}? to ?const std::complex&? >> /usr/include/c++/4.7/complex:1195:26: note: >> std::complex::complex(double, double) >> /usr/include/c++/4.7/complex:1195:26: note: no known conversion for >> argument 1 from ?volatile PetscScalar {aka volatile >> std::complex}? to ?double? >> /usr/include/c++/4.7/complex:1193:26: note: >> std::complex::complex(std::complex::_ComplexT) >> /usr/include/c++/4.7/complex:1193:26: note: no known conversion for >> argument 1 from ?volatile PetscScalar {aka volatile >> std::complex}? to ?std::complex::_ComplexT {aka >> __complex__ double}? >> /usr/include/c++/4.7/complex:1188:12: note: >> std::complex::complex(const std::complex&) >> /usr/include/c++/4.7/complex:1188:12: note: no known conversion for >> argument 1 from ?volatile PetscScalar {aka volatile >> std::complex}? to ?const std::complex&? 
>> In file included from >> /home/thomas/local/src/petsc-dev/include/petscsys.h:1536:0, >> from >> /home/thomas/local/src/petsc-dev/include/petsc-private/viewerimpl.h:5, >> from >> /home/thomas/local/src/petsc-dev/src/sys/viewer/interface/flush.c:2: >> /home/thomas/local/src/petsc-dev/include/petsc-private/petscimpl.h: In >> function ?PetscBool PetscCheckPointer(const void*, PetscDataType)?: >> /home/thomas/local/src/petsc-dev/include/petsc-private/petscimpl.h:197:60: >> error: no matching function for call to >> ?std::complex::complex(volatile PetscScalar&)? >> /home/thomas/local/src/petsc-dev/include/petsc-private/petscimpl.h:197:60: >> note: candidates are: >> In file included from >> /home/thomas/local/src/petsc-dev/include/petscmath.h:60:0, >> from >> /home/thomas/local/src/petsc-dev/include/petscsys.h:337, >> from >> /home/thomas/local/src/petsc-dev/include/petsc-private/viewerimpl.h:5, >> from >> /home/thomas/local/src/petsc-dev/src/sys/viewer/interface/flush.c:2: >> /usr/include/c++/4.7/complex:1205:26: note: >> std::complex::complex(const std::complex&) >> /usr/include/c++/4.7/complex:1205:26: note: no known conversion for >> argument 1 from ?volatile PetscScalar {aka volatile >> std::complex}? to ?const std::complex&? >> /usr/include/c++/4.7/complex:1195:26: note: >> std::complex::complex(double, double) >> /usr/include/c++/4.7/complex:1195:26: note: no known conversion for >> argument 1 from ?volatile PetscScalar {aka volatile >> std::complex}? to ?double? >> /usr/include/c++/4.7/complex:1193:26: note: >> std::complex::complex(std::complex::_ComplexT) >> /usr/include/c++/4.7/complex:1193:26: note: no known conversion for >> argument 1 from ?volatile PetscScalar {aka volatile >> std::complex}? to ?std::complex::_ComplexT {aka >> __complex__ double}? >> /usr/include/c++/4.7/complex:1188:12: note: >> std::complex::complex(const std::complex&) >> /usr/include/c++/4.7/complex:1188:12: note: no known conversion for >> argument 1 from ?volatile PetscScalar {aka volatile >> std::complex}? to ?const std::complex&? >> Building CXX object >> CMakeFiles/petsc.dir/src/sys/viewer/interface/viewreg.c.o >> In file included from >> /home/thomas/local/src/petsc-dev/include/petscsys.h:1536:0, >> from >> /home/thomas/local/src/petsc-dev/include/petsc-private/viewerimpl.h:5, >> from >> /home/thomas/local/src/petsc-dev/src/sys/viewer/interface/viewregall.c:2: >> /home/thomas/local/src/petsc-dev/include/petsc-private/petscimpl.h: In >> function ?PetscBool PetscCheckPointer(const void*, PetscDataType)?: >> /home/thomas/local/src/petsc-dev/include/petsc-private/petscimpl.h:197:60: >> error: no matching function for call to >> ?std::complex::complex(volatile PetscScalar&)? >> /home/thomas/local/src/petsc-dev/include/petsc-private/petscimpl.h:197:60: >> note: candidates are: >> In file included from >> /home/thomas/local/src/petsc-dev/include/petscmath.h:60:0, >> from >> /home/thomas/local/src/petsc-dev/include/petscsys.h:337, >> from >> /home/thomas/local/src/petsc-dev/include/petsc-private/viewerimpl.h:5, >> from >> /home/thomas/local/src/petsc-dev/src/sys/viewer/interface/viewregall.c:2: >> /usr/include/c++/4.7/complex:1205:26: note: >> std::complex::complex(const std::complex&) >> /usr/include/c++/4.7/complex:1205:26: note: no known conversion for >> argument 1 from ?volatile PetscScalar {aka volatile >> std::complex}? to ?const std::complex&? 
>> /usr/include/c++/4.7/complex:1195:26: note: >> std::complex::complex(double, double) >> /usr/include/c++/4.7/complex:1195:26: note: no known conversion for >> argument 1 from ?volatile PetscScalar {aka volatile >> std::complex}? to ?double? >> /usr/include/c++/4.7/complex:1193:26: note: >> std::complex::complex(std::complex::_ComplexT) >> /usr/include/c++/4.7/complex:1193:26: note: no known conversion for >> argument 1 from ?volatile PetscScalar {aka volatile >> std::complex}? to ?std::complex::_ComplexT {aka >> __complex__ double}? >> /usr/include/c++/4.7/complex:1188:12: note: >> std::complex::complex(const std::complex&) >> /usr/include/c++/4.7/complex:1188:12: note: no known conversion for >> argument 1 from ?volatile PetscScalar {aka volatile >> std::complex}? to ?const std::complex&? >> ...... >> >> I called configure with: >> >> ./configure --with-c++-support=1 --with-scalar-type=complex >> --with-x11=0 --with-clanguage=cxx >> >> --with-blas-lapack-dir=/opt/intel/composer_xe_2011_sp1.11.339/mkl/lib/intel64 >> CXXOPTFLAGS="-O3" COPTFLAGS="-O3" FOPTFLAGS="-03" >> --with-shared-libraries=1 --with-debugging=0 --download-fftw=1 >> >> Regards >> Thomas >> >> >> On Tue, Jul 10, 2012 at 6:44 PM, ajay.rawat83 at gmail.com >> wrote: >> > No, try to use petsc-dev with slepc-dev >> > -----Original message----- >> > From: Thomas Hisch >> > Sent: 10/07/2012, 9:07 pm >> > To: PETSc users list >> > Subject: Re: [petsc-users] FFT Matrix Examples/Tests: Compiletime error >> > >> > >> > Hey, >> > >> > thx for your quick response! Is petsc-3.3 compatible with the current >> > slepc-3.2 ?? >> > >> > On Tue, Jul 10, 2012 at 5:31 PM, Hong Zhang wrote: >> >> Thomas: >> >> Please update to petsc-3.3, and build it with FFTW. >> >> ex148.c was rewritten using FFTW. >> >> >> >> Hong >> >> >> >>> Hello list! >> >>> >> >>> I tried to test one of the FFT examples in src/mat/examples/tests/ by >> >>> typing "make ex148" in this directory. Unfortunately the compilation >> >>> failed: >> >>> >> >>> mpicxx -o ex148.o -c -Wall -Wwrite-strings -Wno-strict-aliasing >> >>> -Wno-unknown-pragmas -O3 -fPIC >> >>> -I/home/thomas/local/src/petsc-3.2-p6/include >> >>> -I/home/thomas/local/src/petsc-3.2-p6/arch-linux2-cxx-release/include >> >>> -I/usr/lib/openmpi/include -I/usr/lib/openmpi/include/openmpi >> >>> -D__INSDIR__=src/mat/examples/tests/ ex148.c >> >>> ex148.c: In function ?PetscInt main(PetscInt, char**)?: >> >>> ex148.c:45:37: error: ?InputTransformFFT? was not declared in this >> scope >> >>> ex148.c:54:39: error: ?OutputTransformFFT? was not declared in this >> scope >> >>> make: [ex148.o] Error 1 (ignored) >> >>> >> >>> All the other FFT examples seem to use these two Transformation >> >>> functions as well. Any ideas where these functions are defined ? >> >>> >> >>> Regards, >> >>> Thomas >> >> >> >> >> > >> > > > > -- > What most experimenters take for granted before they begin their > experiments is infinitely more interesting than any results to which their > experiments lead. > -- Norbert Wiener > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From jedbrown at mcs.anl.gov Tue Jul 10 12:12:13 2012 From: jedbrown at mcs.anl.gov (Jed Brown) Date: Tue, 10 Jul 2012 12:12:13 -0500 Subject: [petsc-users] About DIVERGED_ITS In-Reply-To: <10174c6c.2608e.138715fa3b9.Coremail.w_ang_temp@163.com> References: <4e50cca5.8135.138622b73ae.Coremail.w_ang_temp@163.com> <25928393.81f6.138623965c2.Coremail.w_ang_temp@163.com> <84D88821-E153-4718-B2C8-0FD148060A50@columbia.edu> <10174c6c.2608e.138715fa3b9.Coremail.w_ang_temp@163.com> Message-ID: Possibilities (not mutually exclusive) 1. the convergence tolerance is too tight to converge in double precision arithmetic, so the solver is stagnating (you could try --with-precision=__float128) 2. the tolerance is too tight to converge in 10000 iterations (either increase the number of iterations or use a better preconditioner 3. the tolerance is tighter than necessary to get a solution that is "right enough" for you (try loosening the tolerance) On Tue, Jul 10, 2012 at 9:51 AM, w_ang_temp wrote: > > In my opinion, convergence in PETSc is decided by rtol, atol and dtol. > The divergent hints just show that > > in the solving process it does not satisfy the rule. The "right" result > may be different from the true result at > > the several back decimal places(I mean that they may be the same with four > decimal places but may be not > > the same with more decimal places). > > Is it right? > > > >At 2012-07-08 00:28:54,"Mark F. Adams" wrote: > > >It sounds like your -ksp_rtol is too small. Experiment with looser > tolerances until your solution is not "correct" to see >how much accuracy > you want. > > >On Jul 7, 2012, at 12:15 PM, w_ang_temp wrote: > > > Maybe it is a problem of mathematical concept. I compare the result > with the true result which is > > >computed and validated by other tools. I think it is right if I get the > same result. > > >>? 2012-07-08 00:03:21?"Matthew Knepley" ??? > > >>On Sat, Jul 7, 2012 at 10:00 AM, w_ang_temp wrote: > >> >>Hello, >> >> >> I am a little puzzled that I get the right result while the >> converged reason says that 'Linear solve >>did not >> >> >>converge due to DIVERGED_ITS iterations 10000'. This infomation means >> that the iterations >reach >the maximum >> >> >>iterations. But the result is right now. So why says 'did not >> converge'? Can I think that the result is >>right and >> >> >>can be used? >> > >>Obviously, your definition of "right" is not the same as the convergence > tolerances you are using. > > >> Matt > > >> >> >> Thanks. >> >> >> Jim >> >> >> > > > -- > >What most experimenters take for granted before they begin their > experiments is infinitely more >interesting than any results to which their > experiments lead. > >-- Norbert Wiener > > > > > > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From mark.adams at columbia.edu Tue Jul 10 12:45:28 2012 From: mark.adams at columbia.edu (Mark F. Adams) Date: Tue, 10 Jul 2012 13:45:28 -0400 Subject: [petsc-users] About DIVERGED_ITS In-Reply-To: <10174c6c.2608e.138715fa3b9.Coremail.w_ang_temp@163.com> References: <4e50cca5.8135.138622b73ae.Coremail.w_ang_temp@163.com> <25928393.81f6.138623965c2.Coremail.w_ang_temp@163.com> <84D88821-E153-4718-B2C8-0FD148060A50@columbia.edu> <10174c6c.2608e.138715fa3b9.Coremail.w_ang_temp@163.com> Message-ID: <5C23C252-4CF5-493B-B189-1838355417A5@columbia.edu> On Jul 10, 2012, at 10:51 AM, w_ang_temp wrote: > > In my opinion, convergence in PETSc is decided by rtol, atol and dtol. 
The divergent hints just show that > > in the solving process it does not satisfy the rule. The "right" result may be different from the true result in > > the last few decimal places (I mean that they may be the same with four decimal places but may be not > > the same with more decimal places). No, if you have an rtol=1.e-30 and atol=1.e-300 then your solution will probably be correct to all decimal places even though PETSc will say you "diverged". You can't be too sure about how many digits of accuracy you have with simple linear algebra arguments unless your system is extremely well conditioned. As Jed said, if you are doing 10000 iterations and are happy with the solution then your tolerances are probably too tight. Look at the residual history and see if you are stagnating. > > Is it right? > > > >At 2012-07-08 00:28:54,"Mark F. Adams" wrote: > >It sounds like your -ksp_rtol is too small. Experiment with looser tolerances until your solution is not "correct" to see >how much accuracy you want. > > >On Jul 7, 2012, at 12:15 PM, w_ang_temp wrote: > >> > Maybe it is a problem of mathematical concept. I compare the result with the true result which is >> >> >computed and validated by other tools. I think it is right if I get the same result. >> >> >>At 2012-07-08 00:03:21, "Matthew Knepley" wrote: >> >>On Sat, Jul 7, 2012 at 10:00 AM, w_ang_temp wrote: >> >>Hello, >> >> >> I am a little puzzled that I get the right result while the converged reason says that 'Linear solve >>did not >> >> >>converge due to DIVERGED_ITS iterations 10000'. This information means that the iterations >reach >the maximum >> >> >>iterations. But the result is right now. So why does it say 'did not converge'? Can I think that the result is >>right and >> >> >>can be used? >> >>Obviously, your definition of "right" is not the same as the convergence tolerances you are using. >> >> >> Matt >> >> >> >> Thanks. >> >> >> Jim >> >> >> >> >> >> -- >> >What most experimenters take for granted before they begin their experiments is infinitely more >interesting than any results to which their experiments lead. >> >-- Norbert Wiener >> >> > > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From t.hisch at gmail.com Tue Jul 10 12:50:14 2012 From: t.hisch at gmail.com (Thomas Hisch) Date: Tue, 10 Jul 2012 19:50:14 +0200 Subject: [petsc-users] [petsc-dev] FFT Matrix Examples/Tests: Compiletime error In-Reply-To: References: <4ffc5b66.05bc0e0a.0a2f.ffff9d0e@mx.google.com> Message-ID: On Tue, Jul 10, 2012 at 7:06 PM, Jed Brown wrote: > On Tue, Jul 10, 2012 at 12:04 PM, Matthew Knepley wrote: >> >> On Tue, Jul 10, 2012 at 11:59 AM, Thomas Hisch wrote: >>> >>> Thx for the hint. >>> >>> Should PETSc-dev in principle work with gcc-4.7, because I get the >>> following error while building petsc: >> >> >> Jed is fixing that now. C++ compilers are extraordinarily dumb, and cannot >> cast >> a 'volatile std::complex' to 'std::complex'. Should be ready soon. > > > http://petsc.cs.iit.edu/petsc/petsc-dev/rev/664ec55a8ab3 Thx this fixed the compilation error. However, compilation of the ex148 example in src/mat/examples/tests/ still fails! Maybe I did something wrong. diff shows no difference between ex148.c from petsc-dev and petsc-3.3-p1. I tried to compile ex148 by typing 'make ex148' in src/mat/examples/tests/ - is this the correct way?
Again, here is the error message: mpicxx -o ex148.o -c -Wall -Wwrite-strings -Wno-strict-aliasing -Wno-unknown-pragmas -O3 -fPIC -I/home/thomas/local/src/petsc-dev/include -I/home/thomas/local/src/petsc-dev/arch-linux2-cxx-release/include -I/usr/lib/openmpi/include -I/usr/lib/openmpi/include/openmpi -D__INSDIR__=src/mat/examples/tests/ ex148.c ex148.c: In function ?PetscInt main(PetscInt, char**)?: ex148.c:45:37: error: ?InputTransformFFT? was not declared in this scope ex148.c:54:39: error: ?OutputTransformFFT? was not declared in this scope make: [ex148.o] Error 1 (ignored) mpicxx -Wall -Wwrite-strings -Wno-strict-aliasing -Wno-unknown-pragmas -O3 -o ex148 ex148.o -Wl,-rpath,/home/thomas/local/src/petsc-dev/arch-linux2-cxx-release/lib -L/home/thomas/local/src/petsc-dev/arch-linux2-cxx-release/lib -lpetsc -lX11 -Wl,-rpath,/home/thomas/local/src/petsc-dev/arch-linux2-cxx-release/lib -lfftw3_mpi -lfftw3 -lpthread -Wl,-rpath,/opt/intel/composer_xe_2011_sp1.11.339/mkl/lib/intel64 -L/opt/intel/composer_xe_2011_sp1.11.339/mkl/lib/intel64 -llapack -lblas -Wl,-rpath,/usr/lib/openmpi/lib -L/usr/lib/openmpi/lib -Wl,-rpath,/usr/lib/gcc/x86_64-linux-gnu/4.7 -L/usr/lib/gcc/x86_64-linux-gnu/4.7 -Wl,-rpath,/usr/lib/x86_64-linux-gnu -L/usr/lib/x86_64-linux-gnu -Wl,-rpath,/lib/x86_64-linux-gnu -L/lib/x86_64-linux-gnu -Wl,-rpath,/opt/intel/composer_xe_2011_sp1.11.339/compiler/lib/intel64 -L/opt/intel/composer_xe_2011_sp1.11.339/compiler/lib/intel64 -Wl,-rpath,/opt/intel/composer_xe_2011_sp1.10.319/compiler/lib/intel64 -L/opt/intel/composer_xe_2011_sp1.10.319/compiler/lib/intel64 -Wl,-rpath,/opt/intel/composer_xe_2011_sp1.10.319/ipp/lib/intel64 -L/opt/intel/composer_xe_2011_sp1.10.319/ipp/lib/intel64 -Wl,-rpath,/opt/intel/composer_xe_2011_sp1.10.319/mkl/lib/intel64 -L/opt/intel/composer_xe_2011_sp1.10.319/mkl/lib/intel64 -Wl,-rpath,/opt/intel/composer_xe_2011_sp1.10.319/tbb/lib/intel64/cc4.1.0_libc2.4_kernel2.6.16.21 -L/opt/intel/composer_xe_2011_sp1.10.319/tbb/lib/intel64/cc4.1.0_libc2.4_kernel2.6.16.21 -lmpi_f90 -lmpi_f77 -lgfortran -lm -lgfortran -lm -lgfortran -lm -lm -lquadmath -lm -lmpi_cxx -lstdc++ -ldl -lmpi -lopen-rte -lopen-pal -lnsl -lutil -lgcc_s -lpthread -ldl g++: error: ex148.o: No such file or directory make: [ex148] Error 1 (ignored) /bin/rm -f ex148.o Regards Thomas From sean at mcs.anl.gov Tue Jul 10 13:03:44 2012 From: sean at mcs.anl.gov (Sean Farley) Date: Tue, 10 Jul 2012 13:03:44 -0500 Subject: [petsc-users] [petsc-dev] FFT Matrix Examples/Tests: Compiletime error In-Reply-To: References: <4ffc5b66.05bc0e0a.0a2f.ffff9d0e@mx.google.com> Message-ID: On Tue, Jul 10, 2012 at 12:50 PM, Thomas Hisch wrote: > However, compilation of the ex148 example in src/mat/examples/tests/ > still failes! Maybe I did something wrong. diff shows no differenece > between ex148.c from petsc-dev and petsc-3.3-p1. I tried to compile > ex148 by typing 'make ex148' in src/mat/examples/tests/ - is this the > correct way ? > > Again, here is the error message: > > mpicxx -o ex148.o -c -Wall -Wwrite-strings -Wno-strict-aliasing > -Wno-unknown-pragmas -O3 -fPIC > -I/home/thomas/local/src/petsc-dev/include > -I/home/thomas/local/src/petsc-dev/arch-linux2-cxx-release/include > -I/usr/lib/openmpi/include -I/usr/lib/openmpi/include/openmpi > -D__INSDIR__=src/mat/examples/tests/ ex148.c > ex148.c: In function ?PetscInt main(PetscInt, char**)?: > ex148.c:45:37: error: ?InputTransformFFT? was not declared in this scope > ex148.c:54:39: error: ?OutputTransformFFT? 
was not declared in this scope > make: [ex148.o] Error 1 (ignored) > mpicxx -Wall -Wwrite-strings -Wno-strict-aliasing -Wno-unknown-pragmas > -O3 -o ex148 ex148.o > -Wl,-rpath,/home/thomas/local/src/petsc-dev/arch-linux2-cxx-release/lib > -L/home/thomas/local/src/petsc-dev/arch-linux2-cxx-release/lib > -lpetsc -lX11 -Wl,-rpath,/home/thomas/local/src/petsc-dev/arch-linux2-cxx-release/lib > -lfftw3_mpi -lfftw3 -lpthread > -Wl,-rpath,/opt/intel/composer_xe_2011_sp1.11.339/mkl/lib/intel64 > -L/opt/intel/composer_xe_2011_sp1.11.339/mkl/lib/intel64 -llapack > -lblas -Wl,-rpath,/usr/lib/openmpi/lib -L/usr/lib/openmpi/lib > -Wl,-rpath,/usr/lib/gcc/x86_64-linux-gnu/4.7 > -L/usr/lib/gcc/x86_64-linux-gnu/4.7 > -Wl,-rpath,/usr/lib/x86_64-linux-gnu -L/usr/lib/x86_64-linux-gnu > -Wl,-rpath,/lib/x86_64-linux-gnu -L/lib/x86_64-linux-gnu > -Wl,-rpath,/opt/intel/composer_xe_2011_sp1.11.339/compiler/lib/intel64 > -L/opt/intel/composer_xe_2011_sp1.11.339/compiler/lib/intel64 > -Wl,-rpath,/opt/intel/composer_xe_2011_sp1.10.319/compiler/lib/intel64 > -L/opt/intel/composer_xe_2011_sp1.10.319/compiler/lib/intel64 > -Wl,-rpath,/opt/intel/composer_xe_2011_sp1.10.319/ipp/lib/intel64 > -L/opt/intel/composer_xe_2011_sp1.10.319/ipp/lib/intel64 > -Wl,-rpath,/opt/intel/composer_xe_2011_sp1.10.319/mkl/lib/intel64 > -L/opt/intel/composer_xe_2011_sp1.10.319/mkl/lib/intel64 > -Wl,-rpath,/opt/intel/composer_xe_2011_sp1.10.319/tbb/lib/intel64/cc4.1.0_libc2.4_kernel2.6.16.21 > -L/opt/intel/composer_xe_2011_sp1.10.319/tbb/lib/intel64/cc4.1.0_libc2.4_kernel2.6.16.21 > -lmpi_f90 -lmpi_f77 -lgfortran -lm -lgfortran -lm -lgfortran -lm -lm > -lquadmath -lm -lmpi_cxx -lstdc++ -ldl -lmpi -lopen-rte -lopen-pal > -lnsl -lutil -lgcc_s -lpthread -ldl > g++: error: ex148.o: No such file or directory > make: [ex148] Error 1 (ignored) > /bin/rm -f ex148.o Sorry for the bug, Thomas. It seems that a summer student last year caused this: http://petsc.cs.iit.edu/petsc/petsc-dev/rev/dc6053cf3c78 I would fix this but need to meet with my advisor soon. Could someone else look at this? From zonexo at gmail.com Tue Jul 10 13:22:20 2012 From: zonexo at gmail.com (TAY wee-beng) Date: Tue, 10 Jul 2012 20:22:20 +0200 Subject: [petsc-users] Declaring struct to represent field for dof > 1 for DM in Fortran In-Reply-To: References: <4FFC066D.7060307@gmail.com> <4FFC524F.7050709@gmail.com> Message-ID: <4FFC72DC.9040500@gmail.com> On 10/7/2012 6:05 PM, Matthew Knepley wrote: > On Tue, Jul 10, 2012 at 11:03 AM, TAY wee-beng > wrote: > > > Yours sincerely, > > TAY wee-beng > > On 10/7/2012 2:07 PM, Matthew Knepley wrote: >> On Tue, Jul 10, 2012 at 5:39 AM, TAY wee-beng > > wrote: >> >> Hi, >> >> I read in the manual in page 50 that it's recommended to >> declare struct to represent field for dof > 1 for DM. >> >> >> We mean C struct. C makes it easy (just use a pointer type cast). >> Fortran makes it hard unfortunately. >> >> Matt > Ok, I'll try to use another mtd. > > Btw, if I declare: > > /PetscScalar,pointer :: array2(:,:,:) > > with DMDACreate2d using dof = 2, > > call DMDAVecGetArrayF90(da,x_local,array2,ierr) > > access array2 .... > > call DMDAVecRestoreArrayF90(da,x_local,array2,ierr)/ > > How is the memory for "array2" allocated ? Is it allocated all the > time, or only between the DMDAVecGetArrayF90 and > DMDAVecRestoreArrayF90? > > Also, can I "reuse" array2? 
For e.g., now for y_local: > > /call DMDAVecGetArrayF90(da,y_local,array2,ierr) > > access array2 ..../ / > > call DMDAVecRestoreArrayF90(da,y_local,array2,ierr)/ > > > The right thing to do here is to implement DMDAVecGetArratDOFF90(). > > Matt Hi, Do you mean DMDAVecGetArrayDOFF90 ? I tried to compile but it gives the error during linking: 1>dm_test2d.obj : error LNK2019: unresolved external symbol DMDAVECGETARRAYDOFF90 referenced in function MAIN__ Also from the manual of DMDAVecGetArray, it says: / Fortran Notes: From Fortran use DMDAVecGetArrayF90() and pass for the array type PetscScalar ,pointer :: array(:,...,:) of the appropriate dimension. For a DMDA created with a dof of 1 use the dimension of the DMDA, for a DMDA created with a dof greater than 1 use one more than the dimension of the DMDA. The order of the indices is array(xs:xs+xm-1,ys:ys+ym-1,zs:zs+zm-1) (when dof is 1) otherwise array(1:dof,xs:xs+xm-1,ys:ys+ym-1,zs:zs+zm-1) where the values are obtained from DMDAGetCorners () for a global array or DMDAGetGhostCorners () for a local array. Include finclude/petscdmda.h90 to access this routine. / I just tried with dof = 2 and there's no problem. However, the manual says that for dof > 1, the array is /array(1:dof,xs:xs+xm-1,ys:ys+ym-1,zs:zs+zm-1)/. Should it be /array(0:dof-1,xs:xs+xm-1,ys:ys+ym-1,zs:zs+zm-1)/ instead? I had problems with the former, but the latter works fine. Also, I'm still not sure how the memory is allocated. If I have: /Vec x_local PetscScalar,pointer :: array2(:,:,:) with DMDACreate2d using dof = 2, call DMDAVecGetArrayF90(da,x_local,array2,ierr) access array2 .... call DMDAVecRestoreArrayF90(da,x_local,array2,ierr)/ How is the memory for "array2" allocated ? Is it allocated all the time, or only between the DMDAVecGetArrayF90 and DMDAVecRestoreArrayF90? Thanks! > Thank you! > >> >> I'm using Fortran and for testing, I use dof = 1 and write as: >> >> /type field >> >> //PetscScalar//u (or real(8) :: u) >> >> end type field >> >> type(field), pointer :: field_u(:,:)/ >> >> When I tried to use : >> >> /call DMDAVecGetArrayF90(da,x_local,field_u,ierr)/ >> >> I got the error : There is no matching specific subroutine >> for this generic subroutine call. [DMDAVECGETARRAYF90] >> >> The da, x_local has been defined with the specific DM >> routines. It worked if I use : >> >> /PetscScalar,pointer :: array(:,:) and >> >> call DMDAVecGetArrayF90(da,x_local,array,ierr)/ >> >> May I know what did I do wrong? >> >> >> -- >> Yours sincerely, >> >> TAY wee-beng >> >> >> >> >> -- >> What most experimenters take for granted before they begin their >> experiments is infinitely more interesting than any results to >> which their experiments lead. >> -- Norbert Wiener > > > > > > -- > What most experimenters take for granted before they begin their > experiments is infinitely more interesting than any results to which > their experiments lead. > -- Norbert Wiener -------------- next part -------------- An HTML attachment was scrubbed... URL: From jedbrown at mcs.anl.gov Tue Jul 10 13:29:56 2012 From: jedbrown at mcs.anl.gov (Jed Brown) Date: Tue, 10 Jul 2012 13:29:56 -0500 Subject: [petsc-users] Declaring struct to represent field for dof > 1 for DM in Fortran In-Reply-To: <4FFC72DC.9040500@gmail.com> References: <4FFC066D.7060307@gmail.com> <4FFC524F.7050709@gmail.com> <4FFC72DC.9040500@gmail.com> Message-ID: On Tue, Jul 10, 2012 at 1:22 PM, TAY wee-beng wrote: > Do you mean DMDAVecGetArrayDOFF90 ? 
I tried to compile but it gives the > error during linking: > > 1>dm_test2d.obj : error LNK2019: unresolved external symbol > DMDAVECGETARRAYDOFF90 referenced in function MAIN__ > Matt was suggesting that someone should implement this function, it doesn't exist currently. Fortran makes this stuff really painful and we don't know how to make it do something reasonable without depending on your types. You can write your own DMDAVecGetArrayF90WithYourType(), but sadly, it requires some circus tricks to make work. (We can't put this in PETSc because we don't know what your field type is.) > > Also from the manual of DMDAVecGetArray, it says: > * > Fortran Notes: From Fortran use DMDAVecGetArrayF90() and pass for the > array type PetscScalar,pointer > :: array(:,...,:) of the appropriate dimension. For a DMDA created with a > dof of 1 use the dimension of the DMDA, for a DMDA created with a dof > greater than 1 use one more than the dimension of the DMDA. The order of > the indices is array(xs:xs+xm-1,ys:ys+ym-1,zs:zs+zm-1) (when dof is 1) > otherwise array(1:dof,xs:xs+xm-1,ys:ys+ym-1,zs:zs+zm-1) where the values > are obtained from DMDAGetCorners() > for a global array or DMDAGetGhostCorners() > for a local array. Include finclude/petscdmda.h90 to access this routine. > * > > I just tried with dof = 2 and there's no problem. However, the manual says > that for dof > 1, the array is * > array(1:dof,xs:xs+xm-1,ys:ys+ym-1,zs:zs+zm-1)*. > > Should it be *array(0:dof-1,xs:xs+xm-1,ys:ys+ym-1,zs:zs+zm-1)* instead? I > had problems with the former, but the latter works fine. > > Also, I'm still not sure how the memory is allocated. If I have: > > *Vec x_local > > > PetscScalar,pointer :: array2(:,:,:) > > with DMDACreate2d using dof = 2, > > call DMDAVecGetArrayF90(da,x_local,array2,ierr) > > access array2 .... > > call DMDAVecRestoreArrayF90(da,x_local,array2,ierr) > * > > > How is the memory for "array2" allocated ? Is it allocated all the time, > or only between the DMDAVecGetArrayF90 and DMDAVecRestoreArrayF90? > -------------- next part -------------- An HTML attachment was scrubbed... URL: From ptbauman at gmail.com Tue Jul 10 13:34:22 2012 From: ptbauman at gmail.com (Paul T. Bauman) Date: Tue, 10 Jul 2012 13:34:22 -0500 Subject: [petsc-users] KSPSetOperators/sub-preconditioner type Message-ID: Greetings, This question is somewhat related to this thread: http://lists.mcs.anl.gov/pipermail/petsc-users/2011-August/009583.html but instead of resurrecting that thread, I decided to start a new one. Summary: In the libMesh wrapper of PETSc's KSP, a call is made to KSPSetOperators; I gather this is in order to allow reusing the same preconditioner, if desired. However, I've noticed that when I use the following options, the sub-preconditioner type gets reset: -pc_type bjacobi -sub_pc_type ilu -sub_pc_factor_mat_solver_package superlu In particular, the first linear solve uses what I asked for, but in all following linear solves, the petsc ilu solver is used instead of superlu. If I remove the KSPSetOperators call, everything is peachy. My question: Does a call need to be made to PCSetFromOptions (or something similar...) to keep the superlu ilu solver around or would that wipe out the KSPSetOperators effect? Is this a PETSc bug? Other? This is with petsc-3.3-p1. Thanks for your time. Best, Paul -------------- next part -------------- An HTML attachment was scrubbed... 
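The calling pattern described in that message boils down to the sketch below. It is not the libMesh wrapper or the attached ex5.c, just an illustration with a made-up diagonal system, and it assumes the petsc-3.3 form of KSPSetOperators() that still takes a MatStructure flag: KSPSetFromOptions() runs once at startup, while KSPSetOperators() runs before every solve, which is where the sub-preconditioner type was reported to reset.

  /* Sketch only: a toy stand-in for the attached ex5.c, assuming the
     petsc-3.3 KSPSetOperators() signature with a MatStructure argument. */
  #include <petscksp.h>

  int main(int argc, char **argv)
  {
    KSP            ksp;
    Mat            A;
    Vec            x, b;
    PetscInt       i, n = 100, Istart, Iend;
    PetscErrorCode ierr;

    PetscInitialize(&argc, &argv, (char*)0, (char*)0);

    /* A trivial diagonal system standing in for the real operator. */
    ierr = MatCreateAIJ(PETSC_COMM_WORLD, PETSC_DECIDE, PETSC_DECIDE, n, n, 1, PETSC_NULL, 0, PETSC_NULL, &A);CHKERRQ(ierr);
    ierr = MatGetOwnershipRange(A, &Istart, &Iend);CHKERRQ(ierr);
    for (i = Istart; i < Iend; i++) {
      ierr = MatSetValue(A, i, i, 2.0, INSERT_VALUES);CHKERRQ(ierr);
    }
    ierr = MatAssemblyBegin(A, MAT_FINAL_ASSEMBLY);CHKERRQ(ierr);
    ierr = MatAssemblyEnd(A, MAT_FINAL_ASSEMBLY);CHKERRQ(ierr);
    ierr = MatGetVecs(A, &x, &b);CHKERRQ(ierr);
    ierr = VecSet(b, 1.0);CHKERRQ(ierr);

    ierr = KSPCreate(PETSC_COMM_WORLD, &ksp);CHKERRQ(ierr);
    ierr = KSPSetFromOptions(ksp);CHKERRQ(ierr);   /* called once, at initialization */

    /* KSPSetOperators() is called before every solve, as in the wrapper;
       the second solve is where the sub-PC was reported to fall back to
       the native ILU instead of the requested superlu factorization. */
    for (i = 0; i < 2; i++) {
      ierr = KSPSetOperators(ksp, A, A, DIFFERENT_NONZERO_PATTERN);CHKERRQ(ierr);
      ierr = KSPSolve(ksp, b, x);CHKERRQ(ierr);
    }

    ierr = KSPDestroy(&ksp);CHKERRQ(ierr);
    ierr = MatDestroy(&A);CHKERRQ(ierr);
    ierr = VecDestroy(&x);CHKERRQ(ierr);
    ierr = VecDestroy(&b);CHKERRQ(ierr);
    ierr = PetscFinalize();
    return 0;
  }

Running it with the options quoted in the message plus -ksp_view on each solve makes the sub-PC solver package visible for every pass through the loop.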
URL: From t.hisch at gmail.com Tue Jul 10 13:41:57 2012 From: t.hisch at gmail.com (Thomas Hisch) Date: Tue, 10 Jul 2012 20:41:57 +0200 Subject: [petsc-users] [petsc-dev] FFT Matrix Examples/Tests: Compiletime error In-Reply-To: References: <4ffc5b66.05bc0e0a.0a2f.ffff9d0e@mx.google.com> Message-ID: On Tue, Jul 10, 2012 at 8:03 PM, Sean Farley wrote: > On Tue, Jul 10, 2012 at 12:50 PM, Thomas Hisch wrote: >> However, compilation of the ex148 example in src/mat/examples/tests/ >> still failes! Maybe I did something wrong. diff shows no differenece >> between ex148.c from petsc-dev and petsc-3.3-p1. I tried to compile >> ex148 by typing 'make ex148' in src/mat/examples/tests/ - is this the >> correct way ? >> >> Again, here is the error message: >> >> mpicxx -o ex148.o -c -Wall -Wwrite-strings -Wno-strict-aliasing >> -Wno-unknown-pragmas -O3 -fPIC >> -I/home/thomas/local/src/petsc-dev/include >> -I/home/thomas/local/src/petsc-dev/arch-linux2-cxx-release/include >> -I/usr/lib/openmpi/include -I/usr/lib/openmpi/include/openmpi >> -D__INSDIR__=src/mat/examples/tests/ ex148.c >> ex148.c: In function ?PetscInt main(PetscInt, char**)?: >> ex148.c:45:37: error: ?InputTransformFFT? was not declared in this scope >> ex148.c:54:39: error: ?OutputTransformFFT? was not declared in this scope >> make: [ex148.o] Error 1 (ignored) >> mpicxx -Wall -Wwrite-strings -Wno-strict-aliasing -Wno-unknown-pragmas >> -O3 -o ex148 ex148.o >> -Wl,-rpath,/home/thomas/local/src/petsc-dev/arch-linux2-cxx-release/lib >> -L/home/thomas/local/src/petsc-dev/arch-linux2-cxx-release/lib >> -lpetsc -lX11 -Wl,-rpath,/home/thomas/local/src/petsc-dev/arch-linux2-cxx-release/lib >> -lfftw3_mpi -lfftw3 -lpthread >> -Wl,-rpath,/opt/intel/composer_xe_2011_sp1.11.339/mkl/lib/intel64 >> -L/opt/intel/composer_xe_2011_sp1.11.339/mkl/lib/intel64 -llapack >> -lblas -Wl,-rpath,/usr/lib/openmpi/lib -L/usr/lib/openmpi/lib >> -Wl,-rpath,/usr/lib/gcc/x86_64-linux-gnu/4.7 >> -L/usr/lib/gcc/x86_64-linux-gnu/4.7 >> -Wl,-rpath,/usr/lib/x86_64-linux-gnu -L/usr/lib/x86_64-linux-gnu >> -Wl,-rpath,/lib/x86_64-linux-gnu -L/lib/x86_64-linux-gnu >> -Wl,-rpath,/opt/intel/composer_xe_2011_sp1.11.339/compiler/lib/intel64 >> -L/opt/intel/composer_xe_2011_sp1.11.339/compiler/lib/intel64 >> -Wl,-rpath,/opt/intel/composer_xe_2011_sp1.10.319/compiler/lib/intel64 >> -L/opt/intel/composer_xe_2011_sp1.10.319/compiler/lib/intel64 >> -Wl,-rpath,/opt/intel/composer_xe_2011_sp1.10.319/ipp/lib/intel64 >> -L/opt/intel/composer_xe_2011_sp1.10.319/ipp/lib/intel64 >> -Wl,-rpath,/opt/intel/composer_xe_2011_sp1.10.319/mkl/lib/intel64 >> -L/opt/intel/composer_xe_2011_sp1.10.319/mkl/lib/intel64 >> -Wl,-rpath,/opt/intel/composer_xe_2011_sp1.10.319/tbb/lib/intel64/cc4.1.0_libc2.4_kernel2.6.16.21 >> -L/opt/intel/composer_xe_2011_sp1.10.319/tbb/lib/intel64/cc4.1.0_libc2.4_kernel2.6.16.21 >> -lmpi_f90 -lmpi_f77 -lgfortran -lm -lgfortran -lm -lgfortran -lm -lm >> -lquadmath -lm -lmpi_cxx -lstdc++ -ldl -lmpi -lopen-rte -lopen-pal >> -lnsl -lutil -lgcc_s -lpthread -ldl >> g++: error: ex148.o: No such file or directory >> make: [ex148] Error 1 (ignored) >> /bin/rm -f ex148.o > > Sorry for the bug, Thomas. It seems that a summer student last year caused this: > > http://petsc.cs.iit.edu/petsc/petsc-dev/rev/dc6053cf3c78 No problem. The examples mentioned by Hong work now! Thx Another question: I plan to use the FFT stuff from petsc4py - Are FFT matrices supported in petsc4py, does anyone know ? Regards Thomas > > I would fix this but need to meet with my advisor soon. 
Could someone > else look at this? From balay at mcs.anl.gov Tue Jul 10 13:45:06 2012 From: balay at mcs.anl.gov (Satish Balay) Date: Tue, 10 Jul 2012 13:45:06 -0500 (CDT) Subject: [petsc-users] random vector In-Reply-To: References: <6d08ec25-33d8-46ff-8db6-9175a3271ff0@zembox02.zaas.igi.nl> <3AB21CB2-9098-4EFB-BE1C-FDFA7E81CE08@mcs.anl.gov> Message-ID: On Mon, 9 Jul 2012, Barry Smith wrote: > > Satish, > > Please add the Fortran interface for petscrandomsettype_ by cloning one of the other set types etc. for petsc 3.3 > pushed http://petsc.cs.iit.edu/petsc/releases/petsc-3.3/rev/a82d8eaf28ab Satish From rlmackie862 at gmail.com Tue Jul 10 13:48:12 2012 From: rlmackie862 at gmail.com (Randall Mackie) Date: Tue, 10 Jul 2012 11:48:12 -0700 Subject: [petsc-users] Declaring struct to represent field for dof > 1 for DM in Fortran In-Reply-To: References: <4FFC066D.7060307@gmail.com> <4FFC524F.7050709@gmail.com> <4FFC72DC.9040500@gmail.com> Message-ID: On Jul 10, 2012, at 11:29 AM, Jed Brown wrote: > On Tue, Jul 10, 2012 at 1:22 PM, TAY wee-beng wrote: > Do you mean DMDAVecGetArrayDOFF90 ? I tried to compile but it gives the error during linking: > > 1>dm_test2d.obj : error LNK2019: unresolved external symbol DMDAVECGETARRAYDOFF90 referenced in function MAIN__ > > Matt was suggesting that someone should implement this function, it doesn't exist currently. > > Fortran makes this stuff really painful and we don't know how to make it do something reasonable without depending on your types. You can write your own DMDAVecGetArrayF90WithYourType(), but sadly, it requires some circus tricks to make work. (We can't put this in PETSc because we don't know what your field type is.) > > In my Fortran code, I simply use VecGetArrayF90 on a vector created with, for example, DMGetGlobalVector, and it works perfectly fine for DOF>1. You might want to look at this example: http://www.mcs.anl.gov/petsc/petsc-current/src/snes/examples/tutorials/ex5f90.F.html Note that for DOF> 1, the indexing is like vec(3,xs:xe,ys:ye,zs:ze) for example.. Randy -------------- next part -------------- An HTML attachment was scrubbed... URL: From zonexo at gmail.com Tue Jul 10 14:23:05 2012 From: zonexo at gmail.com (TAY wee-beng) Date: Tue, 10 Jul 2012 21:23:05 +0200 Subject: [petsc-users] Declaring struct to represent field for dof > 1 for DM in Fortran In-Reply-To: References: <4FFC066D.7060307@gmail.com> <4FFC524F.7050709@gmail.com> <4FFC72DC.9040500@gmail.com> Message-ID: <4FFC8119.7040005@gmail.com> On 10/7/2012 8:48 PM, Randall Mackie wrote: > > On Jul 10, 2012, at 11:29 AM, Jed Brown wrote: > >> On Tue, Jul 10, 2012 at 1:22 PM, TAY wee-beng > > wrote: >> >> Do you mean DMDAVecGetArrayDOFF90 ? I tried to compile but it >> gives the error during linking: >> >> 1>dm_test2d.obj : error LNK2019: unresolved external symbol >> DMDAVECGETARRAYDOFF90 referenced in function MAIN__ >> >> >> Matt was suggesting that someone should implement this function, it >> doesn't exist currently. >> >> Fortran makes this stuff really painful and we don't know how to make >> it do something reasonable without depending on your types. You can >> write your own DMDAVecGetArrayF90WithYourType(), but sadly, it >> requires some circus tricks to make work. (We can't put this in PETSc >> because we don't know what your field type is.) >> >> > > In my Fortran code, I simply use VecGetArrayF90 on a vector created > with, for example, DMGetGlobalVector, and it > works perfectly fine for DOF>1. 
You might want to look at this example: > > http://www.mcs.anl.gov/petsc/petsc-current/src/snes/examples/tutorials/ex5f90.F.html > > Note that for DOF> 1, the indexing is like vec(3,xs:xe,ys:ye,zs:ze) > for example.. > > > Randy Thanks Randy, I did this as well. But now I'm using DMDAVecGetArrayF90 since it's easier. -------------- next part -------------- An HTML attachment was scrubbed... URL: From ptbauman at gmail.com Tue Jul 10 15:29:17 2012 From: ptbauman at gmail.com (Paul T. Bauman) Date: Tue, 10 Jul 2012 15:29:17 -0500 Subject: [petsc-users] KSPSetOperators/sub-preconditioner type In-Reply-To: References: Message-ID: On Tue, Jul 10, 2012 at 1:34 PM, Paul T. Bauman wrote: > Greetings, > > This question is somewhat related to this thread: > http://lists.mcs.anl.gov/pipermail/petsc-users/2011-August/009583.html but > instead of resurrecting that thread, I decided to start a new one. > > Summary: In the libMesh wrapper of PETSc's KSP, a call is made to > KSPSetOperators; I gather this is in order to allow reusing the same > preconditioner, if desired. However, I've noticed that when I use the > following options, the sub-preconditioner type gets reset: > > -pc_type bjacobi -sub_pc_type ilu -sub_pc_factor_mat_solver_package superlu > > In particular, the first linear solve uses what I asked for, but in all > following linear solves, the petsc ilu solver is used instead of superlu. > If I remove the KSPSetOperators call, everything is peachy. > > My question: Does a call need to be made to PCSetFromOptions (or something > similar...) to keep the superlu ilu solver around or would that wipe out > the KSPSetOperators effect? Is this a PETSc bug? Other? > > This is with petsc-3.3-p1. > Sorry to reply to myself, but I read this and realized I was rather imprecise. The KSPSetOperators call I referred to is in the solve step. In particular, KSPSetFromOptions is called during initilization, only once, but KSPSetOperators is called each time solve is called. I was able to mimic my problem in the attached, slightly modified ksp/ex5.c, running with: mpiexec -np 6 ./ex5 -ksp_type gmres -pc_type bjacobi -sub_pc_type ilu -ksp_view -ksp_monitor -sub_pc_factor_mat_solver_package superlu -sub_pc_factor_levels 1 Is the correct solution just to call KSPSetFromOptions every time? Will that be noticeable at all? I hope this was less vague than my original message. Thanks again. Best, Paul -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: ex5.c Type: text/x-csrc Size: 12302 bytes Desc: not available URL: From bsmith at mcs.anl.gov Tue Jul 10 21:22:53 2012 From: bsmith at mcs.anl.gov (Barry Smith) Date: Tue, 10 Jul 2012 21:22:53 -0500 Subject: [petsc-users] Declaring struct to represent field for dof > 1 for DM in Fortran In-Reply-To: References: <4FFC066D.7060307@gmail.com> <4FFC524F.7050709@gmail.com> Message-ID: On Jul 10, 2012, at 11:05 AM, Matthew Knepley wrote: >> >> Matt > Ok, I'll try to use another mtd. > > Btw, if I declare: > > PetscScalar,pointer :: array2(:,:,:) > > with DMDACreate2d using dof = 2, > > call DMDAVecGetArrayF90(da,x_local,array2,ierr) > > access array2 .... > > call DMDAVecRestoreArrayF90(da,x_local,array2,ierr) > > How is the memory for "array2" allocated ? Is it allocated all the time, or only between the DMDAVecGetArrayF90 and DMDAVecRestoreArrayF90? It is ALWAYS allocated and the actual memory is inside the Vec object. 
The array2 is just a fancy pointer that allows accessing that underlying data. > > Also, can I "reuse" array2? For e.g., now for y_local: > > call DMDAVecGetArrayF90(da,y_local,array2,ierr) > > access array2 .... > > call DMDAVecRestoreArrayF90(da,y_local,array2,ierr) Yes > > The right thing to do here is to implement DMDAVecGetArratDOFF90(). You do NOT need DMDAVecGetArrayDOFF90(). This is one place where Fortran polymorphism is actually more user friendly than C's (shock of shocks). The code you provide above with dof = 2 should work fine. See src/dm/examples/tutorials/ex11f90.F (the second half of the example). So you are all set to write the code you wanted. Barry > > Matt > > Thank you! > >> >> I'm using Fortran and for testing, I use dof = 1 and write as: >> >> type field >> >> PetscScalar u (or real(8) :: u) >> >> end type field >> >> type(field), pointer :: field_u(:,:) >> >> When I tried to use : >> >> call DMDAVecGetArrayF90(da,x_local,field_u,ierr) >> >> I got the error : There is no matching specific subroutine for this generic subroutine call. [DMDAVECGETARRAYF90] >> >> The da, x_local has been defined with the specific DM routines. It worked if I use : >> >> PetscScalar,pointer :: array(:,:) and >> >> call DMDAVecGetArrayF90(da,x_local,array,ierr) >> >> May I know what did I do wrong? >> >> >> -- >> Yours sincerely, >> >> TAY wee-beng >> >> >> >> >> -- >> What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. >> -- Norbert Wiener > > > > > > -- > What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. > -- Norbert Wiener From w_ang_temp at 163.com Tue Jul 10 22:24:32 2012 From: w_ang_temp at 163.com (w_ang_temp) Date: Wed, 11 Jul 2012 11:24:32 +0800 (CST) Subject: [petsc-users] About DIVERGED_ITS In-Reply-To: <5C23C252-4CF5-493B-B189-1838355417A5@columbia.edu> References: <4e50cca5.8135.138622b73ae.Coremail.w_ang_temp@163.com> <25928393.81f6.138623965c2.Coremail.w_ang_temp@163.com> <84D88821-E153-4718-B2C8-0FD148060A50@columbia.edu> <10174c6c.2608e.138715fa3b9.Coremail.w_ang_temp@163.com> <5C23C252-4CF5-493B-B189-1838355417A5@columbia.edu> Message-ID: <6fcfc71c.7a94.138741113aa.Coremail.w_ang_temp@163.com> Thanks very much! I understand. I made a test. If the tolerance is large, PETSc says "converged" but the result is "incorrect" for my use due to the tolerance. While the tolerance is tight, when the rtol reaches a certain value, the result is not changing and it is not equal to the true value. And PETSc says "diverged". As Jed said, I need to either increase the number of iterations or use a better preconditioner. And I think the latter is a better way in my work. Thanks. >At 2012-07-11 01:45:28,"Mark F. Adams" wrote: >On Jul 10, 2012, at 10:51 AM, w_ang_temp wrote: > In my opinion, convergence in PETSc is decided by rtol, atol and dtol. The divergent hints just show >that >in the solving process it does not satisfy the rule. The "right" result may be different from the true result >at >the several back decimal places(I mean that they may be the same with four decimal places but may be >not > >the same with more decimal places). >No, If you have an rtol=1.e-30 and atol=1.e-300 then your solution will probably be correct to all decimal places even >though PETSc will say you "diverged". 
You can't be too sure about how many digits of accuracy you have with >simple linear algebra arguments unless your system is extremely well conditioned. >As Jed said if you are doing 10000 iterations are happy with the solution then your tolerances are probably too >tight. Look at the residual history and see if you are stagnating. > Is it right? >At 2012-07-08 00:28:54,"Mark F. Adams" wrote: >It sounds like your -ksp_rtol is too small. Experiment with looser tolerances until your solution is >not "correct" to see >how much accuracy you want. >On Jul 7, 2012, at 12:15 PM, w_ang_temp wrote: > Maybe it is a problem of mathematical concept. I compare the result with the true >result which is >computed and validated by other tools. I think it is right if I get the same result. >>? 2012-07-08 00:03:21?"Matthew Knepley" ??? >>On Sat, Jul 7, 2012 at 10:00 AM, w_ang_temp wrote: >>Hello, >> I am a little puzzled that I get the right result while the converged reason says that 'Linear solve >>did not >>converge due to DIVERGED_ITS iterations 10000'. This infomation means that the >>iterations >reach >the maximum >>iterations. But the result is right now. So why says 'did not converge'? Can I think >>that the result is >>right and >>can be used? >>Obviously, your definition of "right" is not the same as the convergence tolerances >>you are using. >> Matt >> Thanks. >> Jim -- >What most experimenters take for granted before they begin their experiments is infinitely more >interesting than any results to which their experiments lead. >-- Norbert Wiener -------------- next part -------------- An HTML attachment was scrubbed... URL: From C.Klaij at marin.nl Wed Jul 11 03:48:08 2012 From: C.Klaij at marin.nl (Klaij, Christiaan) Date: Wed, 11 Jul 2012 08:48:08 +0000 Subject: [petsc-users] boomeramg memory usage Message-ID: I'm solving a 3D Poisson equation on a 200x200x100 grid using CG and algebraic multigrid as preconditioner. I noticed that with boomeramg, the memory increases as the number of procs increases: 1 proc: 2.8G 2 procs: 2.8G 4 procs: 5.2G 8 procs: >12G (max reached, swapping) The memory usage is obtained from top. When using ml or gamg (all with PETSc defaults), the memory usage remains more or less constant. Something wrong with my install? Some Hypre option I should set? dr. ir. Christiaan Klaij CFD Researcher Research & Development E mailto:C.Klaij at marin.nl T +31 317 49 33 44 MARIN 2, Haagsteeg, P.O. Box 28, 6700 AA Wageningen, The Netherlands T +31 317 49 39 11, F +31 317 49 32 45, I www.marin.nl From zonexo at gmail.com Wed Jul 11 04:13:02 2012 From: zonexo at gmail.com (TAY wee-beng) Date: Wed, 11 Jul 2012 11:13:02 +0200 Subject: [petsc-users] [petsc-dev] second patch for DMDAVecGetArrayF90 In-Reply-To: <3FAA48E2-59D7-45E5-AAE1-A40326C5454D@mcs.anl.gov> References: <78CA146A-C72F-481F-A861-C51FB3B5EB6C@lsu.edu> <3FAA48E2-59D7-45E5-AAE1-A40326C5454D@mcs.anl.gov> Message-ID: <4FFD439E.80905@gmail.com> On 6/7/2012 5:48 PM, Barry Smith wrote: > Blaise, > > Thanks. > > Satish, > > If they look good could you apply them to 3.3 and dev? > > Thanks > > Barry Hi, I downloaded petsc-dev a few days ago and applied both patches using "patch -v ..." in both linux and windows vs2008 It worked great in linux However, when I compile and link in vs2008, it gives the error: / 1>Compiling manifest to resources... 1>Microsoft (R) Windows (R) Resource Compiler Version 6.1.6723.1 1>Copyright (C) Microsoft Corporation. All rights reserved. 1>Linking... 
1>libpetsc.lib(f90_cwrap.o) : error LNK2019: unresolved external symbol F90ARRAY4dCREATESCALAR referenced in function F90Array4dCreate 1>libpetsc.lib(f90_cwrap.o) : error LNK2019: unresolved external symbol F90ARRAY4dACCESSFORTRANADDR referenced in function F90Array4dAccess 1>libpetsc.lib(f90_cwrap.o) : error LNK2019: unresolved external symbol F90ARRAY4dACCESSINT referenced in function F90Array4dAccess 1>libpetsc.lib(f90_cwrap.o) : error LNK2019: unresolved external symbol F90ARRAY4dACCESSREAL referenced in function F90Array4dAccess 1>libpetsc.lib(f90_cwrap.o) : error LNK2019: unresolved external symbol F90ARRAY4dACCESSSCALAR referenced in function F90Array4dAccess 1>libpetsc.lib(f90_cwrap.o) : error LNK2019: unresolved external symbol F90ARRAY4dDESTROYSCALAR referenced in function F90Array4dDestroy 1>c:\obj_tmp\ibm2d_high_Re_staggered_old\Debug/ibm2d_high_Re_staggered.exe : fatal error LNK1120: 6 unresolved externals/ I wonder if it's fixed in the new petsc-dev. Thanks > > On Jul 6, 2012, at 6:30 AM, Blaise Bourdin wrote: > >> Hi, >> >> I have added the creation, destruction and accessor functions for 4d vectors in F90. The accessor was missing and needed for DMDAVecGetArrayF90 with a 3d DMDA and >1 dof. As far as I can test, ex11f90 in DM should now completely work with the intel compilers. >> >> Some of the functions are probably not used (F90Array4dAccessReal, F90Array4dAccessInt, F90Array4dAccessFortranAddr, for instance), but I added them anyway. Let me know if you want me to submit a patch without them. >> >> Regards, >> >> Blaise >> >> >> >> -- >> Department of Mathematics and Center for Computation & Technology >> Louisiana State University, Baton Rouge, LA 70803, USA >> Tel. +1 (225) 578 1612, Fax +1 (225) 578 4276 http://www.math.lsu.edu/~bourdin >> >> >> >> >> >> >> -------------- next part -------------- An HTML attachment was scrubbed... URL: From B.Sanderse at cwi.nl Wed Jul 11 05:38:05 2012 From: B.Sanderse at cwi.nl (Benjamin Sanderse) Date: Wed, 11 Jul 2012 12:38:05 +0200 Subject: [petsc-users] boomeramg memory usage In-Reply-To: References: Message-ID: Dear Christiaan, Perhaps we are running into the same problems. I am now also solving a Poisson equation of roughly the same size (100^3, 200^3), and for me the run time only scales well from 1 to 2 processors; with more processors the run time stays nearly constant. I posted about this on the PETSc mailing list yesterday; perhaps you could have a look at that. Yesterday it was suggested that this is a hardware limitation. I have no solution yet, so I am curious whether the PETSc team comes up with new tips. Kind regards, Benjamin Sanderse (ECN, CWI) On 11 Jul 2012, at 10:48, Klaij, Christiaan wrote: > I'm solving a 3D Poisson equation on a 200x200x100 grid using > CG and algebraic multigrid as preconditioner. I noticed that with > boomeramg, the memory increases as the number of procs increases: > > 1 proc: 2.8G > 2 procs: 2.8G > 4 procs: 5.2G > 8 procs: >12G (max reached, swapping) > > The memory usage is obtained from top. When using ml or gamg (all > with PETSc defaults), the memory usage remains more or less > constant. Something wrong with my install? Some Hypre option I > should set? > > > dr. ir. Christiaan Klaij > CFD Researcher > Research & Development > E mailto:C.Klaij at marin.nl > T +31 317 49 33 44 > > MARIN > 2, Haagsteeg, P.O.
Box 28, 6700 AA Wageningen, The Netherlands > T +31 317 49 39 11, F +31 317 49 32 45, I www.marin.nl > -- Ir. B. Sanderse Centrum Wiskunde en Informatica Science Park 123 1098 XG Amsterdam t: +31 20 592 4161 e: sanderse at cwi.nl -------------- next part -------------- An HTML attachment was scrubbed... URL: From alessio.cardillo at ct.infn.it Wed Jul 11 06:05:12 2012 From: alessio.cardillo at ct.infn.it (Alessio Cardillo) Date: Wed, 11 Jul 2012 13:05:12 +0200 Subject: [petsc-users] SLEPc mantain eigenvalue/eigenvector ordering compatible with problem matrix Message-ID: Hello to all, I am using SLEPc library to solve an eigenvalue problem using the lapack solver. the solver works fine (both eigenvalue and eigenvectors are correct) but the order in which solutions are given is different from the original one. My problem is: I need to retrieve the pair (eigenvalue + eigenvector) with the same order as the original problem matrix OR I need to get the problem matrix in the same order as the solution pairs. For the sake of clarity, below I report an example of what I mean: Cheers, Alessio COEFFICIENT MATRIX: 1 0 0 0 2 0 0 0 3 ##### EXPECTED SOLUTION ###### 1.000 0.000 0.000 0.000 1.000 0.000 0.000 0.000 1.000 with eigenvalues 1.000 2.000 3.000 ###### REAL SOLUTION ###### EIGENVECTOR MATRIX (row ordered) 0.000 0.000 1.000 0.000 1.000 0.000 1.000 0.000 0.000 EIGENVALUES 3.000 2.000 1.000 -- -------------------------------------------------------------------------------------------------------------------- Alessio Cardillo - PhD student in Physics Department of Condensed Matter Physics, University of Zaragoza and Institute for Biocomputation and Physics of Complex Systems (BIFI) University of Zaragoza Facultad de Ciencias, Universidad de Zaragoza, C/Pedro Cerbuna 12, 50009, Zaragoza, Spain Phone: +34 976 76 2455 Web : http://bifi.es/~cardillo/ Web 2: http://www.ct.infn.it/atp/ --------------------------------------------------------------------------------------------------------------------- From B.Sanderse at cwi.nl Wed Jul 11 06:12:09 2012 From: B.Sanderse at cwi.nl (Benjamin Sanderse) Date: Wed, 11 Jul 2012 13:12:09 +0200 Subject: [petsc-users] boomeramg memory usage In-Reply-To: References: Message-ID: <1473972B-0515-49E2-8D7E-E39EDFB7E487@cwi.nl> I am sorry for the previous message in Dutch.. it was to be addressed to Christiaan Klaij only. In any case, it might be that the results of Christiaan are related to my issues which I posted yesterday... are there more people that experience memory or scaling problems with BoomerAMG? Benjamin Op 11 jul 2012, om 10:48 heeft Klaij, Christiaan het volgende geschreven: > I'm solving a 3D Poisson equation on a 200x200x100 grid using > CG and algebraic multigrid as preconditioner. I noticed that with > boomeramg, the memory increases as the number of procs increases: > > 1 proc: 2.8G > 2 procs: 2.8G > 4 procs: 5.2G > 8 procs: >12G (max reached, swapping) > > The memory usage is obtained from top. When using ml or gamg (all > with PETSc defaults), the memory usage remains more or less > constant. Something wrong with my install? Some Hypre option I > should set? > > > dr. ir. Christiaan Klaij > CFD Researcher > Research & Development > E mailto:C.Klaij at marin.nl > T +31 317 49 33 44 > > MARIN > 2, Haagsteeg, P.O. Box 28, 6700 AA Wageningen, The Netherlands > T +31 317 49 39 11, F +31 317 49 32 45, I www.marin.nl > -- Ir. B. 
Sanderse Centrum Wiskunde en Informatica Science Park 123 1098 XG Amsterdam t: +31 20 592 4161 e: sanderse at cwi.nl -------------- next part -------------- An HTML attachment was scrubbed... URL: From jroman at dsic.upv.es Wed Jul 11 06:16:54 2012 From: jroman at dsic.upv.es (Jose E. Roman) Date: Wed, 11 Jul 2012 13:16:54 +0200 Subject: [petsc-users] SLEPc mantain eigenvalue/eigenvector ordering compatible with problem matrix In-Reply-To: References: Message-ID: El 11/07/2012, a las 13:05, Alessio Cardillo escribi?: > Hello to all, > > I am using SLEPc library to solve an eigenvalue problem using the lapack solver. > > the solver works fine (both eigenvalue and eigenvectors are correct) > but the order in which solutions are given is different from the > original one. > > My problem is: > > I need to retrieve the pair (eigenvalue + eigenvector) with the same > order as the original problem matrix OR I need to get the problem > matrix in the same order as the solution pairs. > > For the sake of clarity, below I report an example of what I mean: > > Cheers, > > Alessio > > > > COEFFICIENT MATRIX: > > 1 0 0 > 0 2 0 > 0 0 3 > > ##### EXPECTED SOLUTION ###### > > 1.000 0.000 0.000 > 0.000 1.000 0.000 > 0.000 0.000 1.000 > > with eigenvalues > > 1.000 2.000 3.000 > > ###### REAL SOLUTION ###### > > EIGENVECTOR MATRIX (row ordered) > > 0.000 0.000 1.000 > 0.000 1.000 0.000 > 1.000 0.000 0.000 > > EIGENVALUES > > 3.000 > 2.000 > 1.000 The ordering can be selected via EPSSetWhichEigenpairs, but the concept of "the same order as the original problem matrix" is not available because that only makes sense if the matrix is already in diagonal or triangular form. Maybe you want to call LAPACKtrevc directly. Jose From balay at mcs.anl.gov Wed Jul 11 08:40:10 2012 From: balay at mcs.anl.gov (Satish Balay) Date: Wed, 11 Jul 2012 08:40:10 -0500 (CDT) Subject: [petsc-users] [petsc-dev] second patch for DMDAVecGetArrayF90 In-Reply-To: <4FFD439E.80905@gmail.com> References: <78CA146A-C72F-481F-A861-C51FB3B5EB6C@lsu.edu> <3FAA48E2-59D7-45E5-AAE1-A40326C5454D@mcs.anl.gov> <4FFD439E.80905@gmail.com> Message-ID: The second patch has typos. Attaching the modified patch. If it works for you - I'll add it to petsc-3.3/petsc-dev thanks, Satish On Wed, 11 Jul 2012, TAY wee-beng wrote: > On 6/7/2012 5:48 PM, Barry Smith wrote: > > Blaise, > > > > Thanks. > > > > Satish, > > > > If they look good could you apply them to 3.3 and dev? > > > > Thanks > > > > Barry > > Hi, > > I downloaded petsc-dev a few days ago and applied both patches using "patch -v > ..." in both linux and windows vs2008 > > It worked great in linux > > However, when I compile and link in vs2008, it gives the error: > / > 1>Compiling manifest to resources... > 1>Microsoft (R) Windows (R) Resource Compiler Version 6.1.6723.1 > 1>Copyright (C) Microsoft Corporation. All rights reserved. > 1>Linking... 
> 1>libpetsc.lib(f90_cwrap.o) : error LNK2019: unresolved external symbol > F90ARRAY4dCREATESCALAR referenced in function F90Array4dCreate > 1>libpetsc.lib(f90_cwrap.o) : error LNK2019: unresolved external symbol > F90ARRAY4dACCESSFORTRANADDR referenced in function F90Array4dAccess > 1>libpetsc.lib(f90_cwrap.o) : error LNK2019: unresolved external symbol > F90ARRAY4dACCESSINT referenced in function F90Array4dAccess > 1>libpetsc.lib(f90_cwrap.o) : error LNK2019: unresolved external symbol > F90ARRAY4dACCESSREAL referenced in function F90Array4dAccess > 1>libpetsc.lib(f90_cwrap.o) : error LNK2019: unresolved external symbol > F90ARRAY4dACCESSSCALAR referenced in function F90Array4dAccess > 1>libpetsc.lib(f90_cwrap.o) : error LNK2019: unresolved external symbol > F90ARRAY4dDESTROYSCALAR referenced in function F90Array4dDestroy > 1>c:\obj_tmp\ibm2d_high_Re_staggered_old\Debug/ibm2d_high_Re_staggered.exe : > fatal error LNK1120: 6 unresolved externals/ > > I wonder if it's fixed in the new petsc-dev. > > Thanks > > > > On Jul 6, 2012, at 6:30 AM, Blaise Bourdin wrote: > > > > > Hi, > > > > > > I have added the creation, destruction and accessor functions for 4d > > > vectors in F90. The accessor was missing and needed for DMDAVecGetArrayF90 > > > with a 3d DMDA and >1 dof. As far as I can test, ex11f90 in DM should now > > > completely work with the intel compilers. > > > > > > Some of the functions are probably not used (F90Array4dAccessReal, > > > F90Array4dAccessInt, F90Array4dAccessFortranAddr, for instance), but I > > > added them anyway. Let me know if you want me to submit a patch without > > > them. > > > > > > Regards, > > > > > > Blaise > > > > > > > > > > > > -- > > > Department of Mathematics and Center for Computation & Technology > > > Louisiana State University, Baton Rouge, LA 70803, USA > > > Tel. 
+1 (225) 578 1612, Fax +1 (225) 578 4276 > > > http://www.math.lsu.edu/~bourdin > > > > > > > > > > > > > > > > > > > > > > > > -------------- next part -------------- diff -r c449016b5325 src/dm/impls/da/f90-custom/zda1f90.c --- a/src/dm/impls/da/f90-custom/zda1f90.c Tue Jul 03 13:22:19 2012 -0500 +++ b/src/dm/impls/da/f90-custom/zda1f90.c Fri Jul 06 15:15:48 2012 +0400 @@ -175,9 +175,7 @@ void PETSC_STDCALL dmdavecrestorearrayf9 /* F90Array4dAccess is not implemented, so the following call would fail */ - /* *ierr = F90Array4dAccess(a,PETSC_SCALAR,(void**)&fa PETSC_F90_2PTR_PARAM(ptrd)); - */ *ierr = VecRestoreArray(*v,&fa);if (*ierr) return; *ierr = F90Array4dDestroy(&a,PETSC_SCALAR PETSC_F90_2PTR_PARAM(ptrd)); } diff -r c449016b5325 src/sys/f90-src/f90_cwrap.c --- a/src/sys/f90-src/f90_cwrap.c Tue Jul 03 13:22:19 2012 -0500 +++ b/src/sys/f90-src/f90_cwrap.c Fri Jul 06 15:15:48 2012 +0400 @@ -307,17 +307,46 @@ PetscErrorCode F90Array3dDestroy(F90Arr } /*************************************************************************/ - #if defined(PETSC_HAVE_FORTRAN_CAPS) -#define f90array4dcreatescalar_ F90ARRAY4DCREATESCALAR -#define f90array4ddestroyscalar_ F90ARRAY4DDESTROYSCALAR +#define f90array4dcreatescalar_ F90ARRAY4DCREATESCALAR +#define f90array4daccessscalar_ F90ARRAY4DACCESSSCALAR +#define f90array4ddestroyscalar_ F90ARRAY4DDESTROYSCALAR +#define f90array4dcreatereal_ F90ARRAY4DCREATEREAL +#define f90array4daccessreal_ F90ARRAY4DACCESSREAL +#define f90array4ddestroyreal_ F90ARRAY4DDESTROYREAL +#define f90array4dcreateint_ F90ARRAY4DCREATEINT +#define f90array4daccessint_ F90ARRAY4DACCESSINT +#define f90array4ddestroyint_ F90ARRAY4DDESTROYINT +#define f90array4dcreatefortranaddr_ F90ARRAY4DCREATEFORTRANADDR +#define f90array4daccessfortranaddr_ F90ARRAY4DACCESSFORTRANADDR +#define f90array4ddestroyfortranaddr_ F90ARRAY4DDESTROYFORTRANADDR #elif !defined(PETSC_HAVE_FORTRAN_UNDERSCORE) #define f90array4dcreatescalar_ f90array4dcreatescalar +#define f90array4daccessscalar_ f90array4daccessscalar #define f90array4ddestroyscalar_ f90array4ddestroyscalar +#define f90array4dcreatereal_ f90array4dcreatereal +#define f90array4daccessreal_ f90array4daccessreal +#define f90array4ddestroyreal_ f90array4ddestroyreal +#define f90array4dcreateint_ f90array4dcreateint +#define f90array4daccessint_ f90array4daccessint +#define f90array4ddestroyint_ f90array4ddestroyint +#define f90array4dcreatefortranaddr_ f90array4dcreatefortranaddr +#define f90array4daccessfortranaddr_ f90array4daccessfortranaddr +#define f90array4ddestroyfortranaddr_ f90array4ddestroyfortranaddr #endif PETSC_EXTERN_C void PETSC_STDCALL f90array4dcreatescalar_(void *,PetscInt *,PetscInt *,PetscInt *,PetscInt *,PetscInt *,PetscInt *,PetscInt*,PetscInt*,F90Array4d * PETSC_F90_2PTR_PROTO_NOVAR); +PETSC_EXTERN_C void PETSC_STDCALL f90array4daccessscalar_(F90Array4d*,void** PETSC_F90_2PTR_PROTO_NOVAR); PETSC_EXTERN_C void PETSC_STDCALL f90array4ddestroyscalar_(F90Array4d *ptr PETSC_F90_2PTR_PROTO_NOVAR); +PETSC_EXTERN_C void PETSC_STDCALL f90array4dcreatereal_(void *,PetscInt *,PetscInt *,PetscInt *,PetscInt *,PetscInt *,PetscInt *,PetscInt*,PetscInt*,F90Array4d * PETSC_F90_2PTR_PROTO_NOVAR); +PETSC_EXTERN_C void PETSC_STDCALL f90array4daccessreal_(F90Array4d*,void** PETSC_F90_2PTR_PROTO_NOVAR); +PETSC_EXTERN_C void PETSC_STDCALL f90array4ddestroyreal_(F90Array4d *ptr PETSC_F90_2PTR_PROTO_NOVAR); +PETSC_EXTERN_C void PETSC_STDCALL f90array4dcreateint_(void *,PetscInt *,PetscInt *,PetscInt *,PetscInt *,PetscInt *,PetscInt 
*,PetscInt*,PetscInt*,F90Array4d * PETSC_F90_2PTR_PROTO_NOVAR); +PETSC_EXTERN_C void PETSC_STDCALL f90array4daccessint_(F90Array4d*,void** PETSC_F90_2PTR_PROTO_NOVAR); +PETSC_EXTERN_C void PETSC_STDCALL f90array4ddestroyint_(F90Array4d *ptr PETSC_F90_2PTR_PROTO_NOVAR); +PETSC_EXTERN_C void PETSC_STDCALL f90array4dcreatefortranaddr_(void *,PetscInt *,PetscInt *,PetscInt *,PetscInt *,PetscInt *,PetscInt *,PetscInt*,PetscInt*,F90Array4d * PETSC_F90_2PTR_PROTO_NOVAR); +PETSC_EXTERN_C void PETSC_STDCALL f90array4daccessfortranaddr_(F90Array4d*,void** PETSC_F90_2PTR_PROTO_NOVAR); +PETSC_EXTERN_C void PETSC_STDCALL f90array4ddestroyfortranaddr_(F90Array4d *ptr PETSC_F90_2PTR_PROTO_NOVAR); #undef __FUNCT__ #define __FUNCT__ "F90Array4dCreate" @@ -333,6 +362,25 @@ PetscErrorCode F90Array4dCreate(void *ar } #undef __FUNCT__ +#define __FUNCT__ "F90Array4dAccess" +PetscErrorCode F90Array4dAccess(F90Array4d *ptr,PetscDataType type,void **array PETSC_F90_2PTR_PROTO(ptrd)) +{ + PetscFunctionBegin; + if (type == PETSC_SCALAR) { + f90array4daccessscalar_(ptr,array PETSC_F90_2PTR_PARAM(ptrd)); + } else if (type == PETSC_REAL) { + f90array4daccessreal_(ptr,array PETSC_F90_2PTR_PARAM(ptrd)); + } else if (type == PETSC_INT) { + f90array4daccessint_(ptr,array PETSC_F90_2PTR_PARAM(ptrd)); + } else if (type == PETSC_FORTRANADDR) { + f90array4daccessfortranaddr_(ptr,array PETSC_F90_2PTR_PARAM(ptrd)); + } else { + SETERRQ1(PETSC_COMM_SELF,PETSC_ERR_SUP,"unsupported PetscDataType: %d",(PetscInt)type); + } + PetscFunctionReturn(0); +} + +#undef __FUNCT__ #define __FUNCT__ "F90Array4dDestroy" PetscErrorCode F90Array4dDestroy(F90Array4d *ptr,PetscDataType type PETSC_F90_2PTR_PROTO(ptrd)) { @@ -436,3 +484,31 @@ PETSC_EXTERN_C void PETSC_STDCALL f90arr } /*************************************************************************/ +#if defined(PETSC_HAVE_FORTRAN_CAPS) +#define f90array4dgetaddrscalar_ F90ARRAY4DGETADDRSCALAR +#define f90array4dgetaddrreal_ F90ARRAY4DGETADDRREAL +#define f90array4dgetaddrint_ F90ARRAY4DGETADDRINT +#define f90array4dgetaddrfortranaddr_ F90ARRAY4DGETADDRFORTRANADDR +#elif !defined(PETSC_HAVE_FORTRAN_UNDERSCORE) +#define f90array4dgetaddrscalar_ f90array4dgetaddrscalar +#define f90array4dgetaddrreal_ f90array4dgetaddrreal +#define f90array4dgetaddrint_ f90array4dgetaddrint +#define f90array4dgetaddrfortranaddr_ f90array4dgetaddrfortranaddr +#endif + +PETSC_EXTERN_C void PETSC_STDCALL f90array4dgetaddrscalar_(void *array, PetscFortranAddr *address) +{ + *address = (PetscFortranAddr)array; +} +PETSC_EXTERN_C void PETSC_STDCALL f90array4dgetaddrreal_(void *array, PetscFortranAddr *address) +{ + *address = (PetscFortranAddr)array; +} +PETSC_EXTERN_C void PETSC_STDCALL f90array4dgetaddrint_(void *array, PetscFortranAddr *address) +{ + *address = (PetscFortranAddr)array; +} +PETSC_EXTERN_C void PETSC_STDCALL f90array4dgetaddrfortranaddr_(void *array, PetscFortranAddr *address) +{ + *address = (PetscFortranAddr)array; +} diff -r c449016b5325 src/sys/f90-src/fsrc/f90_fwrap.F --- a/src/sys/f90-src/fsrc/f90_fwrap.F Tue Jul 03 13:22:19 2012 -0500 +++ b/src/sys/f90-src/fsrc/f90_fwrap.F Fri Jul 06 15:15:48 2012 +0400 @@ -322,23 +322,6 @@ end subroutine !!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!! 
- subroutine F90Array4dCreateScalar(array,start1,len1, & - & start2,len2,start3,len3,start4,len4,ptr) - implicit none -#include - PetscInt start1,len1 - PetscInt start2,len2 - PetscInt start3,len3 - PetscInt start4,len4 - PetscScalar, target :: & - & array(start1:start1+len1-1,start2:start2+len2-1, & - & start3:start3+len3-1,start4:start4+len4-1) - PetscScalar, pointer :: ptr(:,:,:,:) - - ptr => array - end subroutine - -!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!! subroutine F90Array3dAccessScalar(ptr,address) implicit none #include @@ -426,10 +409,158 @@ end subroutine !!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!! + subroutine F90Array4dCreateScalar(array,start1,len1, & + & start2,len2,start3,len3,start4,len4,ptr) + implicit none +#include + PetscInt start1,len1 + PetscInt start2,len2 + PetscInt start3,len3 + PetscInt start4,len4 + PetscScalar, target :: & + & array(start1:start1+len1-1,start2:start2+len2-1, & + & start3:start3+len3-1,start4:start4+len4-1) + PetscScalar, pointer :: ptr(:,:,:,:) + + ptr => array + end subroutine + + subroutine F90Array4dCreateReal(array,start1,len1, & + & start2,len2,start3,len3,start4,len4,ptr) + implicit none +#include + PetscInt start1,len1 + PetscInt start2,len2 + PetscInt start3,len3 + PetscInt start4,len4 + PetscReal, target :: & + & array(start1:start1+len1-1,start2:start2+len2-1, & + & start3:start3+len3-1,start4:start4+len4-1) + PetscReal, pointer :: ptr(:,:,:,:) + + ptr => array + end subroutine + + subroutine F90Array4dCreateInt(array,start1,len1, & + & start2,len2,start3,len3,start4,len4,ptr) + implicit none +#include + PetscInt start1,len1 + PetscInt start2,len2 + PetscInt start3,len3 + PetscInt start4,len4 + PetscInt, target :: & + & array(start1:start1+len1-1,start2:start2+len2-1, & + & start3:start3+len3-1,start4:start4+len4-1) + PetscInt, pointer :: ptr(:,:,:,:) + + ptr => array + end subroutine + + subroutine F90Array4dCreateFortranAddr(array,start1,len1, & + & start2,len2,start3,len3,start4,len4,ptr) + implicit none +#include + PetscInt start1,len1 + PetscInt start2,len2 + PetscInt start3,len3 + PetscInt start4,len4 + PetscFortranAddr, target :: & + & array(start1:start1+len1-1,start2:start2+len2-1, & + & start3:start3+len3-1,start4:start4+len4-1) + PetscFortranAddr, pointer :: ptr(:,:,:,:) + + ptr => array + end subroutine + + subroutine F90Array4dAccessScalar(ptr,address) + implicit none +#include + PetscScalar, pointer :: ptr(:,:,:,:) + PetscFortranAddr address + PetscInt start1,start2,start3,start4 + + start1 = lbound(ptr,1) + start2 = lbound(ptr,2) + start3 = lbound(ptr,3) + start4 = lbound(ptr,4) + call F90Array4dGetAddrScalar(ptr(start1,start2,start3,start4), & + & address) + end subroutine + + subroutine F90Array4dAccessReal(ptr,address) + implicit none +#include + PetscReal, pointer :: ptr(:,:,:,:) + PetscFortranAddr address + PetscInt start1,start2,start3,start4 + + start1 = lbound(ptr,1) + start2 = lbound(ptr,2) + start3 = lbound(ptr,3) + start4 = lbound(ptr,4) + call F90Array4dGetAddrReal(ptr(start1,start2,start3,start4), & + & address) + end subroutine + + subroutine F90Array4dAccessInt(ptr,address) + implicit none +#include + PetscInt, pointer :: ptr(:,:,:,:) + PetscFortranAddr address + PetscInt start1,start2,start3,start4 + + start1 = lbound(ptr,1) + start2 = lbound(ptr,2) + start3 = lbound(ptr,3) + start4 = lbound(ptr,4) + call F90Array4dGetAddrInt(ptr(start1,start2,start3,start4), & + & address) + end subroutine + + subroutine F90Array4dAccessFortranAddr(ptr,address) + implicit 
none +#include + PetscScalar, pointer :: ptr(:,:,:,:) + PetscFortranAddr address + PetscFortranAddr start1,start2,start3,start4 + + start1 = lbound(ptr,1) + start2 = lbound(ptr,2) + start3 = lbound(ptr,3) + start4 = lbound(ptr,4) + call F90Array4dGetAddrFortranAddr(ptr(start1,start2,start3, & + & start4),address) + end subroutine + subroutine F90Array4dDestroyScalar(ptr) implicit none #include - PetscScalar, pointer :: ptr(:,:,:) + PetscScalar, pointer :: ptr(:,:,:,:) + + nullify(ptr) + end subroutine + + subroutine F90Array4dDestroyReal(ptr) + implicit none +#include + PetscReal, pointer :: ptr(:,:,:,:) + + nullify(ptr) + end subroutine + + subroutine F90Array4dDestroyInt(ptr) + implicit none +#include + PetscInt, pointer :: ptr(:,:,:,:) + + nullify(ptr) + end subroutine + + subroutine F90Array4dDestroyFortranAddr(ptr) + implicit none +#include + PetscFortranAddr, pointer :: ptr(:,:,:,:) nullify(ptr) end subroutine From jedbrown at mcs.anl.gov Wed Jul 11 08:47:19 2012 From: jedbrown at mcs.anl.gov (Jed Brown) Date: Wed, 11 Jul 2012 08:47:19 -0500 Subject: [petsc-users] KSPSetOperators/sub-preconditioner type In-Reply-To: References: Message-ID: On Tue, Jul 10, 2012 at 3:29 PM, Paul T. Bauman wrote: > On Tue, Jul 10, 2012 at 1:34 PM, Paul T. Bauman wrote: > >> Greetings, >> >> This question is somewhat related to this thread: >> http://lists.mcs.anl.gov/pipermail/petsc-users/2011-August/009583.html but >> instead of resurrecting that thread, I decided to start a new one. >> >> Summary: In the libMesh wrapper of PETSc's KSP, a call is made to >> KSPSetOperators; I gather this is in order to allow reusing the same >> preconditioner, if desired. However, I've noticed that when I use the >> following options, the sub-preconditioner type gets reset: >> >> -pc_type bjacobi -sub_pc_type ilu -sub_pc_factor_mat_solver_package >> superlu >> >> In particular, the first linear solve uses what I asked for, but in all >> following linear solves, the petsc ilu solver is used instead of superlu. >> If I remove the KSPSetOperators call, everything is peachy. >> >> My question: Does a call need to be made to PCSetFromOptions (or >> something similar...) to keep the superlu ilu solver around or would that >> wipe out the KSPSetOperators effect? Is this a PETSc bug? Other? >> > Thanks for your test case. Yes, this is a PETSc bug associated with handling of DIFFERENT_NONZERO_PATTERN. Fixed here and will be in the next patch release. http://petsc.cs.iit.edu/petsc/releases/petsc-3.3/rev/09d701958d66 We recommend that you only call KSPSetFromOptions (or PCSetFromOptions) once. > >> This is with petsc-3.3-p1. >> > > Sorry to reply to myself, but I read this and realized I was rather > imprecise. > > The KSPSetOperators call I referred to is in the solve step. In > particular, KSPSetFromOptions is called during initilization, only once, > but KSPSetOperators is called each time solve is called. I was able to > mimic my problem in the attached, slightly modified ksp/ex5.c, running with: > > mpiexec -np 6 ./ex5 -ksp_type gmres -pc_type bjacobi -sub_pc_type ilu > -ksp_view -ksp_monitor -sub_pc_factor_mat_solver_package superlu > -sub_pc_factor_levels 1 > > Is the correct solution just to call KSPSetFromOptions every time? Will > that be noticeable at all? I hope this was less vague than my original > message. Thanks again. > > Best, > > Paul > -------------- next part -------------- An HTML attachment was scrubbed... 
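The recommendation above amounts to something like the following fragment (the helper names are invented, and it assumes the matrix keeps the same nonzero pattern between solves; the MatStructure values are the petsc-3.3 ones discussed in this thread): KSPSetFromOptions() is called exactly once, and each later solve only resets the operators.

  #include <petscksp.h>

  /* One-time setup: read -pc_type/-sub_pc_type/... from the options database once. */
  PetscErrorCode SetupSolver(MPI_Comm comm, KSP *ksp)
  {
    PetscErrorCode ierr;
    ierr = KSPCreate(comm, ksp);CHKERRQ(ierr);
    ierr = KSPSetFromOptions(*ksp);CHKERRQ(ierr);   /* once, not per solve */
    return 0;
  }

  /* Per-step solve: A has new values but (assumed) the same nonzero pattern,
     so SAME_NONZERO_PATTERN lets the sub-preconditioners keep their symbolic
     factorization; SAME_PRECONDITIONER would reuse the old factors outright. */
  PetscErrorCode SolveStep(KSP ksp, Mat A, Vec b, Vec x)
  {
    PetscErrorCode ierr;
    ierr = KSPSetOperators(ksp, A, A, SAME_NONZERO_PATTERN);CHKERRQ(ierr);
    ierr = KSPSolve(ksp, b, x);CHKERRQ(ierr);
    return 0;
  }

If the nonzero pattern really does change between solves, DIFFERENT_NONZERO_PATTERN remains the appropriate flag; the fix linked above addresses the sub-preconditioner reset seen in that case.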
URL: From paeanball at gmail.com Wed Jul 11 08:48:02 2012 From: paeanball at gmail.com (Bao Kai) Date: Wed, 11 Jul 2012 16:48:02 +0300 Subject: [petsc-users] boomeramg memory usage Message-ID: Hi, I encountered the similar problem about a couple of weeks ago. I tried to use boomerAMG to accelerate the convergence, while it seems that the memory required increases dramatically with the size of the problem. As a result, boomerAMG seems not usable. I did not do the tests on relation between the size of the memory required and the No. of MPI tasks used. Best Regards, Kai > > Date: Wed, 11 Jul 2012 08:48:08 +0000 > From: "Klaij, Christiaan" > To: "petsc-users at mcs.anl.gov" > Subject: [petsc-users] boomeramg memory usage > Message-ID: > > Content-Type: text/plain; charset="us-ascii" > > I'm solving a 3D Poisson equation on a 200x200x100 grid using > CG and algebraic multigrid as preconditioner. I noticed that with > boomeramg, the memory increases as the number of procs increases: > > 1 proc: 2.8G > 2 procs: 2.8G > 4 procs: 5.2G > 8 procs: >12G (max reached, swapping) > > The memory usage is obtained from top. When using ml or gamg (all > with PETSc defaults), the memory usage remains more or less > constant. Something wrong with my install? Some Hypre option I > should set? > > > dr. ir. Christiaan Klaij > CFD Researcher > Research & Development > E mailto:C.Klaij at marin.nl > T +31 317 49 33 44 > > MARIN > 2, Haagsteeg, P.O. Box 28, 6700 AA Wageningen, The Netherlands > T +31 317 49 39 11, F +31 317 49 32 45, I www.marin.nl > > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From w_ang_temp at 163.com Wed Jul 11 08:51:03 2012 From: w_ang_temp at 163.com (w_ang_temp) Date: Wed, 11 Jul 2012 21:51:03 +0800 (CST) Subject: [petsc-users] Functions for judging the properties of the matrix Message-ID: <5daed011.ed92.138764ea964.Coremail.w_ang_temp@163.com> Hello, Is there function in PETSc for judging the properties of the matrix A ? Such as positive definitiveness and conditional number. I know that there are several functions begin with MatIs+, but it has limited amounts. Thanks. Jim -------------- next part -------------- An HTML attachment was scrubbed... URL: From jedbrown at mcs.anl.gov Wed Jul 11 09:19:51 2012 From: jedbrown at mcs.anl.gov (Jed Brown) Date: Wed, 11 Jul 2012 09:19:51 -0500 Subject: [petsc-users] Functions for judging the properties of the matrix In-Reply-To: <5daed011.ed92.138764ea964.Coremail.w_ang_temp@163.com> References: <5daed011.ed92.138764ea964.Coremail.w_ang_temp@163.com> Message-ID: On Wed, Jul 11, 2012 at 8:51 AM, w_ang_temp wrote: > > Hello, > > Is there function in PETSc for judging the properties of the matrix A > ? Such as positive definitiveness and > > conditional number. I know that there are several functions begin with > MatIs+, but it has limited amounts. > The Ritz values are estimates of the eigenvalues that you can obtain using KSPSolve() with -ksp_compute_eigenvalues or -ksp_plot_eigenvalues. Note that these are estimates for the _preconditioned_ operator, so you should use -pc_type none if you want estimates for the original operator. http://www.mcs.anl.gov/petsc/documentation/faq.html#conditionnumber -------------- next part -------------- An HTML attachment was scrubbed... 
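A rough programmatic counterpart of those options, following the recipe in the linked FAQ (the helper function name is invented, and the result is only a Ritz-value based estimate of the condition number, not an exact value):

  #include <petscksp.h>

  /* Hypothetical helper: estimate cond(A) = sigma_max/sigma_min from a GMRES run
     with no preconditioning, mirroring -pc_type none -ksp_monitor_singular_value. */
  PetscErrorCode EstimateConditionNumber(Mat A, Vec b, PetscReal *cond)
  {
    KSP            ksp;
    PC             pc;
    Vec            x;
    PetscReal      emax, emin;
    PetscErrorCode ierr;

    ierr = VecDuplicate(b, &x);CHKERRQ(ierr);
    ierr = KSPCreate(PETSC_COMM_WORLD, &ksp);CHKERRQ(ierr);
    ierr = KSPSetOperators(ksp, A, A, SAME_NONZERO_PATTERN);CHKERRQ(ierr);
    ierr = KSPSetType(ksp, KSPGMRES);CHKERRQ(ierr);
    ierr = KSPGetPC(ksp, &pc);CHKERRQ(ierr);
    ierr = PCSetType(pc, PCNONE);CHKERRQ(ierr);            /* estimate for the original operator */
    ierr = KSPSetComputeSingularValues(ksp, PETSC_TRUE);CHKERRQ(ierr);
    /* A generous restart, e.g. -ksp_gmres_restart 1000, makes the estimate less crude. */
    ierr = KSPSolve(ksp, b, x);CHKERRQ(ierr);
    ierr = KSPComputeExtremeSingularValues(ksp, &emax, &emin);CHKERRQ(ierr);
    *cond = (emin > 0.0) ? emax/emin : PETSC_MAX_REAL;     /* estimate only */
    ierr = KSPDestroy(&ksp);CHKERRQ(ierr);
    ierr = VecDestroy(&x);CHKERRQ(ierr);
    return 0;
  }

Leaving a real preconditioner in place instead of PCNONE gives the analogous estimate for the preconditioned operator.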
URL: From zonexo at gmail.com Wed Jul 11 10:05:58 2012 From: zonexo at gmail.com (TAY wee-beng) Date: Wed, 11 Jul 2012 17:05:58 +0200 Subject: [petsc-users] [petsc-dev] second patch for DMDAVecGetArrayF90 In-Reply-To: References: <78CA146A-C72F-481F-A861-C51FB3B5EB6C@lsu.edu> <3FAA48E2-59D7-45E5-AAE1-A40326C5454D@mcs.anl.gov> <4FFD439E.80905@gmail.com> Message-ID: <4FFD9656.5030707@gmail.com> On 11/7/2012 3:40 PM, Satish Balay wrote: > The second patch has typos. Attaching the modified patch. If it works for > you - I'll add it to petsc-3.3/petsc-dev > > thanks, > Satish Hi, How should I apply the patch? I download a new petsc-dev and used: /patch -p1 < DMDAVecRestoreArrayF90-2.1.patch / It says: /(Stripping trailing CRs from patch.) patching file src/dm/impls/da/f90-custom/zda1f90.c Hunk #1 FAILED at 175. 1 out of 1 hunk FAILED -- saving rejects to file src/dm/impls/da/f90-custom/zda1f90.c.rej (Stripping trailing CRs from patch.) patching file src/sys/f90-src/f90_cwrap.c Hunk #1 FAILED at 307. Hunk #2 FAILED at 333. Hunk #3 FAILED at 436. 3 out of 3 hunks FAILED -- saving rejects to file src/sys/f90-src/f90_cwrap.c.rej (Stripping trailing CRs from patch.) patching file src/sys/f90-src/fsrc/f90_fwrap.F Hunk #1 FAILED at 322. Hunk #2 FAILED at 426. 2 out of 2 hunks FAILED -- saving rejects to file src/sys/f90-src/fsrc/f90_fwrap.F.rej/ I also used : /patch -p1 DMDAVecRestoreArrayF90-2.patch patch -p1 DMDAVecRestoreArrayF90-2.1.patch $ patch -p1 < DMDAVecRestoreArrayF90-2.patch (Stripping trailing CRs from patch.) patching file src/dm/impls/da/f90-custom/zda1f90.c (Stripping trailing CRs from patch.) patching file src/sys/f90-src/f90_cwrap.c (Stripping trailing CRs from patch.) patching file src/sys/f90-src/fsrc/f90_fwrap.F / / $ patch -p1 < DMDAVecRestoreArrayF90-2.1.patch (Stripping trailing CRs from patch.) patching file src/dm/impls/da/f90-custom/zda1f90.c Hunk #1 FAILED at 175. 1 out of 1 hunk FAILED -- saving rejects to file src/dm/impls/da/f90-custom/zda1f90.c.rej (Stripping trailing CRs from patch.) patching file src/sys/f90-src/f90_cwrap.c Hunk #1 FAILED at 307. Hunk #2 FAILED at 333. Hunk #3 FAILED at 436. 3 out of 3 hunks FAILED -- saving rejects to file src/sys/f90-src/f90_cwrap.c.rej (Stripping trailing CRs from patch.) patching file src/sys/f90-src/fsrc/f90_fwrap.F Hunk #1 FAILED at 322. Hunk #2 FAILED at 426. 2 out of 2 hunks FAILED -- saving rejects to file src/sys/f90-src/fsrc/f90_fwrap.F.rej/ Lastly, using patch -p1 < DMDAVecRestoreArrayF90.patch gives: / $ patch -p1 < DMDAVecRestoreArrayF90.patch (Stripping trailing CRs from patch.) patching file src/dm/impls/da/f90-custom/zda1f90.c Reversed (or previously applied) patch detected! Assume -R? [n] n Apply anyway? [n] n Skipping patch. 4 out of 4 hunks ignored -- saving rejects to file src/dm/impls/da/f90-custom/zda1f90.c.rej/ > > On Wed, 11 Jul 2012, TAY wee-beng wrote: > >> On 6/7/2012 5:48 PM, Barry Smith wrote: >>> Blaise, >>> >>> Thanks. >>> >>> Satish, >>> >>> If they look good could you apply them to 3.3 and dev? >>> >>> Thanks >>> >>> Barry >> Hi, >> >> I downloaded petsc-dev a few days ago and applied both patches using "patch -v >> ..." in both linux and windows vs2008 >> >> It worked great in linux >> >> However, when I compile and link in vs2008, it gives the error: >> / >> 1>Compiling manifest to resources... >> 1>Microsoft (R) Windows (R) Resource Compiler Version 6.1.6723.1 >> 1>Copyright (C) Microsoft Corporation. All rights reserved. >> 1>Linking... 
>> 1>libpetsc.lib(f90_cwrap.o) : error LNK2019: unresolved external symbol >> F90ARRAY4dCREATESCALAR referenced in function F90Array4dCreate >> 1>libpetsc.lib(f90_cwrap.o) : error LNK2019: unresolved external symbol >> F90ARRAY4dACCESSFORTRANADDR referenced in function F90Array4dAccess >> 1>libpetsc.lib(f90_cwrap.o) : error LNK2019: unresolved external symbol >> F90ARRAY4dACCESSINT referenced in function F90Array4dAccess >> 1>libpetsc.lib(f90_cwrap.o) : error LNK2019: unresolved external symbol >> F90ARRAY4dACCESSREAL referenced in function F90Array4dAccess >> 1>libpetsc.lib(f90_cwrap.o) : error LNK2019: unresolved external symbol >> F90ARRAY4dACCESSSCALAR referenced in function F90Array4dAccess >> 1>libpetsc.lib(f90_cwrap.o) : error LNK2019: unresolved external symbol >> F90ARRAY4dDESTROYSCALAR referenced in function F90Array4dDestroy >> 1>c:\obj_tmp\ibm2d_high_Re_staggered_old\Debug/ibm2d_high_Re_staggered.exe : >> fatal error LNK1120: 6 unresolved externals/ >> >> I wonder if it's fixed in the new petsc-dev. >> >> Thanks >>> On Jul 6, 2012, at 6:30 AM, Blaise Bourdin wrote: >>> >>>> Hi, >>>> >>>> I have added the creation, destruction and accessor functions for 4d >>>> vectors in F90. The accessor was missing and needed for DMDAVecGetArrayF90 >>>> with a 3d DMDA and>1 dof. As far as I can test, ex11f90 in DM should now >>>> completely work with the intel compilers. >>>> >>>> Some of the functions are probably not used (F90Array4dAccessReal, >>>> F90Array4dAccessInt, F90Array4dAccessFortranAddr, for instance), but I >>>> added them anyway. Let me know if you want me to submit a patch without >>>> them. >>>> >>>> Regards, >>>> >>>> Blaise >>>> >>>> >>>> >>>> -- >>>> Department of Mathematics and Center for Computation& Technology >>>> Louisiana State University, Baton Rouge, LA 70803, USA >>>> Tel. +1 (225) 578 1612, Fax +1 (225) 578 4276 >>>> http://www.math.lsu.edu/~bourdin >>>> >>>> >>>> >>>> >>>> >>>> >>>> >> >> -------------- next part -------------- An HTML attachment was scrubbed... URL: From ptbauman at gmail.com Wed Jul 11 10:07:49 2012 From: ptbauman at gmail.com (Paul T. Bauman) Date: Wed, 11 Jul 2012 10:07:49 -0500 Subject: [petsc-users] KSPSetOperators/sub-preconditioner type In-Reply-To: References: Message-ID: On Wed, Jul 11, 2012 at 8:47 AM, Jed Brown wrote: > > Thanks for your test case. Yes, this is a PETSc bug associated with > handling of DIFFERENT_NONZERO_PATTERN. Fixed here and will be in the next > patch release. > > http://petsc.cs.iit.edu/petsc/releases/petsc-3.3/rev/09d701958d66 > Awesome, thanks! -------------- next part -------------- An HTML attachment was scrubbed... URL: From balay at mcs.anl.gov Wed Jul 11 10:12:29 2012 From: balay at mcs.anl.gov (Satish Balay) Date: Wed, 11 Jul 2012 10:12:29 -0500 (CDT) Subject: [petsc-users] [petsc-dev] second patch for DMDAVecGetArrayF90 In-Reply-To: <4FFD9656.5030707@gmail.com> References: <78CA146A-C72F-481F-A861-C51FB3B5EB6C@lsu.edu> <3FAA48E2-59D7-45E5-AAE1-A40326C5454D@mcs.anl.gov> <4FFD439E.80905@gmail.com> <4FFD9656.5030707@gmail.com> Message-ID: You would revert the old one - and apply the new one. i.e patch -Np1 -R < DMDAVecRestoreArrayF90-2.patch patch -Np1 < DMDAVecRestoreArrayF90-2.1.patch Satish On Wed, 11 Jul 2012, TAY wee-beng wrote: > > > On 11/7/2012 3:40 PM, Satish Balay wrote: > > The second patch has typos. Attaching the modified patch. If it works for > > you - I'll add it to petsc-3.3/petsc-dev > > > > thanks, > > Satish > > Hi, > > How should I apply the patch? 
I download a new petsc-dev and used: > > /patch -p1 < DMDAVecRestoreArrayF90-2.1.patch > / > It says: > > /(Stripping trailing CRs from patch.) > patching file src/dm/impls/da/f90-custom/zda1f90.c > Hunk #1 FAILED at 175. > 1 out of 1 hunk FAILED -- saving rejects to file > src/dm/impls/da/f90-custom/zda1f90.c.rej > (Stripping trailing CRs from patch.) > patching file src/sys/f90-src/f90_cwrap.c > Hunk #1 FAILED at 307. > Hunk #2 FAILED at 333. > Hunk #3 FAILED at 436. > 3 out of 3 hunks FAILED -- saving rejects to file > src/sys/f90-src/f90_cwrap.c.rej > (Stripping trailing CRs from patch.) > patching file src/sys/f90-src/fsrc/f90_fwrap.F > Hunk #1 FAILED at 322. > Hunk #2 FAILED at 426. > 2 out of 2 hunks FAILED -- saving rejects to file > src/sys/f90-src/fsrc/f90_fwrap.F.rej/ > > I also used : > > /patch -p1 DMDAVecRestoreArrayF90-2.patch > patch -p1 DMDAVecRestoreArrayF90-2.1.patch > > $ patch -p1 < DMDAVecRestoreArrayF90-2.patch > (Stripping trailing CRs from patch.) > patching file src/dm/impls/da/f90-custom/zda1f90.c > (Stripping trailing CRs from patch.) > patching file src/sys/f90-src/f90_cwrap.c > (Stripping trailing CRs from patch.) > patching file src/sys/f90-src/fsrc/f90_fwrap.F > / > / > $ patch -p1 < DMDAVecRestoreArrayF90-2.1.patch > (Stripping trailing CRs from patch.) > patching file src/dm/impls/da/f90-custom/zda1f90.c > Hunk #1 FAILED at 175. > 1 out of 1 hunk FAILED -- saving rejects to file > src/dm/impls/da/f90-custom/zda1f90.c.rej > (Stripping trailing CRs from patch.) > patching file src/sys/f90-src/f90_cwrap.c > Hunk #1 FAILED at 307. > Hunk #2 FAILED at 333. > Hunk #3 FAILED at 436. > 3 out of 3 hunks FAILED -- saving rejects to file > src/sys/f90-src/f90_cwrap.c.rej > (Stripping trailing CRs from patch.) > patching file src/sys/f90-src/fsrc/f90_fwrap.F > Hunk #1 FAILED at 322. > Hunk #2 FAILED at 426. > 2 out of 2 hunks FAILED -- saving rejects to file > src/sys/f90-src/fsrc/f90_fwrap.F.rej/ > > Lastly, using patch -p1 < DMDAVecRestoreArrayF90.patch gives: > / > $ patch -p1 < DMDAVecRestoreArrayF90.patch > (Stripping trailing CRs from patch.) > patching file src/dm/impls/da/f90-custom/zda1f90.c > Reversed (or previously applied) patch detected! Assume -R? [n] n > Apply anyway? [n] n > Skipping patch. > 4 out of 4 hunks ignored -- saving rejects to file > src/dm/impls/da/f90-custom/zda1f90.c.rej/ > > > > On Wed, 11 Jul 2012, TAY wee-beng wrote: > > > > > On 6/7/2012 5:48 PM, Barry Smith wrote: > > > > Blaise, > > > > > > > > Thanks. > > > > > > > > Satish, > > > > > > > > If they look good could you apply them to 3.3 and dev? > > > > > > > > Thanks > > > > > > > > Barry > > > Hi, > > > > > > I downloaded petsc-dev a few days ago and applied both patches using > > > "patch -v > > > ..." in both linux and windows vs2008 > > > > > > It worked great in linux > > > > > > However, when I compile and link in vs2008, it gives the error: > > > / > > > 1>Compiling manifest to resources... > > > 1>Microsoft (R) Windows (R) Resource Compiler Version 6.1.6723.1 > > > 1>Copyright (C) Microsoft Corporation. All rights reserved. > > > 1>Linking... 
> > > 1>libpetsc.lib(f90_cwrap.o) : error LNK2019: unresolved external symbol > > > F90ARRAY4dCREATESCALAR referenced in function F90Array4dCreate > > > 1>libpetsc.lib(f90_cwrap.o) : error LNK2019: unresolved external symbol > > > F90ARRAY4dACCESSFORTRANADDR referenced in function F90Array4dAccess > > > 1>libpetsc.lib(f90_cwrap.o) : error LNK2019: unresolved external symbol > > > F90ARRAY4dACCESSINT referenced in function F90Array4dAccess > > > 1>libpetsc.lib(f90_cwrap.o) : error LNK2019: unresolved external symbol > > > F90ARRAY4dACCESSREAL referenced in function F90Array4dAccess > > > 1>libpetsc.lib(f90_cwrap.o) : error LNK2019: unresolved external symbol > > > F90ARRAY4dACCESSSCALAR referenced in function F90Array4dAccess > > > 1>libpetsc.lib(f90_cwrap.o) : error LNK2019: unresolved external symbol > > > F90ARRAY4dDESTROYSCALAR referenced in function F90Array4dDestroy > > > 1>c:\obj_tmp\ibm2d_high_Re_staggered_old\Debug/ibm2d_high_Re_staggered.exe > > > : > > > fatal error LNK1120: 6 unresolved externals/ > > > > > > I wonder if it's fixed in the new petsc-dev. > > > > > > Thanks > > > > On Jul 6, 2012, at 6:30 AM, Blaise Bourdin wrote: > > > > > > > > > Hi, > > > > > > > > > > I have added the creation, destruction and accessor functions for 4d > > > > > vectors in F90. The accessor was missing and needed for > > > > > DMDAVecGetArrayF90 > > > > > with a 3d DMDA and>1 dof. As far as I can test, ex11f90 in DM should > > > > > now > > > > > completely work with the intel compilers. > > > > > > > > > > Some of the functions are probably not used (F90Array4dAccessReal, > > > > > F90Array4dAccessInt, F90Array4dAccessFortranAddr, for instance), but I > > > > > added them anyway. Let me know if you want me to submit a patch > > > > > without > > > > > them. > > > > > > > > > > Regards, > > > > > > > > > > Blaise > > > > > > > > > > > > > > > > > > > > -- > > > > > Department of Mathematics and Center for Computation& Technology > > > > > Louisiana State University, Baton Rouge, LA 70803, USA > > > > > Tel. +1 (225) 578 1612, Fax +1 (225) 578 4276 > > > > > http://www.math.lsu.edu/~bourdin > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > From zonexo at gmail.com Wed Jul 11 10:21:46 2012 From: zonexo at gmail.com (TAY wee-beng) Date: Wed, 11 Jul 2012 17:21:46 +0200 Subject: [petsc-users] [petsc-dev] second patch for DMDAVecGetArrayF90 In-Reply-To: References: <78CA146A-C72F-481F-A861-C51FB3B5EB6C@lsu.edu> <3FAA48E2-59D7-45E5-AAE1-A40326C5454D@mcs.anl.gov> <4FFD439E.80905@gmail.com> <4FFD9656.5030707@gmail.com> Message-ID: <4FFD9A0A.2020406@gmail.com> On 11/7/2012 5:12 PM, Satish Balay wrote: > You would revert the old one - and apply the new one. i.e > > patch -Np1 -R< DMDAVecRestoreArrayF90-2.patch > patch -Np1< DMDAVecRestoreArrayF90-2.1.patch > > Satish Hi, Just tried but I still get the error: $ !494 patch -p1 < DMDAVecRestoreArrayF90-2.patch (Stripping trailing CRs from patch.) patching file src/dm/impls/da/f90-custom/zda1f90.c (Stripping trailing CRs from patch.) patching file src/sys/f90-src/f90_cwrap.c (Stripping trailing CRs from patch.) patching file src/sys/f90-src/fsrc/f90_fwrap.F User at windows-480c6c3 /cygdrive/c/Codes/petsc-dev $ !498 patch -Np1 -R < DMDAVecRestoreArrayF90-2.patch (Stripping trailing CRs from patch.) patching file src/dm/impls/da/f90-custom/zda1f90.c (Stripping trailing CRs from patch.) patching file src/sys/f90-src/f90_cwrap.c (Stripping trailing CRs from patch.) 
patching file src/sys/f90-src/fsrc/f90_fwrap.F User at windows-480c6c3 /cygdrive/c/Codes/petsc-dev $ !500 patch -Np1 < DMDAVecRestoreArrayF90-2.1.patch (Stripping trailing CRs from patch.) patching file src/dm/impls/da/f90-custom/zda1f90.c Hunk #1 FAILED at 175. 1 out of 1 hunk FAILED -- saving rejects to file src/dm/impls/da/f90-custom/zda1f90.c.rej (Stripping trailing CRs from patch.) patching file src/sys/f90-src/f90_cwrap.c Hunk #1 FAILED at 307. Hunk #2 FAILED at 333. Hunk #3 FAILED at 436. 3 out of 3 hunks FAILED -- saving rejects to file src/sys/f90-src/f90_cwrap.c.rej (Stripping trailing CRs from patch.) patching file src/sys/f90-src/fsrc/f90_fwrap.F Hunk #1 FAILED at 322. Hunk #2 FAILED at 426. 2 out of 2 hunks FAILED -- saving rejects to file src/sys/f90-src/fsrc/f90_fwrap.F.rej > > On Wed, 11 Jul 2012, TAY wee-beng wrote: > >> >> On 11/7/2012 3:40 PM, Satish Balay wrote: >>> The second patch has typos. Attaching the modified patch. If it works for >>> you - I'll add it to petsc-3.3/petsc-dev >>> >>> thanks, >>> Satish >> Hi, >> >> How should I apply the patch? I download a new petsc-dev and used: >> >> /patch -p1< DMDAVecRestoreArrayF90-2.1.patch >> / >> It says: >> >> /(Stripping trailing CRs from patch.) >> patching file src/dm/impls/da/f90-custom/zda1f90.c >> Hunk #1 FAILED at 175. >> 1 out of 1 hunk FAILED -- saving rejects to file >> src/dm/impls/da/f90-custom/zda1f90.c.rej >> (Stripping trailing CRs from patch.) >> patching file src/sys/f90-src/f90_cwrap.c >> Hunk #1 FAILED at 307. >> Hunk #2 FAILED at 333. >> Hunk #3 FAILED at 436. >> 3 out of 3 hunks FAILED -- saving rejects to file >> src/sys/f90-src/f90_cwrap.c.rej >> (Stripping trailing CRs from patch.) >> patching file src/sys/f90-src/fsrc/f90_fwrap.F >> Hunk #1 FAILED at 322. >> Hunk #2 FAILED at 426. >> 2 out of 2 hunks FAILED -- saving rejects to file >> src/sys/f90-src/fsrc/f90_fwrap.F.rej/ >> >> I also used : >> >> /patch -p1 DMDAVecRestoreArrayF90-2.patch >> patch -p1 DMDAVecRestoreArrayF90-2.1.patch >> >> $ patch -p1< DMDAVecRestoreArrayF90-2.patch >> (Stripping trailing CRs from patch.) >> patching file src/dm/impls/da/f90-custom/zda1f90.c >> (Stripping trailing CRs from patch.) >> patching file src/sys/f90-src/f90_cwrap.c >> (Stripping trailing CRs from patch.) >> patching file src/sys/f90-src/fsrc/f90_fwrap.F >> / >> / >> $ patch -p1< DMDAVecRestoreArrayF90-2.1.patch >> (Stripping trailing CRs from patch.) >> patching file src/dm/impls/da/f90-custom/zda1f90.c >> Hunk #1 FAILED at 175. >> 1 out of 1 hunk FAILED -- saving rejects to file >> src/dm/impls/da/f90-custom/zda1f90.c.rej >> (Stripping trailing CRs from patch.) >> patching file src/sys/f90-src/f90_cwrap.c >> Hunk #1 FAILED at 307. >> Hunk #2 FAILED at 333. >> Hunk #3 FAILED at 436. >> 3 out of 3 hunks FAILED -- saving rejects to file >> src/sys/f90-src/f90_cwrap.c.rej >> (Stripping trailing CRs from patch.) >> patching file src/sys/f90-src/fsrc/f90_fwrap.F >> Hunk #1 FAILED at 322. >> Hunk #2 FAILED at 426. >> 2 out of 2 hunks FAILED -- saving rejects to file >> src/sys/f90-src/fsrc/f90_fwrap.F.rej/ >> >> Lastly, using patch -p1< DMDAVecRestoreArrayF90.patch gives: >> / >> $ patch -p1< DMDAVecRestoreArrayF90.patch >> (Stripping trailing CRs from patch.) >> patching file src/dm/impls/da/f90-custom/zda1f90.c >> Reversed (or previously applied) patch detected! Assume -R? [n] n >> Apply anyway? [n] n >> Skipping patch. 
>> 4 out of 4 hunks ignored -- saving rejects to file >> src/dm/impls/da/f90-custom/zda1f90.c.rej/ >>> On Wed, 11 Jul 2012, TAY wee-beng wrote: >>> >>>> On 6/7/2012 5:48 PM, Barry Smith wrote: >>>>> Blaise, >>>>> >>>>> Thanks. >>>>> >>>>> Satish, >>>>> >>>>> If they look good could you apply them to 3.3 and dev? >>>>> >>>>> Thanks >>>>> >>>>> Barry >>>> Hi, >>>> >>>> I downloaded petsc-dev a few days ago and applied both patches using >>>> "patch -v >>>> ..." in both linux and windows vs2008 >>>> >>>> It worked great in linux >>>> >>>> However, when I compile and link in vs2008, it gives the error: >>>> / >>>> 1>Compiling manifest to resources... >>>> 1>Microsoft (R) Windows (R) Resource Compiler Version 6.1.6723.1 >>>> 1>Copyright (C) Microsoft Corporation. All rights reserved. >>>> 1>Linking... >>>> 1>libpetsc.lib(f90_cwrap.o) : error LNK2019: unresolved external symbol >>>> F90ARRAY4dCREATESCALAR referenced in function F90Array4dCreate >>>> 1>libpetsc.lib(f90_cwrap.o) : error LNK2019: unresolved external symbol >>>> F90ARRAY4dACCESSFORTRANADDR referenced in function F90Array4dAccess >>>> 1>libpetsc.lib(f90_cwrap.o) : error LNK2019: unresolved external symbol >>>> F90ARRAY4dACCESSINT referenced in function F90Array4dAccess >>>> 1>libpetsc.lib(f90_cwrap.o) : error LNK2019: unresolved external symbol >>>> F90ARRAY4dACCESSREAL referenced in function F90Array4dAccess >>>> 1>libpetsc.lib(f90_cwrap.o) : error LNK2019: unresolved external symbol >>>> F90ARRAY4dACCESSSCALAR referenced in function F90Array4dAccess >>>> 1>libpetsc.lib(f90_cwrap.o) : error LNK2019: unresolved external symbol >>>> F90ARRAY4dDESTROYSCALAR referenced in function F90Array4dDestroy >>>> 1>c:\obj_tmp\ibm2d_high_Re_staggered_old\Debug/ibm2d_high_Re_staggered.exe >>>> : >>>> fatal error LNK1120: 6 unresolved externals/ >>>> >>>> I wonder if it's fixed in the new petsc-dev. >>>> >>>> Thanks >>>>> On Jul 6, 2012, at 6:30 AM, Blaise Bourdin wrote: >>>>> >>>>>> Hi, >>>>>> >>>>>> I have added the creation, destruction and accessor functions for 4d >>>>>> vectors in F90. The accessor was missing and needed for >>>>>> DMDAVecGetArrayF90 >>>>>> with a 3d DMDA and>1 dof. As far as I can test, ex11f90 in DM should >>>>>> now >>>>>> completely work with the intel compilers. >>>>>> >>>>>> Some of the functions are probably not used (F90Array4dAccessReal, >>>>>> F90Array4dAccessInt, F90Array4dAccessFortranAddr, for instance), but I >>>>>> added them anyway. Let me know if you want me to submit a patch >>>>>> without >>>>>> them. >>>>>> >>>>>> Regards, >>>>>> >>>>>> Blaise >>>>>> >>>>>> >>>>>> >>>>>> -- >>>>>> Department of Mathematics and Center for Computation& Technology >>>>>> Louisiana State University, Baton Rouge, LA 70803, USA >>>>>> Tel. +1 (225) 578 1612, Fax +1 (225) 578 4276 >>>>>> http://www.math.lsu.edu/~bourdin >>>>>> >>>>>> >>>>>> >>>>>> >>>>>> >>>>>> >>>>>> >>>> From w_ang_temp at 163.com Wed Jul 11 10:23:32 2012 From: w_ang_temp at 163.com (w_ang_temp) Date: Wed, 11 Jul 2012 23:23:32 +0800 (CST) Subject: [petsc-users] Functions for judging the properties of the matrix In-Reply-To: References: <5daed011.ed92.138764ea964.Coremail.w_ang_temp@163.com> Message-ID: <3d8422e7.cee5.13876a357f3.Coremail.w_ang_temp@163.com> Thanks! ? 2012-07-11 22:19:51?"Jed Brown" ??? On Wed, Jul 11, 2012 at 8:51 AM, w_ang_temp wrote: Hello, Is there function in PETSc for judging the properties of the matrix A ? Such as positive definitiveness and conditional number. 
I know that there are several functions begin with MatIs+, but it has limited amounts. The Ritz values are estimates of the eigenvalues that you can obtain using KSPSolve() with -ksp_compute_eigenvalues or -ksp_plot_eigenvalues. Note that these are estimates for the _preconditioned_ operator, so you should use -pc_type none if you want estimates for the original operator. http://www.mcs.anl.gov/petsc/documentation/faq.html#conditionnumber -------------- next part -------------- An HTML attachment was scrubbed... URL: From balay at mcs.anl.gov Wed Jul 11 10:29:24 2012 From: balay at mcs.anl.gov (Satish Balay) Date: Wed, 11 Jul 2012 10:29:24 -0500 (CDT) Subject: [petsc-users] [petsc-dev] second patch for DMDAVecGetArrayF90 In-Reply-To: <4FFD9A0A.2020406@gmail.com> References: <78CA146A-C72F-481F-A861-C51FB3B5EB6C@lsu.edu> <3FAA48E2-59D7-45E5-AAE1-A40326C5454D@mcs.anl.gov> <4FFD439E.80905@gmail.com> <4FFD9656.5030707@gmail.com> <4FFD9A0A.2020406@gmail.com> Message-ID: Hm - if you have a mercurial clone of petsc-dev - you don't need to apply/revert DMDAVecRestoreArrayF90-2.patch. You can just apply DMDAVecRestoreArrayF90-2.1.patch. But I'm not sure why you are getting errors. >>>>>>>>>>>>>> sbalay at ps3 ~/petsc-dev $ hg revert -a sbalay at ps3 ~/petsc-dev $ hg st sbalay at ps3 ~/petsc-dev $ patch -Np1 < ~/DMDAVecRestoreArrayF90-2.1.patch (Stripping trailing CRs from patch.) patching file src/dm/impls/da/f90-custom/zda1f90.c (Stripping trailing CRs from patch.) patching file src/sys/f90-src/f90_cwrap.c (Stripping trailing CRs from patch.) patching file src/sys/f90-src/fsrc/f90_fwrap.F sbalay at ps3 ~/petsc-dev $ hg st M src/dm/impls/da/f90-custom/zda1f90.c M src/sys/f90-src/f90_cwrap.c M src/sys/f90-src/fsrc/f90_fwrap.F sbalay at ps3 ~/petsc-dev $ <<<<<<<<<<<< Satish On Wed, 11 Jul 2012, TAY wee-beng wrote: > > On 11/7/2012 5:12 PM, Satish Balay wrote: > > You would revert the old one - and apply the new one. i.e > > > > patch -Np1 -R< DMDAVecRestoreArrayF90-2.patch > > patch -Np1< DMDAVecRestoreArrayF90-2.1.patch > > > > Satish > > Hi, > > Just tried but I still get the error: > > $ !494 > patch -p1 < DMDAVecRestoreArrayF90-2.patch > (Stripping trailing CRs from patch.) > patching file src/dm/impls/da/f90-custom/zda1f90.c > (Stripping trailing CRs from patch.) > patching file src/sys/f90-src/f90_cwrap.c > (Stripping trailing CRs from patch.) > patching file src/sys/f90-src/fsrc/f90_fwrap.F > > User at windows-480c6c3 /cygdrive/c/Codes/petsc-dev > $ !498 > patch -Np1 -R < DMDAVecRestoreArrayF90-2.patch > (Stripping trailing CRs from patch.) > patching file src/dm/impls/da/f90-custom/zda1f90.c > (Stripping trailing CRs from patch.) > patching file src/sys/f90-src/f90_cwrap.c > (Stripping trailing CRs from patch.) > patching file src/sys/f90-src/fsrc/f90_fwrap.F > > User at windows-480c6c3 /cygdrive/c/Codes/petsc-dev > $ !500 > patch -Np1 < DMDAVecRestoreArrayF90-2.1.patch > (Stripping trailing CRs from patch.) > patching file src/dm/impls/da/f90-custom/zda1f90.c > Hunk #1 FAILED at 175. > 1 out of 1 hunk FAILED -- saving rejects to file > src/dm/impls/da/f90-custom/zda1f90.c.rej > (Stripping trailing CRs from patch.) > patching file src/sys/f90-src/f90_cwrap.c > Hunk #1 FAILED at 307. > Hunk #2 FAILED at 333. > Hunk #3 FAILED at 436. > 3 out of 3 hunks FAILED -- saving rejects to file > src/sys/f90-src/f90_cwrap.c.rej > (Stripping trailing CRs from patch.) > patching file src/sys/f90-src/fsrc/f90_fwrap.F > Hunk #1 FAILED at 322. > Hunk #2 FAILED at 426. 
> 2 out of 2 hunks FAILED -- saving rejects to file > src/sys/f90-src/fsrc/f90_fwrap.F.rej > > > > > On Wed, 11 Jul 2012, TAY wee-beng wrote: > > > > > > > > On 11/7/2012 3:40 PM, Satish Balay wrote: > > > > The second patch has typos. Attaching the modified patch. If it works > > > > for > > > > you - I'll add it to petsc-3.3/petsc-dev > > > > > > > > thanks, > > > > Satish > > > Hi, > > > > > > How should I apply the patch? I download a new petsc-dev and used: > > > > > > /patch -p1< DMDAVecRestoreArrayF90-2.1.patch > > > / > > > It says: > > > > > > /(Stripping trailing CRs from patch.) > > > patching file src/dm/impls/da/f90-custom/zda1f90.c > > > Hunk #1 FAILED at 175. > > > 1 out of 1 hunk FAILED -- saving rejects to file > > > src/dm/impls/da/f90-custom/zda1f90.c.rej > > > (Stripping trailing CRs from patch.) > > > patching file src/sys/f90-src/f90_cwrap.c > > > Hunk #1 FAILED at 307. > > > Hunk #2 FAILED at 333. > > > Hunk #3 FAILED at 436. > > > 3 out of 3 hunks FAILED -- saving rejects to file > > > src/sys/f90-src/f90_cwrap.c.rej > > > (Stripping trailing CRs from patch.) > > > patching file src/sys/f90-src/fsrc/f90_fwrap.F > > > Hunk #1 FAILED at 322. > > > Hunk #2 FAILED at 426. > > > 2 out of 2 hunks FAILED -- saving rejects to file > > > src/sys/f90-src/fsrc/f90_fwrap.F.rej/ > > > > > > I also used : > > > > > > /patch -p1 DMDAVecRestoreArrayF90-2.patch > > > patch -p1 DMDAVecRestoreArrayF90-2.1.patch > > > > > > $ patch -p1< DMDAVecRestoreArrayF90-2.patch > > > (Stripping trailing CRs from patch.) > > > patching file src/dm/impls/da/f90-custom/zda1f90.c > > > (Stripping trailing CRs from patch.) > > > patching file src/sys/f90-src/f90_cwrap.c > > > (Stripping trailing CRs from patch.) > > > patching file src/sys/f90-src/fsrc/f90_fwrap.F > > > / > > > / > > > $ patch -p1< DMDAVecRestoreArrayF90-2.1.patch > > > (Stripping trailing CRs from patch.) > > > patching file src/dm/impls/da/f90-custom/zda1f90.c > > > Hunk #1 FAILED at 175. > > > 1 out of 1 hunk FAILED -- saving rejects to file > > > src/dm/impls/da/f90-custom/zda1f90.c.rej > > > (Stripping trailing CRs from patch.) > > > patching file src/sys/f90-src/f90_cwrap.c > > > Hunk #1 FAILED at 307. > > > Hunk #2 FAILED at 333. > > > Hunk #3 FAILED at 436. > > > 3 out of 3 hunks FAILED -- saving rejects to file > > > src/sys/f90-src/f90_cwrap.c.rej > > > (Stripping trailing CRs from patch.) > > > patching file src/sys/f90-src/fsrc/f90_fwrap.F > > > Hunk #1 FAILED at 322. > > > Hunk #2 FAILED at 426. > > > 2 out of 2 hunks FAILED -- saving rejects to file > > > src/sys/f90-src/fsrc/f90_fwrap.F.rej/ > > > > > > Lastly, using patch -p1< DMDAVecRestoreArrayF90.patch gives: > > > / > > > $ patch -p1< DMDAVecRestoreArrayF90.patch > > > (Stripping trailing CRs from patch.) > > > patching file src/dm/impls/da/f90-custom/zda1f90.c > > > Reversed (or previously applied) patch detected! Assume -R? [n] n > > > Apply anyway? [n] n > > > Skipping patch. > > > 4 out of 4 hunks ignored -- saving rejects to file > > > src/dm/impls/da/f90-custom/zda1f90.c.rej/ > > > > On Wed, 11 Jul 2012, TAY wee-beng wrote: > > > > > > > > > On 6/7/2012 5:48 PM, Barry Smith wrote: > > > > > > Blaise, > > > > > > > > > > > > Thanks. > > > > > > > > > > > > Satish, > > > > > > > > > > > > If they look good could you apply them to 3.3 and dev? > > > > > > > > > > > > Thanks > > > > > > > > > > > > Barry > > > > > Hi, > > > > > > > > > > I downloaded petsc-dev a few days ago and applied both patches using > > > > > "patch -v > > > > > ..." 
in both linux and windows vs2008 > > > > > > > > > > It worked great in linux > > > > > > > > > > However, when I compile and link in vs2008, it gives the error: > > > > > / > > > > > 1>Compiling manifest to resources... > > > > > 1>Microsoft (R) Windows (R) Resource Compiler Version 6.1.6723.1 > > > > > 1>Copyright (C) Microsoft Corporation. All rights reserved. > > > > > 1>Linking... > > > > > 1>libpetsc.lib(f90_cwrap.o) : error LNK2019: unresolved external > > > > > symbol > > > > > F90ARRAY4dCREATESCALAR referenced in function F90Array4dCreate > > > > > 1>libpetsc.lib(f90_cwrap.o) : error LNK2019: unresolved external > > > > > symbol > > > > > F90ARRAY4dACCESSFORTRANADDR referenced in function F90Array4dAccess > > > > > 1>libpetsc.lib(f90_cwrap.o) : error LNK2019: unresolved external > > > > > symbol > > > > > F90ARRAY4dACCESSINT referenced in function F90Array4dAccess > > > > > 1>libpetsc.lib(f90_cwrap.o) : error LNK2019: unresolved external > > > > > symbol > > > > > F90ARRAY4dACCESSREAL referenced in function F90Array4dAccess > > > > > 1>libpetsc.lib(f90_cwrap.o) : error LNK2019: unresolved external > > > > > symbol > > > > > F90ARRAY4dACCESSSCALAR referenced in function F90Array4dAccess > > > > > 1>libpetsc.lib(f90_cwrap.o) : error LNK2019: unresolved external > > > > > symbol > > > > > F90ARRAY4dDESTROYSCALAR referenced in function F90Array4dDestroy > > > > > 1>c:\obj_tmp\ibm2d_high_Re_staggered_old\Debug/ibm2d_high_Re_staggered.exe > > > > > : > > > > > fatal error LNK1120: 6 unresolved externals/ > > > > > > > > > > I wonder if it's fixed in the new petsc-dev. > > > > > > > > > > Thanks > > > > > > On Jul 6, 2012, at 6:30 AM, Blaise Bourdin wrote: > > > > > > > > > > > > > Hi, > > > > > > > > > > > > > > I have added the creation, destruction and accessor functions for > > > > > > > 4d > > > > > > > vectors in F90. The accessor was missing and needed for > > > > > > > DMDAVecGetArrayF90 > > > > > > > with a 3d DMDA and>1 dof. As far as I can test, ex11f90 in DM > > > > > > > should > > > > > > > now > > > > > > > completely work with the intel compilers. > > > > > > > > > > > > > > Some of the functions are probably not used (F90Array4dAccessReal, > > > > > > > F90Array4dAccessInt, F90Array4dAccessFortranAddr, for instance), > > > > > > > but I > > > > > > > added them anyway. Let me know if you want me to submit a patch > > > > > > > without > > > > > > > them. > > > > > > > > > > > > > > Regards, > > > > > > > > > > > > > > Blaise > > > > > > > > > > > > > > > > > > > > > > > > > > > > -- > > > > > > > Department of Mathematics and Center for Computation& Technology > > > > > > > Louisiana State University, Baton Rouge, LA 70803, USA > > > > > > > Tel. +1 (225) 578 1612, Fax +1 (225) 578 4276 > > > > > > > http://www.math.lsu.edu/~bourdin > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > From zonexo at gmail.com Wed Jul 11 12:39:05 2012 From: zonexo at gmail.com (TAY wee-beng) Date: Wed, 11 Jul 2012 19:39:05 +0200 Subject: [petsc-users] [petsc-dev] second patch for DMDAVecGetArrayF90 In-Reply-To: References: <78CA146A-C72F-481F-A861-C51FB3B5EB6C@lsu.edu> <3FAA48E2-59D7-45E5-AAE1-A40326C5454D@mcs.anl.gov> <4FFD439E.80905@gmail.com> <4FFD9656.5030707@gmail.com> <4FFD9A0A.2020406@gmail.com> Message-ID: <4FFDBA39.3040205@gmail.com> On 11/7/2012 5:29 PM, Satish Balay wrote: > Hm - if you have a mercurial clone of petsc-dev - you don't need to > apply/revert DMDAVecRestoreArrayF90-2.patch. 
You can just apply > DMDAVecRestoreArrayF90-2.1.patch. > > But I'm not sure why you are getting errors. > > sbalay at ps3 ~/petsc-dev > $ hg revert -a > > sbalay at ps3 ~/petsc-dev > $ hg st > > sbalay at ps3 ~/petsc-dev > $ patch -Np1 < ~/DMDAVecRestoreArrayF90-2.1.patch > (Stripping trailing CRs from patch.) > patching file src/dm/impls/da/f90-custom/zda1f90.c > (Stripping trailing CRs from patch.) > patching file src/sys/f90-src/f90_cwrap.c > (Stripping trailing CRs from patch.) > patching file src/sys/f90-src/fsrc/f90_fwrap.F > > sbalay at ps3 ~/petsc-dev > $ hg st > M src/dm/impls/da/f90-custom/zda1f90.c > M src/sys/f90-src/f90_cwrap.c > M src/sys/f90-src/fsrc/f90_fwrap.F > > sbalay at ps3 ~/petsc-dev > $ > <<<<<<<<<<<< > > Satish Hi, I just tried on a different system and the same error occurs. The 1st command : / / /patch -Np1 -R < DMDAVecRestoreArrayF90-2.patch $ patch -Np1 -R < DMDAVecRestoreArrayF90-2.patch (Stripping trailing CRs from patch.) patching file src/dm/impls/da/f90-custom/zda1f90.c (Stripping trailing CRs from patch.) patching file src/sys/f90-src/f90_cwrap.c (Stripping trailing CRs from patch.) patching file src/sys/f90-src/fsrc/f90_fwrap.F/ works but the 2nd one doesn't: / $ patch -Np1 < DMDAVecRestoreArrayF90-2.1.patch (Stripping trailing CRs from patch.) patching file src/dm/impls/da/f90-custom/zda1f90.c Hunk #1 FAILED at 175. 1 out of 1 hunk FAILED -- saving rejects to file src/dm/impls/da/f90-custom/zda1f90.c.rej (Stripping trailing CRs from patch.) patching file src/sys/f90-src/f90_cwrap.c Hunk #1 FAILED at 307. Hunk #2 FAILED at 333. Hunk #3 FAILED at 436. 3 out of 3 hunks FAILED -- saving rejects to file src/sys/f90-src/f90_cwrap.c.rej (Stripping trailing CRs from patch.) patching file src/sys/f90-src/fsrc/f90_fwrap.F Hunk #1 FAILED at 322. Hunk #2 FAILED at 426. 2 out of 2 hunks FAILED -- saving rejects to file src/sys/f90-src/fsrc/f90_fwrap.F.rej/ It is possible to send me the 3 patch files? Or is there some other ways? Thanks > > On Wed, 11 Jul 2012, TAY wee-beng wrote: > >> On 11/7/2012 5:12 PM, Satish Balay wrote: >>> You would revert the old one - and apply the new one. i.e >>> >>> patch -Np1 -R< DMDAVecRestoreArrayF90-2.patch >>> patch -Np1< DMDAVecRestoreArrayF90-2.1.patch >>> >>> Satish >> Hi, >> >> Just tried but I still get the error: >> >> $ !494 >> patch -p1 < DMDAVecRestoreArrayF90-2.patch >> (Stripping trailing CRs from patch.) >> patching file src/dm/impls/da/f90-custom/zda1f90.c >> (Stripping trailing CRs from patch.) >> patching file src/sys/f90-src/f90_cwrap.c >> (Stripping trailing CRs from patch.) >> patching file src/sys/f90-src/fsrc/f90_fwrap.F >> >> User at windows-480c6c3 /cygdrive/c/Codes/petsc-dev >> $ !498 >> patch -Np1 -R < DMDAVecRestoreArrayF90-2.patch >> (Stripping trailing CRs from patch.) >> patching file src/dm/impls/da/f90-custom/zda1f90.c >> (Stripping trailing CRs from patch.) >> patching file src/sys/f90-src/f90_cwrap.c >> (Stripping trailing CRs from patch.) >> patching file src/sys/f90-src/fsrc/f90_fwrap.F >> >> User at windows-480c6c3 /cygdrive/c/Codes/petsc-dev >> $ !500 >> patch -Np1 < DMDAVecRestoreArrayF90-2.1.patch >> (Stripping trailing CRs from patch.) >> patching file src/dm/impls/da/f90-custom/zda1f90.c >> Hunk #1 FAILED at 175. >> 1 out of 1 hunk FAILED -- saving rejects to file >> src/dm/impls/da/f90-custom/zda1f90.c.rej >> (Stripping trailing CRs from patch.) >> patching file src/sys/f90-src/f90_cwrap.c >> Hunk #1 FAILED at 307. >> Hunk #2 FAILED at 333. >> Hunk #3 FAILED at 436. 
>> 3 out of 3 hunks FAILED -- saving rejects to file >> src/sys/f90-src/f90_cwrap.c.rej >> (Stripping trailing CRs from patch.) >> patching file src/sys/f90-src/fsrc/f90_fwrap.F >> Hunk #1 FAILED at 322. >> Hunk #2 FAILED at 426. >> 2 out of 2 hunks FAILED -- saving rejects to file >> src/sys/f90-src/fsrc/f90_fwrap.F.rej >> >>> On Wed, 11 Jul 2012, TAY wee-beng wrote: >>> >>>> On 11/7/2012 3:40 PM, Satish Balay wrote: >>>>> The second patch has typos. Attaching the modified patch. If it works >>>>> for >>>>> you - I'll add it to petsc-3.3/petsc-dev >>>>> >>>>> thanks, >>>>> Satish >>>> Hi, >>>> >>>> How should I apply the patch? I download a new petsc-dev and used: >>>> >>>> /patch -p1< DMDAVecRestoreArrayF90-2.1.patch >>>> / >>>> It says: >>>> >>>> /(Stripping trailing CRs from patch.) >>>> patching file src/dm/impls/da/f90-custom/zda1f90.c >>>> Hunk #1 FAILED at 175. >>>> 1 out of 1 hunk FAILED -- saving rejects to file >>>> src/dm/impls/da/f90-custom/zda1f90.c.rej >>>> (Stripping trailing CRs from patch.) >>>> patching file src/sys/f90-src/f90_cwrap.c >>>> Hunk #1 FAILED at 307. >>>> Hunk #2 FAILED at 333. >>>> Hunk #3 FAILED at 436. >>>> 3 out of 3 hunks FAILED -- saving rejects to file >>>> src/sys/f90-src/f90_cwrap.c.rej >>>> (Stripping trailing CRs from patch.) >>>> patching file src/sys/f90-src/fsrc/f90_fwrap.F >>>> Hunk #1 FAILED at 322. >>>> Hunk #2 FAILED at 426. >>>> 2 out of 2 hunks FAILED -- saving rejects to file >>>> src/sys/f90-src/fsrc/f90_fwrap.F.rej/ >>>> >>>> I also used : >>>> >>>> /patch -p1 DMDAVecRestoreArrayF90-2.patch >>>> patch -p1 DMDAVecRestoreArrayF90-2.1.patch >>>> >>>> $ patch -p1< DMDAVecRestoreArrayF90-2.patch >>>> (Stripping trailing CRs from patch.) >>>> patching file src/dm/impls/da/f90-custom/zda1f90.c >>>> (Stripping trailing CRs from patch.) >>>> patching file src/sys/f90-src/f90_cwrap.c >>>> (Stripping trailing CRs from patch.) >>>> patching file src/sys/f90-src/fsrc/f90_fwrap.F >>>> / >>>> / >>>> $ patch -p1< DMDAVecRestoreArrayF90-2.1.patch >>>> (Stripping trailing CRs from patch.) >>>> patching file src/dm/impls/da/f90-custom/zda1f90.c >>>> Hunk #1 FAILED at 175. >>>> 1 out of 1 hunk FAILED -- saving rejects to file >>>> src/dm/impls/da/f90-custom/zda1f90.c.rej >>>> (Stripping trailing CRs from patch.) >>>> patching file src/sys/f90-src/f90_cwrap.c >>>> Hunk #1 FAILED at 307. >>>> Hunk #2 FAILED at 333. >>>> Hunk #3 FAILED at 436. >>>> 3 out of 3 hunks FAILED -- saving rejects to file >>>> src/sys/f90-src/f90_cwrap.c.rej >>>> (Stripping trailing CRs from patch.) >>>> patching file src/sys/f90-src/fsrc/f90_fwrap.F >>>> Hunk #1 FAILED at 322. >>>> Hunk #2 FAILED at 426. >>>> 2 out of 2 hunks FAILED -- saving rejects to file >>>> src/sys/f90-src/fsrc/f90_fwrap.F.rej/ >>>> >>>> Lastly, using patch -p1< DMDAVecRestoreArrayF90.patch gives: >>>> / >>>> $ patch -p1< DMDAVecRestoreArrayF90.patch >>>> (Stripping trailing CRs from patch.) >>>> patching file src/dm/impls/da/f90-custom/zda1f90.c >>>> Reversed (or previously applied) patch detected! Assume -R? [n] n >>>> Apply anyway? [n] n >>>> Skipping patch. >>>> 4 out of 4 hunks ignored -- saving rejects to file >>>> src/dm/impls/da/f90-custom/zda1f90.c.rej/ >>>>> On Wed, 11 Jul 2012, TAY wee-beng wrote: >>>>> >>>>>> On 6/7/2012 5:48 PM, Barry Smith wrote: >>>>>>> Blaise, >>>>>>> >>>>>>> Thanks. >>>>>>> >>>>>>> Satish, >>>>>>> >>>>>>> If they look good could you apply them to 3.3 and dev? 
>>>>>>> >>>>>>> Thanks >>>>>>> >>>>>>> Barry >>>>>> Hi, >>>>>> >>>>>> I downloaded petsc-dev a few days ago and applied both patches using >>>>>> "patch -v >>>>>> ..." in both linux and windows vs2008 >>>>>> >>>>>> It worked great in linux >>>>>> >>>>>> However, when I compile and link in vs2008, it gives the error: >>>>>> / >>>>>> 1>Compiling manifest to resources... >>>>>> 1>Microsoft (R) Windows (R) Resource Compiler Version 6.1.6723.1 >>>>>> 1>Copyright (C) Microsoft Corporation. All rights reserved. >>>>>> 1>Linking... >>>>>> 1>libpetsc.lib(f90_cwrap.o) : error LNK2019: unresolved external >>>>>> symbol >>>>>> F90ARRAY4dCREATESCALAR referenced in function F90Array4dCreate >>>>>> 1>libpetsc.lib(f90_cwrap.o) : error LNK2019: unresolved external >>>>>> symbol >>>>>> F90ARRAY4dACCESSFORTRANADDR referenced in function F90Array4dAccess >>>>>> 1>libpetsc.lib(f90_cwrap.o) : error LNK2019: unresolved external >>>>>> symbol >>>>>> F90ARRAY4dACCESSINT referenced in function F90Array4dAccess >>>>>> 1>libpetsc.lib(f90_cwrap.o) : error LNK2019: unresolved external >>>>>> symbol >>>>>> F90ARRAY4dACCESSREAL referenced in function F90Array4dAccess >>>>>> 1>libpetsc.lib(f90_cwrap.o) : error LNK2019: unresolved external >>>>>> symbol >>>>>> F90ARRAY4dACCESSSCALAR referenced in function F90Array4dAccess >>>>>> 1>libpetsc.lib(f90_cwrap.o) : error LNK2019: unresolved external >>>>>> symbol >>>>>> F90ARRAY4dDESTROYSCALAR referenced in function F90Array4dDestroy >>>>>> 1>c:\obj_tmp\ibm2d_high_Re_staggered_old\Debug/ibm2d_high_Re_staggered.exe >>>>>> : >>>>>> fatal error LNK1120: 6 unresolved externals/ >>>>>> >>>>>> I wonder if it's fixed in the new petsc-dev. >>>>>> >>>>>> Thanks >>>>>>> On Jul 6, 2012, at 6:30 AM, Blaise Bourdin wrote: >>>>>>> >>>>>>>> Hi, >>>>>>>> >>>>>>>> I have added the creation, destruction and accessor functions for >>>>>>>> 4d >>>>>>>> vectors in F90. The accessor was missing and needed for >>>>>>>> DMDAVecGetArrayF90 >>>>>>>> with a 3d DMDA and>1 dof. As far as I can test, ex11f90 in DM >>>>>>>> should >>>>>>>> now >>>>>>>> completely work with the intel compilers. >>>>>>>> >>>>>>>> Some of the functions are probably not used (F90Array4dAccessReal, >>>>>>>> F90Array4dAccessInt, F90Array4dAccessFortranAddr, for instance), >>>>>>>> but I >>>>>>>> added them anyway. Let me know if you want me to submit a patch >>>>>>>> without >>>>>>>> them. >>>>>>>> >>>>>>>> Regards, >>>>>>>> >>>>>>>> Blaise >>>>>>>> >>>>>>>> >>>>>>>> >>>>>>>> -- >>>>>>>> Department of Mathematics and Center for Computation& Technology >>>>>>>> Louisiana State University, Baton Rouge, LA 70803, USA >>>>>>>> Tel. +1 (225) 578 1612, Fax +1 (225) 578 4276 >>>>>>>> http://www.math.lsu.edu/~bourdin >>>>>>>> >>>>>>>> >>>>>>>> >>>>>>>> >>>>>>>> >>>>>>>> >>>>>>>> -------------- next part -------------- An HTML attachment was scrubbed... URL: From paeanball at gmail.com Wed Jul 11 12:40:51 2012 From: paeanball at gmail.com (Bao Kai) Date: Wed, 11 Jul 2012 20:40:51 +0300 Subject: [petsc-users] Does this mean the matrix is ill-conditioned? Message-ID: Hi, all, The following is the output from the solution of a Poisson equation from Darcy's law. To compute the condition number of matrix, I did not use PC and use GMRES KSP to do the test. It seems like that the condition number keep increasing during the iterative solution. Does this mean the matrix is ill-conditioned? For this test, it did not achieve convergence with 10000 iterations. 
When I use BJOCABI PC and BICGSTAB KSP, it generally takes about 600 times iteration to get the iteration convergent. Any suggestion for improving the convergence rate will be much appreciated. The solution of this equation has been the bottleneck of my code, it takes more than 90% of the total time. Thank you very much. Best Regards, Kai -------------- next part -------------- the KSP type is gmres the PC type is none KSP rtol = 0.100000000000000008E-04 abstol = 0.100000000000000001E-49 dtol = 10000.0000000000000 maxit = 10000 SNES rtol = 0.100000000000000002E-07 abstol = 0.100000000000000001E-49 stol = 0.100000000000000002E-07 maxit = 50 maxf= 10000 0 SNES Function norm 4.573151996900e+02 0 KSP Residual norm 4.573151996900e+02 0 KSP Residual norm 4.573151996900e+02 % max 1.000000000000e+00 min 1.000000000000e+00 max/min 1.000000000000e+00 1 KSP Residual norm 1.712147820093e+02 1 KSP Residual norm 1.712147820093e+02 % max 1.045538279822e-02 min 1.045538279822e-02 max/min 1.000000000000e+00 2 KSP Residual norm 9.264440714305e+01 2 KSP Residual norm 9.264440714305e+01 % max 1.341442246541e-02 min 6.047306755934e-03 max/min 2.218247396206e+00 3 KSP Residual norm 6.106792472402e+01 3 KSP Residual norm 6.106792472402e+01 % max 1.472031954019e-02 min 3.756157857579e-03 max/min 3.918983199944e+00 4 KSP Residual norm 4.554286112197e+01 4 KSP Residual norm 4.554286112197e+01 % max 1.540277850515e-02 min 2.460246491991e-03 max/min 6.260664756677e+00 5 KSP Residual norm 3.679681248393e+01 5 KSP Residual norm 3.679681248393e+01 % max 1.580188682187e-02 min 1.676768259980e-03 max/min 9.424013561694e+00 6 KSP Residual norm 3.133446439684e+01 6 KSP Residual norm 3.133446439684e+01 % max 1.605437366386e-02 min 1.181527981010e-03 max/min 1.358780657072e+01 7 KSP Residual norm 2.763377137035e+01 7 KSP Residual norm 2.763377137035e+01 % max 1.622367634674e-02 min 8.580313154728e-04 max/min 1.890802358163e+01 8 KSP Residual norm 2.495980544663e+01 8 KSP Residual norm 2.495980544663e+01 % max 1.634245666024e-02 min 6.407061489327e-04 max/min 2.550694524700e+01 9 KSP Residual norm 2.292746640884e+01 9 KSP Residual norm 2.292746640884e+01 % max 1.642887802339e-02 min 4.908297028371e-04 max/min 3.347164592614e+01 10 KSP Residual norm 2.132064948715e+01 10 KSP Residual norm 2.132064948715e+01 % max 1.649366342552e-02 min 3.848443768915e-04 max/min 4.285800810900e+01 11 KSP Residual norm 2.001544594893e+01 11 KSP Residual norm 2.001544594893e+01 % max 1.654353577107e-02 min 3.083036296755e-04 max/min 5.365987999713e+01 12 KSP Residual norm 1.915651242600e+01 12 KSP Residual norm 1.915651242600e+01 % max 1.658817320495e-02 min 2.615347136235e-04 max/min 6.342627705183e+01 13 KSP Residual norm 1.869099502629e+01 13 KSP Residual norm 1.869099502629e+01 % max 2.668576960746e-02 min 2.402286972676e-04 max/min 1.110848533543e+02 14 KSP Residual norm 1.786679692013e+01 14 KSP Residual norm 1.786679692013e+01 % max 4.531586353791e-02 min 2.028622993290e-04 max/min 2.233823814863e+02 15 KSP Residual norm 1.755222508589e+01 15 KSP Residual norm 1.755222508589e+01 % max 4.617547087190e-02 min 1.895794764179e-04 max/min 2.435678784665e+02 16 KSP Residual norm 1.689478391514e+01 16 KSP Residual norm 1.689478391514e+01 % max 4.690654750900e-02 min 1.642634706428e-04 max/min 2.855567785427e+02 17 KSP Residual norm 1.657593952970e+01 17 KSP Residual norm 1.657593952970e+01 % max 4.757805370202e-02 min 1.522684222525e-04 max/min 3.124617238309e+02 18 KSP Residual norm 1.607807633373e+01 18 KSP Residual norm 1.607807633373e+01 % 
max 4.787231856714e-02 min 1.357988993999e-04 max/min 3.525236123320e+02 19 KSP Residual norm 1.575670445571e+01 19 KSP Residual norm 1.575670445571e+01 % max 4.831549897599e-02 min 1.252078190052e-04 max/min 3.858824421660e+02 20 KSP Residual norm 1.537090399395e+01 20 KSP Residual norm 1.537090399395e+01 % max 4.848601459775e-02 min 1.140395291915e-04 max/min 4.251684915004e+02 21 KSP Residual norm 1.506947587307e+01 21 KSP Residual norm 1.506947587307e+01 % max 4.877611380564e-02 min 1.052883158172e-04 max/min 4.632623613270e+02 22 KSP Residual norm 1.474968532481e+01 22 KSP Residual norm 1.474968532481e+01 % max 4.889640063411e-02 min 9.704858932772e-05 max/min 5.038342233806e+02 23 KSP Residual norm 1.448591635498e+01 23 KSP Residual norm 1.448591635498e+01 % max 4.908959927887e-02 min 9.023308477824e-05 max/min 5.440310435969e+02 24 KSP Residual norm 1.419842980560e+01 24 KSP Residual norm 1.419842980560e+01 % max 4.918486927922e-02 min 8.355370801843e-05 max/min 5.886617176627e+02 25 KSP Residual norm 1.397964475782e+01 25 KSP Residual norm 1.397964475782e+01 % max 4.931609196955e-02 min 7.848055170355e-05 max/min 6.283861529903e+02 26 KSP Residual norm 1.370513733039e+01 26 KSP Residual norm 1.370513733039e+01 % max 4.939650692420e-02 min 7.267447159755e-05 max/min 6.796954396551e+02 27 KSP Residual norm 1.352528115483e+01 27 KSP Residual norm 1.352528115483e+01 % max 4.948929673950e-02 min 6.891284161113e-05 max/min 7.181433181752e+02 28 KSP Residual norm 1.326081311243e+01 28 KSP Residual norm 1.326081311243e+01 % max 4.955787236964e-02 min 6.379160955998e-05 max/min 7.768713269892e+02 29 KSP Residual norm 1.310778233412e+01 29 KSP Residual norm 1.310778233412e+01 % max 4.962674116591e-02 min 6.089219676274e-05 max/min 8.149934442220e+02 30 KSP Residual norm 1.285901828880e+01 30 KSP Residual norm 1.285901828880e+01 % max 1.000000000000e+00 min 1.000000000000e+00 max/min 1.000000000000e+00 31 KSP Residual norm 1.277079005716e+01 31 KSP Residual norm 1.277079005716e+01 % max 2.722168497237e-03 min 2.722168497237e-03 max/min 1.000000000000e+00 32 KSP Residual norm 1.266683363017e+01 32 KSP Residual norm 1.266683363017e+01 % max 4.004279724366e-02 min 9.127378679357e-04 max/min 4.387108133710e+01 33 KSP Residual norm 1.258849237296e+01 33 KSP Residual norm 1.258849237296e+01 % max 4.594007675035e-02 min 5.298405793964e-04 max/min 8.670547054491e+01 34 KSP Residual norm 1.247351394381e+01 34 KSP Residual norm 1.247351394381e+01 % max 4.806735971621e-02 min 3.132044558135e-04 max/min 1.534695909462e+02 35 KSP Residual norm 1.239668430246e+01 35 KSP Residual norm 1.239668430246e+01 % max 4.883162533294e-02 min 2.407658620779e-04 max/min 2.028178949935e+02 36 KSP Residual norm 1.228012971411e+01 36 KSP Residual norm 1.228012971411e+01 % max 4.934524165164e-02 min 1.723589158474e-04 max/min 2.862935253975e+02 37 KSP Residual norm 1.219746497968e+01 37 KSP Residual norm 1.219746497968e+01 % max 4.956223238081e-02 min 1.430200760698e-04 max/min 3.465403860967e+02 38 KSP Residual norm 1.208654662874e+01 38 KSP Residual norm 1.208654662874e+01 % max 4.975857930187e-02 min 1.138415994604e-04 max/min 4.370860874912e+02 39 KSP Residual norm 1.199303910737e+01 39 KSP Residual norm 1.199303910737e+01 % max 4.984766569132e-02 min 9.727417164001e-05 max/min 5.124450288386e+02 40 KSP Residual norm 1.189050207640e+01 40 KSP Residual norm 1.189050207640e+01 % max 4.994122563351e-02 min 8.271140074009e-05 max/min 6.038009897866e+02 41 KSP Residual norm 1.178490083769e+01 41 KSP Residual norm 
1.178490083769e+01 % max 4.998663577089e-02 min 7.181264179337e-05 max/min 6.960701420053e+02 42 KSP Residual norm 1.169000296495e+01 42 KSP Residual norm 1.169000296495e+01 % max 5.003678884890e-02 min 6.368173738923e-05 max/min 7.857321565062e+02 43 KSP Residual norm 1.157478182345e+01 43 KSP Residual norm 1.157478182345e+01 % max 5.006376436088e-02 min 5.605252567009e-05 max/min 8.931580470707e+02 44 KSP Residual norm 1.148607855674e+01 44 KSP Residual norm 1.148607855674e+01 % max 5.009158541095e-02 min 5.111049364404e-05 max/min 9.800645980807e+02 45 KSP Residual norm 1.136412186224e+01 45 KSP Residual norm 1.136412186224e+01 % max 5.011041280799e-02 min 4.555943744020e-05 max/min 1.099890947375e+03 46 KSP Residual norm 1.127749710266e+01 46 KSP Residual norm 1.127749710266e+01 % max 5.012568759917e-02 min 4.224471094501e-05 max/min 1.186555345696e+03 47 KSP Residual norm 1.115147008618e+01 47 KSP Residual norm 1.115147008618e+01 % max 5.014071249101e-02 min 3.811963577839e-05 max/min 1.315351300377e+03 48 KSP Residual norm 1.106039584262e+01 48 KSP Residual norm 1.106039584262e+01 % max 5.015013434089e-02 min 3.561350018397e-05 max/min 1.408177631567e+03 49 KSP Residual norm 1.094059620504e+01 49 KSP Residual norm 1.094059620504e+01 % max 5.016186045748e-02 min 3.270418879645e-05 max/min 1.533805371834e+03 50 KSP Residual norm 1.083508705289e+01 50 KSP Residual norm 1.083508705289e+01 % max 5.016995683788e-02 min 3.052792504522e-05 max/min 1.643411950323e+03 51 KSP Residual norm 1.073367913128e+01 51 KSP Residual norm 1.073367913128e+01 % max 5.017897801665e-02 min 2.865191798905e-05 max/min 1.751330505547e+03 52 KSP Residual norm 1.060603184736e+01 52 KSP Residual norm 1.060603184736e+01 % max 5.018800143551e-02 min 2.660968494931e-05 max/min 1.886080257287e+03 53 KSP Residual norm 1.051172191859e+01 53 KSP Residual norm 1.051172191859e+01 % max 5.019827210111e-02 min 2.526173649961e-05 max/min 1.987126740154e+03 54 KSP Residual norm 1.038153764145e+01 54 KSP Residual norm 1.038153764145e+01 % max 5.020825475867e-02 min 2.361305203455e-05 max/min 2.126292471012e+03 55 KSP Residual norm 1.028119058815e+01 55 KSP Residual norm 1.028119058815e+01 % max 5.022061690155e-02 min 2.248477688732e-05 max/min 2.233538591609e+03 56 KSP Residual norm 1.014232627496e+01 56 KSP Residual norm 1.014232627496e+01 % max 5.023806191189e-02 min 2.107980706501e-05 max/min 2.383231580676e+03 57 KSP Residual norm 1.003842698073e+01 57 KSP Residual norm 1.003842698073e+01 % max 5.024688436678e-02 min 2.014709071403e-05 max/min 2.494001991652e+03 58 KSP Residual norm 9.919507136373e+00 58 KSP Residual norm 9.919507136373e+00 % max 5.025366627009e-02 min 1.916742405033e-05 max/min 2.621826810851e+03 59 KSP Residual norm 9.796376380994e+00 59 KSP Residual norm 9.796376380994e+00 % max 5.025842293049e-02 min 1.825927170108e-05 max/min 2.752487818423e+03 60 KSP Residual norm 9.691561821651e+00 60 KSP Residual norm 9.691561821651e+00 % max 1.000000000000e+00 min 1.000000000000e+00 max/min 1.000000000000e+00 61 KSP Residual norm 9.632099259731e+00 61 KSP Residual norm 9.632099259731e+00 % max 2.925500167504e-03 min 2.925500167504e-03 max/min 1.000000000000e+00 62 KSP Residual norm 9.575971599862e+00 62 KSP Residual norm 9.575971599862e+00 % max 3.982286419478e-02 min 8.791324690563e-04 max/min 4.529791083422e+01 63 KSP Residual norm 9.515302874531e+00 63 KSP Residual norm 9.515302874531e+00 % max 4.600312211673e-02 min 4.690569662221e-04 max/min 9.807576782676e+01 64 KSP Residual norm 9.464103525667e+00 64 KSP 
Residual norm 9.464103525667e+00 % max 4.822778551854e-02 min 3.131693908322e-04 max/min 1.539990399138e+02
 65 KSP Residual norm 9.403560332422e+00
 65 KSP Residual norm 9.403560332422e+00 % max 4.919703252361e-02 min 2.173988739905e-04 max/min 2.262984698153e+02
 66 KSP Residual norm 9.357893043391e+00
 66 KSP Residual norm 9.357893043391e+00 % max 4.964622614799e-02 min 1.710060298600e-04 max/min 2.903185705711e+02
[... iterations 67-244 of the monitor output omitted: the residual norm decreases slowly from 9.31e+00 to 6.39e+00, while within each 30-iteration GMRES restart cycle the max/min singular-value estimate climbs from 1 to roughly 3e+03-8e+03 before resetting at iterations 90, 120, 150, 180, 210 and 240 ...]
245 KSP Residual norm 6.381371311556e+00
245 KSP Residual norm 6.381371311556e+00
% max 4.906816728063e-02 min 1.040324840991e-04 max/min 4.716619785209e+02 246 KSP Residual norm 6.373426446725e+00 246 KSP Residual norm 6.373426446725e+00 % max 4.954785702498e-02 min 7.995229606721e-05 max/min 6.197177499859e+02 247 KSP Residual norm 6.365388681789e+00 From balay at mcs.anl.gov Wed Jul 11 13:10:26 2012 From: balay at mcs.anl.gov (Satish Balay) Date: Wed, 11 Jul 2012 13:10:26 -0500 (CDT) Subject: [petsc-users] [petsc-dev] second patch for DMDAVecGetArrayF90 In-Reply-To: <4FFDBA39.3040205@gmail.com> References: <78CA146A-C72F-481F-A861-C51FB3B5EB6C@lsu.edu> <3FAA48E2-59D7-45E5-AAE1-A40326C5454D@mcs.anl.gov> <4FFD439E.80905@gmail.com> <4FFD9656.5030707@gmail.com> <4FFD9A0A.2020406@gmail.com> <4FFDBA39.3040205@gmail.com> Message-ID: On Wed, 11 Jul 2012, TAY wee-beng wrote: > > I just tried on a different system and the same error occurs. The 1st command > : > / > / > > /patch -Np1 -R < DMDAVecRestoreArrayF90-2.patch > > $ patch -Np1 -R < DMDAVecRestoreArrayF90-2.patch > (Stripping trailing CRs from patch.) > patching file src/dm/impls/da/f90-custom/zda1f90.c > (Stripping trailing CRs from patch.) > patching file src/sys/f90-src/f90_cwrap.c > (Stripping trailing CRs from patch.) > patching file src/sys/f90-src/fsrc/f90_fwrap.F/ > > works but the 2nd one doesn't: > / > $ patch -Np1 < DMDAVecRestoreArrayF90-2.1.patch > (Stripping trailing CRs from patch.) > patching file src/dm/impls/da/f90-custom/zda1f90.c > Hunk #1 FAILED at 175. > 1 out of 1 hunk FAILED -- saving rejects to file > src/dm/impls/da/f90-custom/zda1f90.c.rej > (Stripping trailing CRs from patch.) > patching file src/sys/f90-src/f90_cwrap.c > Hunk #1 FAILED at 307. > Hunk #2 FAILED at 333. > Hunk #3 FAILED at 436. > 3 out of 3 hunks FAILED -- saving rejects to file > src/sys/f90-src/f90_cwrap.c.rej > (Stripping trailing CRs from patch.) > patching file src/sys/f90-src/fsrc/f90_fwrap.F > Hunk #1 FAILED at 322. > Hunk #2 FAILED at 426. > 2 out of 2 hunks FAILED -- saving rejects to file > src/sys/f90-src/fsrc/f90_fwrap.F.rej/ > > > It is possible to send me the 3 patch files? Or is there some other ways? This doesn't make any sense. What petsc-dev do you have? Is it hg repo? or something else? What revisions? Are the sources locally modified? You can apply DMDAVecRestoreArrayF90-2.patch and then edit src/sys/f90-src/f90_cwrap.c and replace all occurances of 'F90ARRAY4d' with 'F90ARRAY4D' in the editor to get the same effect. Satish From knepley at gmail.com Wed Jul 11 15:17:15 2012 From: knepley at gmail.com (Matthew Knepley) Date: Wed, 11 Jul 2012 15:17:15 -0500 Subject: [petsc-users] Does this mean the matrix is ill-conditioned? In-Reply-To: References: Message-ID: On Wed, Jul 11, 2012 at 12:40 PM, Bao Kai wrote: > Hi, all, > > The following is the output from the solution of a Poisson equation > from Darcy's law. > > To compute the condition number of matrix, I did not use PC and use > GMRES KSP to do the test. > > It seems like that the condition number keep increasing during the > iterative solution. Does this mean the matrix is ill-conditioned? > Generally yes. Krylov methods take a long time to resolve the smallest eigenvalues, so this approximation is not great. > For this test, it did not achieve convergence with 10000 iterations. > > When I use BJOCABI PC and BICGSTAB KSP, it generally takes about 600 > times iteration to get the iteration convergent. > > Any suggestion for improving the convergence rate will be much > appreciated. 
The solution of this equation has been the bottleneck of > my code, it takes more than 90% of the total time. > Try ML or GAMG. Matt > Thank you very much. > > Best Regards, > Kai > -- What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. -- Norbert Wiener -------------- next part -------------- An HTML attachment was scrubbed... URL: From t.hisch at gmail.com Wed Jul 11 15:32:53 2012 From: t.hisch at gmail.com (Thomas Hisch) Date: Wed, 11 Jul 2012 22:32:53 +0200 Subject: [petsc-users] Draw '2D' Vector (FFT) Message-ID: Hello list, what is the easiest way to 'draw' (PETSC_VIEWER_DRAW_WORLD) a 2D plot of a petsc vector which was created with MatGetVecsFFTW()? The employed FFT matrix has DIM=2 and therefore the mentioned vector has N1*N2 components. As far as I know VecView(x, PETSC_VIEWER_DRAW_WORLD) only creates a lineplot. Regards Thomas From jedbrown at mcs.anl.gov Wed Jul 11 15:37:12 2012 From: jedbrown at mcs.anl.gov (Jed Brown) Date: Wed, 11 Jul 2012 15:37:12 -0500 Subject: [petsc-users] Draw '2D' Vector (FFT) In-Reply-To: References: Message-ID: On Wed, Jul 11, 2012 at 3:32 PM, Thomas Hisch wrote: > Hello list, > > what is the easiest way to 'draw' (PETSC_VIEWER_DRAW_WORLD) a 2D plot > of a petsc vector which was created with MatGetVecsFFTW()? The > employed FFT matrix has DIM=2 and therefore the mentioned vector has > N1*N2 components. As far as I know VecView(x, PETSC_VIEWER_DRAW_WORLD) > only creates a lineplot. > Can the Vec be associated with a DM? If so, it will do a 2D plot. -------------- next part -------------- An HTML attachment was scrubbed... URL: From zonexo at gmail.com Wed Jul 11 15:44:03 2012 From: zonexo at gmail.com (TAY wee-beng) Date: Wed, 11 Jul 2012 22:44:03 +0200 Subject: [petsc-users] [petsc-dev] second patch for DMDAVecGetArrayF90 In-Reply-To: References: <78CA146A-C72F-481F-A861-C51FB3B5EB6C@lsu.edu> <3FAA48E2-59D7-45E5-AAE1-A40326C5454D@mcs.anl.gov> <4FFD439E.80905@gmail.com> <4FFD9656.5030707@gmail.com> <4FFD9A0A.2020406@gmail.com> <4FFDBA39.3040205@gmail.com> Message-ID: <4FFDE593.2070402@gmail.com> On 11/7/2012 8:10 PM, Satish Balay wrote: > On Wed, 11 Jul 2012, TAY wee-beng wrote: > >> I just tried on a different system and the same error occurs. The 1st command >> : >> / >> / >> >> /patch -Np1 -R < DMDAVecRestoreArrayF90-2.patch >> >> $ patch -Np1 -R < DMDAVecRestoreArrayF90-2.patch >> (Stripping trailing CRs from patch.) >> patching file src/dm/impls/da/f90-custom/zda1f90.c >> (Stripping trailing CRs from patch.) >> patching file src/sys/f90-src/f90_cwrap.c >> (Stripping trailing CRs from patch.) >> patching file src/sys/f90-src/fsrc/f90_fwrap.F/ >> >> works but the 2nd one doesn't: >> / >> $ patch -Np1 < DMDAVecRestoreArrayF90-2.1.patch >> (Stripping trailing CRs from patch.) >> patching file src/dm/impls/da/f90-custom/zda1f90.c >> Hunk #1 FAILED at 175. >> 1 out of 1 hunk FAILED -- saving rejects to file >> src/dm/impls/da/f90-custom/zda1f90.c.rej >> (Stripping trailing CRs from patch.) >> patching file src/sys/f90-src/f90_cwrap.c >> Hunk #1 FAILED at 307. >> Hunk #2 FAILED at 333. >> Hunk #3 FAILED at 436. >> 3 out of 3 hunks FAILED -- saving rejects to file >> src/sys/f90-src/f90_cwrap.c.rej >> (Stripping trailing CRs from patch.) >> patching file src/sys/f90-src/fsrc/f90_fwrap.F >> Hunk #1 FAILED at 322. >> Hunk #2 FAILED at 426. 
>> 2 out of 2 hunks FAILED -- saving rejects to file >> src/sys/f90-src/fsrc/f90_fwrap.F.rej/ >> >> >> It is possible to send me the 3 patch files? Or is there some other ways? > This doesn't make any sense. What petsc-dev do you have? Is it hg > repo? or something else? What revisions? Are the sources locally > modified? > > You can apply DMDAVecRestoreArrayF90-2.patch and then edit > src/sys/f90-src/f90_cwrap.c and replace all occurances of 'F90ARRAY4d' > with 'F90ARRAY4D' in the editor to get the same effect. > > Satish I just tried but the same error occurs: / 1>D:\Lib\petsc-3.3-dev_win32_vs2008/include/finclude/ftn-custom/petscdmcomposite.h90(8): warning #6717: This name has not been given an explicit type. [D1] 1>Linking... 1>libpetsc.lib(f90_cwrap.o) : error LNK2019: unresolved external symbol F90ARRAY4dCREATESCALAR referenced in function F90Array4dCreate 1>libpetsc.lib(f90_cwrap.o) : error LNK2019: unresolved external symbol F90ARRAY4dACCESSFORTRANADDR referenced in function F90Array4dAccess 1>libpetsc.lib(f90_cwrap.o) : error LNK2019: unresolved external symbol F90ARRAY4dACCESSINT referenced in function F90Array4dAccess 1>libpetsc.lib(f90_cwrap.o) : error LNK2019: unresolved external symbol F90ARRAY4dACCESSREAL referenced in function F90Array4dAccess 1>libpetsc.lib(f90_cwrap.o) : error LNK2019: unresolved external symbol F90ARRAY4dACCESSSCALAR referenced in function F90Array4dAccess 1>libpetsc.lib(f90_cwrap.o) : error LNK2019: unresolved external symbol F90ARRAY4dDESTROYSCALAR referenced in function F90Array4dDestroy 1>c:\obj_tmp\dm_test2d\Debug/dm_test2d.exe : fatal error LNK1120: 6 unresolved externals 1> 1>Build log written to "file://c:\obj_tmp\dm_test2d\Debug\BuildLog.htm"/ Any other solutions? -------------- next part -------------- An HTML attachment was scrubbed... URL: From knepley at gmail.com Wed Jul 11 15:58:16 2012 From: knepley at gmail.com (Matthew Knepley) Date: Wed, 11 Jul 2012 15:58:16 -0500 Subject: [petsc-users] [petsc-dev] second patch for DMDAVecGetArrayF90 In-Reply-To: <4FFDE593.2070402@gmail.com> References: <78CA146A-C72F-481F-A861-C51FB3B5EB6C@lsu.edu> <3FAA48E2-59D7-45E5-AAE1-A40326C5454D@mcs.anl.gov> <4FFD439E.80905@gmail.com> <4FFD9656.5030707@gmail.com> <4FFD9A0A.2020406@gmail.com> <4FFDBA39.3040205@gmail.com> <4FFDE593.2070402@gmail.com> Message-ID: On Wed, Jul 11, 2012 at 3:44 PM, TAY wee-beng wrote: > On 11/7/2012 8:10 PM, Satish Balay wrote: > > On Wed, 11 Jul 2012, TAY wee-beng wrote: > > > I just tried on a different system and the same error occurs. The 1st command > : > / > / > > /patch -Np1 -R < DMDAVecRestoreArrayF90-2.patch > > $ patch -Np1 -R < DMDAVecRestoreArrayF90-2.patch > (Stripping trailing CRs from patch.) > patching file src/dm/impls/da/f90-custom/zda1f90.c > (Stripping trailing CRs from patch.) > patching file src/sys/f90-src/f90_cwrap.c > (Stripping trailing CRs from patch.) > patching file src/sys/f90-src/fsrc/f90_fwrap.F/ > > works but the 2nd one doesn't: > / > $ patch -Np1 < DMDAVecRestoreArrayF90-2.1.patch > (Stripping trailing CRs from patch.) > patching file src/dm/impls/da/f90-custom/zda1f90.c > Hunk #1 FAILED at 175. > 1 out of 1 hunk FAILED -- saving rejects to file > src/dm/impls/da/f90-custom/zda1f90.c.rej > (Stripping trailing CRs from patch.) > patching file src/sys/f90-src/f90_cwrap.c > Hunk #1 FAILED at 307. > Hunk #2 FAILED at 333. > Hunk #3 FAILED at 436. > 3 out of 3 hunks FAILED -- saving rejects to file > src/sys/f90-src/f90_cwrap.c.rej > (Stripping trailing CRs from patch.) 
> patching file src/sys/f90-src/fsrc/f90_fwrap.F > Hunk #1 FAILED at 322. > Hunk #2 FAILED at 426. > 2 out of 2 hunks FAILED -- saving rejects to file > src/sys/f90-src/fsrc/f90_fwrap.F.rej/ > > > It is possible to send me the 3 patch files? Or is there some other ways? > > This doesn't make any sense. What petsc-dev do you have? Is it hg > repo? or something else? What revisions? Are the sources locally > modified? > > You can apply DMDAVecRestoreArrayF90-2.patch and then edit > src/sys/f90-src/f90_cwrap.c and replace all occurances of 'F90ARRAY4d' > with 'F90ARRAY4D' in the editor to get the same effect. > > Satish > > > I just tried but the same error occurs: > * > 1>D:\Lib\petsc-3.3-dev_win32_vs2008/include/finclude/ftn-custom/petscdmcomposite.h90(8): > warning #6717: This name has not been given an explicit type. [D1] > 1>Linking... > 1>libpetsc.lib(f90_cwrap.o) : error LNK2019: unresolved external symbol > F90ARRAY4dCREATESCALAR referenced in function F90Array4dCreate > 1>libpetsc.lib(f90_cwrap.o) : error LNK2019: unresolved external symbol > F90ARRAY4dACCESSFORTRANADDR referenced in function F90Array4dAccess > 1>libpetsc.lib(f90_cwrap.o) : error LNK2019: unresolved external symbol > F90ARRAY4dACCESSINT referenced in function F90Array4dAccess > 1>libpetsc.lib(f90_cwrap.o) : error LNK2019: unresolved external symbol > F90ARRAY4dACCESSREAL referenced in function F90Array4dAccess > 1>libpetsc.lib(f90_cwrap.o) : error LNK2019: unresolved external symbol > F90ARRAY4dACCESSSCALAR referenced in function F90Array4dAccess > 1>libpetsc.lib(f90_cwrap.o) : error LNK2019: unresolved external symbol > F90ARRAY4dDESTROYSCALAR referenced in function F90Array4dDestroy > 1>c:\obj_tmp\dm_test2d\Debug/dm_test2d.exe : fatal error LNK1120: 6 > unresolved externals > 1> > 1>Build log written to "file://c:\obj_tmp\dm_test2d\Debug\BuildLog.htm"* > > Any other solutions? > Look, if this stuff is not done systematically, if you do not know exactly what version of the code you have, and how to build, then it just generates an endless stream of emails without any progress. If you have a link error, unresolved symbol, it means it can't find the symbol in the library. Did you a) Look for the symbol in the library using nm? b) Check the build log (make.log) to see that all files built correctly? c) Check the source file for this function? Matt -- What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. -- Norbert Wiener -------------- next part -------------- An HTML attachment was scrubbed... URL: From bourdin at lsu.edu Wed Jul 11 16:11:22 2012 From: bourdin at lsu.edu (Blaise Bourdin) Date: Wed, 11 Jul 2012 16:11:22 -0500 Subject: [petsc-users] [petsc-dev] second patch for DMDAVecGetArrayF90 In-Reply-To: <4FFDE593.2070402@gmail.com> References: <78CA146A-C72F-481F-A861-C51FB3B5EB6C@lsu.edu> <3FAA48E2-59D7-45E5-AAE1-A40326C5454D@mcs.anl.gov> <4FFD439E.80905@gmail.com> <4FFD9656.5030707@gmail.com> <4FFD9A0A.2020406@gmail.com> <4FFDBA39.3040205@gmail.com> <4FFDE593.2070402@gmail.com> Message-ID: <3E2AD25B-E55B-4237-B3BC-4CC7220CAF12@lsu.edu> On Jul 11, 2012, at 3:44 PM, TAY wee-beng wrote: > On 11/7/2012 8:10 PM, Satish Balay wrote: >> On Wed, 11 Jul 2012, TAY wee-beng wrote: >> >>> I just tried on a different system and the same error occurs. 
The 1st command >>> : >>> / >>> / >>> >>> /patch -Np1 -R < DMDAVecRestoreArrayF90-2.patch >>> >>> $ patch -Np1 -R < DMDAVecRestoreArrayF90-2.patch >>> (Stripping trailing CRs from patch.) >>> patching file src/dm/impls/da/f90-custom/zda1f90.c >>> (Stripping trailing CRs from patch.) >>> patching file src/sys/f90-src/f90_cwrap.c >>> (Stripping trailing CRs from patch.) >>> patching file src/sys/f90-src/fsrc/f90_fwrap.F/ >>> >>> works but the 2nd one doesn't: >>> / >>> $ patch -Np1 < DMDAVecRestoreArrayF90-2.1.patch >>> (Stripping trailing CRs from patch.) >>> patching file src/dm/impls/da/f90-custom/zda1f90.c >>> Hunk #1 FAILED at 175. >>> 1 out of 1 hunk FAILED -- saving rejects to file >>> src/dm/impls/da/f90-custom/zda1f90.c.rej >>> (Stripping trailing CRs from patch.) >>> patching file src/sys/f90-src/f90_cwrap.c >>> Hunk #1 FAILED at 307. >>> Hunk #2 FAILED at 333. >>> Hunk #3 FAILED at 436. >>> 3 out of 3 hunks FAILED -- saving rejects to file >>> src/sys/f90-src/f90_cwrap.c.rej >>> (Stripping trailing CRs from patch.) >>> patching file src/sys/f90-src/fsrc/f90_fwrap.F >>> Hunk #1 FAILED at 322. >>> Hunk #2 FAILED at 426. >>> 2 out of 2 hunks FAILED -- saving rejects to file >>> src/sys/f90-src/fsrc/f90_fwrap.F.rej/ >>> >>> >>> It is possible to send me the 3 patch files? Or is there some other ways? >> This doesn't make any sense. What petsc-dev do you have? Is it hg >> repo? or something else? What revisions? Are the sources locally >> modified? >> >> You can apply DMDAVecRestoreArrayF90-2.patch and then edit >> src/sys/f90-src/f90_cwrap.c and replace all occurances of 'F90ARRAY4d' >> with 'F90ARRAY4D' in the editor to get the same effect. >> >> Satish > > I just tried but the same error occurs: > > 1>D:\Lib\petsc-3.3-dev_win32_vs2008/include/finclude/ftn-custom/petscdmcomposite.h90(8): warning #6717: This name has not been given an explicit type. [D1] > 1>Linking... > 1>libpetsc.lib(f90_cwrap.o) : error LNK2019: unresolved external symbol F90ARRAY4dCREATESCALAR referenced in function F90Array4dCreate > 1>libpetsc.lib(f90_cwrap.o) : error LNK2019: unresolved external symbol F90ARRAY4dACCESSFORTRANADDR referenced in function F90Array4dAccess > 1>libpetsc.lib(f90_cwrap.o) : error LNK2019: unresolved external symbol F90ARRAY4dACCESSINT referenced in function F90Array4dAccess > 1>libpetsc.lib(f90_cwrap.o) : error LNK2019: unresolved external symbol F90ARRAY4dACCESSREAL referenced in function F90Array4dAccess > 1>libpetsc.lib(f90_cwrap.o) : error LNK2019: unresolved external symbol F90ARRAY4dACCESSSCALAR referenced in function F90Array4dAccess > 1>libpetsc.lib(f90_cwrap.o) : error LNK2019: unresolved external symbol F90ARRAY4dDESTROYSCALAR referenced in function F90Array4dDestroy > 1>c:\obj_tmp\dm_test2d\Debug/dm_test2d.exe : fatal error LNK1120: 6 unresolved externals > 1> > 1>Build log written to "file://c:\obj_tmp\dm_test2d\Debug\BuildLog.htm" > > Any other solutions? There is something fishy in your tree or your build: the patch that Satish sent addressed this issue, but it clearly has not been applied to your tree. The patch I had originally sent contained an error. For instance, F90ARRAY4DACCESSFORTRANADDR was replaced by F90ARRAY4dACCESSFORTRANADDR (note the capitalization of the "d"). These are the symbols that are unresolved in your build. How about starting from a clean state: - rename your current petsc-dev tree - clone petsc-dev from the official tree (BTW, do you really need petsc-dev, and not 3.3?) 
- apply the patch Satish sent you or, quoting Satich >> edit src/sys/f90-src/f90_cwrap.c and replace all occurrences of 'F90ARRAY4d' with 'F90ARRAY4D' - run configure with your favorite options, then make This should fix your problem. Blaise -- Department of Mathematics and Center for Computation & Technology Louisiana State University, Baton Rouge, LA 70803, USA Tel. +1 (225) 578 1612, Fax +1 (225) 578 4276 http://www.math.lsu.edu/~bourdin -------------- next part -------------- An HTML attachment was scrubbed... URL: From bourdin at lsu.edu Wed Jul 11 16:12:32 2012 From: bourdin at lsu.edu (Blaise Bourdin) Date: Wed, 11 Jul 2012 16:12:32 -0500 Subject: [petsc-users] [petsc-dev] second patch for DMDAVecGetArrayF90 In-Reply-To: <3E2AD25B-E55B-4237-B3BC-4CC7220CAF12@lsu.edu> References: <78CA146A-C72F-481F-A861-C51FB3B5EB6C@lsu.edu> <3FAA48E2-59D7-45E5-AAE1-A40326C5454D@mcs.anl.gov> <4FFD439E.80905@gmail.com> <4FFD9656.5030707@gmail.com> <4FFD9A0A.2020406@gmail.com> <4FFDBA39.3040205@gmail.com> <4FFDE593.2070402@gmail.com> <3E2AD25B-E55B-4237-B3BC-4CC7220CAF12@lsu.edu> Message-ID: Rectification: clone petsc-3.3, not petsc-dev, of course... B > There is something fishy in your tree or your build: the patch that Satish sent addressed this issue, but it clearly has not been applied to your tree. The patch I had originally sent contained an error. For instance, F90ARRAY4DACCESSFORTRANADDR was replaced by F90ARRAY4dACCESSFORTRANADDR (note the capitalization of the "d"). These are the symbols that are unresolved in your build. > > > > How about starting from a clean state: > - rename your current petsc-dev tree > - clone petsc-dev from the official tree (BTW, do you really need petsc-dev, and not 3.3?) > - apply the patch Satish sent you or, quoting Satich >>> edit src/sys/f90-src/f90_cwrap.c and replace all occurrences of 'F90ARRAY4d' with 'F90ARRAY4D' > > - run configure with your favorite options, then make > > This should fix your problem. -- Department of Mathematics and Center for Computation & Technology Louisiana State University, Baton Rouge, LA 70803, USA Tel. +1 (225) 578 1612, Fax +1 (225) 578 4276 http://www.math.lsu.edu/~bourdin -------------- next part -------------- An HTML attachment was scrubbed... URL: From balay at mcs.anl.gov Wed Jul 11 16:22:27 2012 From: balay at mcs.anl.gov (Satish Balay) Date: Wed, 11 Jul 2012 16:22:27 -0500 (CDT) Subject: [petsc-users] [petsc-dev] second patch for DMDAVecGetArrayF90 In-Reply-To: References: <78CA146A-C72F-481F-A861-C51FB3B5EB6C@lsu.edu> <3FAA48E2-59D7-45E5-AAE1-A40326C5454D@mcs.anl.gov> <4FFD439E.80905@gmail.com> <4FFD9656.5030707@gmail.com> <4FFD9A0A.2020406@gmail.com> <4FFDBA39.3040205@gmail.com> <4FFDE593.2070402@gmail.com> <3E2AD25B-E55B-4237-B3BC-4CC7220CAF12@lsu.edu> Message-ID: I added the patch to both 3.3 and petsc-dev repos. Hopefully thats easier to use. Satish On Wed, 11 Jul 2012, Blaise Bourdin wrote: > Rectification: clone petsc-3.3, not petsc-dev, of course... > B > > > There is something fishy in your tree or your build: the patch that Satish sent addressed this issue, but it clearly has not been applied to your tree. The patch I had originally sent contained an error. For instance, F90ARRAY4DACCESSFORTRANADDR was replaced by F90ARRAY4dACCESSFORTRANADDR (note the capitalization of the "d"). These are the symbols that are unresolved in your build. 
> > > > > > > > How about starting from a clean state: > > - rename your current petsc-dev tree > > - clone petsc-dev from the official tree (BTW, do you really need petsc-dev, and not 3.3?) > > - apply the patch Satish sent you or, quoting Satich > >>> edit src/sys/f90-src/f90_cwrap.c and replace all occurrences of 'F90ARRAY4d' with 'F90ARRAY4D' > > > > - run configure with your favorite options, then make > > > > This should fix your problem. > > From bin.gao at uit.no Wed Jul 11 23:59:36 2012 From: bin.gao at uit.no (Gao Bin) Date: Thu, 12 Jul 2012 04:59:36 +0000 Subject: [petsc-users] EPSSolve does not converge for a few eigenvalues of large-scale matrix Message-ID: Hi, all I am trying to solve a few eigenvalues in a interval of a sparse Hermitian matrix (F) with the dimension as 2362. I know there are only 20 eigenvalues in the interval [-80.0,-70.0] for the matrix F. I choose the EPSARNOLDI method. The code looks like as follows. But after several hours running, EPS can not get converged eigenvalues. Therefore, may I ask if I did some wrong in the code? Or should I add more solver parameters for my problem? Thank you in advance. #include use slepceps #if defined(PETSC_USE_FORTRAN_DATATYPES) type(Mat) F type(EPS) solver #else Mat F EPS solver #endif ... ... ! creates eigensolver context call EPSCreate(PETSC_COMM_WORLD, solver, ierr) ! sets operators call EPSSetOperators(solver, F, PETSC_NULL_OBJECT, ierr) ! sets solver parameters call EPSSetProblemType(solver, EPS_HEP, ierr) call EPSSetWhichEigenpairs(solver, EPS_ALL, ierr) call EPSSetInterval(solver, -80.0, 70.0, ierr) call EPSSetType(solver, EPSARNOLDI, ierr) call EPSSetTolerances(solver, 1.0D-9, 90000, ierr) ! solve the eigensystem call EPSSolve(solver, ierr) Cheers Gao -------------- next part -------------- An HTML attachment was scrubbed... URL: From jroman at dsic.upv.es Thu Jul 12 03:09:21 2012 From: jroman at dsic.upv.es (Jose E. Roman) Date: Thu, 12 Jul 2012 10:09:21 +0200 Subject: [petsc-users] EPSSolve does not converge for a few eigenvalues of large-scale matrix In-Reply-To: References: Message-ID: <127C123A-04F9-4238-89F7-2C2CBBF5589D@dsic.upv.es> El 12/07/2012, a las 06:59, Gao Bin escribi?: > Hi, all > > I am trying to solve a few eigenvalues in a interval of a sparse Hermitian matrix (F) with the dimension as 2362. I know there are only 20 eigenvalues in the interval [-80.0,-70.0] for the matrix F. I choose the EPSARNOLDI method. The code looks like as follows. But after several hours running, EPS can not get converged eigenvalues. Therefore, may I ask if I did some wrong in the code? Or should I add more solver parameters for my problem? Thank you in advance. > > #include > use slepceps > #if defined(PETSC_USE_FORTRAN_DATATYPES) > type(Mat) F > type(EPS) solver > #else > Mat F > EPS solver > #endif > ... ... > ! creates eigensolver context > call EPSCreate(PETSC_COMM_WORLD, solver, ierr) > ! sets operators > call EPSSetOperators(solver, F, PETSC_NULL_OBJECT, ierr) > ! sets solver parameters > call EPSSetProblemType(solver, EPS_HEP, ierr) > call EPSSetWhichEigenpairs(solver, EPS_ALL, ierr) > call EPSSetInterval(solver, -80.0, 70.0, ierr) > call EPSSetType(solver, EPSARNOLDI, ierr) > call EPSSetTolerances(solver, 1.0D-9, 90000, ierr) > ! solve the eigensystem > call EPSSolve(solver, ierr) > > Cheers > > Gao For using computational intervals (EPSSetInterval) you must use Krylov-Schur and specify the relevant options for the spectral transformation. See section 3.4.5 in the manual. 
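(For reference, a minimal sketch of the setup Jose describes, assuming the SLEPc/PETSc 3.3 Fortran bindings and reusing the names from the snippet above -- solver and ierr; the st, ksp and pc handles and the choice of a Cholesky factorization are illustrative additions, not taken from the thread:)

      ST  st
      KSP ksp
      PC  pc

!     spectrum slicing requires Krylov-Schur rather than Arnoldi
      call EPSSetType(solver, EPSKRYLOVSCHUR, ierr)
      call EPSSetProblemType(solver, EPS_HEP, ierr)
      call EPSSetWhichEigenpairs(solver, EPS_ALL, ierr)
!     the interval [-80,-70] stated in the question (note both endpoints are negative)
      call EPSSetInterval(solver, -80.0d0, -70.0d0, ierr)

!     shift-and-invert spectral transformation backed by a direct Cholesky solve
      call EPSGetST(solver, st, ierr)
      call STSetType(st, STSINVERT, ierr)
      call STGetKSP(st, ksp, ierr)
      call KSPSetType(ksp, KSPPREONLY, ierr)
      call KSPGetPC(ksp, pc, ierr)
      call PCSetType(pc, PCCHOLESKY, ierr)

      call EPSSolve(solver, ierr)

The same selection can be made at run time with -eps_type krylovschur -st_type sinvert -st_ksp_type preonly -st_pc_type cholesky, provided EPSSetFromOptions() is called before EPSSolve().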
Jose From zonexo at gmail.com Thu Jul 12 15:44:52 2012 From: zonexo at gmail.com (TAY wee-beng) Date: Thu, 12 Jul 2012 22:44:52 +0200 Subject: [petsc-users] [petsc-dev] second patch for DMDAVecGetArrayF90 In-Reply-To: References: <78CA146A-C72F-481F-A861-C51FB3B5EB6C@lsu.edu> <3FAA48E2-59D7-45E5-AAE1-A40326C5454D@mcs.anl.gov> <4FFD439E.80905@gmail.com> <4FFD9656.5030707@gmail.com> <4FFD9A0A.2020406@gmail.com> <4FFDBA39.3040205@gmail.com> <4FFDE593.2070402@gmail.com> <3E2AD25B-E55B-4237-B3BC-4CC7220CAF12@lsu.edu> Message-ID: <4FFF3744.5040700@gmail.com> It's working now. Thanks again. Yours sincerely, TAY wee-beng On 11/7/2012 11:22 PM, Satish Balay wrote: > I added the patch to both 3.3 and petsc-dev repos. Hopefully thats easier to use. > > Satish > > On Wed, 11 Jul 2012, Blaise Bourdin wrote: > >> Rectification: clone petsc-3.3, not petsc-dev, of course... >> B >> >>> There is something fishy in your tree or your build: the patch that Satish sent addressed this issue, but it clearly has not been applied to your tree. The patch I had originally sent contained an error. For instance, F90ARRAY4DACCESSFORTRANADDR was replaced by F90ARRAY4dACCESSFORTRANADDR (note the capitalization of the "d"). These are the symbols that are unresolved in your build. >>> >>> >>> >>> How about starting from a clean state: >>> - rename your current petsc-dev tree >>> - clone petsc-dev from the official tree (BTW, do you really need petsc-dev, and not 3.3?) >>> - apply the patch Satish sent you or, quoting Satich >>>>> edit src/sys/f90-src/f90_cwrap.c and replace all occurrences of 'F90ARRAY4d' with 'F90ARRAY4D' >>> - run configure with your favorite options, then make >>> >>> This should fix your problem. >> From nowakmr at umich.edu Thu Jul 12 16:14:04 2012 From: nowakmr at umich.edu (Michael R Nowak) Date: Thu, 12 Jul 2012 17:14:04 -0400 Subject: [petsc-users] Slepc Matrix memory preallocation Message-ID: Hello, I'm setting up a matrix with Slepc, and manually setting the preallocation for nonzero entries row-by-row using the MatMPISBAIJSetPreallocation function as follows: MatMPISBAIJSetPreallocation(LaplaceOperator, 1, 0, nonzeroEntries, 0, PETSC_NULL); The problem with this is that the number of nonzero entries per row that I give Slepc is not the same as the number of nonzero entries that Slepc allocates space for, slepc always seems to allocate N more nonzeroes than needed, with N being the number of rows in the matrix. This is a problem for me, since I would like to reduce the number of unneeded nonzero entries to exactly zero, in order to save memory. The number of nonzero entries that I give in nonzeroEntries is exactly what is used, so why are extra nonzeroes being allocated? Any help with this would be much appreciated. 
Thanks, Mike Nowak From jedbrown at mcs.anl.gov Thu Jul 12 17:18:54 2012 From: jedbrown at mcs.anl.gov (Jed Brown) Date: Thu, 12 Jul 2012 17:18:54 -0500 Subject: [petsc-users] Slepc Matrix memory preallocation In-Reply-To: References: Message-ID: On Thu, Jul 12, 2012 at 4:14 PM, Michael R Nowak wrote: > I'm setting up a matrix with Slepc, and manually setting the preallocation > for nonzero entries row-by-row using the MatMPISBAIJSetPreallocation > function as follows: > > MatMPISBAIJSetPreallocation(LaplaceOperator, > 1, > 0, > nonzeroEntries, > 0, > PETSC_NULL); > > The problem with this is that the number of nonzero entries per row that I > give Slepc is not the same as the number of nonzero entries that Slepc > allocates space for, slepc always seems to allocate N more nonzeroes than > needed, > What do you mean by SLEPc allocating nonzeros? > with N being the number of rows in the matrix. This is a problem > for me, since I would like to reduce the number of unneeded nonzero entries > to exactly zero, in order to save memory. The number of nonzero entries > that I give in nonzeroEntries is exactly what is used, so why are extra > nonzeroes being allocated? Any help with this would be much appreciated. > What evidence do you have that this is happening? -------------- next part -------------- An HTML attachment was scrubbed... URL: From mrosso at uci.edu Fri Jul 13 13:34:57 2012 From: mrosso at uci.edu (Michele Rosso) Date: Fri, 13 Jul 2012 11:34:57 -0700 Subject: [petsc-users] Parallel Incomplete Choleski Factorization Message-ID: <50006A51.5090207@uci.edu> Hi, I need to use the ICC factorization as preconditioner, but I noticed that no parallel version is supported. Is that correct? If so, is there a work around, like building the preconditioner "by hand" by using PETSc functions? Thank you, Michele -------------- next part -------------- An HTML attachment was scrubbed... URL: From hzhang at mcs.anl.gov Fri Jul 13 14:14:22 2012 From: hzhang at mcs.anl.gov (Hong Zhang) Date: Fri, 13 Jul 2012 14:14:22 -0500 Subject: [petsc-users] Parallel Incomplete Choleski Factorization In-Reply-To: <50006A51.5090207@uci.edu> References: <50006A51.5090207@uci.edu> Message-ID: Michele : > > I need to use the ICC factorization as preconditioner, but I noticed that > no parallel version is supported. > Is that correct? > Correct. > If so, is there a work around, like building the preconditioner "by hand" > by using PETSc functions? > You may try block jacobi with icc in the blocks '-ksp_type cg -pc_type bjacobi -sub_pc_type icc' Hong > -------------- next part -------------- An HTML attachment was scrubbed... URL: From mrosso at uci.edu Fri Jul 13 15:44:17 2012 From: mrosso at uci.edu (Michele Rosso) Date: Fri, 13 Jul 2012 13:44:17 -0700 Subject: [petsc-users] Parallel Incomplete Choleski Factorization In-Reply-To: References: <50006A51.5090207@uci.edu> Message-ID: <500088A1.1050607@uci.edu> Thank you, Michele On 07/13/2012 12:14 PM, Hong Zhang wrote: > Michele : > > > I need to use the ICC factorization as preconditioner, but I > noticed that no parallel version is supported. > Is that correct? > > Correct. > > If so, is there a work around, like building the preconditioner > "by hand" by using PETSc functions? > > You may try block jacobi with icc in the blocks '-ksp_type cg > -pc_type bjacobi -sub_pc_type icc' > > Hong > > -------------- next part -------------- An HTML attachment was scrubbed... 
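(A sketch of the same idea set from source code instead of the command line, assuming a KSP handle named ksp that has already been created and given the matrix; the handle names here are illustrative, and the per-block ICC is passed through the options database because the Jacobi sub-solvers are only created during setup:)

      KSP ksp
      PC  pc

      call KSPSetType(ksp, KSPCG, ierr)
      call KSPGetPC(ksp, pc, ierr)
      call PCSetType(pc, PCBJACOBI, ierr)
!     equivalent of -sub_pc_type icc for the per-process blocks
      call PetscOptionsSetValue('-sub_pc_type', 'icc', ierr)
      call KSPSetFromOptions(ksp, ierr)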
URL: From t.hisch at gmail.com Sat Jul 14 02:41:10 2012 From: t.hisch at gmail.com (Thomas Hisch) Date: Sat, 14 Jul 2012 09:41:10 +0200 Subject: [petsc-users] Draw '2D' Vector (FFT) In-Reply-To: References: Message-ID: The Vec comes from a FFT object. As I'm not that familiar with PETSc DMs, is it possible to associate such a FFT Vec with a DM? The next thing is that after a FFT the axis coordintes change. Can this be handled as well by a DM ? Regards Thomas On Wed, Jul 11, 2012 at 10:37 PM, Jed Brown wrote: > On Wed, Jul 11, 2012 at 3:32 PM, Thomas Hisch wrote: >> >> Hello list, >> >> what is the easiest way to 'draw' (PETSC_VIEWER_DRAW_WORLD) a 2D plot >> of a petsc vector which was created with MatGetVecsFFTW()? The >> employed FFT matrix has DIM=2 and therefore the mentioned vector has >> N1*N2 components. As far as I know VecView(x, PETSC_VIEWER_DRAW_WORLD) >> only creates a lineplot. > > > Can the Vec be associated with a DM? If so, it will do a 2D plot. From jedbrown at mcs.anl.gov Sat Jul 14 11:21:15 2012 From: jedbrown at mcs.anl.gov (Jed Brown) Date: Sat, 14 Jul 2012 11:21:15 -0500 Subject: [petsc-users] Draw '2D' Vector (FFT) In-Reply-To: References: Message-ID: On Sat, Jul 14, 2012 at 2:41 AM, Thomas Hisch wrote: > The Vec comes from a FFT object. As I'm not that familiar with PETSc > DMs, is it possible to associate such a FFT Vec with a DM? The next > thing is that after a FFT the axis coordintes change. Can this be > handled as well by a DM ? > You'll need VecScatterFFTWToPetsc() to turn it into something that you can interpret. I guess that Vec will be in the natural ordering, so you can scatter once more to a DMDA. > > Regards > Thomas > > On Wed, Jul 11, 2012 at 10:37 PM, Jed Brown wrote: > > On Wed, Jul 11, 2012 at 3:32 PM, Thomas Hisch wrote: > >> > >> Hello list, > >> > >> what is the easiest way to 'draw' (PETSC_VIEWER_DRAW_WORLD) a 2D plot > >> of a petsc vector which was created with MatGetVecsFFTW()? The > >> employed FFT matrix has DIM=2 and therefore the mentioned vector has > >> N1*N2 components. As far as I know VecView(x, PETSC_VIEWER_DRAW_WORLD) > >> only creates a lineplot. > > > > > > Can the Vec be associated with a DM? If so, it will do a 2D plot. > -------------- next part -------------- An HTML attachment was scrubbed... URL: From pwu at mymail.mines.edu Sat Jul 14 14:30:59 2012 From: pwu at mymail.mines.edu (Panruo Wu) Date: Sat, 14 Jul 2012 13:30:59 -0600 Subject: [petsc-users] mutiple DA In-Reply-To: <5F7CFFE0-E3AB-4384-89E9-CC8A559FA269@mcs.anl.gov> References: <5F7CFFE0-E3AB-4384-89E9-CC8A559FA269@mcs.anl.gov> Message-ID: Thank you Barry! Panruo On Sat, Jul 7, 2012 at 3:10 PM, Barry Smith wrote: > > So long as you have the same boundary types and the same array sizes in > the i and j direction they give the same distribution. > > Barry > On Jul 7, 2012, at 3:58 PM, Panruo Wu wrote: > > > Hello, > > > > If I create 2 DAs with (almost) identical parameters except DA name > > and dof like: > > > > call DMDACreate2d(PETSC_COMM_WORLD, DMDA_BOUNDARY_GHOSTED, > > DMDA_BOUNDARY_GHOSTED, & > > stype, M, N, m, n, dof1, s & > > PETSC_NULL_INTEGER, PETSC_NULL_INTEGER, & > > da1, ierr) > > > > > > > > call DMDACreate2d(PETSC_COMM_WORLD, DMDA_BOUNDARY_GHOSTED, > > DMDA_BOUNDARY_GHOSTED, & > > stype, M, N, m, n, dof2, s & > > PETSC_NULL_INTEGER, PETSC_NULL_INTEGER, & > > da2, ierr) > > > > > > my question is, will the two DAs have the same distribution scheme? > > Specifically, > > will the DMDAGetCorners() give the same results when querying da1 & da2? 
> > > > Thanks, > > Panruo Wu > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From miguel.fosas at gmail.com Sat Jul 14 18:30:43 2012 From: miguel.fosas at gmail.com (Miguel Fosas) Date: Sun, 15 Jul 2012 01:30:43 +0200 Subject: [petsc-users] Shell matrices and complex NHEP SVD problems Message-ID: Hi everyone, I have a question concerning the usage of shell matrices with SLEPc and the SVD solvers. When I compute the largest singular values from a complex non-hermitian matrix A with SLEPc, the solver asks the user to provide MATOP_MULT and MATOP_MULT_TRANSPOSE operations for A. As I have observed in the documentation of SLEPc, both the cyclic and cross solvers require matrix-vector product of A* v, with A* the complex conjugate of A. After having taken a look at the source code, it is not clear to me how the complex case is handled. The question is: for the complex case, does the implementation of MatMultTranspose have to compute A* v or A^T v? Is there a reason why it is not implemented using MatMultHermitianTranspose? Thanks in advance, Miguel. From knepley at gmail.com Sat Jul 14 18:41:56 2012 From: knepley at gmail.com (Matthew Knepley) Date: Sat, 14 Jul 2012 18:41:56 -0500 Subject: [petsc-users] Shell matrices and complex NHEP SVD problems In-Reply-To: References: Message-ID: On Sat, Jul 14, 2012 at 6:30 PM, Miguel Fosas wrote: > Hi everyone, > > I have a question concerning the usage of shell matrices with SLEPc > and the SVD solvers. When I compute the largest singular values from a > complex non-hermitian matrix A with SLEPc, the solver asks the user to > provide MATOP_MULT and MATOP_MULT_TRANSPOSE operations for A. > > As I have observed in the documentation of SLEPc, both the cyclic and > cross solvers require matrix-vector product of A* v, with A* the > complex conjugate of A. After having taken a look at the source code, > it is not clear to me how the complex case is handled. > > The question is: for the complex case, does the implementation of > MatMultTranspose have to compute A* v or A^T v? Is there a reason why > it is not implemented using MatMultHermitianTranspose? > I believe it is done this way because we originally did not distinguish (they were not there in 2009). It should probably be updated. Matt > Thanks in advance, > > Miguel. > -- What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. -- Norbert Wiener -------------- next part -------------- An HTML attachment was scrubbed... URL: From jroman at dsic.upv.es Sun Jul 15 03:35:47 2012 From: jroman at dsic.upv.es (Jose E. Roman) Date: Sun, 15 Jul 2012 10:35:47 +0200 Subject: [petsc-users] Shell matrices and complex NHEP SVD problems In-Reply-To: References: Message-ID: <20147D14-4D06-4A59-ACBD-E9354414D01E@dsic.upv.es> El 15/07/2012, a las 01:41, Matthew Knepley escribi?: > On Sat, Jul 14, 2012 at 6:30 PM, Miguel Fosas wrote: > Hi everyone, > > I have a question concerning the usage of shell matrices with SLEPc > and the SVD solvers. When I compute the largest singular values from a > complex non-hermitian matrix A with SLEPc, the solver asks the user to > provide MATOP_MULT and MATOP_MULT_TRANSPOSE operations for A. > > As I have observed in the documentation of SLEPc, both the cyclic and > cross solvers require matrix-vector product of A* v, with A* the > complex conjugate of A. 
After having taken a look at the source code, > it is not clear to me how the complex case is handled. > > The question is: for the complex case, does the implementation of > MatMultTranspose have to compute A* v or A^T v? Is there a reason why > it is not implemented using MatMultHermitianTranspose? > > I believe it is done this way because we originally did not distinguish (they were not there in 2009). It > should probably be updated. > > Matt > > Thanks in advance, > > Miguel. > True. I will change it for the release. Thanks. By the way, the MatOperation names are not very consistent: MATOP_MULT_TRANSPOSE and MATOP_MULTHERMITIANTRANSPOSE. Shouldn't it be MATOP_MULT_HERMITIAN_TRANSPOSE? Jose From bsmith at mcs.anl.gov Sun Jul 15 22:17:21 2012 From: bsmith at mcs.anl.gov (Barry Smith) Date: Sun, 15 Jul 2012 22:17:21 -0500 Subject: [petsc-users] Shell matrices and complex NHEP SVD problems In-Reply-To: <20147D14-4D06-4A59-ACBD-E9354414D01E@dsic.upv.es> References: <20147D14-4D06-4A59-ACBD-E9354414D01E@dsic.upv.es> Message-ID: <294AEEBF-2836-474F-B107-0BE4AC288153@mcs.anl.gov> On Jul 15, 2012, at 3:35 AM, Jose E. Roman wrote: > > > By the way, the MatOperation names are not very consistent: MATOP_MULT_TRANSPOSE and MATOP_MULTHERMITIANTRANSPOSE. Shouldn't it be MATOP_MULT_HERMITIAN_TRANSPOSE? > > Jose > It seems some PETSc developer thinks they can just make up arbitrary abbreviations and everyone else in the world will instantly know what crazy thing they chose. I have fixed some of them but many more to do. Barry The model is in the MATOP there is a _ introduced before each Capital letter and the result is truncated to 31 characters (completely mechanical). The model is not just write something that you think looks good. From pwu at mymail.mines.edu Sun Jul 15 23:20:48 2012 From: pwu at mymail.mines.edu (Panruo Wu) Date: Sun, 15 Jul 2012 22:20:48 -0600 Subject: [petsc-users] array still valid after DMDAVecRestoreArrayF90()? Message-ID: Hello, For some reason, it would be convenient for me to do something like call DMDAVecGetArrayF90(da, global, array, ierr) call DMDAVecGetArrayF90(da, global, array, ierr) ! read-only access to array here. call DMDAVecRestoreArrayF90(da, global, array,ierr) ! read&write access to array here. call DMDAVecRestoreArrayF90(da, global, array, ierr) Will this piece of code do what I expect? I wrote a simply program which gave positive answer but I want to know if it's guaranteed to work or only happened to be working. Thanks! Panruo Wu On Sat, Jul 14, 2012 at 1:30 PM, Panruo Wu wrote: > Thank you Barry! > > Panruo > > > On Sat, Jul 7, 2012 at 3:10 PM, Barry Smith wrote: > >> >> So long as you have the same boundary types and the same array sizes >> in the i and j direction they give the same distribution. >> >> Barry >> On Jul 7, 2012, at 3:58 PM, Panruo Wu wrote: >> >> > Hello, >> > >> > If I create 2 DAs with (almost) identical parameters except DA name >> > and dof like: >> > >> > call DMDACreate2d(PETSC_COMM_WORLD, DMDA_BOUNDARY_GHOSTED, >> > DMDA_BOUNDARY_GHOSTED, & >> > stype, M, N, m, n, dof1, s & >> > PETSC_NULL_INTEGER, PETSC_NULL_INTEGER, & >> > da1, ierr) >> > >> > >> > >> > call DMDACreate2d(PETSC_COMM_WORLD, DMDA_BOUNDARY_GHOSTED, >> > DMDA_BOUNDARY_GHOSTED, & >> > stype, M, N, m, n, dof2, s & >> > PETSC_NULL_INTEGER, PETSC_NULL_INTEGER, & >> > da2, ierr) >> > >> > >> > my question is, will the two DAs have the same distribution scheme? >> > Specifically, >> > will the DMDAGetCorners() give the same results when querying da1 & da2? 
>> > >> > Thanks, >> > Panruo Wu >> >> > -------------- next part -------------- An HTML attachment was scrubbed... URL: From jedbrown at mcs.anl.gov Sun Jul 15 23:39:52 2012 From: jedbrown at mcs.anl.gov (Jed Brown) Date: Sun, 15 Jul 2012 23:39:52 -0500 Subject: [petsc-users] array still valid after DMDAVecRestoreArrayF90()? In-Reply-To: References: Message-ID: On Sun, Jul 15, 2012 at 11:20 PM, Panruo Wu wrote: > Hello, > > For some reason, it would be convenient for me to do something > like > > call DMDAVecGetArrayF90(da, global, array, ierr) > > call DMDAVecGetArrayF90(da, global, array, ierr) > ! read-only access to array here. > call DMDAVecRestoreArrayF90(da, global, array,ierr) > ! read&write access to array here. > call DMDAVecRestoreArrayF90(da, global, array, ierr) > No, it is not okay to lie about the array status. > > Will this piece of code do what I expect? I wrote a simply > program which gave positive answer but I want to know if > it's guaranteed to work or only happened to be working. > > Thanks! > Panruo Wu > > On Sat, Jul 14, 2012 at 1:30 PM, Panruo Wu wrote: > >> Thank you Barry! >> >> Panruo >> >> >> On Sat, Jul 7, 2012 at 3:10 PM, Barry Smith wrote: >> >>> >>> So long as you have the same boundary types and the same array sizes >>> in the i and j direction they give the same distribution. >>> >>> Barry >>> On Jul 7, 2012, at 3:58 PM, Panruo Wu wrote: >>> >>> > Hello, >>> > >>> > If I create 2 DAs with (almost) identical parameters except DA name >>> > and dof like: >>> > >>> > call DMDACreate2d(PETSC_COMM_WORLD, DMDA_BOUNDARY_GHOSTED, >>> > DMDA_BOUNDARY_GHOSTED, & >>> > stype, M, N, m, n, dof1, s & >>> > PETSC_NULL_INTEGER, PETSC_NULL_INTEGER, & >>> > da1, ierr) >>> > >>> > >>> > >>> > call DMDACreate2d(PETSC_COMM_WORLD, DMDA_BOUNDARY_GHOSTED, >>> > DMDA_BOUNDARY_GHOSTED, & >>> > stype, M, N, m, n, dof2, s & >>> > PETSC_NULL_INTEGER, PETSC_NULL_INTEGER, & >>> > da2, ierr) >>> > >>> > >>> > my question is, will the two DAs have the same distribution scheme? >>> > Specifically, >>> > will the DMDAGetCorners() give the same results when querying da1 & >>> da2? >>> > >>> > Thanks, >>> > Panruo Wu >>> >>> >> > -------------- next part -------------- An HTML attachment was scrubbed... URL: From fd.kong at siat.ac.cn Mon Jul 16 01:14:48 2012 From: fd.kong at siat.ac.cn (=?ISO-8859-1?B?ZmRrb25n?=) Date: Mon, 16 Jul 2012 14:14:48 +0800 Subject: [petsc-users] How to correctly run snes/ex62.c? Message-ID: Hi guys, How to correctly run snes/ex62.c with DMcomplex? Could you please give a script sample (e.g. mpiexec -np 2 ./ex62 and what ?) ? And, could you please explain scripts "bin/pythonscripts/PetscGenerateFEMQuadrature.py dim order dim 1 laplacian dim order 1 1 gradient"? Regards, ------------------ Fande Kong ShenZhen Institutes of Advanced Technology Chinese Academy of Sciences -------------- next part -------------- An HTML attachment was scrubbed... URL: From shitij.cse at gmail.com Mon Jul 16 03:23:31 2012 From: shitij.cse at gmail.com (Shitij Bhargava) Date: Mon, 16 Jul 2012 13:53:31 +0530 Subject: [petsc-users] Problem using MatSetValue with SeqAIJ. Values not being inserted. Message-ID: Hi all !! I am using Fortan and petsc 3.2-p7. 
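For the Get/Restore question above, the supported pattern is a single Get/Restore pair around all accesses, read and write alike. A sketch using the names from that mail (da, global, array, ierr):

    PetscScalar,pointer :: array(:,:)
    call DMDAVecGetArrayF90(da, global, array, ierr)
    ! all read and write access to array goes here, between one Get and one Restore
    call DMDAVecRestoreArrayF90(da, global, array, ierr)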
I am creating the array as: *call MatCreateSeqAIJ(PETSC_COMM_SELF,natoms,natoms,PETSC_DECIDE,PETSC_NULL,bonds,ierr) * And then inserting elements as: *call MatSetValue(bonds,temp_int1-1,temp_int2-1,1,INSERT_VALUES,ierr)* I even did: *call MatSetValue(bonds,1,8,35,INSERT_VALUES,ierr)* then I finalize the matrix by these: *call MatAssemblyBegin(bonds,MAT_FINAL_ASSEMBLY,ierr)* *call MatAssemblyEnd(bonds,MAT_FINAL_ASSEMBLY,ierr)* But none of these values show up when I view the matrix by this: *call MatView(bonds,PETSC_VIEWER_STDOUT_SELF,ierr)* It's output is the following: Matrix Object: 1 MPI processes type: seqaij row 0: (0, 2.47033e-323) (1, 2.8034e-85) (4, 2.8034e-85) (5, 2.8034e-85) (6, 2.8034e-85) row 1: (0, 2.8034e-85) (2, 2.8034e-85) (7, 2.8034e-85) (8, 1.6976e-313) row 2: (1, 2.8034e-85) (3, 2.8034e-85) (9, 2.8034e-85) row 3: (2, 2.8034e-85) row 4: (0, 2.8034e-85) row 5: (0, 2.8034e-85) row 6: (0, 2.8034e-85) row 7: (1, 2.8034e-85) row 8: (1, 2.8034e-85) .............. Why are the values not being inserted? (Even at (1,8), 35 is not there.) Thank you in advance !! Shitij -------------- next part -------------- An HTML attachment was scrubbed... URL: From knepley at gmail.com Mon Jul 16 06:28:37 2012 From: knepley at gmail.com (Matthew Knepley) Date: Mon, 16 Jul 2012 06:28:37 -0500 Subject: [petsc-users] How to correctly run snes/ex62.c? In-Reply-To: References: Message-ID: On Mon, Jul 16, 2012 at 1:14 AM, fdkong wrote: > Hi guys, > > How to correctly run snes/ex62.c with DMcomplex? Could you please give a > script sample (e.g. mpiexec -np 2 ./ex62 and what ?) ? And, could you > please explain scripts "bin/pythonscripts/PetscGenerateFEMQuadrature.py dim > order dim 1 laplacian dim order 1 1 gradient"? > The full set of tests for ex62 is in $PETSC_DIR/config/builder.py:215. If you use the Python build: python2.7 ./config/builder2.py build you can run them all python2.7 ./config/builder2.py check src/snes/examples/tutorials/ex62.c or just one python2.7 ./config/builder2.py check src/snes/examples/tutorials/ex62.c --testnum=0 and there is documentation for the options with --help. The PetscGenerateFEMQuadrature.py script generates headers which contain the evaluation of the basis functions and derivatives at the quadrature points. Matt > Regards, > ** > ------------------ > Fande Kong > ShenZhen Institutes of Advanced Technology > Chinese Academy of Sciences > ** > > -- What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. -- Norbert Wiener -------------- next part -------------- An HTML attachment was scrubbed... URL: From knepley at gmail.com Mon Jul 16 06:42:17 2012 From: knepley at gmail.com (Matthew Knepley) Date: Mon, 16 Jul 2012 06:42:17 -0500 Subject: [petsc-users] Problem using MatSetValue with SeqAIJ. Values not being inserted. In-Reply-To: References: Message-ID: On Mon, Jul 16, 2012 at 3:23 AM, Shitij Bhargava wrote: > Hi all !! > > I am using Fortan and petsc 3.2-p7. 
> > I am creating the array as: > *call > MatCreateSeqAIJ(PETSC_COMM_SELF,natoms,natoms,PETSC_DECIDE,PETSC_NULL,bonds,ierr) > * > > And then inserting elements as: > *call MatSetValue(bonds,temp_int1-1,temp_int2-1,1,INSERT_VALUES,ierr)* > > I even did: > *call MatSetValue(bonds,1,8,35,INSERT_VALUES,ierr)* > > then I finalize the matrix by these: > *call MatAssemblyBegin(bonds,MAT_FINAL_ASSEMBLY,ierr)* > *call MatAssemblyEnd(bonds,MAT_FINAL_ASSEMBLY,ierr)* > > But none of these values show up when I view the matrix by this: > *call MatView(bonds,PETSC_VIEWER_STDOUT_SELF,ierr)* > > It's output is the following: > Matrix Object: 1 MPI processes > type: seqaij > row 0: (0, 2.47033e-323) (1, 2.8034e-85) (4, 2.8034e-85) (5, > 2.8034e-85) (6, 2.8034e-85) > row 1: (0, 2.8034e-85) (2, 2.8034e-85) (7, 2.8034e-85) (8, 1.6976e-313) > row 2: (1, 2.8034e-85) (3, 2.8034e-85) (9, 2.8034e-85) > row 3: (2, 2.8034e-85) > row 4: (0, 2.8034e-85) > row 5: (0, 2.8034e-85) > row 6: (0, 2.8034e-85) > row 7: (1, 2.8034e-85) > row 8: (1, 2.8034e-85) > .............. > > Why are the values not being inserted? (Even at (1,8), 35 is not there.) > Because the values are the wrong type, and Fortran does not know how to coerce them correctly (typing is weak in Fortran). Take a look at http://www.mcs.anl.gov/petsc/petsc-current/src/ksp/ksp/examples/tutorials/ex2f.F.html Matt > Thank you in advance !! > > Shitij > > -- What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. -- Norbert Wiener -------------- next part -------------- An HTML attachment was scrubbed... URL: From t.hisch at gmail.com Mon Jul 16 15:35:42 2012 From: t.hisch at gmail.com (Thomas Hisch) Date: Mon, 16 Jul 2012 22:35:42 +0200 Subject: [petsc-users] Draw '2D' Vector (FFT) In-Reply-To: References: Message-ID: On Sat, Jul 14, 2012 at 6:21 PM, Jed Brown wrote: > On Sat, Jul 14, 2012 at 2:41 AM, Thomas Hisch wrote: >> >> The Vec comes from a FFT object. As I'm not that familiar with PETSc >> DMs, is it possible to associate such a FFT Vec with a DM? The next >> thing is that after a FFT the axis coordintes change. Can this be >> handled as well by a DM ? > > > You'll need VecScatterFFTWToPetsc() to turn it into something that you can > interpret. I guess that Vec will be in the natural ordering, so you can > scatter once more to a DMDA. > Thx for the answer! Due to a lack of time I will try to employ VecScatterFFTWToPetsc in my programs in a few weeks. BTW, do you plan to add FFT docs to the petsc manual in the near future? Regards Thomas From t.hisch at gmail.com Mon Jul 16 15:47:34 2012 From: t.hisch at gmail.com (Thomas Hisch) Date: Mon, 16 Jul 2012 22:47:34 +0200 Subject: [petsc-users] Store only Matrix Structure (Binary) Message-ID: Hello list, is it possible to store just the nonzero pattern of a large (sparse) matrix in a file? If I write the whole matrix to disk (PetscViewerBinaryOpen/MatView) its size is about 30MB. As I only need its structure it would be a good idea to load and store just the non-zero pattern of the matrix. Has anyone tried to do that and are there corresponding functions in petsc-dev? Regards Thomas From jedbrown at mcs.anl.gov Mon Jul 16 15:55:12 2012 From: jedbrown at mcs.anl.gov (Jed Brown) Date: Mon, 16 Jul 2012 15:55:12 -0500 Subject: [petsc-users] Draw '2D' Vector (FFT) In-Reply-To: References: Message-ID: On Mon, Jul 16, 2012 at 3:35 PM, Thomas Hisch wrote: > Thx for the answer! 
Due to a lack of time I will try to employ > VecScatterFFTWToPetsc in my programs in a few weeks. > > BTW, do you plan to add FFT docs to the petsc manual in the near future? > I don't know, hopefully the people that worked on that stuff will document it. -------------- next part -------------- An HTML attachment was scrubbed... URL: From t.hisch at gmail.com Mon Jul 16 16:01:11 2012 From: t.hisch at gmail.com (Thomas Hisch) Date: Mon, 16 Jul 2012 23:01:11 +0200 Subject: [petsc-users] Purpose of --with-boost Message-ID: Is there any benefit to the user if he enables boost support in petsc with the --with-boost configure option ? What is the difference between a petsc build with enabled boost support and a build without boost support? I can't find any information about this neither in the manual nor on the web. Regards Thomas From balay at mcs.anl.gov Mon Jul 16 16:35:17 2012 From: balay at mcs.anl.gov (Satish Balay) Date: Mon, 16 Jul 2012 16:35:17 -0500 (CDT) Subject: [petsc-users] Purpose of --with-boost In-Reply-To: References: Message-ID: On Mon, 16 Jul 2012, Thomas Hisch wrote: > Is there any benefit to the user if he enables boost support in petsc > with the --with-boost configure option ? What is the difference > between a petsc build with enabled boost support and a build without > boost support? I can't find any information about this neither in the > manual nor on the web. boost primarily required by 'sieve' part of PETSc [which requires a 'c++' build]. But this functionality is being relaced by DMComplex [a 'c' implementation] Satish From zonexo at gmail.com Mon Jul 16 16:42:49 2012 From: zonexo at gmail.com (TAY wee-beng) Date: Mon, 16 Jul 2012 23:42:49 +0200 Subject: [petsc-users] DM global/ local vectors and DMDAVecGetArrayF90 indics Message-ID: <50048AD9.8060804@gmail.com> Hi, I have 2 questions. When I use DMCreateGlobalVector and DMCreateLocalVector with Vec pressure_global, pressure_local call DMCreateGlobalVector(da_dof1,pressure_global,ierr) call DMCreateLocalVector(da_dof1,pressure_local,ierr) Are the pressure_global and pressure_local using the same memory space, or are they seperate? Another thing is in Fortran, DMDAVecGetArrayF90 gives an array which starts from 0. However, my Fortran array usually starts from 1. Is there any simple ways to have the array starting from 1 instead? Thanks -- Yours sincerely, TAY wee-beng From knepley at gmail.com Mon Jul 16 16:44:00 2012 From: knepley at gmail.com (Matthew Knepley) Date: Mon, 16 Jul 2012 16:44:00 -0500 Subject: [petsc-users] Store only Matrix Structure (Binary) In-Reply-To: References: Message-ID: On Mon, Jul 16, 2012 at 3:47 PM, Thomas Hisch wrote: > Hello list, > > is it possible to store just the nonzero pattern of a large (sparse) > matrix in a file? If I write the whole matrix to disk > (PetscViewerBinaryOpen/MatView) its size is about 30MB. As I only need > its structure it would be a good idea to load and store just the > non-zero pattern of the matrix. > > Has anyone tried to do that and are there corresponding functions in > petsc-dev? > This is not in PETSc currrently. It would not be hard to do by commenting out the value parts in the source. Matt > Regards > Thomas > -- What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. -- Norbert Wiener -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From knepley at gmail.com Mon Jul 16 17:27:48 2012 From: knepley at gmail.com (Matthew Knepley) Date: Mon, 16 Jul 2012 17:27:48 -0500 Subject: [petsc-users] DM global/ local vectors and DMDAVecGetArrayF90 indics In-Reply-To: <50048AD9.8060804@gmail.com> References: <50048AD9.8060804@gmail.com> Message-ID: On Mon, Jul 16, 2012 at 4:42 PM, TAY wee-beng wrote: > Hi, > > I have 2 questions. > > When I use DMCreateGlobalVector and DMCreateLocalVector with > > Vec pressure_global, pressure_local > > call DMCreateGlobalVector(da_dof1,**pressure_global,ierr) > > call DMCreateLocalVector(da_dof1,**pressure_local,ierr) > > Are the pressure_global and pressure_local using the same memory space, or > are they seperate? > Separate. > Another thing is in Fortran, DMDAVecGetArrayF90 gives an array which > starts from 0. However, my Fortran array usually starts from 1. Is there > any simple ways to have the array starting from 1 instead? > Its not simple. Matt > Thanks > > -- > Yours sincerely, > > TAY wee-beng > > -- What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. -- Norbert Wiener -------------- next part -------------- An HTML attachment was scrubbed... URL: From pwu at mymail.mines.edu Mon Jul 16 20:15:48 2012 From: pwu at mymail.mines.edu (Panruo Wu) Date: Mon, 16 Jul 2012 19:15:48 -0600 Subject: [petsc-users] array still valid after DMDAVecRestoreArrayF90()? In-Reply-To: References: Message-ID: How about in the inner block use a different array name like array2? Panruo Wu On Sun, Jul 15, 2012 at 10:39 PM, Jed Brown wrote: > On Sun, Jul 15, 2012 at 11:20 PM, Panruo Wu wrote: > >> Hello, >> >> For some reason, it would be convenient for me to do something >> like >> >> call DMDAVecGetArrayF90(da, global, array, ierr) >> >> call DMDAVecGetArrayF90(da, global, array, ierr) >> ! read-only access to array here. >> call DMDAVecRestoreArrayF90(da, global, array,ierr) >> ! read&write access to array here. >> call DMDAVecRestoreArrayF90(da, global, array, ierr) >> > > No, it is not okay to lie about the array status. > > >> >> Will this piece of code do what I expect? I wrote a simply >> program which gave positive answer but I want to know if >> it's guaranteed to work or only happened to be working. >> >> Thanks! >> Panruo Wu >> >> On Sat, Jul 14, 2012 at 1:30 PM, Panruo Wu wrote: >> >>> Thank you Barry! >>> >>> Panruo >>> >>> >>> On Sat, Jul 7, 2012 at 3:10 PM, Barry Smith wrote: >>> >>>> >>>> So long as you have the same boundary types and the same array sizes >>>> in the i and j direction they give the same distribution. >>>> >>>> Barry >>>> On Jul 7, 2012, at 3:58 PM, Panruo Wu wrote: >>>> >>>> > Hello, >>>> > >>>> > If I create 2 DAs with (almost) identical parameters except DA name >>>> > and dof like: >>>> > >>>> > call DMDACreate2d(PETSC_COMM_WORLD, DMDA_BOUNDARY_GHOSTED, >>>> > DMDA_BOUNDARY_GHOSTED, & >>>> > stype, M, N, m, n, dof1, s & >>>> > PETSC_NULL_INTEGER, PETSC_NULL_INTEGER, & >>>> > da1, ierr) >>>> > >>>> > >>>> > >>>> > call DMDACreate2d(PETSC_COMM_WORLD, DMDA_BOUNDARY_GHOSTED, >>>> > DMDA_BOUNDARY_GHOSTED, & >>>> > stype, M, N, m, n, dof2, s & >>>> > PETSC_NULL_INTEGER, PETSC_NULL_INTEGER, & >>>> > da2, ierr) >>>> > >>>> > >>>> > my question is, will the two DAs have the same distribution scheme? >>>> > Specifically, >>>> > will the DMDAGetCorners() give the same results when querying da1 & >>>> da2? 
>>>> > >>>> > Thanks, >>>> > Panruo Wu >>>> >>>> >>> >> > -------------- next part -------------- An HTML attachment was scrubbed... URL: From jedbrown at mcs.anl.gov Mon Jul 16 20:20:06 2012 From: jedbrown at mcs.anl.gov (Jed Brown) Date: Mon, 16 Jul 2012 20:20:06 -0500 Subject: [petsc-users] array still valid after DMDAVecRestoreArrayF90()? In-Reply-To: References: Message-ID: No, you should only Get an array once. On Jul 16, 2012 8:15 PM, "Panruo Wu" wrote: > How about in the inner block use a different array name > like array2? > > Panruo Wu > > On Sun, Jul 15, 2012 at 10:39 PM, Jed Brown wrote: > >> On Sun, Jul 15, 2012 at 11:20 PM, Panruo Wu wrote: >> >>> Hello, >>> >>> For some reason, it would be convenient for me to do something >>> like >>> >>> call DMDAVecGetArrayF90(da, global, array, ierr) >>> >>> call DMDAVecGetArrayF90(da, global, array, ierr) >>> ! read-only access to array here. >>> call DMDAVecRestoreArrayF90(da, global, array,ierr) >>> ! read&write access to array here. >>> call DMDAVecRestoreArrayF90(da, global, array, ierr) >>> >> >> No, it is not okay to lie about the array status. >> >> >>> >>> Will this piece of code do what I expect? I wrote a simply >>> program which gave positive answer but I want to know if >>> it's guaranteed to work or only happened to be working. >>> >>> Thanks! >>> Panruo Wu >>> >>> On Sat, Jul 14, 2012 at 1:30 PM, Panruo Wu wrote: >>> >>>> Thank you Barry! >>>> >>>> Panruo >>>> >>>> >>>> On Sat, Jul 7, 2012 at 3:10 PM, Barry Smith wrote: >>>> >>>>> >>>>> So long as you have the same boundary types and the same array >>>>> sizes in the i and j direction they give the same distribution. >>>>> >>>>> Barry >>>>> On Jul 7, 2012, at 3:58 PM, Panruo Wu wrote: >>>>> >>>>> > Hello, >>>>> > >>>>> > If I create 2 DAs with (almost) identical parameters except DA name >>>>> > and dof like: >>>>> > >>>>> > call DMDACreate2d(PETSC_COMM_WORLD, DMDA_BOUNDARY_GHOSTED, >>>>> > DMDA_BOUNDARY_GHOSTED, & >>>>> > stype, M, N, m, n, dof1, s & >>>>> > PETSC_NULL_INTEGER, PETSC_NULL_INTEGER, & >>>>> > da1, ierr) >>>>> > >>>>> > >>>>> > >>>>> > call DMDACreate2d(PETSC_COMM_WORLD, DMDA_BOUNDARY_GHOSTED, >>>>> > DMDA_BOUNDARY_GHOSTED, & >>>>> > stype, M, N, m, n, dof2, s & >>>>> > PETSC_NULL_INTEGER, PETSC_NULL_INTEGER, & >>>>> > da2, ierr) >>>>> > >>>>> > >>>>> > my question is, will the two DAs have the same distribution scheme? >>>>> > Specifically, >>>>> > will the DMDAGetCorners() give the same results when querying da1 & >>>>> da2? >>>>> > >>>>> > Thanks, >>>>> > Panruo Wu >>>>> >>>>> >>>> >>> >> > -------------- next part -------------- An HTML attachment was scrubbed... URL: From t.hisch at gmail.com Tue Jul 17 02:09:17 2012 From: t.hisch at gmail.com (Thomas Hisch) Date: Tue, 17 Jul 2012 09:09:17 +0200 Subject: [petsc-users] Purpose of --with-boost In-Reply-To: References: Message-ID: On Mon, Jul 16, 2012 at 11:35 PM, Satish Balay wrote: > On Mon, 16 Jul 2012, Thomas Hisch wrote: > >> Is there any benefit to the user if he enables boost support in petsc >> with the --with-boost configure option ? What is the difference >> between a petsc build with enabled boost support and a build without >> boost support? I can't find any information about this neither in the >> manual nor on the web. > > boost primarily required by 'sieve' part of PETSc [which requires a 'c++' > build]. 
But this functionality is being relaced by DMComplex [a 'c' > implementation] > Does this mean that if I don't need the sieve part then compiling with or without boost enabled does not make any difference for me ? From zonexo at gmail.com Tue Jul 17 02:30:05 2012 From: zonexo at gmail.com (TAY wee-beng) Date: Tue, 17 Jul 2012 09:30:05 +0200 Subject: [petsc-users] Using DMDAVecGetArrayF90 and PetscInt Message-ID: <5005147D.5070704@gmail.com> Hi, Can PetscInt be used with DMDAVecGetArrayF90 and DMDAVecRestoreArrayF90 ? If I use : /PetscScalar,pointer :: types(:,:) call DMDAVecGetArrayF90(da_dof1,type_local,types,ierr) call DMDAVecRestoreArrayF90(da_dof1,type_local,types,ierr)/ It worked. However if I use : /PetscInt,pointer :: types(:,:) call DMDAVecGetArrayF90(da_dof1,type_local,types,ierr) call DMDAVecRestoreArrayF90(da_dof1,type_local,types,ierr)/ It gives error saying : Error: There is no matching specific subroutine for this generic subroutine call However I would like my "types" to be integer. How can that be done? -- Yours sincerely, TAY wee-beng -------------- next part -------------- An HTML attachment was scrubbed... URL: From B.Sanderse at cwi.nl Tue Jul 17 03:23:12 2012 From: B.Sanderse at cwi.nl (Benjamin Sanderse) Date: Tue, 17 Jul 2012 10:23:12 +0200 Subject: [petsc-users] ML - zero pivot error In-Reply-To: <0422B860-8CBC-40BB-986F-0B128E98C85B@mcs.anl.gov> References: <69AD671D-5C83-4A1A-9100-6E5ACDA29224@erdw.ethz.ch> <0422B860-8CBC-40BB-986F-0B128E98C85B@mcs.anl.gov> Message-ID: Hello all, I am trying to use ML to solve a Poisson equation with Neumann BC (singular matrix) and found the thread below to avoid a zero pivot error. I used the option that Barry suggests, -mg_coarse_pc_factor_shift_type nonzero, which works, but only when I run on a single processor. For two or more processors I still get a zero pivot error. Are there more options to be set for the parallel case? Benjamin [1]PETSC ERROR: --------------------- Error Message ------------------------------------ [1]PETSC ERROR: Detected zero pivot in LU factorization: see http://www.mcs.anl.gov/petsc/documentation/faq.html#ZeroPivot! [1]PETSC ERROR: Zero pivot row 1 value 5.7431e-18 tolerance 2.22045e-14! [1]PETSC ERROR: ------------------------------------------------------------------------ [1]PETSC ERROR: Petsc Release Version 3.3.0, Patch 1, Fri Jun 15 09:30:49 CDT 2012 [1]PETSC ERROR: See docs/changes/index.html for recent updates. [1]PETSC ERROR: [0]PETSC ERROR: --------------------- Error Message ------------------------------------ [0]PETSC ERROR: Detected zero pivot in LU factorization: see http://www.mcs.anl.gov/petsc/documentation/faq.html#ZeroPivot! [0]PETSC ERROR: Zero pivot row 1 value 5.7431e-18 tolerance 2.22045e-14! [0]PETSC ERROR: ------------------------------------------------------------------------ [0]PETSC ERROR: Petsc Release Version 3.3.0, Patch 1, Fri Jun 15 09:30:49 CDT 2012 [0]PETSC ERROR: See docs/changes/index.html for recent updates. [0]PETSC ERROR: See docs/faq.html for hints about trouble shooting. [0]PETSC ERROR: See docs/index.html for manual pages. 
[0]PETSC ERROR: ------------------------------------------------------------------------ [0]PETSC ERROR: bin/navier-stokes on a linux-gnu named gb-r3n32.irc.sara.nl by sanderse Tue Jul 17 10:04:05 2012 [0]PETSC ERROR: Libraries linked from /home/sanderse/Software/petsc-3.3-p1/linux-gnu-c-debug/lib [0]PETSC ERROR: Configure run at Mon Jul 16 21:06:33 2012 [0]PETSC ERROR: Configure options --with-cc=mpicc --with-fc=mpif90 --with-cxx=mpicxx --with-shared-libraries --with-hypre --download-hypre --with-blas-lapack-dir=/sara /sw/intel/Compiler/11.0/069 --with-hdf5 --download-hdf5 --with-debugging=0 --download-ml [0]PETSC ERROR: ------------------------------------------------------------------------ [0]PETSC ERROR: MatPivotCheck_none() line 583 in /home/sanderse/Software/petsc-3.3-p1/include/petsc-private/matimpl.h [0]PETSC ERROR: MatPivotCheck() line 602 in /home/sanderse/Software/petsc-3.3-p1/include/petsc-private/matimpl.h [0]PETSC ERROR: See docs/faq.html for hints about trouble shooting. [1]PETSC ERROR: See docs/index.html for manual pages. [1]PETSC ERROR: ------------------------------------------------------------------------ [1]PETSC ERROR: bin/navier-stokes on a linux-gnu named gb-r3n32.irc.sara.nl by sanderse Tue Jul 17 10:04:05 2012 [1]PETSC ERROR: Libraries linked from /home/sanderse/Software/petsc-3.3-p1/linux-gnu-c-debug/lib [1]PETSC ERROR: Configure run at Mon Jul 16 21:06:33 2012 [1]PETSC ERROR: Configure options --with-cc=mpicc --with-fc=mpif90 --with-cxx=mpicxx --with-shared-libraries --with-hypre --download-hypre --with-blas-lapack-dir=/sara /sw/intel/Compiler/11.0/069 --with-hdf5 --download-hdf5 --with-debugging=0 --download-ml [1]PETSC ERROR: ------------------------------------------------------------------------ [1]PETSC ERROR: MatPivotCheck_none() line 583 in /home/sanderse/Software/petsc-3.3-p1/include/petsc-private/matimpl.h MatLUFactorNumeric_SeqAIJ_Inode() line 1469 in /home/sanderse/Software/petsc-3.3-p1/src/mat/impls/aij/seq/inode.c [0]PETSC ERROR: MatLUFactorNumeric() line 2790 in /home/sanderse/Software/petsc-3.3-p1/src/mat/interface/matrix.c [0]PETSC ERROR: PCSetUp_LU() line 160 in /home/sanderse/Software/petsc-3.3-p1/src/ksp/pc/impls/factor/lu/lu.c [0]PETSC ERROR: PCSetUp() line 832 in /home/sanderse/Software/petsc-3.3-p1/src/ksp/pc/interface/precon.c [0]PETSC ERROR: KSPSetUp() line 278 in /home/sanderse/Software/petsc-3.3-p1/src/ksp/ksp/interface/itfunc.c [0]PETSC ERROR: PCSetUp_Redundant() line 176 in /home/sanderse/Software/petsc-3.3-p1/src/ksp/pc/impls/redundant/redundant.c [0]PETSC ERROR: PCSetUp() line 832 in /home/sanderse/Software/petsc-3.3-p1/src/ksp/pc/interface/precon.c [0]PETSC ERROR: KSPSetUp() line 278 in /home/sanderse/Software/petsc-3.3-p1/src/ksp/ksp/interface/itfunc.c [0]PETSC ERROR: PCSetUp_MG() line 729 in /home/sanderse/Software/petsc-3.3-p1/src/ksp/pc/impls/mg/mg.c [0]PETSC ERROR: [1]PETSC ERROR: MatPivotCheck() line 602 in /home/sanderse/Software/petsc-3.3-p1/include/petsc-private/matimpl.h [1]PETSC ERROR: MatLUFactorNumeric_SeqAIJ_Inode() line 1469 in /home/sanderse/Software/petsc-3.3-p1/src/mat/impls/aij/seq/inode.c [1]PETSC ERROR: MatLUFactorNumeric() line 2790 in /home/sanderse/Software/petsc-3.3-p1/src/mat/interface/matrix.c [1]PETSC ERROR: PCSetUp_LU() line 160 in /home/sanderse/Software/petsc-3.3-p1/src/ksp/pc/impls/factor/lu/lu.c [1]PETSC ERROR: PCSetUp_ML() line 820 in /home/sanderse/Software/petsc-3.3-p1/src/ksp/pc/impls/ml/ml.c [0]PETSC ERROR: PCSetUp() line 832 in 
/home/sanderse/Software/petsc-3.3-p1/src/ksp/pc/interface/precon.c [0]PETSC ERROR: KSPSetUp() line 278 in /home/sanderse/Software/petsc-3.3-p1/src/ksp/ksp/interface/itfunc.c PCSetUp() line 832 in /home/sanderse/Software/petsc-3.3-p1/src/ksp/pc/interface/precon.c [1]PETSC ERROR: KSPSetUp() line 278 in /home/sanderse/Software/petsc-3.3-p1/src/ksp/ksp/interface/itfunc.c [1]PETSC ERROR: PCSetUp_Redundant() line 176 in /home/sanderse/Software/petsc-3.3-p1/src/ksp/pc/impls/redundant/redundant.c [1]PETSC ERROR: PCSetUp() line 832 in /home/sanderse/Software/petsc-3.3-p1/src/ksp/pc/interface/precon.c [1]PETSC ERROR: KSPSetUp() line 278 in /home/sanderse/Software/petsc-3.3-p1/src/ksp/ksp/interface/itfunc.c [1]PETSC ERROR: PCSetUp_MG() line 729 in /home/sanderse/Software/petsc-3.3-p1/src/ksp/pc/impls/mg/mg.c [1]PETSC ERROR: PCSetUp_ML() line 820 in /home/sanderse/Software/petsc-3.3-p1/src/ksp/pc/impls/ml/ml.c [1]PETSC ERROR: PCSetUp() line 832 in /home/sanderse/Software/petsc-3.3-p1/src/ksp/pc/interface/precon.c [1]PETSC ERROR: KSPSetUp() line 278 in /home/sanderse/Software/petsc-3.3-p1/src/ksp/ksp/interface/itfunc.c Op 3 jun 2011, om 14:26 heeft Barry Smith het volgende geschreven: > > It is the direct solver on the the coarse grid that is finding the zero pivot (since the coarse grid problem like all the levels has a null space). > > You can use the option -mg_coarse_pc_factor_shift_type nonzero (in petsc-3.1 or petsc-dev) Also keep the KSPSetNullSpace() function you are using. > > > Barry > > On Jun 3, 2011, at 3:54 AM, Stijn A. M. Vantieghem wrote: > >> Dear all, >> >> I am using PETSc (Fortran interface) to solve a Poisson equation with Neumann boundary conditions. Up till now, I managed to do this with Hypre's BoomerAMG. Now, I am investigating whether I can improve the performance of my code by using ML. However, at execution I receive a zero pivot error; I tried to remove the (constant) null space with KSPSetNullSpace, but this didn't solve my problem. Do you have an idea of what I'm doing wrong? Thanks. >> >> The relevant portions of my code are as follows: >> >> !**************************************************** >> call KSPCreate(PETSC_COMM_WORLD,ksp,ierr) >> call KSPSetOperators(ksp,M,M,DIFFERENT_NONZERO_PATTERN,ierr) >> >> call MatNullSpaceCreate(PETSC_COMM_WORLD,PETSC_TRUE,0,PETSC_NULL,sp,ierr) >> call KSPSetNullSpace(ksp,sp,ierr) >> >> call KSPGetPC(ksp,pc,ierr) >> call PCSetType(pc,PCML,ierr) >> >> call KSPSetFromOptions(ksp,ierr) >> ... >> call KSPSetInitialGuessNonzero(ksp,PETSC_TRUE,ierr) >> call KSPSolve(ksp,petsc_rhs,petsc_pressure,ierr) >> !**************************************************** >> >> and the error message is: >> >> [0]PETSC ERROR: --------------------- Error Message ------------------------------------ >> [0]PETSC ERROR: Detected zero pivot in LU factorization >> see http://www.mcs.anl.gov/petsc/petsc-as/documentation/troubleshooting.html#ZeroPivot! >> [0]PETSC ERROR: Zero pivot row 0 value 3.57045e-20 tolerance 1e-12! >> [0]PETSC ERROR: ------------------------------------------------------------------------ >> ... 
>> [0]PETSC ERROR: ------------------------------------------------------------------------ >> [0]PETSC ERROR: MatLUFactorNumeric_SeqAIJ() line 574 in src/mat/impls/aij/seq/aijfact.c >> [0]PETSC ERROR: MatLUFactorNumeric() line 2587 in src/mat/interface/matrix.c >> [0]PETSC ERROR: PCSetUp_LU() line 158 in src/ksp/pc/impls/factor/lu/lu.c >> [0]PETSC ERROR: PCSetUp() line 795 in src/ksp/pc/interface/precon.c >> [0]PETSC ERROR: KSPSetUp() line 237 in src/ksp/ksp/interface/itfunc.c >> [0]PETSC ERROR: PCSetUp_MG() line 602 in src/ksp/pc/impls/mg/mg.c >> [0]PETSC ERROR: PCSetUp_ML() line 668 in src/ksp/pc/impls/ml/ml.c >> [0]PETSC ERROR: PCSetUp() line 795 in src/ksp/pc/interface/precon.c >> [0]PETSC ERROR: KSPSetUp() line 237 in src/ksp/ksp/interface/itfunc.c >> [0]PETSC ERROR: KSPSolve() line 353 in src/ksp/ksp/interface/itfunc.c >> >> Regards >> Stijn >> >> Stijn A.M. Vantieghem >> Earth and Planetary Magnetism >> Institute for Geophysics >> ETH Z?rich >> Sonneggstrasse 5 - CH 8092 Z?rich >> tel: +41 44 632 39 90 >> e-mail: stijn.vantieghem at erdw.ethz.ch >> >> >> >> > From zonexo at gmail.com Tue Jul 17 04:31:30 2012 From: zonexo at gmail.com (TAY wee-beng) Date: Tue, 17 Jul 2012 11:31:30 +0200 Subject: [petsc-users] Adding vectors of different dimension Message-ID: <500530F2.7050700@gmail.com> Hi, I have 3 vectors of different dimension, create using 2 DM (da_dof1,da_dof2), with dof = 1 and 2. They are declared as: Vec u,v,duv call DMCreateLocalVector(da_dof1,u,ierr) call DMCreateLocalVector(da_dof1,v,ierr) call DMCreateLocalVector(da_dof2,duv,ierr) How can I add duv to u and v to get new values for u,v? I'm currently using DMDAVecGetArrayF90 and DMDAVecRestoreArrayF90 to access the arrays. Thanks -- Yours sincerely, TAY wee-beng From knepley at gmail.com Tue Jul 17 05:55:16 2012 From: knepley at gmail.com (Matthew Knepley) Date: Tue, 17 Jul 2012 05:55:16 -0500 Subject: [petsc-users] Purpose of --with-boost In-Reply-To: References: Message-ID: On Tue, Jul 17, 2012 at 2:09 AM, Thomas Hisch wrote: > On Mon, Jul 16, 2012 at 11:35 PM, Satish Balay wrote: > > On Mon, 16 Jul 2012, Thomas Hisch wrote: > > > >> Is there any benefit to the user if he enables boost support in petsc > >> with the --with-boost configure option ? What is the difference > >> between a petsc build with enabled boost support and a build without > >> boost support? I can't find any information about this neither in the > >> manual nor on the web. > > > > boost primarily required by 'sieve' part of PETSc [which requires a 'c++' > > build]. But this functionality is being relaced by DMComplex [a 'c' > > implementation] > > > > Does this mean that if I don't need the sieve part then compiling with > or without boost enabled does not make any difference for me ? > Yes Matt -- What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. -- Norbert Wiener -------------- next part -------------- An HTML attachment was scrubbed... URL: From knepley at gmail.com Tue Jul 17 05:55:56 2012 From: knepley at gmail.com (Matthew Knepley) Date: Tue, 17 Jul 2012 05:55:56 -0500 Subject: [petsc-users] Using DMDAVecGetArrayF90 and PetscInt In-Reply-To: <5005147D.5070704@gmail.com> References: <5005147D.5070704@gmail.com> Message-ID: On Tue, Jul 17, 2012 at 2:30 AM, TAY wee-beng wrote: > Hi, > > Can PetscInt be used with DMDAVecGetArrayF90 and DMDAVecRestoreArrayF90 ? 
> > If I use : > > *PetscScalar,pointer :: types(:,:) > > call DMDAVecGetArrayF90(da_dof1,type_local,types,ierr) > > call DMDAVecRestoreArrayF90(da_dof1,type_local,types,ierr)* > > It worked. However if I use : > > *PetscInt,pointer :: types(:,:) > > call DMDAVecGetArrayF90(da_dof1,type_local,types,ierr) > > call DMDAVecRestoreArrayF90(da_dof1,type_local,types,ierr)* > > It gives error saying : > > Error: There is no matching specific subroutine for this generic > subroutine call > > However I would like my "types" to be integer. How can that be done? > No Matt > > -- > Yours sincerely, > > TAY wee-beng > > -- What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. -- Norbert Wiener -------------- next part -------------- An HTML attachment was scrubbed... URL: From knepley at gmail.com Tue Jul 17 05:58:53 2012 From: knepley at gmail.com (Matthew Knepley) Date: Tue, 17 Jul 2012 05:58:53 -0500 Subject: [petsc-users] ML - zero pivot error In-Reply-To: References: <69AD671D-5C83-4A1A-9100-6E5ACDA29224@erdw.ethz.ch> <0422B860-8CBC-40BB-986F-0B128E98C85B@mcs.anl.gov> Message-ID: On Tue, Jul 17, 2012 at 3:23 AM, Benjamin Sanderse wrote: > Hello all, > > I am trying to use ML to solve a Poisson equation with Neumann BC > (singular matrix) and found the thread below to avoid a zero pivot error. I > used the option that Barry suggests, -mg_coarse_pc_factor_shift_type > nonzero, which works, but only when I run on a single processor. For two or > more processors I still get a zero pivot error. Are there more options to > be set for the parallel case? > Should still work. Can you check that the prefix for the coarse solver is correct with -ksp_view? Matt > Benjamin > > > [1]PETSC ERROR: --------------------- Error Message > ------------------------------------ > [1]PETSC ERROR: Detected zero pivot in LU factorization: > see http://www.mcs.anl.gov/petsc/documentation/faq.html#ZeroPivot! > [1]PETSC ERROR: Zero pivot row 1 value 5.7431e-18 tolerance 2.22045e-14! > [1]PETSC ERROR: > ------------------------------------------------------------------------ > [1]PETSC ERROR: Petsc Release Version 3.3.0, Patch 1, Fri Jun 15 09:30:49 > CDT 2012 > [1]PETSC ERROR: See docs/changes/index.html for recent updates. > [1]PETSC ERROR: [0]PETSC ERROR: --------------------- Error Message > ------------------------------------ > [0]PETSC ERROR: Detected zero pivot in LU factorization: > see http://www.mcs.anl.gov/petsc/documentation/faq.html#ZeroPivot! > [0]PETSC ERROR: Zero pivot row 1 value 5.7431e-18 tolerance 2.22045e-14! > [0]PETSC ERROR: > ------------------------------------------------------------------------ > [0]PETSC ERROR: Petsc Release Version 3.3.0, Patch 1, Fri Jun 15 09:30:49 > CDT 2012 > [0]PETSC ERROR: See docs/changes/index.html for recent updates. > [0]PETSC ERROR: See docs/faq.html for hints about trouble shooting. > [0]PETSC ERROR: See docs/index.html for manual pages. 
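For the PetscInt question above, one workaround sketch, assuming a real-valued PetscScalar build (da_dof1, type_local and types as in that mail; i, j and itype are illustrative): keep the pointer declared as PetscScalar and convert where an integer is needed.

    PetscScalar,pointer :: types(:,:)
    PetscInt i, j, itype
    call DMDAVecGetArrayF90(da_dof1, type_local, types, ierr)
    ! inside the usual i, j loops over the DMDAGetCorners range:
    itype = nint(types(i, j))   ! Vec entries are always PetscScalar; convert on use
    call DMDAVecRestoreArrayF90(da_dof1, type_local, types, ierr)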
> [0]PETSC ERROR: > ------------------------------------------------------------------------ > [0]PETSC ERROR: bin/navier-stokes on a linux-gnu named > gb-r3n32.irc.sara.nl by sanderse Tue Jul 17 10:04:05 2012 > [0]PETSC ERROR: Libraries linked from > /home/sanderse/Software/petsc-3.3-p1/linux-gnu-c-debug/lib > [0]PETSC ERROR: Configure run at Mon Jul 16 21:06:33 2012 > [0]PETSC ERROR: Configure options --with-cc=mpicc --with-fc=mpif90 > --with-cxx=mpicxx --with-shared-libraries --with-hypre --download-hypre > --with-blas-lapack-dir=/sara > /sw/intel/Compiler/11.0/069 --with-hdf5 --download-hdf5 --with-debugging=0 > --download-ml > [0]PETSC ERROR: > ------------------------------------------------------------------------ > [0]PETSC ERROR: MatPivotCheck_none() line 583 in > /home/sanderse/Software/petsc-3.3-p1/include/petsc-private/matimpl.h > [0]PETSC ERROR: MatPivotCheck() line 602 in > /home/sanderse/Software/petsc-3.3-p1/include/petsc-private/matimpl.h > [0]PETSC ERROR: See docs/faq.html for hints about trouble shooting. > [1]PETSC ERROR: See docs/index.html for manual pages. > [1]PETSC ERROR: > ------------------------------------------------------------------------ > [1]PETSC ERROR: bin/navier-stokes on a linux-gnu named > gb-r3n32.irc.sara.nl by sanderse Tue Jul 17 10:04:05 2012 > [1]PETSC ERROR: Libraries linked from > /home/sanderse/Software/petsc-3.3-p1/linux-gnu-c-debug/lib > [1]PETSC ERROR: Configure run at Mon Jul 16 21:06:33 2012 > [1]PETSC ERROR: Configure options --with-cc=mpicc --with-fc=mpif90 > --with-cxx=mpicxx --with-shared-libraries --with-hypre --download-hypre > --with-blas-lapack-dir=/sara > /sw/intel/Compiler/11.0/069 --with-hdf5 --download-hdf5 --with-debugging=0 > --download-ml > [1]PETSC ERROR: > ------------------------------------------------------------------------ > [1]PETSC ERROR: MatPivotCheck_none() line 583 in > /home/sanderse/Software/petsc-3.3-p1/include/petsc-private/matimpl.h > MatLUFactorNumeric_SeqAIJ_Inode() line 1469 in > /home/sanderse/Software/petsc-3.3-p1/src/mat/impls/aij/seq/inode.c > [0]PETSC ERROR: MatLUFactorNumeric() line 2790 in > /home/sanderse/Software/petsc-3.3-p1/src/mat/interface/matrix.c > [0]PETSC ERROR: PCSetUp_LU() line 160 in > /home/sanderse/Software/petsc-3.3-p1/src/ksp/pc/impls/factor/lu/lu.c > [0]PETSC ERROR: PCSetUp() line 832 in > /home/sanderse/Software/petsc-3.3-p1/src/ksp/pc/interface/precon.c > [0]PETSC ERROR: KSPSetUp() line 278 in > /home/sanderse/Software/petsc-3.3-p1/src/ksp/ksp/interface/itfunc.c > [0]PETSC ERROR: PCSetUp_Redundant() line 176 in > /home/sanderse/Software/petsc-3.3-p1/src/ksp/pc/impls/redundant/redundant.c > [0]PETSC ERROR: PCSetUp() line 832 in > /home/sanderse/Software/petsc-3.3-p1/src/ksp/pc/interface/precon.c > [0]PETSC ERROR: KSPSetUp() line 278 in > /home/sanderse/Software/petsc-3.3-p1/src/ksp/ksp/interface/itfunc.c > [0]PETSC ERROR: PCSetUp_MG() line 729 in > /home/sanderse/Software/petsc-3.3-p1/src/ksp/pc/impls/mg/mg.c > [0]PETSC ERROR: [1]PETSC ERROR: MatPivotCheck() line 602 in > /home/sanderse/Software/petsc-3.3-p1/include/petsc-private/matimpl.h > [1]PETSC ERROR: MatLUFactorNumeric_SeqAIJ_Inode() line 1469 in > /home/sanderse/Software/petsc-3.3-p1/src/mat/impls/aij/seq/inode.c > [1]PETSC ERROR: MatLUFactorNumeric() line 2790 in > /home/sanderse/Software/petsc-3.3-p1/src/mat/interface/matrix.c > [1]PETSC ERROR: PCSetUp_LU() line 160 in > /home/sanderse/Software/petsc-3.3-p1/src/ksp/pc/impls/factor/lu/lu.c > [1]PETSC ERROR: PCSetUp_ML() line 820 in > 
/home/sanderse/Software/petsc-3.3-p1/src/ksp/pc/impls/ml/ml.c > [0]PETSC ERROR: PCSetUp() line 832 in > /home/sanderse/Software/petsc-3.3-p1/src/ksp/pc/interface/precon.c > [0]PETSC ERROR: KSPSetUp() line 278 in > /home/sanderse/Software/petsc-3.3-p1/src/ksp/ksp/interface/itfunc.c > PCSetUp() line 832 in > /home/sanderse/Software/petsc-3.3-p1/src/ksp/pc/interface/precon.c > [1]PETSC ERROR: KSPSetUp() line 278 in > /home/sanderse/Software/petsc-3.3-p1/src/ksp/ksp/interface/itfunc.c > [1]PETSC ERROR: PCSetUp_Redundant() line 176 in > /home/sanderse/Software/petsc-3.3-p1/src/ksp/pc/impls/redundant/redundant.c > [1]PETSC ERROR: PCSetUp() line 832 in > /home/sanderse/Software/petsc-3.3-p1/src/ksp/pc/interface/precon.c > [1]PETSC ERROR: KSPSetUp() line 278 in > /home/sanderse/Software/petsc-3.3-p1/src/ksp/ksp/interface/itfunc.c > [1]PETSC ERROR: PCSetUp_MG() line 729 in > /home/sanderse/Software/petsc-3.3-p1/src/ksp/pc/impls/mg/mg.c > [1]PETSC ERROR: PCSetUp_ML() line 820 in > /home/sanderse/Software/petsc-3.3-p1/src/ksp/pc/impls/ml/ml.c > [1]PETSC ERROR: PCSetUp() line 832 in > /home/sanderse/Software/petsc-3.3-p1/src/ksp/pc/interface/precon.c > [1]PETSC ERROR: KSPSetUp() line 278 in > /home/sanderse/Software/petsc-3.3-p1/src/ksp/ksp/interface/itfunc.c > > > Op 3 jun 2011, om 14:26 heeft Barry Smith het volgende geschreven: > > > > > It is the direct solver on the the coarse grid that is finding the > zero pivot (since the coarse grid problem like all the levels has a null > space). > > > > You can use the option -mg_coarse_pc_factor_shift_type nonzero (in > petsc-3.1 or petsc-dev) Also keep the KSPSetNullSpace() function you are > using. > > > > > > Barry > > > > On Jun 3, 2011, at 3:54 AM, Stijn A. M. Vantieghem wrote: > > > >> Dear all, > >> > >> I am using PETSc (Fortran interface) to solve a Poisson equation with > Neumann boundary conditions. Up till now, I managed to do this with Hypre's > BoomerAMG. Now, I am investigating whether I can improve the performance of > my code by using ML. However, at execution I receive a zero pivot error; I > tried to remove the (constant) null space with KSPSetNullSpace, but this > didn't solve my problem. Do you have an idea of what I'm doing wrong? > Thanks. > >> > >> The relevant portions of my code are as follows: > >> > >> !**************************************************** > >> call KSPCreate(PETSC_COMM_WORLD,ksp,ierr) > >> call KSPSetOperators(ksp,M,M,DIFFERENT_NONZERO_PATTERN,ierr) > >> > >> call > MatNullSpaceCreate(PETSC_COMM_WORLD,PETSC_TRUE,0,PETSC_NULL,sp,ierr) > >> call KSPSetNullSpace(ksp,sp,ierr) > >> > >> call KSPGetPC(ksp,pc,ierr) > >> call PCSetType(pc,PCML,ierr) > >> > >> call KSPSetFromOptions(ksp,ierr) > >> ... > >> call KSPSetInitialGuessNonzero(ksp,PETSC_TRUE,ierr) > >> call KSPSolve(ksp,petsc_rhs,petsc_pressure,ierr) > >> !**************************************************** > >> > >> and the error message is: > >> > >> [0]PETSC ERROR: --------------------- Error Message > ------------------------------------ > >> [0]PETSC ERROR: Detected zero pivot in LU factorization > >> see > http://www.mcs.anl.gov/petsc/petsc-as/documentation/troubleshooting.html#ZeroPivot > ! > >> [0]PETSC ERROR: Zero pivot row 0 value 3.57045e-20 tolerance 1e-12! > >> [0]PETSC ERROR: > ------------------------------------------------------------------------ > >> ... 
> >> [0]PETSC ERROR: > ------------------------------------------------------------------------ > >> [0]PETSC ERROR: MatLUFactorNumeric_SeqAIJ() line 574 in > src/mat/impls/aij/seq/aijfact.c > >> [0]PETSC ERROR: MatLUFactorNumeric() line 2587 in > src/mat/interface/matrix.c > >> [0]PETSC ERROR: PCSetUp_LU() line 158 in src/ksp/pc/impls/factor/lu/lu.c > >> [0]PETSC ERROR: PCSetUp() line 795 in src/ksp/pc/interface/precon.c > >> [0]PETSC ERROR: KSPSetUp() line 237 in src/ksp/ksp/interface/itfunc.c > >> [0]PETSC ERROR: PCSetUp_MG() line 602 in src/ksp/pc/impls/mg/mg.c > >> [0]PETSC ERROR: PCSetUp_ML() line 668 in src/ksp/pc/impls/ml/ml.c > >> [0]PETSC ERROR: PCSetUp() line 795 in src/ksp/pc/interface/precon.c > >> [0]PETSC ERROR: KSPSetUp() line 237 in src/ksp/ksp/interface/itfunc.c > >> [0]PETSC ERROR: KSPSolve() line 353 in src/ksp/ksp/interface/itfunc.c > >> > >> Regards > >> Stijn > >> > >> Stijn A.M. Vantieghem > >> Earth and Planetary Magnetism > >> Institute for Geophysics > >> ETH Z?rich > >> Sonneggstrasse 5 - CH 8092 Z?rich > >> tel: +41 44 632 39 90 > >> e-mail: stijn.vantieghem at erdw.ethz.ch > >> > >> > >> > >> > > > > > -- What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. -- Norbert Wiener -------------- next part -------------- An HTML attachment was scrubbed... URL: From knepley at gmail.com Tue Jul 17 06:00:57 2012 From: knepley at gmail.com (Matthew Knepley) Date: Tue, 17 Jul 2012 06:00:57 -0500 Subject: [petsc-users] Adding vectors of different dimension In-Reply-To: <500530F2.7050700@gmail.com> References: <500530F2.7050700@gmail.com> Message-ID: On Tue, Jul 17, 2012 at 4:31 AM, TAY wee-beng wrote: > Hi, > > I have 3 vectors of different dimension, create using 2 DM > (da_dof1,da_dof2), with dof = 1 and 2. > > They are declared as: > > Vec u,v,duv > > call DMCreateLocalVector(da_dof1,u,**ierr) > > call DMCreateLocalVector(da_dof1,v,**ierr) > > call DMCreateLocalVector(da_dof2,**duv,ierr) > > How can I add duv to u and v to get new values for u,v? > You can split duv into single component vectors: http://www.mcs.anl.gov/petsc/petsc-current/docs/manualpages/Vec/VecStrideScatter.html Matt > I'm currently using DMDAVecGetArrayF90 and DMDAVecRestoreArrayF90 to > access the arrays. > > Thanks > > -- > Yours sincerely, > > TAY wee-beng > > -- What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. -- Norbert Wiener -------------- next part -------------- An HTML attachment was scrubbed... URL: From B.Sanderse at cwi.nl Tue Jul 17 07:07:50 2012 From: B.Sanderse at cwi.nl (Benjamin Sanderse) Date: Tue, 17 Jul 2012 14:07:50 +0200 Subject: [petsc-users] ML - zero pivot error In-Reply-To: References: <69AD671D-5C83-4A1A-9100-6E5ACDA29224@erdw.ethz.ch> <0422B860-8CBC-40BB-986F-0B128E98C85B@mcs.anl.gov> Message-ID: <44E1E554-BFBB-4531-AC6A-66A45F57C893@cwi.nl> The two processor case does not reach far enough that ksp_view gives me output. The one processor case gives me the output below. I also added -mg_levels_pc_factor_shift_type nonzero, but still with the same error. 
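For the duv/u/v question above, Matt's link points at the VecStride family; a sketch of one way to add the two components of the dof=2 vector duv into u and v (duv, u, v and ierr as declared in that mail; comp0 and comp1 are illustrative):

    PetscInt comp0, comp1
    comp0 = 0
    comp1 = 1
    ! ADD_VALUES adds the gathered component to the existing entries of u and v
    call VecStrideGather(duv, comp0, u, ADD_VALUES, ierr)
    call VecStrideGather(duv, comp1, v, ADD_VALUES, ierr)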
Benjamin KSP Object: 1 MPI processes type: cg maximum iterations=500 tolerances: relative=1e-10, absolute=1e-10, divergence=10000 left preconditioning has attached null space using nonzero initial guess using PRECONDITIONED norm type for convergence test PC Object: 1 MPI processes type: ml MG: type is MULTIPLICATIVE, levels=4 cycles=v Cycles per PCApply=1 Using Galerkin computed coarse grid matrices Coarse grid solver -- level ------------------------------- KSP Object: (mg_coarse_) 1 MPI processes type: preonly maximum iterations=1, initial guess is zero tolerances: relative=1e-05, absolute=1e-50, divergence=10000 left preconditioning using NONE norm type for convergence test PC Object: (mg_coarse_) 1 MPI processes type: lu LU: out-of-place factorization tolerance for zero pivot 2.22045e-14 using diagonal shift to prevent zero pivot matrix ordering: nd factor fill ratio given 5, needed 1 Factored matrix follows: Matrix Object: 1 MPI processes type: seqaij rows=1, cols=1 package used to perform factorization: petsc total: nonzeros=1, allocated nonzeros=1 total number of mallocs used during MatSetValues calls =0 not using I-node routines linear system matrix = precond matrix: Matrix Object: 1 MPI processes type: seqaij rows=1, cols=1 total: nonzeros=1, allocated nonzeros=1 total number of mallocs used during MatSetValues calls =0 not using I-node routines Down solver (pre-smoother) on level 1 ------------------------------- KSP Object: (mg_levels_1_) 1 MPI processes type: richardson Richardson: damping factor=1 maximum iterations=2 tolerances: relative=1e-05, absolute=1e-50, divergence=10000 left preconditioning using nonzero initial guess using NONE norm type for convergence test PC Object: (mg_levels_1_) 1 MPI processes type: sor SOR: type = local_symmetric, iterations = 1, local iterations = 1, omega = 1 linear system matrix = precond matrix: Matrix Object: 1 MPI processes type: seqaij rows=6, cols=6 total: nonzeros=36, allocated nonzeros=36 total number of mallocs used during MatSetValues calls =0 using I-node routines: found 2 nodes, limit used is 5 Up solver (post-smoother) same as down solver (pre-smoother) Down solver (pre-smoother) on level 2 ------------------------------- KSP Object: (mg_levels_2_) 1 MPI processes type: richardson Richardson: damping factor=1 maximum iterations=2 tolerances: relative=1e-05, absolute=1e-50, divergence=10000 left preconditioning using nonzero initial guess using NONE norm type for convergence test PC Object: (mg_levels_2_) 1 MPI processes type: sor SOR: type = local_symmetric, iterations = 1, local iterations = 1, omega = 1 linear system matrix = precond matrix: Matrix Object: 1 MPI processes type: seqaij rows=130, cols=130 total: nonzeros=2704, allocated nonzeros=2704 total number of mallocs used during MatSetValues calls =0 not using I-node routines Up solver (post-smoother) same as down solver (pre-smoother) Down solver (pre-smoother) on level 3 ------------------------------- KSP Object: (mg_levels_3_) 1 MPI processes type: richardson Richardson: damping factor=1 maximum iterations=2 tolerances: relative=1e-05, absolute=1e-50, divergence=10000 left preconditioning using nonzero initial guess using NONE norm type for convergence test PC Object: (mg_levels_3_) 1 MPI processes type: sor SOR: type = local_symmetric, iterations = 1, local iterations = 1, omega = 1 linear system matrix = precond matrix: Matrix Object: 1 MPI processes type: seqaij rows=1000, cols=1000 total: nonzeros=6400, allocated nonzeros=6400 total number of mallocs used during 
MatSetValues calls =0 not using I-node routines Up solver (post-smoother) same as down solver (pre-smoother) linear system matrix = precond matrix: Matrix Object: 1 MPI processes type: seqaij rows=1000, cols=1000 total: nonzeros=6400, allocated nonzeros=6400 total number of mallocs used during MatSetValues calls =0 not using I-node routines Op 17 jul 2012, om 12:58 heeft Matthew Knepley het volgende geschreven: > On Tue, Jul 17, 2012 at 3:23 AM, Benjamin Sanderse wrote: > Hello all, > > I am trying to use ML to solve a Poisson equation with Neumann BC (singular matrix) and found the thread below to avoid a zero pivot error. I used the option that Barry suggests, -mg_coarse_pc_factor_shift_type nonzero, which works, but only when I run on a single processor. For two or more processors I still get a zero pivot error. Are there more options to be set for the parallel case? > > Should still work. Can you check that the prefix for the coarse solver is correct with -ksp_view? > > Matt > > Benjamin > > > [1]PETSC ERROR: --------------------- Error Message ------------------------------------ > [1]PETSC ERROR: Detected zero pivot in LU factorization: > see http://www.mcs.anl.gov/petsc/documentation/faq.html#ZeroPivot! > [1]PETSC ERROR: Zero pivot row 1 value 5.7431e-18 tolerance 2.22045e-14! > [1]PETSC ERROR: ------------------------------------------------------------------------ > [1]PETSC ERROR: Petsc Release Version 3.3.0, Patch 1, Fri Jun 15 09:30:49 CDT 2012 > [1]PETSC ERROR: See docs/changes/index.html for recent updates. > [1]PETSC ERROR: [0]PETSC ERROR: --------------------- Error Message ------------------------------------ > [0]PETSC ERROR: Detected zero pivot in LU factorization: > see http://www.mcs.anl.gov/petsc/documentation/faq.html#ZeroPivot! > [0]PETSC ERROR: Zero pivot row 1 value 5.7431e-18 tolerance 2.22045e-14! > [0]PETSC ERROR: ------------------------------------------------------------------------ > [0]PETSC ERROR: Petsc Release Version 3.3.0, Patch 1, Fri Jun 15 09:30:49 CDT 2012 > [0]PETSC ERROR: See docs/changes/index.html for recent updates. > [0]PETSC ERROR: See docs/faq.html for hints about trouble shooting. > [0]PETSC ERROR: See docs/index.html for manual pages. > [0]PETSC ERROR: ------------------------------------------------------------------------ > [0]PETSC ERROR: bin/navier-stokes on a linux-gnu named gb-r3n32.irc.sara.nl by sanderse Tue Jul 17 10:04:05 2012 > [0]PETSC ERROR: Libraries linked from /home/sanderse/Software/petsc-3.3-p1/linux-gnu-c-debug/lib > [0]PETSC ERROR: Configure run at Mon Jul 16 21:06:33 2012 > [0]PETSC ERROR: Configure options --with-cc=mpicc --with-fc=mpif90 --with-cxx=mpicxx --with-shared-libraries --with-hypre --download-hypre --with-blas-lapack-dir=/sara > /sw/intel/Compiler/11.0/069 --with-hdf5 --download-hdf5 --with-debugging=0 --download-ml > [0]PETSC ERROR: ------------------------------------------------------------------------ > [0]PETSC ERROR: MatPivotCheck_none() line 583 in /home/sanderse/Software/petsc-3.3-p1/include/petsc-private/matimpl.h > [0]PETSC ERROR: MatPivotCheck() line 602 in /home/sanderse/Software/petsc-3.3-p1/include/petsc-private/matimpl.h > [0]PETSC ERROR: See docs/faq.html for hints about trouble shooting. > [1]PETSC ERROR: See docs/index.html for manual pages. 
> [1]PETSC ERROR: ------------------------------------------------------------------------ > [1]PETSC ERROR: bin/navier-stokes on a linux-gnu named gb-r3n32.irc.sara.nl by sanderse Tue Jul 17 10:04:05 2012 > [1]PETSC ERROR: Libraries linked from /home/sanderse/Software/petsc-3.3-p1/linux-gnu-c-debug/lib > [1]PETSC ERROR: Configure run at Mon Jul 16 21:06:33 2012 > [1]PETSC ERROR: Configure options --with-cc=mpicc --with-fc=mpif90 --with-cxx=mpicxx --with-shared-libraries --with-hypre --download-hypre --with-blas-lapack-dir=/sara > /sw/intel/Compiler/11.0/069 --with-hdf5 --download-hdf5 --with-debugging=0 --download-ml > [1]PETSC ERROR: ------------------------------------------------------------------------ > [1]PETSC ERROR: MatPivotCheck_none() line 583 in /home/sanderse/Software/petsc-3.3-p1/include/petsc-private/matimpl.h > MatLUFactorNumeric_SeqAIJ_Inode() line 1469 in /home/sanderse/Software/petsc-3.3-p1/src/mat/impls/aij/seq/inode.c > [0]PETSC ERROR: MatLUFactorNumeric() line 2790 in /home/sanderse/Software/petsc-3.3-p1/src/mat/interface/matrix.c > [0]PETSC ERROR: PCSetUp_LU() line 160 in /home/sanderse/Software/petsc-3.3-p1/src/ksp/pc/impls/factor/lu/lu.c > [0]PETSC ERROR: PCSetUp() line 832 in /home/sanderse/Software/petsc-3.3-p1/src/ksp/pc/interface/precon.c > [0]PETSC ERROR: KSPSetUp() line 278 in /home/sanderse/Software/petsc-3.3-p1/src/ksp/ksp/interface/itfunc.c > [0]PETSC ERROR: PCSetUp_Redundant() line 176 in /home/sanderse/Software/petsc-3.3-p1/src/ksp/pc/impls/redundant/redundant.c > [0]PETSC ERROR: PCSetUp() line 832 in /home/sanderse/Software/petsc-3.3-p1/src/ksp/pc/interface/precon.c > [0]PETSC ERROR: KSPSetUp() line 278 in /home/sanderse/Software/petsc-3.3-p1/src/ksp/ksp/interface/itfunc.c > [0]PETSC ERROR: PCSetUp_MG() line 729 in /home/sanderse/Software/petsc-3.3-p1/src/ksp/pc/impls/mg/mg.c > [0]PETSC ERROR: [1]PETSC ERROR: MatPivotCheck() line 602 in /home/sanderse/Software/petsc-3.3-p1/include/petsc-private/matimpl.h > [1]PETSC ERROR: MatLUFactorNumeric_SeqAIJ_Inode() line 1469 in /home/sanderse/Software/petsc-3.3-p1/src/mat/impls/aij/seq/inode.c > [1]PETSC ERROR: MatLUFactorNumeric() line 2790 in /home/sanderse/Software/petsc-3.3-p1/src/mat/interface/matrix.c > [1]PETSC ERROR: PCSetUp_LU() line 160 in /home/sanderse/Software/petsc-3.3-p1/src/ksp/pc/impls/factor/lu/lu.c > [1]PETSC ERROR: PCSetUp_ML() line 820 in /home/sanderse/Software/petsc-3.3-p1/src/ksp/pc/impls/ml/ml.c > [0]PETSC ERROR: PCSetUp() line 832 in /home/sanderse/Software/petsc-3.3-p1/src/ksp/pc/interface/precon.c > [0]PETSC ERROR: KSPSetUp() line 278 in /home/sanderse/Software/petsc-3.3-p1/src/ksp/ksp/interface/itfunc.c > PCSetUp() line 832 in /home/sanderse/Software/petsc-3.3-p1/src/ksp/pc/interface/precon.c > [1]PETSC ERROR: KSPSetUp() line 278 in /home/sanderse/Software/petsc-3.3-p1/src/ksp/ksp/interface/itfunc.c > [1]PETSC ERROR: PCSetUp_Redundant() line 176 in /home/sanderse/Software/petsc-3.3-p1/src/ksp/pc/impls/redundant/redundant.c > [1]PETSC ERROR: PCSetUp() line 832 in /home/sanderse/Software/petsc-3.3-p1/src/ksp/pc/interface/precon.c > [1]PETSC ERROR: KSPSetUp() line 278 in /home/sanderse/Software/petsc-3.3-p1/src/ksp/ksp/interface/itfunc.c > [1]PETSC ERROR: PCSetUp_MG() line 729 in /home/sanderse/Software/petsc-3.3-p1/src/ksp/pc/impls/mg/mg.c > [1]PETSC ERROR: PCSetUp_ML() line 820 in /home/sanderse/Software/petsc-3.3-p1/src/ksp/pc/impls/ml/ml.c > [1]PETSC ERROR: PCSetUp() line 832 in /home/sanderse/Software/petsc-3.3-p1/src/ksp/pc/interface/precon.c > [1]PETSC ERROR: KSPSetUp() line 278 
in /home/sanderse/Software/petsc-3.3-p1/src/ksp/ksp/interface/itfunc.c > > > Op 3 jun 2011, om 14:26 heeft Barry Smith het volgende geschreven: > > > > > It is the direct solver on the the coarse grid that is finding the zero pivot (since the coarse grid problem like all the levels has a null space). > > > > You can use the option -mg_coarse_pc_factor_shift_type nonzero (in petsc-3.1 or petsc-dev) Also keep the KSPSetNullSpace() function you are using. > > > > > > Barry > > > > On Jun 3, 2011, at 3:54 AM, Stijn A. M. Vantieghem wrote: > > > >> Dear all, > >> > >> I am using PETSc (Fortran interface) to solve a Poisson equation with Neumann boundary conditions. Up till now, I managed to do this with Hypre's BoomerAMG. Now, I am investigating whether I can improve the performance of my code by using ML. However, at execution I receive a zero pivot error; I tried to remove the (constant) null space with KSPSetNullSpace, but this didn't solve my problem. Do you have an idea of what I'm doing wrong? Thanks. > >> > >> The relevant portions of my code are as follows: > >> > >> !**************************************************** > >> call KSPCreate(PETSC_COMM_WORLD,ksp,ierr) > >> call KSPSetOperators(ksp,M,M,DIFFERENT_NONZERO_PATTERN,ierr) > >> > >> call MatNullSpaceCreate(PETSC_COMM_WORLD,PETSC_TRUE,0,PETSC_NULL,sp,ierr) > >> call KSPSetNullSpace(ksp,sp,ierr) > >> > >> call KSPGetPC(ksp,pc,ierr) > >> call PCSetType(pc,PCML,ierr) > >> > >> call KSPSetFromOptions(ksp,ierr) > >> ... > >> call KSPSetInitialGuessNonzero(ksp,PETSC_TRUE,ierr) > >> call KSPSolve(ksp,petsc_rhs,petsc_pressure,ierr) > >> !**************************************************** > >> > >> and the error message is: > >> > >> [0]PETSC ERROR: --------------------- Error Message ------------------------------------ > >> [0]PETSC ERROR: Detected zero pivot in LU factorization > >> see http://www.mcs.anl.gov/petsc/petsc-as/documentation/troubleshooting.html#ZeroPivot! > >> [0]PETSC ERROR: Zero pivot row 0 value 3.57045e-20 tolerance 1e-12! > >> [0]PETSC ERROR: ------------------------------------------------------------------------ > >> ... > >> [0]PETSC ERROR: ------------------------------------------------------------------------ > >> [0]PETSC ERROR: MatLUFactorNumeric_SeqAIJ() line 574 in src/mat/impls/aij/seq/aijfact.c > >> [0]PETSC ERROR: MatLUFactorNumeric() line 2587 in src/mat/interface/matrix.c > >> [0]PETSC ERROR: PCSetUp_LU() line 158 in src/ksp/pc/impls/factor/lu/lu.c > >> [0]PETSC ERROR: PCSetUp() line 795 in src/ksp/pc/interface/precon.c > >> [0]PETSC ERROR: KSPSetUp() line 237 in src/ksp/ksp/interface/itfunc.c > >> [0]PETSC ERROR: PCSetUp_MG() line 602 in src/ksp/pc/impls/mg/mg.c > >> [0]PETSC ERROR: PCSetUp_ML() line 668 in src/ksp/pc/impls/ml/ml.c > >> [0]PETSC ERROR: PCSetUp() line 795 in src/ksp/pc/interface/precon.c > >> [0]PETSC ERROR: KSPSetUp() line 237 in src/ksp/ksp/interface/itfunc.c > >> [0]PETSC ERROR: KSPSolve() line 353 in src/ksp/ksp/interface/itfunc.c > >> > >> Regards > >> Stijn > >> > >> Stijn A.M. Vantieghem > >> Earth and Planetary Magnetism > >> Institute for Geophysics > >> ETH Z?rich > >> Sonneggstrasse 5 - CH 8092 Z?rich > >> tel: +41 44 632 39 90 > >> e-mail: stijn.vantieghem at erdw.ethz.ch > >> > >> > >> > >> > > > > > > > > -- > What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. > -- Norbert Wiener -- Ir. B. 
Sanderse Centrum Wiskunde en Informatica Science Park 123 1098 XG Amsterdam t: +31 20 592 4161 e: sanderse at cwi.nl -------------- next part -------------- An HTML attachment was scrubbed... URL: From knepley at gmail.com Tue Jul 17 07:11:32 2012 From: knepley at gmail.com (Matthew Knepley) Date: Tue, 17 Jul 2012 07:11:32 -0500 Subject: [petsc-users] ML - zero pivot error In-Reply-To: <44E1E554-BFBB-4531-AC6A-66A45F57C893@cwi.nl> References: <69AD671D-5C83-4A1A-9100-6E5ACDA29224@erdw.ethz.ch> <0422B860-8CBC-40BB-986F-0B128E98C85B@mcs.anl.gov> <44E1E554-BFBB-4531-AC6A-66A45F57C893@cwi.nl> Message-ID: On Tue, Jul 17, 2012 at 7:07 AM, Benjamin Sanderse wrote: > The two processor case does not reach far enough that ksp_view gives me > output. The one processor case gives me the output below. > I also added -mg_levels_pc_factor_shift_type nonzero, but still with the > same error. > Stick in the identity instead. There must be a reason the solver does not receive the option. You could also break inside the PCSetFromOptions_Factor() method. Matt > Benjamin > > > KSP Object: 1 MPI processes > type: cg > maximum iterations=500 > tolerances: relative=1e-10, absolute=1e-10, divergence=10000 > left preconditioning > has attached null space > using nonzero initial guess > using PRECONDITIONED norm type for convergence test > PC Object: 1 MPI processes > type: ml > MG: type is MULTIPLICATIVE, levels=4 cycles=v > Cycles per PCApply=1 > Using Galerkin computed coarse grid matrices > Coarse grid solver -- level ------------------------------- > KSP Object: (mg_coarse_) 1 MPI processes > type: preonly > maximum iterations=1, initial guess is zero > tolerances: relative=1e-05, absolute=1e-50, divergence=10000 > left preconditioning > using NONE norm type for convergence test > PC Object: (mg_coarse_) 1 MPI processes > type: lu > LU: out-of-place factorization > tolerance for zero pivot 2.22045e-14 > * using diagonal shift to prevent zero pivot* > matrix ordering: nd > factor fill ratio given 5, needed 1 > Factored matrix follows: > Matrix Object: 1 MPI processes > type: seqaij > rows=1, cols=1 > package used to perform factorization: petsc > total: nonzeros=1, allocated nonzeros=1 > total number of mallocs used during MatSetValues calls =0 > not using I-node routines > linear system matrix = precond matrix: > Matrix Object: 1 MPI processes > type: seqaij > rows=1, cols=1 > total: nonzeros=1, allocated nonzeros=1 > total number of mallocs used during MatSetValues calls =0 > not using I-node routines > Down solver (pre-smoother) on level 1 ------------------------------- > KSP Object: (mg_levels_1_) 1 MPI processes > type: richardson > Richardson: damping factor=1 > maximum iterations=2 > tolerances: relative=1e-05, absolute=1e-50, divergence=10000 > left preconditioning > using nonzero initial guess > using NONE norm type for convergence test > PC Object: (mg_levels_1_) 1 MPI processes > type: sor > SOR: type = local_symmetric, iterations = 1, local iterations = 1, > omega = 1 > linear system matrix = precond matrix: > Matrix Object: 1 MPI processes > type: seqaij > rows=6, cols=6 > total: nonzeros=36, allocated nonzeros=36 > total number of mallocs used during MatSetValues calls =0 > using I-node routines: found 2 nodes, limit used is 5 > Up solver (post-smoother) same as down solver (pre-smoother) > Down solver (pre-smoother) on level 2 ------------------------------- > KSP Object: (mg_levels_2_) 1 MPI processes > type: richardson > Richardson: damping factor=1 > maximum iterations=2 > tolerances: 
relative=1e-05, absolute=1e-50, divergence=10000 > left preconditioning > using nonzero initial guess > using NONE norm type for convergence test > PC Object: (mg_levels_2_) 1 MPI processes > type: sor > SOR: type = local_symmetric, iterations = 1, local iterations = 1, > omega = 1 > linear system matrix = precond matrix: > Matrix Object: 1 MPI processes > type: seqaij > rows=130, cols=130 > total: nonzeros=2704, allocated nonzeros=2704 > total number of mallocs used during MatSetValues calls =0 > not using I-node routines > Up solver (post-smoother) same as down solver (pre-smoother) > Down solver (pre-smoother) on level 3 ------------------------------- > KSP Object: (mg_levels_3_) 1 MPI processes > type: richardson > Richardson: damping factor=1 > maximum iterations=2 > tolerances: relative=1e-05, absolute=1e-50, divergence=10000 > left preconditioning > using nonzero initial guess > using NONE norm type for convergence test > PC Object: (mg_levels_3_) 1 MPI processes > type: sor > SOR: type = local_symmetric, iterations = 1, local iterations = 1, > omega = 1 > linear system matrix = precond matrix: > Matrix Object: 1 MPI processes > type: seqaij > rows=1000, cols=1000 > total: nonzeros=6400, allocated nonzeros=6400 > total number of mallocs used during MatSetValues calls =0 > not using I-node routines > Up solver (post-smoother) same as down solver (pre-smoother) > linear system matrix = precond matrix: > Matrix Object: 1 MPI processes > type: seqaij > rows=1000, cols=1000 > total: nonzeros=6400, allocated nonzeros=6400 > total number of mallocs used during MatSetValues calls =0 > not using I-node routines > > > Op 17 jul 2012, om 12:58 heeft Matthew Knepley het volgende geschreven: > > On Tue, Jul 17, 2012 at 3:23 AM, Benjamin Sanderse wrote: > >> Hello all, >> >> I am trying to use ML to solve a Poisson equation with Neumann BC >> (singular matrix) and found the thread below to avoid a zero pivot error. I >> used the option that Barry suggests, -mg_coarse_pc_factor_shift_type >> nonzero, which works, but only when I run on a single processor. For two or >> more processors I still get a zero pivot error. Are there more options to >> be set for the parallel case? >> > > Should still work. Can you check that the prefix for the coarse solver is > correct with -ksp_view? > > Matt > > >> Benjamin >> >> >> [1]PETSC ERROR: --------------------- Error Message >> ------------------------------------ >> [1]PETSC ERROR: Detected zero pivot in LU factorization: >> see http://www.mcs.anl.gov/petsc/documentation/faq.html#ZeroPivot! >> [1]PETSC ERROR: Zero pivot row 1 value 5.7431e-18 tolerance 2.22045e-14! >> [1]PETSC ERROR: >> ------------------------------------------------------------------------ >> [1]PETSC ERROR: Petsc Release Version 3.3.0, Patch 1, Fri Jun 15 09:30:49 >> CDT 2012 >> [1]PETSC ERROR: See docs/changes/index.html for recent updates. >> [1]PETSC ERROR: [0]PETSC ERROR: --------------------- Error Message >> ------------------------------------ >> [0]PETSC ERROR: Detected zero pivot in LU factorization: >> see http://www.mcs.anl.gov/petsc/documentation/faq.html#ZeroPivot! >> [0]PETSC ERROR: Zero pivot row 1 value 5.7431e-18 tolerance 2.22045e-14! >> [0]PETSC ERROR: >> ------------------------------------------------------------------------ >> [0]PETSC ERROR: Petsc Release Version 3.3.0, Patch 1, Fri Jun 15 09:30:49 >> CDT 2012 >> [0]PETSC ERROR: See docs/changes/index.html for recent updates. >> [0]PETSC ERROR: See docs/faq.html for hints about trouble shooting. 
>> [0]PETSC ERROR: See docs/index.html for manual pages. >> [0]PETSC ERROR: >> ------------------------------------------------------------------------ >> [0]PETSC ERROR: bin/navier-stokes on a linux-gnu named >> gb-r3n32.irc.sara.nl by sanderse Tue Jul 17 10:04:05 2012 >> [0]PETSC ERROR: Libraries linked from >> /home/sanderse/Software/petsc-3.3-p1/linux-gnu-c-debug/lib >> [0]PETSC ERROR: Configure run at Mon Jul 16 21:06:33 2012 >> [0]PETSC ERROR: Configure options --with-cc=mpicc --with-fc=mpif90 >> --with-cxx=mpicxx --with-shared-libraries --with-hypre --download-hypre >> --with-blas-lapack-dir=/sara >> /sw/intel/Compiler/11.0/069 --with-hdf5 --download-hdf5 >> --with-debugging=0 --download-ml >> [0]PETSC ERROR: >> ------------------------------------------------------------------------ >> [0]PETSC ERROR: MatPivotCheck_none() line 583 in >> /home/sanderse/Software/petsc-3.3-p1/include/petsc-private/matimpl.h >> [0]PETSC ERROR: MatPivotCheck() line 602 in >> /home/sanderse/Software/petsc-3.3-p1/include/petsc-private/matimpl.h >> [0]PETSC ERROR: See docs/faq.html for hints about trouble shooting. >> [1]PETSC ERROR: See docs/index.html for manual pages. >> [1]PETSC ERROR: >> ------------------------------------------------------------------------ >> [1]PETSC ERROR: bin/navier-stokes on a linux-gnu named >> gb-r3n32.irc.sara.nl by sanderse Tue Jul 17 10:04:05 2012 >> [1]PETSC ERROR: Libraries linked from >> /home/sanderse/Software/petsc-3.3-p1/linux-gnu-c-debug/lib >> [1]PETSC ERROR: Configure run at Mon Jul 16 21:06:33 2012 >> [1]PETSC ERROR: Configure options --with-cc=mpicc --with-fc=mpif90 >> --with-cxx=mpicxx --with-shared-libraries --with-hypre --download-hypre >> --with-blas-lapack-dir=/sara >> /sw/intel/Compiler/11.0/069 --with-hdf5 --download-hdf5 >> --with-debugging=0 --download-ml >> [1]PETSC ERROR: >> ------------------------------------------------------------------------ >> [1]PETSC ERROR: MatPivotCheck_none() line 583 in >> /home/sanderse/Software/petsc-3.3-p1/include/petsc-private/matimpl.h >> MatLUFactorNumeric_SeqAIJ_Inode() line 1469 in >> /home/sanderse/Software/petsc-3.3-p1/src/mat/impls/aij/seq/inode.c >> [0]PETSC ERROR: MatLUFactorNumeric() line 2790 in >> /home/sanderse/Software/petsc-3.3-p1/src/mat/interface/matrix.c >> [0]PETSC ERROR: PCSetUp_LU() line 160 in >> /home/sanderse/Software/petsc-3.3-p1/src/ksp/pc/impls/factor/lu/lu.c >> [0]PETSC ERROR: PCSetUp() line 832 in >> /home/sanderse/Software/petsc-3.3-p1/src/ksp/pc/interface/precon.c >> [0]PETSC ERROR: KSPSetUp() line 278 in >> /home/sanderse/Software/petsc-3.3-p1/src/ksp/ksp/interface/itfunc.c >> [0]PETSC ERROR: PCSetUp_Redundant() line 176 in >> /home/sanderse/Software/petsc-3.3-p1/src/ksp/pc/impls/redundant/redundant.c >> [0]PETSC ERROR: PCSetUp() line 832 in >> /home/sanderse/Software/petsc-3.3-p1/src/ksp/pc/interface/precon.c >> [0]PETSC ERROR: KSPSetUp() line 278 in >> /home/sanderse/Software/petsc-3.3-p1/src/ksp/ksp/interface/itfunc.c >> [0]PETSC ERROR: PCSetUp_MG() line 729 in >> /home/sanderse/Software/petsc-3.3-p1/src/ksp/pc/impls/mg/mg.c >> [0]PETSC ERROR: [1]PETSC ERROR: MatPivotCheck() line 602 in >> /home/sanderse/Software/petsc-3.3-p1/include/petsc-private/matimpl.h >> [1]PETSC ERROR: MatLUFactorNumeric_SeqAIJ_Inode() line 1469 in >> /home/sanderse/Software/petsc-3.3-p1/src/mat/impls/aij/seq/inode.c >> [1]PETSC ERROR: MatLUFactorNumeric() line 2790 in >> /home/sanderse/Software/petsc-3.3-p1/src/mat/interface/matrix.c >> [1]PETSC ERROR: PCSetUp_LU() line 160 in >> 
/home/sanderse/Software/petsc-3.3-p1/src/ksp/pc/impls/factor/lu/lu.c >> [1]PETSC ERROR: PCSetUp_ML() line 820 in >> /home/sanderse/Software/petsc-3.3-p1/src/ksp/pc/impls/ml/ml.c >> [0]PETSC ERROR: PCSetUp() line 832 in >> /home/sanderse/Software/petsc-3.3-p1/src/ksp/pc/interface/precon.c >> [0]PETSC ERROR: KSPSetUp() line 278 in >> /home/sanderse/Software/petsc-3.3-p1/src/ksp/ksp/interface/itfunc.c >> PCSetUp() line 832 in >> /home/sanderse/Software/petsc-3.3-p1/src/ksp/pc/interface/precon.c >> [1]PETSC ERROR: KSPSetUp() line 278 in >> /home/sanderse/Software/petsc-3.3-p1/src/ksp/ksp/interface/itfunc.c >> [1]PETSC ERROR: PCSetUp_Redundant() line 176 in >> /home/sanderse/Software/petsc-3.3-p1/src/ksp/pc/impls/redundant/redundant.c >> [1]PETSC ERROR: PCSetUp() line 832 in >> /home/sanderse/Software/petsc-3.3-p1/src/ksp/pc/interface/precon.c >> [1]PETSC ERROR: KSPSetUp() line 278 in >> /home/sanderse/Software/petsc-3.3-p1/src/ksp/ksp/interface/itfunc.c >> [1]PETSC ERROR: PCSetUp_MG() line 729 in >> /home/sanderse/Software/petsc-3.3-p1/src/ksp/pc/impls/mg/mg.c >> [1]PETSC ERROR: PCSetUp_ML() line 820 in >> /home/sanderse/Software/petsc-3.3-p1/src/ksp/pc/impls/ml/ml.c >> [1]PETSC ERROR: PCSetUp() line 832 in >> /home/sanderse/Software/petsc-3.3-p1/src/ksp/pc/interface/precon.c >> [1]PETSC ERROR: KSPSetUp() line 278 in >> /home/sanderse/Software/petsc-3.3-p1/src/ksp/ksp/interface/itfunc.c >> >> >> Op 3 jun 2011, om 14:26 heeft Barry Smith het volgende geschreven: >> >> > >> > It is the direct solver on the the coarse grid that is finding the >> zero pivot (since the coarse grid problem like all the levels has a null >> space). >> > >> > You can use the option -mg_coarse_pc_factor_shift_type nonzero (in >> petsc-3.1 or petsc-dev) Also keep the KSPSetNullSpace() function you are >> using. >> > >> > >> > Barry >> > >> > On Jun 3, 2011, at 3:54 AM, Stijn A. M. Vantieghem wrote: >> > >> >> Dear all, >> >> >> >> I am using PETSc (Fortran interface) to solve a Poisson equation with >> Neumann boundary conditions. Up till now, I managed to do this with Hypre's >> BoomerAMG. Now, I am investigating whether I can improve the performance of >> my code by using ML. However, at execution I receive a zero pivot error; I >> tried to remove the (constant) null space with KSPSetNullSpace, but this >> didn't solve my problem. Do you have an idea of what I'm doing wrong? >> Thanks. >> >> >> >> The relevant portions of my code are as follows: >> >> >> >> !**************************************************** >> >> call KSPCreate(PETSC_COMM_WORLD,ksp,ierr) >> >> call KSPSetOperators(ksp,M,M,DIFFERENT_NONZERO_PATTERN,ierr) >> >> >> >> call >> MatNullSpaceCreate(PETSC_COMM_WORLD,PETSC_TRUE,0,PETSC_NULL,sp,ierr) >> >> call KSPSetNullSpace(ksp,sp,ierr) >> >> >> >> call KSPGetPC(ksp,pc,ierr) >> >> call PCSetType(pc,PCML,ierr) >> >> >> >> call KSPSetFromOptions(ksp,ierr) >> >> ... >> >> call KSPSetInitialGuessNonzero(ksp,PETSC_TRUE,ierr) >> >> call KSPSolve(ksp,petsc_rhs,petsc_pressure,ierr) >> >> !**************************************************** >> >> >> >> and the error message is: >> >> >> >> [0]PETSC ERROR: --------------------- Error Message >> ------------------------------------ >> >> [0]PETSC ERROR: Detected zero pivot in LU factorization >> >> see >> http://www.mcs.anl.gov/petsc/petsc-as/documentation/troubleshooting.html#ZeroPivot >> ! >> >> [0]PETSC ERROR: Zero pivot row 0 value 3.57045e-20 tolerance 1e-12! 
>> >> [0]PETSC ERROR: >> ------------------------------------------------------------------------ >> >> ... >> >> [0]PETSC ERROR: >> ------------------------------------------------------------------------ >> >> [0]PETSC ERROR: MatLUFactorNumeric_SeqAIJ() line 574 in >> src/mat/impls/aij/seq/aijfact.c >> >> [0]PETSC ERROR: MatLUFactorNumeric() line 2587 in >> src/mat/interface/matrix.c >> >> [0]PETSC ERROR: PCSetUp_LU() line 158 in >> src/ksp/pc/impls/factor/lu/lu.c >> >> [0]PETSC ERROR: PCSetUp() line 795 in src/ksp/pc/interface/precon.c >> >> [0]PETSC ERROR: KSPSetUp() line 237 in src/ksp/ksp/interface/itfunc.c >> >> [0]PETSC ERROR: PCSetUp_MG() line 602 in src/ksp/pc/impls/mg/mg.c >> >> [0]PETSC ERROR: PCSetUp_ML() line 668 in src/ksp/pc/impls/ml/ml.c >> >> [0]PETSC ERROR: PCSetUp() line 795 in src/ksp/pc/interface/precon.c >> >> [0]PETSC ERROR: KSPSetUp() line 237 in src/ksp/ksp/interface/itfunc.c >> >> [0]PETSC ERROR: KSPSolve() line 353 in src/ksp/ksp/interface/itfunc.c >> >> >> >> Regards >> >> Stijn >> >> >> >> Stijn A.M. Vantieghem >> >> Earth and Planetary Magnetism >> >> Institute for Geophysics >> >> ETH Z?rich >> >> Sonneggstrasse 5 - CH 8092 Z?rich >> >> tel: +41 44 632 39 90 >> >> e-mail: stijn.vantieghem at erdw.ethz.ch >> >> >> >> >> >> >> >> >> > >> >> >> > > > -- > What most experimenters take for granted before they begin their > experiments is infinitely more interesting than any results to which their > experiments lead. > -- Norbert Wiener > > > -- > Ir. B. Sanderse > > Centrum Wiskunde en Informatica > Science Park 123 > 1098 XG Amsterdam > > t: +31 20 592 4161 > e: sanderse at cwi.nl > > -- What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. -- Norbert Wiener -------------- next part -------------- An HTML attachment was scrubbed... URL: From jedbrown at mcs.anl.gov Tue Jul 17 07:39:19 2012 From: jedbrown at mcs.anl.gov (Jed Brown) Date: Tue, 17 Jul 2012 07:39:19 -0500 Subject: [petsc-users] ML - zero pivot error In-Reply-To: <44E1E554-BFBB-4531-AC6A-66A45F57C893@cwi.nl> References: <69AD671D-5C83-4A1A-9100-6E5ACDA29224@erdw.ethz.ch> <0422B860-8CBC-40BB-986F-0B128E98C85B@mcs.anl.gov> <44E1E554-BFBB-4531-AC6A-66A45F57C893@cwi.nl> Message-ID: On Tue, Jul 17, 2012 at 7:07 AM, Benjamin Sanderse wrote: > The two processor case does not reach far enough that ksp_view gives me > output. The one processor case gives me the output below. > I also added -mg_levels_pc_factor_shift_type nonzero, but still with the > same error. > -mg_coarse_redundant_pc_factor_shift_type nonzero Or use -mg_coarse_pc_type svd and forget about it (provided the coarsest grid is not too big). -------------- next part -------------- An HTML attachment was scrubbed... URL: From nicolas.tardieu at edf.fr Tue Jul 17 06:20:50 2012 From: nicolas.tardieu at edf.fr (Nicolas TARDIEU) Date: Tue, 17 Jul 2012 13:20:50 +0200 Subject: [petsc-users] Number of nonzeros > 2^31-1 Message-ID: Dear PETSc users, I am solving an unstructured finite element-based problem with 500 millions unkwowns with PETSc. The number of nonzeros is greater than 2^31-1. 
Here is the KSPView I get:
----------------------------------------------------------------------------------------------------------------
Matrix Object: 700 MPI processes
  type: mpiaij
  rows=499125000, cols=499125000
  total: nonzeros=-2147483648, allocated nonzeros=-2147483648
  total number of mallocs used during MatSetValues calls =0
    using I-node (on process 0) routines: found 242923 nodes, limit used is 5
----------------------------------------------------------------------------------------------------------------

As you can see, the number of nonzeros is < 0. I would like to check that this is due to the number of nonzeros being greater than 2^31-1.

Here is a short description of the sizes of the different types:
----------------------------------------------------------------------------------------------------------------
Compiled with full precision matrices (default)
sizeof(short) 2  sizeof(int) 4  sizeof(long) 8  sizeof(void*) 8  sizeof(PetscScalar) 8
----------------------------------------------------------------------------------------------------------------

Is there a workaround? Should I worry about the result of my simulation?

Thanks in advance,
Nicolas

Nicolas TARDIEU
Ing. Chercheur
EDF - R&D Dpt AMA
nicolas.tardieu at edf.fr
Tél. : 01 47 65 39 05

This message and any attachments (the 'Message') are intended solely for the addressees. The information contained in this Message is confidential. Any use of information contained in this Message not in accord with its purpose, any dissemination or disclosure, either whole or partial, is prohibited except formal approval. If you are not the addressee, you may not copy, forward, disclose or use any part of it. If you have received this message in error, please delete it and all copies from your system and notify the sender immediately by return message. E-mail communication cannot be guaranteed to be timely secure, error or virus-free.
-------------- next part --------------
An HTML attachment was scrubbed...
URL:
-------------- next part --------------
A non-text attachment was scrubbed...
Name: not available
Type: image/gif
Size: 1816 bytes
Desc: not available
URL:
-------------- next part --------------
A non-text attachment was scrubbed...
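A quick way to confirm the overflow without reconfiguring: MatGetInfo() returns the nonzero counts as PetscLogDouble, i.e. in floating point, so the value itself is not limited to 2^31-1; the negative number in the KSPView output above is an artifact of 32-bit integer formatting. The helper name below is invented and it assumes A is an already assembled matrix; the permanent fix, as Matt notes later in this thread, is to configure PETSc with --with-64-bit-indices.

    #include <petscmat.h>

    /* Hypothetical helper: print the global nonzero counts of an assembled
       matrix.  MatInfo stores these as PetscLogDouble (a double), so the
       printed values should not wrap around the way the counter shown by
       KSPView does. */
    PetscErrorCode ReportNonzeros(Mat A)
    {
      MatInfo        info;
      PetscErrorCode ierr;

      ierr = MatGetInfo(A,MAT_GLOBAL_SUM,&info);CHKERRQ(ierr);
      ierr = PetscPrintf(PETSC_COMM_WORLD,"nonzeros used %g, allocated %g\n",
                         info.nz_used,info.nz_allocated);CHKERRQ(ierr);
      return 0;
    }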
Name: not available Type: image/gif Size: 1151 bytes Desc: not available URL: From nies.david at googlemail.com Tue Jul 17 09:04:28 2012 From: nies.david at googlemail.com (David Nies) Date: Tue, 17 Jul 2012 16:04:28 +0200 Subject: [petsc-users] Determine if PETSc has been installed with HYPRE support Message-ID: Hello all! Is there a way to find out if a given PETSc installation has been configured and built with '--with-hypre=1'? The background is that I want to write an M4 test for our local build system to see if we should support PETSc's HYPRE capabilities. Of couse, I could test if there is a 'libHYPRE.a' present, but that would only prove that HYPRE is installed, not that PETSc supports HYPRE... Thank you very much in advance! Yours -David From balay at mcs.anl.gov Tue Jul 17 09:50:56 2012 From: balay at mcs.anl.gov (Satish Balay) Date: Tue, 17 Jul 2012 09:50:56 -0500 (CDT) Subject: [petsc-users] Determine if PETSc has been installed with HYPRE support In-Reply-To: References: Message-ID: If PETSc is installed with hypre - then the corresponding petscconf.h will have a flag PETSC_HAVE_HYPRE defined. [You could use this flag directly in your code to enable-disable corresponding functionality.] Satish On Tue, 17 Jul 2012, David Nies wrote: > Hello all! > > Is there a way to find out if a given PETSc installation has been > configured and built with '--with-hypre=1'? The background is that I > want to write an M4 test for our local build system to see if we > should support PETSc's HYPRE capabilities. Of couse, I could test if > there is a 'libHYPRE.a' present, but that would only prove that HYPRE > is installed, not that PETSc supports HYPRE... > > Thank you very much in advance! > > Yours > -David > From knepley at gmail.com Tue Jul 17 09:58:24 2012 From: knepley at gmail.com (Matthew Knepley) Date: Tue, 17 Jul 2012 09:58:24 -0500 Subject: [petsc-users] Number of nonzeros > 2^31-1 In-Reply-To: References: Message-ID: On Tue, Jul 17, 2012 at 6:20 AM, Nicolas TARDIEU wrote: > > Dear PETSc users, > > I am solving an unstructured finite element-based problem with 500 > millions unkwowns with PETSc. > The number of nonzeros is greater than 2^31-1. > Here is the KSPView I get : > > ---------------------------------------------------------------------------------------------------------------- > Matrix Object: 700 MPI processes > type: mpiaij > rows=499125000, cols=499125000 > total: nonzeros=-2147483648, allocated nonzeros=-2147483648 > total number of mallocs used during MatSetValues calls =0 > using I-node (on process 0) routines: found 242923 nodes, limit used > is 5 > > ---------------------------------------------------------------------------------------------------------------- > > As you can see, The number of nonzeros is <0. > I would like to check that this is due to the number of nonzeros being > greater than 2^31-1. > > Here is a short description of the size of differents types : > > ---------------------------------------------------------------------------------------------------------------- > Compiled with full precision matrices (default) > sizeof(short) 2 sizeof(int) 4 sizeof(long) 8 sizeof(void*) 8 > sizeof(PetscScalar) 8 > > ---------------------------------------------------------------------------------------------------------------- > > Is there a workaround? Should I worry about the result of my simulation? > You could configure using --with-64-bit-indices. Matt > Thanks in advance, > Nicolas *Nicolas TARDIEU** > Ing. 
Chercheur* > EDF - R&D Dpt AMA > > *nicolas.tardieu at edf.fr* > T?l. : 01 47 65 39 05 Un geste simple pour l'environnement, n'imprimez > ce message que si vous en avez l'utilit?. > > > Ce message et toutes les pi?ces jointes (ci-apr?s le 'Message') sont > ?tablis ? l'intention exclusive des destinataires et les informations qui y > figurent sont strictement confidentielles. Toute utilisation de ce Message > non conforme ? sa destination, toute diffusion ou toute publication totale > ou partielle, est interdite sauf autorisation expresse. > > Si vous n'?tes pas le destinataire de ce Message, il vous est interdit de > le copier, de le faire suivre, de le divulguer ou d'en utiliser tout ou > partie. Si vous avez re?u ce Message par erreur, merci de le supprimer de > votre syst?me, ainsi que toutes ses copies, et de n'en garder aucune trace > sur quelque support que ce soit. Nous vous remercions ?galement d'en > avertir imm?diatement l'exp?diteur par retour du message. > > Il est impossible de garantir que les communications par messagerie > ?lectronique arrivent en temps utile, sont s?curis?es ou d?nu?es de toute > erreur ou virus. > ____________________________________________________ > > This message and any attachments (the 'Message') are intended solely for > the addressees. The information contained in this Message is confidential. > Any use of information contained in this Message not in accord with its > purpose, any dissemination or disclosure, either whole or partial, is > prohibited except formal approval. > > If you are not the addressee, you may not copy, forward, disclose or use > any part of it. If you have received this message in error, please delete > it and all copies from your system and notify the sender immediately by > return message. > > E-mail communication cannot be guaranteed to be timely secure, error or > virus-free. > -- What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. -- Norbert Wiener -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: not available Type: image/gif Size: 1151 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: not available Type: image/gif Size: 1816 bytes Desc: not available URL: From u.tabak at tudelft.nl Tue Jul 17 10:00:28 2012 From: u.tabak at tudelft.nl (Umut Tabak) Date: Tue, 17 Jul 2012 17:00:28 +0200 Subject: [petsc-users] Determine if PETSc has been installed with HYPRE support In-Reply-To: References: Message-ID: <50057E0C.2030309@tudelft.nl> On 07/17/2012 04:04 PM, David Nies wrote: > Hello all! > > Is there a way to find out if a given PETSc installation has been > configured and built with '--with-hypre=1'? The background is that I > want to write an M4 test for our local build system to see if we > should support PETSc's HYPRE capabilities. Of couse, I could test if > there is a 'libHYPRE.a' present, but that would only prove that HYPRE > is installed, not that PETSc supports HYPRE... > > Thank you very much in advance! > > Yours > -David From nies.david at googlemail.com Tue Jul 17 10:02:44 2012 From: nies.david at googlemail.com (David Nies) Date: Tue, 17 Jul 2012 17:02:44 +0200 Subject: [petsc-users] Determine if PETSc has been installed with HYPRE support In-Reply-To: References: Message-ID: Great! 
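To illustrate the flag Satish describes above: PETSC_HAVE_HYPRE is defined in the generated petscconf.h (typically under $PETSC_DIR/$PETSC_ARCH/include for builds of this era), so a build system can search that file for the string, and application code can test the macro directly. A minimal sketch of the in-code test; the MYAPP_USE_HYPRE name is invented for illustration:

    #include <petscsys.h>        /* pulls in the generated petscconf.h */

    #if defined(PETSC_HAVE_HYPRE)
    #define MYAPP_USE_HYPRE 1    /* PETSc was configured with hypre; enable hypre code paths */
    #else
    #define MYAPP_USE_HYPRE 0
    #endif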
Thank you, that was exactly what I was looking for! Could not find it in the documentation, so I asked here. Cheers, -David On Tue, Jul 17, 2012 at 4:50 PM, Satish Balay wrote: > If PETSc is installed with hypre - then the corresponding petscconf.h > will have a flag PETSC_HAVE_HYPRE defined. > > [You could use this flag directly in your code to enable-disable > corresponding functionality.] > > Satish > > On Tue, 17 Jul 2012, David Nies wrote: > >> Hello all! >> >> Is there a way to find out if a given PETSc installation has been >> configured and built with '--with-hypre=1'? The background is that I >> want to write an M4 test for our local build system to see if we >> should support PETSc's HYPRE capabilities. Of couse, I could test if >> there is a 'libHYPRE.a' present, but that would only prove that HYPRE >> is installed, not that PETSc supports HYPRE... >> >> Thank you very much in advance! >> >> Yours >> -David >> > From B.Sanderse at cwi.nl Tue Jul 17 10:38:55 2012 From: B.Sanderse at cwi.nl (Benjamin Sanderse) Date: Tue, 17 Jul 2012 17:38:55 +0200 Subject: [petsc-users] ML - zero pivot error In-Reply-To: References: <69AD671D-5C83-4A1A-9100-6E5ACDA29224@erdw.ethz.ch> <0422B860-8CBC-40BB-986F-0B128E98C85B@mcs.anl.gov> <44E1E554-BFBB-4531-AC6A-66A45F57C893@cwi.nl> Message-ID: The first one works, thanks. Op 17 jul 2012, om 14:39 heeft Jed Brown het volgende geschreven: > On Tue, Jul 17, 2012 at 7:07 AM, Benjamin Sanderse wrote: > The two processor case does not reach far enough that ksp_view gives me output. The one processor case gives me the output below. > I also added -mg_levels_pc_factor_shift_type nonzero, but still with the same error. > > -mg_coarse_redundant_pc_factor_shift_type nonzero > > > Or use -mg_coarse_pc_type svd and forget about it (provided the coarsest grid is not too big). -------------- next part -------------- An HTML attachment was scrubbed... URL: From t.hisch at gmail.com Tue Jul 17 12:31:50 2012 From: t.hisch at gmail.com (Thomas Hisch) Date: Tue, 17 Jul 2012 19:31:50 +0200 Subject: [petsc-users] Store only Matrix Structure (Binary) In-Reply-To: References: Message-ID: On Mon, Jul 16, 2012 at 11:44 PM, Matthew Knepley wrote: > On Mon, Jul 16, 2012 at 3:47 PM, Thomas Hisch wrote: >> >> Hello list, >> >> is it possible to store just the nonzero pattern of a large (sparse) >> matrix in a file? If I write the whole matrix to disk >> (PetscViewerBinaryOpen/MatView) its size is about 30MB. As I only need >> its structure it would be a good idea to load and store just the >> non-zero pattern of the matrix. >> >> Has anyone tried to do that and are there corresponding functions in >> petsc-dev? > > > This is not in PETSc currrently. It would not be hard to do by commenting > out the > value parts in the source. In which file do I have to look at ? Thomas From knepley at gmail.com Tue Jul 17 12:33:56 2012 From: knepley at gmail.com (Matthew Knepley) Date: Tue, 17 Jul 2012 12:33:56 -0500 Subject: [petsc-users] Store only Matrix Structure (Binary) In-Reply-To: References: Message-ID: On Tue, Jul 17, 2012 at 12:31 PM, Thomas Hisch wrote: > On Mon, Jul 16, 2012 at 11:44 PM, Matthew Knepley > wrote: > > On Mon, Jul 16, 2012 at 3:47 PM, Thomas Hisch wrote: > >> > >> Hello list, > >> > >> is it possible to store just the nonzero pattern of a large (sparse) > >> matrix in a file? If I write the whole matrix to disk > >> (PetscViewerBinaryOpen/MatView) its size is about 30MB. 
As I only need > >> its structure it would be a good idea to load and store just the > >> non-zero pattern of the matrix. > >> > >> Has anyone tried to do that and are there corresponding functions in > >> petsc-dev? > > > > > > This is not in PETSc currrently. It would not be hard to do by commenting > > out the > > value parts in the source. > > In which file do I have to look at ? http://petsc.cs.iit.edu/petsc/petsc-dev/annotate/96539c3bad2e/src/mat/impls/aij/mpi/mpiaij.c#l1290 Matt > > Thomas > -- What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. -- Norbert Wiener -------------- next part -------------- An HTML attachment was scrubbed... URL: From mrosso at uci.edu Tue Jul 17 12:43:52 2012 From: mrosso at uci.edu (Michele Rosso) Date: Tue, 17 Jul 2012 10:43:52 -0700 Subject: [petsc-users] Parallel Incomplete Choleski Factorization In-Reply-To: References: <50006A51.5090207@uci.edu> Message-ID: <5005A458.2020905@uci.edu> Hi Hong, I have some problems with the block jacobi preconditioner. I am solving a 3D Poisson equation with periodic BCs, discretized by using finite differences (7-points stencil). Thus the problem is singular and the nullspace has to be removed. If I solve with the PCG method + JACOBI preconditioner the results are fine. If I use PCG + Block Jacobi preconditioner + ICC on each block the results are fine on the majority of the processors, but on few of them the error is very large. Do you have any idea/suggestions on how to fix this problem? This is the fragment of code I am using ( petsc 3.1 and Fortran 90): PetscErrorCode petsc_err Mat A PC pc, subpc KSP ksp KSP subksp(1) : : : call KSPCreate(PETSC_COMM_WORLD,ksp,petsc_err) call KSPSetOperators(ksp,A,A,DIFFERENT_NONZERO_PATTERN,petsc_err) call KSPGetPC(ksp,pc,petsc_err) call PCSetType(pc,PCBJACOBI,petsc_err) call KSPSetUp(ksp,petsc_err) ! KSP context for each single block call PCBJacobiGetSubKSP(pc,PETSC_NULL_INTEGER,PETSC_NULL_INTEGER,subksp(1),petsc_err) call KSPGetPC(subksp(1),subpc,petsc_err) call PCSetType(subpc,PCICC,petsc_err) call KSPSetType(subksp(1),KSPCG, petsc_err) call KSPSetTolerances(subksp(1),tol ,PETSC_DEFAULT_DOUBLE_PRECISION,& & PETSC_DEFAULT_DOUBLE_PRECISION,PETSC_DEFAULT_INTEGER,petsc_err) ! Remove nullspace from the singular system (Check PETSC_NULL) call MatNullSpaceCreate(MPI_COMM_WORLD,PETSC_TRUE,0,PETSC_NULL,nullspace,petsc_err) call KSPSetNullSpace(ksp, nullspace, petsc_err) call MatNullSpaceRemove(nullspace, b, PETSC_NULL,petsc_err) call KSPSolve(ksp,b,x,petsc_err) Thank you, Michele On 07/13/2012 12:14 PM, Hong Zhang wrote: > Michele : > > > I need to use the ICC factorization as preconditioner, but I > noticed that no parallel version is supported. > Is that correct? > > Correct. > > If so, is there a work around, like building the preconditioner > "by hand" by using PETSc functions? > > You may try block jacobi with icc in the blocks '-ksp_type cg > -pc_type bjacobi -sub_pc_type icc' > > Hong > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From gmulas at oa-cagliari.inaf.it Tue Jul 17 12:56:40 2012 From: gmulas at oa-cagliari.inaf.it (Giacomo Mulas) Date: Tue, 17 Jul 2012 19:56:40 +0200 (CEST) Subject: [petsc-users] request suggestions for most appropriate eigenvalue solver In-Reply-To: <62F26FC3-EF61-4A93-8749-19913849D026@dsic.upv.es> References: <62F26FC3-EF61-4A93-8749-19913849D026@dsic.upv.es> Message-ID: Hello Jose. 
Some months ago, I wrote to the petsc-users mailing list asking if it would be possible to use an iterative solver in slepc which converges eigenvectors in order of maximum projection on a given basis vector. Back then you told me you might look into it in some time and let me know. Since some time passed, and you might quite understandably not remember about this, I am quoting that email exchange below. In the meanwhile, I found that there the Davidson algorithm and its derivatives (e.g. block Davidson) appears to behave as I would like, or close to it. You probably know them, but I send you a couple of references in any case: E. Davidson, J. Comput. Phys. 17, 87 ?1975?. E. Davidson, Comput. Phys. Commun. 53, 49 ?1989?. F. Ribeiro, C. Iung, and C. Leforestier, Chem. Phys. Lett. 362, 199 ?2002?. F. Ribeiro, C. Iung, and C. Leforestier, J. Theor. Comput. Chem. 2, 609 ?2003?. Any hope of implementing something like that (or some other algorithm to obtain the same behaviour) in Slepc/Petsc any time soon? Thanks in advance, bye Giacomo Mulas On Wed, 16 May 2012, Jose E. Roman wrote: > El 16/05/2012, a las 13:12, Giacomo Mulas escribi?: > >> Hello. This is a slepc issue. >> >> I have developed a code to compute anharmonic corrections to a harmonic >> vibrational analysis of a molecule, including an explicit treatment of >> accidental resonances. >> This results in setting up a number of eigenvalue problems "around" pure harmonic states, which are my basis set. These eigenvalue problems are >> sparse, and I only need a relatively small subset of the solutions. However, >> what is unusual is _which_ eigenpairs I want: I want the eigenpairs whose >> eigenvectors span a subspace which covers (within a predetermined accuracy) >> the few pure harmonic states I am interested in. That is, I want enough >> eigenpairs that the projection of my pure harmonic state on these >> eigenvectors is "close enough" to it. >> >> So far, I am relying on spectral slicing to obtain the eigenpairs in a >> neighbourhood of the pure harmonic states I start from, increasing >> neighbourhood radius until I cover the starting state adequately. However, >> this results in a lot of waste: many eigenstates are accidentally close to >> my target harmonic states, with little or no projection on them. I end up >> computing 1-2 orders of magnitude more states than the needed ones (checked >> a posteriori). >> >> The best, for my needs, would be to be able to specify, in slepc, that my >> target solutions are the ones with the highest projection on some vector >> (or, better, subspace spanned by some vectors), instead of using a selection >> criterion based on eigenvalues closest to a target or in an interval. Is >> there some (not too complex) way to "convince" slepc to work like this? I >> can think of providing my target vectors (one by one, or a linear >> combination) as a starting point to generate the Krylov subspace, but then >> how do I select eigenvectors to really be the ones I want? >> >> Thanks in advance >> Giacomo > > Currently there is no way to do this. But we have had a couple of similar requests before. We are now reorganizing parts of code within SLEPc, so I will think if it is viable to provide a solution for this. I will get back to you. > > Jose > > -- _________________________________________________________________ Giacomo Mulas _________________________________________________________________ OSSERVATORIO ASTRONOMICO DI CAGLIARI Str. 54, Loc. Poggio dei Pini * 09012 Capoterra (CA) Tel. 
(OAC): +39 070 71180 248 Fax : +39 070 71180 222 Tel. (UNICA): +39 070 675 4916 _________________________________________________________________ "When the storms are raging around you, stay right where you are" (Freddy Mercury) _________________________________________________________________ From hzhang at mcs.anl.gov Tue Jul 17 13:03:32 2012 From: hzhang at mcs.anl.gov (Hong Zhang) Date: Tue, 17 Jul 2012 13:03:32 -0500 Subject: [petsc-users] Parallel Incomplete Choleski Factorization In-Reply-To: <5005A458.2020905@uci.edu> References: <50006A51.5090207@uci.edu> <5005A458.2020905@uci.edu> Message-ID: Michele : > > I have some problems with the block jacobi preconditioner. > I am solving a 3D Poisson equation with periodic BCs, discretized by > using finite differences (7-points stencil). > Thus the problem is singular and the nullspace has to be removed. > For Poisson equations, multigrid precondition should be the method of choice. If I solve with the PCG method + JACOBI preconditioner the results are fine. > If I use PCG + Block Jacobi preconditioner + ICC on each block the results > are fine on the majority of the processors, > but on few of them the error is very large. > How do you know " few of them"? Do you have any idea/suggestions on how to fix this problem? > This is the fragment of code I am using ( petsc 3.1 and Fortran 90): > Please update to petsc-3.3. petsc-3.1 is too old. > > PetscErrorCode petsc_err > Mat A > PC pc, subpc > KSP ksp > KSP subksp(1) > : > : > : > call KSPCreate(PETSC_COMM_WORLD,ksp,petsc_err) > call KSPSetOperators(ksp,A,A,DIFFERENT_NONZERO_PATTERN,petsc_err) > call KSPSetType(ksp,KSPCG, ) !the default type is gmres. I guess you want CG call KSPGetPC(ksp,pc,petsc_err) > call PCSetType(pc,PCBJACOBI,petsc_err) > ! call KSPSetUp(ksp,petsc_err) call this at the end > > ! KSP context for each single block > call > PCBJacobiGetSubKSP(pc,PETSC_NULL_INTEGER,PETSC_NULL_INTEGER,subksp(1),petsc_err) > > call KSPGetPC(subksp(1),subpc,petsc_err) > call PCSetType(subpc,PCICC,petsc_err) > > call KSPSetType(subksp(1),KSPPREONLY petsc_err) > > call KSPSetTolerances(subksp(1),tol ,PETSC_DEFAULT_DOUBLE_PRECISION,& > & PETSC_DEFAULT_DOUBLE_PRECISION,PETSC_DEFAULT_INTEGER,petsc_err) > > ! Remove nullspace from the singular system (Check PETSC_NULL) > call > MatNullSpaceCreate(MPI_COMM_WORLD,PETSC_TRUE,0,PETSC_NULL,nullspace,petsc_err) > call KSPSetNullSpace(ksp, nullspace, petsc_err) > call MatNullSpaceRemove(nullspace, b, PETSC_NULL,petsc_err) > > call KSPSolve(ksp,b,x,petsc_err) > I modified your code slightly. All these options can be provided at runtime: '-ksp_type cg -pc_type bjacobi -sub_pc_type icc' Hong > > > > > > > > > > On 07/13/2012 12:14 PM, Hong Zhang wrote: > > Michele : > >> >> I need to use the ICC factorization as preconditioner, but I noticed that >> no parallel version is supported. >> Is that correct? >> > Correct. > > >> If so, is there a work around, like building the preconditioner "by >> hand" by using PETSc functions? >> > You may try block jacobi with icc in the blocks '-ksp_type cg -pc_type > bjacobi -sub_pc_type icc' > > Hong > >> > > > -------------- next part -------------- An HTML attachment was scrubbed... 
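For reference, the sequence Hong sketches above looks roughly as follows in C with the petsc-3.3 calling conventions, leaving the solver choices to the options database so that '-ksp_type cg -pc_type bjacobi -sub_pc_type icc' (or a multigrid preconditioner such as '-pc_type ml') can be tried at run time without recompiling. The function name is invented, and A, b, x are assumed to be created and assembled elsewhere:

    #include <petscksp.h>

    /* Hypothetical helper: solve the singular (periodic/Neumann) Poisson system.
       Solver and preconditioner are taken from the options database, e.g.
       -ksp_type cg -pc_type bjacobi -sub_pc_type icc                     */
    PetscErrorCode SolveSingularPoisson(Mat A,Vec b,Vec x)
    {
      KSP            ksp;
      MatNullSpace   nullsp;
      PetscErrorCode ierr;

      ierr = MatNullSpaceCreate(PETSC_COMM_WORLD,PETSC_TRUE,0,PETSC_NULL,&nullsp);CHKERRQ(ierr);
      ierr = KSPCreate(PETSC_COMM_WORLD,&ksp);CHKERRQ(ierr);
      ierr = KSPSetOperators(ksp,A,A,DIFFERENT_NONZERO_PATTERN);CHKERRQ(ierr);
      ierr = KSPSetNullSpace(ksp,nullsp);CHKERRQ(ierr);              /* remove the constant null space   */
      ierr = MatNullSpaceRemove(nullsp,b,PETSC_NULL);CHKERRQ(ierr);  /* make the right-hand side consistent */
      ierr = KSPSetFromOptions(ksp);CHKERRQ(ierr);
      ierr = KSPSolve(ksp,b,x);CHKERRQ(ierr);
      ierr = KSPDestroy(&ksp);CHKERRQ(ierr);
      ierr = MatNullSpaceDestroy(&nullsp);CHKERRQ(ierr);
      return 0;
    }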
URL: From bsmith at mcs.anl.gov Tue Jul 17 13:31:06 2012 From: bsmith at mcs.anl.gov (Barry Smith) Date: Tue, 17 Jul 2012 13:31:06 -0500 Subject: [petsc-users] Adding vectors of different dimension In-Reply-To: <500530F2.7050700@gmail.com> References: <500530F2.7050700@gmail.com> Message-ID: On Jul 17, 2012, at 4:31 AM, TAY wee-beng wrote: > Hi, > > I have 3 vectors of different dimension, create using 2 DM (da_dof1,da_dof2), with dof = 1 and 2. > > They are declared as: > > Vec u,v,duv > > call DMCreateLocalVector(da_dof1,u,ierr) > > call DMCreateLocalVector(da_dof1,v,ierr) > > call DMCreateLocalVector(da_dof2,duv,ierr) > > How can I add duv to u and v to get new values for u,v? > > I'm currently using DMDAVecGetArrayF90 and DMDAVecRestoreArrayF90 to access the arrays. Then you just need to write the FORTARAN code to do this. Loop over the local indices and do something like uarray(i,j) = uarray(i,j) + duvarray(0,i,j) varray(i,j) = varray(i,j) + duvarray(1,i,j) > > Thanks > > -- > Yours sincerely, > > TAY wee-beng > From mrosso at uci.edu Tue Jul 17 13:36:32 2012 From: mrosso at uci.edu (Michele Rosso) Date: Tue, 17 Jul 2012 11:36:32 -0700 Subject: [petsc-users] Parallel Incomplete Choleski Factorization In-Reply-To: References: <50006A51.5090207@uci.edu> <5005A458.2020905@uci.edu> Message-ID: <5005B0B0.8030007@uci.edu> On 07/17/2012 11:03 AM, Hong Zhang wrote: > Michele : > > > I have some problems with the block jacobi preconditioner. > I am solving a 3D Poisson equation with periodic BCs, discretized > by using finite differences (7-points stencil). > Thus the problem is singular and the nullspace has to be removed. > > > For Poisson equations, multigrid precondition should be the method of > choice. Thank you for the suggestion. I do not have any experience with multigrid, but I will try. > > If I solve with the PCG method + JACOBI preconditioner the results > are fine. > If I use PCG + Block Jacobi preconditioner + ICC on each block the > results are fine on the majority of the processors, > but on few of them the error is very large. > > How do you know " few of them"? Basically the solution is not correct on some grid points, say 6 grid nodes out of 64^3. The 6 grid nodes with problems belongs to 2 of the 128 processors I am using. > > Do you have any idea/suggestions on how to fix this problem? > This is the fragment of code I am using ( petsc 3.1 and Fortran 90): > > Please update to petsc-3.3. petsc-3.1 is too old. I would do that but the version installed on the platform (Intrepid at ALCF) I am working on is 3.1-p2. > > PetscErrorCode petsc_err > Mat A > PC pc, subpc > KSP ksp > KSP subksp(1) > : > : > : > call KSPCreate(PETSC_COMM_WORLD,ksp,petsc_err) > call KSPSetOperators(ksp,A,A,DIFFERENT_NONZERO_PATTERN,petsc_err) > > call KSPSetType(ksp,KSPCG, ) !the default type is gmres. I guess you > want CG > > call KSPGetPC(ksp,pc,petsc_err) > call PCSetType(pc,PCBJACOBI,petsc_err) > ! call KSPSetUp(ksp,petsc_err) call this at the end > > ! KSP context for each single block > call > PCBJacobiGetSubKSP(pc,PETSC_NULL_INTEGER,PETSC_NULL_INTEGER,subksp(1),petsc_err) > > call KSPGetPC(subksp(1),subpc,petsc_err) > call PCSetType(subpc,PCICC,petsc_err) > > call KSPSetType(subksp(1),KSPPREONLY petsc_err) > > call KSPSetTolerances(subksp(1),tol > ,PETSC_DEFAULT_DOUBLE_PRECISION,& > & > PETSC_DEFAULT_DOUBLE_PRECISION,PETSC_DEFAULT_INTEGER,petsc_err) > > ! 
Remove nullspace from the singular system (Check PETSC_NULL) > call > MatNullSpaceCreate(MPI_COMM_WORLD,PETSC_TRUE,0,PETSC_NULL,nullspace,petsc_err) > call KSPSetNullSpace(ksp, nullspace, petsc_err) > call MatNullSpaceRemove(nullspace, b, PETSC_NULL,petsc_err) > > call KSPSolve(ksp,b,x,petsc_err) > > > I modified your code slightly. All these options can be provided at > runtime: > '-ksp_type cg -pc_type bjacobi -sub_pc_type icc' > > Hong > > > > > > > > > > > On 07/13/2012 12:14 PM, Hong Zhang wrote: >> Michele : >> >> >> I need to use the ICC factorization as preconditioner, but I >> noticed that no parallel version is supported. >> Is that correct? >> >> Correct. >> >> If so, is there a work around, like building the >> preconditioner "by hand" by using PETSc functions? >> >> You may try block jacobi with icc in the blocks '-ksp_type cg >> -pc_type bjacobi -sub_pc_type icc' >> >> Hong >> >> > > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From bsmith at mcs.anl.gov Tue Jul 17 14:13:55 2012 From: bsmith at mcs.anl.gov (Barry Smith) Date: Tue, 17 Jul 2012 14:13:55 -0500 Subject: [petsc-users] Parallel Incomplete Choleski Factorization In-Reply-To: <5005B0B0.8030007@uci.edu> References: <50006A51.5090207@uci.edu> <5005A458.2020905@uci.edu> <5005B0B0.8030007@uci.edu> Message-ID: <86600BC0-482F-4E83-9A88-4F0BE1E28406@mcs.anl.gov> >> Please update to petsc-3.3. petsc-3.1 is too old. > I would do that but the version installed on the platform (Intrepid at ALCF) I am working on is 3.1-p2. Satish, Please fix this. Thanks Barry On Jul 17, 2012, at 1:36 PM, Michele Rosso wrote: > > On 07/17/2012 11:03 AM, Hong Zhang wrote: >> Michele : >> >> I have some problems with the block jacobi preconditioner. >> I am solving a 3D Poisson equation with periodic BCs, discretized by using finite differences (7-points stencil). >> Thus the problem is singular and the nullspace has to be removed. >> >> For Poisson equations, multigrid precondition should be the method of >> choice. > Thank you for the suggestion. I do not have any experience with multigrid, but I will try. >> >> If I solve with the PCG method + JACOBI preconditioner the results are fine. >> If I use PCG + Block Jacobi preconditioner + ICC on each block the results are fine on the majority of the processors, >> but on few of them the error is very large. >> >> How do you know " few of them"? > Basically the solution is not correct on some grid points, say 6 grid nodes out of 64^3. The 6 grid nodes with problems belongs to 2 of the 128 processors > I am using. >> >> Do you have any idea/suggestions on how to fix this problem? >> This is the fragment of code I am using ( petsc 3.1 and Fortran 90): >> >> Please update to petsc-3.3. petsc-3.1 is too old. > I would do that but the version installed on the platform (Intrepid at ALCF) I am working on is 3.1-p2. > >> >> PetscErrorCode petsc_err >> Mat A >> PC pc, subpc >> KSP ksp >> KSP subksp(1) >> : >> : >> : >> call KSPCreate(PETSC_COMM_WORLD,ksp,petsc_err) >> call KSPSetOperators(ksp,A,A,DIFFERENT_NONZERO_PATTERN,petsc_err) >> >> call KSPSetType(ksp,KSPCG, ) !the default type is gmres. I guess you want CG >> >> call KSPGetPC(ksp,pc,petsc_err) >> call PCSetType(pc,PCBJACOBI,petsc_err) >> ! call KSPSetUp(ksp,petsc_err) call this at the end >> >> ! 
KSP context for each single block >> call PCBJacobiGetSubKSP(pc,PETSC_NULL_INTEGER,PETSC_NULL_INTEGER,subksp(1),petsc_err) >> call KSPGetPC(subksp(1),subpc,petsc_err) >> call PCSetType(subpc,PCICC,petsc_err) >> >> call KSPSetType(subksp(1),KSPPREONLY petsc_err) >> >> call KSPSetTolerances(subksp(1),tol ,PETSC_DEFAULT_DOUBLE_PRECISION,& >> & PETSC_DEFAULT_DOUBLE_PRECISION,PETSC_DEFAULT_INTEGER,petsc_err) >> >> ! Remove nullspace from the singular system (Check PETSC_NULL) >> call MatNullSpaceCreate(MPI_COMM_WORLD,PETSC_TRUE,0,PETSC_NULL,nullspace,petsc_err) >> call KSPSetNullSpace(ksp, nullspace, petsc_err) >> call MatNullSpaceRemove(nullspace, b, PETSC_NULL,petsc_err) >> >> call KSPSolve(ksp,b,x,petsc_err) >> >> I modified your code slightly. All these options can be provided at runtime: >> '-ksp_type cg -pc_type bjacobi -sub_pc_type icc' >> >> Hong >> >> >> >> >> >> >> >> >> >> On 07/13/2012 12:14 PM, Hong Zhang wrote: >>> Michele : >>> >>> I need to use the ICC factorization as preconditioner, but I noticed that no parallel version is supported. >>> Is that correct? >>> Correct. >>> >>> If so, is there a work around, like building the preconditioner "by hand" by using PETSc functions? >>> You may try block jacobi with icc in the blocks '-ksp_type cg -pc_type bjacobi -sub_pc_type icc' >>> >>> Hong >>> >> >> >> > > From mrosso at uci.edu Tue Jul 17 17:28:17 2012 From: mrosso at uci.edu (Michele Rosso) Date: Tue, 17 Jul 2012 15:28:17 -0700 Subject: [petsc-users] Parallel Incomplete Choleski Factorization In-Reply-To: <86600BC0-482F-4E83-9A88-4F0BE1E28406@mcs.anl.gov> References: <50006A51.5090207@uci.edu> <5005A458.2020905@uci.edu> <5005B0B0.8030007@uci.edu> <86600BC0-482F-4E83-9A88-4F0BE1E28406@mcs.anl.gov> Message-ID: <5005E701.4030102@uci.edu> Thank a lot. Please let me know when version 3.3 is available. Michele On 07/17/2012 12:13 PM, Barry Smith wrote: >>> Please update to petsc-3.3. petsc-3.1 is too old. >> I would do that but the version installed on the platform (Intrepid at ALCF) I am working on is 3.1-p2. > Satish, > > Please fix this. > > Thanks > > Barry > > On Jul 17, 2012, at 1:36 PM, Michele Rosso wrote: > >> On 07/17/2012 11:03 AM, Hong Zhang wrote: >>> Michele : >>> >>> I have some problems with the block jacobi preconditioner. >>> I am solving a 3D Poisson equation with periodic BCs, discretized by using finite differences (7-points stencil). >>> Thus the problem is singular and the nullspace has to be removed. >>> >>> For Poisson equations, multigrid precondition should be the method of >>> choice. >> Thank you for the suggestion. I do not have any experience with multigrid, but I will try. >>> If I solve with the PCG method + JACOBI preconditioner the results are fine. >>> If I use PCG + Block Jacobi preconditioner + ICC on each block the results are fine on the majority of the processors, >>> but on few of them the error is very large. >>> >>> How do you know " few of them"? >> Basically the solution is not correct on some grid points, say 6 grid nodes out of 64^3. The 6 grid nodes with problems belongs to 2 of the 128 processors >> I am using. >>> Do you have any idea/suggestions on how to fix this problem? >>> This is the fragment of code I am using ( petsc 3.1 and Fortran 90): >>> >>> Please update to petsc-3.3. petsc-3.1 is too old. >> I would do that but the version installed on the platform (Intrepid at ALCF) I am working on is 3.1-p2. 
>> >>> PetscErrorCode petsc_err >>> Mat A >>> PC pc, subpc >>> KSP ksp >>> KSP subksp(1) >>> : >>> : >>> : >>> call KSPCreate(PETSC_COMM_WORLD,ksp,petsc_err) >>> call KSPSetOperators(ksp,A,A,DIFFERENT_NONZERO_PATTERN,petsc_err) >>> >>> call KSPSetType(ksp,KSPCG, ) !the default type is gmres. I guess you want CG >>> >>> call KSPGetPC(ksp,pc,petsc_err) >>> call PCSetType(pc,PCBJACOBI,petsc_err) >>> ! call KSPSetUp(ksp,petsc_err) call this at the end >>> >>> ! KSP context for each single block >>> call PCBJacobiGetSubKSP(pc,PETSC_NULL_INTEGER,PETSC_NULL_INTEGER,subksp(1),petsc_err) >>> call KSPGetPC(subksp(1),subpc,petsc_err) >>> call PCSetType(subpc,PCICC,petsc_err) >>> >>> call KSPSetType(subksp(1),KSPPREONLY petsc_err) >>> >>> call KSPSetTolerances(subksp(1),tol ,PETSC_DEFAULT_DOUBLE_PRECISION,& >>> & PETSC_DEFAULT_DOUBLE_PRECISION,PETSC_DEFAULT_INTEGER,petsc_err) >>> >>> ! Remove nullspace from the singular system (Check PETSC_NULL) >>> call MatNullSpaceCreate(MPI_COMM_WORLD,PETSC_TRUE,0,PETSC_NULL,nullspace,petsc_err) >>> call KSPSetNullSpace(ksp, nullspace, petsc_err) >>> call MatNullSpaceRemove(nullspace, b, PETSC_NULL,petsc_err) >>> >>> call KSPSolve(ksp,b,x,petsc_err) >>> >>> I modified your code slightly. All these options can be provided at runtime: >>> '-ksp_type cg -pc_type bjacobi -sub_pc_type icc' >>> >>> Hong >>> >>> >>> >>> >>> >>> >>> >>> >>> >>> On 07/13/2012 12:14 PM, Hong Zhang wrote: >>>> Michele : >>>> >>>> I need to use the ICC factorization as preconditioner, but I noticed that no parallel version is supported. >>>> Is that correct? >>>> Correct. >>>> >>>> If so, is there a work around, like building the preconditioner "by hand" by using PETSc functions? >>>> You may try block jacobi with icc in the blocks '-ksp_type cg -pc_type bjacobi -sub_pc_type icc' >>>> >>>> Hong >>>> >>> >>> >> > -------------- next part -------------- An HTML attachment was scrubbed... URL: From agrayver at gfz-potsdam.de Wed Jul 18 04:49:18 2012 From: agrayver at gfz-potsdam.de (Alexander Grayver) Date: Wed, 18 Jul 2012 11:49:18 +0200 Subject: [petsc-users] ubuntu default mpich gives error with parmetis Message-ID: <5006869E.6040609@gfz-potsdam.de> Hello PETSc team, Trying to configure petsc on ubuntu with installed mpich (apt-get install mpich2) like that: ./configure --with-petsc-arch=mpich-gcc-double-debug-f --with-fortran-interfaces=1 --download-mumps --download-metis --download-scalapack --download-blacs --with-scalar-type=real --download-blas-lapack --with-precision=double --download-parmetis --with-shared-libraries=1 Give me following error: /usr/bin/ld: /home/lib/petsc-3.3-p2/mpich-gcc-double-debug-f/lib/libmpich.a(allreduce.o): relocation R_X86_64_32S against `MPID_Op_builtin' can not be used when making a shared object; recompile with -fPIC /home/lib/petsc-3.3-p2/mpich-gcc-double-debug-f/lib/libmpich.a: could not read symbols: Bad value collect2: ld returned 1 exit status make[2]: *** [libparmetis/libparmetis.so] Error 1 make[1]: *** [libparmetis/CMakeFiles/parmetis.dir/all] Error 2 make: *** [all] Error 2 I'm not sure what should I recompile with -fPIC and would it help at all? This doesn't happen when using --download-mpich though. Thanks. 
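As the follow-up below explains, the linker error comes from a stale MPICH that an earlier --download-mpich run had left in the same arch directory. One way to avoid that kind of mix-up (the arch names and options here are only examples) is to give every MPI stack its own arch, for instance

    ./configure --with-petsc-arch=arch-ubuntu-mpich --with-mpi-dir=/usr ...
    ./configure --with-petsc-arch=arch-download-mpich --download-mpich ...

or simply to delete the old arch directory before reconfiguring with different MPI settings.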
-- Regards, Alexander From agrayver at gfz-potsdam.de Wed Jul 18 04:56:51 2012 From: agrayver at gfz-potsdam.de (Alexander Grayver) Date: Wed, 18 Jul 2012 11:56:51 +0200 Subject: [petsc-users] ubuntu default mpich gives error with parmetis In-Reply-To: <5006869E.6040609@gfz-potsdam.de> References: <5006869E.6040609@gfz-potsdam.de> Message-ID: <50068863.9060805@gfz-potsdam.de> Sorry, it was my fault. This error happened because I first tried to build PETSc with --download-mpich and then used same directory bu without this options thus previously downloaded and configured MPICH library was actually used although I implied PETSc to use ubuntu's MPICH. On 18.07.2012 11:49, Alexander Grayver wrote: > Hello PETSc team, > > Trying to configure petsc on ubuntu with installed mpich (apt-get > install mpich2) like that: > > ./configure --with-petsc-arch=mpich-gcc-double-debug-f > --with-fortran-interfaces=1 --download-mumps --download-metis > --download-scalapack --download-blacs --with-scalar-type=real > --download-blas-lapack --with-precision=double --download-parmetis > --with-shared-libraries=1 > > Give me following error: > > /usr/bin/ld: > /home/lib/petsc-3.3-p2/mpich-gcc-double-debug-f/lib/libmpich.a(allreduce.o): > relocation R_X86_64_32S against `MPID_Op_builtin' can not be used when > making a shared object; recompile with -fPIC > /home/lib/petsc-3.3-p2/mpich-gcc-double-debug-f/lib/libmpich.a: could > not read symbols: Bad value > collect2: ld returned 1 exit status > make[2]: *** [libparmetis/libparmetis.so] Error 1 > make[1]: *** [libparmetis/CMakeFiles/parmetis.dir/all] Error 2 > make: *** [all] Error 2 > > I'm not sure what should I recompile with -fPIC and would it help at all? > > This doesn't happen when using --download-mpich though. > > Thanks. > -- Regards, Alexander From Flo.44 at gmx.de Wed Jul 18 05:50:10 2012 From: Flo.44 at gmx.de (Florian Beck) Date: Wed, 18 Jul 2012 12:50:10 +0200 Subject: [petsc-users] How to use petsc in a dynamically loaded shared library? Message-ID: <20120718105010.148120@gmx.net> Hi, I want to use the petsc library in a shared library which I'm dynamically loading in my main program. Therefore I'm not using the functions to destroy a Vector such as VecDestroy. If I use the function I got an segmentation fault error. Of course I have a memory leak, because I'm not using the functions to destroy my vectors. Is there a simple example how to use the petsc-library in a program like the following pseudo-code: main{ for 1 to 10 do_something call function_to_solve_Ax=b_with_petsc do_something end } From jedbrown at mcs.anl.gov Wed Jul 18 06:27:52 2012 From: jedbrown at mcs.anl.gov (Jed Brown) Date: Wed, 18 Jul 2012 06:27:52 -0500 Subject: [petsc-users] How to use petsc in a dynamically loaded shared library? In-Reply-To: <20120718105010.148120@gmx.net> References: <20120718105010.148120@gmx.net> Message-ID: On Wed, Jul 18, 2012 at 5:50 AM, Florian Beck wrote: > Hi, > > I want to use the petsc library in a shared library which I'm dynamically > loading in my main program. Therefore I'm not using the functions to > destroy a Vector such as VecDestroy. If I use the function I got an > segmentation fault error. Send the full error message (including stack trace). > Of course I have a memory leak, because I'm not using the functions to > destroy my vectors. Is there a simple example how to use the petsc-library > in a program like the following pseudo-code: > Is MPI initialized before this is called? Did you plan to do this in parallel? 
Are you linking PETSc dynamically (as in, you dlopen and dlsym PETSc functions to call them, or perhaps you declare weak symbols in your code), linking your app-specific solver module (you call PETSc normally and use dlsym at a higher level), or something else? Remember to configure PETSc --with-dynamic-loading if necessary. The best way is to reuse data structures, but if you are going to destroy them each iteration, make sure you destroy all of them. Note that MPI cannot be initialized more than once, but presumably you aren't doing that because the rest of the app needs MPI to formulate the original problem. Note that doing everything dynamic is more work and defers more errors from compile and link time to run time. It is possible, but it takes more effort and some familiarity with requirements for dynamic loading. > > main{ > > for 1 to 10 > > do_something > > call function_to_solve_Ax=b_with_petsc > > do_something > > end > > } > -------------- next part -------------- An HTML attachment was scrubbed... URL: From popov at uni-mainz.de Wed Jul 18 07:54:56 2012 From: popov at uni-mainz.de (Anton Popov) Date: Wed, 18 Jul 2012 14:54:56 +0200 Subject: [petsc-users] block size Message-ID: <5006B220.6010205@uni-mainz.de> Dear petsc team, could you please tell me what's wrong with the attached example file? I run it on 4 processors with petsc-3.3-p1. What could error message "Local size 1000 not compatible with block size 3!" mean? I've another question related to this issue. What is the real purpose of PetscViewerBinarySkipInfo function? I see no reason to skip creating "info" file, because the file produced by the attached example seems to be correct. Moreover, similar block size error occurs in our code while reading file with multiple vectors, irrespective whether "info" file exists or not. Thank you, Anton -------------- next part -------------- An HTML attachment was scrubbed... 
URL: -------------- next part -------------- //---------------------------------------------------------------------------------------- #include #include //---------------------------------------------------------------------------------------- #undef __FUNCT__ #define __FUNCT__ "SaveVector" PetscErrorCode SaveVector(Vec V3, Vec V1) { PetscErrorCode ierr; PetscViewer view_out; char SaveFileName[PETSC_MAX_PATH_LEN]="vec.dat"; ierr = PetscViewerCreate(PETSC_COMM_WORLD, &view_out); CHKERRQ(ierr); ierr = PetscViewerSetType(view_out, PETSCVIEWERBINARY); CHKERRQ(ierr); ierr = PetscViewerFileSetMode(view_out, FILE_MODE_WRITE); CHKERRQ(ierr); ierr = PetscViewerFileSetName(view_out, SaveFileName); CHKERRQ(ierr); ierr = VecView(V1, view_out); CHKERRQ(ierr); ierr = VecView(V3, view_out); CHKERRQ(ierr); ierr = PetscViewerDestroy(&view_out); CHKERRQ(ierr); PetscFunctionReturn(0); } //---------------------------------------------------------------------------------------- #undef __FUNCT__ #define __FUNCT__ "ReadVector" PetscErrorCode ReadVector(Vec V3, Vec V1) { PetscErrorCode ierr; PetscViewer view_in; char LoadFileName[PETSC_MAX_PATH_LEN]="vec.dat"; printf("Reading vector\n"); ierr = PetscViewerBinaryOpen(PETSC_COMM_WORLD, LoadFileName, FILE_MODE_READ, &view_in); CHKERRQ(ierr); ierr = VecLoad(V1, view_in); CHKERRQ(ierr); ierr = VecLoad(V3, view_in); CHKERRQ(ierr); ierr = PetscViewerDestroy(&view_in); CHKERRQ(ierr); PetscFunctionReturn(0); } //---------------------------------------------------------------------------------------- #undef __FUNCT__ #define __FUNCT__ "main" int main(int argc,char **argv) { PetscErrorCode ierr; PetscInt dof, s = 1; PetscInt M = 20, N = 20, P = 20; PetscScalar cf = 1.0; DM da3, da1; Vec V3w, V3r, V1w, V1r; ierr = PetscInitialize(&argc, &argv, PETSC_NULL, PETSC_NULL); CHKERRQ(ierr); dof = 3; ierr = DMDACreate3d(PETSC_COMM_WORLD, DMDA_BOUNDARY_NONE, DMDA_BOUNDARY_NONE, DMDA_BOUNDARY_NONE, DMDA_STENCIL_BOX, M, N, P, PETSC_DECIDE, PETSC_DECIDE, PETSC_DECIDE, dof, s, PETSC_NULL, PETSC_NULL, PETSC_NULL, &da3); CHKERRQ(ierr); dof = 1; ierr = DMDACreate3d(PETSC_COMM_WORLD, DMDA_BOUNDARY_NONE, DMDA_BOUNDARY_NONE, DMDA_BOUNDARY_NONE, DMDA_STENCIL_BOX, M, N, P, PETSC_DECIDE, PETSC_DECIDE, PETSC_DECIDE, dof, s, PETSC_NULL, PETSC_NULL, PETSC_NULL, &da1); CHKERRQ(ierr); ierr = DMCreateGlobalVector(da3, &V3w); CHKERRQ(ierr); ierr = DMCreateGlobalVector(da3, &V3r); CHKERRQ(ierr); ierr = DMCreateGlobalVector(da1, &V1w); CHKERRQ(ierr); ierr = DMCreateGlobalVector(da1, &V1r); CHKERRQ(ierr); ierr = VecSet(V3w, cf); CHKERRQ(ierr); ierr = VecSet(V1w, cf); CHKERRQ(ierr); ierr = SaveVector(V3w, V1w); CHKERRQ(ierr); ierr = ReadVector(V3r, V1r); CHKERRQ(ierr); ierr = DMDestroy(&da3); CHKERRQ(ierr); ierr = DMDestroy(&da1); CHKERRQ(ierr); ierr = VecDestroy(&V3w); CHKERRQ(ierr); ierr = VecDestroy(&V3r); CHKERRQ(ierr); ierr = VecDestroy(&V1w); CHKERRQ(ierr); ierr = VecDestroy(&V1r); CHKERRQ(ierr); ierr = PetscFinalize(); CHKERRQ(ierr); return 0; } //---------------------------------------------------------------------------------------- From zonexo at gmail.com Wed Jul 18 08:33:31 2012 From: zonexo at gmail.com (TAY wee-beng) Date: Wed, 18 Jul 2012 15:33:31 +0200 Subject: [petsc-users] Not given explicit type for Subroutine DMCompositeGetEntries1 Message-ID: <5006BB2B.5040600@gmail.com> Hi, When compiling in vs2008 and Fortran, I always get the error msg: C:\Libs\petsc-3.2-dev_win32_cvf/include\finclude/ftn-custom/petscdmcomposite.h90(8) : Warning: This name has not been given an explicit type. 
[D1] Subroutine DMCompositeGetEntries1(dm1, d1,ierr) Is it serious? How can I eliminate it? Thanks -- Yours sincerely, TAY wee-beng From Flo.44 at gmx.de Wed Jul 18 08:52:24 2012 From: Flo.44 at gmx.de (Florian Beck) Date: Wed, 18 Jul 2012 15:52:24 +0200 Subject: [petsc-users] How to use petsc in a dynamically loaded shared library? In-Reply-To: References: <20120718105010.148120@gmx.net> Message-ID: <20120718135224.32940@gmx.net> Hi, > On Wed, Jul 18, 2012 at 5:50 AM, Florian Beck wrote: > > > Hi, > > > > I want to use the petsc library in a shared library which I'm > dynamically > > loading in my main program. Therefore I'm not using the functions to > > destroy a Vector such as VecDestroy. If I use the function I got an > > segmentation fault error. > > > Send the full error message (including stack trace). [0]PETSC ERROR: ------------------------------------------------------------------------ [0]PETSC ERROR: Caught signal number 11 SEGV: Segmentation Violation, probably memory access out of range [0]PETSC ERROR: Try option -start_in_debugger or -on_error_attach_debugger [0]PETSC ERROR: or see http://www.mcs.anl.gov/petsc/petsc-as/documentation/troubleshooting.html#Signal[0]PETSC ERROR: or try http://valgrind.org on GNU/linux and Apple Mac OS X to find memory corruption errors [0]PETSC ERROR: configure using --with-debugging=yes, recompile, link, and run [0]PETSC ERROR: to get more information on the crash. [0]PETSC ERROR: --------------------- Error Message ------------------------------------ [0]PETSC ERROR: Signal received! [0]PETSC ERROR: ------------------------------------------------------------------------ [0]PETSC ERROR: Petsc Release Version 3.1.0, Patch 3, Fri Jun 4 15:34:52 CDT 2010 [0]PETSC ERROR: See docs/changes/index.html for recent updates. [0]PETSC ERROR: See docs/faq.html for hints about trouble shooting. [0]PETSC ERROR: See docs/index.html for manual pages. [0]PETSC ERROR: ------------------------------------------------------------------------ [0]PETSC ERROR: -no_signal_handler,--with-dynamic-loading on a linux-gnu named riemann by beck Wed Jul 18 15:41:20 2012 [0]PETSC ERROR: Libraries linked from /home/hazelsct/repositories/petsc/linux-gnu-c-opt/lib [0]PETSC ERROR: Configure run at Wed Aug 4 15:00:14 2010 [0]PETSC ERROR: Configure options --with-shared --with-debugging=0 --useThreads 0 --with-clanguage=C++ --with-c-support --with-fortran-interfaces=1 --with-mpi-dir=/usr/lib/openmpi --with-mpi-shared=1 --with-blas-lib=-lblas --with-lapack-lib=-llapack --with-umfpack=1 --with-umfpack-include=/usr/include/suitesparse --with-umfpack-lib="[/usr/lib/libumfpack.so,/usr/lib/libamd.so]" --with-spooles=1 --with-spooles-include=/usr/include/spooles --with-spooles-lib=/usr/lib/libspooles.so --with-hypre=1 --with-hypre-dir=/usr --with-scotch=1 --with-scotch-include=/usr/include/scotch --with-scotch-lib=/usr/lib/libscotch.so --with-hdf5=1 --with-hdf5-dir=/usr [0]PETSC ERROR: ------------------------------------------------------------------------ [0]PETSC ERROR: User provided function() line 0 in unknown directory unknown file -------------------------------------------------------------------------- MPI_ABORT was invoked on rank 0 in communicator MPI_COMM_WORLD with errorcode 59. NOTE: invoking MPI_ABORT causes Open MPI to kill all MPI processes. You may or may not see output from other processes, depending on exactly when Open MPI kills them. 
-------------------------------------------------------------------------- > > Of course I have a memory leak, because I'm not using the functions to > > destroy my vectors. Is there a simple example how to use the > petsc-library > > in a program like the following pseudo-code: > > > > Is MPI initialized before this is called? Did you plan to do this in > parallel? Are you linking PETSc dynamically (as in, you dlopen and dlsym > PETSc functions to call them, or perhaps you declare weak symbols in your > code), linking your app-specific solver module (you call PETSc normally > and > use dlsym at a higher level), or something else? Remember to configure > PETSc --with-dynamic-loading if necessary. I plan to use it parallel, but first I want to calculate serial. I'm using dlopen to link my library. What happens if I call the PetsInitalize function several times? I call it in every function call. > > The best way is to reuse data structures, but if you are going to destroy > them each iteration, make sure you destroy all of them. Note that MPI > cannot be initialized more than once, but presumably you aren't doing that > because the rest of the app needs MPI to formulate the original problem. > > Note that doing everything dynamic is more work and defers more errors > from > compile and link time to run time. It is possible, but it takes more > effort > and some familiarity with requirements for dynamic loading. > > > > > > main{ > > > > for 1 to 10 > > > > do_something > > > > call function_to_solve_Ax=b_with_petsc > > > > do_something > > > > end > > > > } > > From hzhang at mcs.anl.gov Wed Jul 18 09:10:20 2012 From: hzhang at mcs.anl.gov (Hong Zhang) Date: Wed, 18 Jul 2012 09:10:20 -0500 Subject: [petsc-users] block size In-Reply-To: <5006B220.6010205@uni-mainz.de> References: <5006B220.6010205@uni-mainz.de> Message-ID: You use same datafile 'vec.dat' for writing two different vectors, V3 and V1: ierr = VecView(V1, view_out); ierr = VecView(V3, view_out); // here, vec.dat holds V3 Then read it in the order ierr = VecLoad(V1, view_in); //crash here because reading V3 into V1 ierr = VecLoad(V3, view_in); Comment out one of vectors, your code works fine. Hong On Wed, Jul 18, 2012 at 7:54 AM, Anton Popov wrote: > Dear petsc team, > > could you please tell me what's wrong with the attached example file? > I run it on 4 processors with petsc-3.3-p1. > > What could error message "Local size 1000 not compatible with block size > 3!" mean? > > I've another question related to this issue. What is the real purpose of > PetscViewerBinarySkipInfo function? > I see no reason to skip creating "info" file, because the file produced by > the attached example seems to be correct. > > Moreover, similar block size error occurs in our code while reading file > with multiple vectors, irrespective whether "info" file exists or not. > > Thank you, > > Anton > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From jedbrown at mcs.anl.gov Wed Jul 18 09:13:27 2012 From: jedbrown at mcs.anl.gov (Jed Brown) Date: Wed, 18 Jul 2012 09:13:27 -0500 Subject: [petsc-users] block size In-Reply-To: References: <5006B220.6010205@uni-mainz.de> Message-ID: The usual workaround is to give the vectors different prefixes. 
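A minimal C sketch of that prefix workaround, written against petsc-3.3 and assuming the two vectors come from DMDAs with dof=1 and dof=3 as in the example attached earlier in this thread; the prefixes "v1_" and "v3_" and the helper names SaveTwoVecs/LoadTwoVecs are illustrative choices, not PETSc names:

#include <petscvec.h>

PetscErrorCode SaveTwoVecs(Vec V3, Vec V1, const char fname[])
{
  PetscErrorCode ierr;
  PetscViewer    viewer;

  PetscFunctionBegin;
  /* distinct prefixes keep the block-size hints recorded for each vector separate */
  ierr = PetscObjectSetOptionsPrefix((PetscObject)V1,"v1_");CHKERRQ(ierr);
  ierr = PetscObjectSetOptionsPrefix((PetscObject)V3,"v3_");CHKERRQ(ierr);
  ierr = PetscViewerBinaryOpen(PETSC_COMM_WORLD,fname,FILE_MODE_WRITE,&viewer);CHKERRQ(ierr);
  ierr = VecView(V1,viewer);CHKERRQ(ierr);
  ierr = VecView(V3,viewer);CHKERRQ(ierr);
  ierr = PetscViewerDestroy(&viewer);CHKERRQ(ierr);
  PetscFunctionReturn(0);
}

PetscErrorCode LoadTwoVecs(Vec V3, Vec V1, const char fname[])
{
  PetscErrorCode ierr;
  PetscViewer    viewer;

  PetscFunctionBegin;
  /* set the same prefixes on the destination vectors and read in the order written */
  ierr = PetscObjectSetOptionsPrefix((PetscObject)V1,"v1_");CHKERRQ(ierr);
  ierr = PetscObjectSetOptionsPrefix((PetscObject)V3,"v3_");CHKERRQ(ierr);
  ierr = PetscViewerBinaryOpen(PETSC_COMM_WORLD,fname,FILE_MODE_READ,&viewer);CHKERRQ(ierr);
  ierr = VecLoad(V1,viewer);CHKERRQ(ierr);
  ierr = VecLoad(V3,viewer);CHKERRQ(ierr);
  ierr = PetscViewerDestroy(&viewer);CHKERRQ(ierr);
  PetscFunctionReturn(0);
}

With distinct prefixes, the block-size hint stored in the vec.dat.info companion file is keyed to each vector's options prefix, so VecLoad() should no longer try to apply the dof-3 block size to the dof-1 vector.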
On Wed, Jul 18, 2012 at 9:10 AM, Hong Zhang wrote: > You use same datafile 'vec.dat' for writing two different vectors, > V3 and V1: > > ierr = VecView(V1, view_out); > ierr = VecView(V3, view_out); > > // here, vec.dat holds V3 > > Then read it in the order > ierr = VecLoad(V1, view_in); > //crash here because reading V3 into V1 > > ierr = VecLoad(V3, view_in); > > Comment out one of vectors, your code works fine. > > Hong > > > > > On Wed, Jul 18, 2012 at 7:54 AM, Anton Popov wrote: > >> Dear petsc team, >> >> could you please tell me what's wrong with the attached example file? >> I run it on 4 processors with petsc-3.3-p1. >> >> What could error message "Local size 1000 not compatible with block size >> 3!" mean? >> >> I've another question related to this issue. What is the real purpose of >> PetscViewerBinarySkipInfo function? >> I see no reason to skip creating "info" file, because the file produced >> by the attached example seems to be correct. >> >> Moreover, similar block size error occurs in our code while reading file >> with multiple vectors, irrespective whether "info" file exists or not. >> >> Thank you, >> >> Anton >> >> > -------------- next part -------------- An HTML attachment was scrubbed... URL: From jedbrown at mcs.anl.gov Wed Jul 18 09:15:21 2012 From: jedbrown at mcs.anl.gov (Jed Brown) Date: Wed, 18 Jul 2012 09:15:21 -0500 Subject: [petsc-users] How to use petsc in a dynamically loaded shared library? In-Reply-To: <20120718135224.32940@gmx.net> References: <20120718105010.148120@gmx.net> <20120718135224.32940@gmx.net> Message-ID: On Wed, Jul 18, 2012 at 8:52 AM, Florian Beck wrote: > [0]PETSC ERROR: > ------------------------------------------------------------------------ > [0]PETSC ERROR: Caught signal number 11 SEGV: Segmentation Violation, > probably memory access out of range > [0]PETSC ERROR: Try option -start_in_debugger or -on_error_attach_debugger > [0]PETSC ERROR: or see > http://www.mcs.anl.gov/petsc/petsc-as/documentation/troubleshooting.html#Signal[0]PETSCERROR: or try > http://valgrind.org on GNU/linux and Apple Mac OS X to find memory > corruption errors > [0]PETSC ERROR: configure using --with-debugging=yes, recompile, link, and > run > [0]PETSC ERROR: to get more information on the crash. > [0]PETSC ERROR: --------------------- Error Message > ------------------------------------ > [0]PETSC ERROR: Signal received! > [0]PETSC ERROR: > ------------------------------------------------------------------------ > [0]PETSC ERROR: Petsc Release Version 3.1.0, Patch 3, Fri Jun 4 15:34:52 > CDT 2010 > [0]PETSC ERROR: See docs/changes/index.html for recent updates. > [0]PETSC ERROR: See docs/faq.html for hints about trouble shooting. > [0]PETSC ERROR: See docs/index.html for manual pages. 
> [0]PETSC ERROR: > ------------------------------------------------------------------------ > [0]PETSC ERROR: -no_signal_handler,--with-dynamic-loading on a linux-gnu > named riemann by beck Wed Jul 18 15:41:20 2012 > [0]PETSC ERROR: Libraries linked from > /home/hazelsct/repositories/petsc/linux-gnu-c-opt/lib > [0]PETSC ERROR: Configure run at Wed Aug 4 15:00:14 2010 > [0]PETSC ERROR: Configure options --with-shared --with-debugging=0 > --useThreads 0 --with-clanguage=C++ --with-c-support > --with-fortran-interfaces=1 --with-mpi-dir=/usr/lib/openmpi > --with-mpi-shared=1 --with-blas-lib=-lblas --with-lapack-lib=-llapack > --with-umfpack=1 --with-umfpack-include=/usr/include/suitesparse > --with-umfpack-lib="[/usr/lib/libumfpack.so,/usr/lib/libamd.so]" > --with-spooles=1 --with-spooles-include=/usr/include/spooles > --with-spooles-lib=/usr/lib/libspooles.so --with-hypre=1 > --with-hypre-dir=/usr --with-scotch=1 > --with-scotch-include=/usr/include/scotch > --with-scotch-lib=/usr/lib/libscotch.so --with-hdf5=1 --with-hdf5-dir=/usr > [0]PETSC ERROR: > ------------------------------------------------------------------------ > [0]PETSC ERROR: User provided function() line 0 in unknown directory > unknown file > -------------------------------------------------------------------------- > MPI_ABORT was invoked on rank 0 in communicator MPI_COMM_WORLD > with errorcode 59. > This is in your code. Run in a debugger to find out what's wrong. > > NOTE: invoking MPI_ABORT causes Open MPI to kill all MPI processes. > You may or may not see output from other processes, depending on > exactly when Open MPI kills them. > -------------------------------------------------------------------------- > > > > > Of course I have a memory leak, because I'm not using the functions to > > > destroy my vectors. Is there a simple example how to use the > > petsc-library > > > in a program like the following pseudo-code: > > > > > > > Is MPI initialized before this is called? Did you plan to do this in > > parallel? Are you linking PETSc dynamically (as in, you dlopen and dlsym > > PETSc functions to call them, or perhaps you declare weak symbols in your > > code), linking your app-specific solver module (you call PETSc normally > > and > > use dlsym at a higher level), or something else? Remember to configure > > PETSc --with-dynamic-loading if necessary. > > I plan to use it parallel, but first I want to calculate serial. I'm using > dlopen to link my library. > > What happens if I call the PetsInitalize function several times? I call it > in every function call. > You can call it multiple times, but MPI_Init() can only be called once. We usually recommend that people only call PetscInitialize once (the logging/profiling/debugging infrastructure is more useful that way). -------------- next part -------------- An HTML attachment was scrubbed... URL: From jroman at dsic.upv.es Wed Jul 18 10:28:22 2012 From: jroman at dsic.upv.es (Jose E. Roman) Date: Wed, 18 Jul 2012 17:28:22 +0200 Subject: [petsc-users] request suggestions for most appropriate eigenvalue solver In-Reply-To: References: <62F26FC3-EF61-4A93-8749-19913849D026@dsic.upv.es> Message-ID: <530E2C7E-C650-4282-8AA5-F491DBDEFF12@dsic.upv.es> El 17/07/2012, a las 19:56, Giacomo Mulas escribi?: > Hello Jose. > > Some months ago, I wrote to the petsc-users mailing list asking if it would > be possible to use an iterative solver in slepc which converges eigenvectors > in order of maximum projection on a given basis vector. 
Back then you told > me you might look into it in some time and let me know. Since some time > passed, and you might quite understandably not remember about this, I am > quoting that email exchange below. > > In the meanwhile, I found that there the Davidson algorithm and its derivatives (e.g. block Davidson) appears to behave as I would like, or close to it. > You probably know them, but I send you a couple of references in any case: > > E. Davidson, J. Comput. Phys. 17, 87 ?1975?. > > E. Davidson, Comput. Phys. Commun. 53, 49 ?1989?. > > F. Ribeiro, C. Iung, and C. Leforestier, Chem. Phys. Lett. 362, 199 > ?2002?. > > F. Ribeiro, C. Iung, and C. Leforestier, J. Theor. Comput. Chem. 2, 609 > ?2003?. > > Any hope of implementing something like that (or some other algorithm to > obtain the same behaviour) in Slepc/Petsc any time soon? > > Thanks in advance, bye > Giacomo Mulas Yes, we will try to have this included in the release. I will come back to you in a week or so. Jose From balay at mcs.anl.gov Wed Jul 18 11:09:58 2012 From: balay at mcs.anl.gov (Satish Balay) Date: Wed, 18 Jul 2012 11:09:58 -0500 (CDT) Subject: [petsc-users] Parallel Incomplete Choleski Factorization In-Reply-To: <5005E701.4030102@uci.edu> References: <50006A51.5090207@uci.edu> <5005A458.2020905@uci.edu> <5005B0B0.8030007@uci.edu> <86600BC0-482F-4E83-9A88-4F0BE1E28406@mcs.anl.gov> <5005E701.4030102@uci.edu> Message-ID: Its now useable on Intrepid with PETSC_DIR=/soft/apps/libraries/petsc/3.3-p2/xl-opt Check /soft/apps/libraries/petsc/README Satish On Tue, 17 Jul 2012, Michele Rosso wrote: > Thank a lot. > > Please let me know when version 3.3 is available. > > Michele > > On 07/17/2012 12:13 PM, Barry Smith wrote: > > > > Please update to petsc-3.3. petsc-3.1 is too old. > > > I would do that but the version installed on the platform (Intrepid > > > at ALCF) I am working on is 3.1-p2. > > Satish, > > > > Please fix this. > > > > Thanks > > > > Barry > > > > On Jul 17, 2012, at 1:36 PM, Michele Rosso wrote: > > > > > On 07/17/2012 11:03 AM, Hong Zhang wrote: > > > > Michele : > > > > > > > > I have some problems with the block jacobi preconditioner. > > > > I am solving a 3D Poisson equation with periodic BCs, discretized by > > > > using finite differences (7-points stencil). > > > > Thus the problem is singular and the nullspace has to be removed. > > > > > > > > For Poisson equations, multigrid precondition should be the method of > > > > choice. > > > Thank you for the suggestion. I do not have any experience with multigrid, > > > but I will try. > > > > If I solve with the PCG method + JACOBI preconditioner the results are > > > > fine. > > > > If I use PCG + Block Jacobi preconditioner + ICC on each block the > > > > results are fine on the majority of the processors, > > > > but on few of them the error is very large. > > > > How do you know " few of them"? > > > Basically the solution is not correct on some grid points, say 6 grid > > > nodes out of 64^3. The 6 grid nodes with problems belongs to 2 of the 128 > > > processors > > > I am using. > > > > Do you have any idea/suggestions on how to fix this problem? > > > > This is the fragment of code I am using ( petsc 3.1 and Fortran 90): > > > > Please update to petsc-3.3. petsc-3.1 is too old. > > > I would do that but the version installed on the platform (Intrepid > > > at ALCF) I am working on is 3.1-p2. 
> > > > > > > PetscErrorCode petsc_err > > > > Mat A > > > > PC pc, subpc > > > > KSP ksp > > > > KSP subksp(1) > > > > : > > > > : > > > > : > > > > call KSPCreate(PETSC_COMM_WORLD,ksp,petsc_err) > > > > call KSPSetOperators(ksp,A,A,DIFFERENT_NONZERO_PATTERN,petsc_err) > > > > call KSPSetType(ksp,KSPCG, ) !the default type is gmres. I guess you > > > > want CG > > > > > > > > call KSPGetPC(ksp,pc,petsc_err) > > > > call PCSetType(pc,PCBJACOBI,petsc_err) > > > > ! call KSPSetUp(ksp,petsc_err) call this at the end > > > > ! KSP context for each single block > > > > call > > > > PCBJacobiGetSubKSP(pc,PETSC_NULL_INTEGER,PETSC_NULL_INTEGER,subksp(1),petsc_err) > > > > call KSPGetPC(subksp(1),subpc,petsc_err) > > > > call PCSetType(subpc,PCICC,petsc_err) > > > > call KSPSetType(subksp(1),KSPPREONLY petsc_err) > > > > call KSPSetTolerances(subksp(1),tol > > > > ,PETSC_DEFAULT_DOUBLE_PRECISION,& > > > > & > > > > PETSC_DEFAULT_DOUBLE_PRECISION,PETSC_DEFAULT_INTEGER,petsc_err) > > > > > > > > ! Remove nullspace from the singular system (Check PETSC_NULL) > > > > call > > > > MatNullSpaceCreate(MPI_COMM_WORLD,PETSC_TRUE,0,PETSC_NULL,nullspace,petsc_err) > > > > call KSPSetNullSpace(ksp, nullspace, petsc_err) > > > > call MatNullSpaceRemove(nullspace, b, PETSC_NULL,petsc_err) > > > > > > > > call KSPSolve(ksp,b,x,petsc_err) > > > > > > > > I modified your code slightly. All these options can be provided at > > > > runtime: > > > > '-ksp_type cg -pc_type bjacobi -sub_pc_type icc' > > > > > > > > Hong > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > On 07/13/2012 12:14 PM, Hong Zhang wrote: > > > > > Michele : > > > > > > > > > > I need to use the ICC factorization as preconditioner, but I noticed > > > > > that no parallel version is supported. > > > > > Is that correct? > > > > > Correct. > > > > > If so, is there a work around, like building the preconditioner "by > > > > > hand" by using PETSc functions? > > > > > You may try block jacobi with icc in the blocks '-ksp_type cg > > > > > -pc_type bjacobi -sub_pc_type icc' > > > > > > > > > > Hong > > > > > > > > > > > > > > > > > > > > > From john.fettig at gmail.com Wed Jul 18 12:09:01 2012 From: john.fettig at gmail.com (John Fettig) Date: Wed, 18 Jul 2012 13:09:01 -0400 Subject: [petsc-users] MatRARt Message-ID: Should MatRARt work for SeqAIJ matrices? I don't understand what is wrong with this test code: #include #undef __FUNCT__ #define __FUNCT__ "main" int main(int argc,char **args) { Mat A,R,C; PetscInt ai[5] = {0,2,4,6,8}; PetscInt aj[8] = {0,3,1,2,1,2,0,3}; PetscReal av[8] = {1,2,3,4,5,6,7,8}; PetscInt ri[3] = {0,1,2}; PetscInt rj[2] = {1,2}; PetscReal rv[2] = {1,1}; PetscErrorCode ierr; PetscInitialize(&argc,&args,(char *)0,(char *)0); ierr = MatCreateSeqAIJWithArrays(PETSC_COMM_WORLD,4,4,ai,aj,av,&A);CHKERRQ(ierr); ierr = MatCreateSeqAIJWithArrays(PETSC_COMM_WORLD,2,4,ri,rj,rv,&R);CHKERRQ(ierr); //this is ok: //ierr = MatMatMult(R,A,MAT_INITIAL_MATRIX,1.0,&C);CHKERRQ(ierr); //this is not: ierr = MatRARt(A,R,MAT_INITIAL_MATRIX,1.0,&C);CHKERRQ(ierr); ierr = PetscFinalize(); return 0; } When I run it, it says: [0]PETSC ERROR: --------------------- Error Message ------------------------------------ [0]PETSC ERROR: No support for this operation for this object type! [0]PETSC ERROR: MatGetColumnIJ() not supported for matrix type seqaij! 
[0]PETSC ERROR: ------------------------------------------------------------------------ [0]PETSC ERROR: Petsc Release Version 3.3.0, Patch 2, Fri Jul 13 15:42:00 CDT 2012 [0]PETSC ERROR: See docs/changes/index.html for recent updates. [0]PETSC ERROR: See docs/faq.html for hints about trouble shooting. [0]PETSC ERROR: See docs/index.html for manual pages. [0]PETSC ERROR: ------------------------------------------------------------------------ [0]PETSC ERROR: ./rart_simple on a linux-deb named lagrange.tomato by jfe Wed Jul 18 13:07:33 2012 [0]PETSC ERROR: Libraries linked from /home/jfe/local/petsc-3.3-p2/linux-debug/lib [0]PETSC ERROR: Configure run at Wed Jul 18 13:00:59 2012 [0]PETSC ERROR: Configure options --with-x=0 --download-f-blas-lapack=1 --with-mpi=1 --with-mpi-shared=1 --with-mpi=1 --download-mpich=1 --with-debugging=1 --with-gnu-compilers=yes --with-shared-libraries=1 --with-c++-support --with-clanguage=C [0]PETSC ERROR: ------------------------------------------------------------------------ [0]PETSC ERROR: MatTransposeColoringCreate_SeqAIJ() line 1294 in /home/jfe/local/petsc-3.3-p2/src/mat/impls/aij/seq/matmatmult.c [0]PETSC ERROR: MatTransposeColoringCreate() line 9318 in /home/jfe/local/petsc-3.3-p2/src/mat/interface/matrix.c [0]PETSC ERROR: MatRARtSymbolic_SeqAIJ_SeqAIJ() line 342 in /home/jfe/local/petsc-3.3-p2/src/mat/impls/aij/seq/matrart.c [0]PETSC ERROR: MatRARt_SeqAIJ_SeqAIJ() line 541 in /home/jfe/local/petsc-3.3-p2/src/mat/impls/aij/seq/matrart.c [0]PETSC ERROR: MatRARt() line 8405 in /home/jfe/local/petsc-3.3-p2/src/mat/interface/matrix.c [0]PETSC ERROR: main() line 25 in src/ksp/ksp/examples/tutorials/rart_simple.c What am I doing wrong? Thanks, John From mrosso at uci.edu Wed Jul 18 12:35:07 2012 From: mrosso at uci.edu (Michele Rosso) Date: Wed, 18 Jul 2012 10:35:07 -0700 Subject: [petsc-users] Parallel Incomplete Choleski Factorization In-Reply-To: References: <50006A51.5090207@uci.edu> <5005A458.2020905@uci.edu> <5005B0B0.8030007@uci.edu> <86600BC0-482F-4E83-9A88-4F0BE1E28406@mcs.anl.gov> <5005E701.4030102@uci.edu> Message-ID: <5006F3CB.9080208@uci.edu> Thank you. Michele On 07/18/2012 09:09 AM, Satish Balay wrote: > Its now useable on Intrepid with PETSC_DIR=/soft/apps/libraries/petsc/3.3-p2/xl-opt > > Check /soft/apps/libraries/petsc/README > > Satish > > On Tue, 17 Jul 2012, Michele Rosso wrote: > >> Thank a lot. >> >> Please let me know when version 3.3 is available. >> >> Michele >> >> On 07/17/2012 12:13 PM, Barry Smith wrote: >>>>> Please update to petsc-3.3. petsc-3.1 is too old. >>>> I would do that but the version installed on the platform (Intrepid >>>> at ALCF) I am working on is 3.1-p2. >>> Satish, >>> >>> Please fix this. >>> >>> Thanks >>> >>> Barry >>> >>> On Jul 17, 2012, at 1:36 PM, Michele Rosso wrote: >>> >>>> On 07/17/2012 11:03 AM, Hong Zhang wrote: >>>>> Michele : >>>>> >>>>> I have some problems with the block jacobi preconditioner. >>>>> I am solving a 3D Poisson equation with periodic BCs, discretized by >>>>> using finite differences (7-points stencil). >>>>> Thus the problem is singular and the nullspace has to be removed. >>>>> >>>>> For Poisson equations, multigrid precondition should be the method of >>>>> choice. >>>> Thank you for the suggestion. I do not have any experience with multigrid, >>>> but I will try. >>>>> If I solve with the PCG method + JACOBI preconditioner the results are >>>>> fine. 
>>>>> If I use PCG + Block Jacobi preconditioner + ICC on each block the >>>>> results are fine on the majority of the processors, >>>>> but on few of them the error is very large. >>>>> How do you know " few of them"? >>>> Basically the solution is not correct on some grid points, say 6 grid >>>> nodes out of 64^3. The 6 grid nodes with problems belongs to 2 of the 128 >>>> processors >>>> I am using. >>>>> Do you have any idea/suggestions on how to fix this problem? >>>>> This is the fragment of code I am using ( petsc 3.1 and Fortran 90): >>>>> Please update to petsc-3.3. petsc-3.1 is too old. >>>> I would do that but the version installed on the platform (Intrepid >>>> at ALCF) I am working on is 3.1-p2. >>>> >>>>> PetscErrorCode petsc_err >>>>> Mat A >>>>> PC pc, subpc >>>>> KSP ksp >>>>> KSP subksp(1) >>>>> : >>>>> : >>>>> : >>>>> call KSPCreate(PETSC_COMM_WORLD,ksp,petsc_err) >>>>> call KSPSetOperators(ksp,A,A,DIFFERENT_NONZERO_PATTERN,petsc_err) >>>>> call KSPSetType(ksp,KSPCG, ) !the default type is gmres. I guess you >>>>> want CG >>>>> >>>>> call KSPGetPC(ksp,pc,petsc_err) >>>>> call PCSetType(pc,PCBJACOBI,petsc_err) >>>>> ! call KSPSetUp(ksp,petsc_err) call this at the end >>>>> ! KSP context for each single block >>>>> call >>>>> PCBJacobiGetSubKSP(pc,PETSC_NULL_INTEGER,PETSC_NULL_INTEGER,subksp(1),petsc_err) >>>>> call KSPGetPC(subksp(1),subpc,petsc_err) >>>>> call PCSetType(subpc,PCICC,petsc_err) >>>>> call KSPSetType(subksp(1),KSPPREONLY petsc_err) >>>>> call KSPSetTolerances(subksp(1),tol >>>>> ,PETSC_DEFAULT_DOUBLE_PRECISION,& >>>>> & >>>>> PETSC_DEFAULT_DOUBLE_PRECISION,PETSC_DEFAULT_INTEGER,petsc_err) >>>>> >>>>> ! Remove nullspace from the singular system (Check PETSC_NULL) >>>>> call >>>>> MatNullSpaceCreate(MPI_COMM_WORLD,PETSC_TRUE,0,PETSC_NULL,nullspace,petsc_err) >>>>> call KSPSetNullSpace(ksp, nullspace, petsc_err) >>>>> call MatNullSpaceRemove(nullspace, b, PETSC_NULL,petsc_err) >>>>> >>>>> call KSPSolve(ksp,b,x,petsc_err) >>>>> >>>>> I modified your code slightly. All these options can be provided at >>>>> runtime: >>>>> '-ksp_type cg -pc_type bjacobi -sub_pc_type icc' >>>>> >>>>> Hong >>>>> >>>>> >>>>> >>>>> >>>>> >>>>> >>>>> >>>>> >>>>> >>>>> On 07/13/2012 12:14 PM, Hong Zhang wrote: >>>>>> Michele : >>>>>> >>>>>> I need to use the ICC factorization as preconditioner, but I noticed >>>>>> that no parallel version is supported. >>>>>> Is that correct? >>>>>> Correct. >>>>>> If so, is there a work around, like building the preconditioner "by >>>>>> hand" by using PETSc functions? >>>>>> You may try block jacobi with icc in the blocks '-ksp_type cg >>>>>> -pc_type bjacobi -sub_pc_type icc' >>>>>> >>>>>> Hong >>>>>> >>>>> >> >> -------------- next part -------------- An HTML attachment was scrubbed... URL: From popov at uni-mainz.de Wed Jul 18 12:39:44 2012 From: popov at uni-mainz.de (Anton Popov) Date: Wed, 18 Jul 2012 19:39:44 +0200 Subject: [petsc-users] block size In-Reply-To: References: <5006B220.6010205@uni-mainz.de> Message-ID: <5006F4E0.6060409@uni-mainz.de> "Comment out one of vectors, your code works fine." I wouldn't say "fine" because I need to store both vectors. So what is the workaround? - I'm not allowed to mix vectors with different block size in the same file? - I'm not allowed to write more then ONE vector in a file? 
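A sketch of the other fix that comes up in this thread, suppressing the .info companion file altogether with PetscViewerBinarySkipInfo(); the routine below mirrors the SaveVector() function from the example attached earlier (only the name SaveVectorNoInfo, the include line, and the SkipInfo call differ from that routine, and behaviour should be checked against the PetscViewerBinarySkipInfo man page for your PETSc version):

#include <petscvec.h>

PetscErrorCode SaveVectorNoInfo(Vec V3, Vec V1)
{
  PetscErrorCode ierr;
  PetscViewer    view_out;
  char           SaveFileName[PETSC_MAX_PATH_LEN]="vec.dat";

  PetscFunctionBegin;
  ierr = PetscViewerCreate(PETSC_COMM_WORLD, &view_out); CHKERRQ(ierr);
  ierr = PetscViewerSetType(view_out, PETSCVIEWERBINARY); CHKERRQ(ierr);
  /* request that no vec.dat.info file be written, so a block-size option
     recorded for one vector cannot be picked up when loading the other */
  ierr = PetscViewerBinarySkipInfo(view_out); CHKERRQ(ierr);
  ierr = PetscViewerFileSetMode(view_out, FILE_MODE_WRITE); CHKERRQ(ierr);
  ierr = PetscViewerFileSetName(view_out, SaveFileName); CHKERRQ(ierr);
  ierr = VecView(V1, view_out); CHKERRQ(ierr);   /* write order ...            */
  ierr = VecView(V3, view_out); CHKERRQ(ierr);   /* ... must match read order  */
  ierr = PetscViewerDestroy(&view_out); CHKERRQ(ierr);
  PetscFunctionReturn(0);
}

The read side can stay as in the original ReadVector(): the destination vectors created from the DMDAs already carry the correct layout and block size, so VecLoad() should work unchanged.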
On 7/18/12 4:10 PM, Hong Zhang wrote: > You use same datafile 'vec.dat' for writing two different vectors, > V3 and V1: > > ierr = VecView(V1, view_out); > ierr = VecView(V3, view_out); > > // here, vec.dat holds V3 > > Then read it in the order > ierr = VecLoad(V1, view_in); > //crash here because reading V3 into V1 > > ierr = VecLoad(V3, view_in); > > Comment out one of vectors, your code works fine. > > Hong > > > > > On Wed, Jul 18, 2012 at 7:54 AM, Anton Popov > wrote: > > Dear petsc team, > > could you please tell me what's wrong with the attached example file? > I run it on 4 processors with petsc-3.3-p1. > > What could error message "Local size 1000 not compatible with > block size 3!" mean? > > I've another question related to this issue. What is the real > purpose of PetscViewerBinarySkipInfo function? > I see no reason to skip creating "info" file, because the file > produced by the attached example seems to be correct. > > Moreover, similar block size error occurs in our code while > reading file with multiple vectors, irrespective whether "info" > file exists or not. > > Thank you, > > Anton > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From bourdin at lsu.edu Wed Jul 18 12:43:00 2012 From: bourdin at lsu.edu (Blaise Bourdin) Date: Wed, 18 Jul 2012 12:43:00 -0500 Subject: [petsc-users] block size In-Reply-To: <5006F4E0.6060409@uni-mainz.de> References: <5006B220.6010205@uni-mainz.de> <5006F4E0.6060409@uni-mainz.de> Message-ID: <77236A2B-D2AF-40F4-9BF0-A694B8F93151@lsu.edu> search for an older thread on this list possible fixes are to either delete / not create the .info files or to assign different prefix to each vec B On Jul 18, 2012, at 12:39 PM, Anton Popov wrote: > "Comment out one of vectors, your code works fine." > I wouldn't say "fine" because I need to store both vectors. > > So what is the workaround? > > - I'm not allowed to mix vectors with different block size in the same file? > - I'm not allowed to write more then ONE vector in a file? > > > On 7/18/12 4:10 PM, Hong Zhang wrote: >> You use same datafile 'vec.dat' for writing two different vectors, >> V3 and V1: >> >> ierr = VecView(V1, view_out); >> ierr = VecView(V3, view_out); >> >> // here, vec.dat holds V3 >> >> Then read it in the order >> ierr = VecLoad(V1, view_in); >> //crash here because reading V3 into V1 >> >> ierr = VecLoad(V3, view_in); >> >> Comment out one of vectors, your code works fine. >> >> Hong >> >> >> >> >> On Wed, Jul 18, 2012 at 7:54 AM, Anton Popov wrote: >> Dear petsc team, >> >> could you please tell me what's wrong with the attached example file? >> I run it on 4 processors with petsc-3.3-p1. >> >> What could error message "Local size 1000 not compatible with block size 3!" mean? >> >> I've another question related to this issue. What is the real purpose of PetscViewerBinarySkipInfo function? >> I see no reason to skip creating "info" file, because the file produced by the attached example seems to be correct. >> >> Moreover, similar block size error occurs in our code while reading file with multiple vectors, irrespective whether "info" file exists or not. >> >> Thank you, >> >> Anton >> >> > > -- Department of Mathematics and Center for Computation & Technology Louisiana State University, Baton Rouge, LA 70803, USA Tel. +1 (225) 578 1612, Fax +1 (225) 578 4276 http://www.math.lsu.edu/~bourdin -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From flo.44 at gmx.de Wed Jul 18 14:12:38 2012 From: flo.44 at gmx.de (Florian) Date: Wed, 18 Jul 2012 21:12:38 +0200 Subject: [petsc-users] How to use petsc in a dynamically loaded shared library? In-Reply-To: References: <20120718105010.148120@gmx.net> <20120718135224.32940@gmx.net> Message-ID: <1342638758.2371.11.camel@F-UB> Am Mittwoch, den 18.07.2012, 09:15 -0500 schrieb Jed Brown: > On Wed, Jul 18, 2012 at 8:52 AM, Florian Beck wrote: > [0]PETSC ERROR: > ------------------------------------------------------------------------ > [0]PETSC ERROR: Caught signal number 11 SEGV: Segmentation > Violation, probably memory access out of range > [0]PETSC ERROR: Try option -start_in_debugger or > -on_error_attach_debugger > [0]PETSC ERROR: or see > http://www.mcs.anl.gov/petsc/petsc-as/documentation/troubleshooting.html#Signal[0]PETSC ERROR: or try http://valgrind.org on GNU/linux and Apple Mac OS X to find memory corruption errors > [0]PETSC ERROR: configure using --with-debugging=yes, > recompile, link, and run > [0]PETSC ERROR: to get more information on the crash. > [0]PETSC ERROR: --------------------- Error Message > ------------------------------------ > [0]PETSC ERROR: Signal received! > [0]PETSC ERROR: > ------------------------------------------------------------------------ > [0]PETSC ERROR: Petsc Release Version 3.1.0, Patch 3, Fri Jun > 4 15:34:52 CDT 2010 > [0]PETSC ERROR: See docs/changes/index.html for recent > updates. > [0]PETSC ERROR: See docs/faq.html for hints about trouble > shooting. > [0]PETSC ERROR: See docs/index.html for manual pages. > [0]PETSC ERROR: > ------------------------------------------------------------------------ > [0]PETSC ERROR: -no_signal_handler,--with-dynamic-loading on a > linux-gnu named riemann by beck Wed Jul 18 15:41:20 2012 > [0]PETSC ERROR: Libraries linked > from /home/hazelsct/repositories/petsc/linux-gnu-c-opt/lib > [0]PETSC ERROR: Configure run at Wed Aug 4 15:00:14 2010 > [0]PETSC ERROR: Configure options --with-shared > --with-debugging=0 --useThreads 0 --with-clanguage=C++ > --with-c-support --with-fortran-interfaces=1 > --with-mpi-dir=/usr/lib/openmpi --with-mpi-shared=1 > --with-blas-lib=-lblas --with-lapack-lib=-llapack > --with-umfpack=1 > --with-umfpack-include=/usr/include/suitesparse > --with-umfpack-lib="[/usr/lib/libumfpack.so,/usr/lib/libamd.so]" --with-spooles=1 --with-spooles-include=/usr/include/spooles --with-spooles-lib=/usr/lib/libspooles.so --with-hypre=1 --with-hypre-dir=/usr --with-scotch=1 --with-scotch-include=/usr/include/scotch --with-scotch-lib=/usr/lib/libscotch.so --with-hdf5=1 --with-hdf5-dir=/usr > [0]PETSC ERROR: > ------------------------------------------------------------------------ > [0]PETSC ERROR: User provided function() line 0 in unknown > directory unknown file > -------------------------------------------------------------------------- > MPI_ABORT was invoked on rank 0 in communicator MPI_COMM_WORLD > with errorcode 59. > > > This is in your code. Run in a debugger to find out what's wrong. > > I have used ddd and when I step into the VecDestroy function I get the signal 11. I have three Vectors and it's only possible to destroy one of them. Do I have consider something special before I destroy them? I read values from the Vector which I'm able to destroy. > NOTE: invoking MPI_ABORT causes Open MPI to kill all MPI > processes. > You may or may not see output from other processes, depending > on > exactly when Open MPI kills them. 
> -------------------------------------------------------------------------- > > > > > Of course I have a memory leak, because I'm not using the > functions to > > > destroy my vectors. Is there a simple example how to use > the > > petsc-library > > > in a program like the following pseudo-code: > > > > > > > Is MPI initialized before this is called? Did you plan to do > this in > > parallel? Are you linking PETSc dynamically (as in, you > dlopen and dlsym > > PETSc functions to call them, or perhaps you declare weak > symbols in your > > code), linking your app-specific solver module (you call > PETSc normally > > and > > use dlsym at a higher level), or something else? Remember to > configure > > PETSc --with-dynamic-loading if necessary. > > > I plan to use it parallel, but first I want to calculate > serial. I'm using dlopen to link my library. > > What happens if I call the PetsInitalize function several > times? I call it in every function call. > > You can call it multiple times, but MPI_Init() can only be called > once. We usually recommend that people only call PetscInitialize once > (the logging/profiling/debugging infrastructure is more useful that > way). From jedbrown at mcs.anl.gov Wed Jul 18 15:01:20 2012 From: jedbrown at mcs.anl.gov (Jed Brown) Date: Wed, 18 Jul 2012 15:01:20 -0500 Subject: [petsc-users] How to use petsc in a dynamically loaded shared library? In-Reply-To: <1342638758.2371.11.camel@F-UB> References: <20120718105010.148120@gmx.net> <20120718135224.32940@gmx.net> <1342638758.2371.11.camel@F-UB> Message-ID: On Wed, Jul 18, 2012 at 2:12 PM, Florian wrote: > I have used ddd and when I step into the VecDestroy function I get the > signal 11. I have three Vectors and it's only possible to destroy one of > them. Do I have consider something special before I destroy them? I read > values from the Vector which I'm able to destroy. > You are likely passing an invalid address to VecDestroy. Something is wrong with your debugger if it doesn't tell you what is invalid. -------------- next part -------------- An HTML attachment was scrubbed... URL: From hzhang at mcs.anl.gov Wed Jul 18 15:13:38 2012 From: hzhang at mcs.anl.gov (Hong Zhang) Date: Wed, 18 Jul 2012 15:13:38 -0500 Subject: [petsc-users] MatRARt In-Reply-To: References: Message-ID: Can I have your rart_simple.c for debugging? Hong On Wed, Jul 18, 2012 at 12:09 PM, John Fettig wrote: > Should MatRARt work for SeqAIJ matrices? I don't understand what is > wrong with this test code: > > #include > > #undef __FUNCT__ > #define __FUNCT__ "main" > int main(int argc,char **args) > { > Mat A,R,C; > PetscInt ai[5] = {0,2,4,6,8}; > PetscInt aj[8] = {0,3,1,2,1,2,0,3}; > PetscReal av[8] = {1,2,3,4,5,6,7,8}; > PetscInt ri[3] = {0,1,2}; > PetscInt rj[2] = {1,2}; > PetscReal rv[2] = {1,1}; > PetscErrorCode ierr; > > PetscInitialize(&argc,&args,(char *)0,(char *)0); > > ierr = > MatCreateSeqAIJWithArrays(PETSC_COMM_WORLD,4,4,ai,aj,av,&A);CHKERRQ(ierr); > ierr = > MatCreateSeqAIJWithArrays(PETSC_COMM_WORLD,2,4,ri,rj,rv,&R);CHKERRQ(ierr); > > //this is ok: > //ierr = MatMatMult(R,A,MAT_INITIAL_MATRIX,1.0,&C);CHKERRQ(ierr); > > //this is not: > ierr = MatRARt(A,R,MAT_INITIAL_MATRIX,1.0,&C);CHKERRQ(ierr); > > ierr = PetscFinalize(); > return 0; > } > > > When I run it, it says: > > [0]PETSC ERROR: --------------------- Error Message > ------------------------------------ > [0]PETSC ERROR: No support for this operation for this object type! > [0]PETSC ERROR: MatGetColumnIJ() not supported for matrix type seqaij! 
> [0]PETSC ERROR: > ------------------------------------------------------------------------ > [0]PETSC ERROR: Petsc Release Version 3.3.0, Patch 2, Fri Jul 13 > 15:42:00 CDT 2012 > [0]PETSC ERROR: See docs/changes/index.html for recent updates. > [0]PETSC ERROR: See docs/faq.html for hints about trouble shooting. > [0]PETSC ERROR: See docs/index.html for manual pages. > [0]PETSC ERROR: > ------------------------------------------------------------------------ > [0]PETSC ERROR: ./rart_simple on a linux-deb named lagrange.tomato by > jfe Wed Jul 18 13:07:33 2012 > [0]PETSC ERROR: Libraries linked from > /home/jfe/local/petsc-3.3-p2/linux-debug/lib > [0]PETSC ERROR: Configure run at Wed Jul 18 13:00:59 2012 > [0]PETSC ERROR: Configure options --with-x=0 > --download-f-blas-lapack=1 --with-mpi=1 --with-mpi-shared=1 > --with-mpi=1 --download-mpich=1 --with-debugging=1 > --with-gnu-compilers=yes --with-shared-libraries=1 --with-c++-support > --with-clanguage=C > [0]PETSC ERROR: > ------------------------------------------------------------------------ > [0]PETSC ERROR: MatTransposeColoringCreate_SeqAIJ() line 1294 in > /home/jfe/local/petsc-3.3-p2/src/mat/impls/aij/seq/matmatmult.c > [0]PETSC ERROR: MatTransposeColoringCreate() line 9318 in > /home/jfe/local/petsc-3.3-p2/src/mat/interface/matrix.c > [0]PETSC ERROR: MatRARtSymbolic_SeqAIJ_SeqAIJ() line 342 in > /home/jfe/local/petsc-3.3-p2/src/mat/impls/aij/seq/matrart.c > [0]PETSC ERROR: MatRARt_SeqAIJ_SeqAIJ() line 541 in > /home/jfe/local/petsc-3.3-p2/src/mat/impls/aij/seq/matrart.c > [0]PETSC ERROR: MatRARt() line 8405 in > /home/jfe/local/petsc-3.3-p2/src/mat/interface/matrix.c > [0]PETSC ERROR: main() line 25 in > src/ksp/ksp/examples/tutorials/rart_simple.c > > What am I doing wrong? > > Thanks, > John > -------------- next part -------------- An HTML attachment was scrubbed... URL: From john.fettig at gmail.com Wed Jul 18 15:18:51 2012 From: john.fettig at gmail.com (John Fettig) Date: Wed, 18 Jul 2012 16:18:51 -0400 Subject: [petsc-users] MatRARt In-Reply-To: References: Message-ID: Hong, That is what was pasted into the email. For good measure, I have attached it to this email (I also modified it to compute RARt using MatMatTransposeMult and MatMatMult). Regards, John On Wed, Jul 18, 2012 at 4:13 PM, Hong Zhang wrote: > Can I have your rart_simple.c for debugging? > Hong > > On Wed, Jul 18, 2012 at 12:09 PM, John Fettig wrote: >> >> Should MatRARt work for SeqAIJ matrices? I don't understand what is >> wrong with this test code: >> >> #include >> >> #undef __FUNCT__ >> #define __FUNCT__ "main" >> int main(int argc,char **args) >> { >> Mat A,R,C; >> PetscInt ai[5] = {0,2,4,6,8}; >> PetscInt aj[8] = {0,3,1,2,1,2,0,3}; >> PetscReal av[8] = {1,2,3,4,5,6,7,8}; >> PetscInt ri[3] = {0,1,2}; >> PetscInt rj[2] = {1,2}; >> PetscReal rv[2] = {1,1}; >> PetscErrorCode ierr; >> >> PetscInitialize(&argc,&args,(char *)0,(char *)0); >> >> ierr = >> MatCreateSeqAIJWithArrays(PETSC_COMM_WORLD,4,4,ai,aj,av,&A);CHKERRQ(ierr); >> ierr = >> MatCreateSeqAIJWithArrays(PETSC_COMM_WORLD,2,4,ri,rj,rv,&R);CHKERRQ(ierr); >> >> //this is ok: >> //ierr = MatMatMult(R,A,MAT_INITIAL_MATRIX,1.0,&C);CHKERRQ(ierr); >> >> //this is not: >> ierr = MatRARt(A,R,MAT_INITIAL_MATRIX,1.0,&C);CHKERRQ(ierr); >> >> ierr = PetscFinalize(); >> return 0; >> } >> >> >> When I run it, it says: >> >> [0]PETSC ERROR: --------------------- Error Message >> ------------------------------------ >> [0]PETSC ERROR: No support for this operation for this object type! 
>> [0]PETSC ERROR: MatGetColumnIJ() not supported for matrix type seqaij! >> [0]PETSC ERROR: >> ------------------------------------------------------------------------ >> [0]PETSC ERROR: Petsc Release Version 3.3.0, Patch 2, Fri Jul 13 >> 15:42:00 CDT 2012 >> [0]PETSC ERROR: See docs/changes/index.html for recent updates. >> [0]PETSC ERROR: See docs/faq.html for hints about trouble shooting. >> [0]PETSC ERROR: See docs/index.html for manual pages. >> [0]PETSC ERROR: >> ------------------------------------------------------------------------ >> [0]PETSC ERROR: ./rart_simple on a linux-deb named lagrange.tomato by >> jfe Wed Jul 18 13:07:33 2012 >> [0]PETSC ERROR: Libraries linked from >> /home/jfe/local/petsc-3.3-p2/linux-debug/lib >> [0]PETSC ERROR: Configure run at Wed Jul 18 13:00:59 2012 >> [0]PETSC ERROR: Configure options --with-x=0 >> --download-f-blas-lapack=1 --with-mpi=1 --with-mpi-shared=1 >> --with-mpi=1 --download-mpich=1 --with-debugging=1 >> --with-gnu-compilers=yes --with-shared-libraries=1 --with-c++-support >> --with-clanguage=C >> [0]PETSC ERROR: >> ------------------------------------------------------------------------ >> [0]PETSC ERROR: MatTransposeColoringCreate_SeqAIJ() line 1294 in >> /home/jfe/local/petsc-3.3-p2/src/mat/impls/aij/seq/matmatmult.c >> [0]PETSC ERROR: MatTransposeColoringCreate() line 9318 in >> /home/jfe/local/petsc-3.3-p2/src/mat/interface/matrix.c >> [0]PETSC ERROR: MatRARtSymbolic_SeqAIJ_SeqAIJ() line 342 in >> /home/jfe/local/petsc-3.3-p2/src/mat/impls/aij/seq/matrart.c >> [0]PETSC ERROR: MatRARt_SeqAIJ_SeqAIJ() line 541 in >> /home/jfe/local/petsc-3.3-p2/src/mat/impls/aij/seq/matrart.c >> [0]PETSC ERROR: MatRARt() line 8405 in >> /home/jfe/local/petsc-3.3-p2/src/mat/interface/matrix.c >> [0]PETSC ERROR: main() line 25 in >> src/ksp/ksp/examples/tutorials/rart_simple.c >> >> What am I doing wrong? >> >> Thanks, >> John > > -------------- next part -------------- A non-text attachment was scrubbed... Name: rart_simple.c Type: text/x-csrc Size: 941 bytes Desc: not available URL: From knepley at gmail.com Wed Jul 18 17:40:03 2012 From: knepley at gmail.com (Matthew Knepley) Date: Wed, 18 Jul 2012 17:40:03 -0500 Subject: [petsc-users] How to use petsc in a dynamically loaded shared library? In-Reply-To: References: <20120718105010.148120@gmx.net> <20120718135224.32940@gmx.net> <1342638758.2371.11.camel@F-UB> Message-ID: On Wed, Jul 18, 2012 at 3:01 PM, Jed Brown wrote: > On Wed, Jul 18, 2012 at 2:12 PM, Florian wrote: > >> I have used ddd and when I step into the VecDestroy function I get the >> signal 11. I have three Vectors and it's only possible to destroy one of >> them. Do I have consider something special before I destroy them? I read >> values from the Vector which I'm able to destroy. >> > > You are likely passing an invalid address to VecDestroy. Something is > wrong with your debugger if it doesn't tell you what is invalid. > Are you sure you are passing the address? VecDestroy(&v); Also, take a look at the examples. Modify an example until it does what you want. Matt -- What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. -- Norbert Wiener -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From hzhang at mcs.anl.gov Wed Jul 18 22:26:11 2012 From: hzhang at mcs.anl.gov (Hong Zhang) Date: Wed, 18 Jul 2012 22:26:11 -0500 Subject: [petsc-users] MatRARt In-Reply-To: References: Message-ID: John : > Should MatRARt work for SeqAIJ matrices? I don't understand what is > wrong with this test code: > The code runs well on Mac and Linux using petsc-3.3 or petsc-dev. I cannot repeat the error. However, as you noticed, MatRARt calls coloring, which is experimental, not well-tested, and the code needs to be cleaned :-( Can you test MatRARt on petsc-3.3/src/mat/examples/tests/ex94 with attached matrix data file by ./ex94 -f0 medium -f1 medium Do it crash? >From flops analysis and experiments, MatRARt() is not as efficient as MatPtAP, likely costs twice as MatPtAP. Hong > #include > > #undef __FUNCT__ > #define __FUNCT__ "main" > int main(int argc,char **args) > { > Mat A,R,C; > PetscInt ai[5] = {0,2,4,6,8}; > PetscInt aj[8] = {0,3,1,2,1,2,0,3}; > PetscReal av[8] = {1,2,3,4,5,6,7,8}; > PetscInt ri[3] = {0,1,2}; > PetscInt rj[2] = {1,2}; > PetscReal rv[2] = {1,1}; > PetscErrorCode ierr; > > PetscInitialize(&argc,&args,(char *)0,(char *)0); > > ierr = > MatCreateSeqAIJWithArrays(PETSC_COMM_WORLD,4,4,ai,aj,av,&A);CHKERRQ(ierr); > ierr = > MatCreateSeqAIJWithArrays(PETSC_COMM_WORLD,2,4,ri,rj,rv,&R);CHKERRQ(ierr); > > //this is ok: > //ierr = MatMatMult(R,A,MAT_INITIAL_MATRIX,1.0,&C);CHKERRQ(ierr); > > //this is not: > ierr = MatRARt(A,R,MAT_INITIAL_MATRIX,1.0,&C);CHKERRQ(ierr); > > ierr = PetscFinalize(); > return 0; > } > > > When I run it, it says: > > [0]PETSC ERROR: --------------------- Error Message > ------------------------------------ > [0]PETSC ERROR: No support for this operation for this object type! > [0]PETSC ERROR: MatGetColumnIJ() not supported for matrix type seqaij! > [0]PETSC ERROR: > ------------------------------------------------------------------------ > [0]PETSC ERROR: Petsc Release Version 3.3.0, Patch 2, Fri Jul 13 > 15:42:00 CDT 2012 > [0]PETSC ERROR: See docs/changes/index.html for recent updates. > [0]PETSC ERROR: See docs/faq.html for hints about trouble shooting. > [0]PETSC ERROR: See docs/index.html for manual pages. 
> [0]PETSC ERROR: > ------------------------------------------------------------------------ > [0]PETSC ERROR: ./rart_simple on a linux-deb named lagrange.tomato by > jfe Wed Jul 18 13:07:33 2012 > [0]PETSC ERROR: Libraries linked from > /home/jfe/local/petsc-3.3-p2/linux-debug/lib > [0]PETSC ERROR: Configure run at Wed Jul 18 13:00:59 2012 > [0]PETSC ERROR: Configure options --with-x=0 > --download-f-blas-lapack=1 --with-mpi=1 --with-mpi-shared=1 > --with-mpi=1 --download-mpich=1 --with-debugging=1 > --with-gnu-compilers=yes --with-shared-libraries=1 --with-c++-support > --with-clanguage=C > [0]PETSC ERROR: > ------------------------------------------------------------------------ > [0]PETSC ERROR: MatTransposeColoringCreate_SeqAIJ() line 1294 in > /home/jfe/local/petsc-3.3-p2/src/mat/impls/aij/seq/matmatmult.c > [0]PETSC ERROR: MatTransposeColoringCreate() line 9318 in > /home/jfe/local/petsc-3.3-p2/src/mat/interface/matrix.c > [0]PETSC ERROR: MatRARtSymbolic_SeqAIJ_SeqAIJ() line 342 in > /home/jfe/local/petsc-3.3-p2/src/mat/impls/aij/seq/matrart.c > [0]PETSC ERROR: MatRARt_SeqAIJ_SeqAIJ() line 541 in > /home/jfe/local/petsc-3.3-p2/src/mat/impls/aij/seq/matrart.c > [0]PETSC ERROR: MatRARt() line 8405 in > /home/jfe/local/petsc-3.3-p2/src/mat/interface/matrix.c > [0]PETSC ERROR: main() line 25 in > src/ksp/ksp/examples/tutorials/rart_simple.c > > What am I doing wrong? > > Thanks, > John > -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: medium Type: application/octet-stream Size: 29136 bytes Desc: not available URL: From Flo.44 at gmx.de Thu Jul 19 02:40:53 2012 From: Flo.44 at gmx.de (Florian Beck) Date: Thu, 19 Jul 2012 09:40:53 +0200 Subject: [petsc-users] How to use petsc in a dynamically loaded shared library? In-Reply-To: References: <20120718105010.148120@gmx.net> Message-ID: <20120719074053.148130@gmx.net> Hi, On Wed, Jul 18, 2012 at 3:01 PM, Jed Brown wrote: > On Wed, Jul 18, 2012 at 2:12 PM, Florian wrote: > >> I have used ddd and when I step into the VecDestroy function I get the >> signal 11. I have three Vectors and it's only possible to destroy one of >> them. Do I have consider something special before I destroy them? I read >> values from the Vector which I'm able to destroy. >> > > You are likely passing an invalid address to VecDestroy. Something is > wrong with your debugger if it doesn't tell you what is invalid. > > >Are you sure you are passing the address? > > VecDestroy(&v); > >Also, take a look at the examples. Modify an example until it does what >you >want. > > Matt In the debugger is the adress of my vector at the beginning and the end the same, but if I look inside the vector the adress of ops, map and data are 0x0 after I call the function VecDuplicate(). Is there a function to synchronize the vectors before I destroy them? From u.tabak at tudelft.nl Thu Jul 19 03:44:41 2012 From: u.tabak at tudelft.nl (Umut Tabak) Date: Thu, 19 Jul 2012 10:44:41 +0200 Subject: [petsc-users] advice on an eigenvalue problem with special structure Message-ID: <5007C8F9.3000005@tudelft.nl> Dear all, After some projection operations, I am ending up with a dense generalized non-symmetric eigenvalue problem, such as A\phi = \lambda B\phi where A and B are given as [A11 A12] [ 0 A22] [B11 0 ] [B21 B22] So there are two large 0 blocks in A and B. Moreover, B21 = -A12^T, I was wondering if I can tailor some efficient solver for these matrices with large zero blocks? 
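Typeset for readability, the block structure described in this message is (a reconstruction of the plain-text layout above, nothing added beyond it):

A = \begin{bmatrix} A_{11} & A_{12} \\ 0 & A_{22} \end{bmatrix},
\qquad
B = \begin{bmatrix} B_{11} & 0 \\ B_{21} & B_{22} \end{bmatrix},
\qquad
B_{21} = -A_{12}^{T},

so A is block upper triangular and B is block lower triangular in the generalized problem A\phi = \lambda B\phi.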
Any ideas and pointers are appreciated highly. Best regards, Umut From nicolas.tardieu at edf.fr Thu Jul 19 04:28:05 2012 From: nicolas.tardieu at edf.fr (Nicolas TARDIEU) Date: Thu, 19 Jul 2012 11:28:05 +0200 Subject: [petsc-users] Number of nonzeros > 2^31-1 In-Reply-To: Message-ID: Thanks for your answer, Matt. The problem is that I am using ML as a preconditioner and it does not support 64 bit indices (information reported in the PETSc configure phase). I am afraid I am stuck with this wrong nonzeros number. Nicolas Nicolas TARDIEU Ing. Chercheur EDF - R&D Dpt AMA nicolas.tardieu at edf.fr T?l. : 01 47 65 39 05 Un geste simple pour l'environnement, n'imprimez ce message que si vous en avez l'utilit?. knepley at gmail.com Envoy? par : petsc-users-bounces at mcs.anl.gov 17/07/2012 16:59 Veuillez r?pondre ? petsc-users at mcs.anl.gov A petsc-users at mcs.anl.gov cc Objet Re: [petsc-users] Number of nonzeros > 2^31-1 On Tue, Jul 17, 2012 at 6:20 AM, Nicolas TARDIEU wrote: Dear PETSc users, I am solving an unstructured finite element-based problem with 500 millions unkwowns with PETSc. The number of nonzeros is greater than 2^31-1. Here is the KSPView I get : ---------------------------------------------------------------------------------------------------------------- Matrix Object: 700 MPI processes type: mpiaij rows=499125000, cols=499125000 total: nonzeros=-2147483648, allocated nonzeros=-2147483648 total number of mallocs used during MatSetValues calls =0 using I-node (on process 0) routines: found 242923 nodes, limit used is 5 ---------------------------------------------------------------------------------------------------------------- As you can see, The number of nonzeros is <0. I would like to check that this is due to the number of nonzeros being greater than 2^31-1. Here is a short description of the size of differents types : ---------------------------------------------------------------------------------------------------------------- Compiled with full precision matrices (default) sizeof(short) 2 sizeof(int) 4 sizeof(long) 8 sizeof(void*) 8 sizeof(PetscScalar) 8 ---------------------------------------------------------------------------------------------------------------- Is there a workaround? Should I worry about the result of my simulation? You could configure using --with-64-bit-indices. Matt Thanks in advance, Nicolas Nicolas TARDIEU Ing. Chercheur EDF - R&D Dpt AMA nicolas.tardieu at edf.fr T?l. : 01 47 65 39 05 Un geste simple pour l'environnement, n'imprimez ce message que si vous en avez l'utilit?. Ce message et toutes les pi?ces jointes (ci-apr?s le 'Message') sont ?tablis ? l'intention exclusive des destinataires et les informations qui y figurent sont strictement confidentielles. Toute utilisation de ce Message non conforme ? sa destination, toute diffusion ou toute publication totale ou partielle, est interdite sauf autorisation expresse. Si vous n'?tes pas le destinataire de ce Message, il vous est interdit de le copier, de le faire suivre, de le divulguer ou d'en utiliser tout ou partie. Si vous avez re?u ce Message par erreur, merci de le supprimer de votre syst?me, ainsi que toutes ses copies, et de n'en garder aucune trace sur quelque support que ce soit. Nous vous remercions ?galement d'en avertir imm?diatement l'exp?diteur par retour du message. Il est impossible de garantir que les communications par messagerie ?lectronique arrivent en temps utile, sont s?curis?es ou d?nu?es de toute erreur ou virus. 
____________________________________________________ This message and any attachments (the 'Message') are intended solely for the addressees. The information contained in this Message is confidential. Any use of information contained in this Message not in accord with its purpose, any dissemination or disclosure, either whole or partial, is prohibited except formal approval. If you are not the addressee, you may not copy, forward, disclose or use any part of it. If you have received this message in error, please delete it and all copies from your system and notify the sender immediately by return message. E-mail communication cannot be guaranteed to be timely secure, error or virus-free. -- What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. -- Norbert Wiener Ce message et toutes les pi?ces jointes (ci-apr?s le 'Message') sont ?tablis ? l'intention exclusive des destinataires et les informations qui y figurent sont strictement confidentielles. Toute utilisation de ce Message non conforme ? sa destination, toute diffusion ou toute publication totale ou partielle, est interdite sauf autorisation expresse. Si vous n'?tes pas le destinataire de ce Message, il vous est interdit de le copier, de le faire suivre, de le divulguer ou d'en utiliser tout ou partie. Si vous avez re?u ce Message par erreur, merci de le supprimer de votre syst?me, ainsi que toutes ses copies, et de n'en garder aucune trace sur quelque support que ce soit. Nous vous remercions ?galement d'en avertir imm?diatement l'exp?diteur par retour du message. Il est impossible de garantir que les communications par messagerie ?lectronique arrivent en temps utile, sont s?curis?es ou d?nu?es de toute erreur ou virus. ____________________________________________________ This message and any attachments (the 'Message') are intended solely for the addressees. The information contained in this Message is confidential. Any use of information contained in this Message not in accord with its purpose, any dissemination or disclosure, either whole or partial, is prohibited except formal approval. If you are not the addressee, you may not copy, forward, disclose or use any part of it. If you have received this message in error, please delete it and all copies from your system and notify the sender immediately by return message. E-mail communication cannot be guaranteed to be timely secure, error or virus-free. -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: not available Type: image/gif Size: 1816 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: not available Type: image/gif Size: 1151 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: not available Type: image/gif Size: 1816 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: not available Type: image/gif Size: 1151 bytes Desc: not available URL: From jedbrown at mcs.anl.gov Thu Jul 19 06:36:36 2012 From: jedbrown at mcs.anl.gov (Jed Brown) Date: Thu, 19 Jul 2012 06:36:36 -0500 Subject: [petsc-users] How to use petsc in a dynamically loaded shared library? 
In-Reply-To: <20120719074053.148130@gmx.net> References: <20120718105010.148120@gmx.net> <20120719074053.148130@gmx.net> Message-ID: On Thu, Jul 19, 2012 at 2:40 AM, Florian Beck wrote: > In the debugger is the adress of my vector at the beginning and the end > the same, but if I look inside the vector the adress of ops, map and data > are 0x0 after I call the function VecDuplicate(). Is there a function to > synchronize the vectors before I destroy them? Sounds like a reference-counting bug. The most common cause is basically Vec X,Y; VecCreate(comm,&X); ... Y = X; ... VecDestroy(&X); VecDestroy(&Y); (perhaps spread across multiple functions, and perhaps with a reference obtained through a function that does not give you an "ownership share" by incrementing reference count). You can use "watch -l X->hdr.refct" in recent gdb to follow all the locations where reference count was changed (this includes the library), if you need a heavyweight methodology for tracking down the error. -------------- next part -------------- An HTML attachment was scrubbed... URL: From knepley at gmail.com Thu Jul 19 06:58:37 2012 From: knepley at gmail.com (Matthew Knepley) Date: Thu, 19 Jul 2012 06:58:37 -0500 Subject: [petsc-users] advice on an eigenvalue problem with special structure In-Reply-To: <5007C8F9.3000005@tudelft.nl> References: <5007C8F9.3000005@tudelft.nl> Message-ID: On Thu, Jul 19, 2012 at 3:44 AM, Umut Tabak wrote: > Dear all, > > After some projection operations, I am ending up with a dense generalized > non-symmetric eigenvalue problem, such as > > A\phi = \lambda B\phi > > where A and B are given as > > [A11 A12] > [ 0 A22] > > [B11 0 ] > [B21 B22] > > So there are two large 0 blocks in A and B. Moreover, B21 = -A12^T, I was > wondering if I can tailor some efficient solver for these matrices with > large zero blocks? > > Any ideas and pointers are appreciated highly. > You can try using MatNest for the two matrices, and MatTranspose for B21. Matt > Best regards, > Umut > > -- What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. -- Norbert Wiener -------------- next part -------------- An HTML attachment was scrubbed... URL: From knepley at gmail.com Thu Jul 19 07:02:58 2012 From: knepley at gmail.com (Matthew Knepley) Date: Thu, 19 Jul 2012 07:02:58 -0500 Subject: [petsc-users] Number of nonzeros > 2^31-1 In-Reply-To: References: Message-ID: On Thu, Jul 19, 2012 at 4:28 AM, Nicolas TARDIEU wrote: > > Thanks for your answer, Matt. > The problem is that I am using ML as a preconditioner and it does not > support 64 bit indices (information reported in the PETSc configure phase). > I am afraid I am stuck with this wrong nonzeros number. > We have recently added a preconditioner that is very similar to ML and should work with 64-bit indices, GAMG. You are welcome to try it out. Thanks, Matt > Nicolas > *Nicolas TARDIEU** > Ing. Chercheur* > EDF - R&D Dpt AMA > > *nicolas.tardieu at edf.fr* > T?l. : 01 47 65 39 05 Un geste simple pour l'environnement, n'imprimez > ce message que si vous en avez l'utilit?. > > > > *knepley at gmail.com* > Envoy? par : petsc-users-bounces at mcs.anl.gov > > 17/07/2012 16:59 > Veuillez r?pondre ? 
> petsc-users at mcs.anl.gov > > A > petsc-users at mcs.anl.gov > cc > Objet > Re: [petsc-users] Number of nonzeros > 2^31-1 > > > > > On Tue, Jul 17, 2012 at 6:20 AM, Nicolas TARDIEU <*nicolas.tardieu at edf.fr*> > wrote: > > Dear PETSc users, > > I am solving an unstructured finite element-based problem with 500 > millions unkwowns with PETSc. > The number of nonzeros is greater than 2^31-1. > Here is the KSPView I get : > > ---------------------------------------------------------------------------------------------------------------- > Matrix Object: 700 MPI processes > type: mpiaij > rows=499125000, cols=499125000 > total: nonzeros=-*2147483648* <2147483648>, allocated nonzeros=-* > 2147483648* <2147483648> > total number of mallocs used during MatSetValues calls =0 > using I-node (on process 0) routines: found 242923 nodes, limit used > is 5 > > ---------------------------------------------------------------------------------------------------------------- > > As you can see, The number of nonzeros is <0. > I would like to check that this is due to the number of nonzeros being > greater than 2^31-1. > > Here is a short description of the size of differents types : > > ---------------------------------------------------------------------------------------------------------------- > Compiled with full precision matrices (default) > sizeof(short) 2 sizeof(int) 4 sizeof(long) 8 sizeof(void*) 8 > sizeof(PetscScalar) 8 > > ---------------------------------------------------------------------------------------------------------------- > > Is there a workaround? Should I worry about the result of my simulation? > > You could configure using --with-64-bit-indices. > > Matt > > Thanks in advance, > Nicolas *Nicolas TARDIEU > Ing. Chercheur* > EDF - R&D Dpt AMA > * > **nicolas.tardieu at edf.fr* > T?l. : 01 47 65 39 05 Un geste simple pour l'environnement, n'imprimez > ce message que si vous en avez l'utilit?. > > > Ce message et toutes les pi?ces jointes (ci-apr?s le 'Message') sont > ?tablis ? l'intention exclusive des destinataires et les informations qui y > figurent sont strictement confidentielles. Toute utilisation de ce Message > non conforme ? sa destination, toute diffusion ou toute publication totale > ou partielle, est interdite sauf autorisation expresse. > > Si vous n'?tes pas le destinataire de ce Message, il vous est interdit de > le copier, de le faire suivre, de le divulguer ou d'en utiliser tout ou > partie. Si vous avez re?u ce Message par erreur, merci de le supprimer de > votre syst?me, ainsi que toutes ses copies, et de n'en garder aucune trace > sur quelque support que ce soit. Nous vous remercions ?galement d'en > avertir imm?diatement l'exp?diteur par retour du message. > > Il est impossible de garantir que les communications par messagerie > ?lectronique arrivent en temps utile, sont s?curis?es ou d?nu?es de toute > erreur ou virus. > ____________________________________________________ > > This message and any attachments (the 'Message') are intended solely for > the addressees. The information contained in this Message is confidential. > Any use of information contained in this Message not in accord with its > purpose, any dissemination or disclosure, either whole or partial, is > prohibited except formal approval. > > If you are not the addressee, you may not copy, forward, disclose or use > any part of it. If you have received this message in error, please delete > it and all copies from your system and notify the sender immediately by > return message. 
> > E-mail communication cannot be guaranteed to be timely secure, error or > virus-free. > > -- > What most experimenters take for granted before they begin their > experiments is infinitely more interesting than any results to which their > experiments lead. > -- Norbert Wiener

-- What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. -- Norbert Wiener

From john.fettig at gmail.com Thu Jul 19 08:10:45 2012 From: john.fettig at gmail.com (John Fettig) Date: Thu, 19 Jul 2012 09:10:45 -0400 Subject: [petsc-users] MatRARt In-Reply-To: References: Message-ID: On Wed, Jul 18, 2012 at 11:26 PM, Hong Zhang wrote: > Can you test MatRARt on > petsc-3.3/src/mat/examples/tests/ex94 with attached matrix data file by > > ./ex94 -f0 medium -f1 medium > > Do it crash? Yes, it does crash in the same exact fashion as the code I sent. This is 3.3-p2. petsc-dev also crashes, but I haven't pulled changes in about a month (and the server is down right now).
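In the meantime, a possible workaround, sketched under the assumption that A and R stay SeqAIJ as in the test code: form P = R^T once with MatTranspose and call MatPtAP, which computes P^T*A*P = R*A*R^T and avoids the MatTransposeColoring code path that is failing here.

  Mat P, C;
  ierr = MatTranspose(R,MAT_INITIAL_MATRIX,&P);CHKERRQ(ierr);    /* P = R^T               */
  ierr = MatPtAP(A,P,MAT_INITIAL_MATRIX,1.0,&C);CHKERRQ(ierr);   /* C = P^T A P = R A R^T */
  ierr = MatDestroy(&P);CHKERRQ(ierr);

If the product has to be recomputed with the same sparsity pattern, MAT_REUSE_MATRIX can be passed on later calls so the symbolic work is done only once.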
> From flops analysis and experiments, MatRARt() is not as efficient as > MatPtAP, > likely costs twice as MatPtAP. I can just as easily construct P, so this isn't a big deal. How would MatPtAP compare to a hypothetical MatRAP? Is there a reason this routine doesn't exist? John From jedbrown at mcs.anl.gov Thu Jul 19 08:31:38 2012 From: jedbrown at mcs.anl.gov (Jed Brown) Date: Thu, 19 Jul 2012 08:31:38 -0500 Subject: [petsc-users] MatRARt In-Reply-To: References: Message-ID: On Thu, Jul 19, 2012 at 8:10 AM, John Fettig wrote: > Yes, it does crash in the same exact fashion as the code I sent. This > is 3.3-p2. petsc-dev also crashes, but I haven't pulled changes in > about a month (and the server is down right now). > You can use this mirror https://bitbucket.org/petsc/petsc-dev > > From flops analysis and experiments, MatRARt() is not as efficient as > > MatPtAP, > > likely costs twice as MatPtAP. > > I can just as easily construct P, so this isn't a big deal. How would > MatPtAP compare to a hypothetical MatRAP? Is there a reason this > routine doesn't exist? > Time and calling convenience. It's also not clear that it can be implemented significantly more efficiently than R(AP). -------------- next part -------------- An HTML attachment was scrubbed... URL: From hzhang at mcs.anl.gov Thu Jul 19 09:30:29 2012 From: hzhang at mcs.anl.gov (Hong Zhang) Date: Thu, 19 Jul 2012 09:30:29 -0500 Subject: [petsc-users] MatRARt In-Reply-To: References: Message-ID: John: > > > Can you test MatRARt on > > petsc-3.3/src/mat/examples/tests/ex94 with attached matrix data file by > > > > ./ex94 -f0 medium -f1 medium > > > > Do it crash? > > Yes, it does crash in the same exact fashion as the code I sent. This > is 3.3-p2. petsc-dev also crashes, but I haven't pulled changes in > about a month (and the server is down right now). > Hmm, ex94 runs well on my tests. May I have your ~petsc/configure.log? > > > From flops analysis and experiments, MatRARt() is not as efficient as > > MatPtAP, > > likely costs twice as MatPtAP. > > I can just as easily construct P, so this isn't a big deal. How would > MatPtAP compare to a hypothetical MatRAP? Is there a reason this > routine doesn't exist? > Using petsc aij matrix format, sparse inner-product used in A*Rt (MatMatTransposeMult) costs twice as A*P (MatMatMult). Out-product used in Pt*A (MatTransposeMatMult) requires extensive data movement. I'm planning to implement MatRAP (MatMatMatMult), but have not been able to work on it. Suggestions are welcome. Hong -------------- next part -------------- An HTML attachment was scrubbed... URL: From john.fettig at gmail.com Thu Jul 19 09:57:56 2012 From: john.fettig at gmail.com (John Fettig) Date: Thu, 19 Jul 2012 10:57:56 -0400 Subject: [petsc-users] MatRARt In-Reply-To: References: Message-ID: On Thu, Jul 19, 2012 at 10:30 AM, Hong Zhang wrote: > Hmm, ex94 runs well on my tests. > May I have your ~petsc/configure.log? Sure, I've sent it to petsc-maint. John From gmulas at oa-cagliari.inaf.it Thu Jul 19 10:49:19 2012 From: gmulas at oa-cagliari.inaf.it (Giacomo Mulas) Date: Thu, 19 Jul 2012 17:49:19 +0200 (CEST) Subject: [petsc-users] request suggestions for most appropriate eigenvalue solver In-Reply-To: <530E2C7E-C650-4282-8AA5-F491DBDEFF12@dsic.upv.es> References: <62F26FC3-EF61-4A93-8749-19913849D026@dsic.upv.es> <530E2C7E-C650-4282-8AA5-F491DBDEFF12@dsic.upv.es> Message-ID: On Wed, 18 Jul 2012, Jose E. Roman wrote: > Yes, we will try to have this included in the release. I will come back to > you in a week or so. 
Great! I also found an additional paper that describes a modified Davidson method for finding eigenvectors with the largest component in a given subspace (non only in one given starting vector, but in the subspace spanned by possibly more than one), starting from lowest eigenvalues but without having to get to convergence all eigenvectors in order of eigenvalues. This appears to be even closer to what I would want (what I tried to describe in my initial email): W. Butscher, W.E. Kammer (1976) Journal of Computational Physics 20, 313 Bye Giacomo -- _________________________________________________________________ Giacomo Mulas _________________________________________________________________ OSSERVATORIO ASTRONOMICO DI CAGLIARI Str. 54, Loc. Poggio dei Pini * 09012 Capoterra (CA) Tel. (OAC): +39 070 71180 248 Fax : +39 070 71180 222 Tel. (UNICA): +39 070 675 4916 _________________________________________________________________ "When the storms are raging around you, stay right where you are" (Freddy Mercury) _________________________________________________________________ From bsmith at mcs.anl.gov Thu Jul 19 13:35:22 2012 From: bsmith at mcs.anl.gov (Barry Smith) Date: Thu, 19 Jul 2012 13:35:22 -0500 Subject: [petsc-users] Number of nonzeros > 2^31-1 In-Reply-To: References: Message-ID: On Jul 19, 2012, at 4:28 AM, Nicolas TARDIEU wrote: > > Thanks for your answer, Matt. > The problem is that I am using ML as a preconditioner and it does not support 64 bit indices (information reported in the PETSc configure phase). > I am afraid I am stuck with this wrong nonzeros number. You cannot use ML for such large problems, if ML it is only built for 32 bit integers. It is that simple. Barry > > Nicolas > > Nicolas TARDIEU > Ing. Chercheur > EDF - R&D Dpt AMA > > nicolas.tardieu at edf.fr > T?l. : 01 47 65 39 05 > Un geste simple pour l'environnement, n'imprimez ce message que si vous en avez l'utilit?. > > > > > knepley at gmail.com > Envoy? par : petsc-users-bounces at mcs.anl.gov > 17/07/2012 16:59 > Veuillez r?pondre ? > petsc-users at mcs.anl.gov > > A > petsc-users at mcs.anl.gov > cc > Objet > Re: [petsc-users] Number of nonzeros > 2^31-1 > > > > > > On Tue, Jul 17, 2012 at 6:20 AM, Nicolas TARDIEU wrote: > > Dear PETSc users, > > I am solving an unstructured finite element-based problem with 500 millions unkwowns with PETSc. > The number of nonzeros is greater than 2^31-1. > Here is the KSPView I get : > ---------------------------------------------------------------------------------------------------------------- > Matrix Object: 700 MPI processes > type: mpiaij > rows=499125000, cols=499125000 > total: nonzeros=-2147483648, allocated nonzeros=-2147483648 > total number of mallocs used during MatSetValues calls =0 > using I-node (on process 0) routines: found 242923 nodes, limit used is 5 > ---------------------------------------------------------------------------------------------------------------- > > As you can see, The number of nonzeros is <0. > I would like to check that this is due to the number of nonzeros being greater than 2^31-1. 
> > Here is a short description of the size of differents types : > ---------------------------------------------------------------------------------------------------------------- > Compiled with full precision matrices (default) > sizeof(short) 2 sizeof(int) 4 sizeof(long) 8 sizeof(void*) 8 sizeof(PetscScalar) 8 > ---------------------------------------------------------------------------------------------------------------- > > Is there a workaround? Should I worry about the result of my simulation? > > You could configure using --with-64-bit-indices. > > Matt > > Thanks in advance, > Nicolas > > Nicolas TARDIEU > Ing. Chercheur > EDF - R&D Dpt AMA > > nicolas.tardieu at edf.fr > T?l. : 01 47 65 39 05 > Un geste simple pour l'environnement, n'imprimez ce message que si vous en avez l'utilit?. > > > Ce message et toutes les pi?ces jointes (ci-apr?s le 'Message') sont ?tablis ? l'intention exclusive des destinataires et les informations qui y figurent sont strictement confidentielles. Toute utilisation de ce Message non conforme ? sa destination, toute diffusion ou toute publication totale ou partielle, est interdite sauf autorisation expresse. > > Si vous n'?tes pas le destinataire de ce Message, il vous est interdit de le copier, de le faire suivre, de le divulguer ou d'en utiliser tout ou partie. Si vous avez re?u ce Message par erreur, merci de le supprimer de votre syst?me, ainsi que toutes ses copies, et de n'en garder aucune trace sur quelque support que ce soit. Nous vous remercions ?galement d'en avertir imm?diatement l'exp?diteur par retour du message. > > Il est impossible de garantir que les communications par messagerie ?lectronique arrivent en temps utile, sont s?curis?es ou d?nu?es de toute erreur ou virus. > ____________________________________________________ > > This message and any attachments (the 'Message') are intended solely for the addressees. The information contained in this Message is confidential. Any use of information contained in this Message not in accord with its purpose, any dissemination or disclosure, either whole or partial, is prohibited except formal approval. > > If you are not the addressee, you may not copy, forward, disclose or use any part of it. If you have received this message in error, please delete it and all copies from your system and notify the sender immediately by return message. > > E-mail communication cannot be guaranteed to be timely secure, error or virus-free. > > > > > -- > What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. > -- Norbert Wiener > > > > Ce message et toutes les pi?ces jointes (ci-apr?s le 'Message') sont ?tablis ? l'intention exclusive des destinataires et les informations qui y figurent sont strictement confidentielles. Toute utilisation de ce Message non conforme ? sa destination, toute diffusion ou toute publication totale ou partielle, est interdite sauf autorisation expresse. > > Si vous n'?tes pas le destinataire de ce Message, il vous est interdit de le copier, de le faire suivre, de le divulguer ou d'en utiliser tout ou partie. Si vous avez re?u ce Message par erreur, merci de le supprimer de votre syst?me, ainsi que toutes ses copies, et de n'en garder aucune trace sur quelque support que ce soit. Nous vous remercions ?galement d'en avertir imm?diatement l'exp?diteur par retour du message. 
> > Il est impossible de garantir que les communications par messagerie ?lectronique arrivent en temps utile, sont s?curis?es ou d?nu?es de toute erreur ou virus. > ____________________________________________________ > > This message and any attachments (the 'Message') are intended solely for the addressees. The information contained in this Message is confidential. Any use of information contained in this Message not in accord with its purpose, any dissemination or disclosure, either whole or partial, is prohibited except formal approval. > > If you are not the addressee, you may not copy, forward, disclose or use any part of it. If you have received this message in error, please delete it and all copies from your system and notify the sender immediately by return message. > > E-mail communication cannot be guaranteed to be timely secure, error or virus-free. > From bsmith at mcs.anl.gov Thu Jul 19 14:00:21 2012 From: bsmith at mcs.anl.gov (Barry Smith) Date: Thu, 19 Jul 2012 14:00:21 -0500 Subject: [petsc-users] Not given explicit type for Subroutine DMCompositeGetEntries1 In-Reply-To: <5006BB2B.5040600@gmail.com> References: <5006BB2B.5040600@gmail.com> Message-ID: Thanks for reporting this. It is not important. I have fixed in in petsc-3.3 and petsc-dev and it will be in the next patch release. Barry On Jul 18, 2012, at 8:33 AM, TAY wee-beng wrote: > Hi, > > When compiling in vs2008 and Fortran, I always get the error msg: > > C:\Libs\petsc-3.2-dev_win32_cvf/include\finclude/ftn-custom/petscdmcomposite.h90(8) : Warning: This name has not been given an explicit type. [D1] > Subroutine DMCompositeGetEntries1(dm1, d1,ierr) > > Is it serious? How can I eliminate it? > > Thanks > > -- > Yours sincerely, > > TAY wee-beng > From zhenglun.wei at gmail.com Thu Jul 19 15:21:02 2012 From: zhenglun.wei at gmail.com (Zhenglun (Alan) Wei) Date: Thu, 19 Jul 2012 15:21:02 -0500 Subject: [petsc-users] Local Refinement? Message-ID: <50086C2E.9090803@gmail.com> Dear All, I hope you're having a nice day. I'm trying to use PETSc to program a Poisson Solver with local refinement grid (or adaptive mesh). Is that possible? I checked DMDA functions and found DMDASetRefinementFactor. Is this what I need to look into? or it is only for overall refinement in Multigrid solver? thanks, Alan -------------- next part -------------- An HTML attachment was scrubbed... URL: From knepley at gmail.com Thu Jul 19 15:22:22 2012 From: knepley at gmail.com (Matthew Knepley) Date: Thu, 19 Jul 2012 15:22:22 -0500 Subject: [petsc-users] Local Refinement? In-Reply-To: <50086C2E.9090803@gmail.com> References: <50086C2E.9090803@gmail.com> Message-ID: On Thu, Jul 19, 2012 at 3:21 PM, Zhenglun (Alan) Wei wrote: > Dear All, > I hope you're having a nice day. > I'm trying to use PETSc to program a Poisson Solver with local > refinement grid (or adaptive mesh). Is that possible? I checked DMDA > functions and found DMDASetRefinementFactor. Is this what I need to look > into? or it is only for overall refinement in Multigrid solver? > DMDA is a purely regular grid. You can check out SAMRAI or Chombo or Deal II for adaptive grid refinement. All of them can use PETSc. Matt > thanks, > Alan > -- What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. -- Norbert Wiener -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From jedbrown at mcs.anl.gov Thu Jul 19 16:39:04 2012 From: jedbrown at mcs.anl.gov (Jed Brown) Date: Thu, 19 Jul 2012 16:39:04 -0500 Subject: [petsc-users] Local Refinement? In-Reply-To: References: <50086C2E.9090803@gmail.com> Message-ID: On Thu, Jul 19, 2012 at 3:22 PM, Matthew Knepley wrote: > On Thu, Jul 19, 2012 at 3:21 PM, Zhenglun (Alan) Wei < > zhenglun.wei at gmail.com> wrote: > >> Dear All, >> I hope you're having a nice day. >> I'm trying to use PETSc to program a Poisson Solver with local >> refinement grid (or adaptive mesh). Is that possible? I checked DMDA >> functions and found DMDASetRefinementFactor. Is this what I need to look >> into? or it is only for overall refinement in Multigrid solver? >> > > DMDA is a purely regular grid. You can check out SAMRAI or Chombo or Deal > II for > adaptive grid refinement. All of them can use PETSc. > Or libmesh or fenics or .... There are many AMR packages. -------------- next part -------------- An HTML attachment was scrubbed... URL: From Flo.44 at gmx.de Fri Jul 20 01:39:15 2012 From: Flo.44 at gmx.de (Florian Beck) Date: Fri, 20 Jul 2012 08:39:15 +0200 Subject: [petsc-users] How to use petsc in a dynamically loaded shared library? In-Reply-To: References: <20120718105010.148120@gmx.net> Message-ID: <20120720063915.252590@gmx.net> Hi, On Thu, Jul 19, 2012 at 2:40 AM, Florian Beck wrote: >> In the debugger is the adress of my vector at the beginning and the end >> the same, but if I look inside the vector the adress of ops, map and >data >> are 0x0 after I call the function VecDuplicate(). Is there a function to >> synchronize the vectors before I destroy them? > > >Sounds like a reference-counting bug. The most common cause is basically > >Vec X,Y; >VecCreate(comm,&X); >... >Y = X; >... >VecDestroy(&X); >VecDestroy(&Y); > >(perhaps spread across multiple functions, and perhaps with a reference >obtained through a function that does not give you an "ownership share" by >incrementing reference count). > >You can use "watch -l X->hdr.refct" in recent gdb to follow all the >locations where reference count was changed (this includes the library), >if >you need a heavyweight methodology for tracking down the error. > I have looked for the counting bug, but I haven't found anything because my gdb crashes. Is it not possible to restore the addresses ops, map and data? I think it's possible to get values and from the vectors, so why shouldn't it possible to destroy them? Or is it possible to destroy the Vectors manually? From ajay.rawat83 at gmail.com Fri Jul 20 02:57:02 2012 From: ajay.rawat83 at gmail.com (Ajay Rawat) Date: Fri, 20 Jul 2012 13:27:02 +0530 Subject: [petsc-users] Local Refinement? In-Reply-To: References: <50086C2E.9090803@gmail.com> Message-ID: On Fri, Jul 20, 2012 at 3:09 AM, Jed Brown wrote: > On Thu, Jul 19, 2012 at 3:22 PM, Matthew Knepley wrote: > >> On Thu, Jul 19, 2012 at 3:21 PM, Zhenglun (Alan) Wei < >> zhenglun.wei at gmail.com> wrote: >> >>> Dear All, >>> I hope you're having a nice day. >>> I'm trying to use PETSc to program a Poisson Solver with local >>> refinement grid (or adaptive mesh). Is that possible? I checked DMDA >>> functions and found DMDASetRefinementFactor. Is this what I need to look >>> into? or it is only for overall refinement in Multigrid solver? >>> >> >> DMDA is a purely regular grid. You can check out SAMRAI or Chombo or Deal >> II for >> adaptive grid refinement. All of them can use PETSc. >> > > Or libmesh or fenics or .... There are many AMR packages. 
> Dear Jed Is there any developmental activity going on for doing AMR inside PETSc, at least for purely regular grid (DMDA). It will be a great addition. -- Ajay Rawat Kalpakkam, IGCAR ------------------------------------------------------------------------- Save Himalayas.... ------------------------------------------------------------------------- -------------- next part -------------- An HTML attachment was scrubbed... URL: From knepley at gmail.com Fri Jul 20 06:02:20 2012 From: knepley at gmail.com (Matthew Knepley) Date: Fri, 20 Jul 2012 06:02:20 -0500 Subject: [petsc-users] Local Refinement? In-Reply-To: References: <50086C2E.9090803@gmail.com> Message-ID: On Fri, Jul 20, 2012 at 2:57 AM, Ajay Rawat wrote: > > > On Fri, Jul 20, 2012 at 3:09 AM, Jed Brown wrote: > >> On Thu, Jul 19, 2012 at 3:22 PM, Matthew Knepley wrote: >> >>> On Thu, Jul 19, 2012 at 3:21 PM, Zhenglun (Alan) Wei < >>> zhenglun.wei at gmail.com> wrote: >>> >>>> Dear All, >>>> I hope you're having a nice day. >>>> I'm trying to use PETSc to program a Poisson Solver with local >>>> refinement grid (or adaptive mesh). Is that possible? I checked DMDA >>>> functions and found DMDASetRefinementFactor. Is this what I need to look >>>> into? or it is only for overall refinement in Multigrid solver? >>>> >>> >>> DMDA is a purely regular grid. You can check out SAMRAI or Chombo or >>> Deal II for >>> adaptive grid refinement. All of them can use PETSc. >>> >> >> Or libmesh or fenics or .... There are many AMR packages. >> > > Dear Jed > > Is there any developmental activity going on for doing AMR inside PETSc, > at least for purely regular grid (DMDA). It will be a great addition. > No, the whole point is that there are all these other packages that already do it that you can use. Matt > > -- > Ajay Rawat > Kalpakkam, IGCAR > > ------------------------------------------------------------------------- > Save Himalayas.... > ------------------------------------------------------------------------- > -- What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. -- Norbert Wiener -------------- next part -------------- An HTML attachment was scrubbed... URL: From knepley at gmail.com Fri Jul 20 06:04:55 2012 From: knepley at gmail.com (Matthew Knepley) Date: Fri, 20 Jul 2012 06:04:55 -0500 Subject: [petsc-users] How to use petsc in a dynamically loaded shared library? In-Reply-To: <20120720063915.252590@gmx.net> References: <20120718105010.148120@gmx.net> <20120720063915.252590@gmx.net> Message-ID: On Fri, Jul 20, 2012 at 1:39 AM, Florian Beck wrote: > Hi, > > > On Thu, Jul 19, 2012 at 2:40 AM, Florian Beck wrote: > > >> In the debugger is the adress of my vector at the beginning and the end > >> the same, but if I look inside the vector the adress of ops, map and > >data > >> are 0x0 after I call the function VecDuplicate(). Is there a function to > >> synchronize the vectors before I destroy them? > > > > > >Sounds like a reference-counting bug. The most common cause is basically > > > >Vec X,Y; > >VecCreate(comm,&X); > >... > >Y = X; > >... > >VecDestroy(&X); > >VecDestroy(&Y); > > > >(perhaps spread across multiple functions, and perhaps with a reference > >obtained through a function that does not give you an "ownership share" by > >incrementing reference count). 
> > > >You can use "watch -l X->hdr.refct" in recent gdb to follow all the > >locations where reference count was changed (this includes the library), > >if > >you need a heavyweight methodology for tracking down the error. > > > > I have looked for the counting bug, but I haven't found anything because > my gdb crashes. > > Is it not possible to restore the addresses ops, map and data? I think > it's possible to get values and from the vectors, so why shouldn't it > possible to destroy them? Or is it possible to destroy the Vectors manually? > 1) This has nothing to do with shared libraries. You have a bug in your code. 2) Debugging is about making your code simpler until you can find the bug, and learning to use debugging tools. This is a big part of programming. 3) Of course, you can destroy Vecs. You have a bug. Try using valgrind. Matt -- What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. -- Norbert Wiener -------------- next part -------------- An HTML attachment was scrubbed... URL: From paeanball at gmail.com Sat Jul 21 04:30:29 2012 From: paeanball at gmail.com (Bao Kai) Date: Sat, 21 Jul 2012 12:30:29 +0300 Subject: [petsc-users] How to investigate the reason for slow convergence rate? Message-ID: > > > HI, all, I am still suffering from the slow convergence rate of the KSP solution. I changed the code to use Petsc3.3 and then try the gamg precoditioner, the convergence rate is better, while it took more total time because it took much more time for each iteration and some extra time for pre-processing. I am wondering if there are some ways that can help me to investigate the slow convergence rate for KSP solution so that I can do some improvement. Is DMMG will be a good solution? Thank you very much. Best Regards, Kai > > Message: 2 > Date: Wed, 11 Jul 2012 15:17:15 -0500 > From: Matthew Knepley > To: PETSc users list > Subject: Re: [petsc-users] Does this mean the matrix is > ill-conditioned? > Message-ID: > q+w1PKO7G_TW07iDzux90Sncbv9K7d0FD-MDrLRg at mail.gmail.com> > Content-Type: text/plain; charset="iso-8859-1" > > On Wed, Jul 11, 2012 at 12:40 PM, Bao Kai wrote: > > > Hi, all, > > > > The following is the output from the solution of a Poisson equation > > from Darcy's law. > > > > To compute the condition number of matrix, I did not use PC and use > > GMRES KSP to do the test. > > > > It seems like that the condition number keep increasing during the > > iterative solution. Does this mean the matrix is ill-conditioned? > > > > Generally yes. Krylov methods take a long time to resolve the smallest > eigenvalues, so this approximation is not great. > > > > For this test, it did not achieve convergence with 10000 iterations. > > > > When I use BJOCABI PC and BICGSTAB KSP, it generally takes about 600 > > times iteration to get the iteration convergent. > > > > Any suggestion for improving the convergence rate will be much > > appreciated. The solution of this equation has been the bottleneck of > > my code, it takes more than 90% of the total time. > > > > Try ML or GAMG. > > Matt > > > > Thank you very much. > > > > Best Regards, > > Kai > > > > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From knepley at gmail.com Sat Jul 21 08:47:10 2012 From: knepley at gmail.com (Matthew Knepley) Date: Sat, 21 Jul 2012 08:47:10 -0500 Subject: [petsc-users] How to investigate the reason for slow convergence rate? 
In-Reply-To: References: Message-ID: On Sat, Jul 21, 2012 at 4:30 AM, Bao Kai wrote: > >> HI, all, > > I am still suffering from the slow convergence rate of the KSP solution. > > I changed the code to use Petsc3.3 and then try the gamg precoditioner, > the convergence rate is better, while it took more total time because it > took much more time for each iteration and some extra time for > pre-processing. > Try ML to see if it has better defaults for your problem. If not, you will have to start experimenting with the solver parameters. > I am wondering if there are some ways that can help me to investigate the > slow convergence rate for KSP solution so that I can do some improvement. > Is DMMG will be a good solution? > No. Matt > Thank you very much. > > Best Regards, > Kai > >> >> Message: 2 >> Date: Wed, 11 Jul 2012 15:17:15 -0500 >> From: Matthew Knepley >> To: PETSc users list >> Subject: Re: [petsc-users] Does this mean the matrix is >> ill-conditioned? >> Message-ID: >> > q+w1PKO7G_TW07iDzux90Sncbv9K7d0FD-MDrLRg at mail.gmail.com> >> Content-Type: text/plain; charset="iso-8859-1" >> >> On Wed, Jul 11, 2012 at 12:40 PM, Bao Kai wrote: >> >> > Hi, all, >> > >> > The following is the output from the solution of a Poisson equation >> > from Darcy's law. >> > >> > To compute the condition number of matrix, I did not use PC and use >> > GMRES KSP to do the test. >> > >> > It seems like that the condition number keep increasing during the >> > iterative solution. Does this mean the matrix is ill-conditioned? >> > >> >> Generally yes. Krylov methods take a long time to resolve the smallest >> eigenvalues, so this approximation is not great. >> >> >> > For this test, it did not achieve convergence with 10000 iterations. >> > >> > When I use BJOCABI PC and BICGSTAB KSP, it generally takes about 600 >> > times iteration to get the iteration convergent. >> > >> > Any suggestion for improving the convergence rate will be much >> > appreciated. The solution of this equation has been the bottleneck of >> > my code, it takes more than 90% of the total time. >> > >> >> Try ML or GAMG. >> >> Matt >> >> >> > Thank you very much. >> > >> > Best Regards, >> > Kai >> > >> >> >> -- What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. -- Norbert Wiener -------------- next part -------------- An HTML attachment was scrubbed... URL: From bsmith at mcs.anl.gov Sat Jul 21 10:48:45 2012 From: bsmith at mcs.anl.gov (Barry Smith) Date: Sat, 21 Jul 2012 10:48:45 -0500 Subject: [petsc-users] How to investigate the reason for slow convergence rate? In-Reply-To: References: Message-ID: <60D656F9-9404-446F-AB54-066C5A2E7AC1@mcs.anl.gov> http://www.mcs.anl.gov/petsc/documentation/faq.html#kspdiverged On Jul 21, 2012, at 4:30 AM, Bao Kai wrote: > > HI, all, > > I am still suffering from the slow convergence rate of the KSP solution. > > I changed the code to use Petsc3.3 and then try the gamg precoditioner, the convergence rate is better, while it took more total time because it took much more time for each iteration and some extra time for pre-processing. > > I am wondering if there are some ways that can help me to investigate the slow convergence rate for KSP solution so that I can do some improvement. Is DMMG will be a good solution? > > Thank you very much. 
> > Best Regards, > Kai > > Message: 2 > Date: Wed, 11 Jul 2012 15:17:15 -0500 > From: Matthew Knepley > To: PETSc users list > Subject: Re: [petsc-users] Does this mean the matrix is > ill-conditioned? > Message-ID: > > Content-Type: text/plain; charset="iso-8859-1" > > On Wed, Jul 11, 2012 at 12:40 PM, Bao Kai wrote: > > > Hi, all, > > > > The following is the output from the solution of a Poisson equation > > from Darcy's law. > > > > To compute the condition number of matrix, I did not use PC and use > > GMRES KSP to do the test. > > > > It seems like that the condition number keep increasing during the > > iterative solution. Does this mean the matrix is ill-conditioned? > > > > Generally yes. Krylov methods take a long time to resolve the smallest > eigenvalues, so this approximation is not great. > > > > For this test, it did not achieve convergence with 10000 iterations. > > > > When I use BJOCABI PC and BICGSTAB KSP, it generally takes about 600 > > times iteration to get the iteration convergent. > > > > Any suggestion for improving the convergence rate will be much > > appreciated. The solution of this equation has been the bottleneck of > > my code, it takes more than 90% of the total time. > > > > Try ML or GAMG. > > Matt > > > > Thank you very much. > > > > Best Regards, > > Kai > > > > From flo.44 at gmx.de Sat Jul 21 12:26:48 2012 From: flo.44 at gmx.de (Florian) Date: Sat, 21 Jul 2012 19:26:48 +0200 Subject: [petsc-users] How to use petsc in a dynamically loaded shared library? Message-ID: <1342891608.10030.8.camel@F-UB> >On Fri, Jul 20, 2012 at 1:39 AM, Florian Beck > >> >> Hi, >> >> >> On Thu, Jul 19, 2012 at 2:40 AM, Florian Beck wrote: >> >> >> In the debugger is the adress of my vector at the beginning and the end >> >> the same, but if I look inside the vector the adress of ops, map and >> >data >> >> are 0x0 after I call the function VecDuplicate(). Is there a function to >> >> synchronize the vectors before I destroy them? >> > >> > >> >Sounds like a reference-counting bug. The most common cause is basically >> > >> >Vec X,Y; >> >VecCreate(comm,&X); >> >... >> >Y = X; >> >... >> >VecDestroy(&X); >> >VecDestroy(&Y); >> > >> >(perhaps spread across multiple functions, and perhaps with a reference >> >obtained through a function that does not give you an "ownership share" by >> >incrementing reference count). >> > >> >You can use "watch -l X->hdr.refct" in recent gdb to follow all the >> >locations where reference count was changed (this includes the library), >> >if >> >you need a heavyweight methodology for tracking down the error. >> > >> >> I have looked for the counting bug, but I haven't found anything because >> my gdb crashes. >> >> Is it not possible to restore the addresses ops, map and data? I think >> it's possible to get values and from the vectors, so why shouldn't it >> possible to destroy them? Or is it possible to destroy the Vectors manually? >> > >1) This has nothing to do with shared libraries. You have a bug in your> >code. > >2) Debugging is about making your code simpler until you can find the bug, >and > learning to use debugging tools. This is a big part of programming.> > >3) Of course, you can destroy Vecs. You have a bug. Try using valgrind. I made it to step with gdb now in VecCreate(). The problem is inside the VecCreate function the new created vector vec has all the adresses set. But when I'm stepping back in my function the addresses are all 0x0. Another question is, what happens with my created vector. 
I mean I create a Object with Vec v and inside the VecCreate function the addres of my object v is first set to PETSC_NULL and later to the Vec which is created inside the VecCreate function. It seems like I never could destroy my created vector. Can somebody explain this to me? From jedbrown at mcs.anl.gov Sat Jul 21 12:29:20 2012 From: jedbrown at mcs.anl.gov (Jed Brown) Date: Sat, 21 Jul 2012 12:29:20 -0500 Subject: [petsc-users] How to use petsc in a dynamically loaded shared library? In-Reply-To: <1342891608.10030.8.camel@F-UB> References: <1342891608.10030.8.camel@F-UB> Message-ID: On Sat, Jul 21, 2012 at 12:26 PM, Florian wrote: > I made it to step with gdb now in VecCreate(). The problem is inside the > VecCreate > function the new created vector vec has all the adresses set. But when I'm > stepping > back in my function the addresses are all 0x0. > Let VecCreate finish. > > Another question is, what happens with my created vector. I mean I create > a Object > with Vec v and inside the VecCreate function the addres of my object v is > first set > to PETSC_NULL and later to the Vec which is created inside the VecCreate > function. > It seems like I never could destroy my created vector. Can somebody > explain this to me? > The new object is created, then your pointer is made to point at the new object. -------------- next part -------------- An HTML attachment was scrubbed... URL: From flo.44 at gmx.de Sat Jul 21 12:48:41 2012 From: flo.44 at gmx.de (Florian) Date: Sat, 21 Jul 2012 19:48:41 +0200 Subject: [petsc-users] How to use petsc in a dynamically loaded shared library? Message-ID: <1342892921.10030.14.camel@F-UB> >On Sat, Jul 21, 2012 at 12:26 PM, Florian wrote: > >> I made it to step with gdb now in VecCreate(). The problem is inside the >> VecCreate >> function the new created vector vec has all the adresses set. But when I'm >> stepping >> back in my function the addresses are all 0x0. >> > >Let VecCreate finish. What do you mean? I step to the last line in VecCreate and after the last line back. > > >> >> Another question is, what happens with my created vector. I mean I create >> a Object >> with Vec v and inside the VecCreate function the addres of my object v is >> first set >> to PETSC_NULL and later to the Vec which is created inside the VecCreate >> function. >> It seems like I never could destroy my created vector. Can somebody >> explain this to me? >> > >The new object is created, then your pointer is made to point at the new >object. So if I get it right I create with "Vec v;" only a pointer? But why is it possible before I call VecCreate to have a look at a vector object. I mean in ddd I can display a full vector object. From jedbrown at mcs.anl.gov Sat Jul 21 12:55:34 2012 From: jedbrown at mcs.anl.gov (Jed Brown) Date: Sat, 21 Jul 2012 12:55:34 -0500 Subject: [petsc-users] How to use petsc in a dynamically loaded shared library? In-Reply-To: <1342892921.10030.14.camel@F-UB> References: <1342892921.10030.14.camel@F-UB> Message-ID: Please reply to email rather than starting a new thread. Your email client might be dropping headers. On Sat, Jul 21, 2012 at 12:48 PM, Florian wrote: > >On Sat, Jul 21, 2012 at 12:26 PM, Florian wrote: > > > >> I made it to step with gdb now in VecCreate(). The problem is inside the > >> VecCreate > >> function the new created vector vec has all the adresses set. But when > I'm > >> stepping > >> back in my function the addresses are all 0x0. > >> > > > >Let VecCreate finish. > > What do you mean? 
I step to the last line in VecCreate and after the last > line back. > The last line is *vec = v; which makes your pointer point at the new vector. > > > > > > >> > >> Another question is, what happens with my created vector. I mean I > create > >> a Object > >> with Vec v and inside the VecCreate function the addres of my object v > is > >> first set > >> to PETSC_NULL and later to the Vec which is created inside the VecCreate > >> function. > >> It seems like I never could destroy my created vector. Can somebody > >> explain this to me? > >> > > > >The new object is created, then your pointer is made to point at the new > >object. > > So if I get it right I create with "Vec v;" only a pointer? But why is it > possible before I call VecCreate > to have a look at a vector object. I mean in ddd I can display a full > vector object. > > In petscvec.h, you will find typedef struct _p_Vec *Vec; Of course DDD allows you to follow an invalid pointer. -------------- next part -------------- An HTML attachment was scrubbed... URL: From paeanball at gmail.com Sun Jul 22 04:01:07 2012 From: paeanball at gmail.com (Bao Kai) Date: Sun, 22 Jul 2012 12:01:07 +0300 Subject: [petsc-users] How to investigate the reason for slow convergence rate? Message-ID: Hi, Matt, I tried ML6.2 with petsc3.3 with default parameters by only specifying the preconditioner to be PCML. The KSP solver is gmres. The convergence rate is much faster, while it still took much longer time in total. For example, for the problem with 500^3 mesh ( 125 million unknowns ) with 512nodes(4 processors per node) on bluegene/P , it tooks about ten iterations to get convergent while the total time used is about 400 seconds. 506 the KSP type is gmres 507 the PC type is ml 508 KSP rtol = 0.100000000000000008E-04 abstol = 0.100000000000000001E-49 dtol = 10000.0000000000000 maxit = 10000 509 SNES rtol = 0.100000000000000002E-07 abstol = 0.100000000000000001E-49 stol = 0.100000000000000002E-07 maxit = 50 maxf= 10000 510 0 SNES Function norm 5.859593121800e+02 511 0 KSP Residual norm 8.340827070202e+06 512 1 KSP Residual norm 7.980806572332e+05 513 2 KSP Residual norm 1.870896731234e+05 514 3 KSP Residual norm 6.790580947452e+04 515 4 KSP Residual norm 2.665552335248e+04 516 5 KSP Residual norm 1.130212349885e+04 517 6 KSP Residual norm 4.053599972292e+03 518 7 KSP Residual norm 1.786770710693e+03 519 8 KSP Residual norm 7.313571654931e+02 520 9 KSP Residual norm 3.205683714450e+02 521 10 KSP Residual norm 1.263243312734e+02 522 11 KSP Residual norm 3.945082815178e+01 523 1 SNES Function norm 9.378772067642e-02 524 0 KSP Residual norm 5.413489711800e+01 525 1 KSP Residual norm 1.442598710609e+01 526 2 KSP Residual norm 4.073537172140e+00 527 3 KSP Residual norm 1.157455598705e+00 528 4 KSP Residual norm 3.509855901968e-01 529 5 KSP Residual norm 1.160625342728e-01 530 6 KSP Residual norm 3.209351890216e-02 531 7 KSP Residual norm 7.780869881329e-03 532 8 KSP Residual norm 1.820828886636e-03 533 9 KSP Residual norm 4.172544590190e-04 534 2 SNES Function norm 6.747963806680e-07 535 Number of KSP iteration is 9 536 SNES solve takes time 406.724867261176314 But with bcgs and bjacobi, it tooks about 550 KSP iterations ( 2 snes iterations ) and 69 seconds to get the result. For much smaller problems, benefiting from the fast convergence, it did takes less time to get the result. It seems that the ml can not be scaled, or I used it in a wrong way. 
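One way to see where those 400 seconds go, sketched with standard runtime options; the executable name is a placeholder and the smoother/level settings are only illustrative starting points, not values taken from this run:

  ./your_app -ksp_type gmres -pc_type ml \
             -ksp_view -log_summary -ksp_monitor_true_residual \
             -mg_levels_ksp_type richardson -mg_levels_pc_type sor \
             -pc_ml_maxNlevels 4

-ksp_view reports how many levels ML built and which smoothers run on them, and in the -log_summary tables the hierarchy construction shows up mainly under the PCSetUp event, which makes it possible to tell whether the extra time is spent in setup or in the GMRES iterations themselves.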
Best Regards, Kai > Message: 2 > Date: Sat, 21 Jul 2012 08:47:10 -0500 > From: Matthew Knepley > To: PETSc users list > Subject: Re: [petsc-users] How to investigate the reason for slow > convergence rate? > Message-ID: > > Content-Type: text/plain; charset="iso-8859-1" > > On Sat, Jul 21, 2012 at 4:30 AM, Bao Kai wrote: > >> >>> HI, all, >> >> I am still suffering from the slow convergence rate of the KSP solution. >> >> I changed the code to use Petsc3.3 and then try the gamg precoditioner, >> the convergence rate is better, while it took more total time because it >> took much more time for each iteration and some extra time for >> pre-processing. >> > > Try ML to see if it has better defaults for your problem. If not, you will > have to start experimenting with the solver > parameters. > > >> I am wondering if there are some ways that can help me to investigate the >> slow convergence rate for KSP solution so that I can do some improvement. >> Is DMMG will be a good solution? >> > > No. > > Matt > > >> Thank you very much. >> >> Best Regards, >> Kai >> >>> >>> Message: 2 >>> Date: Wed, 11 Jul 2012 15:17:15 -0500 >>> From: Matthew Knepley >>> To: PETSc users list >>> Subject: Re: [petsc-users] Does this mean the matrix is >>> ill-conditioned? >>> Message-ID: >>> >> q+w1PKO7G_TW07iDzux90Sncbv9K7d0FD-MDrLRg at mail.gmail.com> >>> Content-Type: text/plain; charset="iso-8859-1" >>> >>> On Wed, Jul 11, 2012 at 12:40 PM, Bao Kai wrote: >>> >>> > Hi, all, >>> > >>> > The following is the output from the solution of a Poisson equation >>> > from Darcy's law. >>> > >>> > To compute the condition number of matrix, I did not use PC and use >>> > GMRES KSP to do the test. >>> > >>> > It seems like that the condition number keep increasing during the >>> > iterative solution. Does this mean the matrix is ill-conditioned? >>> > >>> >>> Generally yes. Krylov methods take a long time to resolve the smallest >>> eigenvalues, so this approximation is not great. >>> >>> >>> > For this test, it did not achieve convergence with 10000 iterations. >>> > >>> > When I use BJOCABI PC and BICGSTAB KSP, it generally takes about 600 >>> > times iteration to get the iteration convergent. >>> > >>> > Any suggestion for improving the convergence rate will be much >>> > appreciated. The solution of this equation has been the bottleneck of >>> > my code, it takes more than 90% of the total time. >>> > >>> >>> Try ML or GAMG. >>> >>> Matt >>> >>> >>> > Thank you very much. >>> > >>> > Best Regards, >>> > Kai >>> > >>> >>> >>> > > > -- > What most experimenters take for granted before they begin their > experiments is infinitely more interesting than any results to which their > experiments lead. > -- Norbert Wiener > -------------- next part -------------- > An HTML attachment was scrubbed... > URL: > > > ------------------------------ > From jedbrown at mcs.anl.gov Sun Jul 22 07:16:07 2012 From: jedbrown at mcs.anl.gov (Jed Brown) Date: Sun, 22 Jul 2012 07:16:07 -0500 Subject: [petsc-users] How to investigate the reason for slow convergence rate? In-Reply-To: References: Message-ID: Please send -log_summary output with performance questions. On Sun, Jul 22, 2012 at 4:01 AM, Bao Kai wrote: > Hi, Matt, > > I tried ML6.2 with petsc3.3 with default parameters by only specifying > the preconditioner to be PCML. The KSP solver is gmres. > > The convergence rate is much faster, while it still took much longer > time in total. 
> > For example, for the problem with 500^3 mesh ( 125 million unknowns ) > with 512nodes(4 processors per node) on bluegene/P , it tooks about > ten iterations to get convergent while the total time used is about > 400 seconds. > > 506 the KSP type is gmres > 507 the PC type is ml > 508 KSP rtol = 0.100000000000000008E-04 abstol = > 0.100000000000000001E-49 dtol = 10000.0000000000000 maxit = 10000 > 509 SNES rtol = 0.100000000000000002E-07 abstol = > 0.100000000000000001E-49 stol = 0.100000000000000002E-07 maxit = > 50 maxf= 10000 > 510 0 SNES Function norm 5.859593121800e+02 > 511 0 KSP Residual norm 8.340827070202e+06 > 512 1 KSP Residual norm 7.980806572332e+05 > 513 2 KSP Residual norm 1.870896731234e+05 > 514 3 KSP Residual norm 6.790580947452e+04 > 515 4 KSP Residual norm 2.665552335248e+04 > 516 5 KSP Residual norm 1.130212349885e+04 > 517 6 KSP Residual norm 4.053599972292e+03 > 518 7 KSP Residual norm 1.786770710693e+03 > 519 8 KSP Residual norm 7.313571654931e+02 > 520 9 KSP Residual norm 3.205683714450e+02 > 521 10 KSP Residual norm 1.263243312734e+02 > 522 11 KSP Residual norm 3.945082815178e+01 > 523 1 SNES Function norm 9.378772067642e-02 > 524 0 KSP Residual norm 5.413489711800e+01 > 525 1 KSP Residual norm 1.442598710609e+01 > 526 2 KSP Residual norm 4.073537172140e+00 > 527 3 KSP Residual norm 1.157455598705e+00 > 528 4 KSP Residual norm 3.509855901968e-01 > 529 5 KSP Residual norm 1.160625342728e-01 > 530 6 KSP Residual norm 3.209351890216e-02 > 531 7 KSP Residual norm 7.780869881329e-03 > 532 8 KSP Residual norm 1.820828886636e-03 > 533 9 KSP Residual norm 4.172544590190e-04 > 534 2 SNES Function norm 6.747963806680e-07 > 535 Number of KSP iteration is 9 > 536 SNES solve takes time 406.724867261176314 > > But with bcgs and bjacobi, it tooks about 550 KSP iterations ( 2 snes > iterations ) and 69 seconds to get the result. > > For much smaller problems, benefiting from the fast convergence, it > did takes less time to get the result. It seems that the ml can not be > scaled, or I used it in a wrong way. > > Best Regards, > Kai > > > > Message: 2 > > Date: Sat, 21 Jul 2012 08:47:10 -0500 > > From: Matthew Knepley > > To: PETSc users list > > Subject: Re: [petsc-users] How to investigate the reason for slow > > convergence rate? > > Message-ID: > > < > CAMYG4GkfV6kmTFEKXUadOv+2CrKHk9hRY7UR-cFWf+vcRxCv5g at mail.gmail.com> > > Content-Type: text/plain; charset="iso-8859-1" > > > > On Sat, Jul 21, 2012 at 4:30 AM, Bao Kai wrote: > > > >> > >>> HI, all, > >> > >> I am still suffering from the slow convergence rate of the KSP solution. > >> > >> I changed the code to use Petsc3.3 and then try the gamg precoditioner, > >> the convergence rate is better, while it took more total time because it > >> took much more time for each iteration and some extra time for > >> pre-processing. > >> > > > > Try ML to see if it has better defaults for your problem. If not, you > will > > have to start experimenting with the solver > > parameters. > > > > > >> I am wondering if there are some ways that can help me to investigate > the > >> slow convergence rate for KSP solution so that I can do some > improvement. > >> Is DMMG will be a good solution? > >> > > > > No. > > > > Matt > > > > > >> Thank you very much. > >> > >> Best Regards, > >> Kai > >> > >>> > >>> Message: 2 > >>> Date: Wed, 11 Jul 2012 15:17:15 -0500 > >>> From: Matthew Knepley > >>> To: PETSc users list > >>> Subject: Re: [petsc-users] Does this mean the matrix is > >>> ill-conditioned? 
> >>> Message-ID: > >>> >>> q+w1PKO7G_TW07iDzux90Sncbv9K7d0FD-MDrLRg at mail.gmail.com> > >>> Content-Type: text/plain; charset="iso-8859-1" > >>> > >>> On Wed, Jul 11, 2012 at 12:40 PM, Bao Kai wrote: > >>> > >>> > Hi, all, > >>> > > >>> > The following is the output from the solution of a Poisson equation > >>> > from Darcy's law. > >>> > > >>> > To compute the condition number of matrix, I did not use PC and use > >>> > GMRES KSP to do the test. > >>> > > >>> > It seems like that the condition number keep increasing during the > >>> > iterative solution. Does this mean the matrix is ill-conditioned? > >>> > > >>> > >>> Generally yes. Krylov methods take a long time to resolve the smallest > >>> eigenvalues, so this approximation is not great. > >>> > >>> > >>> > For this test, it did not achieve convergence with 10000 iterations. > >>> > > >>> > When I use BJOCABI PC and BICGSTAB KSP, it generally takes about 600 > >>> > times iteration to get the iteration convergent. > >>> > > >>> > Any suggestion for improving the convergence rate will be much > >>> > appreciated. The solution of this equation has been the bottleneck > of > >>> > my code, it takes more than 90% of the total time. > >>> > > >>> > >>> Try ML or GAMG. > >>> > >>> Matt > >>> > >>> > >>> > Thank you very much. > >>> > > >>> > Best Regards, > >>> > Kai > >>> > > >>> > >>> > >>> > > > > > > -- > > What most experimenters take for granted before they begin their > > experiments is infinitely more interesting than any results to which > their > > experiments lead. > > -- Norbert Wiener > > -------------- next part -------------- > > An HTML attachment was scrubbed... > > URL: > > < > http://lists.mcs.anl.gov/pipermail/petsc-users/attachments/20120721/eaf7b2ee/attachment-0001.html > > > > > > ------------------------------ > > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From paeanball at gmail.com Sun Jul 22 09:22:59 2012 From: paeanball at gmail.com (Bao Kai) Date: Sun, 22 Jul 2012 17:22:59 +0300 Subject: [petsc-users] How to investigate the reason for slow convergence rate? Message-ID: Hi, Jed, The following is the output. Two equations are solved during each time steps. One is using bicgstab is already very fast, the other one is using gmres+ml. The log_summary output can found in the end of the output. Best Regards, Kai > Message: 6 > Date: Sun, 22 Jul 2012 07:16:07 -0500 > From: Jed Brown > To: PETSc users list > Subject: Re: [petsc-users] How to investigate the reason for slow > convergence rate? > Message-ID: > > Content-Type: text/plain; charset="utf-8" > > Please send -log_summary output with performance questions. > > On Sun, Jul 22, 2012 at 4:01 AM, Bao Kai wrote: > >> Hi, Matt, >> >> I tried ML6.2 with petsc3.3 with default parameters by only specifying >> the preconditioner to be PCML. The KSP solver is gmres. >> >> The convergence rate is much faster, while it still took much longer >> time in total. >> >> For example, for the problem with 500^3 mesh ( 125 million unknowns ) >> with 512nodes(4 processors per node) on bluegene/P , it tooks about >> ten iterations to get convergent while the total time used is about >> 400 seconds. 
>> >> 506 the KSP type is gmres >> 507 the PC type is ml >> 508 KSP rtol = 0.100000000000000008E-04 abstol = >> 0.100000000000000001E-49 dtol = 10000.0000000000000 maxit = 10000 >> 509 SNES rtol = 0.100000000000000002E-07 abstol = >> 0.100000000000000001E-49 stol = 0.100000000000000002E-07 maxit = >> 50 maxf= 10000 >> 510 0 SNES Function norm 5.859593121800e+02 >> 511 0 KSP Residual norm 8.340827070202e+06 >> 512 1 KSP Residual norm 7.980806572332e+05 >> 513 2 KSP Residual norm 1.870896731234e+05 >> 514 3 KSP Residual norm 6.790580947452e+04 >> 515 4 KSP Residual norm 2.665552335248e+04 >> 516 5 KSP Residual norm 1.130212349885e+04 >> 517 6 KSP Residual norm 4.053599972292e+03 >> 518 7 KSP Residual norm 1.786770710693e+03 >> 519 8 KSP Residual norm 7.313571654931e+02 >> 520 9 KSP Residual norm 3.205683714450e+02 >> 521 10 KSP Residual norm 1.263243312734e+02 >> 522 11 KSP Residual norm 3.945082815178e+01 >> 523 1 SNES Function norm 9.378772067642e-02 >> 524 0 KSP Residual norm 5.413489711800e+01 >> 525 1 KSP Residual norm 1.442598710609e+01 >> 526 2 KSP Residual norm 4.073537172140e+00 >> 527 3 KSP Residual norm 1.157455598705e+00 >> 528 4 KSP Residual norm 3.509855901968e-01 >> 529 5 KSP Residual norm 1.160625342728e-01 >> 530 6 KSP Residual norm 3.209351890216e-02 >> 531 7 KSP Residual norm 7.780869881329e-03 >> 532 8 KSP Residual norm 1.820828886636e-03 >> 533 9 KSP Residual norm 4.172544590190e-04 >> 534 2 SNES Function norm 6.747963806680e-07 >> 535 Number of KSP iteration is 9 >> 536 SNES solve takes time 406.724867261176314 >> >> But with bcgs and bjacobi, it tooks about 550 KSP iterations ( 2 snes >> iterations ) and 69 seconds to get the result. >> >> For much smaller problems, benefiting from the fast convergence, it >> did takes less time to get the result. It seems that the ml can not be >> scaled, or I used it in a wrong way. >> >> Best Regards, >> Kai >> >> >> > Message: 2 >> > Date: Sat, 21 Jul 2012 08:47:10 -0500 >> > From: Matthew Knepley >> > To: PETSc users list >> > Subject: Re: [petsc-users] How to investigate the reason for slow >> > convergence rate? >> > Message-ID: >> > < >> CAMYG4GkfV6kmTFEKXUadOv+2CrKHk9hRY7UR-cFWf+vcRxCv5g at mail.gmail.com> >> > Content-Type: text/plain; charset="iso-8859-1" >> > >> > On Sat, Jul 21, 2012 at 4:30 AM, Bao Kai wrote: >> > >> >> >> >>> HI, all, >> >> >> >> I am still suffering from the slow convergence rate of the KSP >> >> solution. >> >> >> >> I changed the code to use Petsc3.3 and then try the gamg >> >> precoditioner, >> >> the convergence rate is better, while it took more total time because >> >> it >> >> took much more time for each iteration and some extra time for >> >> pre-processing. >> >> >> > >> > Try ML to see if it has better defaults for your problem. If not, you >> will >> > have to start experimenting with the solver >> > parameters. >> > >> > >> >> I am wondering if there are some ways that can help me to investigate >> the >> >> slow convergence rate for KSP solution so that I can do some >> improvement. >> >> Is DMMG will be a good solution? >> >> >> > >> > No. >> > >> > Matt >> > >> > >> >> Thank you very much. >> >> >> >> Best Regards, >> >> Kai >> >> >> >>> >> >>> Message: 2 >> >>> Date: Wed, 11 Jul 2012 15:17:15 -0500 >> >>> From: Matthew Knepley >> >>> To: PETSc users list >> >>> Subject: Re: [petsc-users] Does this mean the matrix is >> >>> ill-conditioned? 
>> >>> Message-ID: >> >>> > >>> q+w1PKO7G_TW07iDzux90Sncbv9K7d0FD-MDrLRg at mail.gmail.com> >> >>> Content-Type: text/plain; charset="iso-8859-1" >> >>> >> >>> On Wed, Jul 11, 2012 at 12:40 PM, Bao Kai >> >>> wrote: >> >>> >> >>> > Hi, all, >> >>> > >> >>> > The following is the output from the solution of a Poisson equation >> >>> > from Darcy's law. >> >>> > >> >>> > To compute the condition number of matrix, I did not use PC and use >> >>> > GMRES KSP to do the test. >> >>> > >> >>> > It seems like that the condition number keep increasing during the >> >>> > iterative solution. Does this mean the matrix is ill-conditioned? >> >>> > >> >>> >> >>> Generally yes. Krylov methods take a long time to resolve the >> >>> smallest >> >>> eigenvalues, so this approximation is not great. >> >>> >> >>> >> >>> > For this test, it did not achieve convergence with 10000 >> >>> > iterations. >> >>> > >> >>> > When I use BJOCABI PC and BICGSTAB KSP, it generally takes about >> >>> > 600 >> >>> > times iteration to get the iteration convergent. >> >>> > >> >>> > Any suggestion for improving the convergence rate will be much >> >>> > appreciated. The solution of this equation has been the bottleneck >> of >> >>> > my code, it takes more than 90% of the total time. >> >>> > >> >>> >> >>> Try ML or GAMG. >> >>> >> >>> Matt >> >>> >> >>> >> >>> > Thank you very much. >> >>> > >> >>> > Best Regards, >> >>> > Kai >> >>> > >> >>> >> >>> >> >>> >> > >> > >> > -- >> > What most experimenters take for granted before they begin their >> > experiments is infinitely more interesting than any results to which >> their >> > experiments lead. >> > -- Norbert Wiener >> > -------------- next part -------------- >> > An HTML attachment was scrubbed... >> > URL: >> > < >> http://lists.mcs.anl.gov/pipermail/petsc-users/attachments/20120721/eaf7b2ee/attachment-0001.html >> > >> > >> > ------------------------------ >> > >> > -------------- next part -------------- > An HTML attachment was scrubbed... > URL: > > > ------------------------------ > > _______________________________________________ > petsc-users mailing list > petsc-users at mcs.anl.gov > https://lists.mcs.anl.gov/mailman/listinfo/petsc-users > > > End of petsc-users Digest, Vol 43, Issue 61 > ******************************************* > -------------- next part -------------- RST runs on 2048 processors. Calculating the coloring takes time 76.9829076376470596 Start the simulation of RST_advection on a 3D rectanguluar mesh of size 500x500x500. call solve_CO2(conc) ... 
solve_CO2 starts iT = 1 (tCurr= 0.000000000000000000E+00 , tNext= 360.000000000000000 ) using backwardEuler the memory used initially in SolveNE_option is 780578816.000000000 in 0 th processor Calculating the Jacobian matrix using multi-coloring finite difference Calculating the Jacobian matrix using multi-coloring finite difference, end Calculating the Jacobian matrix takes time 0.362482352898041427E-04 the KSP type is bcgs the PC type is bjacobi KSP rtol = 0.100000000000000008E-04 abstol = 0.100000000000000001E-49 dtol = 10000.0000000000000 maxit = 10000 SNES rtol = 0.100000000000000002E-07 abstol = 0.100000000000000001E-49 stol = 0.100000000000000002E-07 maxit = 50 maxf= 10000 0 SNES Function norm 2.027142332274e+05 0 KSP Residual norm 1.329836410984e+05 1 KSP Residual norm 2.003138625439e+02 2 KSP Residual norm 1.080655908195e+00 1 SNES Function norm 2.603715471664e+00 0 KSP Residual norm 1.821752606793e+00 1 KSP Residual norm 2.261877896690e-02 2 KSP Residual norm 5.689587885989e-04 3 KSP Residual norm 7.904719695607e-06 2 SNES Function norm 1.303262560403e-05 Number of KSP iteration is 3 SNES solve takes time 6.39838860470588600 Number of Newton iterations = 2 the SNES Converged Reason is 3 the memory used finally in SolveNE_option is 928034816.000000000 in 0 th processor solve_NE for concentration equations took time 6.40435670941175772 the memory used initially in SolveNE_option is 928034816.000000000 in 0 th processor Calculating the Jacobian matrix using multi-coloring finite difference Calculating the Jacobian matrix using multi-coloring finite difference, end Calculating the Jacobian matrix takes time 0.465729411871507182E-04 the KSP type is gmres the PC type is ml KSP rtol = 0.100000000000000008E-04 abstol = 0.100000000000000001E-49 dtol = 10000.0000000000000 maxit = 10000 SNES rtol = 0.100000000000000002E-07 abstol = 0.100000000000000001E-49 stol = 0.100000000000000002E-07 maxit = 50 maxf= 10000 0 SNES Function norm 6.854305936118e+02 0 KSP Residual norm 8.373098135096e+06 1 KSP Residual norm 7.298874883063e+05 2 KSP Residual norm 1.312057861504e+05 3 KSP Residual norm 2.940040817676e+04 4 KSP Residual norm 5.550730454363e+03 5 KSP Residual norm 9.403163430181e+02 6 KSP Residual norm 1.989882957671e+02 7 KSP Residual norm 3.776734457069e+01 1 SNES Function norm 1.128845203882e-01 0 KSP Residual norm 3.642500285026e+01 1 KSP Residual norm 1.088628928189e+01 2 KSP Residual norm 2.261263906033e+00 3 KSP Residual norm 5.225048266035e-01 4 KSP Residual norm 1.744244938506e-01 5 KSP Residual norm 5.392736524647e-02 6 KSP Residual norm 1.561840513585e-02 7 KSP Residual norm 3.216915891215e-03 8 KSP Residual norm 8.352256776033e-04 9 KSP Residual norm 2.616795560745e-04 2 SNES Function norm 4.222128984457e-07 Number of KSP iteration is 9 SNES solve takes time 396.688963249411700 Number of Newton iterations = 2 the SNES Converged Reason is 3 the memory used finally in SolveNE_option is 1108242432.00000000 in 0 th processor solve_NE for flow equations took time 396.693956828235287 solve_CO2 starts iT = 2 (tCurr= 360.000000000000000 , tNext= 720.000000000000000 ) using backwardEuler the memory used initially in SolveNE_option is 1108242432.00000000 in 0 th processor Calculating the Jacobian matrix using multi-coloring finite difference Calculating the Jacobian matrix using multi-coloring finite difference, end Calculating the Jacobian matrix takes time 0.738905881689788657E-04 the KSP type is bcgs the PC type is bjacobi KSP rtol = 0.100000000000000008E-04 abstol = 
0.100000000000000001E-49 dtol = 10000.0000000000000 maxit = 10000 SNES rtol = 0.100000000000000002E-07 abstol = 0.100000000000000001E-49 stol = 0.100000000000000002E-07 maxit = 50 maxf= 10000 0 SNES Function norm 1.744975811532e+07 0 KSP Residual norm 6.042616216530e+06 1 KSP Residual norm 2.411229464574e+06 2 KSP Residual norm 9.384253591517e+05 3 KSP Residual norm 8.172654250461e+05 4 KSP Residual norm 3.278806775039e+05 5 KSP Residual norm 1.892646740749e+05 6 KSP Residual norm 1.688085534614e+05 7 KSP Residual norm 1.070929112445e+05 8 KSP Residual norm 8.894975127847e+04 9 KSP Residual norm 8.439037109644e+04 10 KSP Residual norm 2.210751081503e+05 11 KSP Residual norm 8.018906048020e+05 12 KSP Residual norm 2.463740463795e+05 13 KSP Residual norm 4.701709773936e+05 14 KSP Residual norm 2.572534693760e+05 15 KSP Residual norm 1.400576861591e+06 16 KSP Residual norm 6.843908357621e+04 17 KSP Residual norm 6.835423406956e+04 18 KSP Residual norm 3.306744876360e+05 19 KSP Residual norm 1.274200720122e+05 20 KSP Residual norm 2.864430197567e+05 21 KSP Residual norm 6.303052018354e+04 22 KSP Residual norm 1.040766852733e+04 23 KSP Residual norm 1.403019031174e+04 24 KSP Residual norm 8.304296941412e+03 25 KSP Residual norm 5.063990038197e+03 26 KSP Residual norm 2.072491899897e+04 27 KSP Residual norm 2.833725405550e+04 28 KSP Residual norm 6.025326846882e+02 29 KSP Residual norm 2.012835940363e+02 30 KSP Residual norm 9.871182569019e+01 31 KSP Residual norm 7.994364942410e+01 32 KSP Residual norm 8.722846666516e+01 33 KSP Residual norm 3.278037987095e+01 1 SNES Function norm 5.359315696689e+02 0 KSP Residual norm 3.278037984348e+01 1 KSP Residual norm 5.111266391765e+01 2 KSP Residual norm 1.548643476480e+01 3 KSP Residual norm 1.155495193877e+01 4 KSP Residual norm 7.125481575618e+00 5 KSP Residual norm 9.024848561174e+00 6 KSP Residual norm 2.257573584187e+00 7 KSP Residual norm 5.418784129233e-01 8 KSP Residual norm 3.719657020252e-01 9 KSP Residual norm 1.205608997222e-01 10 KSP Residual norm 3.615255985887e+00 11 KSP Residual norm 2.894677555372e-02 12 KSP Residual norm 1.376453957581e-03 13 KSP Residual norm 4.166968255993e-04 14 KSP Residual norm 4.913433524474e-04 15 KSP Residual norm 9.839518905020e-05 2 SNES Function norm 2.253731138673e-03 Number of KSP iteration is 15 SNES solve takes time 11.2357833341176274 Number of Newton iterations = 2 the SNES Converged Reason is 3 the memory used finally in SolveNE_option is 1092841472.00000000 in 0 th processor solve_NE for concentration equations took time 11.2626100023528579 the memory used initially in SolveNE_option is 1092841472.00000000 in 0 th processor Calculating the Jacobian matrix using multi-coloring finite difference Calculating the Jacobian matrix using multi-coloring finite difference, end Calculating the Jacobian matrix takes time 0.392435293861126411E-04 the KSP type is gmres the PC type is ml KSP rtol = 0.100000000000000008E-04 abstol = 0.100000000000000001E-49 dtol = 10000.0000000000000 maxit = 10000 SNES rtol = 0.100000000000000002E-07 abstol = 0.100000000000000001E-49 stol = 0.100000000000000002E-07 maxit = 50 maxf= 10000 0 SNES Function norm 6.168715721071e+02 0 KSP Residual norm 7.978767835480e+06 1 KSP Residual norm 8.728618105505e+05 2 KSP Residual norm 4.558368047941e+05 3 KSP Residual norm 4.346984519871e+05 4 KSP Residual norm 2.286709358986e+05 5 KSP Residual norm 1.332498617875e+05 6 KSP Residual norm 1.264578916032e+05 7 KSP Residual norm 1.058066886132e+05 8 KSP Residual norm 4.688543293468e+04 9 KSP 
Residual norm 1.706033347392e+04 10 KSP Residual norm 7.798848104292e+03 11 KSP Residual norm 5.313346550965e+03 12 KSP Residual norm 4.267989536057e+03 13 KSP Residual norm 4.104126803381e+03 14 KSP Residual norm 4.082103088618e+03 15 KSP Residual norm 3.730866428917e+03 16 KSP Residual norm 2.662009656165e+03 17 KSP Residual norm 1.507688407744e+03 18 KSP Residual norm 1.117001235096e+03 19 KSP Residual norm 1.019680471256e+03 20 KSP Residual norm 1.008960551221e+03 21 KSP Residual norm 1.007162979788e+03 22 KSP Residual norm 9.787062543248e+02 23 KSP Residual norm 8.700126671308e+02 24 KSP Residual norm 5.076825476614e+02 25 KSP Residual norm 1.697222557097e+02 26 KSP Residual norm 5.606185528901e+01 1 SNES Function norm 1.303677605733e-01 0 KSP Residual norm 1.275643534800e+02 1 KSP Residual norm 1.755473787471e+01 2 KSP Residual norm 4.019549783414e+00 3 KSP Residual norm 7.944360565021e-01 4 KSP Residual norm 1.499573425904e-01 5 KSP Residual norm 2.452264940057e-02 6 KSP Residual norm 5.066981841293e-03 7 KSP Residual norm 9.904807389870e-04 2 SNES Function norm 3.389130418502e-06 Number of KSP iteration is 7 SNES solve takes time 433.673481138823490 Number of Newton iterations = 2 the SNES Converged Reason is 3 the memory used finally in SolveNE_option is 1161392128.00000000 in 0 th processor solve_NE for flow equations took time 433.678236383529452 solve_CO2 starts iT = 3 (tCurr= 720.000000000000000 , tNext= 1080.00000000000000 ) using backwardEuler the memory used initially in SolveNE_option is 1161392128.00000000 in 0 th processor Calculating the Jacobian matrix using multi-coloring finite difference Calculating the Jacobian matrix using multi-coloring finite difference, end Calculating the Jacobian matrix takes time 0.611341175726920483E-04 the KSP type is bcgs the PC type is bjacobi KSP rtol = 0.100000000000000008E-04 abstol = 0.100000000000000001E-49 dtol = 10000.0000000000000 maxit = 10000 SNES rtol = 0.100000000000000002E-07 abstol = 0.100000000000000001E-49 stol = 0.100000000000000002E-07 maxit = 50 maxf= 10000 0 SNES Function norm 1.746230911748e+07 0 KSP Residual norm 6.056016610516e+06 1 KSP Residual norm 2.410588606648e+06 2 KSP Residual norm 9.456442885758e+05 3 KSP Residual norm 8.094818865575e+05 4 KSP Residual norm 3.287134485841e+05 5 KSP Residual norm 1.896118306005e+05 6 KSP Residual norm 1.754760017003e+05 7 KSP Residual norm 1.074211675332e+05 8 KSP Residual norm 8.892934191703e+04 9 KSP Residual norm 8.425895560687e+04 10 KSP Residual norm 2.210542597785e+05 11 KSP Residual norm 7.832399341096e+05 12 KSP Residual norm 2.473533515058e+05 13 KSP Residual norm 4.614685234415e+05 14 KSP Residual norm 2.551240693195e+05 15 KSP Residual norm 1.373194764628e+06 16 KSP Residual norm 6.869917844180e+04 17 KSP Residual norm 6.827906464339e+04 18 KSP Residual norm 3.315401022229e+05 19 KSP Residual norm 1.282764833555e+05 20 KSP Residual norm 2.897981334419e+05 21 KSP Residual norm 6.308627435161e+04 22 KSP Residual norm 1.045231001236e+04 23 KSP Residual norm 1.491143524048e+04 24 KSP Residual norm 8.656791195722e+03 25 KSP Residual norm 5.045336356514e+03 26 KSP Residual norm 2.166365912996e+04 27 KSP Residual norm 3.094664062274e+04 28 KSP Residual norm 6.117301214046e+02 29 KSP Residual norm 2.012060974502e+02 30 KSP Residual norm 9.900282227487e+01 31 KSP Residual norm 8.011585552510e+01 32 KSP Residual norm 8.686592920308e+01 33 KSP Residual norm 3.278618019592e+01 1 SNES Function norm 5.356912110054e+02 0 KSP Residual norm 3.278618016861e+01 1 KSP Residual norm 
5.118173917522e+01 2 KSP Residual norm 1.548726082616e+01 3 KSP Residual norm 1.156812789566e+01 4 KSP Residual norm 7.149486136715e+00 5 KSP Residual norm 7.980447622207e+00 6 KSP Residual norm 2.184234814452e+00 7 KSP Residual norm 5.474499778219e-01 8 KSP Residual norm 3.733409066648e-01 9 KSP Residual norm 1.199274608307e-01 10 KSP Residual norm 3.099768217390e-01 11 KSP Residual norm 2.932626926870e-02 12 KSP Residual norm 1.398491373209e-03 13 KSP Residual norm 4.172815233796e-04 14 KSP Residual norm 6.539192013676e-04 15 KSP Residual norm 9.719451000433e-05 2 SNES Function norm 2.224809336173e-03 Number of KSP iteration is 15 SNES solve takes time 11.2320685176470079 Number of Newton iterations = 2 the SNES Converged Reason is 3 the memory used finally in SolveNE_option is 1145991168.00000000 in 0 th processor solve_NE for concentration equations took time 11.2598186258823034 the memory used initially in SolveNE_option is 1145991168.00000000 in 0 th processor Calculating the Jacobian matrix using multi-coloring finite difference Calculating the Jacobian matrix using multi-coloring finite difference, end Calculating the Jacobian matrix takes time 0.531258823457392282E-04 the KSP type is gmres the PC type is ml KSP rtol = 0.100000000000000008E-04 abstol = 0.100000000000000001E-49 dtol = 10000.0000000000000 maxit = 10000 SNES rtol = 0.100000000000000002E-07 abstol = 0.100000000000000001E-49 stol = 0.100000000000000002E-07 maxit = 50 maxf= 10000 0 SNES Function norm 5.906457664566e+02 0 KSP Residual norm 7.912814686693e+06 1 KSP Residual norm 9.645782297768e+05 2 KSP Residual norm 8.861064046254e+05 3 KSP Residual norm 4.897630000578e+05 4 KSP Residual norm 3.166880917820e+05 5 KSP Residual norm 3.069992066317e+05 6 KSP Residual norm 1.923010666939e+05 7 KSP Residual norm 9.534057223926e+04 8 KSP Residual norm 5.617225030104e+04 9 KSP Residual norm 5.000967170570e+04 10 KSP Residual norm 4.988510808219e+04 11 KSP Residual norm 4.377142165464e+04 12 KSP Residual norm 3.537339493753e+04 13 KSP Residual norm 3.444119180522e+04 14 KSP Residual norm 3.140632281962e+04 15 KSP Residual norm 2.387794289515e+04 16 KSP Residual norm 2.029305730532e+04 17 KSP Residual norm 1.944823332521e+04 18 KSP Residual norm 1.926082773332e+04 19 KSP Residual norm 1.604520680206e+04 20 KSP Residual norm 1.083092031672e+04 21 KSP Residual norm 5.980890172430e+03 22 KSP Residual norm 4.230302343358e+03 23 KSP Residual norm 3.473716174483e+03 24 KSP Residual norm 2.539150935402e+03 25 KSP Residual norm 1.225826890173e+03 26 KSP Residual norm 3.869466816538e+02 27 KSP Residual norm 1.166013359601e+02 28 KSP Residual norm 3.948921651246e+01 1 SNES Function norm 9.374214140858e-02 0 KSP Residual norm 4.314990451734e+01 1 KSP Residual norm 3.515946386829e+01 2 KSP Residual norm 2.695621438095e+01 3 KSP Residual norm 1.390235007360e+01 4 KSP Residual norm 1.386211256871e+01 5 KSP Residual norm 7.834323161908e+00 6 KSP Residual norm 5.517283406603e+00 7 KSP Residual norm 5.426656354145e+00 8 KSP Residual norm 3.458546275824e+00 9 KSP Residual norm 2.340650733306e+00 10 KSP Residual norm 2.308443591276e+00 11 KSP Residual norm 1.978019536730e+00 12 KSP Residual norm 1.751838848414e+00 13 KSP Residual norm 1.748423642827e+00 14 KSP Residual norm 1.558995531050e+00 15 KSP Residual norm 1.434334333026e+00 16 KSP Residual norm 1.429581841785e+00 17 KSP Residual norm 1.234896777884e+00 18 KSP Residual norm 1.113705072932e+00 19 KSP Residual norm 1.113002183375e+00 20 KSP Residual norm 1.000636547921e+00 21 KSP Residual norm 
7.666212672065e-01 22 KSP Residual norm 4.954697001390e-01 23 KSP Residual norm 3.763741385658e-01 24 KSP Residual norm 3.061802852706e-01 25 KSP Residual norm 2.172246927676e-01 26 KSP Residual norm 1.263551553515e-01 27 KSP Residual norm 8.065653452946e-02 28 KSP Residual norm 6.666958151528e-02 29 KSP Residual norm 5.969554338215e-02 30 KSP Residual norm 4.701119423782e-02 31 KSP Residual norm 3.880526060321e-02 32 KSP Residual norm 3.550352775410e-02 33 KSP Residual norm 3.518508431556e-02 34 KSP Residual norm 3.365882337676e-02 35 KSP Residual norm 2.990974854198e-02 36 KSP Residual norm 2.960135454716e-02 37 KSP Residual norm 2.642485998975e-02 38 KSP Residual norm 2.207970762270e-02 39 KSP Residual norm 2.207627751574e-02 40 KSP Residual norm 1.939485668126e-02 41 KSP Residual norm 1.826512902599e-02 42 KSP Residual norm 1.776217373211e-02 43 KSP Residual norm 1.494313801753e-02 44 KSP Residual norm 1.491723533996e-02 45 KSP Residual norm 1.294925018292e-02 46 KSP Residual norm 1.031091165182e-02 47 KSP Residual norm 9.274721439823e-03 48 KSP Residual norm 9.199144088117e-03 49 KSP Residual norm 7.380758295213e-03 50 KSP Residual norm 5.312928096668e-03 51 KSP Residual norm 3.590846812049e-03 52 KSP Residual norm 3.453458627238e-03 53 KSP Residual norm 2.682574287054e-03 54 KSP Residual norm 1.535180831843e-03 55 KSP Residual norm 1.124868394925e-03 56 KSP Residual norm 9.042295611229e-04 57 KSP Residual norm 6.864456305576e-04 58 KSP Residual norm 4.500332684970e-04 59 KSP Residual norm 2.315845183804e-04 2 SNES Function norm 5.750728763041e-07 Number of KSP iteration is 59 SNES solve takes time 577.862047618823453 Number of Newton iterations = 2 the SNES Converged Reason is 3 the memory used finally in SolveNE_option is 1161392128.00000000 in 0 th processor solve_NE for flow equations took time 577.867189442352810 solve_CO2 starts iT = 4 (tCurr= 1080.00000000000000 , tNext= 1440.00000000000000 ) using backwardEuler the memory used initially in SolveNE_option is 1161392128.00000000 in 0 th processor Calculating the Jacobian matrix using multi-coloring finite difference Calculating the Jacobian matrix using multi-coloring finite difference, end Calculating the Jacobian matrix takes time 0.689435294134455035E-04 the KSP type is bcgs the PC type is bjacobi KSP rtol = 0.100000000000000008E-04 abstol = 0.100000000000000001E-49 dtol = 10000.0000000000000 maxit = 10000 SNES rtol = 0.100000000000000002E-07 abstol = 0.100000000000000001E-49 stol = 0.100000000000000002E-07 maxit = 50 maxf= 10000 0 SNES Function norm 1.747595578968e+07 0 KSP Residual norm 6.070738647764e+06 1 KSP Residual norm 2.410071228327e+06 2 KSP Residual norm 9.549253739214e+05 3 KSP Residual norm 7.981187668861e+05 4 KSP Residual norm 3.291532237393e+05 5 KSP Residual norm 1.900364740271e+05 6 KSP Residual norm 1.808298981550e+05 7 KSP Residual norm 1.077099555482e+05 8 KSP Residual norm 8.894381121663e+04 9 KSP Residual norm 8.417853266785e+04 10 KSP Residual norm 2.201981533464e+05 11 KSP Residual norm 7.709268659791e+05 12 KSP Residual norm 2.479014722435e+05 13 KSP Residual norm 4.666147266566e+05 14 KSP Residual norm 2.558241680066e+05 15 KSP Residual norm 1.373629431710e+06 16 KSP Residual norm 6.911910203172e+04 17 KSP Residual norm 6.842401632722e+04 18 KSP Residual norm 3.306049474142e+05 19 KSP Residual norm 1.283124304691e+05 20 KSP Residual norm 2.909356471676e+05 21 KSP Residual norm 6.322781226246e+04 22 KSP Residual norm 1.055406528283e+04 23 KSP Residual norm 1.769757469826e+04 24 KSP Residual norm 
9.622138538562e+03 25 KSP Residual norm 5.002105247328e+03 26 KSP Residual norm 2.226851519086e+04 27 KSP Residual norm 3.333932229927e+04 28 KSP Residual norm 6.154801922084e+02 29 KSP Residual norm 2.017271873264e+02 30 KSP Residual norm 9.927125290556e+01 31 KSP Residual norm 8.026861874655e+01 32 KSP Residual norm 8.676016243921e+01 33 KSP Residual norm 3.299156885807e+01 1 SNES Function norm 5.386668000559e+02 0 KSP Residual norm 3.299156885877e+01 1 KSP Residual norm 5.156932987442e+01 2 KSP Residual norm 1.559190093665e+01 3 KSP Residual norm 1.166013674890e+01 4 KSP Residual norm 7.234093208644e+00 5 KSP Residual norm 7.150545499500e+00 6 KSP Residual norm 2.129440698767e+00 7 KSP Residual norm 5.576694756879e-01 8 KSP Residual norm 3.788441553969e-01 9 KSP Residual norm 1.193654283561e-01 10 KSP Residual norm 1.595986614454e-01 11 KSP Residual norm 2.993377362791e-02 12 KSP Residual norm 1.432762216316e-03 13 KSP Residual norm 4.171238217969e-04 14 KSP Residual norm 9.270694825671e-04 15 KSP Residual norm 9.451948560740e-05 2 SNES Function norm 2.164055274017e-03 Number of KSP iteration is 15 SNES solve takes time 11.2357321717647665 Number of Newton iterations = 2 the SNES Converged Reason is 3 the memory used finally in SolveNE_option is 1145991168.00000000 in 0 th processor solve_NE for concentration equations took time 11.2634957200000372 the memory used initially in SolveNE_option is 1145991168.00000000 in 0 th processor Calculating the Jacobian matrix using multi-coloring finite difference Calculating the Jacobian matrix using multi-coloring finite difference, end Calculating the Jacobian matrix takes time 0.580552941755740903E-04 the KSP type is gmres the PC type is ml KSP rtol = 0.100000000000000008E-04 abstol = 0.100000000000000001E-49 dtol = 10000.0000000000000 maxit = 10000 SNES rtol = 0.100000000000000002E-07 abstol = 0.100000000000000001E-49 stol = 0.100000000000000002E-07 maxit = 50 maxf= 10000 0 SNES Function norm 5.637544127911e+02 0 KSP Residual norm 8.074132757094e+06 1 KSP Residual norm 8.162289267400e+05 2 KSP Residual norm 2.485691052163e+05 3 KSP Residual norm 1.694979473006e+05 4 KSP Residual norm 1.692597553909e+05 5 KSP Residual norm 1.141369714739e+05 6 KSP Residual norm 5.411385565863e+04 7 KSP Residual norm 2.907568800779e+04 8 KSP Residual norm 2.195670663729e+04 9 KSP Residual norm 1.987662933665e+04 10 KSP Residual norm 1.984128544322e+04 11 KSP Residual norm 1.896159506872e+04 12 KSP Residual norm 1.529241417387e+04 13 KSP Residual norm 1.159676508149e+04 14 KSP Residual norm 8.386589100387e+03 15 KSP Residual norm 6.545195108415e+03 16 KSP Residual norm 4.670526409709e+03 17 KSP Residual norm 2.616131111174e+03 18 KSP Residual norm 1.340432005158e+03 19 KSP Residual norm 6.930110261487e+02 20 KSP Residual norm 3.501481900693e+02 21 KSP Residual norm 1.624612621900e+02 22 KSP Residual norm 8.693046524046e+01 23 KSP Residual norm 6.266502671985e+01 1 SNES Function norm 6.161796268733e-02 0 KSP Residual norm 6.116148118636e+02 1 KSP Residual norm 5.230020822749e+01 2 KSP Residual norm 8.211943951986e+00 3 KSP Residual norm 1.666142209341e+00 4 KSP Residual norm 3.507550432364e-01 5 KSP Residual norm 6.165808645033e-02 6 KSP Residual norm 1.202676243967e-02 7 KSP Residual norm 2.481833379192e-03 2 SNES Function norm 8.769102970782e-06 0 KSP Residual norm 2.228753345052e-03 1 KSP Residual norm 7.086547861298e-04 2 KSP Residual norm 2.208481325117e-04 3 KSP Residual norm 1.451575632052e-04 4 KSP Residual norm 1.418274746815e-04 5 KSP Residual norm 
9.951896679242e-05 6 KSP Residual norm 6.710322416550e-05 7 KSP Residual norm 4.951304989493e-05 8 KSP Residual norm 4.517130650698e-05 9 KSP Residual norm 4.516237226608e-05 10 KSP Residual norm 4.204281863840e-05 11 KSP Residual norm 3.107532296876e-05 12 KSP Residual norm 2.213892288359e-05 13 KSP Residual norm 1.311130661770e-05 14 KSP Residual norm 7.469402767238e-06 15 KSP Residual norm 4.676669974107e-06 16 KSP Residual norm 3.240876159844e-06 17 KSP Residual norm 3.034954224512e-06 18 KSP Residual norm 3.032806716269e-06 19 KSP Residual norm 2.969785088371e-06 20 KSP Residual norm 2.538694624997e-06 21 KSP Residual norm 1.879982074346e-06 22 KSP Residual norm 1.255296471142e-06 23 KSP Residual norm 7.489364422265e-07 24 KSP Residual norm 5.322075248129e-07 25 KSP Residual norm 4.222735297804e-07 26 KSP Residual norm 3.111446552243e-07 27 KSP Residual norm 1.943375346943e-07 28 KSP Residual norm 1.065696516704e-07 29 KSP Residual norm 5.044714108566e-08 30 KSP Residual norm 2.083483747171e-08 3 SNES Function norm 5.372515919004e-11 Number of KSP iteration is 30 SNES solve takes time 690.774282409411626 Number of Newton iterations = 3 the SNES Converged Reason is 3 the memory used finally in SolveNE_option is 1161392128.00000000 in 0 th processor solve_NE for flow equations took time 690.779385967058715 solve_CO2 starts iT = 5 (tCurr= 1440.00000000000000 , tNext= 1800.00000000000000 ) using backwardEuler the memory used initially in SolveNE_option is 1161392128.00000000 in 0 th processor Calculating the Jacobian matrix using multi-coloring finite difference Calculating the Jacobian matrix using multi-coloring finite difference, end Calculating the Jacobian matrix takes time 0.581070585212728474E-04 the KSP type is bcgs the PC type is bjacobi KSP rtol = 0.100000000000000008E-04 abstol = 0.100000000000000001E-49 dtol = 10000.0000000000000 maxit = 10000 SNES rtol = 0.100000000000000002E-07 abstol = 0.100000000000000001E-49 stol = 0.100000000000000002E-07 maxit = 50 maxf= 10000 0 SNES Function norm 1.748870969739e+07 0 KSP Residual norm 6.086246201886e+06 1 KSP Residual norm 2.409389735394e+06 2 KSP Residual norm 9.622200928047e+05 3 KSP Residual norm 7.845100008720e+05 4 KSP Residual norm 3.296569920221e+05 5 KSP Residual norm 1.904589431018e+05 6 KSP Residual norm 1.844373287795e+05 7 KSP Residual norm 1.079285086282e+05 8 KSP Residual norm 8.891984486588e+04 9 KSP Residual norm 8.405179802744e+04 10 KSP Residual norm 2.194780141626e+05 11 KSP Residual norm 7.570843689971e+05 12 KSP Residual norm 2.482349011189e+05 13 KSP Residual norm 4.706092657967e+05 14 KSP Residual norm 2.563244095318e+05 15 KSP Residual norm 1.372950386269e+06 16 KSP Residual norm 6.944283147672e+04 17 KSP Residual norm 6.854632103265e+04 18 KSP Residual norm 3.267800586436e+05 19 KSP Residual norm 1.276852156441e+05 20 KSP Residual norm 2.929806690176e+05 21 KSP Residual norm 6.332255447748e+04 22 KSP Residual norm 1.065666927107e+04 23 KSP Residual norm 2.179427377587e+04 24 KSP Residual norm 1.090989269498e+04 25 KSP Residual norm 4.954750321830e+03 26 KSP Residual norm 2.290314834484e+04 27 KSP Residual norm 3.659049402840e+04 28 KSP Residual norm 6.198874437352e+02 29 KSP Residual norm 2.022064043710e+02 30 KSP Residual norm 9.950110754634e+01 31 KSP Residual norm 8.035889596931e+01 32 KSP Residual norm 8.653039748403e+01 33 KSP Residual norm 3.313695971119e+01 1 SNES Function norm 5.407716079365e+02 0 KSP Residual norm 3.313695972687e+01 1 KSP Residual norm 5.184734109949e+01 2 KSP Residual norm 
1.566968138836e+01 3 KSP Residual norm 1.172734056613e+01 4 KSP Residual norm 7.295499990009e+00 5 KSP Residual norm 6.644562948205e+00 6 KSP Residual norm 2.090067616380e+00 7 KSP Residual norm 5.650798874604e-01 8 KSP Residual norm 3.828248875064e-01 9 KSP Residual norm 1.188237451760e-01 10 KSP Residual norm 1.219837906646e-01 11 KSP Residual norm 3.044197530806e-02 12 KSP Residual norm 1.437855652708e-03 13 KSP Residual norm 4.252496839951e-04 14 KSP Residual norm 1.998111152480e-04 2 SNES Function norm 4.503925443016e-03 Number of KSP iteration is 14 SNES solve takes time 11.1125425270588494 Number of Newton iterations = 2 the SNES Converged Reason is 3 the memory used finally in SolveNE_option is 1145991168.00000000 in 0 th processor solve_NE for concentration equations took time 11.1403631105881686 the memory used initially in SolveNE_option is 1145991168.00000000 in 0 th processor Calculating the Jacobian matrix using multi-coloring finite difference Calculating the Jacobian matrix using multi-coloring finite difference, end Calculating the Jacobian matrix takes time 0.470929412585974205E-04 the KSP type is gmres the PC type is ml KSP rtol = 0.100000000000000008E-04 abstol = 0.100000000000000001E-49 dtol = 10000.0000000000000 maxit = 10000 SNES rtol = 0.100000000000000002E-07 abstol = 0.100000000000000001E-49 stol = 0.100000000000000002E-07 maxit = 50 maxf= 10000 0 SNES Function norm 5.419004356488e+02 0 KSP Residual norm 8.261643592794e+06 1 KSP Residual norm 7.386189854719e+05 2 KSP Residual norm 1.584330883846e+05 3 KSP Residual norm 3.364196291575e+04 4 KSP Residual norm 7.368389136171e+03 5 KSP Residual norm 1.543727655193e+03 6 KSP Residual norm 3.911584281708e+02 7 KSP Residual norm 7.263753586758e+01 1 SNES Function norm 2.079911111668e-01 0 KSP Residual norm 6.479513763967e+01 1 KSP Residual norm 2.048884277583e+01 2 KSP Residual norm 1.594847809383e+01 3 KSP Residual norm 1.364588964425e+01 4 KSP Residual norm 6.269501761207e+00 5 KSP Residual norm 4.035896337551e+00 6 KSP Residual norm 4.031872117187e+00 7 KSP Residual norm 2.768850078744e+00 8 KSP Residual norm 1.403725412629e+00 9 KSP Residual norm 7.626536014837e-01 10 KSP Residual norm 6.202328834215e-01 11 KSP Residual norm 5.350435338368e-01 12 KSP Residual norm 4.666283999150e-01 13 KSP Residual norm 3.994738205585e-01 14 KSP Residual norm 2.771523168855e-01 15 KSP Residual norm 2.055070828125e-01 16 KSP Residual norm 1.791698744783e-01 17 KSP Residual norm 1.461949045869e-01 18 KSP Residual norm 9.828913420095e-02 19 KSP Residual norm 5.776184245190e-02 20 KSP Residual norm 3.376848313794e-02 21 KSP Residual norm 2.191994279649e-02 22 KSP Residual norm 1.907722095018e-02 23 KSP Residual norm 1.857170395786e-02 24 KSP Residual norm 1.828402291612e-02 25 KSP Residual norm 1.788622756769e-02 26 KSP Residual norm 1.688091022039e-02 27 KSP Residual norm 1.340769361306e-02 28 KSP Residual norm 7.135059196478e-03 29 KSP Residual norm 2.447119579046e-03 30 KSP Residual norm 7.151815788560e-04 31 KSP Residual norm 2.848411909092e-04 2 SNES Function norm 8.677875468083e-07 Number of KSP iteration is 31 SNES solve takes time 447.630811501176595 Number of Newton iterations = 2 the SNES Converged Reason is 3 the memory used finally in SolveNE_option is 1161392128.00000000 in 0 th processor solve_NE for flow equations took time 447.635830009412075 The simulation takes time (no initialization and finalization) 2598.08699473647039 The time for the flow equation solving (SNES) is 2546.65459863058823 The time for the concentration 
equation solving (SNES) is 51.3306441682352670 The total time for SNES is 2597.98524279882349 solve_CO2(conc) done. ************************************************************************************************************************ *** WIDEN YOUR WINDOW TO 120 CHARACTERS. Use 'enscript -r -fCourier9' to print this document *** ************************************************************************************************************************ ---------------------------------------------- PETSc Performance Summary: ---------------------------------------------- ./rst on a arch-shah named ionode65 with 2048 processors, by Unknown Sun Jul 22 16:50:55 2012 Using Petsc Release Version 3.3.0, Patch 2, Fri Jul 13 15:42:00 CDT 2012 Max Max/Min Avg Total Time (sec): 2.675e+03 1.00001 2.675e+03 Objects: 4.818e+03 1.00000 4.818e+03 Flops: 1.466e+10 1.11589 1.377e+10 2.820e+13 Flops/sec: 5.478e+06 1.11589 5.148e+06 1.054e+10 MPI Messages: 2.549e+06 18.36763 1.762e+06 3.609e+09 MPI Message Lengths: 3.313e+08 2.11656 1.667e+02 6.017e+11 MPI Reductions: 5.812e+03 1.00000 Flop counting convention: 1 flop = 1 real number operation of type (multiply/divide/add/subtract) e.g., VecAXPY() for real vectors of length N --> 2N flops and VecAXPY() for complex vectors of length N --> 8N flops Summary of Stages: ----- Time ------ ----- Flops ----- --- Messages --- -- Message Lengths -- -- Reductions -- Avg %Total Avg %Total counts %Total Avg %Total counts %Total 0: Main Stage: 2.6754e+03 100.0% 2.8204e+13 100.0% 3.609e+09 100.0% 1.667e+02 100.0% 5.811e+03 100.0% ------------------------------------------------------------------------------------------------------------------------ See the 'Profiling' chapter of the users' manual for details on interpreting output. Phase summary info: Count: number of times phase was executed Time and Flops: Max - maximum over all processors Ratio - ratio of maximum to minimum over all processors Mess: number of messages sent Avg. len: average message length Reduct: number of global reductions Global: entire computation Stage: stages of a computation. Set stages with PetscLogStagePush() and PetscLogStagePop(). 
%T - percent time in this phase %f - percent flops in this phase %M - percent messages in this phase %L - percent message lengths in this phase %R - percent reductions in this phase Total Mflop/s: 10e-6 * (sum of flops over all processors)/(max time over all processors) ------------------------------------------------------------------------------------------------------------------------ Event Count Time (sec) Flops --- Global --- --- Stage --- Total Max Ratio Max Ratio Max Ratio Mess Avg len Reduct %T %f %M %L %R %T %f %M %L %R Mflop/s ------------------------------------------------------------------------------------------------------------------------ --- Event Stage 0: Main Stage PetscBarrier 10 1.0 1.1748e-0211.3 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00 0 0 0 0 0 0 0 0 0 0 0 SNESSolve 10 1.0 2.5979e+03 1.0 1.47e+10 1.1 3.6e+09 1.7e+02 5.7e+03 97100100100 98 97100100100 98 10857 SNESFunctionEval 31 1.0 9.2333e-01 1.2 0.00e+00 0.0 1.4e+06 3.5e+03 2.0e+00 0 0 0 1 0 0 0 0 1 0 0 SNESJacobianEval 21 1.0 5.0506e+01 1.0 7.33e+07 1.1 2.7e+07 3.5e+03 4.5e+01 2 1 1 16 1 2 1 1 16 1 2812 SNESLineSearch 21 1.0 1.2610e+00 1.0 8.81e+07 1.1 1.9e+06 3.5e+03 8.4e+01 0 1 0 1 1 0 1 0 1 1 134858 VecDot 413 1.0 1.4155e+00 4.7 5.33e+07 1.1 0.0e+00 0.0e+00 4.1e+02 0 0 0 0 7 0 0 0 0 7 72941 VecDotNorm2 196 1.0 1.1593e+00 9.0 5.06e+07 1.1 0.0e+00 0.0e+00 2.0e+02 0 0 0 0 3 0 0 0 0 3 84533 VecMDot 234 1.0 3.7595e+00 2.4 3.90e+08 1.1 0.0e+00 0.0e+00 2.3e+02 0 3 0 0 4 0 3 0 0 4 200889 VecNorm 496 1.0 1.1166e+00 2.5 6.40e+07 1.1 0.0e+00 0.0e+00 5.0e+02 0 0 0 0 9 0 0 0 0 9 111056 VecScale 4693 1.0 2.9463e-01 1.4 3.52e+07 1.5 0.0e+00 0.0e+00 0.0e+00 0 0 0 0 0 0 0 0 0 0 224870 VecCopy 642 1.0 2.3825e-01 1.3 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00 0 0 0 0 0 0 0 0 0 0 0 VecSet 2769 1.0 1.2421e-01 1.3 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00 0 0 0 0 0 0 0 0 0 0 0 VecAXPY 2064 1.0 7.9276e-01 1.4 1.15e+08 1.1 0.0e+00 0.0e+00 0.0e+00 0 1 0 0 0 0 1 0 0 0 280230 VecAYPX 1482 1.0 4.0547e-01 1.2 1.98e+07 1.1 0.0e+00 0.0e+00 0.0e+00 0 0 0 0 0 0 0 0 0 0 94524 VecAXPBYCZ 392 1.0 7.8397e-01 1.3 1.01e+08 1.1 0.0e+00 0.0e+00 0.0e+00 0 1 0 0 0 0 1 0 0 0 250009 VecWAXPY 413 1.0 2.6542e-01 1.3 5.19e+07 1.1 0.0e+00 0.0e+00 0.0e+00 0 0 0 0 0 0 0 0 0 0 379120 VecMAXPY 247 1.0 1.4193e+00 1.2 4.20e+08 1.1 0.0e+00 0.0e+00 0.0e+00 0 3 0 0 0 0 3 0 0 0 573348 VecScatterBegin 7853 1.0 4.9287e+0127.9 0.00e+00 0.0 3.4e+09 1.8e+02 0.0e+00 0 0 94100 0 0 0 94100 0 0 VecScatterEnd 7853 1.0 4.1848e+02 1.8 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00 13 0 0 0 0 13 0 0 0 0 0 VecReduceArith 52 1.0 2.3512e-02 1.1 6.71e+06 1.1 0.0e+00 0.0e+00 0.0e+00 0 0 0 0 0 0 0 0 0 0 552907 VecReduceComm 31 1.0 9.8284e-02 5.7 0.00e+00 0.0 0.0e+00 0.0e+00 3.1e+01 0 0 0 0 1 0 0 0 0 1 0 VecNormalize 247 1.0 6.7727e-01 1.8 4.78e+07 1.1 0.0e+00 0.0e+00 2.5e+02 0 0 0 0 4 0 0 0 0 4 136762 MatMult 3723 1.0 1.7651e+02 5.4 3.85e+09 1.1 8.6e+08 2.3e+02 0.0e+00 5 26 24 32 0 5 26 24 32 0 42048 MatMultAdd 1482 1.0 2.3328e+0211.6 3.97e+07 1.1 0.0e+00 0.0e+00 2.2e+01 3 0 0 0 0 3 0 0 0 0 329 MatSolve 649 1.0 1.0815e+01 1.1 1.30e+09 1.1 0.0e+00 0.0e+00 0.0e+00 0 9 0 0 0 0 9 0 0 0 233096 MatSOR 2964 1.0 3.9407e+02 3.9 8.07e+09 1.1 2.5e+09 1.1e+02 9.9e+02 11 55 69 47 17 11 55 69 47 17 39298 MatLUFactorSym 11 1.0 1.1463e-03 1.1 0.00e+00 0.0 0.0e+00 0.0e+00 3.3e+01 0 0 0 0 1 0 0 0 0 1 0 MatLUFactorNum 21 1.0 2.4922e+00 1.1 1.45e+08 1.1 0.0e+00 0.0e+00 0.0e+00 0 1 0 0 0 0 1 0 0 0 112406 MatILUFactorSym 5 1.0 2.2485e-01 1.1 0.00e+00 0.0 0.0e+00 0.0e+00 5.0e+00 0 0 0 0 0 0 0 0 0 0 0 MatAssemblyBegin 110 1.0 
2.6155e+00 5.1 0.00e+00 0.0 0.0e+00 0.0e+00 1.8e+02 0 0 0 0 3 0 0 0 0 3 0 MatAssemblyEnd 110 1.0 4.2814e+00 2.7 0.00e+00 0.0 7.1e+07 2.1e+01 5.4e+02 0 0 2 0 9 0 0 2 0 9 0 MatGetRowIJ 16 1.0 1.3113e-04 1.1 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00 0 0 0 0 0 0 0 0 0 0 0 MatGetOrdering 16 1.0 1.4726e-02 1.1 0.00e+00 0.0 0.0e+00 0.0e+00 3.2e+01 0 0 0 0 1 0 0 0 0 1 0 MatZeroEntries 21 1.0 1.0455e-01 1.1 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00 0 0 0 0 0 0 0 0 0 0 0 MatFDColorCreate 1 1.0 7.6147e+01 1.0 0.00e+00 0.0 8.9e+04 8.7e+02 9.2e+01 3 0 0 0 2 3 0 0 0 2 0 MatFDColorApply 21 1.0 5.0506e+01 1.0 7.33e+07 1.1 2.7e+07 3.5e+03 4.5e+01 2 1 1 16 1 2 1 1 16 1 2812 MatFDColorFunc 588 1.0 1.8813e+01 1.2 0.00e+00 0.0 2.6e+07 3.5e+03 0.0e+00 1 0 1 15 0 1 0 1 15 0 0 MatGetRedundant 11 1.0 1.0325e+01 1.3 0.00e+00 0.0 1.4e+08 6.0e+00 4.4e+01 0 0 4 0 1 0 0 4 0 1 0 KSPGMRESOrthog 234 1.0 4.9073e+00 1.7 7.80e+08 1.1 0.0e+00 0.0e+00 2.3e+02 0 5 0 0 4 0 5 0 0 4 307807 KSPSetUp 119 1.0 8.7377e-02 1.3 0.00e+00 0.0 0.0e+00 0.0e+00 2.4e+02 0 0 0 0 4 0 0 0 0 4 0 KSPSolve 21 1.0 2.5458e+03 1.0 1.45e+10 1.1 3.6e+09 1.4e+02 5.5e+03 95 99 99 83 95 95 99 99 83 95 10955 PCSetUp 31 1.0 1.9466e+03 1.0 5.28e+08 1.1 2.2e+08 1.1e+02 2.9e+03 73 4 6 4 50 73 4 6 4 50 524 PCSetUpOnBlocks 10 1.0 2.7321e+00 1.1 1.45e+08 1.1 0.0e+00 0.0e+00 1.5e+01 0 1 0 0 0 0 1 0 0 0 102534 PCApply 649 1.0 5.8003e+02 1.0 1.06e+10 1.1 3.3e+09 1.1e+02 1.0e+03 22 72 92 63 17 22 72 92 63 17 35241 ------------------------------------------------------------------------------------------------------------------------ Memory usage is given in bytes: Object Type Creations Destructions Memory Descendants' Mem. Reports information only for process 0. --- Event Stage 0: Main Stage Container 13 13 4212 0 SNES 1 1 812 0 SNESLineSearch 1 1 536 0 Vector 3975 3975 476579708 0 Vector Scatter 92 92 58512 0 Matrix 371 371 223370840 0 Matrix FD Coloring 1 1 13832832 0 Distributed Mesh 3 3 608136 0 Bipartite Graph 6 6 2424 0 Index Set 260 260 2533560 0 IS L to G Mapping 2 2 301224 0 Viewer 1 0 0 0 Krylov Solver 46 46 118572 0 Preconditioner 46 46 27124 0 ======================================================================================================================== Average time to get PetscTime(): 3.50475e-06 Average time for MPI_Barrier(): 3.76701e-06 Average time for zero size MPI_Send(): 1.35483e-05 #PETSc Option Table entries: -ksp_monitor -log_summary -snes_monitor #End of PETSc Option Table entries Compiled with FORTRAN kernels Compiled with full precision matrices (default) sizeof(short) 2 sizeof(int) 4 sizeof(long) 4 sizeof(void*) 4 sizeof(PetscScalar) 8 sizeof(PetscInt) 4 Configure run at: Sat Jul 21 18:38:13 2012 Configure options: --known-bits-per-byte=8 --known-level1-dcache-assoc=0 --known-level1-dcache-linesize=32 --known-level1-dcache-size=32768 --known-memcmp-ok --known-mpi-long-double=1 --known-mpi-shared-libraries=0 --known-sizeof-MPI_Comm=4 --known-sizeof-MPI_Fint=4 --known-sizeof-char=1 --known-sizeof-double=8 --known-sizeof-float=4 --known-sizeof-int=4 --known-sizeof-long-long=8 --known-sizeof-long=4 --known-sizeof-short=2 --known-sizeof-size_t=4 --known-sizeof-void-p=4 --with-batch=1 --download-f-blas-lapack=1 --with-cc=mpixlc_r --with-cxx=mpixlcxx_r --with-debugging=0 --with-fc="mpixlf77_r -qnosave" --with-fortran-kernels=1 --with-is-color-value-type=short --with-shared-libraries=0 --with-x=0 -COPTFLAGS="-O3 -qarch=450d -qtune=450 -qmaxmem=-1" -CXXOPTFLAGS="-O3 -qarch=450d -qtune=450 -qmaxmem=-1" -FOPTFLAGS="-O3 -qarch=450d -qtune=450 -qmaxmem=-1" 
--download-ml=1 PETSC_ARCH=arch-shaheen-bgp-opt ----------------------------------------- Libraries compiled on Sat Jul 21 18:38:13 2012 on fen1 Machine characteristics: Linux-2.6.16.60-0.93.1-ppc64-ppc64-with-SuSE-10-ppc Using PETSc directory: /home/kaibao/petsc-3.3-p2 Using PETSc arch: arch-shaheen-bgp-opt ----------------------------------------- Using C compiler: mpixlc_r -O3 -qarch=450d -qtune=450 -qmaxmem=-1 ${COPTFLAGS} ${CFLAGS} Using Fortran compiler: mpixlf77_r -qnosave -O3 -qarch=450d -qtune=450 -qmaxmem=-1 ${FOPTFLAGS} ${FFLAGS} ----------------------------------------- Using include paths: -I/home/kaibao/petsc-3.3-p2/arch-shaheen-bgp-opt/include -I/home/kaibao/petsc-3.3-p2/include -I/home/kaibao/petsc-3.3-p2/include -I/home/kaibao/petsc-3.3-p2/arch-shaheen-bgp-opt/include -I/bgsys/drivers/V1R4M2_200_2010-100508P/ppc/comm/default/include -I/bgsys/drivers/V1R4M2_200_2010-100508P/ppc/comm/sys/include ----------------------------------------- Using C linker: mpixlc_r Using Fortran linker: mpixlf77_r -qnosave Using libraries: -Wl,-rpath,/home/kaibao/petsc-3.3-p2/arch-shaheen-bgp-opt/lib -L/home/kaibao/petsc-3.3-p2/arch-shaheen-bgp-opt/lib -lpetsc -lpthread -Wl,-rpath,/home/kaibao/petsc-3.3-p2/arch-shaheen-bgp-opt/lib -L/home/kaibao/petsc-3.3-p2/arch-shaheen-bgp-opt/lib -lml -L/bgsys/drivers/V1R4M2_200_2010-100508P/ppc/comm/default/lib -L/bgsys/drivers/V1R4M2_200_2010-100508P/ppc/comm/sys/lib -L/bgsys/drivers/V1R4M2_200_2010-100508P/ppc/runtime/SPI -L/opt/ibmcmp/xlsmp/bg/1.7/bglib -L/opt/ibmcmp/xlmass/bg/4.4/bglib -L/opt/ibmcmp/vac/bg/9.0/bglib -L/opt/ibmcmp/vacpp/bg/9.0/bglib -L/bgsys/drivers/ppcfloor/gnu-linux/lib/gcc/powerpc-bgp-linux/4.1.2 -L/bgsys/drivers/ppcfloor/gnu-linux/powerpc-bgp-linux/lib -Wl,-rpath,/opt/ibmcmp/lib/bg/bglib -Wl,-rpath,/bgsys/drivers/V1R4M2_200_2010-100508P/ppc/comm/default/lib -Wl,-rpath,/bgsys/drivers/V1R4M2_200_2010-100508P/ppc/comm/sys/lib -Wl,-rpath,/bgsys/drivers/V1R4M2_200_2010-100508P/ppc/runtime/SPI -lcxxmpich.cnk -libmc++ -lstdc++ -lflapack -lfblas -L/opt/ibmcmp/xlf/bg/11.1/bglib -lxlf90_r -lxlomp_ser -lxlfmath -lm -lcxxmpich.cnk -libmc++ -lstdc++ -lcxxmpich.cnk -libmc++ -lstdc++ -ldl -lmpich.cnk -lopa -ldcmf.cnk -ldcmfcoll.cnk -lpthread -lSPI.cna -lrt -lxlopt -lxl -lgcc_eh -ldl ----------------------------------------- Finish RST_CO2. Finish interpreting the file infile_RSTi.m by RST. after mpi_finalize: ierr = 0 From jedbrown at mcs.anl.gov Sun Jul 22 09:42:11 2012 From: jedbrown at mcs.anl.gov (Jed Brown) Date: Sun, 22 Jul 2012 09:42:11 -0500 Subject: [petsc-users] How to investigate the reason for slow convergence rate? In-Reply-To: References: Message-ID: On Sun, Jul 22, 2012 at 9:22 AM, Bao Kai wrote: > Hi, Jed, > > The following is the output. Two equations are solved during each > time steps. One is using bicgstab is already very fast, the other one > is using gmres+ml. > These equations are different? Why do you think they should take a similar amount of time? Most of your execution time is spent in the 31 PCSetUp calls (not all of which are ML). If only a few iterations are needed, ML setup is not worth it. Also, if the system is not changing much, you could lag the preconditioner so that you pay for setup less frequently. This can be done at the time integration level (e.g. using a Rosenbrock(-W) method) or at the nonlinear solve by a variety of schemes including modified Newton, MFFD Jacobian, Quasi-Newton, or NGMRES. Note that you can use PetscLogStagePush/Pop to profile each solve separately. 
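A minimal sketch of the stage profiling suggested here, written in C for brevity (the same calls are available from Fortran); the SNES and Vec names are placeholders for the two solver contexts in the code above, and the stages should be registered only once:

    #include <petscsnes.h>

    /* snes_conc/x_conc and snes_flow/x_flow are placeholders for the two
       solver contexts in the user's code */
    PetscLogStage  stage_conc, stage_flow;
    PetscErrorCode ierr;

    /* register the stages once, e.g. right after PetscInitialize() */
    ierr = PetscLogStageRegister("Concentration solve", &stage_conc);CHKERRQ(ierr);
    ierr = PetscLogStageRegister("Flow solve", &stage_flow);CHKERRQ(ierr);

    /* in the time loop: wrap each solve so -log_summary reports it as its own stage */
    ierr = PetscLogStagePush(stage_conc);CHKERRQ(ierr);
    ierr = SNESSolve(snes_conc, PETSC_NULL, x_conc);CHKERRQ(ierr);  /* bcgs + bjacobi system */
    ierr = PetscLogStagePop();CHKERRQ(ierr);

    ierr = PetscLogStagePush(stage_flow);CHKERRQ(ierr);
    ierr = SNESSolve(snes_flow, PETSC_NULL, x_flow);CHKERRQ(ierr);  /* gmres + ml system */
    ierr = PetscLogStagePop();CHKERRQ(ierr);

The preconditioner lag mentioned above can also be tried from the command line, e.g. -snes_lag_preconditioner 2 to rebuild the preconditioner only every second Newton step, provided the SNES picks up its options via SNESSetFromOptions().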
> > The log_summary output can found in the end of the output. > > Best Regards, > Kai > > > Message: 6 > > Date: Sun, 22 Jul 2012 07:16:07 -0500 > > From: Jed Brown > > To: PETSc users list > > Subject: Re: [petsc-users] How to investigate the reason for slow > > convergence rate? > > Message-ID: > > < > CAM9tzSkxKz5FwFJJPZz0UO5eN-S45QsdQxmo7xnH345R1EdTEQ at mail.gmail.com> > > Content-Type: text/plain; charset="utf-8" > > > > Please send -log_summary output with performance questions. > > > > On Sun, Jul 22, 2012 at 4:01 AM, Bao Kai wrote: > > > >> Hi, Matt, > >> > >> I tried ML6.2 with petsc3.3 with default parameters by only specifying > >> the preconditioner to be PCML. The KSP solver is gmres. > >> > >> The convergence rate is much faster, while it still took much longer > >> time in total. > >> > >> For example, for the problem with 500^3 mesh ( 125 million unknowns ) > >> with 512nodes(4 processors per node) on bluegene/P , it tooks about > >> ten iterations to get convergent while the total time used is about > >> 400 seconds. > >> > >> 506 the KSP type is gmres > >> 507 the PC type is ml > >> 508 KSP rtol = 0.100000000000000008E-04 abstol = > >> 0.100000000000000001E-49 dtol = 10000.0000000000000 maxit = 10000 > >> 509 SNES rtol = 0.100000000000000002E-07 abstol = > >> 0.100000000000000001E-49 stol = 0.100000000000000002E-07 maxit = > >> 50 maxf= 10000 > >> 510 0 SNES Function norm 5.859593121800e+02 > >> 511 0 KSP Residual norm 8.340827070202e+06 > >> 512 1 KSP Residual norm 7.980806572332e+05 > >> 513 2 KSP Residual norm 1.870896731234e+05 > >> 514 3 KSP Residual norm 6.790580947452e+04 > >> 515 4 KSP Residual norm 2.665552335248e+04 > >> 516 5 KSP Residual norm 1.130212349885e+04 > >> 517 6 KSP Residual norm 4.053599972292e+03 > >> 518 7 KSP Residual norm 1.786770710693e+03 > >> 519 8 KSP Residual norm 7.313571654931e+02 > >> 520 9 KSP Residual norm 3.205683714450e+02 > >> 521 10 KSP Residual norm 1.263243312734e+02 > >> 522 11 KSP Residual norm 3.945082815178e+01 > >> 523 1 SNES Function norm 9.378772067642e-02 > >> 524 0 KSP Residual norm 5.413489711800e+01 > >> 525 1 KSP Residual norm 1.442598710609e+01 > >> 526 2 KSP Residual norm 4.073537172140e+00 > >> 527 3 KSP Residual norm 1.157455598705e+00 > >> 528 4 KSP Residual norm 3.509855901968e-01 > >> 529 5 KSP Residual norm 1.160625342728e-01 > >> 530 6 KSP Residual norm 3.209351890216e-02 > >> 531 7 KSP Residual norm 7.780869881329e-03 > >> 532 8 KSP Residual norm 1.820828886636e-03 > >> 533 9 KSP Residual norm 4.172544590190e-04 > >> 534 2 SNES Function norm 6.747963806680e-07 > >> 535 Number of KSP iteration is 9 > >> 536 SNES solve takes time 406.724867261176314 > >> > >> But with bcgs and bjacobi, it tooks about 550 KSP iterations ( 2 snes > >> iterations ) and 69 seconds to get the result. > >> > >> For much smaller problems, benefiting from the fast convergence, it > >> did takes less time to get the result. It seems that the ml can not be > >> scaled, or I used it in a wrong way. > >> > >> Best Regards, > >> Kai > >> > >> > >> > Message: 2 > >> > Date: Sat, 21 Jul 2012 08:47:10 -0500 > >> > From: Matthew Knepley > >> > To: PETSc users list > >> > Subject: Re: [petsc-users] How to investigate the reason for slow > >> > convergence rate? 
> >> > Message-ID: > >> > < > >> CAMYG4GkfV6kmTFEKXUadOv+2CrKHk9hRY7UR-cFWf+vcRxCv5g at mail.gmail.com> > >> > Content-Type: text/plain; charset="iso-8859-1" > >> > > >> > On Sat, Jul 21, 2012 at 4:30 AM, Bao Kai wrote: > >> > > >> >> > >> >>> HI, all, > >> >> > >> >> I am still suffering from the slow convergence rate of the KSP > >> >> solution. > >> >> > >> >> I changed the code to use Petsc3.3 and then try the gamg > >> >> precoditioner, > >> >> the convergence rate is better, while it took more total time because > >> >> it > >> >> took much more time for each iteration and some extra time for > >> >> pre-processing. > >> >> > >> > > >> > Try ML to see if it has better defaults for your problem. If not, you > >> will > >> > have to start experimenting with the solver > >> > parameters. > >> > > >> > > >> >> I am wondering if there are some ways that can help me to investigate > >> the > >> >> slow convergence rate for KSP solution so that I can do some > >> improvement. > >> >> Is DMMG will be a good solution? > >> >> > >> > > >> > No. > >> > > >> > Matt > >> > > >> > > >> >> Thank you very much. > >> >> > >> >> Best Regards, > >> >> Kai > >> >> > >> >>> > >> >>> Message: 2 > >> >>> Date: Wed, 11 Jul 2012 15:17:15 -0500 > >> >>> From: Matthew Knepley > >> >>> To: PETSc users list > >> >>> Subject: Re: [petsc-users] Does this mean the matrix is > >> >>> ill-conditioned? > >> >>> Message-ID: > >> >>> >> >>> q+w1PKO7G_TW07iDzux90Sncbv9K7d0FD-MDrLRg at mail.gmail.com> > >> >>> Content-Type: text/plain; charset="iso-8859-1" > >> >>> > >> >>> On Wed, Jul 11, 2012 at 12:40 PM, Bao Kai > >> >>> wrote: > >> >>> > >> >>> > Hi, all, > >> >>> > > >> >>> > The following is the output from the solution of a Poisson > equation > >> >>> > from Darcy's law. > >> >>> > > >> >>> > To compute the condition number of matrix, I did not use PC and > use > >> >>> > GMRES KSP to do the test. > >> >>> > > >> >>> > It seems like that the condition number keep increasing during the > >> >>> > iterative solution. Does this mean the matrix is ill-conditioned? > >> >>> > > >> >>> > >> >>> Generally yes. Krylov methods take a long time to resolve the > >> >>> smallest > >> >>> eigenvalues, so this approximation is not great. > >> >>> > >> >>> > >> >>> > For this test, it did not achieve convergence with 10000 > >> >>> > iterations. > >> >>> > > >> >>> > When I use BJOCABI PC and BICGSTAB KSP, it generally takes about > >> >>> > 600 > >> >>> > times iteration to get the iteration convergent. > >> >>> > > >> >>> > Any suggestion for improving the convergence rate will be much > >> >>> > appreciated. The solution of this equation has been the > bottleneck > >> of > >> >>> > my code, it takes more than 90% of the total time. > >> >>> > > >> >>> > >> >>> Try ML or GAMG. > >> >>> > >> >>> Matt > >> >>> > >> >>> > >> >>> > Thank you very much. > >> >>> > > >> >>> > Best Regards, > >> >>> > Kai > >> >>> > > >> >>> > >> >>> > >> >>> > >> > > >> > > >> > -- > >> > What most experimenters take for granted before they begin their > >> > experiments is infinitely more interesting than any results to which > >> their > >> > experiments lead. > >> > -- Norbert Wiener > >> > -------------- next part -------------- > >> > An HTML attachment was scrubbed... > >> > URL: > >> > < > >> > http://lists.mcs.anl.gov/pipermail/petsc-users/attachments/20120721/eaf7b2ee/attachment-0001.html > >> > > >> > > >> > ------------------------------ > >> > > >> > > -------------- next part -------------- > > An HTML attachment was scrubbed... 
> > URL: > > < > http://lists.mcs.anl.gov/pipermail/petsc-users/attachments/20120722/7f782dd6/attachment.html > > > > > > ------------------------------ > > > > _______________________________________________ > > petsc-users mailing list > > petsc-users at mcs.anl.gov > > https://lists.mcs.anl.gov/mailman/listinfo/petsc-users > > > > > > End of petsc-users Digest, Vol 43, Issue 61 > > ******************************************* > > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From u.tabak at tudelft.nl Sun Jul 22 09:49:26 2012 From: u.tabak at tudelft.nl (Umut Tabak) Date: Sun, 22 Jul 2012 16:49:26 +0200 Subject: [petsc-users] incomplete cholesky with a drop tolerance Message-ID: <500C12F6.1040903@tudelft.nl> Dear all, I am testing some iterative methods with MATLAB and aside with PETSc however I have a question which might be answered in the documentation however I could not find that? In MATLAB, at least on recent versions, one can specify a drop tolerance for the incomplete cholesky preconditioner, I was wondering if the same is possible with PETSc or not? One more question, is the condition number order 1e+6(estimated with condest in MATLAB) rather high for an iterative method? With icc, with a drop tolerance of 1e-3 or 1e-4, as a preconditioner to pcg, I can get decent iteration numbers to convergence in MATLAB, it is sometimes even faster than solving the system with the available factorization information and I was wondering if I can make it faster with some other options in PETSc or not? Best regards, Umut From jedbrown at mcs.anl.gov Sun Jul 22 10:14:57 2012 From: jedbrown at mcs.anl.gov (Jed Brown) Date: Sun, 22 Jul 2012 10:14:57 -0500 Subject: [petsc-users] incomplete cholesky with a drop tolerance In-Reply-To: <500C12F6.1040903@tudelft.nl> References: <500C12F6.1040903@tudelft.nl> Message-ID: On Sun, Jul 22, 2012 at 9:49 AM, Umut Tabak wrote: > Dear all, > > I am testing some iterative methods with MATLAB and aside with PETSc > however I have a question which might be answered in the documentation > however I could not find that? > > In MATLAB, at least on recent versions, one can specify a drop tolerance > for the incomplete cholesky preconditioner, I was wondering if the same is > possible with PETSc or not? > You can get an ILUT using MatSuperluSetILUDropTol (-mat_superlu_ilu_droptol) or from Hypre's PILUT (-pc_hypre_pilut_tol). > One more question, is the condition number order 1e+6(estimated with > condest in MATLAB) rather high for an iterative method? With icc, with a > drop tolerance of 1e-3 or 1e-4, as a preconditioner to pcg, I can get > decent iteration numbers to convergence in MATLAB, it is sometimes even > faster than solving the system with the available factorization information > and I was wondering if I can make it faster with some other options in > PETSc or not? > What continuum equations are you solving? What discretization? -------------- next part -------------- An HTML attachment was scrubbed... 
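For reference, a rough sketch of how those drop-tolerance factorizations would be selected from the command line (option names as in petsc-3.3; "./myapp" and the tolerance value are placeholders, and the corresponding packages must have been enabled at configure time, e.g. with --download-superlu or --download-hypre):

SuperLU's drop-tolerance ILU, through PETSc's ILU preconditioner:

  ./myapp -ksp_type gmres -pc_type ilu -pc_factor_mat_solver_package superlu -mat_superlu_ilu_droptol 1.e-4

Hypre's PILUT:

  ./myapp -ksp_type gmres -pc_type hypre -pc_hypre_type pilut -pc_hypre_pilut_tol 1.e-4

Note that both of these are ILU-type (nonsymmetric) factorizations rather than incomplete Cholesky, so a Krylov method such as GMRES or BiCGStab is the safer companion; plain CG assumes a symmetric positive definite preconditioned operator.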
URL: From u.tabak at tudelft.nl Sun Jul 22 10:17:21 2012 From: u.tabak at tudelft.nl (Umut Tabak) Date: Sun, 22 Jul 2012 17:17:21 +0200 Subject: [petsc-users] incomplete cholesky with a drop tolerance In-Reply-To: References: <500C12F6.1040903@tudelft.nl> Message-ID: <500C1981.2080605@tudelft.nl> On 07/22/2012 05:14 PM, Jed Brown wrote: > On Sun, Jul 22, 2012 at 9:49 AM, Umut Tabak > wrote: > > Dear all, > > I am testing some iterative methods with MATLAB and aside with > PETSc however I have a question which might be answered in the > documentation however I could not find that? > > In MATLAB, at least on recent versions, one can specify a drop > tolerance for the incomplete cholesky preconditioner, I was > wondering if the same is possible with PETSc or not? > > > You can get an ILUT using MatSuperluSetILUDropTol > (-mat_superlu_ilu_droptol) or from Hypre's PILUT (-pc_hypre_pilut_tol).. Dear Jed Thanks for this tip, I will try... > > > One more question, is the condition number order 1e+6(estimated > with condest in MATLAB) rather high for an iterative method? With > icc, with a drop tolerance of 1e-3 or 1e-4, as a preconditioner to > pcg, I can get decent iteration numbers to convergence in MATLAB, > it is sometimes even faster than solving the system with the > available factorization information and I was wondering if I can > make it faster with some other options in PETSc or not? > > > What continuum equations are you solving? What discretization? Helmholtz equation, 3d discretization of a fluid domain, basically the operator is singular however for my problem I can delete one of the rows of the matrix, for this case, I and get a non-singular operator that I can continue my operations, basically, I am getting a matrix with size n-1, where original problem size is n. However, this application is pretty problem specific, then I can use this full-rank matrix in linear solutions. The condition number estimate belongs to this full-rank matrix that is extracted from the original singular operator... -------------- next part -------------- An HTML attachment was scrubbed... URL: From jedbrown at mcs.anl.gov Sun Jul 22 10:28:20 2012 From: jedbrown at mcs.anl.gov (Jed Brown) Date: Sun, 22 Jul 2012 10:28:20 -0500 Subject: [petsc-users] incomplete cholesky with a drop tolerance In-Reply-To: <500C1981.2080605@tudelft.nl> References: <500C12F6.1040903@tudelft.nl> <500C1981.2080605@tudelft.nl> Message-ID: On Sun, Jul 22, 2012 at 10:17 AM, Umut Tabak wrote: > Helmholtz equation, 3d discretization of a fluid domain, Do you mean indefinite Helmholtz (frequency-domain) or time-domain (definite)? Sorry, I have to ask... What is the wave number? How many grid points per wavelength? > basically the operator is singular however for my problem I can delete one > of the rows of the matrix, for this case, I and get a non-singular > operator that I can continue my operations, basically, I am getting a > matrix with size n-1, where original problem size is n. This is often bad for iterative solvers. See the User's Manual section on solving singular systems. What is the condition number of the original operator minus the zero eigenvalue (instead of "pinning" on point)? > However, this application is pretty problem specific, then I can use this > full-rank matrix in linear solutions. The condition number estimate belongs > to this full-rank matrix that is extracted from the original singular > operator... -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From flo.44 at gmx.de Sun Jul 22 11:17:15 2012 From: flo.44 at gmx.de (Florian) Date: Sun, 22 Jul 2012 18:17:15 +0200 Subject: [petsc-users] How to use petsc in a dynamically loaded shared library? Message-ID: <1342973835.13757.7.camel@F-UB> Hi, >Please reply to email rather than starting a new thread. Your email >client might be dropping headers. sorry but what I don't know how to start a new thread? Maybe I have to change some settings in my mailing list account? Back to my problem, I think I have a problem with the initialisation of the petsc library and therefore I'm not able to create vectors in the right way. I call PetscInitialize() as in the examples shown. Are there some special functions to check if everthing is initialized correct? My example works fine except the destroying of the vectors, for the first testing I run it serial. Can that cause my problem? From jedbrown at mcs.anl.gov Sun Jul 22 11:28:18 2012 From: jedbrown at mcs.anl.gov (Jed Brown) Date: Sun, 22 Jul 2012 11:28:18 -0500 Subject: [petsc-users] How to use petsc in a dynamically loaded shared library? In-Reply-To: <1342973835.13757.7.camel@F-UB> References: <1342973835.13757.7.camel@F-UB> Message-ID: On Sun, Jul 22, 2012 at 11:17 AM, Florian wrote: > Hi, > > >Please reply to email rather than starting a new thread. Your email > >client might be dropping headers. > > sorry but what I don't know how to start a new thread? Maybe I have to > change some settings in my mailing list account? > Your "replies" do not have an "in-reply-to" field, thus they start a new thread. Properly functioning mail clients will set the headers correctly when you "reply". > > Back to my problem, I think I have a problem with the initialisation of > the petsc library and therefore I'm not able to create vectors in the > right way. I call PetscInitialize() as in the examples shown. Are there > some special functions to check if everthing is initialized correct? > > My example works fine except the destroying of the vectors, for the > first testing I run it serial. Can that cause my problem? > > Can you send us a reduced test case to demonstrate the problem? -------------- next part -------------- An HTML attachment was scrubbed... URL: From u.tabak at tudelft.nl Sun Jul 22 13:11:32 2012 From: u.tabak at tudelft.nl (Umut Tabak) Date: Sun, 22 Jul 2012 20:11:32 +0200 Subject: [petsc-users] incomplete cholesky with a drop tolerance In-Reply-To: References: <500C12F6.1040903@tudelft.nl> <500C1981.2080605@tudelft.nl> Message-ID: <500C4254.6060806@tudelft.nl> On 07/22/2012 05:28 PM, Jed Brown wrote: > On Sun, Jul 22, 2012 at 10:17 AM, Umut Tabak > wrote: > > Helmholtz equation, 3d discretization of a fluid domain, > > > Do you mean indefinite Helmholtz (frequency-domain) or time-domain > (definite)? Sorry, I have to ask... > > What is the wave number? How many grid points per wavelength? Well, basically, I am not interested in time domain response. What I would like to do is to find the eigenvalues/vectors of the system so it is in the frequency domain. What I was doing it generally is the fact that I first factorize the operator matrix with the normal factorization operation and use it to do multiple solves in my Block Lanczos eigenvalue solver. Then in my performance evaluations I saw that this is the point that I should make faster, then I realized that I could solve this particular system, that is pinned in your words, faster with iterative methods almost %20 percent faster. And this is the reason why I am trying to dig under. 
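For reference, the repeated-solve pattern just described maps directly onto PETSc: the preconditioner setup (e.g. an incomplete factorization) is computed on the first KSPSolve and reused for later solves as long as the operator is unchanged, so its cost is amortized over a Block Lanczos loop. A rough sketch, assuming a symmetric positive definite Mat A and already-created vectors b[i], x[i]; the names and the loop are purely illustrative, not code from this thread:

  KSP            ksp;
  PC             pc;
  PetscInt       i;
  PetscErrorCode ierr;

  ierr = KSPCreate(PETSC_COMM_WORLD,&ksp);CHKERRQ(ierr);
  ierr = KSPSetOperators(ksp,A,A,DIFFERENT_NONZERO_PATTERN);CHKERRQ(ierr);
  ierr = KSPSetType(ksp,KSPCG);CHKERRQ(ierr);
  ierr = KSPGetPC(ksp,&pc);CHKERRQ(ierr);
  ierr = PCSetType(pc,PCICC);CHKERRQ(ierr);      /* PETSc's native ICC is level-of-fill based (no drop tolerance) */
  ierr = KSPSetFromOptions(ksp);CHKERRQ(ierr);
  for (i = 0; i < nrhs; i++) {
    ierr = KSPSolve(ksp,b[i],x[i]);CHKERRQ(ierr); /* factorization is built once, on the first solve */
  }
  ierr = KSPDestroy(&ksp);CHKERRQ(ierr);

PETSc's ICC runs on a single process; in parallel it would normally appear as the sub-preconditioner inside block Jacobi.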
> > basically the operator is singular however for my problem I can > delete one of the rows of the matrix, for this case, I and get a > non-singular operator that I can continue my operations, > basically, I am getting a matrix with size n-1, where original > problem size is n. > > > This is often bad for iterative solvers. See the User's Manual section > on solving singular systems. What is the condition number of the > original operator minus the zero eigenvalue (instead of "pinning" on > point)? This is not clear to me... You mean something like projecting the original operator on the on the zero eigenvector, some kind of a deflation. -------------- next part -------------- An HTML attachment was scrubbed... URL: From jedbrown at mcs.anl.gov Sun Jul 22 13:17:03 2012 From: jedbrown at mcs.anl.gov (Jed Brown) Date: Sun, 22 Jul 2012 13:17:03 -0500 Subject: [petsc-users] incomplete cholesky with a drop tolerance In-Reply-To: <500C4254.6060806@tudelft.nl> References: <500C12F6.1040903@tudelft.nl> <500C1981.2080605@tudelft.nl> <500C4254.6060806@tudelft.nl> Message-ID: On Sun, Jul 22, 2012 at 1:11 PM, Umut Tabak wrote: > Well, basically, I am not interested in time domain response. What I would > like to do is to find the eigenvalues/vectors of the system so it is in the > frequency domain. What I was doing it generally is the fact that I first > factorize the operator matrix with the normal factorization operation and > use it to do multiple solves in my Block Lanczos eigenvalue solver. Then in > my performance evaluations I saw that this is the point that I should make > faster, then I realized that I could solve this particular system, that is > pinned in your words, faster with iterative methods almost %20 percent > faster. And this is the reason why I am trying to dig under. > How many grid points per wavelength? > > >> basically the operator is singular however for my problem I can delete >> one of the rows of the matrix, for this case, I and get a non-singular >> operator that I can continue my operations, basically, I am getting a >> matrix with size n-1, where original problem size is n. > > > This is often bad for iterative solvers. See the User's Manual section > on solving singular systems. What is the condition number of the original > operator minus the zero eigenvalue (instead of "pinning" on point)? > > This is not clear to me... You mean something like projecting the original > operator on the on the zero eigenvector, some kind of a deflation. > See the User's Manual section. As long as the preconditioner is stable, convergence is as good as for the nonsingular problem by removing the null space on each iteration. -------------- next part -------------- An HTML attachment was scrubbed... URL: From u.tabak at tudelft.nl Sun Jul 22 13:16:51 2012 From: u.tabak at tudelft.nl (Umut Tabak) Date: Sun, 22 Jul 2012 20:16:51 +0200 Subject: [petsc-users] incomplete cholesky with a drop tolerance In-Reply-To: References: <500C12F6.1040903@tudelft.nl> <500C1981.2080605@tudelft.nl> <500C4254.6060806@tudelft.nl> Message-ID: <500C4393.7030201@tudelft.nl> On 07/22/2012 08:17 PM, Jed Brown wrote: > On Sun, Jul 22, 2012 at 1:11 PM, Umut Tabak > wrote: > > Well, basically, I am not interested in time domain response. What > I would like to do is to find the eigenvalues/vectors of the > system so it is in the frequency domain. 
What I was doing it > generally is the fact that I first factorize the operator matrix > with the normal factorization operation and use it to do multiple > solves in my Block Lanczos eigenvalue solver. Then in my > performance evaluations I saw that this is the point that I should > make faster, then I realized that I could solve this particular > system, that is pinned in your words, faster with iterative > methods almost %20 percent faster. And this is the reason why I am > trying to dig under. > > > How many grid points per wavelength? I am not sure at the moment I should check it further but the mesh is fine enough that this should not be a problem in the frequency range of interest. > >> basically the operator is singular however for my problem I >> can delete one of the rows of the matrix, for this case, I >> and get a non-singular operator that I can continue my >> operations, basically, I am getting a matrix with size n-1, >> where original problem size is n. >> >> >> This is often bad for iterative solvers. See the User's Manual >> section on solving singular systems. What is the condition number >> of the original operator minus the zero eigenvalue (instead of >> "pinning" on point)? > This is not clear to me... You mean something like projecting the > original operator on the on the zero eigenvector, some kind of a > deflation. > > > See the User's Manual section. As long as the preconditioner is > stable, convergence is as good as for the nonsingular problem by > removing the null space on each iteration. Ok I will see that part, Thx. U. -------------- next part -------------- An HTML attachment was scrubbed... URL: From jedbrown at mcs.anl.gov Sun Jul 22 13:22:33 2012 From: jedbrown at mcs.anl.gov (Jed Brown) Date: Sun, 22 Jul 2012 13:22:33 -0500 Subject: [petsc-users] incomplete cholesky with a drop tolerance In-Reply-To: <500C4393.7030201@tudelft.nl> References: <500C12F6.1040903@tudelft.nl> <500C1981.2080605@tudelft.nl> <500C4254.6060806@tudelft.nl> <500C4393.7030201@tudelft.nl> Message-ID: On Sun, Jul 22, 2012 at 1:16 PM, Umut Tabak wrote: > I am not sure at the moment I should check it further but the mesh is fine > enough that this should not be a problem in the frequency range of interest. If you are using the minimum to resolve the waves, then multigrid won't buy you much (unless you use very technical coarse spaces for which there is no particular software support). But if your waves are lower frequency (e.g. due to geometric/coefficient structure), there may be more benefit to using multigrid. Note that there is also a body of literature for solving Helmholtz using multigrid preconditioners for Krylov methods by introducing a complex shift. -------------- next part -------------- An HTML attachment was scrubbed... URL: From u.tabak at tudelft.nl Sun Jul 22 13:26:17 2012 From: u.tabak at tudelft.nl (Umut Tabak) Date: Sun, 22 Jul 2012 20:26:17 +0200 Subject: [petsc-users] incomplete cholesky with a drop tolerance In-Reply-To: References: <500C12F6.1040903@tudelft.nl> <500C1981.2080605@tudelft.nl> <500C4254.6060806@tudelft.nl> <500C4393.7030201@tudelft.nl> Message-ID: <500C45C9.8000400@tudelft.nl> On 07/22/2012 08:22 PM, Jed Brown wrote: > On Sun, Jul 22, 2012 at 1:16 PM, Umut Tabak > wrote: > > I am not sure at the moment I should check it further but the mesh > is fine enough that this should not be a problem in the frequency > range of interest. 
> > > If you are using the minimum to resolve the waves, then multigrid > won't buy you much (unless you use very technical coarse spaces for > which there is no particular software support). But if your waves are > lower frequency (e.g. due to geometric/coefficient structure), there > may be more benefit to using multigrid. > > Note that there is also a body of literature for solving Helmholtz > using multigrid preconditioners for Krylov methods by introducing a > complex shift. I should check these, however the easiest(for the moment) using icc and cg is buying important cost savings at the moment, %20 percent is not that bad for a start even for that ill-conditioned system... I will try to read a bit more on these, thx. -------------- next part -------------- An HTML attachment was scrubbed... URL: From wumengda at gmail.com Sun Jul 22 14:45:24 2012 From: wumengda at gmail.com (Mengda Wu) Date: Sun, 22 Jul 2012 15:45:24 -0400 Subject: [petsc-users] Error compiling Petsc-3.3 and metis using Visual C++ 2008 Message-ID: Hi All, I am trying to compiling Petsc-3.3 using VC2008. The command I am using under cygwin command window is ./config/configure.py --with-cc='cl' --with-fc=0 --with-cxx='cl' --with-debugging=0 --download-f2cblaslapack --download-metis --with-sowing=0 --with-c2html -CFLAGS='-MD -wd4996' -CXXFLAGS='-MD -wd4996' f2cblaslapack is compiled fine. But the metis step has some errors (extracted from configure.log). BTW, I am using Cygwin's cmake. =========================================================================================== Error running configure on METIS: Could not execute "cd /cygdrive/c/Library/Petsc/petsc-3.3/externalpackages/metis-5.0.2-p3/arch-mswin-c-opt && /usr/bin/cmake .. -DCMAKE_INSTALL_PREFIX=/cygdrive/c/Library/Petsc/petsc-3.3/arch-mswin-c-opt -DCMAKE_VERBOSE_MAKEFILE=1 -DGKLIB_PATH=../GKlib -DCMAKE_C_COMPILER="/cygdrive/c/Library/Petsc/petsc-3.3/bin/win32fe/win32fe cl" -DCMAKE_C_FLAGS:STRING="-MD -wd4996 -O2 " -DMETIS_USE_DOUBLEPRECISION=1": -- The C compiler identification is unknown -- The CXX compiler identification is MSVC -- Check for working C compiler: /cygdrive/c/Library/Petsc/petsc-3.3/bin/win32fe/win32fe cl -- Check for working C compiler: /cygdrive/c/Library/Petsc/petsc-3.3/bin/win32fe/win32fe cl -- broken -- Configuring incomplete, errors occurred! CMake Warning at /usr/share/cmake-2.8.7/Modules/Platform/CYGWIN.cmake:15 (message): CMake no longer defines WIN32 on Cygwin! (1) If you are just trying to build this project, ignore this warning or quiet it by setting CMAKE_LEGACY_CYGWIN_WIN32=0 in your environment or in the CMake cache. If later configuration or build errors occur then this project may have been written under the assumption that Cygwin is WIN32. In that case, set CMAKE_LEGACY_CYGWIN_WIN32=1 instead. (2) If you are developing this project, add the line set(CMAKE_LEGACY_CYGWIN_WIN32 0) # Remove when CMake >= 2.8.4 is required at the top of your top-level CMakeLists.txt file or set the minimum required version of CMake to 2.8.4 or higher. Then teach your project to build on Cygwin without WIN32. Call Stack (most recent call first): /usr/share/cmake-2.8.7/Modules/CMakeSystemSpecificInformation.cmake:36 (INCLUDE) CMakeLists.txt:2 (project) CMake Error: your C compiler: "/cygdrive/c/Library/Petsc/petsc-3.3/bin/win32fe/win32fe cl" was not found. Please set CMAKE_C_COMPILER to a valid compiler path or name. 
CMake Error: Internal CMake error, TryCompile configure of cmake failed CMake Error at /usr/share/cmake-2.8.7/Modules/CMakeTestCCompiler.cmake:52 (MESSAGE): The C compiler "/cygdrive/c/Library/Petsc/petsc-3.3/bin/win32fe/win32fe cl" is not able to compile a simple test program. It fails with the following output: CMake will not be able to correctly generate this project. Call Stack (most recent call first): CMakeLists.txt:2 (project) CMake Error: your C compiler: "/cygdrive/c/Library/Petsc/petsc-3.3/bin/win32fe/win32fe cl" was not found. Please set CMAKE_C_COMPILER to a valid compiler path or name. ******************************************************************************* File "./config/configure.py", line 311, in petsc_configure framework.configure(out = sys.stdout) File "/cygdrive/c/Library/Petsc/petsc-3.3/config/BuildSystem/config/framework.py", line 933, in configure child.configure() File "/cygdrive/c/Library/Petsc/petsc-3.3/config/BuildSystem/config/package.py", line 526, in configure self.executeTest(self.configureLibrary) File "/cygdrive/c/Library/Petsc/petsc-3.3/config/BuildSystem/config/base.py", line 115, in executeTest ret = apply(test, args,kargs) File "/cygdrive/c/Library/Petsc/petsc-3.3/config/BuildSystem/config/package.py", line 453, in configureLibrary for location, directory, lib, incl in self.generateGuesses(): File "/cygdrive/c/Library/Petsc/petsc-3.3/config/BuildSystem/config/package.py", line 229, in generateGuesses d = self.checkDownload(1) File "/cygdrive/c/Library/Petsc/petsc-3.3/config/BuildSystem/config/package.py", line 320, in checkDownload return self.getInstallDir() File "/cygdrive/c/Library/Petsc/petsc-3.3/config/BuildSystem/config/package.py", line 184, in getInstallDir return os.path.abspath(self.Install()) File "/cygdrive/c/Library/Petsc/petsc-3.3/config/PETSc/packages/metis.py", line 76, in Install raise RuntimeError('Error running configure on METIS: '+str(e)) Mengda -------------- next part -------------- An HTML attachment was scrubbed... URL: From wumengda at gmail.com Sun Jul 22 14:51:15 2012 From: wumengda at gmail.com (Mengda Wu) Date: Sun, 22 Jul 2012 15:51:15 -0400 Subject: [petsc-users] Petsc for solving a sparse under-determined system Message-ID: Hi all, How should I use Petsc to solve a sparse under-determined system? Basically, I need a quick solution to solve Ax=b, where A is a sparse (possibly nonsymmetric) matrix of size m x n, where m < n. I know SPQR(http://www.cise.ufl.edu/research/sparse/SPQR/) can do this. But if Petsc has some functions or wrappers, it will be great to stick with Petsc. Thanks, Mengda From knepley at gmail.com Sun Jul 22 17:38:28 2012 From: knepley at gmail.com (Matthew Knepley) Date: Sun, 22 Jul 2012 17:38:28 -0500 Subject: [petsc-users] Petsc for solving a sparse under-determined system In-Reply-To: References: Message-ID: On Sun, Jul 22, 2012 at 2:51 PM, Mengda Wu wrote: > Hi all, > > How should I use Petsc to solve a sparse under-determined system? > Basically, I need a quick solution to solve > Ax=b, where A is a sparse (possibly nonsymmetric) matrix of size m x n, > where m < n. > I know SPQR(http://www.cise.ufl.edu/research/sparse/SPQR/) can do > this. But if Petsc has some functions or > wrappers, it will be great to stick with Petsc. > Have you tried LSQR, SYMMLQ, or CGNE? Matt > Thanks, > Mengda > -- What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. -- Norbert Wiener -------------- next part -------------- An HTML attachment was scrubbed...
URL: From balay at mcs.anl.gov Sun Jul 22 21:02:50 2012 From: balay at mcs.anl.gov (Satish Balay) Date: Sun, 22 Jul 2012 21:02:50 -0500 (CDT) Subject: [petsc-users] Error compiling Petsc-3.3 and metis using Visual C++ 2008 In-Reply-To: References: Message-ID: On Sun, 22 Jul 2012, Mengda Wu wrote: > Hi All, > > I am trying to compiling Petsc-3.3 using VC2008. The command I am using > under cygwin command window is > > ./config/configure.py --with-cc='cl' --with-fc=0 --with-cxx='cl' Thats probably --with-cc='win32fe cl' --with-cxx='win32fe cl' [for petsc build to work] However cmake does not like the notation 'win32fe cl'. So --download-metis is likely to fail. I'm not sure if its possible to install metis/parmetis this way. Perhaps its possible to install Metis separately with MS compiler using non-cygwin cmake - and then specify -with-metis-include --with-metis-lib options. [I haven't checked to see if this would work or not] BTW: PETSc does not use metis directly. Do you really need metis? Satish > --with-debugging=0 --download-f2cblaslapack --download-metis > --with-sowing=0 --with-c2html -CFLAGS='-MD -wd4996' -CXXFLAGS='-MD -wd4996' > > f2cblaslapack is compiled fine. But the metis step has some errors > (extracted from configure.log). BTW, I am using Cygwin's cmake. > > =========================================================================================== > Error running configure on METIS: Could not execute "cd > /cygdrive/c/Library/Petsc/petsc-3.3/externalpackages/metis-5.0.2-p3/arch-mswin-c-opt > && /usr/bin/cmake .. > -DCMAKE_INSTALL_PREFIX=/cygdrive/c/Library/Petsc/petsc-3.3/arch-mswin-c-opt > -DCMAKE_VERBOSE_MAKEFILE=1 -DGKLIB_PATH=../GKlib > -DCMAKE_C_COMPILER="/cygdrive/c/Library/Petsc/petsc-3.3/bin/win32fe/win32fe > cl" -DCMAKE_C_FLAGS:STRING="-MD -wd4996 -O2 " > -DMETIS_USE_DOUBLEPRECISION=1": > -- The C compiler identification is unknown > -- The CXX compiler identification is MSVC > -- Check for working C compiler: > /cygdrive/c/Library/Petsc/petsc-3.3/bin/win32fe/win32fe cl > -- Check for working C compiler: > /cygdrive/c/Library/Petsc/petsc-3.3/bin/win32fe/win32fe cl -- broken > -- Configuring incomplete, errors occurred! > CMake Warning at /usr/share/cmake-2.8.7/Modules/Platform/CYGWIN.cmake:15 > (message): > CMake no longer defines WIN32 on Cygwin! > > (1) If you are just trying to build this project, ignore this warning or > quiet it by setting CMAKE_LEGACY_CYGWIN_WIN32=0 in your environment or in > the CMake cache. If later configuration or build errors occur then this > project may have been written under the assumption that Cygwin is WIN32. > In that case, set CMAKE_LEGACY_CYGWIN_WIN32=1 instead. > > (2) If you are developing this project, add the line > > set(CMAKE_LEGACY_CYGWIN_WIN32 0) # Remove when CMake >= 2.8.4 is > required > > at the top of your top-level CMakeLists.txt file or set the minimum > required version of CMake to 2.8.4 or higher. Then teach your project to > build on Cygwin without WIN32. > Call Stack (most recent call first): > /usr/share/cmake-2.8.7/Modules/CMakeSystemSpecificInformation.cmake:36 > (INCLUDE) > CMakeLists.txt:2 (project) > > > CMake Error: your C compiler: > "/cygdrive/c/Library/Petsc/petsc-3.3/bin/win32fe/win32fe cl" was not > found. Please set CMAKE_C_COMPILER to a valid compiler path or name. 
> CMake Error: Internal CMake error, TryCompile configure of cmake failed > CMake Error at /usr/share/cmake-2.8.7/Modules/CMakeTestCCompiler.cmake:52 > (MESSAGE): > The C compiler "/cygdrive/c/Library/Petsc/petsc-3.3/bin/win32fe/win32fe > cl" > is not able to compile a simple test program. > > It fails with the following output: > > > > > > CMake will not be able to correctly generate this project. > Call Stack (most recent call first): > CMakeLists.txt:2 (project) > > > CMake Error: your C compiler: > "/cygdrive/c/Library/Petsc/petsc-3.3/bin/win32fe/win32fe cl" was not > found. Please set CMAKE_C_COMPILER to a valid compiler path or name. > ******************************************************************************* > File "./config/configure.py", line 311, in petsc_configure > framework.configure(out = sys.stdout) > File > "/cygdrive/c/Library/Petsc/petsc-3.3/config/BuildSystem/config/framework.py", > line 933, in configure > child.configure() > File > "/cygdrive/c/Library/Petsc/petsc-3.3/config/BuildSystem/config/package.py", > line 526, in configure > self.executeTest(self.configureLibrary) > File > "/cygdrive/c/Library/Petsc/petsc-3.3/config/BuildSystem/config/base.py", > line 115, in executeTest > ret = apply(test, args,kargs) > File > "/cygdrive/c/Library/Petsc/petsc-3.3/config/BuildSystem/config/package.py", > line 453, in configureLibrary > for location, directory, lib, incl in self.generateGuesses(): > File > "/cygdrive/c/Library/Petsc/petsc-3.3/config/BuildSystem/config/package.py", > line 229, in generateGuesses > d = self.checkDownload(1) > File > "/cygdrive/c/Library/Petsc/petsc-3.3/config/BuildSystem/config/package.py", > line 320, in checkDownload > return self.getInstallDir() > File > "/cygdrive/c/Library/Petsc/petsc-3.3/config/BuildSystem/config/package.py", > line 184, in getInstallDir > return os.path.abspath(self.Install()) > File > "/cygdrive/c/Library/Petsc/petsc-3.3/config/PETSc/packages/metis.py", line > 76, in Install > raise RuntimeError('Error running configure on METIS: '+str(e)) > > > > > Mengda > From juhaj at iki.fi Sun Jul 22 21:22:35 2012 From: juhaj at iki.fi (Juha =?iso-8859-1?q?J=E4ykk=E4?=) Date: Mon, 23 Jul 2012 04:22:35 +0200 Subject: [petsc-users] sigsegvs Message-ID: <201207230422.41344.juhaj@iki.fi> Hi list! Petsc3.2-p7, make test fails and at a closer look, src/snes/examples/tutorials/ex19 gives: orterun -n 1 ./ex19 -dmmg_nlevels 4 -snes_monitor_short - on_error_attach_debugger lid velocity = 0.0016, prandtl # = 1, grashof # = 1 [0]PETSC ERROR: ------------------------------------------------------------------------ [0]PETSC ERROR: Caught signal number 11 SEGV: Segmentation Violation, probably memory access out of range [0]PETSC ERROR: Try option -start_in_debugger or -on_error_attach_debugger [0]PETSC ERROR: or see http://www.mcs.anl.gov/petsc/petsc- as/documentation/faq.html#valgrind[0]PETSC ERROR: or try http://valgrind.org on GNU/linux and Apple Mac OS X to find memory corruption errors [0]PETSC ERROR: likely location of problem given in stack below [0]PETSC ERROR: --------------------- Stack Frames ------------------------------------ [0]PETSC ERROR: Note: The EXACT line numbers in the stack are not available, [0]PETSC ERROR: INSTEAD the line number of the start of the function [0]PETSC ERROR: is given. 
[0]PETSC ERROR: [0] VecNorm_Seq line 236 src/vec/vec/impls/seq/bvec2.c [0]PETSC ERROR: [0] VecNormBegin line 479 src/vec/vec/utils/comb.c [0]PETSC ERROR: [0] SNESSolve_LS line 142 src/snes/impls/ls/ls.c [0]PETSC ERROR: [0] SNESSolve line 2647 src/snes/interface/snes.c [0]PETSC ERROR: [0] DMMGSolveSNES line 538 src/snes/utils/damgsnes.c [0]PETSC ERROR: [0] DMMGSolve line 303 src/snes/utils/damg.c [0]PETSC ERROR: User provided function() line 0 in unknown directory unknown file -------------------------------------------------------------------------- Any idea where this could come from? Worst of all, if I run this in debugger, it never sigsegvs, likewise under valgrind. Sometimes (about 1/3) it also succeeds without a debugger, but even when not segfaulting, it gives wrong results (so I assume not segfaulting is just good (or bad?) luck): 2,5c2,3 < 0 SNES Function norm 0.0406612 < 1 SNES Function norm 3.33175e-06 < 2 SNES Function norm 1.092e-11 < Number of Newton iterations = 2 --- > 0 SNES Function norm < 1.e-11 > Number of Newton iterations = 0 7,10c5,6 < 0 SNES Function norm 0.0406612 < 1 SNES Function norm 3.33175e-06 < 2 SNES Function norm 1.092e-11 < Number of Newton iterations = 2 --- > 0 SNES Function norm < 1.e-11 > Number of Newton iterations = 0 Any help appreciated, thanks. Juha -- ----------------------------------------------- | Juha J?ykk?, juhaj at iki.fi | | http://koti.kapsi.fi/~juhaj/ | ----------------------------------------------- -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 836 bytes Desc: This is a digitally signed message part. URL: From knepley at gmail.com Sun Jul 22 21:41:06 2012 From: knepley at gmail.com (Matthew Knepley) Date: Sun, 22 Jul 2012 21:41:06 -0500 Subject: [petsc-users] sigsegvs In-Reply-To: <201207230422.41344.juhaj@iki.fi> References: <201207230422.41344.juhaj@iki.fi> Message-ID: On Sun, Jul 22, 2012 at 9:22 PM, Juha J?ykk? wrote: > Hi list! > > Petsc3.2-p7, make test fails and at a closer look, > src/snes/examples/tutorials/ex19 gives: > Is this complex? There was a problem with the complex dot product for some BLAS. If this is the problem, you can reconfigure with --download-f-blas-lapack. Also, upgrading should fix it. Matt > orterun -n 1 ./ex19 -dmmg_nlevels 4 -snes_monitor_short - > on_error_attach_debugger > lid velocity = 0.0016, prandtl # = 1, grashof # = 1 > [0]PETSC ERROR: > ------------------------------------------------------------------------ > [0]PETSC ERROR: Caught signal number 11 SEGV: Segmentation Violation, > probably > memory access out of range > [0]PETSC ERROR: Try option -start_in_debugger or -on_error_attach_debugger > [0]PETSC ERROR: or see http://www.mcs.anl.gov/petsc/petsc- > as/documentation/faq.html#valgrind[0]PETSC ERROR: or try > http://valgrind.org > on GNU/linux and Apple Mac OS X to find memory corruption errors > [0]PETSC ERROR: likely location of problem given in stack below > [0]PETSC ERROR: --------------------- Stack Frames > ------------------------------------ > [0]PETSC ERROR: Note: The EXACT line numbers in the stack are not > available, > [0]PETSC ERROR: INSTEAD the line number of the start of the function > [0]PETSC ERROR: is given. 
> [0]PETSC ERROR: [0] VecNorm_Seq line 236 src/vec/vec/impls/seq/bvec2.c > [0]PETSC ERROR: [0] VecNormBegin line 479 src/vec/vec/utils/comb.c > [0]PETSC ERROR: [0] SNESSolve_LS line 142 src/snes/impls/ls/ls.c > [0]PETSC ERROR: [0] SNESSolve line 2647 src/snes/interface/snes.c > [0]PETSC ERROR: [0] DMMGSolveSNES line 538 src/snes/utils/damgsnes.c > [0]PETSC ERROR: [0] DMMGSolve line 303 src/snes/utils/damg.c > [0]PETSC ERROR: User provided function() line 0 in unknown directory > unknown > file > -------------------------------------------------------------------------- > > Any idea where this could come from? Worst of all, if I run this in > debugger, > it never sigsegvs, likewise under valgrind. Sometimes (about 1/3) it also > succeeds without a debugger, but even when not segfaulting, it gives wrong > results (so I assume not segfaulting is just good (or bad?) luck): > > 2,5c2,3 > < 0 SNES Function norm 0.0406612 > < 1 SNES Function norm 3.33175e-06 > < 2 SNES Function norm 1.092e-11 > < Number of Newton iterations = 2 > --- > > 0 SNES Function norm < 1.e-11 > > Number of Newton iterations = 0 > 7,10c5,6 > < 0 SNES Function norm 0.0406612 > < 1 SNES Function norm 3.33175e-06 > < 2 SNES Function norm 1.092e-11 > < Number of Newton iterations = 2 > --- > > 0 SNES Function norm < 1.e-11 > > Number of Newton iterations = 0 > > Any help appreciated, thanks. > Juha > > -- > ----------------------------------------------- > | Juha J?ykk?, juhaj at iki.fi | > | http://koti.kapsi.fi/~juhaj/ | > ----------------------------------------------- > -- What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. -- Norbert Wiener -------------- next part -------------- An HTML attachment was scrubbed... URL: From bsmith at mcs.anl.gov Sun Jul 22 22:04:13 2012 From: bsmith at mcs.anl.gov (Barry Smith) Date: Sun, 22 Jul 2012 22:04:13 -0500 Subject: [petsc-users] sigsegvs In-Reply-To: References: <201207230422.41344.juhaj@iki.fi> Message-ID: <15BEDC85-72ED-4D5B-A6D5-F44942F83A0D@mcs.anl.gov> On Jul 22, 2012, at 9:41 PM, Matthew Knepley wrote: > On Sun, Jul 22, 2012 at 9:22 PM, Juha J?ykk? wrote: > Hi list! > > Petsc3.2-p7, make test fails and at a closer look, > src/snes/examples/tutorials/ex19 gives: > > Is this complex? There was a problem with the complex dot product for some > BLAS. If this is the problem, you can reconfigure with --download-f-blas-lapack. > Also, upgrading to petsc-3.3 > should fix it. Since we changed the code for norm with complex. 
Barry > > Matt > > orterun -n 1 ./ex19 -dmmg_nlevels 4 -snes_monitor_short - > on_error_attach_debugger > lid velocity = 0.0016, prandtl # = 1, grashof # = 1 > [0]PETSC ERROR: > ------------------------------------------------------------------------ > [0]PETSC ERROR: Caught signal number 11 SEGV: Segmentation Violation, probably > memory access out of range > [0]PETSC ERROR: Try option -start_in_debugger or -on_error_attach_debugger > [0]PETSC ERROR: or see http://www.mcs.anl.gov/petsc/petsc- > as/documentation/faq.html#valgrind[0]PETSC ERROR: or try http://valgrind.org > on GNU/linux and Apple Mac OS X to find memory corruption errors > [0]PETSC ERROR: likely location of problem given in stack below > [0]PETSC ERROR: --------------------- Stack Frames > ------------------------------------ > [0]PETSC ERROR: Note: The EXACT line numbers in the stack are not available, > [0]PETSC ERROR: INSTEAD the line number of the start of the function > [0]PETSC ERROR: is given. > [0]PETSC ERROR: [0] VecNorm_Seq line 236 src/vec/vec/impls/seq/bvec2.c > [0]PETSC ERROR: [0] VecNormBegin line 479 src/vec/vec/utils/comb.c > [0]PETSC ERROR: [0] SNESSolve_LS line 142 src/snes/impls/ls/ls.c > [0]PETSC ERROR: [0] SNESSolve line 2647 src/snes/interface/snes.c > [0]PETSC ERROR: [0] DMMGSolveSNES line 538 src/snes/utils/damgsnes.c > [0]PETSC ERROR: [0] DMMGSolve line 303 src/snes/utils/damg.c > [0]PETSC ERROR: User provided function() line 0 in unknown directory unknown > file > -------------------------------------------------------------------------- > > Any idea where this could come from? Worst of all, if I run this in debugger, > it never sigsegvs, likewise under valgrind. Sometimes (about 1/3) it also > succeeds without a debugger, but even when not segfaulting, it gives wrong > results (so I assume not segfaulting is just good (or bad?) luck): > > 2,5c2,3 > < 0 SNES Function norm 0.0406612 > < 1 SNES Function norm 3.33175e-06 > < 2 SNES Function norm 1.092e-11 > < Number of Newton iterations = 2 > --- > > 0 SNES Function norm < 1.e-11 > > Number of Newton iterations = 0 > 7,10c5,6 > < 0 SNES Function norm 0.0406612 > < 1 SNES Function norm 3.33175e-06 > < 2 SNES Function norm 1.092e-11 > < Number of Newton iterations = 2 > --- > > 0 SNES Function norm < 1.e-11 > > Number of Newton iterations = 0 > > Any help appreciated, thanks. > Juha > > -- > ----------------------------------------------- > | Juha J?ykk?, juhaj at iki.fi | > | http://koti.kapsi.fi/~juhaj/ | > ----------------------------------------------- > > > > -- > What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. > -- Norbert Wiener From juhaj at iki.fi Mon Jul 23 04:35:53 2012 From: juhaj at iki.fi (Juha =?utf-8?q?J=C3=A4ykk=C3=A4?=) Date: Mon, 23 Jul 2012 11:35:53 +0200 Subject: [petsc-users] sigsegvs In-Reply-To: References: <201207230422.41344.juhaj@iki.fi> Message-ID: <201207231135.59492.juhaj@iki.fi> > Is this complex? There was a problem with the complex dot product for some > BLAS. If this is the problem, you can reconfigure with > --download-f-blas-lapack. > Also, upgrading should fix it. I did not specify the scalar type, so I assume it is real since configure says --with-scalar-type= Specify real or complex numbers current: real But I will retry with --with-scalar-type=real and --download-f-blas-lapack anyway. This happens with two different versions of blas/lapack: acml and mkl, though. 
Cheers, Juha -- ----------------------------------------------- | Juha J?ykk?, juhaj at iki.fi | | http://koti.kapsi.fi/~juhaj/ | ----------------------------------------------- -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 836 bytes Desc: This is a digitally signed message part. URL: From john.fettig at gmail.com Mon Jul 23 08:29:27 2012 From: john.fettig at gmail.com (John Fettig) Date: Mon, 23 Jul 2012 09:29:27 -0400 Subject: [petsc-users] Error compiling Petsc-3.3 and metis using Visual C++ 2008 In-Reply-To: References: Message-ID: On Sun, Jul 22, 2012 at 10:02 PM, Satish Balay wrote: > Perhaps its possible to install Metis separately with MS compiler > using non-cygwin cmake - and then specify -with-metis-include > --with-metis-lib options. [I haven't checked to see if this would work > or not] I can confirm that this method does work. John From adam.stanier-2 at postgrad.manchester.ac.uk Mon Jul 23 12:03:51 2012 From: adam.stanier-2 at postgrad.manchester.ac.uk (Adam Stanier) Date: Mon, 23 Jul 2012 17:03:51 +0000 Subject: [petsc-users] PETSC_NULL_FUNCTION in petsc-3.2-p7/src/snes/examples/tests/ex14f.F Message-ID: <7A7F2577D3A89844AB5C35FDD60F77EA061E84@MBXP07.ds.man.ac.uk> Hello, I am getting the following error message when linking a code: ../solver_3.2/libsel.a(p2_snes.o): In function `__p2_snes_mod__p2_snes_error': /home/stanier/hifi/solver_3.2/p2_snes.F:450: undefined reference to `__local_mod__petsc_null_function' collect2: ld returned 1 exit status I have noticed that src/snes/examples/tests/ex14f.F also gives an error message related to PETSC_NULL_FUNCTION, and make ex14f fails to make ex14f.o: In function `MAIN__': /home/stanier/soft/petsc-3.2-p7/src/snes/examples/tests/ex14f.F:182: undefined reference to `__petscmod__petsc_null_function' collect2: ld returned 1 exit status Also, have checked this with another PETSc user with version 3.2-p7 and version 3.3- the same error occurs. My configure options are ./configure --with-cc=gcc --with-fc=gfortran --download-f-blas-lapack --download-mpich --download-blacs --download-scalapack --download-parmetis --download-mumps --download-superlu --download-hypre I was wondering if anyone else has had this problem, and knew how to get around it. Thanks, Adam -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From balay at mcs.anl.gov Mon Jul 23 12:36:23 2012 From: balay at mcs.anl.gov (Satish Balay) Date: Mon, 23 Jul 2012 12:36:23 -0500 (CDT) Subject: [petsc-users] PETSC_NULL_FUNCTION in petsc-3.2-p7/src/snes/examples/tests/ex14f.F In-Reply-To: <7A7F2577D3A89844AB5C35FDD60F77EA061E84@MBXP07.ds.man.ac.uk> References: <7A7F2577D3A89844AB5C35FDD60F77EA061E84@MBXP07.ds.man.ac.uk> Message-ID: On Mon, 23 Jul 2012, Adam Stanier wrote: > Hello, > > I am getting the following error message when linking a code: > > ../solver_3.2/libsel.a(p2_snes.o): In function `__p2_snes_mod__p2_snes_error': > /home/stanier/hifi/solver_3.2/p2_snes.F:450: undefined reference to `__local_mod__petsc_null_function' > collect2: ld returned 1 exit status > > I have noticed that src/snes/examples/tests/ex14f.F also gives an error message related to PETSC_NULL_FUNCTION, and make ex14f fails to make > > ex14f.o: In function `MAIN__': > /home/stanier/soft/petsc-3.2-p7/src/snes/examples/tests/ex14f.F:182: undefined reference to `__petscmod__petsc_null_function' > collect2: ld returned 1 exit status > > Also, have checked this with another PETSc user with version 3.2-p7 and version 3.3- the same error occurs. With petsc-3.3 - there is one compile error which the following patch fixes. But i don't see the error you mention. diff --git a/src/snes/examples/tests/ex14f.F b/src/snes/examples/tests/ex14f.F --- a/src/snes/examples/tests/ex14f.F +++ b/src/snes/examples/tests/ex14f.F @@ -137,7 +137,7 @@ call VecDuplicate(x,U,ierr) call PetscObjectSetName(U,'Exact Solution',ierr) - call MatCreateAIJ(PETSC_COMM_WORLD,PETSC_DECIDE,PETSC_DECIDE,N, & + call MatCreateAIJ(PETSC_COMM_WORLD,PETSC_DECIDE,PETSC_DECIDE,N, & & N,i3,PETSC_NULL_INTEGER,i0,PETSC_NULL_INTEGER,J,ierr) call MatGetType(J,matrixname,ierr) >>>>>>>>>>>>>>>>> balay at petsc:/sandbox/balay/petsc33 $ ./configure --with-cc=gcc --with-fc=gfortran --download-f-blas-lapack --download-mpich --download-blacs --download-scalapack --download-parmetis --download-mumps --download-superlu --download-hypre PETSC_ARCH=arch-test --download-metis balay at petsc:/sandbox/balay/petsc33 $ make PETSC_DIR=/sandbox/balay/petsc33 PETSC_ARCH=arch-test all Running test examples to verify correct installation Using PETSC_DIR=/sandbox/balay/petsc33 and PETSC_ARCH=arch-test C/C++ example src/snes/examples/tutorials/ex19 run successfully with 1 MPI process C/C++ example src/snes/examples/tutorials/ex19 run successfully with 2 MPI processes Fortran example src/snes/examples/tutorials/ex5f run successfully with 1 MPI process Completed test examples balay at petsc:/sandbox/balay/petsc33 $ cd src/snes/examples/tests/ balay at petsc:/sandbox/balay/petsc33/src/snes/examples/tests $ make ex14f /sandbox/balay/petsc33/arch-test/bin/mpif90 -c -Wall -Wno-unused-variable -g -I/sandbox/balay/petsc33/include -I/sandbox/balay/petsc33/arch-test/include -o ex14f.o ex14f.F /sandbox/balay/petsc33/arch-test/bin/mpif90 -Wall -Wno-unused-variable -g -o ex14f ex14f.o -L/sandbox/balay/petsc33/arch-test/lib -lpetsc -lX11 -lpthread -Wl,-rpath,/sandbox/balay/petsc33/arch-test/lib -lcmumps -ldmumps -lsmumps -lzmumps -lmumps_common -lpord -lscalapack -lblacs -lparmetis -lmetis -lHYPRE -L/usr/lib/gcc/x86_64-linux-gnu/4.4.3 -L/usr/lib/x86_64-linux-gnu -lmpichcxx -lstdc++ -lsuperlu_4.3 -lflapack -lfblas -lmpichf90 -lgfortran -lm -lm -lmpichcxx -lstdc++ -lmpichcxx -lstdc++ -ldl -lmpich -lopa -lmpl -lrt -lpthread -lgcc_s -ldl /bin/rm -f ex14f.o balay at petsc:/sandbox/balay/petsc33/src/snes/examples/tests $ ./ex14f Jacobian is built ... 
Jacobian is built ... Jacobian is built ... Jacobian is built ... Number of SNES iterations = 7 balay at petsc:/sandbox/balay/petsc33/src/snes/examples/tests $ <<<<<<<<<<<<<<<<<<<<<<<<< Satish > > My configure options are ./configure --with-cc=gcc --with-fc=gfortran --download-f-blas-lapack --download-mpich --download-blacs --download-scalapack --download-parmetis --download-mumps --download-superlu --download-hypre > > I was wondering if anyone else has had this problem, and knew how to get around it. > > Thanks, > Adam > > From juhaj at iki.fi Mon Jul 23 18:05:56 2012 From: juhaj at iki.fi (Juha =?utf-8?q?J=C3=A4ykk=C3=A4?=) Date: Tue, 24 Jul 2012 01:05:56 +0200 Subject: [petsc-users] sigsegvs In-Reply-To: References: <201207230422.41344.juhaj@iki.fi> Message-ID: <201207240106.00251.juhaj@iki.fi> > --download-f-blas-lapack. This, or rather --download-f2cblaslapack=1 did the trick. So is this a strong indication that both acml and mkl are somehow broken? If it is, I will pester the cluster admins to fix them. Thanks for the help. -Juha -- ----------------------------------------------- | Juha J?ykk?, juhaj at iki.fi | | http://koti.kapsi.fi/~juhaj/ | ----------------------------------------------- -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 836 bytes Desc: This is a digitally signed message part. URL: From knepley at gmail.com Mon Jul 23 18:12:50 2012 From: knepley at gmail.com (Matthew Knepley) Date: Mon, 23 Jul 2012 18:12:50 -0500 Subject: [petsc-users] sigsegvs In-Reply-To: <201207240106.00251.juhaj@iki.fi> References: <201207230422.41344.juhaj@iki.fi> <201207240106.00251.juhaj@iki.fi> Message-ID: On Mon, Jul 23, 2012 at 6:05 PM, Juha J?ykk? wrote: > > --download-f-blas-lapack. > > This, or rather --download-f2cblaslapack=1 did the trick. So is this a > strong > indication that both acml and mkl are somehow broken? If it is, I will > pester > the cluster admins to fix them. > Its likely that the breakage was the thing I refered to in the last mail. Upgrading to 3.3 should fix it. Matt > Thanks for the help. > -Juha > > -- > ----------------------------------------------- > | Juha J?ykk?, juhaj at iki.fi | > | http://koti.kapsi.fi/~juhaj/ | > ----------------------------------------------- > -- What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. -- Norbert Wiener -------------- next part -------------- An HTML attachment was scrubbed... URL: From npezolano at gmail.com Mon Jul 23 20:55:16 2012 From: npezolano at gmail.com (Nicholas Pezolano) Date: Mon, 23 Jul 2012 21:55:16 -0400 Subject: [petsc-users] Fwd: matrix dot product multiplication with petsc In-Reply-To: References: Message-ID: Hi has any one been able to come up with an efficient work around to do a matrix matrix dot product with petsc? Other then pulling every column into vectors and using vecdot on them -- Nicholas Pezolano Department of Applied Math & Statistics State University of New York at Stony Brook -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From knepley at gmail.com Mon Jul 23 20:57:32 2012 From: knepley at gmail.com (Matthew Knepley) Date: Mon, 23 Jul 2012 20:57:32 -0500 Subject: [petsc-users] Fwd: matrix dot product multiplication with petsc In-Reply-To: References: Message-ID: On Mon, Jul 23, 2012 at 8:55 PM, Nicholas Pezolano wrote: > Hi has any one been able to come up with an efficient work around to do a > matrix matrix dot product with petsc? Other then pulling every column into > vectors and using vecdot on them http://www.mcs.anl.gov/petsc/petsc-current/docs/manualpages/Mat/MatMatMult.html? Matt > -- > Nicholas Pezolano > Department of Applied Math & Statistics > State University of New York at Stony Brook > > -- What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. -- Norbert Wiener -------------- next part -------------- An HTML attachment was scrubbed... URL: From jedbrown at mcs.anl.gov Mon Jul 23 20:58:17 2012 From: jedbrown at mcs.anl.gov (Jed Brown) Date: Mon, 23 Jul 2012 20:58:17 -0500 Subject: [petsc-users] Fwd: matrix dot product multiplication with petsc In-Reply-To: References: Message-ID: On Mon, Jul 23, 2012 at 8:55 PM, Nicholas Pezolano wrote: > Hi has any one been able to come up with an efficient work around to do a > matrix matrix dot product with petsc? Other then pulling every column into > vectors and using vecdot on them What is the higher level thing you are trying to do? Do the matrices have the same sparsity pattern? -------------- next part -------------- An HTML attachment was scrubbed... URL: From alexvg77 at gmail.com Mon Jul 23 21:08:32 2012 From: alexvg77 at gmail.com (Alexander Goncharov) Date: Mon, 23 Jul 2012 19:08:32 -0700 Subject: [petsc-users] Pthread support Message-ID: <1343095712.2963.10.camel@DeathStar> Hello! I have a question about pthread support in PETSC. I could not find it in the development version. Is it going to be supported in the new release? The reason I tried development version is because in petsc-3.3 compiled with pthreadclasses=1 all jobs would be sent to one core, if I start executables separately. Same for MPI, mpirun -np 4 would show up as 4 processes with 25% CPU usage each, in the "top" output. Without pthreadclasses all is ok. I would have processes with each utilizing separate core at 100%. Have you come across such a behavior? I ran ex2 from ksp tutorials. Thank you! Alexander. From npezolano at gmail.com Mon Jul 23 21:18:07 2012 From: npezolano at gmail.com (Nicholas Pezolano) Date: Mon, 23 Jul 2012 22:18:07 -0400 Subject: [petsc-users] Fwd: matrix dot product multiplication with petsc Message-ID: @Matt, thanks but this is just general matrix multiplication , not the dot product multiplication's for a matrix @Jed Brown I am trying to draw random numbers from a complex multivariate mixture distribution needed for a monte carlo simulation. X:= mu + Lambda.* W + gamma.*sqrt(W) , where they are all NxN dense matrices of equal size where N ranges from 100,000 to 2,000,000. I need to do the dot product multiplication and addition for every part of this distribution. Would petsc be the right tool for this or should i just stick to C and mpi. I've already used many petsc functions that have made life much eaiser -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From knepley at gmail.com Mon Jul 23 21:20:31 2012 From: knepley at gmail.com (Matthew Knepley) Date: Mon, 23 Jul 2012 21:20:31 -0500 Subject: [petsc-users] Fwd: matrix dot product multiplication with petsc In-Reply-To: References: Message-ID: On Mon, Jul 23, 2012 at 9:18 PM, Nicholas Pezolano wrote: > @Matt, > > thanks but this is just general matrix multiplication , not the dot > product multiplication's for a matrix > Please explain to me the difference. Matt > > @Jed Brown > > I am trying to draw random numbers from a complex multivariate mixture > distribution needed for a monte carlo simulation. X:= mu + Lambda.* W + > gamma.*sqrt(W) , where they are all NxN dense matrices of equal size where > N ranges from 100,000 to 2,000,000. I need to do the dot product > multiplication and addition for every part of this distribution. Would > petsc be the right tool for this or should i just stick to C and mpi. I've > already used many petsc functions that have made life much eaiser > -- What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. -- Norbert Wiener -------------- next part -------------- An HTML attachment was scrubbed... URL: From jedbrown at mcs.anl.gov Mon Jul 23 21:41:53 2012 From: jedbrown at mcs.anl.gov (Jed Brown) Date: Mon, 23 Jul 2012 21:41:53 -0500 Subject: [petsc-users] Pthread support In-Reply-To: <1343095712.2963.10.camel@DeathStar> References: <1343095712.2963.10.camel@DeathStar> Message-ID: On Mon, Jul 23, 2012 at 9:08 PM, Alexander Goncharov wrote: > Hello! > > I have a question about pthread support in PETSC. I could not find it in > the development version. Is it going to be supported in the new release? > > The reason I tried development version is because in petsc-3.3 compiled > with pthreadclasses=1 all jobs would be sent to one core, if I start > executables separately. Same for MPI, mpirun -np 4 would show up as 4 > processes with 25% CPU usage each, in the "top" output. Without > pthreadclasses all is ok. I would have processes with each utilizing > separate core at 100%. Have you come across such a behavior? > Sounds like an affinity problem. When running with multiple MPI processes and threads, you generally have to specify the number of threads manually. The threading code is in the process of being overhauled to a cleaner and more flexible design. It's not ready for use yet, but should be in a couple months. -------------- next part -------------- An HTML attachment was scrubbed... URL: From jedbrown at mcs.anl.gov Mon Jul 23 21:47:48 2012 From: jedbrown at mcs.anl.gov (Jed Brown) Date: Mon, 23 Jul 2012 21:47:48 -0500 Subject: [petsc-users] Fwd: matrix dot product multiplication with petsc In-Reply-To: References: Message-ID: On Mon, Jul 23, 2012 at 9:18 PM, Nicholas Pezolano wrote: > I am trying to draw random numbers from a complex multivariate mixture > distribution needed for a monte carlo simulation. X:= mu + Lambda.* W + > gamma.*sqrt(W) , where they are all NxN dense matrices of equal size where > N ranges from 100,000 to 2,000,000. Are these really matrices in the sense of linear operators? Are you also doing operations like MatMult or KSPSolve with them? If not, then from PETSc's perspective, these things are vectors and should be stored that way. If you store them as MatDense, you could use MatGetArray. I assume that you have 100 TB of memory or whatever is required to store these enormous dense beasts? 
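For the entrywise expression X = mu + Lambda .* W + gamma .* sqrt(W), the Vec interface already has the needed operations. A rough sketch, assuming Mu, Lambda, Gamma, and W are Vecs created with the same parallel layout and already filled (all names here are illustrative):

  Vec            X, SqrtW;
  PetscErrorCode ierr;

  ierr = VecDuplicate(W,&X);CHKERRQ(ierr);
  ierr = VecDuplicate(W,&SqrtW);CHKERRQ(ierr);
  ierr = VecCopy(W,SqrtW);CHKERRQ(ierr);
  ierr = VecSqrtAbs(SqrtW);CHKERRQ(ierr);                   /* entrywise sqrt (of the absolute value) */
  ierr = VecPointwiseMult(X,Lambda,W);CHKERRQ(ierr);        /* X = Lambda .* W          */
  ierr = VecAXPY(X,1.0,Mu);CHKERRQ(ierr);                   /* X = X + Mu               */
  ierr = VecPointwiseMult(SqrtW,Gamma,SqrtW);CHKERRQ(ierr); /* SqrtW = Gamma .* sqrt(W) */
  ierr = VecAXPY(X,1.0,SqrtW);CHKERRQ(ierr);                /* X = X + Gamma .* sqrt(W) */

The same arithmetic can be done on the raw arrays of MATDENSE matrices obtained with MatGetArray, but if the objects are never applied as linear operators there is little reason to store them as Mats in the first place.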
> I need to do the dot product multiplication and addition for every part of > this distribution. Would petsc be the right tool for this or should i just > stick to C and mpi. I've already used many petsc functions that have made > life much eaiser -------------- next part -------------- An HTML attachment was scrubbed... URL: From alexvg77 at gmail.com Mon Jul 23 22:00:54 2012 From: alexvg77 at gmail.com (Alexander Goncharov) Date: Mon, 23 Jul 2012 20:00:54 -0700 Subject: [petsc-users] Pthread support In-Reply-To: References: <1343095712.2963.10.camel@DeathStar> Message-ID: <1343098854.2963.21.camel@DeathStar> Hi Jed, thank you for a quick reply. If I run ex2 with -mat_type seqaijpthread -nthreads 4, everything runs fine. I see 400% utilization. The problem persists if I just start two instances of the ex2 from the console without seqaijpthread/seqpthread options. Just two single processor jobs. They show up as two 50% jobs in the "top" output and run slower. I hope I made my problem more clear. Thank you, Alexander. On Mon, 2012-07-23 at 21:41 -0500, Jed Brown wrote: > On Mon, Jul 23, 2012 at 9:08 PM, Alexander Goncharov > wrote: > Hello! > > I have a question about pthread support in PETSC. I could not > find it in > the development version. Is it going to be supported in the > new release? > > The reason I tried development version is because in petsc-3.3 > compiled > with pthreadclasses=1 all jobs would be sent to one core, if I > start > executables separately. Same for MPI, mpirun -np 4 would show > up as 4 > processes with 25% CPU usage each, in the "top" output. > Without > pthreadclasses all is ok. I would have processes with each > utilizing > separate core at 100%. Have you come across such a behavior? > > > Sounds like an affinity problem. When running with multiple MPI > processes and threads, you generally have to specify the number of > threads manually. > > > The threading code is in the process of being overhauled to a cleaner > and more flexible design. It's not ready for use yet, but should be in > a couple months. From jedbrown at mcs.anl.gov Mon Jul 23 22:04:22 2012 From: jedbrown at mcs.anl.gov (Jed Brown) Date: Mon, 23 Jul 2012 22:04:22 -0500 Subject: [petsc-users] Pthread support In-Reply-To: <1343098854.2963.21.camel@DeathStar> References: <1343095712.2963.10.camel@DeathStar> <1343098854.2963.21.camel@DeathStar> Message-ID: On Mon, Jul 23, 2012 at 10:00 PM, Alexander Goncharov wrote: > thank you for a quick reply. If I run ex2 with -mat_type seqaijpthread > -nthreads 4, everything runs fine. I see 400% utilization. > > The problem persists if I just start two instances of the ex2 from the > console without seqaijpthread/seqpthread options. Just two single > processor jobs. They show up as two 50% jobs in the "top" output and run > slower. I hope I made my problem more clear. > Does -nthreads 1 fix it? -------------- next part -------------- An HTML attachment was scrubbed... URL: From alexvg77 at gmail.com Mon Jul 23 22:13:47 2012 From: alexvg77 at gmail.com (Alexander Goncharov) Date: Mon, 23 Jul 2012 20:13:47 -0700 Subject: [petsc-users] Pthread support In-Reply-To: References: <1343095712.2963.10.camel@DeathStar> <1343098854.2963.21.camel@DeathStar> Message-ID: <1343099627.2963.24.camel@DeathStar> No, it did not fix it. Two instances 50% each. thank you. On Mon, 2012-07-23 at 22:04 -0500, Jed Brown wrote: > On Mon, Jul 23, 2012 at 10:00 PM, Alexander Goncharov > wrote: > thank you for a quick reply. 
If I run ex2 with -mat_type > seqaijpthread > -nthreads 4, everything runs fine. I see 400% utilization. > > The problem persists if I just start two instances of the ex2 > from the > console without seqaijpthread/seqpthread options. Just two > single > processor jobs. They show up as two 50% jobs in the "top" > output and run > slower. I hope I made my problem more clear. > > > Does -nthreads 1 fix it? From jedbrown at mcs.anl.gov Mon Jul 23 22:14:26 2012 From: jedbrown at mcs.anl.gov (Jed Brown) Date: Mon, 23 Jul 2012 22:14:26 -0500 Subject: [petsc-users] Pthread support In-Reply-To: <1343099627.2963.24.camel@DeathStar> References: <1343095712.2963.10.camel@DeathStar> <1343098854.2963.21.camel@DeathStar> <1343099627.2963.24.camel@DeathStar> Message-ID: On Mon, Jul 23, 2012 at 10:13 PM, Alexander Goncharov wrote: > No, it did not fix it. Two instances 50% each. What affinity is your mpiexec setting? -------------- next part -------------- An HTML attachment was scrubbed... URL: From alexvg77 at gmail.com Mon Jul 23 22:18:01 2012 From: alexvg77 at gmail.com (Alexander Goncharov) Date: Mon, 23 Jul 2012 20:18:01 -0700 Subject: [petsc-users] Pthread support In-Reply-To: References: <1343095712.2963.10.camel@DeathStar> <1343098854.2963.21.camel@DeathStar> <1343099627.2963.24.camel@DeathStar> Message-ID: <1343099881.2963.27.camel@DeathStar> I do not use MPI. I am just launching two instances of ex2 manually. On Mon, 2012-07-23 at 22:14 -0500, Jed Brown wrote: > On Mon, Jul 23, 2012 at 10:13 PM, Alexander Goncharov > wrote: > No, it did not fix it. Two instances 50% each. > > What affinity is your mpiexec setting? From bsmith at mcs.anl.gov Mon Jul 23 22:19:56 2012 From: bsmith at mcs.anl.gov (Barry Smith) Date: Mon, 23 Jul 2012 22:19:56 -0500 Subject: [petsc-users] Pthread support In-Reply-To: <1343099881.2963.27.camel@DeathStar> References: <1343095712.2963.10.camel@DeathStar> <1343098854.2963.21.camel@DeathStar> <1343099627.2963.24.camel@DeathStar> <1343099881.2963.27.camel@DeathStar> Message-ID: Yes, but is PETSc built with MPI? PETSc does not decide on which core to run the program, but MPI might. Barry On Jul 23, 2012, at 10:18 PM, Alexander Goncharov wrote: > I do not use MPI. I am just launching two instances of ex2 manually. > > On Mon, 2012-07-23 at 22:14 -0500, Jed Brown wrote: >> On Mon, Jul 23, 2012 at 10:13 PM, Alexander Goncharov >> wrote: >> No, it did not fix it. Two instances 50% each. >> >> What affinity is your mpiexec setting? > > From jedbrown at mcs.anl.gov Mon Jul 23 22:20:28 2012 From: jedbrown at mcs.anl.gov (Jed Brown) Date: Mon, 23 Jul 2012 22:20:28 -0500 Subject: [petsc-users] Pthread support In-Reply-To: <1343099881.2963.27.camel@DeathStar> References: <1343095712.2963.10.camel@DeathStar> <1343098854.2963.21.camel@DeathStar> <1343099627.2963.24.camel@DeathStar> <1343099881.2963.27.camel@DeathStar> Message-ID: On Mon, Jul 23, 2012 at 10:18 PM, Alexander Goncharov wrote: > I do not use MPI. I am just launching two instances of ex2 manually. This is an essential detail. You need to set them to have different affinity. We can't know about the affinity of other random processes on your machine. mpiexec provides a way to prevent MPI processes from conflicting. We do our best to prevent our own threads from conflicting. But after that, you're on your own. -------------- next part -------------- An HTML attachment was scrubbed... 
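For anyone reading along: on Linux, the external pinning being described can be done with taskset when the two runs are launched by hand, for example on a quad-core machine (the core ranges are illustrative, and later messages in this thread report that the petsc-3.3 pthread code overrides such a mask by always counting up from core 0):

$ taskset -c 0,1 ./ex2 -nthreads 2
$ taskset -c 2,3 ./ex2 -nthreads 2

When the processes are launched through MPI instead, the launcher's binding options (e.g. --bind-to-core with MPICH2's hydra mpiexec) play the same role.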
URL: From jedbrown at mcs.anl.gov Mon Jul 23 22:22:52 2012 From: jedbrown at mcs.anl.gov (Jed Brown) Date: Mon, 23 Jul 2012 22:22:52 -0500 Subject: [petsc-users] Pthread support In-Reply-To: References: <1343095712.2963.10.camel@DeathStar> <1343098854.2963.21.camel@DeathStar> <1343099627.2963.24.camel@DeathStar> <1343099881.2963.27.camel@DeathStar> Message-ID: On Mon, Jul 23, 2012 at 10:19 PM, Barry Smith wrote: > Yes, but is PETSc built with MPI? PETSc does not decide on which core to > run the program, but MPI might. Well, PETSc is setting affinity if it can when you use threads. -------------- next part -------------- An HTML attachment was scrubbed... URL: From alexvg77 at gmail.com Mon Jul 23 22:32:54 2012 From: alexvg77 at gmail.com (Alexander Goncharov) Date: Mon, 23 Jul 2012 20:32:54 -0700 Subject: [petsc-users] Pthread support In-Reply-To: References: <1343095712.2963.10.camel@DeathStar> <1343098854.2963.21.camel@DeathStar> <1343099627.2963.24.camel@DeathStar> <1343099881.2963.27.camel@DeathStar> Message-ID: <1343100774.2963.33.camel@DeathStar> Does it mean that a single process job should be started using mpiexec? Before I never used it for a sequential code. I just tried mpiexec -n 2 --bind-to-core ./ex2. Again had two processes 50% each. 4 jobs show up as 25% and so on. I have same problem on AMD quad-core, and Intel quad-core. On Mon, 2012-07-23 at 22:22 -0500, Jed Brown wrote: > On Mon, Jul 23, 2012 at 10:19 PM, Barry Smith > wrote: > Yes, but is PETSc built with MPI? PETSc does not decide on > which core to run the program, but MPI might. > > Well, PETSc is setting affinity if it can when you use threads. From bsmith at mcs.anl.gov Mon Jul 23 22:33:23 2012 From: bsmith at mcs.anl.gov (Barry Smith) Date: Mon, 23 Jul 2012 22:33:23 -0500 Subject: [petsc-users] Pthread support In-Reply-To: References: <1343095712.2963.10.camel@DeathStar> <1343098854.2963.21.camel@DeathStar> <1343099627.2963.24.camel@DeathStar> <1343099881.2963.27.camel@DeathStar> Message-ID: On Jul 23, 2012, at 10:22 PM, Jed Brown wrote: > On Mon, Jul 23, 2012 at 10:19 PM, Barry Smith wrote: > Yes, but is PETSc built with MPI? PETSc does not decide on which core to run the program, but MPI might. > > Well, PETSc is setting affinity if it can when you use threads. Jed, So maybe that is the problem. PETSc is forcing affinity to a hardware number core regardless of whether another PETSc program is running and has also forced that same core? Barry From alexvg77 at gmail.com Mon Jul 23 22:38:07 2012 From: alexvg77 at gmail.com (Alexander Goncharov) Date: Mon, 23 Jul 2012 20:38:07 -0700 Subject: [petsc-users] Pthread support In-Reply-To: References: <1343095712.2963.10.camel@DeathStar> <1343098854.2963.21.camel@DeathStar> <1343099627.2963.24.camel@DeathStar> <1343099881.2963.27.camel@DeathStar> Message-ID: <1343101087.2963.35.camel@DeathStar> Barry, I think this is what I am experiencing. Whether I use mpiexcec or start jobs manually, they all end up on the same core (0). Btw, this does not happen if compiled --wtih-pthreadclasses=0 Alex. On Mon, 2012-07-23 at 22:33 -0500, Barry Smith wrote: > On Jul 23, 2012, at 10:22 PM, Jed Brown wrote: > > > On Mon, Jul 23, 2012 at 10:19 PM, Barry Smith wrote: > > Yes, but is PETSc built with MPI? PETSc does not decide on which core to run the program, but MPI might. > > > > Well, PETSc is setting affinity if it can when you use threads. > > Jed, > > So maybe that is the problem. 
PETSc is forcing affinity to a hardware number core regardless of whether another PETSc program is running and has also forced that same core? > > Barry > > > > From jedbrown at mcs.anl.gov Mon Jul 23 22:42:21 2012 From: jedbrown at mcs.anl.gov (Jed Brown) Date: Mon, 23 Jul 2012 22:42:21 -0500 Subject: [petsc-users] Pthread support In-Reply-To: References: <1343095712.2963.10.camel@DeathStar> <1343098854.2963.21.camel@DeathStar> <1343099627.2963.24.camel@DeathStar> <1343099881.2963.27.camel@DeathStar> Message-ID: On Mon, Jul 23, 2012 at 10:33 PM, Barry Smith wrote: > So maybe that is the problem. PETSc is forcing affinity to a hardware > number core regardless of whether another PETSc program is running and has > also forced that same core? Yes. Now what's the right solution? Not set any affinity by default? Inherit the process affinity and subdivide it among threads? There is currently a somewhat crude mechanism in -threadcomm_affinities. -------------- next part -------------- An HTML attachment was scrubbed... URL: From abhyshr at mcs.anl.gov Tue Jul 24 01:30:04 2012 From: abhyshr at mcs.anl.gov (Shri) Date: Mon, 23 Jul 2012 23:30:04 -0700 Subject: [petsc-users] Pthread support In-Reply-To: <1343095712.2963.10.camel@DeathStar> References: <1343095712.2963.10.camel@DeathStar> Message-ID: Alex, We've taken out the old pthread code since it was experimental and we plan to have a more generic interface that can support different threading models rather than sticking to one. All the thread stuff can now be accessed through the threadcomm interface. If you run your code with --help you'll see the available options available with threadcomm. Shri Shri On Jul 23, 2012, at 7:08 PM, Alexander Goncharov wrote: > Hello! > > I have a question about pthread support in PETSC. I could not find it in > the development version. Is it going to be supported in the new release? > > The reason I tried development version is because in petsc-3.3 compiled > with pthreadclasses=1 all jobs would be sent to one core, if I start > executables separately. Same for MPI, mpirun -np 4 would show up as 4 > processes with 25% CPU usage each, in the "top" output. Without > pthreadclasses all is ok. I would have processes with each utilizing > separate core at 100%. Have you come across such a behavior? > > I ran ex2 from ksp tutorials. > > Thank you! > Alexander. > From abhyshr at mcs.anl.gov Tue Jul 24 01:32:12 2012 From: abhyshr at mcs.anl.gov (Shri) Date: Mon, 23 Jul 2012 23:32:12 -0700 Subject: [petsc-users] Pthread support In-Reply-To: <1343098854.2963.21.camel@DeathStar> References: <1343095712.2963.10.camel@DeathStar> <1343098854.2963.21.camel@DeathStar> Message-ID: On Jul 23, 2012, at 8:00 PM, Alexander Goncharov wrote: > Hi Jed, > > thank you for a quick reply. If I run ex2 with -mat_type seqaijpthread > -nthreads 4, everything runs fine. I see 400% utilization. I've taken out all this code from the development version. Are you using petsc-3.3 while running this? > > The problem persists if I just start two instances of the ex2 from the > console without seqaijpthread/seqpthread options. Just two single > processor jobs. They show up as two 50% jobs in the "top" output and run > slower. I hope I made my problem more clear. > > Thank you, > Alexander. > > On Mon, 2012-07-23 at 21:41 -0500, Jed Brown wrote: >> On Mon, Jul 23, 2012 at 9:08 PM, Alexander Goncharov >> wrote: >> Hello! >> >> I have a question about pthread support in PETSC. I could not >> find it in >> the development version. 
Is it going to be supported in the >> new release? >> >> The reason I tried development version is because in petsc-3.3 >> compiled >> with pthreadclasses=1 all jobs would be sent to one core, if I >> start >> executables separately. Same for MPI, mpirun -np 4 would show >> up as 4 >> processes with 25% CPU usage each, in the "top" output. >> Without >> pthreadclasses all is ok. I would have processes with each >> utilizing >> separate core at 100%. Have you come across such a behavior? >> >> >> Sounds like an affinity problem. When running with multiple MPI >> processes and threads, you generally have to specify the number of >> threads manually. >> >> >> The threading code is in the process of being overhauled to a cleaner >> and more flexible design. It's not ready for use yet, but should be in >> a couple months. > > From abhyshr at mcs.anl.gov Tue Jul 24 01:46:46 2012 From: abhyshr at mcs.anl.gov (Shri) Date: Mon, 23 Jul 2012 23:46:46 -0700 Subject: [petsc-users] Pthread support In-Reply-To: References: <1343095712.2963.10.camel@DeathStar> <1343098854.2963.21.camel@DeathStar> <1343099627.2963.24.camel@DeathStar> <1343099881.2963.27.camel@DeathStar> Message-ID: <80227B1A-197C-41EF-B5DF-D2F141C85103@mcs.anl.gov> The current "crude" default affinity setting is that for each process the first thread gets assigned to core number 0, second to core number 1, and so on (assuming that the OS numbers the cores sequentially). However, the OS could number the cores differently. For example on Intel Nehalem Architecture, cores on one socket have even numbers (0,2,4,6) and that on the other socket have odd (1,3,5,7). We currently don't have any mechanism by which we can figure out how the OS numbers the cores and set affinity accordingly. Shri On Jul 23, 2012, at 8:33 PM, Barry Smith wrote: > > On Jul 23, 2012, at 10:22 PM, Jed Brown wrote: > >> On Mon, Jul 23, 2012 at 10:19 PM, Barry Smith wrote: >> Yes, but is PETSc built with MPI? PETSc does not decide on which core to run the program, but MPI might. >> >> Well, PETSc is setting affinity if it can when you use threads. > > Jed, > > So maybe that is the problem. PETSc is forcing affinity to a hardware number core regardless of whether another PETSc program is running and has also forced that same core? > > Barry > > > > From adam.stanier-2 at postgrad.manchester.ac.uk Tue Jul 24 07:11:39 2012 From: adam.stanier-2 at postgrad.manchester.ac.uk (Adam Stanier) Date: Tue, 24 Jul 2012 12:11:39 +0000 Subject: [petsc-users] PETSC_NULL_FUNCTION in petsc-3.2-p7/src/snes/examples/tests/ex14f.F In-Reply-To: <7A7F2577D3A89844AB5C35FDD60F77EA061E84@MBXP07.ds.man.ac.uk> References: <7A7F2577D3A89844AB5C35FDD60F77EA061E84@MBXP07.ds.man.ac.uk> Message-ID: <7A7F2577D3A89844AB5C35FDD60F77EA061F67@MBXP07.ds.man.ac.uk> Satish, Thanks for the quick reply, I had corrected that bug in 3.3 you mentioned before getting the linking bug. I have also tried the same configure line (and also a simple configure: ./configure --with-cc=gcc --with-fc=gfortran --download-f-blas-lapack --download-mpich) on another cluster and got the same problem. I think the problem may be related to gfortran version Red Hat 4.1.2 (there is 4.1.2-14 on one cluster and 4.1.2-46 on another cluster). Another petsc user found ex14f to compile fine with red hat 4.4.6 gfortran. The only reference to PETSC_NULL_FUNCTION I can find is where it is declared with External PETSC_NULL_FUNCTION in petscsys.h. Is this set to zero somewhere, or is it just expected to return zero if it is not found? 
Thanks, Adam ________________________________ From: petsc-users-bounces at mcs.anl.gov [petsc-users-bounces at mcs.anl.gov] on behalf of Adam Stanier [adam.stanier-2 at postgrad.manchester.ac.uk] Sent: 23 July 2012 18:03 To: petsc-users at mcs.anl.gov Subject: Re: [petsc-users] PETSC_NULL_FUNCTION in petsc-3.2-p7/src/snes/examples/tests/ex14f.F Hello, I am getting the following error message when linking a code: ../solver_3.2/libsel.a(p2_snes.o): In function `__p2_snes_mod__p2_snes_error': /home/stanier/hifi/solver_3.2/p2_snes.F:450: undefined reference to `__local_mod__petsc_null_function' collect2: ld returned 1 exit status I have noticed that src/snes/examples/tests/ex14f.F also gives an error message related to PETSC_NULL_FUNCTION, and make ex14f fails to make ex14f.o: In function `MAIN__': /home/stanier/soft/petsc-3.2-p7/src/snes/examples/tests/ex14f.F:182: undefined reference to `__petscmod__petsc_null_function' collect2: ld returned 1 exit status Also, have checked this with another PETSc user with version 3.2-p7 and version 3.3- the same error occurs. My configure options are ./configure --with-cc=gcc --with-fc=gfortran --download-f-blas-lapack --download-mpich --download-blacs --download-scalapack --download-parmetis --download-mumps --download-superlu --download-hypre I was wondering if anyone else has had this problem, and knew how to get around it. Thanks, Adam -------------- next part -------------- An HTML attachment was scrubbed... URL: From adam.stanier-2 at postgrad.manchester.ac.uk Tue Jul 24 09:35:12 2012 From: adam.stanier-2 at postgrad.manchester.ac.uk (Adam Stanier) Date: Tue, 24 Jul 2012 14:35:12 +0000 Subject: [petsc-users] PETSC_NULL_FUNCTION in petsc-3.2-p7/src/snes/examples/tests/ex14f.F In-Reply-To: <7A7F2577D3A89844AB5C35FDD60F77EA061F67@MBXP07.ds.man.ac.uk> References: <7A7F2577D3A89844AB5C35FDD60F77EA061E84@MBXP07.ds.man.ac.uk>, <7A7F2577D3A89844AB5C35FDD60F77EA061F67@MBXP07.ds.man.ac.uk> Message-ID: <7A7F2577D3A89844AB5C35FDD60F77EA061FDF@MBXP07.ds.man.ac.uk> I got gcc and gfortran on the cluster updated to Red Hat 4.4.6-3. This solves the problem. Many Thanks, Adam ________________________________ From: petsc-users-bounces at mcs.anl.gov [petsc-users-bounces at mcs.anl.gov] on behalf of Adam Stanier [adam.stanier-2 at postgrad.manchester.ac.uk] Sent: 24 July 2012 13:11 To: PETSc users list Subject: Re: [petsc-users] PETSC_NULL_FUNCTION in petsc-3.2-p7/src/snes/examples/tests/ex14f.F Satish, Thanks for the quick reply, I had corrected that bug in 3.3 you mentioned before getting the linking bug. I have also tried the same configure line (and also a simple configure: ./configure --with-cc=gcc --with-fc=gfortran --download-f-blas-lapack --download-mpich) on another cluster and got the same problem. I think the problem may be related to gfortran version Red Hat 4.1.2 (there is 4.1.2-14 on one cluster and 4.1.2-46 on another cluster). Another petsc user found ex14f to compile fine with red hat 4.4.6 gfortran. The only reference to PETSC_NULL_FUNCTION I can find is where it is declared with External PETSC_NULL_FUNCTION in petscsys.h. Is this set to zero somewhere, or is it just expected to return zero if it is not found? 
Thanks, Adam ________________________________ From: petsc-users-bounces at mcs.anl.gov [petsc-users-bounces at mcs.anl.gov] on behalf of Adam Stanier [adam.stanier-2 at postgrad.manchester.ac.uk] Sent: 23 July 2012 18:03 To: petsc-users at mcs.anl.gov Subject: Re: [petsc-users] PETSC_NULL_FUNCTION in petsc-3.2-p7/src/snes/examples/tests/ex14f.F Hello, I am getting the following error message when linking a code: ../solver_3.2/libsel.a(p2_snes.o): In function `__p2_snes_mod__p2_snes_error': /home/stanier/hifi/solver_3.2/p2_snes.F:450: undefined reference to `__local_mod__petsc_null_function' collect2: ld returned 1 exit status I have noticed that src/snes/examples/tests/ex14f.F also gives an error message related to PETSC_NULL_FUNCTION, and make ex14f fails to make ex14f.o: In function `MAIN__': /home/stanier/soft/petsc-3.2-p7/src/snes/examples/tests/ex14f.F:182: undefined reference to `__petscmod__petsc_null_function' collect2: ld returned 1 exit status Also, have checked this with another PETSc user with version 3.2-p7 and version 3.3- the same error occurs. My configure options are ./configure --with-cc=gcc --with-fc=gfortran --download-f-blas-lapack --download-mpich --download-blacs --download-scalapack --download-parmetis --download-mumps --download-superlu --download-hypre I was wondering if anyone else has had this problem, and knew how to get around it. Thanks, Adam -------------- next part -------------- An HTML attachment was scrubbed... URL: From Flo.44 at gmx.de Tue Jul 24 10:24:43 2012 From: Flo.44 at gmx.de (Florian Beck) Date: Tue, 24 Jul 2012 17:24:43 +0200 Subject: [petsc-users] How to use petsc in a dynamically loaded shared library? In-Reply-To: References: <20120718105010.148120@gmx.net> Message-ID: <20120724152443.58640@gmx.net> On Sun Jul 22 11:28:18 CDT 2012 Jed Brown jedbrown at mcs.anl.gov wrote: >> >> Back to my problem, I think I have a problem with the initialisation of >> the petsc library and therefore I'm not able to create vectors in the >> right way. I call PetscInitialize() as in the examples shown. Are there >> some special functions to check if everthing is initialized correct? >> >> My example works fine except the destroying of the vectors, for the >> first testing I run it serial. Can that cause my problem? >> >> >Can you send us a reduced test case to demonstrate the problem? Hi I think I have solved my problem, but I don't know why. The things I have change to my previous version are the following. In my previous version I moved my installed petsc library. I read that this could cause some problems. And I had the following construct: executable load shared library A with dlopen. I linked the petsc library to library B and B to A. Now I have change it to: executable load lib A. And I have linked petsc to A. I try to reproduce it in a smaller example. If the smaller example has the same problems I will send it to you. Or can you say without the example what the problem is? From knepley at gmail.com Tue Jul 24 10:36:02 2012 From: knepley at gmail.com (Matthew Knepley) Date: Tue, 24 Jul 2012 10:36:02 -0500 Subject: [petsc-users] How to use petsc in a dynamically loaded shared library? 
In-Reply-To: <20120724152443.58640@gmx.net> References: <20120718105010.148120@gmx.net> <20120724152443.58640@gmx.net> Message-ID: On Tue, Jul 24, 2012 at 10:24 AM, Florian Beck wrote: > On Sun Jul 22 11:28:18 CDT 2012 Jed Brown jedbrown at mcs.anl.gov wrote: > > >> > >> Back to my problem, I think I have a problem with the initialisation of > >> the petsc library and therefore I'm not able to create vectors in the > >> right way. I call PetscInitialize() as in the examples shown. Are there > >> some special functions to check if everthing is initialized correct? > >> > >> My example works fine except the destroying of the vectors, for the > >> first testing I run it serial. Can that cause my problem? > >> > >> > >Can you send us a reduced test case to demonstrate the problem? > > Hi I think I have solved my problem, but I don't know why. The things I > have change to my previous version are the following. In my previous > version I moved my installed petsc library. I read that this could cause > some problems. And I had the following construct: executable load shared > library A with dlopen. I linked the petsc library to library B and B to A. > Now I have change it to: executable load lib A. And I have linked petsc to > A. > > I try to reproduce it in a smaller example. If the smaller example has the > same problems I will send it to you. Or can you say without the example > what the problem is? > You have to be careful with global symbols and dlopen(). If you do not link the shared library, you have to make sure all parts of your code see the same globals from the library. Matt -- What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. -- Norbert Wiener -------------- next part -------------- An HTML attachment was scrubbed... URL: From balay at mcs.anl.gov Tue Jul 24 10:58:22 2012 From: balay at mcs.anl.gov (Satish Balay) Date: Tue, 24 Jul 2012 10:58:22 -0500 (CDT) Subject: [petsc-users] PETSC_NULL_FUNCTION in petsc-3.2-p7/src/snes/examples/tests/ex14f.F In-Reply-To: <7A7F2577D3A89844AB5C35FDD60F77EA061FDF@MBXP07.ds.man.ac.uk> References: <7A7F2577D3A89844AB5C35FDD60F77EA061E84@MBXP07.ds.man.ac.uk>, <7A7F2577D3A89844AB5C35FDD60F77EA061F67@MBXP07.ds.man.ac.uk> <7A7F2577D3A89844AB5C35FDD60F77EA061FDF@MBXP07.ds.man.ac.uk> Message-ID: Adam, Thanks for confirming its a gfortran-4.1 issue. Satish On Tue, 24 Jul 2012, Adam Stanier wrote: > I got gcc and gfortran on the cluster updated to Red Hat 4.4.6-3. This solves the problem. > > Many Thanks, > Adam > ________________________________ > From: petsc-users-bounces at mcs.anl.gov [petsc-users-bounces at mcs.anl.gov] on behalf of Adam Stanier [adam.stanier-2 at postgrad.manchester.ac.uk] > Sent: 24 July 2012 13:11 > To: PETSc users list > Subject: Re: [petsc-users] PETSC_NULL_FUNCTION in petsc-3.2-p7/src/snes/examples/tests/ex14f.F > > Satish, > > Thanks for the quick reply, I had corrected that bug in 3.3 you mentioned before getting the linking bug. I have also tried the same configure line (and also a simple configure: ./configure --with-cc=gcc --with-fc=gfortran --download-f-blas-lapack --download-mpich) on another cluster and got the same problem. I think the problem may be related to gfortran version Red Hat 4.1.2 (there is 4.1.2-14 on one cluster and 4.1.2-46 on another cluster). Another petsc user found ex14f to compile fine with red hat 4.4.6 gfortran. 
> > The only reference to PETSC_NULL_FUNCTION I can find is where it is declared with External PETSC_NULL_FUNCTION in petscsys.h. Is this set to zero somewhere, or is it just expected to return zero if it is not found? > > Thanks, > Adam > ________________________________ > From: petsc-users-bounces at mcs.anl.gov [petsc-users-bounces at mcs.anl.gov] on behalf of Adam Stanier [adam.stanier-2 at postgrad.manchester.ac.uk] > Sent: 23 July 2012 18:03 > To: petsc-users at mcs.anl.gov > Subject: Re: [petsc-users] PETSC_NULL_FUNCTION in petsc-3.2-p7/src/snes/examples/tests/ex14f.F > > Hello, > > I am getting the following error message when linking a code: > > ../solver_3.2/libsel.a(p2_snes.o): In function `__p2_snes_mod__p2_snes_error': > /home/stanier/hifi/solver_3.2/p2_snes.F:450: undefined reference to `__local_mod__petsc_null_function' > collect2: ld returned 1 exit status > > I have noticed that src/snes/examples/tests/ex14f.F also gives an error message related to PETSC_NULL_FUNCTION, and make ex14f fails to make > > ex14f.o: In function `MAIN__': > /home/stanier/soft/petsc-3.2-p7/src/snes/examples/tests/ex14f.F:182: undefined reference to `__petscmod__petsc_null_function' > collect2: ld returned 1 exit status > > Also, have checked this with another PETSc user with version 3.2-p7 and version 3.3- the same error occurs. > > My configure options are ./configure --with-cc=gcc --with-fc=gfortran --download-f-blas-lapack --download-mpich --download-blacs --download-scalapack --download-parmetis --download-mumps --download-superlu --download-hypre > > I was wondering if anyone else has had this problem, and knew how to get around it. > > Thanks, > Adam > > From chris.eldred at gmail.com Tue Jul 24 12:31:26 2012 From: chris.eldred at gmail.com (Chris Eldred) Date: Tue, 24 Jul 2012 11:31:26 -0600 Subject: [petsc-users] Unstructured meshes in PETSC using Sieve Message-ID: Hey PETSC/Sieve Developers, I am building a nonlinear shallow water testbed model (along with an associated eigensolver for the linear equations) intended to work on unstructured Voronoi meshes and cubed-sphere grids (with arbitrary block-structured refinement)- it will be a 2-D code. There will NOT be any adaptive mesh refinement- the mesh is defined once at the start of the application. It will support finite difference, finite volume and finite element-type (spectral elements and Discontinuous Galerkin) schemes- so variables will be defined on edges, cells and vertexes. I would like to use PETSC/SLEPC (currently limited to v3.2 for both since that is the latest version of SLEPC) for the spare linear algebra and eigenvalue solvers. This is intended as a useful tool for researchers in atmospheric model development- it will allow easy inter-comparison of different grids and schemes under a common framework. Right now I have a serial version (written in Fortran 90) that implements a few different finite-difference schemes (along with a multigrid solver for square and hexagonal meshes) on unstructured Voronoi meshes and I would like to move to a parallel version (also using Fortran 90). The Sieve framework seems like an excellent fit for defining the unstructured mesh, managing variables defined on edges/faces/vertices and handling scatter/gather options between processes. I was planning on doing parallel partitioning using ParMetis. My understanding is that DMMesh handles mesh topology (interconnections, etc) while Sections define variables and mesh geometry (edge lengths, areas, etc.). 
Sections can be created over different depths/heights (chains of points in Sieve) in order to define variables on vertices/edges/cells. I am looking for documentation and examples of code use. I found: http://www.mcs.anl.gov/petsc/petsc-dev/src/snes/examples/tutorials/ex62.c.html http://www.mcs.anl.gov/petsc/petsc-current/src/snes/examples/tutorials/ex12.c.html Are there other examples/documentation available? Also, I was wondering what the difference is between DMMesh and DMComplex- it appears that they both implement the Sieve framework? Thanks, Chris Eldred -- Chris Eldred DOE Computational Science Graduate Fellow Graduate Student, Atmospheric Science, Colorado State University B.S. Applied Computational Physics, Carnegie Mellon University, 2009 chris.eldred at gmail.com -------------- next part -------------- An HTML attachment was scrubbed... URL: From alexvg77 at gmail.com Tue Jul 24 13:26:10 2012 From: alexvg77 at gmail.com (Alexander Goncharov) Date: Tue, 24 Jul 2012 11:26:10 -0700 Subject: [petsc-users] Pthread support In-Reply-To: <80227B1A-197C-41EF-B5DF-D2F141C85103@mcs.anl.gov> References: <1343095712.2963.10.camel@DeathStar> <1343098854.2963.21.camel@DeathStar> <1343099627.2963.24.camel@DeathStar> <1343099881.2963.27.camel@DeathStar> <80227B1A-197C-41EF-B5DF-D2F141C85103@mcs.anl.gov> Message-ID: <1343154370.2963.65.camel@DeathStar> Shri, thank you for your answers. I think the current affinity settings explain my problem. It basically means that I can run only one job per node if I compile with pthreadclasses=1. It is not optimal if I have motherboard with two CPU sockets, isn't it? Are you planning to change it in the next release. How can I know, when it is changed in the petsc-dev? I used petsc-3.3. best regards, Alex. On Mon, 2012-07-23 at 23:46 -0700, Shri wrote: > The current "crude" default affinity setting is that for each process the first thread gets assigned to core number 0, second to core number 1, and so on (assuming that the OS numbers the cores sequentially). However, the OS could number the cores differently. For example on Intel Nehalem Architecture, cores on one socket have even numbers (0,2,4,6) and that on the other socket have odd (1,3,5,7). We currently don't have any mechanism by which we can figure out how the OS numbers the cores and set affinity accordingly. > > Shri > > On Jul 23, 2012, at 8:33 PM, Barry Smith wrote: > > > > > On Jul 23, 2012, at 10:22 PM, Jed Brown wrote: > > > >> On Mon, Jul 23, 2012 at 10:19 PM, Barry Smith wrote: > >> Yes, but is PETSc built with MPI? PETSc does not decide on which core to run the program, but MPI might. > >> > >> Well, PETSc is setting affinity if it can when you use threads. > > > > Jed, > > > > So maybe that is the problem. PETSc is forcing affinity to a hardware number core regardless of whether another PETSc program is running and has also forced that same core? > > > > Barry > > > > > > > > > From bsmith at mcs.anl.gov Tue Jul 24 13:44:59 2012 From: bsmith at mcs.anl.gov (Barry Smith) Date: Tue, 24 Jul 2012 13:44:59 -0500 Subject: [petsc-users] Pthread support In-Reply-To: <80227B1A-197C-41EF-B5DF-D2F141C85103@mcs.anl.gov> References: <1343095712.2963.10.camel@DeathStar> <1343098854.2963.21.camel@DeathStar> <1343099627.2963.24.camel@DeathStar> <1343099881.2963.27.camel@DeathStar> <80227B1A-197C-41EF-B5DF-D2F141C85103@mcs.anl.gov> Message-ID: <3BA99EE2-8CC4-49E1-A401-97A44E196960@mcs.anl.gov> Shri, For now you could simply turn off setting affinities when only a single thread is requested. 
Thanks Barry On Jul 24, 2012, at 1:46 AM, Shri wrote: > The current "crude" default affinity setting is that for each process the first thread gets assigned to core number 0, second to core number 1, and so on (assuming that the OS numbers the cores sequentially). However, the OS could number the cores differently. For example on Intel Nehalem Architecture, cores on one socket have even numbers (0,2,4,6) and that on the other socket have odd (1,3,5,7). We currently don't have any mechanism by which we can figure out how the OS numbers the cores and set affinity accordingly. > > Shri > > On Jul 23, 2012, at 8:33 PM, Barry Smith wrote: > >> >> On Jul 23, 2012, at 10:22 PM, Jed Brown wrote: >> >>> On Mon, Jul 23, 2012 at 10:19 PM, Barry Smith wrote: >>> Yes, but is PETSc built with MPI? PETSc does not decide on which core to run the program, but MPI might. >>> >>> Well, PETSc is setting affinity if it can when you use threads. >> >> Jed, >> >> So maybe that is the problem. PETSc is forcing affinity to a hardware number core regardless of whether another PETSc program is running and has also forced that same core? >> >> Barry >> >> >> >> > From jedbrown at mcs.anl.gov Tue Jul 24 13:55:07 2012 From: jedbrown at mcs.anl.gov (Jed Brown) Date: Tue, 24 Jul 2012 13:55:07 -0500 Subject: [petsc-users] Pthread support In-Reply-To: <3BA99EE2-8CC4-49E1-A401-97A44E196960@mcs.anl.gov> References: <1343095712.2963.10.camel@DeathStar> <1343098854.2963.21.camel@DeathStar> <1343099627.2963.24.camel@DeathStar> <1343099881.2963.27.camel@DeathStar> <80227B1A-197C-41EF-B5DF-D2F141C85103@mcs.anl.gov> <3BA99EE2-8CC4-49E1-A401-97A44E196960@mcs.anl.gov> Message-ID: On Tue, Jul 24, 2012 at 1:44 PM, Barry Smith wrote: > For now you could simply turn off setting affinities when only a single > thread is requested. Longer term, I think we want to use CPU_COUNT, then use CPU_ISSET to fill in the current set. Then we'll partition that set among the threads in the pool. That's what I meant by "inherit" the process affinity. We should never be doing CPU_ZERO and adding in counting from zero. -------------- next part -------------- An HTML attachment was scrubbed... URL: From bsmith at mcs.anl.gov Tue Jul 24 13:59:14 2012 From: bsmith at mcs.anl.gov (Barry Smith) Date: Tue, 24 Jul 2012 13:59:14 -0500 Subject: [petsc-users] Pthread support In-Reply-To: References: <1343095712.2963.10.camel@DeathStar> <1343098854.2963.21.camel@DeathStar> <1343099627.2963.24.camel@DeathStar> <1343099881.2963.27.camel@DeathStar> <80227B1A-197C-41EF-B5DF-D2F141C85103@mcs.anl.gov> <3BA99EE2-8CC4-49E1-A401-97A44E196960@mcs.anl.gov> Message-ID: <7ADDF657-4004-4B82-B401-3566F4066B3A@mcs.anl.gov> On Jul 24, 2012, at 1:55 PM, Jed Brown wrote: > On Tue, Jul 24, 2012 at 1:44 PM, Barry Smith wrote: > For now you could simply turn off setting affinities when only a single thread is requested. > > Longer term, I think we want to use CPU_COUNT, then use CPU_ISSET to fill in the current set. Then we'll partition that set among the threads in the pool. That's what I meant by "inherit" the process affinity. We should never be doing CPU_ZERO and adding in counting from zero. True But how do you know what cores the other processes on the machine are using? Couldn't they be anything? 
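The inherit-and-subdivide scheme described in words above can be written down with plain glibc calls. This is only an illustration of the mechanism, not PETSc code; a real implementation would apply the per-thread masks with pthread_setaffinity_np() instead of printing them:

#define _GNU_SOURCE
#include <sched.h>
#include <stdio.h>

int main(void)
{
  cpu_set_t inherited;
  int       nthreads = 4, slot = 0, core;

  /* Read whatever mask taskset/mpiexec/the batch system left for this process. */
  if (sched_getaffinity(0, sizeof(inherited), &inherited)) {
    perror("sched_getaffinity");
    return 1;
  }
  printf("process was given %d cores for %d threads\n", CPU_COUNT(&inherited), nthreads);
  /* Hand out only the cores that are actually in the inherited set. */
  for (core = 0; core < CPU_SETSIZE; core++) {
    if (!CPU_ISSET(core, &inherited)) continue;
    printf("thread %d may use core %d\n", slot % nthreads, core);
    slot++;
  }
  return 0;
}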
Barry From jedbrown at mcs.anl.gov Tue Jul 24 14:06:39 2012 From: jedbrown at mcs.anl.gov (Jed Brown) Date: Tue, 24 Jul 2012 14:06:39 -0500 Subject: [petsc-users] Pthread support In-Reply-To: <7ADDF657-4004-4B82-B401-3566F4066B3A@mcs.anl.gov> References: <1343095712.2963.10.camel@DeathStar> <1343098854.2963.21.camel@DeathStar> <1343099627.2963.24.camel@DeathStar> <1343099881.2963.27.camel@DeathStar> <80227B1A-197C-41EF-B5DF-D2F141C85103@mcs.anl.gov> <3BA99EE2-8CC4-49E1-A401-97A44E196960@mcs.anl.gov> <7ADDF657-4004-4B82-B401-3566F4066B3A@mcs.anl.gov> Message-ID: On Tue, Jul 24, 2012 at 1:59 PM, Barry Smith wrote: > But how do you know what cores the other processes on the machine are > using? Couldn't they be anything? Yes, but a person running a single-process multi-thread job should be doing something like (suppose a 32-core machine) $ taskset 0x000000ff ./job1 -nthreads 8 # use all slots $ taskset 0xffffff00 ./job2 -nthreads 12 # use 12 of 24 "slots" arbitrarily Now the second job might set affinity by splitting the 24 slots between the 12 threads, or it might group them into three groups per CPU die, setting 0xff00000000, 0x00ff0000, and 0x0000ff00 for all four threads in each of the three groups respectively. -------------- next part -------------- An HTML attachment was scrubbed... URL: From bsmith at mcs.anl.gov Tue Jul 24 14:10:32 2012 From: bsmith at mcs.anl.gov (Barry Smith) Date: Tue, 24 Jul 2012 14:10:32 -0500 Subject: [petsc-users] Pthread support In-Reply-To: References: <1343095712.2963.10.camel@DeathStar> <1343098854.2963.21.camel@DeathStar> <1343099627.2963.24.camel@DeathStar> <1343099881.2963.27.camel@DeathStar> <80227B1A-197C-41EF-B5DF-D2F141C85103@mcs.anl.gov> <3BA99EE2-8CC4-49E1-A401-97A44E196960@mcs.anl.gov> <7ADDF657-4004-4B82-B401-3566F4066B3A@mcs.anl.gov> Message-ID: On Jul 24, 2012, at 2:06 PM, Jed Brown wrote: > On Tue, Jul 24, 2012 at 1:59 PM, Barry Smith wrote: > But how do you know what cores the other processes on the machine are using? Couldn't they be anything? > > Yes, but a person running a single-process multi-thread job should be doing something like (suppose a 32-core machine) > > $ taskset 0x000000ff ./job1 -nthreads 8 # use all slots > $ taskset 0xffffff00 ./job2 -nthreads 12 # use 12 of 24 "slots" arbitrarily Cool. Do we have properly documented in the FAQ or some place how the user should be running things with MPICH, OpenMPI etc etc. This issue is going to keep coming up and my rather useless response can be "yikes, I don't know, look at xxxx". Thanks Barry > > > Now the second job might set affinity by splitting the 24 slots between the 12 threads, or it might group them into three groups per CPU die, setting 0xff00000000, 0x00ff0000, and 0x0000ff00 for all four threads in each of the three groups respectively. From jedbrown at mcs.anl.gov Tue Jul 24 14:12:21 2012 From: jedbrown at mcs.anl.gov (Jed Brown) Date: Tue, 24 Jul 2012 14:12:21 -0500 Subject: [petsc-users] Pthread support In-Reply-To: References: <1343095712.2963.10.camel@DeathStar> <1343098854.2963.21.camel@DeathStar> <1343099627.2963.24.camel@DeathStar> <1343099881.2963.27.camel@DeathStar> <80227B1A-197C-41EF-B5DF-D2F141C85103@mcs.anl.gov> <3BA99EE2-8CC4-49E1-A401-97A44E196960@mcs.anl.gov> <7ADDF657-4004-4B82-B401-3566F4066B3A@mcs.anl.gov> Message-ID: On Tue, Jul 24, 2012 at 2:10 PM, Barry Smith wrote: > Cool. Do we have properly documented in the FAQ or some place how the > user should be running things with MPICH, OpenMPI etc etc. 
This issue is > going to keep coming up and my rather useless response can be "yikes, I > don't know, look at xxxx". http://www.mcs.anl.gov/petsc/documentation/faq.html#computers This should of course be updated once threadcomm does affinity "right". -------------- next part -------------- An HTML attachment was scrubbed... URL: From alexvg77 at gmail.com Tue Jul 24 14:13:41 2012 From: alexvg77 at gmail.com (Alexander Goncharov) Date: Tue, 24 Jul 2012 12:13:41 -0700 Subject: [petsc-users] Pthread support In-Reply-To: References: <1343095712.2963.10.camel@DeathStar> <1343098854.2963.21.camel@DeathStar> <1343099627.2963.24.camel@DeathStar> <1343099881.2963.27.camel@DeathStar> <80227B1A-197C-41EF-B5DF-D2F141C85103@mcs.anl.gov> <3BA99EE2-8CC4-49E1-A401-97A44E196960@mcs.anl.gov> <7ADDF657-4004-4B82-B401-3566F4066B3A@mcs.anl.gov> Message-ID: <1343157221.2963.68.camel@DeathStar> Jed, I actually tried taskset utility, and it did not help. alex. On Tue, 2012-07-24 at 14:06 -0500, Jed Brown wrote: > On Tue, Jul 24, 2012 at 1:59 PM, Barry Smith > wrote: > But how do you know what cores the other processes on the > machine are using? Couldn't they be anything? > > Yes, but a person running a single-process multi-thread job should be > doing something like (suppose a 32-core machine) > > > $ taskset 0x000000ff ./job1 -nthreads 8 # use all slots > $ taskset 0xffffff00 ./job2 -nthreads 12 # use 12 of 24 "slots" > arbitrarily > > > > > Now the second job might set affinity by splitting the 24 slots > between the 12 threads, or it might group them into three groups per > CPU die, setting 0xff00000000, 0x00ff0000, and 0x0000ff00 for all four > threads in each of the three groups respectively. From jedbrown at mcs.anl.gov Tue Jul 24 14:16:17 2012 From: jedbrown at mcs.anl.gov (Jed Brown) Date: Tue, 24 Jul 2012 14:16:17 -0500 Subject: [petsc-users] Pthread support In-Reply-To: <1343157221.2963.68.camel@DeathStar> References: <1343095712.2963.10.camel@DeathStar> <1343098854.2963.21.camel@DeathStar> <1343099627.2963.24.camel@DeathStar> <1343099881.2963.27.camel@DeathStar> <80227B1A-197C-41EF-B5DF-D2F141C85103@mcs.anl.gov> <3BA99EE2-8CC4-49E1-A401-97A44E196960@mcs.anl.gov> <7ADDF657-4004-4B82-B401-3566F4066B3A@mcs.anl.gov> <1343157221.2963.68.camel@DeathStar> Message-ID: On Tue, Jul 24, 2012 at 2:13 PM, Alexander Goncharov wrote: > I actually tried taskset utility, and it did not help. As explained in the previous email, affinity is naively done by calling CPU_ZERO and then adding in one core per thread. Whenever my suggestion gets implemented, threads will inherit from taskset. For now, you can use -threadcomm_affinities ..., but that's low-level enough to be painful. -------------- next part -------------- An HTML attachment was scrubbed... URL: From bsmith at mcs.anl.gov Tue Jul 24 14:17:31 2012 From: bsmith at mcs.anl.gov (Barry Smith) Date: Tue, 24 Jul 2012 14:17:31 -0500 Subject: [petsc-users] Pthread support In-Reply-To: <1343157221.2963.68.camel@DeathStar> References: <1343095712.2963.10.camel@DeathStar> <1343098854.2963.21.camel@DeathStar> <1343099627.2963.24.camel@DeathStar> <1343099881.2963.27.camel@DeathStar> <80227B1A-197C-41EF-B5DF-D2F141C85103@mcs.anl.gov> <3BA99EE2-8CC4-49E1-A401-97A44E196960@mcs.anl.gov> <7ADDF657-4004-4B82-B401-3566F4066B3A@mcs.anl.gov> <1343157221.2963.68.camel@DeathStar> Message-ID: <84247A8A-9921-467B-9485-9E4D1683866D@mcs.anl.gov> On Jul 24, 2012, at 2:13 PM, Alexander Goncharov wrote: > Jed, > > I actually tried taskset utility, and it did not help. 
Right, it didn't help because we have a bug where we always start with the globally lowest core for each process hence get overlap. This will be fixed shortly in petsc-dev. The threaded code is still preliminary and not production ready, only useful for eager early adoptors. Barry > > alex. > > On Tue, 2012-07-24 at 14:06 -0500, Jed Brown wrote: >> On Tue, Jul 24, 2012 at 1:59 PM, Barry Smith >> wrote: >> But how do you know what cores the other processes on the >> machine are using? Couldn't they be anything? >> >> Yes, but a person running a single-process multi-thread job should be >> doing something like (suppose a 32-core machine) >> >> >> $ taskset 0x000000ff ./job1 -nthreads 8 # use all slots >> $ taskset 0xffffff00 ./job2 -nthreads 12 # use 12 of 24 "slots" >> arbitrarily >> >> >> >> >> Now the second job might set affinity by splitting the 24 slots >> between the 12 threads, or it might group them into three groups per >> CPU die, setting 0xff00000000, 0x00ff0000, and 0x0000ff00 for all four >> threads in each of the three groups respectively. > > From juhaj at iki.fi Tue Jul 24 17:48:10 2012 From: juhaj at iki.fi (Juha =?utf-8?q?J=C3=A4ykk=C3=A4?=) Date: Wed, 25 Jul 2012 00:48:10 +0200 Subject: [petsc-users] sigsegvs In-Reply-To: References: <201207230422.41344.juhaj@iki.fi> <201207240106.00251.juhaj@iki.fi> Message-ID: <201207250048.18417.juhaj@iki.fi> > > indication that both acml and mkl are somehow broken? If it is, I will > Its likely that the breakage was the thing I refered to in the last mail. > Upgrading to > 3.3 should fix it. Unfortunately, it does not. The behaviour is still exactly the same (unless I download blas/lapack during PETSc install): either it gets wildly incorrect answers or it segfaults. This is still ex19 in src/snes/examples/tutorials. I think it is time to go whine to the admins... Thanks for the assistance! Cheers, Juha -- ----------------------------------------------- | Juha J?ykk?, juhaj at iki.fi | | http://koti.kapsi.fi/~juhaj/ | ----------------------------------------------- -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 836 bytes Desc: This is a digitally signed message part. URL: From bsmith at mcs.anl.gov Tue Jul 24 20:21:53 2012 From: bsmith at mcs.anl.gov (Barry Smith) Date: Tue, 24 Jul 2012 20:21:53 -0500 Subject: [petsc-users] Shell matrices and complex NHEP SVD problems In-Reply-To: <20147D14-4D06-4A59-ACBD-E9354414D01E@dsic.upv.es> References: <20147D14-4D06-4A59-ACBD-E9354414D01E@dsic.upv.es> Message-ID: <50DDFA41-8AA0-4A44-A23F-9A4EE92558CC@mcs.anl.gov> On Jul 15, 2012, at 3:35 AM, Jose E. Roman wrote: > > El 15/07/2012, a las 01:41, Matthew Knepley escribi >> Miguel. >> > > True. I will change it for the release. Thanks. > > By the way, the MatOperation names are not very consistent: MATOP_MULT_TRANSPOSE and MATOP_MULTHERMITIANTRANSPOSE. Shouldn't it be MATOP_MULT_HERMITIAN_TRANSPOSE? > > Jose > I believe I have fixed these all. Truncated to 31 char. For a couple I had to replace TRANSPOSE with TRANS so that 31 character truncation led to unique names. Thanks for letting us know about this error, Barry From bourdin at lsu.edu Tue Jul 24 21:59:40 2012 From: bourdin at lsu.edu (Blaise Bourdin) Date: Tue, 24 Jul 2012 21:59:40 -0500 Subject: [petsc-users] pc_gamg_type geo broken in 3.3? Message-ID: <3278B0DE-4120-4D52-AAA6-61C438D7942D@lsu.edu> Hi, is pc_gamg_type geo broken in petsc-3.3? 
I get the following when I make runex54 in src/ksp/ksp/examples/tutorials: iMac:tutorials blaise$ make runex54 1,28c1,128 < 0 KSP Residual norm 132.598 < 1 KSP Residual norm 39.159 < 2 KSP Residual norm 15.7856 < 3 KSP Residual norm 8.91321 < 4 KSP Residual norm 6.95961 < 5 KSP Residual norm 15.1387 < 6 KSP Residual norm 12.5547 < 7 KSP Residual norm 3.78056 < 8 KSP Residual norm 2.38412 < 9 KSP Residual norm 3.91952 < 10 KSP Residual norm 1.3192 < 11 KSP Residual norm 1.70681 < 12 KSP Residual norm 1.74052 < 13 KSP Residual norm 0.482779 < 14 KSP Residual norm 0.634571 < 15 KSP Residual norm 0.264686 < 16 KSP Residual norm 0.240607 < 17 KSP Residual norm 0.10998 < 18 KSP Residual norm 0.0853072 < 19 KSP Residual norm 0.0551939 < 20 KSP Residual norm 0.0231032 < 21 KSP Residual norm 0.0286234 < 22 KSP Residual norm 0.0110345 < 23 KSP Residual norm 0.00582651 < 24 KSP Residual norm 0.00256534 < 25 KSP Residual norm 0.00176112 < 26 KSP Residual norm 0.000730267 < Linear solve converged due to CONVERGED_RTOL iterations 26 --- > [2]PETSC ERROR: ------------------------------------------------------------------------ > [2]PETSC ERROR: Caught signal number 11 SEGV: Segmentation Violation, probably memory access out of range > [2]PETSC ERROR: [0]PETSC ERROR: ------------------------------------------------------------------------ > [0]PETSC ERROR: Caught signal number 11 SEGV: Segmentation Violation, probably memory access out of range > [0]PETSC ERROR: Try option -start_in_debugger or -on_error_attach_debugger > [0]PETSC ERROR: or see http://www.mcs.anl.gov/petsc/documentation/faq.html#valgrind[0]PETSC ERROR: or try http://valgrind.org on GNU/linux and Apple Mac OS X to find memory corruption errors > [0]PETSC ERROR: [1]PETSC ERROR: ------------------------------------------------------------------------ > [1]PETSC ERROR: Caught signal number 11 SEGV: Segmentation Violation, probably memory access out of range > [1]PETSC ERROR: Try option -start_in_debugger or -on_error_attach_debugger > [1]PETSC ERROR: or see http://www.mcs.anl.gov/petsc/documentation/faq.html#valgrind[1]PETSC ERROR: or try http://valgrind.org on GNU/linux and Apple Mac OS X to find memory corruption errors > [1]PETSC ERROR: likely location of problem given in stack below > [1]PETSC ERROR: --------------------- Stack Frames ------------------------------------ > [1]PETSC ERROR: Note: The EXACT line numbers in the stack are not available, > [1]PETSC ERROR: INSTEAD the line number of the start of the function > [1]PETSC ERROR: is given. > [1]PETSC ERROR: [1] triangulateAndFormProl line 180 /opt/HPC/petsc-3.3/src/ksp/pc/impls/gamg/geo.c > [1]PETSC ERROR: [1] PCGAMGProlongator_GEO line 756 /opt/HPC/petsc-3.3/src/ksp/pc/impls/gamg/geo.c > [1]PETSC ERROR: [1] PCSetUp_GTry option -start_in_debugger or -on_error_attach_debugger > [2]PETSC ERROR: or see http://www.mcs.anl.gov/petsc/documentation/faq.html#valgrind[2]PETSC ERROR: or try http://valgrind.org on GNU/linux and Apple Mac OS X to find memory corruption errors > [2]PETSC ERROR: likely location of problem given in stack below > [2]PETSC ERROR: --------------------- Stack Frames ------------------------------------ > [2]PETSC ERROR: Note: The EXACT line numbers in the stack are not available, > [2]PETSC ERROR: INSTEAD the line number of the start of the function > [2]PETSC ERROR: is given. 
> [2]PETSC ERROR: [2] triangulateAndFormProl line 180 /opt/HPC/petsc-3.3/src/ksp/pc/impls/gamg/geo.c > [2]PETSC ERROR: [2] PCGAMGProlongator_GEO line 756 /opt/HPC/petsc-3.3/src/ksp/pc/impls/gamg/geo.c > [2]PETSC ERROR: [2] PCSetUp_GAMG line 560 /opt/HPC/petsc-3.3/src/ksp/pc/impls/gamg/gamg.c > [2]PETSC ERROR: [2] PCSetUp line 810 /opt/HPC/petsc-3.3/src/ksp/pc/interface/precon.c > [2]PETSC ERROR: [2] KSPSetUp line 182 /opt/HPC/petsc-3.3/src/ksp[3]PETSC ERROR: ------------------------------------------------------------------------ > [3]PETSC ERROR: Caught signal number 11 SEGV: Segmentation Violation, probably memory access out of range > [3]PETSC ERROR: Try option -start_in_debugger or -on_error_attach_debugger > [3]PETSC ERROR: or see http://www.mcs.anl.gov/petsc/documentation/faq.html#valgrind[3]PETSC ERROR: or try http://valgrind.org on GNU/linux and Apple Mac OS X to find memory corruption errors > [3]PETSC ERROR: likely location of problem given in stack below > [3]PETSC ERROR: --------------------- Stack Frames ------------------------------------ > [3]PETSC ERROR: Note: The EXACT line numbers in the stack are not available, > [3]PETSC ERROR: INSTEAD the line number of the start of the function > [3]PETSC ERROR: is given. > [3]PETSC ERROR: [3] triangulateAndFormProl line 180 /opt/HPC/petsc-3.3/src/ksp/pc/impls/gamg/geo.c > [3]PETSC ERROR: [3] PCGAMGProlongator_GEO line 756 /opt/HPC/petsc-3.3/src/ksp/pc/impls/gamg/geo.c > [3]PETSC ERROR: [3] PCSetUp_Glikely location of problem given in stack below > [0]PETSC ERROR: --------------------- Stack Frames ------------------------------------ > [0]PETSC ERROR: Note: The EXACT line numbers in the stack are not available, > [0]PETSC ERROR: INSTEAD the line number of the start of the function > [0]PETSC ERROR: is given. > [0]PETSC ERROR: [0] triangulateAndFormProl line 180 /opt/HPC/petsc-3.3/src/ksp/pc/impls/gamg/geo.c > [0]PETSC ERROR: [0] PCGAMGProlongator_GEO line 756 /opt/HPC/petsc-3.3/src/ksp/pc/impls/gamg/geo.c > [0]PETSC ERROR: [0] PCSetUp_GAMG line 560 /opt/HPC/petsc-3.3/src/ksp/pc/impls/gamg/gamg.c > [0]PETSC ERROR: [0] PCSetUp line 810 /opt/HPC/petsc-3.3/src/ksp/pc/interface/precon.c > [0]PETSC ERROR: [0] KSPSetUp line 182 /opt/HPC/petsc-3.3/src/ksp/ksp/interface/itfunc.c > [0]PETSC ERROR: [0] KSPSolve line 351 /opt/HPC/petsc-3.3/src/ksp/ksp/interface/itfunc.c > [0]PETSC ERROR: --------------------- Error Message ------------------------------------ > [0]PETSC ERROR: Signal received! > [0]PETSC ERROR: ----------------AMG line 560 /opt/HPC/petsc-3.3/src/ksp/pc/impls/gamg/gamg.c > [1]PETSC ERROR: [1] PCSetUp line 810 /opt/HPC/petsc-3.3/src/ksp/pc/interface/precon.c > [1]PETSC ERROR: [1] KSPSetUp line 182 /opt/HPC/petsc-3.3/src/ksp/ksp/interface/itfunc.c > [1]PETSC ERROR: [1] KSPSolve line 351 /opt/HPC/petsc-3.3/src/ksp/ksp/interface/itfunc.c > [1]PETSC ERROR: --------------------- Error Message ------------------------------------ > [1]PETSC ERROR: Signal received! > [1]PETSC ERROR: ------------------------------------------------------------------------ > [1]PETSC ERROR: Petsc Release Version 3.3.0, Patch 2, unknown > [1]PETSC ERROR: See docs/changes/index.html for recent updates. > [1]PETSC ERROR: See docs/faq.html for hints about trouble shooting. > [1]PETSC ERROR: See docs/index.html for manual pages. 
> [1]PETSC ERROR: ------------------------------------------------------------------------ > [1]PETSC ERROR: ./ex54 on a Darwin-in named iMac.local by blaise Tue Jul 24 21:51:38 2012 > [1]PETSC ERROR: Libraries linked from /opt/HPC/petsc-3.3/Darwi/ksp/interface/itfunc.c > [2]PETSC ERROR: [2] KSPSolve line 351 /opt/HPC/petsc-3.3/src/ksp/ksp/interface/itfunc.c > [2]PETSC ERROR: --------------------- Error Message ------------------------------------ > [2]PETSC ERROR: Signal received! > [2]PETSC ERROR: ------------------------------------------------------------------------ > [2]PETSC ERROR: Petsc Release Version 3.3.0, Patch 2, unknown > [2]PETSC ERROR: See docs/changes/index.html for recent updates. > [2]PETSC ERROR: See docs/faq.html for hints about trouble shooting. > [2]PETSC ERROR: See docs/index.html for manual pages. > [2]PETSC ERROR: ------------------------------------------------------------------------ > [2]PETSC ERROR: ./ex54 on a Darwin-in named iMac.local by blaise Tue Jul 24 21:51:38 2012 > [2]PETSC ERROR: Libraries linked from /opt/HPC/petsc-3.3/Darwin-intel11.1-g/lib > [2]PETSC ERROR: Configure run at Mon Jul 23 20:49:44 2012 > [2]PETSC ERROR: Configure options --download-hdf5=1 --download-metis=1 --download-parmetis=1 --download-sowing=1 --download-triangle=1 AMG line 560 /opt/HPC/petsc-3.3/src/ksp/pc/impls/gamg/gamg.c > [3]PETSC ERROR: [3] PCSetUp line 810 /opt/HPC/petsc-3.3/src/ksp/pc/interface/precon.c > [3]PETSC ERROR: [3] KSPSetUp line 182 /opt/HPC/petsc-3.3/src/ksp/ksp/interface/itfunc.c > [3]PETSC ERROR: [3] KSPSolve line 351 /opt/HPC/petsc-3.3/src/ksp/ksp/interface/itfunc.c > [3]PETSC ERROR: --------------------- Error Message ------------------------------------ > [3]PETSC ERROR: Signal received! > [3]PETSC ERROR: ------------------------------------------------------------------------ > [3]PETSC ERROR: Petsc Release Version 3.3.0, Patch 2, unknown > [3]PETSC ERROR: See docs/changes/index.html for recent updates. > [3]PETSC ERROR: See docs/faq.html for hints about trouble shooting. > [3]PETSC ERROR: See docs/index.html for manual pages. > [3]PETSC ERROR: ------------------------------------------------------------------------ > [3]PETSC ERROR: ./ex54 on a Darwin-in named iMac.local by blaise Tue Jul 24 21:51:38 2012 > [3]PETSC ERROR: Libraries linked from /opt/HPC/petsc-3.3/Darwi-------------------------------------------------------- > [0]PETSC ERROR: Petsc Release Version 3.3.0, Patch 2, unknown > [0]PETSC ERROR: See docs/changes/index.html for recent updates. > [0]PETSC ERROR: See docs/faq.html for hints about trouble shooting. > [0]PETSC ERROR: See docs/index.html for manual pages. 
> [0]PETSC ERROR: ------------------------------------------------------------------------ > [0]PETSC ERROR: ./ex54 on a Darwin-in named iMac.local by blaise Tue Jul 24 21:51:38 2012 > [0]PETSC ERROR: Libraries linked from /opt/HPC/petsc-3.3/Darwin-intel11.1-g/lib > [0]PETSC ERROR: Configure run at Mon Jul 23 20:49:44 2012 > [0]PETSC ERROR: Configure options --download-hdf5=1 --download-metis=1 --download-parmetis=1 --download-sowing=1 --download-triangle=1 --download-yaml=1 --with-blas-lapack-dir=/opt/intel/Compiler/11.1/091/Frameworks/mkl --with-cmake=cmake --with-debugging=1 --with-mpi-dir=/opt/HPC/mpich2-1.4.1p1-intel11.1 --with-pic --with-shared-libraries=1 --with-vendor-compilers=intel --with-x11=1 CFLAGS= CXXFLAn-intel11.1-g/lib > [1]PETSC ERROR: Configure run at Mon Jul 23 20:49:44 2012 > [1]PETSC ERROR: Configure options --download-hdf5=1 --download-metis=1 --download-parmetis=1 --download-sowing=1 --download-triangle=1 --download-yaml=1 --with-blas-lapack-dir=/opt/intel/Compiler/11.1/091/Frameworks/mkl --with-cmake=cmake --with-debugging=1 --with-mpi-dir=/opt/HPC/mpich2-1.4.1p1-intel11.1 --with-pic --with-shared-libraries=1 --with-vendor-compilers=intel --with-x11=1 CFLAGS= CXXFLAGS= PETSC_ARCH=Darwin-intel11.1-g > [1]PETSC ERROR: ------------------------------------------------------------------------ > [1]PETSC ERROR: User provided function() line 0 in unknown directory unknown file > application called MPI_Abort(MPI_COMM_WORLD, 59) - process 1 > [cli_1]: aborting job: > application called MPI_Abort(MPI_COMM_WORLD, 59) - process 1 > --download-yaml=1 --with-blas-lapack-dir=/opt/intel/Compiler/11.1/091/Frameworks/mkl --with-cmake=cmake --with-debugging=1 --with-mpi-dir=/opt/HPC/mpich2-1.4.1p1-intel11.1 --with-pic --with-shared-libraries=1 --with-vendor-compilers=intel --with-x11=1 CFLAGS= CXXFLAGS= PETSC_ARCH=Darwin-intel11.1-g > [2]PETSC ERROR: ------------------------------------------------------------------------ > [2]PETSC ERROR: User provided function() line 0 in unknown directory unknown file > application called MPI_Abort(MPI_COMM_WORLD, 59) - process 2 > [cli_2]: aborting job: > application called MPI_Abort(MPI_COMM_WORLD, 59) - process 2 > n-intel11.1-g/lib > [3]PETSC ERROR: Configure run at Mon Jul 23 20:49:44 2012 > [3]PETSC ERROR: Configure options --download-hdf5=1 --download-metis=1 --download-parmetis=1 --download-sowing=1 --download-triangle=1 --download-yaml=1 --with-blas-lapack-dir=/opt/intel/Compiler/11.1/091/Frameworks/mkl --with-cmake=cmake --with-debugging=1 --with-mpi-dir=/opt/HPC/mpich2-1.4.1p1-intel11.1 --with-pic --with-shared-libraries=1 --with-vendor-compilers=intel --with-x11=1 CFLAGS= CXXFLAGS= PETSC_ARCH=Darwin-intel11.1-g > [3]PETSC ERROR: ------------------------------------------------------------------------ > [3]PETSC ERROR: User provided function() line 0 in unknown directory unknown file > application called MPI_Abort(MPI_COMM_WORLD, 59) - process 3 > [cli_3]: aborting job: > application called MPI_Abort(MPI_COMM_WORLD, 59) - process 3 > GS= PETSC_ARCH=Darwin-intel11.1-g > [0]PETSC ERROR: ------------------------------------------------------------------------ > [0]PETSC ERROR: User provided function() line 0 in unknown directory unknown file > application called MPI_Abort(MPI_COMM_WORLD, 59) - process 0 > [cli_0]: aborting job: > application called MPI_Abort(MPI_COMM_WORLD, 59) - process 0 /opt/HPC/petsc-3.3/src/ksp/ksp/examples/tutorials Possible problem with with ex54_0, diffs above ========================================= Also, I 
thought there was a suggestion for the default parameters for GAMG for symmetric elliptic problems in the documentation or the FAQ, but I can't find it. Any suggestion?

Regards,

Blaise

-- 
Department of Mathematics and Center for Computation & Technology
Louisiana State University, Baton Rouge, LA 70803, USA
Tel. +1 (225) 578 1612, Fax +1 (225) 578 4276 http://www.math.lsu.edu/~bourdin

From abhyshr at mcs.anl.gov Wed Jul 25 00:32:32 2012
From: abhyshr at mcs.anl.gov (Shri)
Date: Tue, 24 Jul 2012 22:32:32 -0700
Subject: [petsc-users] Pthread support
In-Reply-To: 
References: <1343095712.2963.10.camel@DeathStar> <1343098854.2963.21.camel@DeathStar> <1343099627.2963.24.camel@DeathStar> <1343099881.2963.27.camel@DeathStar> <80227B1A-197C-41EF-B5DF-D2F141C85103@mcs.anl.gov> <3BA99EE2-8CC4-49E1-A401-97A44E196960@mcs.anl.gov> <7ADDF657-4004-4B82-B401-3566F4066B3A@mcs.anl.gov> <1343157221.2963.68.camel@DeathStar>
Message-ID: 

I'm at a conference in San Diego, so I am slow in responding to emails. I'll try to get this in tomorrow.

Shri

On Jul 24, 2012, at 12:16 PM, Jed Brown wrote:
> On Tue, Jul 24, 2012 at 2:13 PM, Alexander Goncharov wrote:
> I actually tried taskset utility, and it did not help.
>
> As explained in the previous email, affinity is naively done by calling CPU_ZERO and then adding in one core per thread. Whenever my suggestion gets implemented, threads will inherit from taskset. For now, you can use -threadcomm_affinities ..., but that's low-level enough to be painful.

From mark.adams at columbia.edu Wed Jul 25 08:56:32 2012
From: mark.adams at columbia.edu (Mark F. Adams)
Date: Wed, 25 Jul 2012 09:56:32 -0400
Subject: [petsc-users] pc_gamg_type geo broken in 3.3?
In-Reply-To: <3278B0DE-4120-4D52-AAA6-61C438D7942D@lsu.edu>
References: <3278B0DE-4120-4D52-AAA6-61C438D7942D@lsu.edu>
Message-ID: <61979584-89A5-431D-A8C4-E9EC7D96FC33@columbia.edu>

Humm, I did check petsc-3.3 before it was released, and petsc-dev works.

Could you attach the debugger and get a line number?

Mark

On Jul 24, 2012, at 10:59 PM, Blaise Bourdin wrote:
> Hi,
>
> is pc_gamg_type geo broken in petsc-3.3?
I get the following when I make runex54 in src/ksp/ksp/examples/tutorials: > > iMac:tutorials blaise$ make runex54 > 1,28c1,128 > < 0 KSP Residual norm 132.598 > < 1 KSP Residual norm 39.159 > < 2 KSP Residual norm 15.7856 > < 3 KSP Residual norm 8.91321 > < 4 KSP Residual norm 6.95961 > < 5 KSP Residual norm 15.1387 > < 6 KSP Residual norm 12.5547 > < 7 KSP Residual norm 3.78056 > < 8 KSP Residual norm 2.38412 > < 9 KSP Residual norm 3.91952 > < 10 KSP Residual norm 1.3192 > < 11 KSP Residual norm 1.70681 > < 12 KSP Residual norm 1.74052 > < 13 KSP Residual norm 0.482779 > < 14 KSP Residual norm 0.634571 > < 15 KSP Residual norm 0.264686 > < 16 KSP Residual norm 0.240607 > < 17 KSP Residual norm 0.10998 > < 18 KSP Residual norm 0.0853072 > < 19 KSP Residual norm 0.0551939 > < 20 KSP Residual norm 0.0231032 > < 21 KSP Residual norm 0.0286234 > < 22 KSP Residual norm 0.0110345 > < 23 KSP Residual norm 0.00582651 > < 24 KSP Residual norm 0.00256534 > < 25 KSP Residual norm 0.00176112 > < 26 KSP Residual norm 0.000730267 > < Linear solve converged due to CONVERGED_RTOL iterations 26 > --- >> [2]PETSC ERROR: ------------------------------------------------------------------------ >> [2]PETSC ERROR: Caught signal number 11 SEGV: Segmentation Violation, probably memory access out of range >> [2]PETSC ERROR: [0]PETSC ERROR: ------------------------------------------------------------------------ >> [0]PETSC ERROR: Caught signal number 11 SEGV: Segmentation Violation, probably memory access out of range >> [0]PETSC ERROR: Try option -start_in_debugger or -on_error_attach_debugger >> [0]PETSC ERROR: or see http://www.mcs.anl.gov/petsc/documentation/faq.html#valgrind[0]PETSC ERROR: or try http://valgrind.org on GNU/linux and Apple Mac OS X to find memory corruption errors >> [0]PETSC ERROR: [1]PETSC ERROR: ------------------------------------------------------------------------ >> [1]PETSC ERROR: Caught signal number 11 SEGV: Segmentation Violation, probably memory access out of range >> [1]PETSC ERROR: Try option -start_in_debugger or -on_error_attach_debugger >> [1]PETSC ERROR: or see http://www.mcs.anl.gov/petsc/documentation/faq.html#valgrind[1]PETSC ERROR: or try http://valgrind.org on GNU/linux and Apple Mac OS X to find memory corruption errors >> [1]PETSC ERROR: likely location of problem given in stack below >> [1]PETSC ERROR: --------------------- Stack Frames ------------------------------------ >> [1]PETSC ERROR: Note: The EXACT line numbers in the stack are not available, >> [1]PETSC ERROR: INSTEAD the line number of the start of the function >> [1]PETSC ERROR: is given. >> [1]PETSC ERROR: [1] triangulateAndFormProl line 180 /opt/HPC/petsc-3.3/src/ksp/pc/impls/gamg/geo.c >> [1]PETSC ERROR: [1] PCGAMGProlongator_GEO line 756 /opt/HPC/petsc-3.3/src/ksp/pc/impls/gamg/geo.c >> [1]PETSC ERROR: [1] PCSetUp_GTry option -start_in_debugger or -on_error_attach_debugger >> [2]PETSC ERROR: or see http://www.mcs.anl.gov/petsc/documentation/faq.html#valgrind[2]PETSC ERROR: or try http://valgrind.org on GNU/linux and Apple Mac OS X to find memory corruption errors >> [2]PETSC ERROR: likely location of problem given in stack below >> [2]PETSC ERROR: --------------------- Stack Frames ------------------------------------ >> [2]PETSC ERROR: Note: The EXACT line numbers in the stack are not available, >> [2]PETSC ERROR: INSTEAD the line number of the start of the function >> [2]PETSC ERROR: is given. 
>> [... diff and SEGV traceback from processes 0-3, identical to the original report, trimmed ...]
> /opt/HPC/petsc-3.3/src/ksp/ksp/examples/tutorials
> Possible problem with with ex54_0, diffs above
>
========================================= > > Also, I though there was a suggestion for the default parameters for GAMG for symmetric elliptic problems in the documentation or the FAQ, but I can't find it. Any suggestion? > > Regards, > > Blaise > > -- > Department of Mathematics and Center for Computation & Technology > Louisiana State University, Baton Rouge, LA 70803, USA > Tel. +1 (225) 578 1612, Fax +1 (225) 578 4276 http://www.math.lsu.edu/~bourdin > > > > > > > > -- > Department of Mathematics and Center for Computation & Technology > Louisiana State University, Baton Rouge, LA 70803, USA > Tel. +1 (225) 578 1612, Fax +1 (225) 578 4276 http://www.math.lsu.edu/~bourdin > > > > > > > > From bourdin at lsu.edu Wed Jul 25 09:19:00 2012 From: bourdin at lsu.edu (Blaise Bourdin) Date: Wed, 25 Jul 2012 09:19:00 -0500 Subject: [petsc-users] pc_gamg_type geo broken in 3.3? In-Reply-To: <61979584-89A5-431D-A8C4-E9EC7D96FC33@columbia.edu> References: <3278B0DE-4120-4D52-AAA6-61C438D7942D@lsu.edu> <61979584-89A5-431D-A8C4-E9EC7D96FC33@columbia.edu> Message-ID: <3BFB2E36-255C-434B-983A-42F58D77082D@lsu.edu> Sorry, I should have done that with the initial bug report. I Here is stack trace I get when using the intel 11.1 compilers. The segfault appears to be in triangle. WHen using gcc, everything seems to work fine. Blaise Program received signal EXC_BAD_ACCESS, Could not access memory. Reason: KERN_INVALID_ADDRESS at address: 0x000000001402e000 0x000000010f1e55aa in poolinit (pool=0x10fc31580, bytecount=32, itemcount=4092, wtype=FLOATINGPOINT, alignment=0) at src/triangle.c:3227 3227 *(pool->firstblock) = (VOID *) NULL; (gdb) where #0 0x000000010f1e55aa in poolinit (pool=0x10fc31580, bytecount=32, itemcount=4092, wtype=FLOATINGPOINT, alignment=0) at src/triangle.c:3227 #1 0x000000010f1e5da7 in initializepointpool () at src/triangle.c:3547 #2 0x000000010f203462 in transfernodes (pointlist=0x7fb408aa2370, pointattriblist=0x0, pointmarkerlist=0x0, numberofpoints=2307, numberofpointattribs=0) at src/triangle.c:11486 #3 0x000000010f205f69 in triangulate (triswitches=0x7fff6d9429c8 "npczQ", in=0x7fff6d9429d0, out=0x7fff6d942828, vorout=0x0) at src/triangle.c:12965 #4 0x000000010e46c325 in triangulateAndFormProl (selected_2=0x7fb408a9ad70, data_stride=12100, coords=0x113b0c770, nselected_1=2307, clid_lid_1=0x7fb408ac8f70, agg_lists_1=0x7fb408ab9b70, crsGID=0x7fb408b00770, bs=1, a_Prol=0x7fb408abb370, a_worst_best=0x7fff6d942dd0) at geo.c:253 #5 0x000000010e47420d in PCGAMGProlongator_GEO (pc=0x7fb408a07370, Amat=0x7fb40889cb70, Gmat=0x7fb408b4d770, agg_lists=0x7fb408ab9b70, a_P_out=0x7fff6d944420) at geo.c:848 #6 0x000000010e429526 in PCSetUp_GAMG (pc=0x7fb408a07370) at gamg.c:658 #7 0x000000010ecc05d8 in PCSetUp (pc=0x7fb408a07370) at precon.c:832 #8 0x000000010e5df6ed in KSPSetUp (ksp=0x7fb4089ccf70) at itfunc.c:278 #9 0x000000010e5e12b3 in KSPSolve (ksp=0x7fb4089ccf70, b=0x7fb408968570, x=0x7fb40894cd70) at itfunc.c:402 #10 0x000000010dd4c632 in main () On Jul 25, 2012, at 8:56 AM, Mark F. Adams wrote: > Humm, I did check petsc-3.3 before it was released and petsc-dev works. > > Could you attache the debugger and get a line number? > > Mark > > On Jul 24, 2012, at 10:59 PM, Blaise Bourdin wrote: > >> Hi, >> >> is pc_gamg_type geo broken in petsc-3.3? 
I get the following when I make runex54 in src/ksp/ksp/examples/tutorials:
>>
>> iMac:tutorials blaise$ make runex54
>> [... diff and SEGV traceback identical to the original report trimmed ...]
>> /opt/HPC/petsc-3.3/src/ksp/ksp/examples/tutorials
>> Possible problem with with
ex54_0, diffs above
>> =========================================
>>
>> Also, I though there was a suggestion for the default parameters for GAMG for symmetric elliptic problems in the documentation or the FAQ, but I can't find it. Any suggestion?
>>
>> Regards,
>>
>> Blaise
>>
>> --
>> Department of Mathematics and Center for Computation & Technology
>> Louisiana State University, Baton Rouge, LA 70803, USA
>> Tel. +1 (225) 578 1612, Fax +1 (225) 578 4276 http://www.math.lsu.edu/~bourdin
>>
>
> --
> Department of Mathematics and Center for Computation & Technology
> Louisiana State University, Baton Rouge, LA 70803, USA
> Tel. +1 (225) 578 1612, Fax +1 (225) 578 4276 http://www.math.lsu.edu/~bourdin

From mark.adams at columbia.edu Wed Jul 25 09:46:43 2012
From: mark.adams at columbia.edu (Mark F. Adams)
Date: Wed, 25 Jul 2012 10:46:43 -0400
Subject: [petsc-users] pc_gamg_type geo broken in 3.3?
In-Reply-To: <3BFB2E36-255C-434B-983A-42F58D77082D@lsu.edu>
References: <3278B0DE-4120-4D52-AAA6-61C438D7942D@lsu.edu> <61979584-89A5-431D-A8C4-E9EC7D96FC33@columbia.edu> <3BFB2E36-255C-434B-983A-42F58D77082D@lsu.edu>
Message-ID: 

Note, the geo solver is really not useful; it is there as an experiment. But if you want to pursue this, I'd blow away externalpackages/Triangle and even your $PETSC_ARCH directory, then reconfigure and rebuild.

Mark

On Jul 25, 2012, at 10:19 AM, Blaise Bourdin wrote:
> Sorry, I should have done that with the initial bug report.
>
> Here is stack trace I get when using the intel 11.1 compilers. The segfault appears to be in triangle. WHen using gcc, everything seems to work fine.
>
> Blaise
>
> Program received signal EXC_BAD_ACCESS, Could not access memory.
> Reason: KERN_INVALID_ADDRESS at address: 0x000000001402e000 > 0x000000010f1e55aa in poolinit (pool=0x10fc31580, bytecount=32, itemcount=4092, wtype=FLOATINGPOINT, alignment=0) at src/triangle.c:3227 > 3227 *(pool->firstblock) = (VOID *) NULL; > (gdb) where > #0 0x000000010f1e55aa in poolinit (pool=0x10fc31580, bytecount=32, itemcount=4092, wtype=FLOATINGPOINT, alignment=0) at src/triangle.c:3227 > #1 0x000000010f1e5da7 in initializepointpool () at src/triangle.c:3547 > #2 0x000000010f203462 in transfernodes (pointlist=0x7fb408aa2370, pointattriblist=0x0, pointmarkerlist=0x0, numberofpoints=2307, numberofpointattribs=0) at src/triangle.c:11486 > #3 0x000000010f205f69 in triangulate (triswitches=0x7fff6d9429c8 "npczQ", in=0x7fff6d9429d0, out=0x7fff6d942828, vorout=0x0) at src/triangle.c:12965 > #4 0x000000010e46c325 in triangulateAndFormProl (selected_2=0x7fb408a9ad70, data_stride=12100, coords=0x113b0c770, nselected_1=2307, clid_lid_1=0x7fb408ac8f70, agg_lists_1=0x7fb408ab9b70, crsGID=0x7fb408b00770, bs=1, a_Prol=0x7fb408abb370, a_worst_best=0x7fff6d942dd0) at geo.c:253 > #5 0x000000010e47420d in PCGAMGProlongator_GEO (pc=0x7fb408a07370, Amat=0x7fb40889cb70, Gmat=0x7fb408b4d770, agg_lists=0x7fb408ab9b70, a_P_out=0x7fff6d944420) at geo.c:848 > #6 0x000000010e429526 in PCSetUp_GAMG (pc=0x7fb408a07370) at gamg.c:658 > #7 0x000000010ecc05d8 in PCSetUp (pc=0x7fb408a07370) at precon.c:832 > #8 0x000000010e5df6ed in KSPSetUp (ksp=0x7fb4089ccf70) at itfunc.c:278 > #9 0x000000010e5e12b3 in KSPSolve (ksp=0x7fb4089ccf70, b=0x7fb408968570, x=0x7fb40894cd70) at itfunc.c:402 > #10 0x000000010dd4c632 in main () > > > > On Jul 25, 2012, at 8:56 AM, Mark F. Adams wrote: > >> Humm, I did check petsc-3.3 before it was released and petsc-dev works. >> >> Could you attache the debugger and get a line number? >> >> Mark >> >> On Jul 24, 2012, at 10:59 PM, Blaise Bourdin wrote: >> >>> Hi, >>> >>> is pc_gamg_type geo broken in petsc-3.3? 
I get the following when I make runex54 in src/ksp/ksp/examples/tutorials:
>>>
>>> iMac:tutorials blaise$ make runex54
>>> [... diff and SEGV traceback identical to the original report trimmed ...]
>>>> [cli_0]:
aborting job: >>>> application called MPI_Abort(MPI_COMM_WORLD, 59) - process 0 >>> /opt/HPC/petsc-3.3/src/ksp/ksp/examples/tutorials >>> Possible problem with with ex54_0, diffs above >>> ========================================= >>> >>> Also, I though there was a suggestion for the default parameters for GAMG for symmetric elliptic problems in the documentation or the FAQ, but I can't find it. Any suggestion? >>> >>> Regards, >>> >>> Blaise >>> >>> -- >>> Department of Mathematics and Center for Computation & Technology >>> Louisiana State University, Baton Rouge, LA 70803, USA >>> Tel. +1 (225) 578 1612, Fax +1 (225) 578 4276 http://www.math.lsu.edu/~bourdin >>> >>> >>> >>> >>> >>> >>> >>> -- >>> Department of Mathematics and Center for Computation & Technology >>> Louisiana State University, Baton Rouge, LA 70803, USA >>> Tel. +1 (225) 578 1612, Fax +1 (225) 578 4276 http://www.math.lsu.edu/~bourdin >>> >>> >>> >>> >>> >>> >>> >>> >> > > -- > Department of Mathematics and Center for Computation & Technology > Louisiana State University, Baton Rouge, LA 70803, USA > Tel. +1 (225) 578 1612, Fax +1 (225) 578 4276 http://www.math.lsu.edu/~bourdin > > > > > > > > From chris.eldred at gmail.com Wed Jul 25 10:31:13 2012 From: chris.eldred at gmail.com (Chris Eldred) Date: Wed, 25 Jul 2012 09:31:13 -0600 Subject: [petsc-users] Segfault in DMMeshSetChart using Fortran Message-ID: I am getting a segfault in DMMeshSetChart and I was wondering if someone could shed a little light on it. PETSC (v3.3 p2) was compiled with gcc/gfortran/g++ (4.4.3) using the options: --with-clanguage=C++ --with-sieve --download-f-blas-lapack --download-parmetis --download-hdf5 --download-boost --download-metis. Running under gdb, the following error was obtained: Program received signal SIGSEGV, Segmentation fault. 0x083352aa in ALE::Obj >, ALE::malloc_allocator > > >::operator-> ( this=0x1c) at /home/celdred/Desktop/NTM/petsc-3.3-p2/include/sieve/ALE_mem.hh:675 675 X* operator->() const {return objPtr;}; The relevant code snippet is: call PetscInitialize(PETSC_NULL_CHARACTER,ierr) CHKERRQ(ierr) call MPI_Comm_rank(PETSC_COMM_WORLD,rank,ierr) CHKERRQ(ierr) call DMMeshCreate(PETSC_COMM_WORLD,model_mesh,ierr) CHKERRQ(ierr) ncells = nx*ny nvertices = nx*ny nedges = 2*nx*ny start_index = 0 end_index = ncells + nvertices + nedges - 1 call DMMeshSetChart(model_mesh, start_index, end_index , ierr) CHKERRQ(ierr) Any ideas? -- Chris Eldred DOE Computational Science Graduate Fellow Graduate Student, Atmospheric Science, Colorado State University B.S. Applied Computational Physics, Carnegie Mellon University, 2009 chris.eldred at gmail.com -------------- next part -------------- An HTML attachment was scrubbed... URL: From chris.eldred at gmail.com Wed Jul 25 11:23:50 2012 From: chris.eldred at gmail.com (Chris Eldred) Date: Wed, 25 Jul 2012 10:23:50 -0600 Subject: [petsc-users] Data shared between points in a Sieve DAG Message-ID: I was wondering if it was possible to have fields that are shared between points in a sieve DAG: For example, I would like to have data that is connected to both an edge and a cell (instead of just tied to a Section). Consider a cell with three edges (ie a triangular cell). Before I was just using a length 3 array attached to the cell with the convention that the ordering of the array matched the ordering of the edge list associated with the cell. Now, I would like an implementation that does not assume anything about the ordering of the edge list (since I am getting that from cones/supports). 
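(For a concrete picture of the Section-plus-cone lookup being asked about here, a minimal C sketch follows. It is not code from this thread: it assumes one scalar dof per edge laid out by a PetscSection over a local Vec, and it uses the later DMPlex spelling of calls that, in the petsc-dev of this period, carried the DMComplex prefix. This is essentially the cone/Closure-based access that Matt suggests further down the thread.)

#include <petscdmplex.h>

/* Illustrative sketch only (not from this thread): one scalar per edge, stored in a
   local Vec laid out by a PetscSection, read back for a given cell through the
   cell's cone.  No cell-local edge-ordering convention is needed: the offsets come
   from the Section keyed by the edge points themselves. */
PetscErrorCode ReadEdgeDataForCell(DM dm, PetscSection sec, Vec locVec, PetscInt cell)
{
  const PetscInt    *cone;
  const PetscScalar *a;
  PetscScalar        sum = 0.0;
  PetscInt           coneSize, e, off;
  PetscErrorCode     ierr;

  PetscFunctionBegin;
  ierr = DMPlexGetConeSize(dm, cell, &coneSize);CHKERRQ(ierr); /* number of edges bounding the cell */
  ierr = DMPlexGetCone(dm, cell, &cone);CHKERRQ(ierr);         /* the edge points themselves        */
  ierr = VecGetArrayRead(locVec, &a);CHKERRQ(ierr);
  for (e = 0; e < coneSize; ++e) {
    ierr = PetscSectionGetOffset(sec, cone[e], &off);CHKERRQ(ierr); /* where edge cone[e]'s dof lives */
    sum += a[off];                                                  /* the value attached to edge cone[e] */
  }
  ierr = VecRestoreArrayRead(locVec, &a);CHKERRQ(ierr);
  (void)sum; /* placeholder: real code would use the per-edge values here */
  PetscFunctionReturn(0);
}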
Thanks, -Chris Eldred -- Chris Eldred DOE Computational Science Graduate Fellow Graduate Student, Atmospheric Science, Colorado State University B.S. Applied Computational Physics, Carnegie Mellon University, 2009 chris.eldred at gmail.com -------------- next part -------------- An HTML attachment was scrubbed... URL: From knepley at gmail.com Wed Jul 25 12:34:43 2012 From: knepley at gmail.com (Matthew Knepley) Date: Wed, 25 Jul 2012 12:34:43 -0500 Subject: [petsc-users] sieve-dev Unstructured meshes in PETSC using Sieve In-Reply-To: References: Message-ID: On Tue, Jul 24, 2012 at 12:31 PM, Chris Eldred wrote: > Hey PETSC/Sieve Developers, > > I am building a nonlinear shallow water testbed model (along with an > associated eigensolver for the linear equations) intended to work on > unstructured Voronoi meshes and cubed-sphere grids (with arbitrary > block-structured refinement)- it will be a 2-D code. There will NOT be any > adaptive mesh refinement- the mesh is defined once at the start of the > application. It will support finite difference, finite volume and finite > element-type (spectral elements and Discontinuous Galerkin) schemes- so > variables will be defined on edges, cells and vertexes. I would like to use > PETSC/SLEPC (currently limited to v3.2 for both since that is the latest > version of SLEPC) for the spare linear algebra and eigenvalue solvers. This > is intended as a useful tool for researchers in atmospheric model > development- it will allow easy inter-comparison of different grids and > schemes under a common framework. > Cool. Use slepc-dev. > Right now I have a serial version (written in Fortran 90) that implements > a few different finite-difference schemes (along with a multigrid solver > for square and hexagonal meshes) on unstructured Voronoi meshes and I would > like to move to a parallel version (also using Fortran 90). The Sieve > framework seems like an excellent fit for defining the unstructured mesh, > managing variables defined on edges/faces/vertices and handling > scatter/gather options between processes. I was planning on doing parallel > partitioning using ParMetis. > That is definitely what it is for. > My understanding is that DMMesh handles mesh topology (interconnections, > etc) while Sections define variables and mesh geometry (edge lengths, > areas, etc.). Sections can be created over different depths/heights (chains > of points in Sieve) in order to define variables on vertices/edges/cells. > Yes. > I am looking for documentation and examples of code use. I found: > > http://www.mcs.anl.gov/petsc/petsc-dev/src/snes/examples/tutorials/ex62.c.html > > http://www.mcs.anl.gov/petsc/petsc-current/src/snes/examples/tutorials/ex12.c.html > > Are there other examples/documentation available? 
>

Here is my simple tutorial:

Building and Running ex62
--------------------------------------

First, configure with FEM stuff turned on:

  '--download-triangle',
  '--download-ctetgen',
  '--download-fiat',
  '--download-generator',
  '--download-chaco',
  '--download-metis',
  '--download-parmetis',
  '--download-scientificpython',

I also use

  '--with-dynamic-loading',
  '--with-shared-libraries',
  '--download-mpich',
  '--download-ml',

and if you want to try GPU stuff

  '--with-cuda',
  '--with-cuda-arch=sm_10',
  '--with-cuda-only',
  '--with-cudac=nvcc -m64',

Then build PETSc with the Python make:

  python2.7 ./config/builder2.py clean
  python2.7 ./config/builder2.py build

  python2.7 ./config/builder2.py --help
  python2.7 ./config/builder2.py build --help
  python2.7 ./config/builder2.py check --help

Once you have this, you should be able to build and run ex62:

  python2.7 ./config/builder2.py check src/snes/examples/tutorials/ex62.c --testnum=0

which runs the first test. You can run them all with no argument. All the options are listed at the top of ./config/builder.py.

> Also, I was wondering what the difference is between DMMesh and DMComplex- it appears that they both implement the Sieve framework?
>

DMMesh is the old DMComplex. I decided that C++ is a blight upon mankind and templates are its Furies, so I rewrote all of DMMesh in C, used Jed's new communication stuff, got rid of iterators, and made things integrate with the solvers much better.

Thanks,

Matt

> Thanks,
> Chris Eldred
>
> --
> Chris Eldred
> DOE Computational Science Graduate Fellow
> Graduate Student, Atmospheric Science, Colorado State University
> B.S. Applied Computational Physics, Carnegie Mellon University, 2009
> chris.eldred at gmail.com
>

--
What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead.
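As a companion to the DMComplex recommendation above, a hedged sketch of how a tiny two-triangle mesh could be laid out through its chart and cones; the function names used here (DMComplexCreate, DMComplexSetChart, DMComplexSetConeSize, DMComplexSetCone, DMComplexSymmetrize, DMComplexStratify) are assumed to mirror the DMMesh calls discussed elsewhere in this thread and should be checked against the petsc-dev headers actually installed:

    /* Sketch under assumed API: two triangles sharing an edge, described as a
       DAG with cells 0-1 and vertices 2-5 (edge points omitted for brevity). */
    #include <petscdmcomplex.h>   /* assumed header name */

    int main(int argc, char **argv)
    {
      DM             dm;
      PetscInt       cone0[3] = {2, 3, 4}, cone1[3] = {3, 5, 4};
      PetscErrorCode ierr;

      ierr = PetscInitialize(&argc, &argv, NULL, NULL);CHKERRQ(ierr);
      /* Some snapshots may instead want DMCreate() followed by DMSetType(). */
      ierr = DMComplexCreate(PETSC_COMM_WORLD, &dm);CHKERRQ(ierr);
      ierr = DMComplexSetChart(dm, 0, 6);CHKERRQ(ierr);      /* points 0..5 */
      ierr = DMComplexSetConeSize(dm, 0, 3);CHKERRQ(ierr);   /* cell 0: 3 vertices */
      ierr = DMComplexSetConeSize(dm, 1, 3);CHKERRQ(ierr);   /* cell 1: 3 vertices */
      ierr = DMSetUp(dm);CHKERRQ(ierr);                      /* allocate cone storage */
      ierr = DMComplexSetCone(dm, 0, cone0);CHKERRQ(ierr);
      ierr = DMComplexSetCone(dm, 1, cone1);CHKERRQ(ierr);
      ierr = DMComplexSymmetrize(dm);CHKERRQ(ierr);          /* build supports from cones */
      ierr = DMComplexStratify(dm);CHKERRQ(ierr);            /* compute depth/height strata */
      ierr = DMDestroy(&dm);CHKERRQ(ierr);
      ierr = PetscFinalize();
      return 0;
    }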
> > -- > Chris Eldred > DOE Computational Science Graduate Fellow > Graduate Student, Atmospheric Science, Colorado State University > B.S. Applied Computational Physics, Carnegie Mellon University, 2009 > chris.eldred at gmail.com > -- What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. -- Norbert Wiener -------------- next part -------------- An HTML attachment was scrubbed... URL: From knepley at gmail.com Wed Jul 25 12:39:36 2012 From: knepley at gmail.com (Matthew Knepley) Date: Wed, 25 Jul 2012 12:39:36 -0500 Subject: [petsc-users] sieve-dev Data shared between points in a Sieve DAG In-Reply-To: References: Message-ID: On Wed, Jul 25, 2012 at 11:23 AM, Chris Eldred wrote: > I was wondering if it was possible to have fields that are shared between > points in a sieve DAG: > > For example, I would like to have data that is connected to both an edge > and a cell (instead of just tied to a Section). Consider a cell with three > edges (ie a triangular cell). > > Before I was just using a length 3 array attached to the cell with the > convention that the ordering of the array matched the ordering of the edge > list associated with the cell. Now, I would like an implementation that > does not assume anything about the ordering of the edge list (since I am > getting that from cones/supports). > I think what you want is the Closure operation. The closure of a cell will give you all the unknowns on its edges and vertices. Does that make sense? Thanks, Matt > Thanks, > -Chris Eldred > > -- > Chris Eldred > DOE Computational Science Graduate Fellow > Graduate Student, Atmospheric Science, Colorado State University > B.S. Applied Computational Physics, Carnegie Mellon University, 2009 > chris.eldred at gmail.com > -- What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. -- Norbert Wiener -------------- next part -------------- An HTML attachment was scrubbed... URL: From chris.eldred at gmail.com Wed Jul 25 12:48:58 2012 From: chris.eldred at gmail.com (Chris Eldred) Date: Wed, 25 Jul 2012 11:48:58 -0600 Subject: [petsc-users] sieve-dev Unstructured meshes in PETSC using Sieve In-Reply-To: References: Message-ID: Thanks for the info!- I will modify my code to use DMComplex instead of DMMesh (and migrate to the latest version of petsc-dev/slepc-dev). I'll let you know if that does not get rid of the Segfault as well. On Wed, Jul 25, 2012 at 11:34 AM, Matthew Knepley wrote: > On Tue, Jul 24, 2012 at 12:31 PM, Chris Eldred wrote: > >> Hey PETSC/Sieve Developers, >> >> I am building a nonlinear shallow water testbed model (along with an >> associated eigensolver for the linear equations) intended to work on >> unstructured Voronoi meshes and cubed-sphere grids (with arbitrary >> block-structured refinement)- it will be a 2-D code. There will NOT be any >> adaptive mesh refinement- the mesh is defined once at the start of the >> application. It will support finite difference, finite volume and finite >> element-type (spectral elements and Discontinuous Galerkin) schemes- so >> variables will be defined on edges, cells and vertexes. I would like to use >> PETSC/SLEPC (currently limited to v3.2 for both since that is the latest >> version of SLEPC) for the spare linear algebra and eigenvalue solvers. 
This >> is intended as a useful tool for researchers in atmospheric model >> development- it will allow easy inter-comparison of different grids and >> schemes under a common framework. >> > > Cool. Use slepc-dev. > > >> Right now I have a serial version (written in Fortran 90) that implements >> a few different finite-difference schemes (along with a multigrid solver >> for square and hexagonal meshes) on unstructured Voronoi meshes and I would >> like to move to a parallel version (also using Fortran 90). The Sieve >> framework seems like an excellent fit for defining the unstructured mesh, >> managing variables defined on edges/faces/vertices and handling >> scatter/gather options between processes. I was planning on doing parallel >> partitioning using ParMetis. >> > > That is definitely what it is for. > > >> My understanding is that DMMesh handles mesh topology (interconnections, >> etc) while Sections define variables and mesh geometry (edge lengths, >> areas, etc.). Sections can be created over different depths/heights (chains >> of points in Sieve) in order to define variables on vertices/edges/cells. >> > > Yes. > > >> I am looking for documentation and examples of code use. I found: >> >> http://www.mcs.anl.gov/petsc/petsc-dev/src/snes/examples/tutorials/ex62.c.html >> >> http://www.mcs.anl.gov/petsc/petsc-current/src/snes/examples/tutorials/ex12.c.html >> >> Are there other examples/documentation available? >> > > Here is my simple tutorial: > > Building and Running ex62 > -------------------------------------- > > First, configure with FEM stuff turned on: > > '--download-triangle', > '--download-ctetgen', > '--download-fiat', > '--download-generator', > '--download-chaco', > '--download-metis', > '--download-parmetis', > '--download-scientificpython', > > I also use > > '--with-dynamic-loading', > '--with-shared-libraries', > '--download-mpich', > '--download-ml', > > and if you want to try GPU stuff > > '--with-cuda', > '--with-cuda-arch=sm_10', > '--with-cuda-only', > '--with-cudac=nvcc -m64', > > Then build PETSc with the Python make: > > python2.7 ./config/builder2.py clean > python2.7 ./config/builder2.py build things correctly as well> > > python2.7 ./config/builder2.py --help > python2.7 ./config/builder2.py build --help > python2.7 ./config/builder2.py check --help > > Once you have this, you should be able to build and run ex62 > > python2.7 ./config/builder2.py check src/snes/examples/tutorials/ex62.c > --testnum=0 > > which runs the first test. You can run them all with no argument. All the > options are listed > at the top of ./config/builder.py. > > > >> Also, I was wondering what the difference is between DMMesh and >> DMComplex- it appears that they both implement the Sieve framework? >> > > DMMesh is the old DMComplex. I decided that C++ is a blight upon mankind > and templates are its Furies, so > I rewrite all of DMMesh in C, used Jed's new communication stuff, got rid > of iterators, and made things integrate > with the solvers much better. > > Thanks, > > Matt > > >> Thanks, >> Chris Eldred >> >> -- >> Chris Eldred >> DOE Computational Science Graduate Fellow >> Graduate Student, Atmospheric Science, Colorado State University >> B.S. Applied Computational Physics, Carnegie Mellon University, 2009 >> chris.eldred at gmail.com >> > > > > -- > What most experimenters take for granted before they begin their > experiments is infinitely more interesting than any results to which their > experiments lead. 
> -- Norbert Wiener > -- Chris Eldred DOE Computational Science Graduate Fellow Graduate Student, Atmospheric Science, Colorado State University B.S. Applied Computational Physics, Carnegie Mellon University, 2009 chris.eldred at gmail.com -------------- next part -------------- An HTML attachment was scrubbed... URL: From chris.eldred at gmail.com Wed Jul 25 12:52:04 2012 From: chris.eldred at gmail.com (Chris Eldred) Date: Wed, 25 Jul 2012 11:52:04 -0600 Subject: [petsc-users] sieve-dev Data shared between points in a Sieve DAG In-Reply-To: References: Message-ID: The closure operation makes sense, but what I want is something a little different. I have a field that is defined as follows: field(edge,cell) = blah ie it really lives on the union of cells and edges (or vertex/edges, cells/vertexs, etc.) Is this something that can be defined using DMComplex and Sections? On Wed, Jul 25, 2012 at 11:39 AM, Matthew Knepley wrote: > On Wed, Jul 25, 2012 at 11:23 AM, Chris Eldred wrote: > >> I was wondering if it was possible to have fields that are shared between >> points in a sieve DAG: >> >> For example, I would like to have data that is connected to both an edge >> and a cell (instead of just tied to a Section). Consider a cell with three >> edges (ie a triangular cell). >> >> Before I was just using a length 3 array attached to the cell with the >> convention that the ordering of the array matched the ordering of the edge >> list associated with the cell. Now, I would like an implementation that >> does not assume anything about the ordering of the edge list (since I am >> getting that from cones/supports). >> > > I think what you want is the Closure operation. The closure of a cell will > give you all the unknowns on its edges and vertices. > Does that make sense? > > Thanks, > > Matt > > >> Thanks, >> -Chris Eldred >> >> -- >> Chris Eldred >> DOE Computational Science Graduate Fellow >> Graduate Student, Atmospheric Science, Colorado State University >> B.S. Applied Computational Physics, Carnegie Mellon University, 2009 >> chris.eldred at gmail.com >> > > > > -- > What most experimenters take for granted before they begin their > experiments is infinitely more interesting than any results to which their > experiments lead. > -- Norbert Wiener > -- Chris Eldred DOE Computational Science Graduate Fellow Graduate Student, Atmospheric Science, Colorado State University B.S. Applied Computational Physics, Carnegie Mellon University, 2009 chris.eldred at gmail.com -------------- next part -------------- An HTML attachment was scrubbed... URL: From knepley at gmail.com Wed Jul 25 12:57:23 2012 From: knepley at gmail.com (Matthew Knepley) Date: Wed, 25 Jul 2012 12:57:23 -0500 Subject: [petsc-users] sieve-dev Data shared between points in a Sieve DAG In-Reply-To: References: Message-ID: On Wed, Jul 25, 2012 at 12:52 PM, Chris Eldred wrote: > The closure operation makes sense, but what I want is something a little > different. > > I have a field that is defined as follows: > field(edge,cell) = blah > ie it really lives on the union of cells and edges (or vertex/edges, > cells/vertexs, etc.) > We need to make the language more precise. The union of the cell and edge is what closure would give you. > Is this something that can be defined using DMComplex and Sections? I cannot understand from this explanation. Can you give a small example? 
Thanks, Matt > On Wed, Jul 25, 2012 at 11:39 AM, Matthew Knepley wrote: > >> On Wed, Jul 25, 2012 at 11:23 AM, Chris Eldred wrote: >> >>> I was wondering if it was possible to have fields that are shared >>> between points in a sieve DAG: >>> >>> For example, I would like to have data that is connected to both an edge >>> and a cell (instead of just tied to a Section). Consider a cell with three >>> edges (ie a triangular cell). >>> >>> Before I was just using a length 3 array attached to the cell with the >>> convention that the ordering of the array matched the ordering of the edge >>> list associated with the cell. Now, I would like an implementation that >>> does not assume anything about the ordering of the edge list (since I am >>> getting that from cones/supports). >>> >> >> I think what you want is the Closure operation. The closure of a cell >> will give you all the unknowns on its edges and vertices. >> Does that make sense? >> >> Thanks, >> >> Matt >> >> >>> Thanks, >>> -Chris Eldred >>> >>> -- >>> Chris Eldred >>> DOE Computational Science Graduate Fellow >>> Graduate Student, Atmospheric Science, Colorado State University >>> B.S. Applied Computational Physics, Carnegie Mellon University, 2009 >>> chris.eldred at gmail.com >>> >> >> >> >> -- >> What most experimenters take for granted before they begin their >> experiments is infinitely more interesting than any results to which their >> experiments lead. >> -- Norbert Wiener >> > > > > -- > Chris Eldred > DOE Computational Science Graduate Fellow > Graduate Student, Atmospheric Science, Colorado State University > B.S. Applied Computational Physics, Carnegie Mellon University, 2009 > chris.eldred at gmail.com > -- What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. -- Norbert Wiener -------------- next part -------------- An HTML attachment was scrubbed... URL: From chris.eldred at gmail.com Wed Jul 25 13:07:33 2012 From: chris.eldred at gmail.com (Chris Eldred) Date: Wed, 25 Jul 2012 12:07:33 -0600 Subject: [petsc-users] sieve-dev Data shared between points in a Sieve DAG In-Reply-To: References: Message-ID: Lets consider the mesh from "Flexible Representation of Computational Meshes" on the LHS of figure 2. (0,1) (0,2) (0,3) and (0,4) are vertices; (0,5), (0,6), (0,7), (0,8) and (0,9) are edges; (0,10) and (0,11) are cells. My field would be defined as (for example): field ( (0,5) ; (0,10) ) = 1.0 field ( (0,6) ; (0,10) ) = 2.0 field ( (0,7) ; (0,11) ) = 1.3 etc. Does that make sense? On Wed, Jul 25, 2012 at 11:57 AM, Matthew Knepley wrote: > On Wed, Jul 25, 2012 at 12:52 PM, Chris Eldred wrote: > >> The closure operation makes sense, but what I want is something a little >> different. >> >> I have a field that is defined as follows: >> field(edge,cell) = blah >> ie it really lives on the union of cells and edges (or vertex/edges, >> cells/vertexs, etc.) >> > > We need to make the language more precise. The union of the cell and edge > is what > closure would give you. > > >> Is this something that can be defined using DMComplex and Sections? > > > I cannot understand from this explanation. Can you give a small example? 
> > Thanks, > > Matt > > >> On Wed, Jul 25, 2012 at 11:39 AM, Matthew Knepley wrote: >> >>> On Wed, Jul 25, 2012 at 11:23 AM, Chris Eldred wrote: >>> >>>> I was wondering if it was possible to have fields that are shared >>>> between points in a sieve DAG: >>>> >>>> For example, I would like to have data that is connected to both an >>>> edge and a cell (instead of just tied to a Section). Consider a cell with >>>> three edges (ie a triangular cell). >>>> >>>> Before I was just using a length 3 array attached to the cell with the >>>> convention that the ordering of the array matched the ordering of the edge >>>> list associated with the cell. Now, I would like an implementation that >>>> does not assume anything about the ordering of the edge list (since I am >>>> getting that from cones/supports). >>>> >>> >>> I think what you want is the Closure operation. The closure of a cell >>> will give you all the unknowns on its edges and vertices. >>> Does that make sense? >>> >>> Thanks, >>> >>> Matt >>> >>> >>>> Thanks, >>>> -Chris Eldred >>>> >>>> -- >>>> Chris Eldred >>>> DOE Computational Science Graduate Fellow >>>> Graduate Student, Atmospheric Science, Colorado State University >>>> B.S. Applied Computational Physics, Carnegie Mellon University, 2009 >>>> chris.eldred at gmail.com >>>> >>> >>> >>> >>> -- >>> What most experimenters take for granted before they begin their >>> experiments is infinitely more interesting than any results to which their >>> experiments lead. >>> -- Norbert Wiener >>> >> >> >> >> -- >> Chris Eldred >> DOE Computational Science Graduate Fellow >> Graduate Student, Atmospheric Science, Colorado State University >> B.S. Applied Computational Physics, Carnegie Mellon University, 2009 >> chris.eldred at gmail.com >> > > > > -- > What most experimenters take for granted before they begin their > experiments is infinitely more interesting than any results to which their > experiments lead. > -- Norbert Wiener > -- Chris Eldred DOE Computational Science Graduate Fellow Graduate Student, Atmospheric Science, Colorado State University B.S. Applied Computational Physics, Carnegie Mellon University, 2009 chris.eldred at gmail.com -------------- next part -------------- An HTML attachment was scrubbed... URL: From jedbrown at mcs.anl.gov Wed Jul 25 13:29:09 2012 From: jedbrown at mcs.anl.gov (Jed Brown) Date: Wed, 25 Jul 2012 13:29:09 -0500 Subject: [petsc-users] sieve-dev Data shared between points in a Sieve DAG In-Reply-To: References: Message-ID: On Wed, Jul 25, 2012 at 1:07 PM, Chris Eldred wrote: > Lets consider the mesh from "Flexible Representation of Computational > Meshes" on the LHS of figure 2. (0,1) (0,2) (0,3) and (0,4) are vertices; > (0,5), (0,6), (0,7), (0,8) and (0,9) are edges; (0,10) and (0,11) are > cells. My field would be defined as (for example): > > field ( (0,5) ; (0,10) ) = 1.0 > field ( (0,6) ; (0,10) ) = 2.0 > field ( (0,7) ; (0,11) ) = 1.3 > etc. > > Does that make sense? > Is this like a DG mesh? Where the value on a face is two-valued (different values from each side)? In that case, it is "cell data". Matt, this would be another case where the action of the symmetry group on the data may not simply inherit the action on the cone (i.e. what we talked about yesterday). -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From knepley at gmail.com Wed Jul 25 13:30:04 2012 From: knepley at gmail.com (Matthew Knepley) Date: Wed, 25 Jul 2012 13:30:04 -0500 Subject: [petsc-users] sieve-dev Data shared between points in a Sieve DAG In-Reply-To: References: Message-ID: On Wed, Jul 25, 2012 at 1:07 PM, Chris Eldred wrote: > Lets consider the mesh from "Flexible Representation of Computational > Meshes" on the LHS of figure 2. (0,1) (0,2) (0,3) and (0,4) are vertices; > (0,5), (0,6), (0,7), (0,8) and (0,9) are edges; (0,10) and (0,11) are > cells. My field would be defined as (for example): > > field ( (0,5) ; (0,10) ) = 1.0 > field ( (0,6) ; (0,10) ) = 2.0 > field ( (0,7) ; (0,11) ) = 1.3 > etc. > > Does that make sense? > Oh, are you wanting something like DG? The tying of data values to mesh points is primarily to indicate sharing of values. If you have something like DG, I would initially do something like you did, which is assign all the variables to the cell since they are not shared. Since the cone is always oriented, the association between edges and values would be guaranteed. I have thought about another way to do this, but I don't think its any easier. You could instead associate 2 values with an edge, one for one side and the other for another. You can then look at the coneOrientation for that edge in the cell to know which value to choose for the cell. I am not sure this is easier, but it does facilitate communication of the "other" value for cells with neighbors on other processes. Matt > On Wed, Jul 25, 2012 at 11:57 AM, Matthew Knepley wrote: > >> On Wed, Jul 25, 2012 at 12:52 PM, Chris Eldred wrote: >> >>> The closure operation makes sense, but what I want is something a little >>> different. >>> >>> I have a field that is defined as follows: >>> field(edge,cell) = blah >>> ie it really lives on the union of cells and edges (or vertex/edges, >>> cells/vertexs, etc.) >>> >> >> We need to make the language more precise. The union of the cell and edge >> is what >> closure would give you. >> >> >>> Is this something that can be defined using DMComplex and Sections? >> >> >> I cannot understand from this explanation. Can you give a small example? >> >> Thanks, >> >> Matt >> >> >>> On Wed, Jul 25, 2012 at 11:39 AM, Matthew Knepley wrote: >>> >>>> On Wed, Jul 25, 2012 at 11:23 AM, Chris Eldred wrote: >>>> >>>>> I was wondering if it was possible to have fields that are shared >>>>> between points in a sieve DAG: >>>>> >>>>> For example, I would like to have data that is connected to both an >>>>> edge and a cell (instead of just tied to a Section). Consider a cell with >>>>> three edges (ie a triangular cell). >>>>> >>>>> Before I was just using a length 3 array attached to the cell with the >>>>> convention that the ordering of the array matched the ordering of the edge >>>>> list associated with the cell. Now, I would like an implementation that >>>>> does not assume anything about the ordering of the edge list (since I am >>>>> getting that from cones/supports). >>>>> >>>> >>>> I think what you want is the Closure operation. The closure of a cell >>>> will give you all the unknowns on its edges and vertices. >>>> Does that make sense? >>>> >>>> Thanks, >>>> >>>> Matt >>>> >>>> >>>>> Thanks, >>>>> -Chris Eldred >>>>> >>>>> -- >>>>> Chris Eldred >>>>> DOE Computational Science Graduate Fellow >>>>> Graduate Student, Atmospheric Science, Colorado State University >>>>> B.S. 
Applied Computational Physics, Carnegie Mellon University, 2009 >>>>> chris.eldred at gmail.com >>>>> >>>> >>>> >>>> >>>> -- >>>> What most experimenters take for granted before they begin their >>>> experiments is infinitely more interesting than any results to which their >>>> experiments lead. >>>> -- Norbert Wiener >>>> >>> >>> >>> >>> -- >>> Chris Eldred >>> DOE Computational Science Graduate Fellow >>> Graduate Student, Atmospheric Science, Colorado State University >>> B.S. Applied Computational Physics, Carnegie Mellon University, 2009 >>> chris.eldred at gmail.com >>> >> >> >> >> -- >> What most experimenters take for granted before they begin their >> experiments is infinitely more interesting than any results to which their >> experiments lead. >> -- Norbert Wiener >> > > > > -- > Chris Eldred > DOE Computational Science Graduate Fellow > Graduate Student, Atmospheric Science, Colorado State University > B.S. Applied Computational Physics, Carnegie Mellon University, 2009 > chris.eldred at gmail.com > -- What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. -- Norbert Wiener -------------- next part -------------- An HTML attachment was scrubbed... URL: From chris.eldred at gmail.com Wed Jul 25 14:20:07 2012 From: chris.eldred at gmail.com (Chris Eldred) Date: Wed, 25 Jul 2012 13:20:07 -0600 Subject: [petsc-users] sieve-dev Data shared between points in a Sieve DAG In-Reply-To: References: Message-ID: On Wed, Jul 25, 2012 at 12:30 PM, Matthew Knepley wrote: > On Wed, Jul 25, 2012 at 1:07 PM, Chris Eldred wrote: > >> Lets consider the mesh from "Flexible Representation of Computational >> Meshes" on the LHS of figure 2. (0,1) (0,2) (0,3) and (0,4) are vertices; >> (0,5), (0,6), (0,7), (0,8) and (0,9) are edges; (0,10) and (0,11) are >> cells. My field would be defined as (for example): >> >> field ( (0,5) ; (0,10) ) = 1.0 >> field ( (0,6) ; (0,10) ) = 2.0 >> field ( (0,7) ; (0,11) ) = 1.3 >> etc. >> >> Does that make sense? >> > > Oh, are you wanting something like DG? The tying of data values to mesh > points is primarily to indicate > sharing of values. If you have something like DG, I would initially do > something like you did, which is > assign all the variables to the cell since they are not shared. Since the > cone is always oriented, the > association between edges and values would be guaranteed. > > I missed the little section on ConeOrientation in the DM man pages- that makes perfect sense. I could just traverse the arrays in each cell using the ConeOrientation and the association would be preserved. > I have thought about another way to do this, but I don't think its any > easier. You could instead associate > 2 values with an edge, one for one side and the other for another. You can > then look at the coneOrientation > for that edge in the cell to know which value to choose for the cell. I am > not sure this is easier, but it does > facilitate communication of the "other" value for cells with neighbors on > other processes. > > Matt > > >> On Wed, Jul 25, 2012 at 11:57 AM, Matthew Knepley wrote: >> >>> On Wed, Jul 25, 2012 at 12:52 PM, Chris Eldred wrote: >>> >>>> The closure operation makes sense, but what I want is something a >>>> little different. >>>> >>>> I have a field that is defined as follows: >>>> field(edge,cell) = blah >>>> ie it really lives on the union of cells and edges (or vertex/edges, >>>> cells/vertexs, etc.) 
>>>> >>> >>> We need to make the language more precise. The union of the cell and >>> edge is what >>> closure would give you. >>> >>> >>>> Is this something that can be defined using DMComplex and Sections? >>> >>> >>> I cannot understand from this explanation. Can you give a small example? >>> >>> Thanks, >>> >>> Matt >>> >>> >>>> On Wed, Jul 25, 2012 at 11:39 AM, Matthew Knepley wrote: >>>> >>>>> On Wed, Jul 25, 2012 at 11:23 AM, Chris Eldred >>>> > wrote: >>>>> >>>>>> I was wondering if it was possible to have fields that are shared >>>>>> between points in a sieve DAG: >>>>>> >>>>>> For example, I would like to have data that is connected to both an >>>>>> edge and a cell (instead of just tied to a Section). Consider a cell with >>>>>> three edges (ie a triangular cell). >>>>>> >>>>>> Before I was just using a length 3 array attached to the cell with >>>>>> the convention that the ordering of the array matched the ordering of the >>>>>> edge list associated with the cell. Now, I would like an implementation >>>>>> that does not assume anything about the ordering of the edge list (since I >>>>>> am getting that from cones/supports). >>>>>> >>>>> >>>>> I think what you want is the Closure operation. The closure of a cell >>>>> will give you all the unknowns on its edges and vertices. >>>>> Does that make sense? >>>>> >>>>> Thanks, >>>>> >>>>> Matt >>>>> >>>>> >>>>>> Thanks, >>>>>> -Chris Eldred >>>>>> >>>>>> -- >>>>>> Chris Eldred >>>>>> DOE Computational Science Graduate Fellow >>>>>> Graduate Student, Atmospheric Science, Colorado State University >>>>>> B.S. Applied Computational Physics, Carnegie Mellon University, 2009 >>>>>> chris.eldred at gmail.com >>>>>> >>>>> >>>>> >>>>> >>>>> -- >>>>> What most experimenters take for granted before they begin their >>>>> experiments is infinitely more interesting than any results to which their >>>>> experiments lead. >>>>> -- Norbert Wiener >>>>> >>>> >>>> >>>> >>>> -- >>>> Chris Eldred >>>> DOE Computational Science Graduate Fellow >>>> Graduate Student, Atmospheric Science, Colorado State University >>>> B.S. Applied Computational Physics, Carnegie Mellon University, 2009 >>>> chris.eldred at gmail.com >>>> >>> >>> >>> >>> -- >>> What most experimenters take for granted before they begin their >>> experiments is infinitely more interesting than any results to which their >>> experiments lead. >>> -- Norbert Wiener >>> >> >> >> >> -- >> Chris Eldred >> DOE Computational Science Graduate Fellow >> Graduate Student, Atmospheric Science, Colorado State University >> B.S. Applied Computational Physics, Carnegie Mellon University, 2009 >> chris.eldred at gmail.com >> > > > > -- > What most experimenters take for granted before they begin their > experiments is infinitely more interesting than any results to which their > experiments lead. > -- Norbert Wiener > -- Chris Eldred DOE Computational Science Graduate Fellow Graduate Student, Atmospheric Science, Colorado State University B.S. Applied Computational Physics, Carnegie Mellon University, 2009 chris.eldred at gmail.com -------------- next part -------------- An HTML attachment was scrubbed... URL: From chris.eldred at gmail.com Wed Jul 25 14:30:58 2012 From: chris.eldred at gmail.com (Chris Eldred) Date: Wed, 25 Jul 2012 13:30:58 -0600 Subject: [petsc-users] sieve-dev Unstructured meshes in PETSC using Sieve In-Reply-To: References: Message-ID: Where are the Fortran include files for DMComplex - I checked ${PETSC_DIR}/include/finclude but they are not there. 
The C/C++ headers are in ${PETSC_DIR}/include/ though. On Wed, Jul 25, 2012 at 11:48 AM, Chris Eldred wrote: > Thanks for the info!- I will modify my code to use DMComplex instead of > DMMesh (and migrate to the latest version of petsc-dev/slepc-dev). I'll let > you know if that does not get rid of the Segfault as well. > > > On Wed, Jul 25, 2012 at 11:34 AM, Matthew Knepley wrote: > >> On Tue, Jul 24, 2012 at 12:31 PM, Chris Eldred wrote: >> >>> Hey PETSC/Sieve Developers, >>> >>> I am building a nonlinear shallow water testbed model (along with an >>> associated eigensolver for the linear equations) intended to work on >>> unstructured Voronoi meshes and cubed-sphere grids (with arbitrary >>> block-structured refinement)- it will be a 2-D code. There will NOT be any >>> adaptive mesh refinement- the mesh is defined once at the start of the >>> application. It will support finite difference, finite volume and finite >>> element-type (spectral elements and Discontinuous Galerkin) schemes- so >>> variables will be defined on edges, cells and vertexes. I would like to use >>> PETSC/SLEPC (currently limited to v3.2 for both since that is the latest >>> version of SLEPC) for the spare linear algebra and eigenvalue solvers. This >>> is intended as a useful tool for researchers in atmospheric model >>> development- it will allow easy inter-comparison of different grids and >>> schemes under a common framework. >>> >> >> Cool. Use slepc-dev. >> >> >>> Right now I have a serial version (written in Fortran 90) that >>> implements a few different finite-difference schemes (along with a >>> multigrid solver for square and hexagonal meshes) on unstructured Voronoi >>> meshes and I would like to move to a parallel version (also using Fortran >>> 90). The Sieve framework seems like an excellent fit for defining the >>> unstructured mesh, managing variables defined on edges/faces/vertices and >>> handling scatter/gather options between processes. I was planning on doing >>> parallel partitioning using ParMetis. >>> >> >> That is definitely what it is for. >> >> >>> My understanding is that DMMesh handles mesh topology (interconnections, >>> etc) while Sections define variables and mesh geometry (edge lengths, >>> areas, etc.). Sections can be created over different depths/heights (chains >>> of points in Sieve) in order to define variables on vertices/edges/cells. >>> >> >> Yes. >> >> >>> I am looking for documentation and examples of code use. I found: >>> >>> http://www.mcs.anl.gov/petsc/petsc-dev/src/snes/examples/tutorials/ex62.c.html >>> >>> http://www.mcs.anl.gov/petsc/petsc-current/src/snes/examples/tutorials/ex12.c.html >>> >>> Are there other examples/documentation available? 
>>> >> >> Here is my simple tutorial: >> >> Building and Running ex62 >> -------------------------------------- >> >> First, configure with FEM stuff turned on: >> >> '--download-triangle', >> '--download-ctetgen', >> '--download-fiat', >> '--download-generator', >> '--download-chaco', >> '--download-metis', >> '--download-parmetis', >> '--download-scientificpython', >> >> I also use >> >> '--with-dynamic-loading', >> '--with-shared-libraries', >> '--download-mpich', >> '--download-ml', >> >> and if you want to try GPU stuff >> >> '--with-cuda', >> '--with-cuda-arch=sm_10', >> '--with-cuda-only', >> '--with-cudac=nvcc -m64', >> >> Then build PETSc with the Python make: >> >> python2.7 ./config/builder2.py clean >> python2.7 ./config/builder2.py build > things correctly as well> >> >> python2.7 ./config/builder2.py --help >> python2.7 ./config/builder2.py build --help >> python2.7 ./config/builder2.py check --help >> >> Once you have this, you should be able to build and run ex62 >> >> python2.7 ./config/builder2.py check src/snes/examples/tutorials/ex62.c >> --testnum=0 >> >> which runs the first test. You can run them all with no argument. All the >> options are listed >> at the top of ./config/builder.py. >> >> >> >>> Also, I was wondering what the difference is between DMMesh and >>> DMComplex- it appears that they both implement the Sieve framework? >>> >> >> DMMesh is the old DMComplex. I decided that C++ is a blight upon mankind >> and templates are its Furies, so >> I rewrite all of DMMesh in C, used Jed's new communication stuff, got rid >> of iterators, and made things integrate >> with the solvers much better. >> >> Thanks, >> >> Matt >> >> >>> Thanks, >>> Chris Eldred >>> >>> -- >>> Chris Eldred >>> DOE Computational Science Graduate Fellow >>> Graduate Student, Atmospheric Science, Colorado State University >>> B.S. Applied Computational Physics, Carnegie Mellon University, 2009 >>> chris.eldred at gmail.com >>> >> >> >> >> -- >> What most experimenters take for granted before they begin their >> experiments is infinitely more interesting than any results to which their >> experiments lead. >> -- Norbert Wiener >> > > > > -- > Chris Eldred > DOE Computational Science Graduate Fellow > Graduate Student, Atmospheric Science, Colorado State University > B.S. Applied Computational Physics, Carnegie Mellon University, 2009 > chris.eldred at gmail.com > -- Chris Eldred DOE Computational Science Graduate Fellow Graduate Student, Atmospheric Science, Colorado State University B.S. Applied Computational Physics, Carnegie Mellon University, 2009 chris.eldred at gmail.com -------------- next part -------------- An HTML attachment was scrubbed... URL: From knepley at gmail.com Wed Jul 25 14:42:22 2012 From: knepley at gmail.com (Matthew Knepley) Date: Wed, 25 Jul 2012 14:42:22 -0500 Subject: [petsc-users] sieve-dev Unstructured meshes in PETSC using Sieve In-Reply-To: References: Message-ID: On Wed, Jul 25, 2012 at 2:30 PM, Chris Eldred wrote: > Where are the Fortran include files for DMComplex - I checked > ${PETSC_DIR}/include/finclude but they are not there. The C/C++ headers are > in ${PETSC_DIR}/include/ though. > You are the first Fortran user :) Added them. Will test later. Matt > On Wed, Jul 25, 2012 at 11:48 AM, Chris Eldred wrote: > >> Thanks for the info!- I will modify my code to use DMComplex instead of >> DMMesh (and migrate to the latest version of petsc-dev/slepc-dev). I'll let >> you know if that does not get rid of the Segfault as well. 
>> >> >> On Wed, Jul 25, 2012 at 11:34 AM, Matthew Knepley wrote: >> >>> On Tue, Jul 24, 2012 at 12:31 PM, Chris Eldred wrote: >>> >>>> Hey PETSC/Sieve Developers, >>>> >>>> I am building a nonlinear shallow water testbed model (along with an >>>> associated eigensolver for the linear equations) intended to work on >>>> unstructured Voronoi meshes and cubed-sphere grids (with arbitrary >>>> block-structured refinement)- it will be a 2-D code. There will NOT be any >>>> adaptive mesh refinement- the mesh is defined once at the start of the >>>> application. It will support finite difference, finite volume and finite >>>> element-type (spectral elements and Discontinuous Galerkin) schemes- so >>>> variables will be defined on edges, cells and vertexes. I would like to use >>>> PETSC/SLEPC (currently limited to v3.2 for both since that is the latest >>>> version of SLEPC) for the spare linear algebra and eigenvalue solvers. This >>>> is intended as a useful tool for researchers in atmospheric model >>>> development- it will allow easy inter-comparison of different grids and >>>> schemes under a common framework. >>>> >>> >>> Cool. Use slepc-dev. >>> >>> >>>> Right now I have a serial version (written in Fortran 90) that >>>> implements a few different finite-difference schemes (along with a >>>> multigrid solver for square and hexagonal meshes) on unstructured Voronoi >>>> meshes and I would like to move to a parallel version (also using Fortran >>>> 90). The Sieve framework seems like an excellent fit for defining the >>>> unstructured mesh, managing variables defined on edges/faces/vertices and >>>> handling scatter/gather options between processes. I was planning on doing >>>> parallel partitioning using ParMetis. >>>> >>> >>> That is definitely what it is for. >>> >>> >>>> My understanding is that DMMesh handles mesh topology >>>> (interconnections, etc) while Sections define variables and mesh geometry >>>> (edge lengths, areas, etc.). Sections can be created over different >>>> depths/heights (chains of points in Sieve) in order to define variables on >>>> vertices/edges/cells. >>>> >>> >>> Yes. >>> >>> >>>> I am looking for documentation and examples of code use. I found: >>>> >>>> http://www.mcs.anl.gov/petsc/petsc-dev/src/snes/examples/tutorials/ex62.c.html >>>> >>>> http://www.mcs.anl.gov/petsc/petsc-current/src/snes/examples/tutorials/ex12.c.html >>>> >>>> Are there other examples/documentation available? 
>>>> >>> >>> Here is my simple tutorial: >>> >>> Building and Running ex62 >>> -------------------------------------- >>> >>> First, configure with FEM stuff turned on: >>> >>> '--download-triangle', >>> '--download-ctetgen', >>> '--download-fiat', >>> '--download-generator', >>> '--download-chaco', >>> '--download-metis', >>> '--download-parmetis', >>> '--download-scientificpython', >>> >>> I also use >>> >>> '--with-dynamic-loading', >>> '--with-shared-libraries', >>> '--download-mpich', >>> '--download-ml', >>> >>> and if you want to try GPU stuff >>> >>> '--with-cuda', >>> '--with-cuda-arch=sm_10', >>> '--with-cuda-only', >>> '--with-cudac=nvcc -m64', >>> >>> Then build PETSc with the Python make: >>> >>> python2.7 ./config/builder2.py clean >>> python2.7 ./config/builder2.py build >> things correctly as well> >>> >>> python2.7 ./config/builder2.py --help >>> python2.7 ./config/builder2.py build --help >>> python2.7 ./config/builder2.py check --help >>> >>> Once you have this, you should be able to build and run ex62 >>> >>> python2.7 ./config/builder2.py check src/snes/examples/tutorials/ex62.c >>> --testnum=0 >>> >>> which runs the first test. You can run them all with no argument. All >>> the options are listed >>> at the top of ./config/builder.py. >>> >>> >>> >>>> Also, I was wondering what the difference is between DMMesh and >>>> DMComplex- it appears that they both implement the Sieve framework? >>>> >>> >>> DMMesh is the old DMComplex. I decided that C++ is a blight upon mankind >>> and templates are its Furies, so >>> I rewrite all of DMMesh in C, used Jed's new communication stuff, got >>> rid of iterators, and made things integrate >>> with the solvers much better. >>> >>> Thanks, >>> >>> Matt >>> >>> >>>> Thanks, >>>> Chris Eldred >>>> >>>> -- >>>> Chris Eldred >>>> DOE Computational Science Graduate Fellow >>>> Graduate Student, Atmospheric Science, Colorado State University >>>> B.S. Applied Computational Physics, Carnegie Mellon University, 2009 >>>> chris.eldred at gmail.com >>>> >>> >>> >>> >>> -- >>> What most experimenters take for granted before they begin their >>> experiments is infinitely more interesting than any results to which their >>> experiments lead. >>> -- Norbert Wiener >>> >> >> >> >> -- >> Chris Eldred >> DOE Computational Science Graduate Fellow >> Graduate Student, Atmospheric Science, Colorado State University >> B.S. Applied Computational Physics, Carnegie Mellon University, 2009 >> chris.eldred at gmail.com >> > > > > -- > Chris Eldred > DOE Computational Science Graduate Fellow > Graduate Student, Atmospheric Science, Colorado State University > B.S. Applied Computational Physics, Carnegie Mellon University, 2009 > chris.eldred at gmail.com > -- What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. -- Norbert Wiener -------------- next part -------------- An HTML attachment was scrubbed... URL: From benjamin.kirk-1 at nasa.gov Wed Jul 25 16:35:34 2012 From: benjamin.kirk-1 at nasa.gov (Kirk, Benjamin (JSC-EG311)) Date: Wed, 25 Jul 2012 16:35:34 -0500 Subject: [petsc-users] KSP no longer returning NaNs? Message-ID: Hello - I've been using PETSc 3.1 for quite a while now and have been hesitant to upgrade because of some new behavior I found in 3.2. Let me explain... In petsc-3.1, if the KSP encountered a NaN it would return it to the application code. 
We actually liked this feature because it gives us an opportunity to catch the NaN and attempt recovery, in our case by decreasing the time step and trying again. It seems in petsc-3.2, however, that PETSc itself aborts internally, so we are unable to recover from the situation. Is there any way to get the old behavior back? Thanks, -Ben From knepley at gmail.com Wed Jul 25 16:42:39 2012 From: knepley at gmail.com (Matthew Knepley) Date: Wed, 25 Jul 2012 16:42:39 -0500 Subject: [petsc-users] KSP no longer returning NaNs? In-Reply-To: References: Message-ID: On Wed, Jul 25, 2012 at 4:35 PM, Kirk, Benjamin (JSC-EG311) < benjamin.kirk-1 at nasa.gov> wrote: > Hello - > > I've been using PETSc 3.1 for quite a while now and have been hesitant to > upgrade because of some new behavior I found in 3.2. Let me explain... > > In petsc-3.1, if the KSP encountered a NaN it would return it to the > application code. We actually liked this feature because it gives us an > opportunity to catch the NaN and attempt recovery, in our case by > decreasing > the time step and trying again. > > It seems in petsc-3.2, however, that PETSc itself aborts internally, so we > are unable to recover from the situation. > > Is there any way to get the old behavior back? > 1) How exactly could the KSP generate a NaN if it was not injected in A or b? 2) You can always check the return value of KSPSolve() and do what you did last time. Matt > Thanks, > > -Ben > > -- What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. -- Norbert Wiener -------------- next part -------------- An HTML attachment was scrubbed... URL: From jedbrown at mcs.anl.gov Wed Jul 25 16:51:30 2012 From: jedbrown at mcs.anl.gov (Jed Brown) Date: Wed, 25 Jul 2012 16:51:30 -0500 Subject: [petsc-users] KSP no longer returning NaNs? In-Reply-To: References: Message-ID: On Wed, Jul 25, 2012 at 4:42 PM, Matthew Knepley wrote: > 2) You can always check the return value of KSPSolve() and do what you did > last time. > Well, PetscError is called, which is not great. I think it should be able to clean up and return with a KSP_DIVERGED_NAN. (Why else would that enum be there?) Erroring by default is the right behavior, but I don't think it should be the only behavior. -------------- next part -------------- An HTML attachment was scrubbed... URL: From bsmith at mcs.anl.gov Wed Jul 25 16:58:18 2012 From: bsmith at mcs.anl.gov (Barry Smith) Date: Wed, 25 Jul 2012 16:58:18 -0500 Subject: [petsc-users] KSP no longer returning NaNs? In-Reply-To: References: Message-ID: As always a COMPLETE error report is useful. There are a million ways you could see this change in behavior but we can't spend time guessing. At a minimum please send the entire error output. I note in 3.3 the KSPDefaultConverged() still checks for NAN and returns a negative converged reason (not an error) so I can only guess why you are getting different behavior without more information., Barry On Jul 25, 2012, at 4:35 PM, Kirk, Benjamin (JSC-EG311) wrote: > Hello - > > I've been using PETSc 3.1 for quite a while now and have been hesitant to > upgrade because of some new behavior I found in 3.2. Let me explain... > > In petsc-3.1, if the KSP encountered a NaN it would return it to the > application code. We actually liked this feature because it gives us an > opportunity to catch the NaN and attempt recovery, in our case by decreasing > the time step and trying again. 
> > It seems in petsc-3.2, however, that PETSc itself aborts internally, so we > are unable to recover from the situation. > > Is there any way to get the old behavior back? > > Thanks, > > -Ben > From jedbrown at mcs.anl.gov Wed Jul 25 17:04:17 2012 From: jedbrown at mcs.anl.gov (Jed Brown) Date: Wed, 25 Jul 2012 17:04:17 -0500 Subject: [petsc-users] KSP no longer returning NaNs? In-Reply-To: References: Message-ID: On Wed, Jul 25, 2012 at 4:58 PM, Barry Smith wrote: > > As always a COMPLETE error report is useful. There are a million ways > you could see this change in behavior but we can't spend time guessing. At > a minimum please send the entire error output. > > I note in 3.3 the KSPDefaultConverged() still checks for NAN and > returns a negative converged reason (not an error) so I can only guess why > you are getting different behavior without more information., > This is probably the main complaint: http://petsc.cs.iit.edu/petsc/petsc-dev/rev/7d6f5cbe67bc > > > Barry > > On Jul 25, 2012, at 4:35 PM, Kirk, Benjamin (JSC-EG311) wrote: > > > Hello - > > > > I've been using PETSc 3.1 for quite a while now and have been hesitant to > > upgrade because of some new behavior I found in 3.2. Let me explain... > > > > In petsc-3.1, if the KSP encountered a NaN it would return it to the > > application code. We actually liked this feature because it gives us an > > opportunity to catch the NaN and attempt recovery, in our case by > decreasing > > the time step and trying again. > > > > It seems in petsc-3.2, however, that PETSc itself aborts internally, so > we > > are unable to recover from the situation. > > > > Is there any way to get the old behavior back? > > > > Thanks, > > > > -Ben > > > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From knepley at gmail.com Wed Jul 25 17:12:28 2012 From: knepley at gmail.com (Matthew Knepley) Date: Wed, 25 Jul 2012 17:12:28 -0500 Subject: [petsc-users] KSP no longer returning NaNs? In-Reply-To: References: Message-ID: On Wed, Jul 25, 2012 at 5:04 PM, Jed Brown wrote: > On Wed, Jul 25, 2012 at 4:58 PM, Barry Smith wrote: > >> >> As always a COMPLETE error report is useful. There are a million ways >> you could see this change in behavior but we can't spend time guessing. At >> a minimum please send the entire error output. >> >> I note in 3.3 the KSPDefaultConverged() still checks for NAN and >> returns a negative converged reason (not an error) so I can only guess why >> you are getting different behavior without more information., >> > > This is probably the main complaint: > > http://petsc.cs.iit.edu/petsc/petsc-dev/rev/7d6f5cbe67bc > Yes, but what in PETSc code could possibly create these NaNs? Matt > > >> >> >> Barry >> >> On Jul 25, 2012, at 4:35 PM, Kirk, Benjamin (JSC-EG311) wrote: >> >> > Hello - >> > >> > I've been using PETSc 3.1 for quite a while now and have been hesitant >> to >> > upgrade because of some new behavior I found in 3.2. Let me explain... >> > >> > In petsc-3.1, if the KSP encountered a NaN it would return it to the >> > application code. We actually liked this feature because it gives us an >> > opportunity to catch the NaN and attempt recovery, in our case by >> decreasing >> > the time step and trying again. >> > >> > It seems in petsc-3.2, however, that PETSc itself aborts internally, so >> we >> > are unable to recover from the situation. >> > >> > Is there any way to get the old behavior back? 
>> > >> > Thanks, >> > >> > -Ben >> > >> >> > -- What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. -- Norbert Wiener -------------- next part -------------- An HTML attachment was scrubbed... URL: From bsmith at mcs.anl.gov Wed Jul 25 17:14:01 2012 From: bsmith at mcs.anl.gov (Barry Smith) Date: Wed, 25 Jul 2012 17:14:01 -0500 Subject: [petsc-users] KSP no longer returning NaNs? In-Reply-To: References: Message-ID: On Jul 25, 2012, at 5:12 PM, Matthew Knepley wrote: > On Wed, Jul 25, 2012 at 5:04 PM, Jed Brown wrote: > On Wed, Jul 25, 2012 at 4:58 PM, Barry Smith wrote: > > As always a COMPLETE error report is useful. There are a million ways you could see this change in behavior but we can't spend time guessing. At a minimum please send the entire error output. > > I note in 3.3 the KSPDefaultConverged() still checks for NAN and returns a negative converged reason (not an error) so I can only guess why you are getting different behavior without more information., > > This is probably the main complaint: > > http://petsc.cs.iit.edu/petsc/petsc-dev/rev/7d6f5cbe67bc > > Yes, but what in PETSc code could possibly create these NaNs? Shell MatMult or PC :-) > > Matt > > > > > Barry > > On Jul 25, 2012, at 4:35 PM, Kirk, Benjamin (JSC-EG311) wrote: > > > Hello - > > > > I've been using PETSc 3.1 for quite a while now and have been hesitant to > > upgrade because of some new behavior I found in 3.2. Let me explain... > > > > In petsc-3.1, if the KSP encountered a NaN it would return it to the > > application code. We actually liked this feature because it gives us an > > opportunity to catch the NaN and attempt recovery, in our case by decreasing > > the time step and trying again. > > > > It seems in petsc-3.2, however, that PETSc itself aborts internally, so we > > are unable to recover from the situation. > > > > Is there any way to get the old behavior back? > > > > Thanks, > > > > -Ben > > > > > > > > -- > What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. > -- Norbert Wiener From bsmith at mcs.anl.gov Wed Jul 25 17:15:22 2012 From: bsmith at mcs.anl.gov (Barry Smith) Date: Wed, 25 Jul 2012 17:15:22 -0500 Subject: [petsc-users] KSP no longer returning NaNs? In-Reply-To: References: Message-ID: Ok. The goal with those changes was to allow the user to detect the problem as soon as possible in the code instead of delaying knowing where the NaN appeared. Exceptions would be one way of handling this, but we bagged exceptions in PETSc. We can add a nasty global and a PETSc option to indicate if they want immediate failure on NaN (in inner products, norms etc) or if they want the code to continue. I thought about this but never got around to it. Barry On Jul 25, 2012, at 5:04 PM, Jed Brown wrote: > On Wed, Jul 25, 2012 at 4:58 PM, Barry Smith wrote: > > As always a COMPLETE error report is useful. There are a million ways you could see this change in behavior but we can't spend time guessing. At a minimum please send the entire error output. 
> > I note in 3.3 the KSPDefaultConverged() still checks for NAN and returns a negative converged reason (not an error) so I can only guess why you are getting different behavior without more information., > > This is probably the main complaint: > > http://petsc.cs.iit.edu/petsc/petsc-dev/rev/7d6f5cbe67bc > > > > > Barry > > On Jul 25, 2012, at 4:35 PM, Kirk, Benjamin (JSC-EG311) wrote: > > > Hello - > > > > I've been using PETSc 3.1 for quite a while now and have been hesitant to > > upgrade because of some new behavior I found in 3.2. Let me explain... > > > > In petsc-3.1, if the KSP encountered a NaN it would return it to the > > application code. We actually liked this feature because it gives us an > > opportunity to catch the NaN and attempt recovery, in our case by decreasing > > the time step and trying again. > > > > It seems in petsc-3.2, however, that PETSc itself aborts internally, so we > > are unable to recover from the situation. > > > > Is there any way to get the old behavior back? > > > > Thanks, > > > > -Ben > > > > From benjamin.kirk-1 at nasa.gov Wed Jul 25 17:59:58 2012 From: benjamin.kirk-1 at nasa.gov (Kirk, Benjamin (JSC-EG311)) Date: Wed, 25 Jul 2012 17:59:58 -0500 Subject: [petsc-users] KSP no longer returning NaNs? In-Reply-To: Message-ID: On 7/25/12 5:04 PM, "Jed Brown" wrote: > > This is probably the main complaint: > > http://petsc.cs.iit.edu/petsc/petsc-dev/rev/7d6f5cbe67bc Thanks, Jed. The trace is below. Indeed, a call to VecNorm exits with an error, where previously it would have returned the NaN. So this may be in user code before handing b off to the KSP object. Is there an alternative, recommended way to ask a PETSc vector if it has NaNs without causing an abort? MPI: (gdb) #0 0x000000310240effe in waitpid () from /lib64/libpthread.so.0 MPI: #1 0x00002aaaaafaa3fc in mpi_sgi_system (header=) at sig.c:89 MPI: #2 MPI_SGI_stacktraceback (header=) at sig.c:272 MPI: #3 0x00002aaaaafaae8e in first_arriver_handler (signo=6, stack_trace_sem=0x2aaab2fc0500) at sig.c:415 MPI: #4 0x00002aaaaafab000 in slave_sig_handler (signo=6, siginfo=, extra=) at sig.c:494 MPI: #5 MPI: #6 0x0000003101c32885 in raise () from /lib64/libc.so.6 MPI: #7 0x0000003101c34065 in abort () from /lib64/libc.so.6 MPI: #8 0x00002aaaadc1b244 in PetscTraceBackErrorHandler () from /software/x86_64/petsc/3.2-p5/aerolab_hpc_l1-mpt-2.05-intel-12.1/lib/libpetsc.so MPI: #9 0x00002aaaadc1ac83 in PetscError () from /software/x86_64/petsc/3.2-p5/aerolab_hpc_l1-mpt-2.05-intel-12.1/lib/libpetsc.so MPI: #10 0x00002aaaadc7ff3f in VecNorm () from /software/x86_64/petsc/3.2-p5/aerolab_hpc_l1-mpt-2.05-intel-12.1/lib/libpetsc.so MPI: #11 0x00002aaaad0afb57 in libMesh::PetscVector::l2_norm() const () from /lustre/work/benkirk/codes/install/aerolab_hpc_l1/lib/x86_64-unknown-linux-gnu_opt/libmesh.so.0 MPI: #12 0x00002aaaad15420b in libMesh::NewtonSolver::solve() () from /lustre/work/benkirk/codes/install/aerolab_hpc_l1/lib/x86_64-unknown-linux-gnu_opt/libmesh.so.0 MPI: #13 0x00002aaaad19e72d in libMesh::NonlinearImplicitSystem::solve() () from /lustre/work/benkirk/codes/install/aerolab_hpc_l1/lib/x86_64-unknown-linux-gnu_opt/libmesh.so.0 MPI: #14 0x00002aaaab7818b7 in FINS::Physics::run(std::basic_string, std::allocator >) () from /lustre/work/benkirk/codes/install/aerolab_hpc_l1/lib/libfins.so.0 -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From sean at mcs.anl.gov Wed Jul 25 18:35:00 2012 From: sean at mcs.anl.gov (Sean Farley) Date: Wed, 25 Jul 2012 18:35:00 -0500 Subject: [petsc-users] KSP no longer returning NaNs? In-Reply-To: References: Message-ID: >> This is probably the main complaint: >> >> http://petsc.cs.iit.edu/petsc/petsc-dev/rev/7d6f5cbe67bc > > Thanks, Jed. The trace is below. Indeed, a call to VecNorm exits with an > error, where previously it would have returned the NaN. So this may be in > user code before handing b off to the KSP object. > > Is there an alternative, recommended way to ask a PETSc vector if it has > NaNs without causing an abort? You mention in your first email that checking for NAN is due to your timestep; perhaps an alternative route would be to use the newer TS code which has some adaptive options, namely: -ts_max_snes_failures <1>: Maximum number of nonlinear solve failures (TSSetMaxSNESFailures) -ts_max_reject <10>: Maximum number of step rejections before step fails (TSSetMaxStepRejections) -ts_adapt_dt_min <1e-20>: Minimum time step considered (TSAdaptSetStepLimits) -ts_adapt_dt_max <1e+50>: Maximum time step considered (TSAdaptSetStepLimits) -ts_adapt_scale_solve_failed <0.25>: Scale step by this factor if solve fails () But this only addresses your issue in the case of adjusting the time step (perhaps you test for NAN for more reasons). Just a thought. From chaoyang.cuboulder at gmail.com Wed Jul 25 17:46:31 2012 From: chaoyang.cuboulder at gmail.com (Chao Yang) Date: Wed, 25 Jul 2012 16:46:31 -0600 Subject: [petsc-users] Manual implementation of time-stepping in PETSc-3.3 Message-ID: <42C8851A-DA61-4988-BA30-43CF6C594715@colorado.edu> Dear PETSc group, In PETSc-3.3, DMMG has been completely replaced by using SNESSetDM() and pc_mg. And it is shown in the changelog that resolution-dependent data should be avoided in the user context. However, for a time-dependent problem, if a fully implicit time-stepping is implemented by hand, then some solutions at previous time steps for both fine and coarse meshes are needed in FormFunction. If those previous solutions are not stored in the user context, where can I put/access them? In PETSc-3.2, there is an example using DMMG with time-stepping: src/snes/examples/tutorials/ex54.c Is there a substitute for it in PETS-3.3? Best wishes, Chao From jedbrown at mcs.anl.gov Wed Jul 25 19:03:26 2012 From: jedbrown at mcs.anl.gov (Jed Brown) Date: Wed, 25 Jul 2012 19:03:26 -0500 Subject: [petsc-users] KSP no longer returning NaNs? In-Reply-To: References: Message-ID: On Wed, Jul 25, 2012 at 6:35 PM, Sean Farley wrote: > You mention in your first email that checking for NAN is due to your > timestep; perhaps an alternative route would be to use the newer TS > code which has some adaptive options, namely: > > -ts_max_snes_failures <1>: Maximum number of nonlinear solve > failures (TSSetMaxSNESFailures) > -ts_max_reject <10>: Maximum number of step rejections before step > fails (TSSetMaxStepRejections) > -ts_adapt_dt_min <1e-20>: Minimum time step considered > (TSAdaptSetStepLimits) > -ts_adapt_dt_max <1e+50>: Maximum time step considered > (TSAdaptSetStepLimits) > -ts_adapt_scale_solve_failed <0.25>: Scale step by this factor if > solve fails () > These options are only relative to SNESConvergedReason. The "problem" is that PetscError is being called. If NaN arising in the solver is not an "error", just a "did not converge", then we can't be calling PetscError. -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From sean at mcs.anl.gov Wed Jul 25 19:07:42 2012 From: sean at mcs.anl.gov (Sean Farley) Date: Wed, 25 Jul 2012 19:07:42 -0500 Subject: [petsc-users] KSP no longer returning NaNs? In-Reply-To: References: Message-ID: > These options are only relative to SNESConvergedReason. The "problem" is > that PetscError is being called. If NaN arising in the solver is not an > "error", just a "did not converge", then we can't be calling PetscError. Ah, I see. From jedbrown at mcs.anl.gov Wed Jul 25 19:11:26 2012 From: jedbrown at mcs.anl.gov (Jed Brown) Date: Wed, 25 Jul 2012 19:11:26 -0500 Subject: [petsc-users] Manual implementation of time-stepping in PETSc-3.3 In-Reply-To: <42C8851A-DA61-4988-BA30-43CF6C594715@colorado.edu> References: <42C8851A-DA61-4988-BA30-43CF6C594715@colorado.edu> Message-ID: On Wed, Jul 25, 2012 at 5:46 PM, Chao Yang wrote: > In PETSc-3.3, DMMG has been completely replaced by using SNESSetDM() and > pc_mg. And it is shown in the changelog that resolution-dependent data > should be avoided in the user context. > > However, for a time-dependent problem, if a fully implicit time-stepping > is implemented by hand, then some solutions at previous time steps for both > fine and coarse meshes are needed in FormFunction. > How do you intend to represent these on coarse meshes? For maximum flexibility, you can follow the methodology in TSTheta (src/ts/impls/implicit/theta/theta.c) which uses DMGetNamedVector() to cache the state vectors on levels and DMRestrictHook_Theta() to update it when necessary. > In PETSc-3.2, there is an example using DMMG with time-stepping: > src/snes/examples/tutorials/ex54.c > Is there a substitute for it in PETS-3.3? > Uh, the example did not use DMMG. It did use PCMG, but only for the linear problem and only with Galerkin coarse operators, thus the user callbacks were never called on coarse grids. If you use rediscretized coarse operators or FAS, you need to be able to handle the callbacks from coarse grids. -------------- next part -------------- An HTML attachment was scrubbed... URL: From caplanr at predsci.com Fri Jul 27 15:17:08 2012 From: caplanr at predsci.com (Ronald M. Caplan) Date: Fri, 27 Jul 2012 13:17:08 -0700 Subject: [petsc-users] segfault in MatAssemblyEnd() when using large matrices on multi-core MAC OS-X Message-ID: Hello, I am running a simple test code which takes a sparse AIJ matrix in PETSc and multiplies it by a vector. The matrix is defined as an AIJ MPI matrix. When I run the program on a single core, it runs fine. When I run it using MPI with multiple threads (I am on a 4-core, 8-thread MAC) I can get the code to run correctly for matrices under a certain size (2880 X 2880), but when the matrix is set to be larger, the code crashes with a segfault and the error says it was in the MatAssemblyEnd(). Sometimes it works with -n 2, but typically it always crashes when using multi-core. Any ideas on what it could be? Thanks, Ron Caplan -------------- next part -------------- An HTML attachment was scrubbed... URL: From jedbrown at mcs.anl.gov Fri Jul 27 15:19:48 2012 From: jedbrown at mcs.anl.gov (Jed Brown) Date: Fri, 27 Jul 2012 15:19:48 -0500 Subject: [petsc-users] segfault in MatAssemblyEnd() when using large matrices on multi-core MAC OS-X In-Reply-To: References: Message-ID: 1. Check for memory leaks using Valgrind. 2. Be sure to run --with-debugging=1 (the default) when trying to find the error. 3. Send the full error message and the relevant bit of code. On Fri, Jul 27, 2012 at 3:17 PM, Ronald M. 
Caplan wrote: > Hello, > > I am running a simple test code which takes a sparse AIJ matrix in PETSc > and multiplies it by a vector. > > The matrix is defined as an AIJ MPI matrix. > > When I run the program on a single core, it runs fine. > > When I run it using MPI with multiple threads (I am on a 4-core, 8-thread > MAC) I can get the code to run correctly for matrices under a certain size > (2880 X 2880), but when the matrix is set to be larger, the code crashes > with a segfault and the error says it was in the MatAssemblyEnd(). > Sometimes it works with -n 2, but typically it always crashes when using > multi-core. > > Any ideas on what it could be? > > Thanks, > > Ron Caplan > -------------- next part -------------- An HTML attachment was scrubbed... URL: From caplanr at predsci.com Fri Jul 27 15:35:33 2012 From: caplanr at predsci.com (Ronald M. Caplan) Date: Fri, 27 Jul 2012 13:35:33 -0700 Subject: [petsc-users] segfault in MatAssemblyEnd() when using large matrices on multi-core MAC OS-X In-Reply-To: References: Message-ID: 1) Checked it, had no leaks or any other problems that I could see. 2) Ran it with debugging and without. The debugging is how I know it was in MatAssemblyEnd(). 3) Here is the matrix part of the code: !Create matrix: call MatCreate(PETSC_COMM_WORLD,A,ierr) call MatSetSizes(A,PETSC_DECIDE,PETSC_DECIDE,N,N,ierr) call MatSetType(A,MATMPIAIJ,ierr) call MatSetFromOptions(A,ierr) !print*,'3nrt: ',3*nr*nt i = 16 IF(size .eq. 1) THEN j = 0 ELSE j = 8 END IF call MatMPIAIJSetPreallocation(A,i,PETSC_NULL_INTEGER, & j,PETSC_NULL_INTEGER,ierr) !Do not call this if using preallocation! !call MatSetUp(A,ierr) call MatGetOwnershipRange(A,i,j,ierr) print*,'Rank ',rank,' has range ',i,' and ',j !Get MAS matrix in CSR format (random numbers for now): IF (rank .eq. 0) THEN call GET_RAND_MAS_MATRIX(CSR_A,CSR_AI,CSR_AJ,nr,nt,np,M) print*,'Number of non-zero entries in matrix:',M !Store matrix values one-by-one (inefficient: better way ! more complicated - implement later) DO i=1,N !print*,'numofnonzerosinrowi:',CSR_AJ(i+1)-CSR_AJ(i)+1 DO j=CSR_AJ(i)+1,CSR_AJ(i+1) call MatSetValue(A,i-1,CSR_AI(j),CSR_A(j), & INSERT_VALUES,ierr) END DO END DO print*,'Done setting matrix values...' END IF !Assemble matrix A across all cores: call MatAssemblyBegin(A,MAT_FINAL_ASSEMBLY,ierr) print*,'between assembly' call MatAssemblyEnd(A,MAT_FINAL_ASSEMBLY,ierr) A couple things to note: a) my CSR_AJ is what most peaople would call ai etc b) my CSR array values are 0-index but the arrays are 1-indexed. Here is the run with one processor (-n 1): sumseq:PETSc sumseq$ valgrind mpiexec -n 1 ./petsctest -mat_view_info ==26297== Memcheck, a memory error detector ==26297== Copyright (C) 2002-2011, and GNU GPL'd, by Julian Seward et al. ==26297== Using Valgrind-3.7.0 and LibVEX; rerun with -h for copyright info ==26297== Command: mpiexec -n 1 ./petsctest -mat_view_info ==26297== UNKNOWN task message [id 3403, to mach_task_self(), reply 0x2803] N: 46575 cores: 1 MPI TEST: My rank is: 0 Rank 0 has range 0 and 46575 Number of non-zero entries in matrix: 690339 Done setting matrix values... between assembly Matrix Object: 1 MPI processes type: mpiaij rows=46575, cols=46575 total: nonzeros=690339, allocated nonzeros=745200 total number of mallocs used during MatSetValues calls =0 not using I-node (on process 0) routines PETSc y=Ax time: 367.9164 nsec/mp. PETSc y=Ax flops: 0.2251188 GFLOPS. 
==26297== ==26297== HEAP SUMMARY: ==26297== in use at exit: 139,984 bytes in 65 blocks ==26297== total heap usage: 938 allocs, 873 frees, 229,722 bytes allocated ==26297== ==26297== LEAK SUMMARY: ==26297== definitely lost: 0 bytes in 0 blocks ==26297== indirectly lost: 0 bytes in 0 blocks ==26297== possibly lost: 0 bytes in 0 blocks ==26297== still reachable: 139,984 bytes in 65 blocks ==26297== suppressed: 0 bytes in 0 blocks ==26297== Rerun with --leak-check=full to see details of leaked memory ==26297== ==26297== For counts of detected and suppressed errors, rerun with: -v ==26297== ERROR SUMMARY: 0 errors from 0 contexts (suppressed: 1 from 1) sumseq:PETSc sumseq$ Here is the run with 2 processors (-n 2) sumseq:PETSc sumseq$ valgrind mpiexec -n 2 ./petsctest -mat_view_info ==26301== Memcheck, a memory error detector ==26301== Copyright (C) 2002-2011, and GNU GPL'd, by Julian Seward et al. ==26301== Using Valgrind-3.7.0 and LibVEX; rerun with -h for copyright info ==26301== Command: mpiexec -n 2 ./petsctest -mat_view_info ==26301== UNKNOWN task message [id 3403, to mach_task_self(), reply 0x2803] N: 46575 cores: 2 MPI TEST: My rank is: 0 MPI TEST: My rank is: 1 Rank 0 has range 0 and 23288 Rank 1 has range 23288 and 46575 Number of non-zero entries in matrix: 690339 Done setting matrix values... between assembly between assembly [1]PETSC ERROR: ------------------------------------------------------------------------ [1]PETSC ERROR: Caught signal number 11 SEGV: Segmentation Violation, probably memory access out of range [1]PETSC ERROR: Try option -start_in_debugger or -on_error_attach_debugger [1]PETSC ERROR: or see http://www.mcs.anl.gov/petsc/documentation/faq.html#valgrind[1]PETSC ERROR: or try http://valgrind.org on GNU/linux and Apple Mac OS X to find memory corruption errors [1]PETSC ERROR: likely location of problem given in stack below [1]PETSC ERROR: --------------------- Stack Frames ------------------------------------ [1]PETSC ERROR: Note: The EXACT line numbers in the stack are not available, [1]PETSC ERROR: INSTEAD the line number of the start of the function [1]PETSC ERROR: is given. [1]PETSC ERROR: [1] MatStashScatterGetMesg_Private line 609 /usr/local/petsc-3.3-p2/src/mat/utils/matstash.c [1]PETSC ERROR: [1] MatAssemblyEnd_MPIAIJ line 646 /usr/local/petsc-3.3-p2/src/mat/impls/aij/mpi/mpiaij.c [1]PETSC ERROR: [1] MatAssemblyEnd line 4857 /usr/local/petsc-3.3-p2/src/mat/interface/matrix.c [1]PETSC ERROR: --------------------- Error Message ------------------------------------ [1]PETSC ERROR: Signal received! [1]PETSC ERROR: ------------------------------------------------------------------------ [1]PETSC ERROR: Petsc Release Version 3.3.0, Patch 2, Fri Jul 13 15:42:00 CDT 2012 [1]PETSC ERROR: See docs/changes/index.html for recent updates. [1]PETSC ERROR: See docs/faq.html for hints about trouble shooting. [1]PETSC ERROR: See docs/index.html for manual pages. 
[1]PETSC ERROR: ------------------------------------------------------------------------ [1]PETSC ERROR: ./petsctest on a arch-darw named sumseq.predsci.com by sumseq Fri Jul 27 13:34:36 2012 [1]PETSC ERROR: Libraries linked from /usr/local/petsc-3.3-p2/arch-darwin-c-debug/lib [1]PETSC ERROR: Configure run at Fri Jul 27 13:28:26 2012 [1]PETSC ERROR: Configure options --with-debugging=1 [1]PETSC ERROR: ------------------------------------------------------------------------ [1]PETSC ERROR: User provided function() line 0 in unknown directory unknown file application called MPI_Abort(MPI_COMM_WORLD, 59) - process 1 [cli_1]: aborting job: application called MPI_Abort(MPI_COMM_WORLD, 59) - process 1 ==26301== ==26301== HEAP SUMMARY: ==26301== in use at exit: 139,984 bytes in 65 blocks ==26301== total heap usage: 1,001 allocs, 936 frees, 234,886 bytes allocated ==26301== ==26301== LEAK SUMMARY: ==26301== definitely lost: 0 bytes in 0 blocks ==26301== indirectly lost: 0 bytes in 0 blocks ==26301== possibly lost: 0 bytes in 0 blocks ==26301== still reachable: 139,984 bytes in 65 blocks ==26301== suppressed: 0 bytes in 0 blocks ==26301== Rerun with --leak-check=full to see details of leaked memory ==26301== ==26301== For counts of detected and suppressed errors, rerun with: -v ==26301== ERROR SUMMARY: 0 errors from 0 contexts (suppressed: 1 from 1) sumseq:PETSc sumseq$ - Ron On Fri, Jul 27, 2012 at 1:19 PM, Jed Brown wrote: > 1. Check for memory leaks using Valgrind. > > 2. Be sure to run --with-debugging=1 (the default) when trying to find the > error. > > 3. Send the full error message and the relevant bit of code. > > > On Fri, Jul 27, 2012 at 3:17 PM, Ronald M. Caplan wrote: > >> Hello, >> >> I am running a simple test code which takes a sparse AIJ matrix in PETSc >> and multiplies it by a vector. >> >> The matrix is defined as an AIJ MPI matrix. >> >> When I run the program on a single core, it runs fine. >> >> When I run it using MPI with multiple threads (I am on a 4-core, 8-thread >> MAC) I can get the code to run correctly for matrices under a certain size >> (2880 X 2880), but when the matrix is set to be larger, the code crashes >> with a segfault and the error says it was in the MatAssemblyEnd(). >> Sometimes it works with -n 2, but typically it always crashes when using >> multi-core. >> >> Any ideas on what it could be? >> >> Thanks, >> >> Ron Caplan >> > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From knepley at gmail.com Fri Jul 27 15:52:06 2012 From: knepley at gmail.com (Matthew Knepley) Date: Fri, 27 Jul 2012 15:52:06 -0500 Subject: [petsc-users] segfault in MatAssemblyEnd() when using large matrices on multi-core MAC OS-X In-Reply-To: References: Message-ID: On Fri, Jul 27, 2012 at 3:35 PM, Ronald M. Caplan wrote: > 1) Checked it, had no leaks or any other problems that I could see. > > 2) Ran it with debugging and without. The debugging is how I know it was > in MatAssemblyEnd(). > Its rare when valgrind does not catch something, but it happens. From here I would really like: 1) The stack trace from the fault 2) The code to run here This is one of the oldest and most used pieces of PETSc. Its difficult to believe that the bug is there rather than a result of earlier memory corruption. 
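(As an aside, the stack trace being asked for can be obtained with the options the PETSc error output itself suggests: run with -on_error_attach_debugger, or -start_in_debugger noxterm, so that gdb attaches to the faulting rank, then type bt at the (gdb) prompt. An illustrative command line, using the binary name from this thread:

    mpiexec -n 2 ./petsctest -on_error_attach_debugger
)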
Thanks, Matt > 3) Here is the matrix part of the code: > > !Create matrix: > call MatCreate(PETSC_COMM_WORLD,A,ierr) > call MatSetSizes(A,PETSC_DECIDE,PETSC_DECIDE,N,N,ierr) > call MatSetType(A,MATMPIAIJ,ierr) > call MatSetFromOptions(A,ierr) > !print*,'3nrt: ',3*nr*nt > i = 16 > IF(size .eq. 1) THEN > j = 0 > ELSE > j = 8 > END IF > call MatMPIAIJSetPreallocation(A,i,PETSC_NULL_INTEGER, > & j,PETSC_NULL_INTEGER,ierr) > > !Do not call this if using preallocation! > !call MatSetUp(A,ierr) > > call MatGetOwnershipRange(A,i,j,ierr) > print*,'Rank ',rank,' has range ',i,' and ',j > > !Get MAS matrix in CSR format (random numbers for now): > IF (rank .eq. 0) THEN > call GET_RAND_MAS_MATRIX(CSR_A,CSR_AI,CSR_AJ,nr,nt,np,M) > print*,'Number of non-zero entries in matrix:',M > !Store matrix values one-by-one (inefficient: better way > ! more complicated - implement later) > > DO i=1,N > !print*,'numofnonzerosinrowi:',CSR_AJ(i+1)-CSR_AJ(i)+1 > DO j=CSR_AJ(i)+1,CSR_AJ(i+1) > call MatSetValue(A,i-1,CSR_AI(j),CSR_A(j), > & INSERT_VALUES,ierr) > > END DO > END DO > print*,'Done setting matrix values...' > END IF > > !Assemble matrix A across all cores: > call MatAssemblyBegin(A,MAT_FINAL_ASSEMBLY,ierr) > print*,'between assembly' > call MatAssemblyEnd(A,MAT_FINAL_ASSEMBLY,ierr) > > > > A couple things to note: > a) my CSR_AJ is what most peaople would call ai etc > b) my CSR array values are 0-index but the arrays are 1-indexed. > > > > Here is the run with one processor (-n 1): > > sumseq:PETSc sumseq$ valgrind mpiexec -n 1 ./petsctest -mat_view_info > ==26297== Memcheck, a memory error detector > ==26297== Copyright (C) 2002-2011, and GNU GPL'd, by Julian Seward et al. > ==26297== Using Valgrind-3.7.0 and LibVEX; rerun with -h for copyright info > ==26297== Command: mpiexec -n 1 ./petsctest -mat_view_info > ==26297== > UNKNOWN task message [id 3403, to mach_task_self(), reply 0x2803] > N: 46575 > cores: 1 > MPI TEST: My rank is: 0 > Rank 0 has range 0 and 46575 > Number of non-zero entries in matrix: 690339 > Done setting matrix values... > between assembly > Matrix Object: 1 MPI processes > type: mpiaij > rows=46575, cols=46575 > total: nonzeros=690339, allocated nonzeros=745200 > total number of mallocs used during MatSetValues calls =0 > not using I-node (on process 0) routines > PETSc y=Ax time: 367.9164 nsec/mp. > PETSc y=Ax flops: 0.2251188 GFLOPS. > ==26297== > ==26297== HEAP SUMMARY: > ==26297== in use at exit: 139,984 bytes in 65 blocks > ==26297== total heap usage: 938 allocs, 873 frees, 229,722 bytes > allocated > ==26297== > ==26297== LEAK SUMMARY: > ==26297== definitely lost: 0 bytes in 0 blocks > ==26297== indirectly lost: 0 bytes in 0 blocks > ==26297== possibly lost: 0 bytes in 0 blocks > ==26297== still reachable: 139,984 bytes in 65 blocks > ==26297== suppressed: 0 bytes in 0 blocks > ==26297== Rerun with --leak-check=full to see details of leaked memory > ==26297== > ==26297== For counts of detected and suppressed errors, rerun with: -v > ==26297== ERROR SUMMARY: 0 errors from 0 contexts (suppressed: 1 from 1) > sumseq:PETSc sumseq$ > > > > Here is the run with 2 processors (-n 2) > > sumseq:PETSc sumseq$ valgrind mpiexec -n 2 ./petsctest -mat_view_info > ==26301== Memcheck, a memory error detector > ==26301== Copyright (C) 2002-2011, and GNU GPL'd, by Julian Seward et al. 
> ==26301== Using Valgrind-3.7.0 and LibVEX; rerun with -h for copyright info > ==26301== Command: mpiexec -n 2 ./petsctest -mat_view_info > ==26301== > UNKNOWN task message [id 3403, to mach_task_self(), reply 0x2803] > N: 46575 > cores: 2 > MPI TEST: My rank is: 0 > MPI TEST: My rank is: 1 > Rank 0 has range 0 and 23288 > Rank 1 has range 23288 and 46575 > Number of non-zero entries in matrix: 690339 > Done setting matrix values... > between assembly > between assembly > [1]PETSC ERROR: > ------------------------------------------------------------------------ > [1]PETSC ERROR: Caught signal number 11 SEGV: Segmentation Violation, > probably memory access out of range > [1]PETSC ERROR: Try option -start_in_debugger or -on_error_attach_debugger > [1]PETSC ERROR: or see > http://www.mcs.anl.gov/petsc/documentation/faq.html#valgrind[1]PETSCERROR: or try > http://valgrind.org on GNU/linux and Apple Mac OS X to find memory > corruption errors > [1]PETSC ERROR: likely location of problem given in stack below > [1]PETSC ERROR: --------------------- Stack Frames > ------------------------------------ > [1]PETSC ERROR: Note: The EXACT line numbers in the stack are not > available, > [1]PETSC ERROR: INSTEAD the line number of the start of the function > [1]PETSC ERROR: is given. > [1]PETSC ERROR: [1] MatStashScatterGetMesg_Private line 609 > /usr/local/petsc-3.3-p2/src/mat/utils/matstash.c > [1]PETSC ERROR: [1] MatAssemblyEnd_MPIAIJ line 646 > /usr/local/petsc-3.3-p2/src/mat/impls/aij/mpi/mpiaij.c > [1]PETSC ERROR: [1] MatAssemblyEnd line 4857 > /usr/local/petsc-3.3-p2/src/mat/interface/matrix.c > [1]PETSC ERROR: --------------------- Error Message > ------------------------------------ > [1]PETSC ERROR: Signal received! > [1]PETSC ERROR: > ------------------------------------------------------------------------ > [1]PETSC ERROR: Petsc Release Version 3.3.0, Patch 2, Fri Jul 13 15:42:00 > CDT 2012 > [1]PETSC ERROR: See docs/changes/index.html for recent updates. > [1]PETSC ERROR: See docs/faq.html for hints about trouble shooting. > [1]PETSC ERROR: See docs/index.html for manual pages. 
> [1]PETSC ERROR: > ------------------------------------------------------------------------ > [1]PETSC ERROR: ./petsctest on a arch-darw named sumseq.predsci.com by > sumseq Fri Jul 27 13:34:36 2012 > [1]PETSC ERROR: Libraries linked from > /usr/local/petsc-3.3-p2/arch-darwin-c-debug/lib > [1]PETSC ERROR: Configure run at Fri Jul 27 13:28:26 2012 > [1]PETSC ERROR: Configure options --with-debugging=1 > [1]PETSC ERROR: > ------------------------------------------------------------------------ > [1]PETSC ERROR: User provided function() line 0 in unknown directory > unknown file > application called MPI_Abort(MPI_COMM_WORLD, 59) - process 1 > [cli_1]: aborting job: > application called MPI_Abort(MPI_COMM_WORLD, 59) - process 1 > ==26301== > ==26301== HEAP SUMMARY: > ==26301== in use at exit: 139,984 bytes in 65 blocks > ==26301== total heap usage: 1,001 allocs, 936 frees, 234,886 bytes > allocated > ==26301== > ==26301== LEAK SUMMARY: > ==26301== definitely lost: 0 bytes in 0 blocks > ==26301== indirectly lost: 0 bytes in 0 blocks > ==26301== possibly lost: 0 bytes in 0 blocks > ==26301== still reachable: 139,984 bytes in 65 blocks > ==26301== suppressed: 0 bytes in 0 blocks > ==26301== Rerun with --leak-check=full to see details of leaked memory > ==26301== > ==26301== For counts of detected and suppressed errors, rerun with: -v > ==26301== ERROR SUMMARY: 0 errors from 0 contexts (suppressed: 1 from 1) > sumseq:PETSc sumseq$ > > > > - Ron > > > > > > > On Fri, Jul 27, 2012 at 1:19 PM, Jed Brown wrote: > >> 1. Check for memory leaks using Valgrind. >> >> 2. Be sure to run --with-debugging=1 (the default) when trying to find >> the error. >> >> 3. Send the full error message and the relevant bit of code. >> >> >> On Fri, Jul 27, 2012 at 3:17 PM, Ronald M. Caplan wrote: >> >>> Hello, >>> >>> I am running a simple test code which takes a sparse AIJ matrix in PETSc >>> and multiplies it by a vector. >>> >>> The matrix is defined as an AIJ MPI matrix. >>> >>> When I run the program on a single core, it runs fine. >>> >>> When I run it using MPI with multiple threads (I am on a 4-core, >>> 8-thread MAC) I can get the code to run correctly for matrices under a >>> certain size (2880 X 2880), but when the matrix is set to be larger, the >>> code crashes with a segfault and the error says it was in the >>> MatAssemblyEnd(). Sometimes it works with -n 2, but typically it always >>> crashes when using multi-core. >>> >>> Any ideas on what it could be? >>> >>> Thanks, >>> >>> Ron Caplan >>> >> >> > -- What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. -- Norbert Wiener -------------- next part -------------- An HTML attachment was scrubbed... URL: From caplanr at predsci.com Fri Jul 27 16:14:13 2012 From: caplanr at predsci.com (Ronald M. Caplan) Date: Fri, 27 Jul 2012 14:14:13 -0700 Subject: [petsc-users] segfault in MatAssemblyEnd() when using large matrices on multi-core MAC OS-X In-Reply-To: References: Message-ID: Hi, I do not know how to get the stack trace. Attached is the code and makefile. The value of npts is set to 25 which is where the code crashes with more than one core running. If I set the npts to around 10, then the code works with up to 12 processes (fast too!) but no more otherwise there is a crash as well. Thanks for your help! - Ron C On Fri, Jul 27, 2012 at 1:52 PM, Matthew Knepley wrote: > On Fri, Jul 27, 2012 at 3:35 PM, Ronald M. 
Caplan wrote: > >> 1) Checked it, had no leaks or any other problems that I could see. >> >> 2) Ran it with debugging and without. The debugging is how I know it was >> in MatAssemblyEnd(). >> > > Its rare when valgrind does not catch something, but it happens. From here > I would really like: > > 1) The stack trace from the fault > > 2) The code to run here > > This is one of the oldest and most used pieces of PETSc. Its difficult to > believe that the bug is there > rather than a result of earlier memory corruption. > > Thanks, > > Matt > > >> 3) Here is the matrix part of the code: >> >> !Create matrix: >> call MatCreate(PETSC_COMM_WORLD,A,ierr) >> call MatSetSizes(A,PETSC_DECIDE,PETSC_DECIDE,N,N,ierr) >> call MatSetType(A,MATMPIAIJ,ierr) >> call MatSetFromOptions(A,ierr) >> !print*,'3nrt: ',3*nr*nt >> i = 16 >> IF(size .eq. 1) THEN >> j = 0 >> ELSE >> j = 8 >> END IF >> call MatMPIAIJSetPreallocation(A,i,PETSC_NULL_INTEGER, >> & j,PETSC_NULL_INTEGER,ierr) >> >> !Do not call this if using preallocation! >> !call MatSetUp(A,ierr) >> >> call MatGetOwnershipRange(A,i,j,ierr) >> print*,'Rank ',rank,' has range ',i,' and ',j >> >> !Get MAS matrix in CSR format (random numbers for now): >> IF (rank .eq. 0) THEN >> call GET_RAND_MAS_MATRIX(CSR_A,CSR_AI,CSR_AJ,nr,nt,np,M) >> print*,'Number of non-zero entries in matrix:',M >> !Store matrix values one-by-one (inefficient: better way >> ! more complicated - implement later) >> >> DO i=1,N >> !print*,'numofnonzerosinrowi:',CSR_AJ(i+1)-CSR_AJ(i)+1 >> DO j=CSR_AJ(i)+1,CSR_AJ(i+1) >> call MatSetValue(A,i-1,CSR_AI(j),CSR_A(j), >> & INSERT_VALUES,ierr) >> >> END DO >> END DO >> print*,'Done setting matrix values...' >> END IF >> >> !Assemble matrix A across all cores: >> call MatAssemblyBegin(A,MAT_FINAL_ASSEMBLY,ierr) >> print*,'between assembly' >> call MatAssemblyEnd(A,MAT_FINAL_ASSEMBLY,ierr) >> >> >> >> A couple things to note: >> a) my CSR_AJ is what most peaople would call ai etc >> b) my CSR array values are 0-index but the arrays are 1-indexed. >> >> >> >> Here is the run with one processor (-n 1): >> >> sumseq:PETSc sumseq$ valgrind mpiexec -n 1 ./petsctest -mat_view_info >> ==26297== Memcheck, a memory error detector >> ==26297== Copyright (C) 2002-2011, and GNU GPL'd, by Julian Seward et al. >> ==26297== Using Valgrind-3.7.0 and LibVEX; rerun with -h for copyright >> info >> ==26297== Command: mpiexec -n 1 ./petsctest -mat_view_info >> ==26297== >> UNKNOWN task message [id 3403, to mach_task_self(), reply 0x2803] >> N: 46575 >> cores: 1 >> MPI TEST: My rank is: 0 >> Rank 0 has range 0 and 46575 >> Number of non-zero entries in matrix: 690339 >> Done setting matrix values... >> between assembly >> Matrix Object: 1 MPI processes >> type: mpiaij >> rows=46575, cols=46575 >> total: nonzeros=690339, allocated nonzeros=745200 >> total number of mallocs used during MatSetValues calls =0 >> not using I-node (on process 0) routines >> PETSc y=Ax time: 367.9164 nsec/mp. >> PETSc y=Ax flops: 0.2251188 GFLOPS. 
>> ==26297== >> ==26297== HEAP SUMMARY: >> ==26297== in use at exit: 139,984 bytes in 65 blocks >> ==26297== total heap usage: 938 allocs, 873 frees, 229,722 bytes >> allocated >> ==26297== >> ==26297== LEAK SUMMARY: >> ==26297== definitely lost: 0 bytes in 0 blocks >> ==26297== indirectly lost: 0 bytes in 0 blocks >> ==26297== possibly lost: 0 bytes in 0 blocks >> ==26297== still reachable: 139,984 bytes in 65 blocks >> ==26297== suppressed: 0 bytes in 0 blocks >> ==26297== Rerun with --leak-check=full to see details of leaked memory >> ==26297== >> ==26297== For counts of detected and suppressed errors, rerun with: -v >> ==26297== ERROR SUMMARY: 0 errors from 0 contexts (suppressed: 1 from 1) >> sumseq:PETSc sumseq$ >> >> >> >> Here is the run with 2 processors (-n 2) >> >> sumseq:PETSc sumseq$ valgrind mpiexec -n 2 ./petsctest -mat_view_info >> ==26301== Memcheck, a memory error detector >> ==26301== Copyright (C) 2002-2011, and GNU GPL'd, by Julian Seward et al. >> ==26301== Using Valgrind-3.7.0 and LibVEX; rerun with -h for copyright >> info >> ==26301== Command: mpiexec -n 2 ./petsctest -mat_view_info >> ==26301== >> UNKNOWN task message [id 3403, to mach_task_self(), reply 0x2803] >> N: 46575 >> cores: 2 >> MPI TEST: My rank is: 0 >> MPI TEST: My rank is: 1 >> Rank 0 has range 0 and 23288 >> Rank 1 has range 23288 and 46575 >> Number of non-zero entries in matrix: 690339 >> Done setting matrix values... >> between assembly >> between assembly >> [1]PETSC ERROR: >> ------------------------------------------------------------------------ >> [1]PETSC ERROR: Caught signal number 11 SEGV: Segmentation Violation, >> probably memory access out of range >> [1]PETSC ERROR: Try option -start_in_debugger or -on_error_attach_debugger >> [1]PETSC ERROR: or see >> http://www.mcs.anl.gov/petsc/documentation/faq.html#valgrind[1]PETSCERROR: or try >> http://valgrind.org on GNU/linux and Apple Mac OS X to find memory >> corruption errors >> [1]PETSC ERROR: likely location of problem given in stack below >> [1]PETSC ERROR: --------------------- Stack Frames >> ------------------------------------ >> [1]PETSC ERROR: Note: The EXACT line numbers in the stack are not >> available, >> [1]PETSC ERROR: INSTEAD the line number of the start of the function >> [1]PETSC ERROR: is given. >> [1]PETSC ERROR: [1] MatStashScatterGetMesg_Private line 609 >> /usr/local/petsc-3.3-p2/src/mat/utils/matstash.c >> [1]PETSC ERROR: [1] MatAssemblyEnd_MPIAIJ line 646 >> /usr/local/petsc-3.3-p2/src/mat/impls/aij/mpi/mpiaij.c >> [1]PETSC ERROR: [1] MatAssemblyEnd line 4857 >> /usr/local/petsc-3.3-p2/src/mat/interface/matrix.c >> [1]PETSC ERROR: --------------------- Error Message >> ------------------------------------ >> [1]PETSC ERROR: Signal received! >> [1]PETSC ERROR: >> ------------------------------------------------------------------------ >> [1]PETSC ERROR: Petsc Release Version 3.3.0, Patch 2, Fri Jul 13 15:42:00 >> CDT 2012 >> [1]PETSC ERROR: See docs/changes/index.html for recent updates. >> [1]PETSC ERROR: See docs/faq.html for hints about trouble shooting. >> [1]PETSC ERROR: See docs/index.html for manual pages. 
>> [1]PETSC ERROR: >> ------------------------------------------------------------------------ >> [1]PETSC ERROR: ./petsctest on a arch-darw named sumseq.predsci.com by >> sumseq Fri Jul 27 13:34:36 2012 >> [1]PETSC ERROR: Libraries linked from >> /usr/local/petsc-3.3-p2/arch-darwin-c-debug/lib >> [1]PETSC ERROR: Configure run at Fri Jul 27 13:28:26 2012 >> [1]PETSC ERROR: Configure options --with-debugging=1 >> [1]PETSC ERROR: >> ------------------------------------------------------------------------ >> [1]PETSC ERROR: User provided function() line 0 in unknown directory >> unknown file >> application called MPI_Abort(MPI_COMM_WORLD, 59) - process 1 >> [cli_1]: aborting job: >> application called MPI_Abort(MPI_COMM_WORLD, 59) - process 1 >> ==26301== >> ==26301== HEAP SUMMARY: >> ==26301== in use at exit: 139,984 bytes in 65 blocks >> ==26301== total heap usage: 1,001 allocs, 936 frees, 234,886 bytes >> allocated >> ==26301== >> ==26301== LEAK SUMMARY: >> ==26301== definitely lost: 0 bytes in 0 blocks >> ==26301== indirectly lost: 0 bytes in 0 blocks >> ==26301== possibly lost: 0 bytes in 0 blocks >> ==26301== still reachable: 139,984 bytes in 65 blocks >> ==26301== suppressed: 0 bytes in 0 blocks >> ==26301== Rerun with --leak-check=full to see details of leaked memory >> ==26301== >> ==26301== For counts of detected and suppressed errors, rerun with: -v >> ==26301== ERROR SUMMARY: 0 errors from 0 contexts (suppressed: 1 from 1) >> sumseq:PETSc sumseq$ >> >> >> >> - Ron >> >> >> >> >> >> >> On Fri, Jul 27, 2012 at 1:19 PM, Jed Brown wrote: >> >>> 1. Check for memory leaks using Valgrind. >>> >>> 2. Be sure to run --with-debugging=1 (the default) when trying to find >>> the error. >>> >>> 3. Send the full error message and the relevant bit of code. >>> >>> >>> On Fri, Jul 27, 2012 at 3:17 PM, Ronald M. Caplan wrote: >>> >>>> Hello, >>>> >>>> I am running a simple test code which takes a sparse AIJ matrix in >>>> PETSc and multiplies it by a vector. >>>> >>>> The matrix is defined as an AIJ MPI matrix. >>>> >>>> When I run the program on a single core, it runs fine. >>>> >>>> When I run it using MPI with multiple threads (I am on a 4-core, >>>> 8-thread MAC) I can get the code to run correctly for matrices under a >>>> certain size (2880 X 2880), but when the matrix is set to be larger, the >>>> code crashes with a segfault and the error says it was in the >>>> MatAssemblyEnd(). Sometimes it works with -n 2, but typically it always >>>> crashes when using multi-core. >>>> >>>> Any ideas on what it could be? >>>> >>>> Thanks, >>>> >>>> Ron Caplan >>>> >>> >>> >> > > > -- > What most experimenters take for granted before they begin their > experiments is infinitely more interesting than any results to which their > experiments lead. > -- Norbert Wiener > -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: petsctest.F Type: application/octet-stream Size: 15159 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: makefile Type: application/octet-stream Size: 265 bytes Desc: not available URL: From nakib.mojojojo at gmail.com Sun Jul 29 23:52:42 2012 From: nakib.mojojojo at gmail.com (Nakib Haider) Date: Mon, 30 Jul 2012 00:52:42 -0400 Subject: [petsc-users] updated version / slower execution Message-ID: <5016131A.7030706@uottawa.ca> Hello I updated from 3.0.0-p12 to 3.3-p2. 
After updating my code to run on the new version, I noticed that it runs significantly slower on a Intel Core i3-2330M (2.2 GHz) machine (running petsc 3.3-p1) than on an old Pentium Dual-Core CPU T4200 (2.0 GHz) machine (running petsc 3.0.0-p12). The code is not parallel. Any suggestion on why this is happening will greatly help me. Thank you -- Nakib Haider Graduate Student MCD 304 Department of Physics University of Ottawa From bsmith at mcs.anl.gov Sun Jul 29 23:56:28 2012 From: bsmith at mcs.anl.gov (Barry Smith) Date: Sun, 29 Jul 2012 23:56:28 -0500 Subject: [petsc-users] updated version / slower execution In-Reply-To: <5016131A.7030706@uottawa.ca> References: <5016131A.7030706@uottawa.ca> Message-ID: You need to run both with -log_summary to see if some part of the code is going slower. But, my guess is because of the way we changed the default convergence criteria for linear solves. Run both versions with -ksp_monitor and see if they are converging the same but the new one is taking many more iterations. Barry On Jul 29, 2012, at 11:52 PM, Nakib Haider wrote: > Hello > > I updated from 3.0.0-p12 to 3.3-p2. After updating my code to run on the new version, I noticed that it runs significantly slower on a Intel Core i3-2330M (2.2 GHz) machine (running petsc 3.3-p1) than on an old Pentium Dual-Core CPU T4200 (2.0 GHz) machine (running petsc 3.0.0-p12). The code is not parallel. Any suggestion on why this is happening will greatly help me. > > Thank you > > -- > Nakib Haider > Graduate Student > MCD 304 > Department of Physics > University of Ottawa > From mrosso at uci.edu Mon Jul 30 16:06:33 2012 From: mrosso at uci.edu (Michele Rosso) Date: Mon, 30 Jul 2012 14:06:33 -0700 Subject: [petsc-users] Multigrid preconditioning Message-ID: <5016F759.4030401@uci.edu> Hi, I am solving a variable coefficients Poisson equation with periodic BCs. The equation is discretized by using the standard 5-points stencil finite differencing scheme. I managed to solve the system successfullywith the PCG method and now I would like to add a preconditioner to speed up the calculation. My idea is to use the multigrid preconditioner. Example ex22f.F implements what I think I need. If I understand correctly example ex22f.F, the subroutines "ComputeRHS" and "ComputeMatrix" define how the matrix and rhs-vector have to be computed at each level. In my case tough, both the jacobian and the rhs-vector cannot be computed "analytically", that is, they depend on variables whose values are available only at the finest grid. How can I overcome this difficulty? Thank you, Michele -------------- next part -------------- An HTML attachment was scrubbed... URL: From jedbrown at mcs.anl.gov Mon Jul 30 16:18:14 2012 From: jedbrown at mcs.anl.gov (Jed Brown) Date: Mon, 30 Jul 2012 16:18:14 -0500 Subject: [petsc-users] Multigrid preconditioning In-Reply-To: <5016F759.4030401@uci.edu> References: <5016F759.4030401@uci.edu> Message-ID: On Mon, Jul 30, 2012 at 4:06 PM, Michele Rosso wrote: > Hi, > > I am solving a variable coefficients Poisson equation with periodic BCs. > The equation is discretized by using the standard 5-points stencil finite > differencing scheme. > I managed to solve the system successfully with the PCG method and now I > would like to add > a preconditioner to speed up the calculation. My idea is to use the > multigrid preconditioner. > > Example ex22f.F implements what I think I need. 
> If I understand correctly example ex22f.F, the subroutines "ComputeRHS" > and "ComputeMatrix" define how the > matrix and rhs-vector have to be computed at each level. > In my case tough, both the jacobian and the rhs-vector cannot be computed > "analytically", that is, they depend on variables > whose values are available only at the finest grid. > > How can I overcome this difficulty? > Two possibilities: 1. homogenize on your own and rediscretize 2. use Galerkin coarse operators (possibly with algebraic multigrid) Option 2 is much more convenient because it never For geometric multigrid using DMDA, just use -pc_type mg -pc_mg_galerkin For algebraic multigrid, use -pc_type gamg -pc_gamg_agg_nsmooths 1 -------------- next part -------------- An HTML attachment was scrubbed... URL: From hadsed at gmail.com Mon Jul 30 16:19:39 2012 From: hadsed at gmail.com (Hadayat Seddiqi) Date: Mon, 30 Jul 2012 17:19:39 -0400 Subject: [petsc-users] Integrating PETSc with existing software using CMake Message-ID: Hello, I'm working on a large numerical software project whose framework has largely been developed already. We're using CMake to generate makefiles. I'm also using SLEPc (for full disclosure). The examples given by PETSc and SLEPc documentation require me to include makefiles, but I don't know of any straightforward way to command CMake to do this for me. I have looked at the FAQ's link for the CMake question: https://github.com/jedbrown/dohp But this seems very old, and in any case it doesn't exactly work. I'm not an expert on CMake, so I couldn't say what was the causing the problem, but in the end it told me it could not find the PETSc libraries. It seemed to be rather complicated-- I know PETSc will be where I need it, so I don't need all the verification that it's there and everything works. I thought, with the benefit of more intimate knowledge of how PETSc runs, that someone could show a much simpler way (it seems to me that this ought to be the case). I hope this is the right list. I have fully configured and installed PETSc/SLEPc, as well as having successfully run the examples with MPI. Thanks, Had -------------- next part -------------- An HTML attachment was scrubbed... URL: From jedbrown at mcs.anl.gov Mon Jul 30 16:36:02 2012 From: jedbrown at mcs.anl.gov (Jed Brown) Date: Mon, 30 Jul 2012 16:36:02 -0500 Subject: [petsc-users] Integrating PETSc with existing software using CMake In-Reply-To: References: Message-ID: On Mon, Jul 30, 2012 at 4:19 PM, Hadayat Seddiqi wrote: > Hello, > > I'm working on a large numerical software project whose framework has > largely been developed already. We're using CMake to generate makefiles. > I'm also using SLEPc (for full disclosure). The examples given by PETSc and > SLEPc documentation require me to include makefiles, but I don't know of > any straightforward way to command CMake to do this for me. > > I have looked at the FAQ's link for the CMake question: > https://github.com/jedbrown/dohp But this seems very old, and in any case > it doesn't exactly work. > The CMake stuff is here. https://github.com/jedbrown/cmake-modules/ I'm not an expert on CMake, so I couldn't say what was the causing the > problem, but in the end it told me it could not find the PETSc libraries. > It seemed to be rather complicated-- I know PETSc will be where I need it, > so I don't need all the verification that it's there and everything works. 
> I thought, with the benefit of more intimate knowledge of how PETSc runs, > that someone could show a much simpler way (it seems to me that this ought > to be the case). > The problem is that there are lots of ways that things can "not work", so its important for the FindPETSc.cmake script to really try. Also, CMake insists on taking parameters in a different way (e.g. converting command-line flags to full paths). Have you looked at the logs (CMakeFiles/CMake{Output,Error}.log Here is more active package that uses the FindPETSc.cmake script https://github.com/pism/pism -------------- next part -------------- An HTML attachment was scrubbed... URL: From mrosso at uci.edu Mon Jul 30 16:39:22 2012 From: mrosso at uci.edu (Michele Rosso) Date: Mon, 30 Jul 2012 14:39:22 -0700 Subject: [petsc-users] Multigrid preconditioning In-Reply-To: References: <5016F759.4030401@uci.edu> Message-ID: <5016FF0A.4020408@uci.edu> Thank you, I will try to use option 2 as you suggested. I'd prefer to implement the multigrid preconditioner directly inside the code rather then using he command line options. Could you point me to an example where this (or something similar) is done? Thank you, Michele On 07/30/2012 02:18 PM, Jed Brown wrote: > On Mon, Jul 30, 2012 at 4:06 PM, Michele Rosso > wrote: > > Hi, > > I am solving a variable coefficients Poisson equation with > periodic BCs. > The equation is discretized by using the standard 5-points stencil > finite differencing scheme. > I managed to solve the system successfullywith the PCG method and > now I would like to add > a preconditioner to speed up the calculation. My idea is to use > the multigrid preconditioner. > > Example ex22f.F implements what I think I need. > If I understand correctly example ex22f.F, the subroutines > "ComputeRHS" and "ComputeMatrix" define how the > matrix and rhs-vector have to be computed at each level. > In my case tough, both the jacobian and the rhs-vector cannot be > computed "analytically", that is, they depend on variables > whose values are available only at the finest grid. > > How can I overcome this difficulty? > > > Two possibilities: > > 1. homogenize on your own and rediscretize > > 2. use Galerkin coarse operators (possibly with algebraic multigrid) > > > Option 2 is much more convenient because it never > > For geometric multigrid using DMDA, just use -pc_type mg -pc_mg_galerkin > > For algebraic multigrid, use -pc_type gamg -pc_gamg_agg_nsmooths 1 -------------- next part -------------- An HTML attachment was scrubbed... URL: From jedbrown at mcs.anl.gov Mon Jul 30 16:43:16 2012 From: jedbrown at mcs.anl.gov (Jed Brown) Date: Mon, 30 Jul 2012 16:43:16 -0500 Subject: [petsc-users] Multigrid preconditioning In-Reply-To: <5016FF0A.4020408@uci.edu> References: <5016F759.4030401@uci.edu> <5016FF0A.4020408@uci.edu> Message-ID: On Mon, Jul 30, 2012 at 4:39 PM, Michele Rosso wrote: > Thank you, > > I will try to use option 2 as you suggested. > I'd prefer to implement the multigrid preconditioner directly inside the > code > rather then using he command line options. > Could you point me to an example where this (or something similar) is done? > You can use -help and grep to find the functions that provide the functionality in command line options. We recommend using command line options to explore which methods work. Once you have a configuration that you really like, you can put it in the code or (easier) just put it in an options file. 
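As a sketch of both routes (not code from this thread): the Galerkin multigrid configuration suggested earlier can be hard-wired in the code before KSPSetFromOptions(), or kept in a plain options file. The function names are the PETSc 3.3 ones, and the sketch assumes a KSP created elsewhere, e.g. with KSPSetDM() on the DMDA so the grid hierarchy is available.

    #include <petscksp.h>

    /* Sketch: equivalent of -ksp_type cg -pc_type mg -pc_mg_galerkin set in code. */
    static PetscErrorCode ConfigureGalerkinMG(KSP ksp)
    {
      PetscErrorCode ierr;
      PC             pc;

      PetscFunctionBegin;
      ierr = KSPSetType(ksp, KSPCG);CHKERRQ(ierr);
      ierr = KSPGetPC(ksp, &pc);CHKERRQ(ierr);
      ierr = PCSetType(pc, PCMG);CHKERRQ(ierr);               /* -pc_type mg */
      ierr = PCMGSetGalerkin(pc, PETSC_TRUE);CHKERRQ(ierr);   /* -pc_mg_galerkin */
      ierr = KSPSetFromOptions(ksp);CHKERRQ(ierr);            /* run-time options can still override */
      PetscFunctionReturn(0);
    }

The options-file route is just a text file, say solver.opts (a hypothetical name), loaded at run time with -options_file solver.opts and containing:

    -ksp_type cg
    -pc_type mg
    -pc_mg_galerkin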
> > Thank you, > > Michele > > > On 07/30/2012 02:18 PM, Jed Brown wrote: > > On Mon, Jul 30, 2012 at 4:06 PM, Michele Rosso wrote: > >> Hi, >> >> I am solving a variable coefficients Poisson equation with periodic BCs. >> The equation is discretized by using the standard 5-points stencil finite >> differencing scheme. >> I managed to solve the system successfully with the PCG method and now I >> would like to add >> a preconditioner to speed up the calculation. My idea is to use the >> multigrid preconditioner. >> >> Example ex22f.F implements what I think I need. >> If I understand correctly example ex22f.F, the subroutines "ComputeRHS" >> and "ComputeMatrix" define how the >> matrix and rhs-vector have to be computed at each level. >> In my case tough, both the jacobian and the rhs-vector cannot be computed >> "analytically", that is, they depend on variables >> whose values are available only at the finest grid. >> >> How can I overcome this difficulty? >> > > Two possibilities: > > 1. homogenize on your own and rediscretize > > 2. use Galerkin coarse operators (possibly with algebraic multigrid) > > > Option 2 is much more convenient because it never > > For geometric multigrid using DMDA, just use -pc_type mg -pc_mg_galerkin > > For algebraic multigrid, use -pc_type gamg -pc_gamg_agg_nsmooths 1 > > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From irving at naml.us Mon Jul 30 16:54:20 2012 From: irving at naml.us (Geoffrey Irving) Date: Mon, 30 Jul 2012 14:54:20 -0700 Subject: [petsc-users] specifying -framework Accelerate for blas/lapack on Mac Message-ID: Hello, What's the right way to tell PETSc configure to use "-framework Accelerate" to get blas/lapack on Mac? I tried ./configure --prefix=/usr/local --with-shared-libraries=1 --PETSC_DIR=/Users/irving/otherlab/download/petsc-3.3-p2 --download-ml=yes --with-mpi-dir=/usr/local --with-blas-lapack-lib="-framework Accelerate" which failed with =============================================================================== Configuring PETSc to compile on your system =============================================================================== TESTING: checkLib from config.packages.BlasLapack(config/BuildSystem/config/packages/BlasLapack.py:112) ******************************************************************************* UNABLE to CONFIGURE with GIVEN OPTIONS (see configure.log for details): ------------------------------------------------------------------------------- You set a value for --with-blas-lapack-lib=, but ['-framework', 'Accelerate'] cannot be used ******************************************************************************* --with-blas-lapack-lib=Accelerate fails similarly. Thanks, Geoffrey From balay at mcs.anl.gov Mon Jul 30 16:58:59 2012 From: balay at mcs.anl.gov (Satish Balay) Date: Mon, 30 Jul 2012 16:58:59 -0500 (CDT) Subject: [petsc-users] specifying -framework Accelerate for blas/lapack on Mac In-Reply-To: References: Message-ID: -llapack -lblas on Mac are links to the libraries in accelerate framework - and configure by default will find and use these libraries by default. For this - don't sepcify any blas/lapack options. But if you really need the -framework flag - you can try the configure option: LIBS="-framework Accelerate" Satish On Mon, 30 Jul 2012, Geoffrey Irving wrote: > Hello, > > What's the right way to tell PETSc configure to use "-framework > Accelerate" to get blas/lapack on Mac? 
I tried > > ./configure --prefix=/usr/local --with-shared-libraries=1 > --PETSC_DIR=/Users/irving/otherlab/download/petsc-3.3-p2 > --download-ml=yes --with-mpi-dir=/usr/local > --with-blas-lapack-lib="-framework Accelerate" > > which failed with > > =============================================================================== > Configuring PETSc to compile on your system > =============================================================================== > TESTING: checkLib from > config.packages.BlasLapack(config/BuildSystem/config/packages/BlasLapack.py:112) > > > > ******************************************************************************* > UNABLE to CONFIGURE with GIVEN OPTIONS (see configure.log > for details): > ------------------------------------------------------------------------------- > You set a value for --with-blas-lapack-lib=, but ['-framework', > 'Accelerate'] cannot be used > ******************************************************************************* > > --with-blas-lapack-lib=Accelerate fails similarly. > > Thanks, > Geoffrey > From caplanr at predsci.com Mon Jul 30 17:04:41 2012 From: caplanr at predsci.com (Ronald M. Caplan) Date: Mon, 30 Jul 2012 15:04:41 -0700 Subject: [petsc-users] segfault in MatAssemblyEnd() when using large matrices on multi-core MAC OS-X In-Reply-To: References: Message-ID: Hi everyone, I seem to have solved the problem. I was storing my entire matrix on node 0 and then calling MatAssembly (begin and end) on all nodes (which should have worked...). Apparently I was using too much space for the buffering or the like, because when I change the code so each node sets its own matrix values, than the MatAssemblyEnd does not seg fault. Why should this be the case? How many elements of a vector or matrix can a single node "set" before Assembly to distribute over all nodes? - Ron C On Fri, Jul 27, 2012 at 2:14 PM, Ronald M. Caplan wrote: > Hi, > > I do not know how to get the stack trace. > > Attached is the code and makefile. > > The value of npts is set to 25 which is where the code crashes with more > than one core running. If I set the npts to around 10, then the code > works with up to 12 processes (fast too!) but no more otherwise there is a > crash as well. > > Thanks for your help! > > - Ron C > > > On Fri, Jul 27, 2012 at 1:52 PM, Matthew Knepley wrote: > >> On Fri, Jul 27, 2012 at 3:35 PM, Ronald M. Caplan wrote: >> >>> 1) Checked it, had no leaks or any other problems that I could see. >>> >>> 2) Ran it with debugging and without. The debugging is how I know it >>> was in MatAssemblyEnd(). >>> >> >> Its rare when valgrind does not catch something, but it happens. From >> here I would really like: >> >> 1) The stack trace from the fault >> >> 2) The code to run here >> >> This is one of the oldest and most used pieces of PETSc. Its difficult to >> believe that the bug is there >> rather than a result of earlier memory corruption. >> >> Thanks, >> >> Matt >> >> >>> 3) Here is the matrix part of the code: >>> >>> !Create matrix: >>> call MatCreate(PETSC_COMM_WORLD,A,ierr) >>> call MatSetSizes(A,PETSC_DECIDE,PETSC_DECIDE,N,N,ierr) >>> call MatSetType(A,MATMPIAIJ,ierr) >>> call MatSetFromOptions(A,ierr) >>> !print*,'3nrt: ',3*nr*nt >>> i = 16 >>> IF(size .eq. 1) THEN >>> j = 0 >>> ELSE >>> j = 8 >>> END IF >>> call MatMPIAIJSetPreallocation(A,i,PETSC_NULL_INTEGER, >>> & j,PETSC_NULL_INTEGER,ierr) >>> >>> !Do not call this if using preallocation! 
>>> !call MatSetUp(A,ierr) >>> >>> call MatGetOwnershipRange(A,i,j,ierr) >>> print*,'Rank ',rank,' has range ',i,' and ',j >>> >>> !Get MAS matrix in CSR format (random numbers for now): >>> IF (rank .eq. 0) THEN >>> call GET_RAND_MAS_MATRIX(CSR_A,CSR_AI,CSR_AJ,nr,nt,np,M) >>> print*,'Number of non-zero entries in matrix:',M >>> !Store matrix values one-by-one (inefficient: better way >>> ! more complicated - implement later) >>> >>> DO i=1,N >>> !print*,'numofnonzerosinrowi:',CSR_AJ(i+1)-CSR_AJ(i)+1 >>> DO j=CSR_AJ(i)+1,CSR_AJ(i+1) >>> call MatSetValue(A,i-1,CSR_AI(j),CSR_A(j), >>> & INSERT_VALUES,ierr) >>> >>> END DO >>> END DO >>> print*,'Done setting matrix values...' >>> END IF >>> >>> !Assemble matrix A across all cores: >>> call MatAssemblyBegin(A,MAT_FINAL_ASSEMBLY,ierr) >>> print*,'between assembly' >>> call MatAssemblyEnd(A,MAT_FINAL_ASSEMBLY,ierr) >>> >>> >>> >>> A couple things to note: >>> a) my CSR_AJ is what most peaople would call ai etc >>> b) my CSR array values are 0-index but the arrays are 1-indexed. >>> >>> >>> >>> Here is the run with one processor (-n 1): >>> >>> sumseq:PETSc sumseq$ valgrind mpiexec -n 1 ./petsctest -mat_view_info >>> ==26297== Memcheck, a memory error detector >>> ==26297== Copyright (C) 2002-2011, and GNU GPL'd, by Julian Seward et al. >>> ==26297== Using Valgrind-3.7.0 and LibVEX; rerun with -h for copyright >>> info >>> ==26297== Command: mpiexec -n 1 ./petsctest -mat_view_info >>> ==26297== >>> UNKNOWN task message [id 3403, to mach_task_self(), reply 0x2803] >>> N: 46575 >>> cores: 1 >>> MPI TEST: My rank is: 0 >>> Rank 0 has range 0 and 46575 >>> Number of non-zero entries in matrix: 690339 >>> Done setting matrix values... >>> between assembly >>> Matrix Object: 1 MPI processes >>> type: mpiaij >>> rows=46575, cols=46575 >>> total: nonzeros=690339, allocated nonzeros=745200 >>> total number of mallocs used during MatSetValues calls =0 >>> not using I-node (on process 0) routines >>> PETSc y=Ax time: 367.9164 nsec/mp. >>> PETSc y=Ax flops: 0.2251188 GFLOPS. >>> ==26297== >>> ==26297== HEAP SUMMARY: >>> ==26297== in use at exit: 139,984 bytes in 65 blocks >>> ==26297== total heap usage: 938 allocs, 873 frees, 229,722 bytes >>> allocated >>> ==26297== >>> ==26297== LEAK SUMMARY: >>> ==26297== definitely lost: 0 bytes in 0 blocks >>> ==26297== indirectly lost: 0 bytes in 0 blocks >>> ==26297== possibly lost: 0 bytes in 0 blocks >>> ==26297== still reachable: 139,984 bytes in 65 blocks >>> ==26297== suppressed: 0 bytes in 0 blocks >>> ==26297== Rerun with --leak-check=full to see details of leaked memory >>> ==26297== >>> ==26297== For counts of detected and suppressed errors, rerun with: -v >>> ==26297== ERROR SUMMARY: 0 errors from 0 contexts (suppressed: 1 from 1) >>> sumseq:PETSc sumseq$ >>> >>> >>> >>> Here is the run with 2 processors (-n 2) >>> >>> sumseq:PETSc sumseq$ valgrind mpiexec -n 2 ./petsctest -mat_view_info >>> ==26301== Memcheck, a memory error detector >>> ==26301== Copyright (C) 2002-2011, and GNU GPL'd, by Julian Seward et al. >>> ==26301== Using Valgrind-3.7.0 and LibVEX; rerun with -h for copyright >>> info >>> ==26301== Command: mpiexec -n 2 ./petsctest -mat_view_info >>> ==26301== >>> UNKNOWN task message [id 3403, to mach_task_self(), reply 0x2803] >>> N: 46575 >>> cores: 2 >>> MPI TEST: My rank is: 0 >>> MPI TEST: My rank is: 1 >>> Rank 0 has range 0 and 23288 >>> Rank 1 has range 23288 and 46575 >>> Number of non-zero entries in matrix: 690339 >>> Done setting matrix values... 
>>> between assembly >>> between assembly >>> [1]PETSC ERROR: >>> ------------------------------------------------------------------------ >>> [1]PETSC ERROR: Caught signal number 11 SEGV: Segmentation Violation, >>> probably memory access out of range >>> [1]PETSC ERROR: Try option -start_in_debugger or >>> -on_error_attach_debugger >>> [1]PETSC ERROR: or see >>> http://www.mcs.anl.gov/petsc/documentation/faq.html#valgrind[1]PETSCERROR: or try >>> http://valgrind.org on GNU/linux and Apple Mac OS X to find memory >>> corruption errors >>> [1]PETSC ERROR: likely location of problem given in stack below >>> [1]PETSC ERROR: --------------------- Stack Frames >>> ------------------------------------ >>> [1]PETSC ERROR: Note: The EXACT line numbers in the stack are not >>> available, >>> [1]PETSC ERROR: INSTEAD the line number of the start of the >>> function >>> [1]PETSC ERROR: is given. >>> [1]PETSC ERROR: [1] MatStashScatterGetMesg_Private line 609 >>> /usr/local/petsc-3.3-p2/src/mat/utils/matstash.c >>> [1]PETSC ERROR: [1] MatAssemblyEnd_MPIAIJ line 646 >>> /usr/local/petsc-3.3-p2/src/mat/impls/aij/mpi/mpiaij.c >>> [1]PETSC ERROR: [1] MatAssemblyEnd line 4857 >>> /usr/local/petsc-3.3-p2/src/mat/interface/matrix.c >>> [1]PETSC ERROR: --------------------- Error Message >>> ------------------------------------ >>> [1]PETSC ERROR: Signal received! >>> [1]PETSC ERROR: >>> ------------------------------------------------------------------------ >>> [1]PETSC ERROR: Petsc Release Version 3.3.0, Patch 2, Fri Jul 13 >>> 15:42:00 CDT 2012 >>> [1]PETSC ERROR: See docs/changes/index.html for recent updates. >>> [1]PETSC ERROR: See docs/faq.html for hints about trouble shooting. >>> [1]PETSC ERROR: See docs/index.html for manual pages. >>> [1]PETSC ERROR: >>> ------------------------------------------------------------------------ >>> [1]PETSC ERROR: ./petsctest on a arch-darw named sumseq.predsci.com by >>> sumseq Fri Jul 27 13:34:36 2012 >>> [1]PETSC ERROR: Libraries linked from >>> /usr/local/petsc-3.3-p2/arch-darwin-c-debug/lib >>> [1]PETSC ERROR: Configure run at Fri Jul 27 13:28:26 2012 >>> [1]PETSC ERROR: Configure options --with-debugging=1 >>> [1]PETSC ERROR: >>> ------------------------------------------------------------------------ >>> [1]PETSC ERROR: User provided function() line 0 in unknown directory >>> unknown file >>> application called MPI_Abort(MPI_COMM_WORLD, 59) - process 1 >>> [cli_1]: aborting job: >>> application called MPI_Abort(MPI_COMM_WORLD, 59) - process 1 >>> ==26301== >>> ==26301== HEAP SUMMARY: >>> ==26301== in use at exit: 139,984 bytes in 65 blocks >>> ==26301== total heap usage: 1,001 allocs, 936 frees, 234,886 bytes >>> allocated >>> ==26301== >>> ==26301== LEAK SUMMARY: >>> ==26301== definitely lost: 0 bytes in 0 blocks >>> ==26301== indirectly lost: 0 bytes in 0 blocks >>> ==26301== possibly lost: 0 bytes in 0 blocks >>> ==26301== still reachable: 139,984 bytes in 65 blocks >>> ==26301== suppressed: 0 bytes in 0 blocks >>> ==26301== Rerun with --leak-check=full to see details of leaked memory >>> ==26301== >>> ==26301== For counts of detected and suppressed errors, rerun with: -v >>> ==26301== ERROR SUMMARY: 0 errors from 0 contexts (suppressed: 1 from 1) >>> sumseq:PETSc sumseq$ >>> >>> >>> >>> - Ron >>> >>> >>> >>> >>> >>> >>> On Fri, Jul 27, 2012 at 1:19 PM, Jed Brown wrote: >>> >>>> 1. Check for memory leaks using Valgrind. >>>> >>>> 2. Be sure to run --with-debugging=1 (the default) when trying to find >>>> the error. >>>> >>>> 3. 
Send the full error message and the relevant bit of code. >>>> >>>> >>>> On Fri, Jul 27, 2012 at 3:17 PM, Ronald M. Caplan wrote: >>>> >>>>> Hello, >>>>> >>>>> I am running a simple test code which takes a sparse AIJ matrix in >>>>> PETSc and multiplies it by a vector. >>>>> >>>>> The matrix is defined as an AIJ MPI matrix. >>>>> >>>>> When I run the program on a single core, it runs fine. >>>>> >>>>> When I run it using MPI with multiple threads (I am on a 4-core, >>>>> 8-thread MAC) I can get the code to run correctly for matrices under a >>>>> certain size (2880 X 2880), but when the matrix is set to be larger, the >>>>> code crashes with a segfault and the error says it was in the >>>>> MatAssemblyEnd(). Sometimes it works with -n 2, but typically it always >>>>> crashes when using multi-core. >>>>> >>>>> Any ideas on what it could be? >>>>> >>>>> Thanks, >>>>> >>>>> Ron Caplan >>>>> >>>> >>>> >>> >> >> >> -- >> What most experimenters take for granted before they begin their >> experiments is infinitely more interesting than any results to which their >> experiments lead. >> -- Norbert Wiener >> > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From jedbrown at mcs.anl.gov Mon Jul 30 17:09:46 2012 From: jedbrown at mcs.anl.gov (Jed Brown) Date: Mon, 30 Jul 2012 15:09:46 -0700 Subject: [petsc-users] segfault in MatAssemblyEnd() when using large matrices on multi-core MAC OS-X In-Reply-To: References: Message-ID: On Mon, Jul 30, 2012 at 3:04 PM, Ronald M. Caplan wrote: > I seem to have solved the problem. > > I was storing my entire matrix on node 0 and then calling MatAssembly > (begin and end) on all nodes (which should have worked...). > > Apparently I was using too much space for the buffering or the like, > because when I change the code so each node sets its own matrix values, > than the MatAssemblyEnd does not seg fault. > Can you send the test case. It shouldn't seg-fault unless the machine runs out of memory (and most desktop systems have overcommit, so the system will kill arbitrary processes, not necessarily the job that did the latest malloc. In practice, you should call MatAssemblyBegin(...,MAT_FLUSH_ASSEMBLY) periodically. > > Why should this be the case? How many elements of a vector or matrix can > a single node "set" before Assembly to distribute over all nodes? > -------------- next part -------------- An HTML attachment was scrubbed... URL: From knepley at gmail.com Mon Jul 30 17:11:49 2012 From: knepley at gmail.com (Matthew Knepley) Date: Mon, 30 Jul 2012 17:11:49 -0500 Subject: [petsc-users] segfault in MatAssemblyEnd() when using large matrices on multi-core MAC OS-X In-Reply-To: References: Message-ID: On Mon, Jul 30, 2012 at 5:04 PM, Ronald M. Caplan wrote: > Hi everyone, > > I seem to have solved the problem. > > I was storing my entire matrix on node 0 and then calling MatAssembly > (begin and end) on all nodes (which should have worked...). > > Apparently I was using too much space for the buffering or the like, > because when I change the code so each node sets its own matrix values, > than the MatAssemblyEnd does not seg fault. > Hmm, it should give a nice error, not SEGV so I am still interested in the stack trace. > Why should this be the case? How many elements of a vector or matrix can > a single node "set" before Assembly to distribute over all nodes? > If you are going to set a ton of elements, consider using MAT_ASSEMBLY_FLUSH and calling Assembly a few times during the loop. 
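A minimal sketch of that pattern in C (the same calls exist in the Fortran interface used in this thread; in the PETSc headers the constant is spelled MAT_FLUSH_ASSEMBLY, as in Jed's message above). MatAssemblyBegin/End are collective, so every process must take part in each flush even if, as in the posted code, only rank 0 generates entries. The chunk count and the diagonal values below are placeholders.

    #include <petscmat.h>

    /* Hypothetical helper: insert N diagonal entries from rank 0 in chunks,
       flushing the stash between chunks so buffered off-process values are
       communicated instead of accumulating. */
    static PetscErrorCode SetValuesInChunks(Mat A, PetscInt N, PetscMPIInt rank)
    {
      PetscErrorCode ierr;
      const PetscInt nchunks = 8;        /* arbitrary; tune to available memory */
      PetscInt       c, i;
      PetscScalar    v = 1.0;            /* placeholder entry */

      PetscFunctionBegin;
      for (c = 0; c < nchunks; c++) {
        if (!rank) {                     /* rank 0 produces all values, as in the posted code */
          for (i = c*N/nchunks; i < (c+1)*N/nchunks; i++) {
            ierr = MatSetValues(A, 1, &i, 1, &i, &v, INSERT_VALUES);CHKERRQ(ierr);
          }
        }
        ierr = MatAssemblyBegin(A, MAT_FLUSH_ASSEMBLY);CHKERRQ(ierr);  /* collective flush */
        ierr = MatAssemblyEnd(A, MAT_FLUSH_ASSEMBLY);CHKERRQ(ierr);
      }
      ierr = MatAssemblyBegin(A, MAT_FINAL_ASSEMBLY);CHKERRQ(ierr);
      ierr = MatAssemblyEnd(A, MAT_FINAL_ASSEMBLY);CHKERRQ(ierr);
      PetscFunctionReturn(0);
    }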
Matt > - Ron C > > > > > On Fri, Jul 27, 2012 at 2:14 PM, Ronald M. Caplan wrote: > >> Hi, >> >> I do not know how to get the stack trace. >> >> Attached is the code and makefile. >> >> The value of npts is set to 25 which is where the code crashes with more >> than one core running. If I set the npts to around 10, then the code >> works with up to 12 processes (fast too!) but no more otherwise there is a >> crash as well. >> >> Thanks for your help! >> >> - Ron C >> >> >> On Fri, Jul 27, 2012 at 1:52 PM, Matthew Knepley wrote: >> >>> On Fri, Jul 27, 2012 at 3:35 PM, Ronald M. Caplan wrote: >>> >>>> 1) Checked it, had no leaks or any other problems that I could see. >>>> >>>> 2) Ran it with debugging and without. The debugging is how I know it >>>> was in MatAssemblyEnd(). >>>> >>> >>> Its rare when valgrind does not catch something, but it happens. From >>> here I would really like: >>> >>> 1) The stack trace from the fault >>> >>> 2) The code to run here >>> >>> This is one of the oldest and most used pieces of PETSc. Its difficult >>> to believe that the bug is there >>> rather than a result of earlier memory corruption. >>> >>> Thanks, >>> >>> Matt >>> >>> >>>> 3) Here is the matrix part of the code: >>>> >>>> !Create matrix: >>>> call MatCreate(PETSC_COMM_WORLD,A,ierr) >>>> call MatSetSizes(A,PETSC_DECIDE,PETSC_DECIDE,N,N,ierr) >>>> call MatSetType(A,MATMPIAIJ,ierr) >>>> call MatSetFromOptions(A,ierr) >>>> !print*,'3nrt: ',3*nr*nt >>>> i = 16 >>>> IF(size .eq. 1) THEN >>>> j = 0 >>>> ELSE >>>> j = 8 >>>> END IF >>>> call MatMPIAIJSetPreallocation(A,i,PETSC_NULL_INTEGER, >>>> & j,PETSC_NULL_INTEGER,ierr) >>>> >>>> !Do not call this if using preallocation! >>>> !call MatSetUp(A,ierr) >>>> >>>> call MatGetOwnershipRange(A,i,j,ierr) >>>> print*,'Rank ',rank,' has range ',i,' and ',j >>>> >>>> !Get MAS matrix in CSR format (random numbers for now): >>>> IF (rank .eq. 0) THEN >>>> call GET_RAND_MAS_MATRIX(CSR_A,CSR_AI,CSR_AJ,nr,nt,np,M) >>>> print*,'Number of non-zero entries in matrix:',M >>>> !Store matrix values one-by-one (inefficient: better way >>>> ! more complicated - implement later) >>>> >>>> DO i=1,N >>>> !print*,'numofnonzerosinrowi:',CSR_AJ(i+1)-CSR_AJ(i)+1 >>>> DO j=CSR_AJ(i)+1,CSR_AJ(i+1) >>>> call MatSetValue(A,i-1,CSR_AI(j),CSR_A(j), >>>> & INSERT_VALUES,ierr) >>>> >>>> END DO >>>> END DO >>>> print*,'Done setting matrix values...' >>>> END IF >>>> >>>> !Assemble matrix A across all cores: >>>> call MatAssemblyBegin(A,MAT_FINAL_ASSEMBLY,ierr) >>>> print*,'between assembly' >>>> call MatAssemblyEnd(A,MAT_FINAL_ASSEMBLY,ierr) >>>> >>>> >>>> >>>> A couple things to note: >>>> a) my CSR_AJ is what most peaople would call ai etc >>>> b) my CSR array values are 0-index but the arrays are 1-indexed. >>>> >>>> >>>> >>>> Here is the run with one processor (-n 1): >>>> >>>> sumseq:PETSc sumseq$ valgrind mpiexec -n 1 ./petsctest -mat_view_info >>>> ==26297== Memcheck, a memory error detector >>>> ==26297== Copyright (C) 2002-2011, and GNU GPL'd, by Julian Seward et >>>> al. >>>> ==26297== Using Valgrind-3.7.0 and LibVEX; rerun with -h for copyright >>>> info >>>> ==26297== Command: mpiexec -n 1 ./petsctest -mat_view_info >>>> ==26297== >>>> UNKNOWN task message [id 3403, to mach_task_self(), reply 0x2803] >>>> N: 46575 >>>> cores: 1 >>>> MPI TEST: My rank is: 0 >>>> Rank 0 has range 0 and 46575 >>>> Number of non-zero entries in matrix: 690339 >>>> Done setting matrix values... 
>>>> between assembly >>>> Matrix Object: 1 MPI processes >>>> type: mpiaij >>>> rows=46575, cols=46575 >>>> total: nonzeros=690339, allocated nonzeros=745200 >>>> total number of mallocs used during MatSetValues calls =0 >>>> not using I-node (on process 0) routines >>>> PETSc y=Ax time: 367.9164 nsec/mp. >>>> PETSc y=Ax flops: 0.2251188 GFLOPS. >>>> ==26297== >>>> ==26297== HEAP SUMMARY: >>>> ==26297== in use at exit: 139,984 bytes in 65 blocks >>>> ==26297== total heap usage: 938 allocs, 873 frees, 229,722 bytes >>>> allocated >>>> ==26297== >>>> ==26297== LEAK SUMMARY: >>>> ==26297== definitely lost: 0 bytes in 0 blocks >>>> ==26297== indirectly lost: 0 bytes in 0 blocks >>>> ==26297== possibly lost: 0 bytes in 0 blocks >>>> ==26297== still reachable: 139,984 bytes in 65 blocks >>>> ==26297== suppressed: 0 bytes in 0 blocks >>>> ==26297== Rerun with --leak-check=full to see details of leaked memory >>>> ==26297== >>>> ==26297== For counts of detected and suppressed errors, rerun with: -v >>>> ==26297== ERROR SUMMARY: 0 errors from 0 contexts (suppressed: 1 from 1) >>>> sumseq:PETSc sumseq$ >>>> >>>> >>>> >>>> Here is the run with 2 processors (-n 2) >>>> >>>> sumseq:PETSc sumseq$ valgrind mpiexec -n 2 ./petsctest -mat_view_info >>>> ==26301== Memcheck, a memory error detector >>>> ==26301== Copyright (C) 2002-2011, and GNU GPL'd, by Julian Seward et >>>> al. >>>> ==26301== Using Valgrind-3.7.0 and LibVEX; rerun with -h for copyright >>>> info >>>> ==26301== Command: mpiexec -n 2 ./petsctest -mat_view_info >>>> ==26301== >>>> UNKNOWN task message [id 3403, to mach_task_self(), reply 0x2803] >>>> N: 46575 >>>> cores: 2 >>>> MPI TEST: My rank is: 0 >>>> MPI TEST: My rank is: 1 >>>> Rank 0 has range 0 and 23288 >>>> Rank 1 has range 23288 and 46575 >>>> Number of non-zero entries in matrix: 690339 >>>> Done setting matrix values... >>>> between assembly >>>> between assembly >>>> [1]PETSC ERROR: >>>> ------------------------------------------------------------------------ >>>> [1]PETSC ERROR: Caught signal number 11 SEGV: Segmentation Violation, >>>> probably memory access out of range >>>> [1]PETSC ERROR: Try option -start_in_debugger or >>>> -on_error_attach_debugger >>>> [1]PETSC ERROR: or see >>>> http://www.mcs.anl.gov/petsc/documentation/faq.html#valgrind[1]PETSCERROR: or try >>>> http://valgrind.org on GNU/linux and Apple Mac OS X to find memory >>>> corruption errors >>>> [1]PETSC ERROR: likely location of problem given in stack below >>>> [1]PETSC ERROR: --------------------- Stack Frames >>>> ------------------------------------ >>>> [1]PETSC ERROR: Note: The EXACT line numbers in the stack are not >>>> available, >>>> [1]PETSC ERROR: INSTEAD the line number of the start of the >>>> function >>>> [1]PETSC ERROR: is given. >>>> [1]PETSC ERROR: [1] MatStashScatterGetMesg_Private line 609 >>>> /usr/local/petsc-3.3-p2/src/mat/utils/matstash.c >>>> [1]PETSC ERROR: [1] MatAssemblyEnd_MPIAIJ line 646 >>>> /usr/local/petsc-3.3-p2/src/mat/impls/aij/mpi/mpiaij.c >>>> [1]PETSC ERROR: [1] MatAssemblyEnd line 4857 >>>> /usr/local/petsc-3.3-p2/src/mat/interface/matrix.c >>>> [1]PETSC ERROR: --------------------- Error Message >>>> ------------------------------------ >>>> [1]PETSC ERROR: Signal received! >>>> [1]PETSC ERROR: >>>> ------------------------------------------------------------------------ >>>> [1]PETSC ERROR: Petsc Release Version 3.3.0, Patch 2, Fri Jul 13 >>>> 15:42:00 CDT 2012 >>>> [1]PETSC ERROR: See docs/changes/index.html for recent updates. 
>>>> [1]PETSC ERROR: See docs/faq.html for hints about trouble shooting. >>>> [1]PETSC ERROR: See docs/index.html for manual pages. >>>> [1]PETSC ERROR: >>>> ------------------------------------------------------------------------ >>>> [1]PETSC ERROR: ./petsctest on a arch-darw named sumseq.predsci.com by >>>> sumseq Fri Jul 27 13:34:36 2012 >>>> [1]PETSC ERROR: Libraries linked from >>>> /usr/local/petsc-3.3-p2/arch-darwin-c-debug/lib >>>> [1]PETSC ERROR: Configure run at Fri Jul 27 13:28:26 2012 >>>> [1]PETSC ERROR: Configure options --with-debugging=1 >>>> [1]PETSC ERROR: >>>> ------------------------------------------------------------------------ >>>> [1]PETSC ERROR: User provided function() line 0 in unknown directory >>>> unknown file >>>> application called MPI_Abort(MPI_COMM_WORLD, 59) - process 1 >>>> [cli_1]: aborting job: >>>> application called MPI_Abort(MPI_COMM_WORLD, 59) - process 1 >>>> ==26301== >>>> ==26301== HEAP SUMMARY: >>>> ==26301== in use at exit: 139,984 bytes in 65 blocks >>>> ==26301== total heap usage: 1,001 allocs, 936 frees, 234,886 bytes >>>> allocated >>>> ==26301== >>>> ==26301== LEAK SUMMARY: >>>> ==26301== definitely lost: 0 bytes in 0 blocks >>>> ==26301== indirectly lost: 0 bytes in 0 blocks >>>> ==26301== possibly lost: 0 bytes in 0 blocks >>>> ==26301== still reachable: 139,984 bytes in 65 blocks >>>> ==26301== suppressed: 0 bytes in 0 blocks >>>> ==26301== Rerun with --leak-check=full to see details of leaked memory >>>> ==26301== >>>> ==26301== For counts of detected and suppressed errors, rerun with: -v >>>> ==26301== ERROR SUMMARY: 0 errors from 0 contexts (suppressed: 1 from 1) >>>> sumseq:PETSc sumseq$ >>>> >>>> >>>> >>>> - Ron >>>> >>>> >>>> >>>> >>>> >>>> >>>> On Fri, Jul 27, 2012 at 1:19 PM, Jed Brown wrote: >>>> >>>>> 1. Check for memory leaks using Valgrind. >>>>> >>>>> 2. Be sure to run --with-debugging=1 (the default) when trying to find >>>>> the error. >>>>> >>>>> 3. Send the full error message and the relevant bit of code. >>>>> >>>>> >>>>> On Fri, Jul 27, 2012 at 3:17 PM, Ronald M. Caplan >>>> > wrote: >>>>> >>>>>> Hello, >>>>>> >>>>>> I am running a simple test code which takes a sparse AIJ matrix in >>>>>> PETSc and multiplies it by a vector. >>>>>> >>>>>> The matrix is defined as an AIJ MPI matrix. >>>>>> >>>>>> When I run the program on a single core, it runs fine. >>>>>> >>>>>> When I run it using MPI with multiple threads (I am on a 4-core, >>>>>> 8-thread MAC) I can get the code to run correctly for matrices under a >>>>>> certain size (2880 X 2880), but when the matrix is set to be larger, the >>>>>> code crashes with a segfault and the error says it was in the >>>>>> MatAssemblyEnd(). Sometimes it works with -n 2, but typically it always >>>>>> crashes when using multi-core. >>>>>> >>>>>> Any ideas on what it could be? >>>>>> >>>>>> Thanks, >>>>>> >>>>>> Ron Caplan >>>>>> >>>>> >>>>> >>>> >>> >>> >>> -- >>> What most experimenters take for granted before they begin their >>> experiments is infinitely more interesting than any results to which their >>> experiments lead. >>> -- Norbert Wiener >>> >> >> > -- What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. -- Norbert Wiener -------------- next part -------------- An HTML attachment was scrubbed... URL: From caplanr at predsci.com Mon Jul 30 17:12:13 2012 From: caplanr at predsci.com (Ronald M. 
Caplan) Date: Mon, 30 Jul 2012 15:12:13 -0700 Subject: [petsc-users] segfault in MatAssemblyEnd() when using large matrices on multi-core MAC OS-X In-Reply-To: References: Message-ID: Attached is the code. The original code which segfaults with more than one core is the code I sent last week. - Ron C On Mon, Jul 30, 2012 at 3:09 PM, Jed Brown wrote: > On Mon, Jul 30, 2012 at 3:04 PM, Ronald M. Caplan wrote: > >> I seem to have solved the problem. >> >> I was storing my entire matrix on node 0 and then calling MatAssembly >> (begin and end) on all nodes (which should have worked...). >> >> Apparently I was using too much space for the buffering or the like, >> because when I change the code so each node sets its own matrix values, >> than the MatAssemblyEnd does not seg fault. >> > > Can you send the test case. It shouldn't seg-fault unless the machine runs > out of memory (and most desktop systems have overcommit, so the system will > kill arbitrary processes, not necessarily the job that did the latest > malloc. > > In practice, you should call MatAssemblyBegin(...,MAT_FLUSH_ASSEMBLY) > periodically. > > >> >> Why should this be the case? How many elements of a vector or matrix >> can a single node "set" before Assembly to distribute over all nodes? >> > > > -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: petsctest.F Type: application/octet-stream Size: 19329 bytes Desc: not available URL: From irving at naml.us Mon Jul 30 17:15:58 2012 From: irving at naml.us (Geoffrey Irving) Date: Mon, 30 Jul 2012 15:15:58 -0700 Subject: [petsc-users] specifying -framework Accelerate for blas/lapack on Mac In-Reply-To: References: Message-ID: On Mon, Jul 30, 2012 at 2:58 PM, Satish Balay wrote: > -llapack -lblas on Mac are links to the libraries in accelerate > framework - and configure by default will find and use these libraries > by default. For this - don't sepcify any blas/lapack options. Ah: with no options, it appears to find ATLAS from Macports: ----------------------------------------------------------------------- tile:petsc-3.3-p2% make PETSC_DIR=/usr/local test Running test examples to verify correct installation Using PETSC_DIR=/usr/local and PETSC_ARCH=darwin10.6.0-cxx-debug Possible error running C/C++ src/snes/examples/tutorials/ex19 with 1 MPI process See http://www.mcs.anl.gov/petsc/documentation/faq.html lid velocity = 0.0016, prandtl # = 1, grashof # = 1 dyld: lazy symbol binding failed: Symbol not found: _ATL_dGetNB Referenced from: /usr/local/lib/libpetsc.dylib Expected in: dynamic lookup dyld: Symbol not found: _ATL_dGetNB Referenced from: /usr/local/lib/libpetsc.dylib Expected in: dynamic lookup ----------------------------------------------------------------------- However, I'm not sure why; otool doesn't show ATLAS, and ATLAS should be invisible in /opt/local anyways. I reran configure and rebuilt and now it fails in the build stage. I'll send make.log and configure.log to petsc-maint. Geoffrey From hadsed at gmail.com Mon Jul 30 18:27:35 2012 From: hadsed at gmail.com (Hadayat Seddiqi) Date: Mon, 30 Jul 2012 19:27:35 -0400 Subject: [petsc-users] Integrating PETSc with existing software using CMake In-Reply-To: References: Message-ID: Hi Jed, Apparently it's failing to run the test script (it says PETSC_EXECUTABLE_RUNS was not set). If I'm correct, the output of the failed test script should be in CMakeFiles/CMakeError.log . 
Looking at this, it gives me the following: In file included from /home/h37/petsc-3.2-p7/include/petscis.h:7, from /home/h37/petsc-3.2-p7/include/petscvec.h:9, from /home/h37/petsc-3.2-p7/include/petscmat.h:6, from /home/h37/petsc-3.2-p7/include/petscdm.h:6, from /home/h37/petsc-3.2-p7/include/petscpc.h:6, from /home/h37/petsc-3.2-p7/include/petscksp.h:6, from /home/h37/petsc-3.2-p7/include/petscsnes.h:6, from /home/h37/petsc-3.2-p7/include/petscts.h:7, from /home/h37/sapphiresimulator/FOP/build/CMakeFiles/CMakeTmp/src.c:3: /home/h37/petsc-3.2-p7/include/petscsys.h:105:17: error: mpi.h: No such file or director And of course a lot more afterwards. How is it possible that it cannot find MPI, even though all of the test cases and example/tutorial programs worked with mpi? I realize it's probably bad practice, but I also thought I would try to disable this flag by forcing PETSC_EXECUTABLE_RUNS to "YES", but it doesn't seem to work. I tried commenting out if (${${run}}) and the corresponding endif, as well as moving it outside the macro. I feel rather silly about this, but I just cannot get it to work. Thanks, Had On Mon, Jul 30, 2012 at 5:36 PM, Jed Brown wrote: > On Mon, Jul 30, 2012 at 4:19 PM, Hadayat Seddiqi wrote: > >> Hello, >> >> I'm working on a large numerical software project whose framework has >> largely been developed already. We're using CMake to generate makefiles. >> I'm also using SLEPc (for full disclosure). The examples given by PETSc and >> SLEPc documentation require me to include makefiles, but I don't know of >> any straightforward way to command CMake to do this for me. >> >> I have looked at the FAQ's link for the CMake question: >> https://github.com/jedbrown/dohp But this seems very old, and in any >> case it doesn't exactly work. >> > > The CMake stuff is here. > > https://github.com/jedbrown/cmake-modules/ > > I'm not an expert on CMake, so I couldn't say what was the causing the >> problem, but in the end it told me it could not find the PETSc libraries. >> It seemed to be rather complicated-- I know PETSc will be where I need it, >> so I don't need all the verification that it's there and everything works. >> I thought, with the benefit of more intimate knowledge of how PETSc runs, >> that someone could show a much simpler way (it seems to me that this ought >> to be the case). >> > > The problem is that there are lots of ways that things can "not work", so > its important for the FindPETSc.cmake script to really try. Also, CMake > insists on taking parameters in a different way (e.g. converting > command-line flags to full paths). > > Have you looked at the logs (CMakeFiles/CMake{Output,Error}.log > > Here is more active package that uses the FindPETSc.cmake script > > https://github.com/pism/pism > -------------- next part -------------- An HTML attachment was scrubbed... URL: From jedbrown at mcs.anl.gov Mon Jul 30 18:43:12 2012 From: jedbrown at mcs.anl.gov (Jed Brown) Date: Mon, 30 Jul 2012 16:43:12 -0700 Subject: [petsc-users] segfault in MatAssemblyEnd() when using large matrices on multi-core MAC OS-X In-Reply-To: References: Message-ID: I ran $ mpirun -n 2 ./a.out --------------------------------------------------- Running MAS PETSc tests with: nr: 23 np: 27 nt: 25 N: 46575 num_steps: 2000 MPI cores: 2 --------------------------------------------------- MPI TEST: My rank is: 0 MPI TEST: My rank is: 1 Rank 0 has rows 0 to 23288 Rank 1 has rows 23288 to 46575 Number of non-zero entries in matrix: 690339 Computing y=Ax with RON_CSR_AX... ...done! 
||y||: 821.67460825997637 y(5)= 2.7454534359667053 Storing MAS matrix into PETSc matrix... ...done! rank 1 about to call MatAssemblyEnd()... rank 0 about to call MatAssemblyEnd()... Computing y=Ax with PETSc MatMult... ...done! ||y||: 821.67460825997568 y(5)= 2.7454534359667053 RON_CSR_AX y=Ax time: 201.704498 nsec/mp. PETSc y=Ax time: 198.269424 nsec/mp. PETSc y=Ax flops: 0.417739183 GFLOPS. Did not converge! Number of iteratioins: 0 On Mon, Jul 30, 2012 at 3:12 PM, Ronald M. Caplan wrote: > Attached is the code. The original code which segfaults with more than > one core is the code I sent last week. > > - Ron C > > > On Mon, Jul 30, 2012 at 3:09 PM, Jed Brown wrote: > >> On Mon, Jul 30, 2012 at 3:04 PM, Ronald M. Caplan wrote: >> >>> I seem to have solved the problem. >>> >>> I was storing my entire matrix on node 0 and then calling MatAssembly >>> (begin and end) on all nodes (which should have worked...). >>> >>> Apparently I was using too much space for the buffering or the like, >>> because when I change the code so each node sets its own matrix values, >>> than the MatAssemblyEnd does not seg fault. >>> >> >> Can you send the test case. It shouldn't seg-fault unless the machine >> runs out of memory (and most desktop systems have overcommit, so the system >> will kill arbitrary processes, not necessarily the job that did the latest >> malloc. >> >> In practice, you should call MatAssemblyBegin(...,MAT_FLUSH_ASSEMBLY) >> periodically. >> >> >>> >>> Why should this be the case? How many elements of a vector or matrix >>> can a single node "set" before Assembly to distribute over all nodes? >>> >> >> >> > -------------- next part -------------- An HTML attachment was scrubbed... URL: From jedbrown at mcs.anl.gov Mon Jul 30 18:46:50 2012 From: jedbrown at mcs.anl.gov (Jed Brown) Date: Mon, 30 Jul 2012 16:46:50 -0700 Subject: [petsc-users] Integrating PETSc with existing software using CMake In-Reply-To: References: Message-ID: On Mon, Jul 30, 2012 at 4:27 PM, Hadayat Seddiqi wrote: > Hi Jed, > > Apparently it's failing to run the test script (it says > PETSC_EXECUTABLE_RUNS was not set). If I'm correct, the output of the > failed test script should be in CMakeFiles/CMakeError.log . Looking at > this, it gives me the following: > > In file included from /home/h37/petsc-3.2-p7/include/petscis.h:7, > from /home/h37/petsc-3.2-p7/include/petscvec.h:9, > from /home/h37/petsc-3.2-p7/include/petscmat.h:6, > from /home/h37/petsc-3.2-p7/include/petscdm.h:6, > from /home/h37/petsc-3.2-p7/include/petscpc.h:6, > from /home/h37/petsc-3.2-p7/include/petscksp.h:6, > from /home/h37/petsc-3.2-p7/include/petscsnes.h:6, > from /home/h37/petsc-3.2-p7/include/petscts.h:7, > from > /home/h37/sapphiresimulator/FOP/build/CMakeFiles/CMakeTmp/src.c:3: > /home/h37/petsc-3.2-p7/include/petscsys.h:105:17: error: mpi.h: No such > file or director > You may need to set the correct MPI wrapper compiler. If that doesn't work, can you send the full Output and Error files to petsc-maint? > > And of course a lot more afterwards. How is it possible that it cannot > find MPI, even though all of the test cases and example/tutorial programs > worked with mpi? > > I realize it's probably bad practice, but I also thought I would try to > disable this flag by forcing PETSC_EXECUTABLE_RUNS to "YES", but it doesn't > seem to work. I tried commenting out if (${${run}}) and the corresponding > endif, as well as moving it outside the macro. I feel rather silly about > this, but I just cannot get it to work. 
> > Thanks, > > Had > > On Mon, Jul 30, 2012 at 5:36 PM, Jed Brown wrote: > >> On Mon, Jul 30, 2012 at 4:19 PM, Hadayat Seddiqi wrote: >> >>> Hello, >>> >>> I'm working on a large numerical software project whose framework has >>> largely been developed already. We're using CMake to generate makefiles. >>> I'm also using SLEPc (for full disclosure). The examples given by PETSc and >>> SLEPc documentation require me to include makefiles, but I don't know of >>> any straightforward way to command CMake to do this for me. >>> >>> I have looked at the FAQ's link for the CMake question: >>> https://github.com/jedbrown/dohp But this seems very old, and in any >>> case it doesn't exactly work. >>> >> >> The CMake stuff is here. >> >> https://github.com/jedbrown/cmake-modules/ >> >> I'm not an expert on CMake, so I couldn't say what was the causing the >>> problem, but in the end it told me it could not find the PETSc libraries. >>> It seemed to be rather complicated-- I know PETSc will be where I need it, >>> so I don't need all the verification that it's there and everything works. >>> I thought, with the benefit of more intimate knowledge of how PETSc runs, >>> that someone could show a much simpler way (it seems to me that this ought >>> to be the case). >>> >> >> The problem is that there are lots of ways that things can "not work", so >> its important for the FindPETSc.cmake script to really try. Also, CMake >> insists on taking parameters in a different way (e.g. converting >> command-line flags to full paths). >> >> Have you looked at the logs (CMakeFiles/CMake{Output,Error}.log >> >> Here is more active package that uses the FindPETSc.cmake script >> >> https://github.com/pism/pism >> > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From caplanr at predsci.com Mon Jul 30 18:54:43 2012 From: caplanr at predsci.com (Ronald M. Caplan) Date: Mon, 30 Jul 2012 16:54:43 -0700 Subject: [petsc-users] segfault in MatAssemblyEnd() when using large matrices on multi-core MAC OS-X In-Reply-To: References: Message-ID: Yes that is correct. That is the updated code with each node storing its own values. See my previous email to Matt for the old version which segfaults with processors more than 1 and npts =25. - Ron On Mon, Jul 30, 2012 at 4:43 PM, Jed Brown wrote: > I ran > > $ mpirun -n 2 ./a.out > --------------------------------------------------- > Running MAS PETSc tests with: > nr: 23 > np: 27 > nt: 25 > N: 46575 > num_steps: 2000 > MPI cores: 2 > --------------------------------------------------- > MPI TEST: My rank is: 0 > MPI TEST: My rank is: 1 > Rank 0 has rows 0 to 23288 > Rank 1 has rows 23288 to 46575 > Number of non-zero entries in matrix: 690339 > Computing y=Ax with RON_CSR_AX... > ...done! > ||y||: 821.67460825997637 > y(5)= 2.7454534359667053 > Storing MAS matrix into PETSc matrix... > ...done! > rank 1 about to call MatAssemblyEnd()... > rank 0 about to call MatAssemblyEnd()... > Computing y=Ax with PETSc MatMult... > ...done! > ||y||: 821.67460825997568 > y(5)= 2.7454534359667053 > RON_CSR_AX y=Ax time: 201.704498 nsec/mp. > PETSc y=Ax time: 198.269424 nsec/mp. > PETSc y=Ax flops: 0.417739183 GFLOPS. > Did not converge! Number of iteratioins: 0 > > > On Mon, Jul 30, 2012 at 3:12 PM, Ronald M. Caplan wrote: > >> Attached is the code. The original code which segfaults with more than >> one core is the code I sent last week. >> >> - Ron C >> >> >> On Mon, Jul 30, 2012 at 3:09 PM, Jed Brown wrote: >> >>> On Mon, Jul 30, 2012 at 3:04 PM, Ronald M. 
Caplan wrote: >>> >>>> I seem to have solved the problem. >>>> >>>> I was storing my entire matrix on node 0 and then calling MatAssembly >>>> (begin and end) on all nodes (which should have worked...). >>>> >>>> Apparently I was using too much space for the buffering or the like, >>>> because when I change the code so each node sets its own matrix values, >>>> than the MatAssemblyEnd does not seg fault. >>>> >>> >>> Can you send the test case. It shouldn't seg-fault unless the machine >>> runs out of memory (and most desktop systems have overcommit, so the system >>> will kill arbitrary processes, not necessarily the job that did the latest >>> malloc. >>> >>> In practice, you should call MatAssemblyBegin(...,MAT_FLUSH_ASSEMBLY) >>> periodically. >>> >>> >>>> >>>> Why should this be the case? How many elements of a vector or matrix >>>> can a single node "set" before Assembly to distribute over all nodes? >>>> >>> >>> >>> >> > -------------- next part -------------- An HTML attachment was scrubbed... URL: From jedbrown at mcs.anl.gov Mon Jul 30 18:57:40 2012 From: jedbrown at mcs.anl.gov (Jed Brown) Date: Mon, 30 Jul 2012 16:57:40 -0700 Subject: [petsc-users] segfault in MatAssemblyEnd() when using large matrices on multi-core MAC OS-X In-Reply-To: References: Message-ID: On Mon, Jul 30, 2012 at 4:54 PM, Ronald M. Caplan wrote: > Yes that is correct. That is the updated code with each node storing its > own values. See my previous email to Matt for the old version which > segfaults with processors more than 1 and npts =25. $ mpiexec.hydra -n 2 ./petsctest N: 46575 cores: 2 MPI TEST: My rank is: 0 MPI TEST: My rank is: 1 Rank 0 has range 0 and 23288 Rank 1 has range 23288 and 46575 Number of non-zero entries in matrix: 690339 Done setting matrix values... between assembly between assembly PETSc y=Ax time: 199.342865 nsec/mp. PETSc y=Ax flops: 0.415489674 GFLOPS. -------------- next part -------------- An HTML attachment was scrubbed... URL: From caplanr at predsci.com Mon Jul 30 19:10:19 2012 From: caplanr at predsci.com (Ronald M. Caplan) Date: Mon, 30 Jul 2012 17:10:19 -0700 Subject: [petsc-users] segfault in MatAssemblyEnd() when using large matrices on multi-core MAC OS-X In-Reply-To: References: Message-ID: Hmmm, maybe it's because I am on Mac OS X? - Ron On Mon, Jul 30, 2012 at 4:57 PM, Jed Brown wrote: > On Mon, Jul 30, 2012 at 4:54 PM, Ronald M. Caplan wrote: > >> Yes that is correct. That is the updated code with each node storing its >> own values. See my previous email to Matt for the old version which >> segfaults with processors more than 1 and npts =25. > > > $ mpiexec.hydra -n 2 ./petsctest > N: 46575 > cores: 2 > MPI TEST: My rank is: 0 > MPI TEST: My rank is: 1 > Rank 0 has range 0 and 23288 > Rank 1 has range 23288 and 46575 > Number of non-zero entries in matrix: 690339 > Done setting matrix values... > between assembly > between assembly > PETSc y=Ax time: 199.342865 nsec/mp. > PETSc y=Ax flops: 0.415489674 GFLOPS. > -------------- next part -------------- An HTML attachment was scrubbed... URL: From caplanr at predsci.com Tue Jul 31 15:17:07 2012 From: caplanr at predsci.com (Ronald M. Caplan) Date: Tue, 31 Jul 2012 13:17:07 -0700 Subject: [petsc-users] Save matrix view to postscript? Message-ID: Hi, I saw online that back in 2006 there was a way to tell PETSc to save the image in -mat_view_draw into a postscript file. Is there any way to still do this? Or to save the image in any format? Or to save a movie of the residual images in ksp? Thanks, Ron C.
-------------- next part -------------- An HTML attachment was scrubbed... URL: From bsmith at mcs.anl.gov Tue Jul 31 17:23:58 2012 From: bsmith at mcs.anl.gov (Barry Smith) Date: Tue, 31 Jul 2012 17:23:58 -0500 Subject: [petsc-users] Save matrix view to postscript? In-Reply-To: References: Message-ID: Sorry the postscript never really worked and then the world went PDF. You can do what you want in PETSc 3.3 with ierr = PetscOptionsBool("-draw_save_movie","Make a movie from the images saved","PetscDrawSetSave",movie,&movie,PETSC_NULL);CHKERRQ(ierr); ierr = PetscOptionsString("-draw_save","Save graphics to file","PetscDrawSetSave",filename,filename,PETSC_MAX_PATH_LEN,&save);CHKERRQ(ierr); /*@C PetscDrawSave - Saves images produced in a PetscDraw into a file as a Gif file using AfterImage Collective on PetscDraw Input Parameter: + draw - the graphics context . filename - name of the file, if PETSC_NULL uses name of draw object - movie - produce a movie of all the images Options Database Command: + -draw_save - -draw_save_movie Level: intermediate Concepts: X windows^graphics Concepts: drawing^postscript Concepts: postscript^graphics Concepts: drawing^Microsoft Windows Notes: Requires that PETSc be configured with the option --with-afterimage .seealso: PetscDrawSetFromOptions(), PetscDrawCreate(), PetscDrawDestroy(), PetscDrawSave() Please report any problems to petsc-maint at mcs.anl.gov Note that you must make sure the windows being saved are not covered by other windows. Barry On Jul 31, 2012, at 3:17 PM, "Ronald M. Caplan" wrote: > Hi, > > I saw online that back in 2006 there was a way to tell PETSc to save the image in -mat_view_draw into a postscript file. Is there anyway to still do this? Or to same the image into any format? Or to save a movie of the residual images in ksp? > > Thanks, > > Ron C. From caplanr at predsci.com Tue Jul 31 18:13:20 2012 From: caplanr at predsci.com (Ronald M. Caplan) Date: Tue, 31 Jul 2012 16:13:20 -0700 Subject: [petsc-users] Save matrix view to postscript? In-Reply-To: References: Message-ID: Thanks! I can't seem to get it to work though. When I configure --with-afterimage it says that I need to download it. When I use --download-afterimage it says: "External package afterimage does not support --download-afterimage" I am using a MAC with OS X. Where can I download afterimage? Also, in general, if I want to add a configure to petsc with ./configure, do I have to run ./configure with ALL my options, or just with the new ones I want to add/change? Will it save my previous configure? Thanks again for your rapid responses! - Ron C On Tue, Jul 31, 2012 at 3:23 PM, Barry Smith wrote: > > Sorry the postscript never really worked and then the world went PDF. > > You can do what you want in PETSc 3.3 with > > ierr = PetscOptionsBool("-draw_save_movie","Make a movie from the > images saved","PetscDrawSetSave",movie,&movie,PETSC_NULL);CHKERRQ(ierr); > ierr = PetscOptionsString("-draw_save","Save graphics to > file","PetscDrawSetSave",filename,filename,PETSC_MAX_PATH_LEN,&save);CHKERRQ(ierr); > > /*@C > PetscDrawSave - Saves images produced in a PetscDraw into a file as a > Gif file using AfterImage > > Collective on PetscDraw > > Input Parameter: > + draw - the graphics context > . 
filename - name of the file, if PETSC_NULL uses name of draw object > - movie - produce a movie of all the images > > Options Database Command: > + -draw_save > - -draw_save_movie > > Level: intermediate > > Concepts: X windows^graphics > Concepts: drawing^postscript > Concepts: postscript^graphics > Concepts: drawing^Microsoft Windows > > Notes: Requires that PETSc be configured with the option > --with-afterimage > > > .seealso: PetscDrawSetFromOptions(), PetscDrawCreate(), > PetscDrawDestroy(), PetscDrawSave() > > Please report any problems to petsc-maint at mcs.anl.gov > > Note that you must make sure the windows being saved are not covered by > other windows. > > Barry > > > On Jul 31, 2012, at 3:17 PM, "Ronald M. Caplan" > wrote: > > > Hi, > > > > I saw online that back in 2006 there was a way to tell PETSc to save the > image in -mat_view_draw into a postscript file. Is there anyway to still > do this? Or to same the image into any format? Or to save a movie of the > residual images in ksp? > > > > Thanks, > > > > Ron C. > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From bsmith at mcs.anl.gov Tue Jul 31 18:20:17 2012 From: bsmith at mcs.anl.gov (Barry Smith) Date: Tue, 31 Jul 2012 18:20:17 -0500 Subject: [petsc-users] Save matrix view to postscript? In-Reply-To: References: Message-ID: On Jul 31, 2012, at 6:13 PM, "Ronald M. Caplan" wrote: > Thanks! > > I can't seem to get it to work though. > When I configure --with-afterimage it says that I need to download it. When I use --download-afterimage it says: > "External package afterimage does not support --download-afterimage" http://www.afterstep.org/afterimage/getcode.php Also I forgot to mention you need to install http://ffmpeg.org/download.html both of these are "./configure ; make; make install" packages and should install quickly and cleanly. Good luck otherwise. > > I am using a MAC with OS X. Where can I download afterimage? > > Also, in general, if I want to add a configure to petsc with ./configure, do I have to run ./configure with ALL my options, or just with the new ones I want to add/change? Will it save my previous configure? python ${PETSC_ARCH}/conf/reconfigure-${PETSC_ARCH}.py --with-afterimage It will use the old options plus anything new you add. It will only rebuild libraries that it has to rebuild. Barry > > Thanks again for your rapid responses! > > - Ron C > > On Tue, Jul 31, 2012 at 3:23 PM, Barry Smith wrote: > > Sorry the postscript never really worked and then the world went PDF. > > You can do what you want in PETSc 3.3 with > > ierr = PetscOptionsBool("-draw_save_movie","Make a movie from the images saved","PetscDrawSetSave",movie,&movie,PETSC_NULL);CHKERRQ(ierr); > ierr = PetscOptionsString("-draw_save","Save graphics to file","PetscDrawSetSave",filename,filename,PETSC_MAX_PATH_LEN,&save);CHKERRQ(ierr); > > /*@C > PetscDrawSave - Saves images produced in a PetscDraw into a file as a Gif file using AfterImage > > Collective on PetscDraw > > Input Parameter: > + draw - the graphics context > . 
filename - name of the file, if PETSC_NULL uses name of draw object > - movie - produce a movie of all the images > > Options Database Command: > + -draw_save > - -draw_save_movie > > Level: intermediate > > Concepts: X windows^graphics > Concepts: drawing^postscript > Concepts: postscript^graphics > Concepts: drawing^Microsoft Windows > > Notes: Requires that PETSc be configured with the option --with-afterimage > > > .seealso: PetscDrawSetFromOptions(), PetscDrawCreate(), PetscDrawDestroy(), PetscDrawSave() > > Please report any problems to petsc-maint at mcs.anl.gov > > Note that you must make sure the windows being saved are not covered by other windows. > > Barry > > > On Jul 31, 2012, at 3:17 PM, "Ronald M. Caplan" wrote: > > > Hi, > > > > I saw online that back in 2006 there was a way to tell PETSc to save the image in -mat_view_draw into a postscript file. Is there anyway to still do this? Or to same the image into any format? Or to save a movie of the residual images in ksp? > > > > Thanks, > > > > Ron C. > > From jedbrown at mcs.anl.gov Tue Jul 31 18:20:37 2012 From: jedbrown at mcs.anl.gov (Jed Brown) Date: Tue, 31 Jul 2012 16:20:37 -0700 Subject: [petsc-users] Save matrix view to postscript? In-Reply-To: References: Message-ID: On Tue, Jul 31, 2012 at 4:13 PM, Ronald M. Caplan wrote: > Thanks! > > I can't seem to get it to work though. > When I configure --with-afterimage it says that I need to download it. > When I use --download-afterimage it says: > "External package afterimage does not support --download-afterimage" > > I am using a MAC with OS X. Where can I download afterimage? > http://www.afterstep.org/afterimage/ > > Also, in general, if I want to add a configure to petsc with ./configure, > do I have to run ./configure with ALL my options, or just with the new ones > I want to add/change? Will it save my previous configure? > Run $PETSC_ARCH/conf/reconfigure-*.py --new-options > > Thanks again for your rapid responses! > > - Ron C > > > On Tue, Jul 31, 2012 at 3:23 PM, Barry Smith wrote: > >> >> Sorry the postscript never really worked and then the world went PDF. >> >> You can do what you want in PETSc 3.3 with >> >> ierr = PetscOptionsBool("-draw_save_movie","Make a movie from the >> images saved","PetscDrawSetSave",movie,&movie,PETSC_NULL);CHKERRQ(ierr); >> ierr = PetscOptionsString("-draw_save","Save graphics to >> file","PetscDrawSetSave",filename,filename,PETSC_MAX_PATH_LEN,&save);CHKERRQ(ierr); >> >> /*@C >> PetscDrawSave - Saves images produced in a PetscDraw into a file as a >> Gif file using AfterImage >> >> Collective on PetscDraw >> >> Input Parameter: >> + draw - the graphics context >> . filename - name of the file, if PETSC_NULL uses name of draw object >> - movie - produce a movie of all the images >> >> Options Database Command: >> + -draw_save >> - -draw_save_movie >> >> Level: intermediate >> >> Concepts: X windows^graphics >> Concepts: drawing^postscript >> Concepts: postscript^graphics >> Concepts: drawing^Microsoft Windows >> >> Notes: Requires that PETSc be configured with the option >> --with-afterimage >> >> >> .seealso: PetscDrawSetFromOptions(), PetscDrawCreate(), >> PetscDrawDestroy(), PetscDrawSave() >> >> Please report any problems to petsc-maint at mcs.anl.gov >> >> Note that you must make sure the windows being saved are not covered by >> other windows. >> >> Barry >> >> >> On Jul 31, 2012, at 3:17 PM, "Ronald M. 
Caplan" >> wrote: >> >> > Hi, >> > >> > I saw online that back in 2006 there was a way to tell PETSc to save >> the image in -mat_view_draw into a postscript file. Is there anyway to >> still do this? Or to same the image into any format? Or to save a movie >> of the residual images in ksp? >> > >> > Thanks, >> > >> > Ron C. >> >> > -------------- next part -------------- An HTML attachment was scrubbed... URL: