From w_ang_temp at 163.com Tue Jan 1 01:12:13 2013 From: w_ang_temp at 163.com (w_ang_temp) Date: Tue, 1 Jan 2013 15:12:13 +0800 (CST) Subject: [petsc-users] Happy New Year! Blacs configure. Message-ID: <6c1b0c7c.27f7.13bf4f4105c.Coremail.w_ang_temp@163.com> Hello, Happy New Year to all! And I have a problem about the configuration. The command is: ./configure --with-mpi-dir=/home/geo/soft/mpich2/ --download-f-blas-lapack=1 --download-hypre=1 --with-x=1 --with-debugging=0 --download-superlu_dist --download-parmetis --download-mumps --download-scalapack --download-blacs When it comes to: TESTING: configureLibrary from PETSc.packages.blacs(config/BuildSystem/config/package.py:417) It seems that the process has stopped or cannot continue because it is still here after one hour. The configure.log is: ================================================================================ TEST alternateConfigureLibrary from PETSc.packages.PTScotch(/home/geo/soft/petsc/petsc-3.2-p7/config/BuildSystem/config/package.py:471) TESTING: alternateConfigureLibrary from PETSc.packages.PTScotch(config/BuildSystem/config/package.py:471) Called if --with-packagename=0; does nothing by default ================================================================================ TEST alternateConfigureLibrary from PETSc.packages.PaStiX(/home/geo/soft/petsc/petsc-3.2-p7/config/BuildSystem/config/package.py:471) TESTING: alternateConfigureLibrary from PETSc.packages.PaStiX(config/BuildSystem/config/package.py:471) Called if --with-packagename=0; does nothing by default sh: uname -s Executing: uname -s sh: Linux sh: uname -s Executing: uname -s sh: Linux Pushing language C ================================================================================ TEST configureLibrary from PETSc.packages.blacs(/home/geo/soft/petsc/petsc-3.2-p7/config/BuildSystem/config/package.py:417) TESTING: configureLibrary from PETSc.packages.blacs(config/BuildSystem/config/package.py:417) Find an installation and check if it can work with PETSc ================================================================================== Checking for a functional blacs Looking for BLACS in directory starting with blacs Could not locate an existing copy of blacs: ['hypre-2.7.0b', 'fblaslapack-3.1.1'] Downloading blacs Downloading http://ftp.mcs.anl.gov/pub/petsc/externalpackages/blacs-dev.tar.gz to /home/geo/soft/petsc/petsc-3.2-p7/externalpackages/_d_blacs.tar.gz Thanks. Happy new year, and good luck! Jim -------------- next part -------------- An HTML attachment was scrubbed... URL: From knepley at gmail.com Tue Jan 1 04:22:54 2013 From: knepley at gmail.com (Matthew Knepley) Date: Tue, 1 Jan 2013 04:22:54 -0600 Subject: [petsc-users] Happy New Year! Blacs configure. In-Reply-To: <6c1b0c7c.27f7.13bf4f4105c.Coremail.w_ang_temp@163.com> References: <6c1b0c7c.27f7.13bf4f4105c.Coremail.w_ang_temp@163.com> Message-ID: On Tue, Jan 1, 2013 at 1:12 AM, w_ang_temp wrote: > Hello, > > Happy New Year to all! > > And I have a problem about the configuration. 
The command is: > > ./configure --with-mpi-dir=/home/geo/soft/mpich2/ > --download-f-blas-lapack=1 --download-hypre=1 --with-x=1 --with-debugging=0 > --download-superlu_dist --download-parmetis --download-mumps > --download-scalapack --download-blacs > > When it comes to: > TESTING: configureLibrary from > PETSc.packages.blacs(config/BuildSystem/config/package.py:417) > > It seems that the process has stopped or cannot continue because it is > still here after one hour > Perhaps there is a problem with your connection. Download the tarball from that location and use --download-blacs=. Also note that petsc-dev has removed blacs as a dependency. Matt > > > The configure.log is: > > > > ================================================================================ > TEST alternateConfigureLibrary from > PETSc.packages.PTScotch(/home/geo/soft/petsc/petsc-3.2-p7/config/BuildSystem/config/package.py:471) > TESTING: alternateConfigureLibrary from > PETSc.packages.PTScotch(config/BuildSystem/config/package.py:471) > Called if --with-packagename=0; does nothing by default > > ================================================================================ > TEST alternateConfigureLibrary from > PETSc.packages.PaStiX(/home/geo/soft/petsc/petsc-3.2-p7/config/BuildSystem/config/package.py:471) > TESTING: alternateConfigureLibrary from > PETSc.packages.PaStiX(config/BuildSystem/config/package.py:471) > Called if --with-packagename=0; does nothing by default > sh: uname -s > Executing: uname -s > sh: Linux > > sh: uname -s > Executing: uname -s > sh: Linux > > Pushing language C > > ================================================================================ > TEST configureLibrary from > PETSc.packages.blacs(/home/geo/soft/petsc/petsc-3.2-p7/config/BuildSystem/config/package.py:417) > TESTING: configureLibrary from > PETSc.packages.blacs(config/BuildSystem/config/package.py:417) > Find an installation and check if it can work with PETSc > > ================================================================================== > Checking for a functional blacs > Looking for BLACS in directory starting with blacs > Could not locate an existing copy of blacs: > ['hypre-2 .7.0b', 'fblaslapack-3.1.1'] > Downloading blacs > Downloading > http://ftp.mcs.anl.gov/pub/petsc/externalpackages/blacs-dev.tar.gz to > /home/geo/soft/petsc/petsc-3.2-p7/externalpackages/_d_blacs.tar.gz > > > Thanks. Happy new year, and good luck! > > Jim > > > -- What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. -- Norbert Wiener -------------- next part -------------- An HTML attachment was scrubbed... URL: From u.tabak at tudelft.nl Tue Jan 1 05:26:51 2013 From: u.tabak at tudelft.nl (Umut Tabak) Date: Tue, 01 Jan 2013 12:26:51 +0100 Subject: [petsc-users] PetscBinaryWrite function in MATLAB interface Message-ID: <50E2C7FB.7070201@tudelft.nl> Dear all, I realized sth strange in the MATLAB interface with PetscBinaryWrite function. If my matrix is even a dense matrix, it is still written in the format of a sparse matrix. However, I think it should be written as vector as explained in the short comment in that file. I guess the boolean test on line 45 of the 'PetscBinaryWrite' function should be an '&&' not a '||'. All the best for the coming year, U. 
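As the replies further down note, the MATLAB PetscBinaryWrite script writes every matrix in one common on-disk layout. For readers who actually need dense storage on the PETSc side, here is a minimal C sketch of the consuming end (the helper name load_as_dense and the file name argument are made up for illustration): read the file back as AIJ, which matches how it was written, and only convert afterwards if dense storage is really wanted.

#include <petscmat.h>

PetscErrorCode load_as_dense(const char *fname, Mat *Bdense)
{
  Mat            B;
  PetscViewer    viewer;
  PetscErrorCode ierr;

  PetscFunctionBegin;
  /* Files produced by the MATLAB PetscBinaryWrite script carry one common
     (AIJ-style) layout, so read them back as AIJ */
  ierr = PetscViewerBinaryOpen(PETSC_COMM_WORLD, fname, FILE_MODE_READ, &viewer);CHKERRQ(ierr);
  ierr = MatCreate(PETSC_COMM_WORLD, &B);CHKERRQ(ierr);
  ierr = MatSetType(B, MATAIJ);CHKERRQ(ierr);
  ierr = MatLoad(B, viewer);CHKERRQ(ierr);
  ierr = PetscViewerDestroy(&viewer);CHKERRQ(ierr);
  /* Convert only if dense storage is really needed afterwards */
  ierr = MatConvert(B, MATDENSE, MAT_INITIAL_MATRIX, Bdense);CHKERRQ(ierr);
  ierr = MatDestroy(&B);CHKERRQ(ierr);
  PetscFunctionReturn(0);
}

Keeping the load path identical to the sparse case sidesteps the question of how the file was produced.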
From stefan.kurzbach at tu-harburg.de Tue Jan 1 06:35:47 2013 From: stefan.kurzbach at tu-harburg.de (Stefan Kurzbach) Date: Tue, 01 Jan 2013 13:35:47 +0100 Subject: [petsc-users] Direct Schur complement domain decomposition In-Reply-To: References: <002b01cdde05$6a2d0330$3e870990$@tuhh.de> Message-ID: <50E2D823.9010509@tu-harburg.de> Dear Jed Thanks. Non-iterative or direct substructuring is my understanding as well. You said this is the same as multifrontal factorization as well. Could you point me to some source where I can see the parallels? Maybe this is obvious for people who have grown up with solving sparse systems, but not for me :) I will have to spend some more time to find out about the other hints you gave. Best regards Stefan Am 29.12.2012 19:59, schrieb Jed Brown: > Sorry for the slow reply. What you are describing _is_ multifrontal > factorization, or alternatively, (non-iterative) substructuring. It is > a direct solve and boils down to a few large dense direct solves. > Incomplete factorization is one way of preventing the Schur > complements from getting too dense, but it's not very reliable. > > There are many other ways of retaining structure in the supernodes > (i.e., avoid unstructured dense matrices), at the expense of some > error. These methods "compress" the Schur complement using low-rank > representations for long-range interaction. These are typically > combined with an iterative method. > > Multigrid and multilevel DD methods can be thought of as an alternate > way to compress (approximately) the long-range interaction coming from > inexact elimination (dimensional reduction of interfaces). > > On Wed, Dec 19, 2012 at 10:25 AM, Stefan Kurzbach > > wrote: > > Hello everybody, > > in my recent research on parallelization of a 2D unstructured flow > model code I came upon a question on domain decomposition > techniques in ?grids?. Maybe someone knows of any previous results > on this? > > Typically, when doing large simulations with many unknowns, the > problem is distributed to many computer nodes and solved in > parallel by some iterative method. Many of these iterative methods > boil down to a large number of distributed matrix-vector > multiplications (in the order of the number of iterations). This > means there are many synchronization points in the algorithms, > which makes them tightly coupled. This has been found to work well > on clusters with fast networks. > > Now my question: > > What if there is a small number of very powerful nodes (say less > than 10), which are connected by a slow network, e.g. several > computer clusters connected over the internet (some people call > this ?grid computing?). I expect that the traditional iterative > methods will not be as efficient here (any references?). > > My guess is that a solution method with fewer synchronization > points will work better, even though that method may be > computationally more expensive than traditional methods. An > example would be a domain composition approach with direct > solution of the Schur complement on the interface. This requires > that the interface size has to be small compared to the subdomain > size. As this algorithm basically works in three decoupled phases > (solve the subdomains for several right hand sides, assemble and > solve the Schur complement system, correct the subdomain results) > it should be suited well, but I have no idea how to test or > otherwise prove it. 
Has anybody made any thoughts on this before, > possibly dating back to the 80ies and 90ies, where slow networks > were more common? > > Best regards > > Stefan > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From stefan.kurzbach at tu-harburg.de Tue Jan 1 06:56:46 2013 From: stefan.kurzbach at tu-harburg.de (Stefan Kurzbach) Date: Tue, 01 Jan 2013 13:56:46 +0100 Subject: [petsc-users] Direct Schur complement domain decomposition In-Reply-To: <9FFBA092-74B0-4CF3-AF65-45A001FDAC2E@mcs.anl.gov> References: <002b01cdde05$6a2d0330$3e870990$@tuhh.de> <9FFBA092-74B0-4CF3-AF65-45A001FDAC2E@mcs.anl.gov> Message-ID: <50E2DD0E.5010204@tu-harburg.de> Dear Barry this is the general tenor I have seen so far, but can I get a more tangible answer somewhere? Something that says "if you have more than X unkowns / Y interface unknowns / the number of (preconditioned) iterations is smaller than Z you should do an iterative solve"? Even better something that says "across slow networks"? I could not yet find anything concrete. Best regards Stefan PS. Just to make sure, I used the term "grid" to denote a loosely-coupled computer system, not a computational mesh. Am 29.12.2012 22:21, schrieb Barry Smith: > My off the cuff response is that "computing the exact Schur complements for the subdomains is sooooo expensive that it swamps out any savings in reducing the amount of communication" plus it requires soooo much memory. Thus solvers like these may make sense only when the problem is "non-standard" enough that iterative methods simply don't work (perhaps due to extreme ill-conditioning), such problems do exist but for most "PDE" problems with enough time and effort one can cook up the right combination of "block-splittings" and multilevel (multigrid) methods to get a much more efficient solver that gives you the accuracy you need long before the Schur complements have been computed. > > Barry > > On Dec 29, 2012, at 12:59 PM, Jed Brown wrote: > >> Sorry for the slow reply. What you are describing _is_ multifrontal factorization, or alternatively, (non-iterative) substructuring. It is a direct solve and boils down to a few large dense direct solves. Incomplete factorization is one way of preventing the Schur complements from getting too dense, but it's not very reliable. >> >> There are many other ways of retaining structure in the supernodes (i.e., avoid unstructured dense matrices), at the expense of some error. These methods "compress" the Schur complement using low-rank representations for long-range interaction. These are typically combined with an iterative method. >> >> Multigrid and multilevel DD methods can be thought of as an alternate way to compress (approximately) the long-range interaction coming from inexact elimination (dimensional reduction of interfaces). >> >> On Wed, Dec 19, 2012 at 10:25 AM, Stefan Kurzbach wrote: >> Hello everybody, >> >> >> >> in my recent research on parallelization of a 2D unstructured flow model code I came upon a question on domain decomposition techniques in ?grids?. Maybe someone knows of any previous results on this? >> >> >> >> Typically, when doing large simulations with many unknowns, the problem is distributed to many computer nodes and solved in parallel by some iterative method. Many of these iterative methods boil down to a large number of distributed matrix-vector multiplications (in the order of the number of iterations). This means there are many synchronization points in the algorithms, which makes them tightly coupled. 
This has been found to work well on clusters with fast networks. >> >> >> >> Now my question: >> >> What if there is a small number of very powerful nodes (say less than 10), which are connected by a slow network, e.g. several computer clusters connected over the internet (some people call this ?grid computing?). I expect that the traditional iterative methods will not be as efficient here (any references?). >> >> >> >> My guess is that a solution method with fewer synchronization points will work better, even though that method may be computationally more expensive than traditional methods. An example would be a domain composition approach with direct solution of the Schur complement on the interface. This requires that the interface size has to be small compared to the subdomain size. As this algorithm basically works in three decoupled phases (solve the subdomains for several right hand sides, assemble and solve the Schur complement system, correct the subdomain results) it should be suited well, but I have no idea how to test or otherwise prove it. Has anybody made any thoughts on this before, possibly dating back to the 80ies and 90ies, where slow networks were more common? >> >> >> >> Best regards >> >> Stefan >> >> >> >> >> >> From jedbrown at mcs.anl.gov Tue Jan 1 09:36:54 2013 From: jedbrown at mcs.anl.gov (Jed Brown) Date: Tue, 1 Jan 2013 09:36:54 -0600 Subject: [petsc-users] PetscBinaryWrite function in MATLAB interface In-Reply-To: <50E2C7FB.7070201@tudelft.nl> References: <50E2C7FB.7070201@tudelft.nl> Message-ID: Nope, it saves all matrices the same way because the binary format is not smart enough to distinguish different formats on disk. On Tue, Jan 1, 2013 at 5:26 AM, Umut Tabak wrote: > Dear all, > > I realized sth strange in the MATLAB interface with PetscBinaryWrite > function. > > If my matrix is even a dense matrix, it is still written in the format of > a sparse matrix. However, I think it should be written as vector as > explained in the short comment in that file. > > I guess the boolean test on line 45 of the 'PetscBinaryWrite' function > should be an '&&' not a '||'. > > All the best for the coming year, > U. > > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From w_ang_temp at 163.com Tue Jan 1 10:50:06 2013 From: w_ang_temp at 163.com (w_ang_temp) Date: Wed, 2 Jan 2013 00:50:06 +0800 (CST) Subject: [petsc-users] Happy New Year! Blacs configure. In-Reply-To: References: <6c1b0c7c.27f7.13bf4f4105c.Coremail.w_ang_temp@163.com> Message-ID: <481c4db3.4f89.13bf7051e75.Coremail.w_ang_temp@163.com> Thanks. I have solved it. As Matt says, --download-blacs=/.../blacs-dev.tar.gz. Jim >>On 2013-01-01 18:22:54?"Matthew Knepley" ??? >>On Tue, Jan 1, 2013 at 1:12 AM, w_ang_temp wrote: >>Hello, >> Happy New Year to all! >> And I have a problem about the configuration. The command is: >> ./configure --with-mpi-dir=/home/geo/soft/mpich2/ --download-f-blas-lapack=1 --download-hypre=1 --with-x=1 --with-debugging=0 --download->>superlu_dist --download-parmetis --download-mumps --download-scalapack --download-blacs >> When it comes to: >> TESTING: configureLibrary from PETSc.packages.blacs(config/BuildSystem/config/package.py:417) >> It seems that the process has stopped or cannot continue because it is still here after one hour >Perhaps there is a problem with your connection. Download the tarball from that location and use >--download-blacs=. Also note that petsc-dev has removed blacs as a dependency. 
Matt -------------- next part -------------- An HTML attachment was scrubbed... URL: From u.tabak at tudelft.nl Tue Jan 1 11:00:30 2013 From: u.tabak at tudelft.nl (Umut Tabak) Date: Tue, 01 Jan 2013 18:00:30 +0100 Subject: [petsc-users] PetscBinaryWrite function in MATLAB interface In-Reply-To: References: <50E2C7FB.7070201@tudelft.nl> Message-ID: <50E3162E.1000402@tudelft.nl> On 01/01/2013 04:36 PM, Jed Brown wrote: > Nope, it saves all matrices the same way because the binary format is > not smart enough to distinguish different formats on disk. ok, while saving there is no dense matrix concept basically... -------------- next part -------------- An HTML attachment was scrubbed... URL: From jedbrown at mcs.anl.gov Tue Jan 1 11:07:57 2013 From: jedbrown at mcs.anl.gov (Jed Brown) Date: Tue, 1 Jan 2013 11:07:57 -0600 Subject: [petsc-users] PetscBinaryWrite function in MATLAB interface In-Reply-To: <50E3162E.1000402@tudelft.nl> References: <50E2C7FB.7070201@tudelft.nl> <50E3162E.1000402@tudelft.nl> Message-ID: On Tue, Jan 1, 2013 at 11:00 AM, Umut Tabak wrote: > On 01/01/2013 04:36 PM, Jed Brown wrote: > > Nope, it saves all matrices the same way because the binary format is not > smart enough to distinguish different formats on disk. > > ok, while saving there is no dense matrix concept basically... > Only if you do it manually. I recommend 1. Just use sparse and don't worry about it. 2. If you feel (1) is such a crucial performance bottleneck that must be sped up by a small fraction, change your workflow to avoid writing such large matrices to disk, thus speeding up this phase by orders of magnitude. -------------- next part -------------- An HTML attachment was scrubbed... URL: From jedbrown at mcs.anl.gov Tue Jan 1 11:45:48 2013 From: jedbrown at mcs.anl.gov (Jed Brown) Date: Tue, 1 Jan 2013 11:45:48 -0600 Subject: [petsc-users] Direct Schur complement domain decomposition In-Reply-To: <50E2D823.9010509@tu-harburg.de> References: <002b01cdde05$6a2d0330$3e870990$@tuhh.de> <50E2D823.9010509@tu-harburg.de> Message-ID: You should be able to look at any of the standard references on multifrontal methods, e.g., by Liu, Davis, Gupta, or Duff. Substructuring as a mesh-based procedure has fallen out of favor because multifrontal does the same thing in more reusable code (purely algebraic; no reference to a mesh) with fine performance. On Tue, Jan 1, 2013 at 6:35 AM, Stefan Kurzbach < stefan.kurzbach at tu-harburg.de> wrote: > Dear Jed > > Thanks. Non-iterative or direct substructuring is my understanding as > well. You said this is the same as multifrontal factorization as well. > Could you point me to some source where I can see the parallels? Maybe this > is obvious for people who have grown up with solving sparse systems, but > not for me :) > > I will have to spend some more time to find out about the other hints you > gave. > > Best regards > Stefan > > Am 29.12.2012 19:59, schrieb Jed Brown: > > Sorry for the slow reply. What you are describing _is_ multifrontal > factorization, or alternatively, (non-iterative) substructuring. It is a > direct solve and boils down to a few large dense direct solves. Incomplete > factorization is one way of preventing the Schur complements from getting > too dense, but it's not very reliable. > > There are many other ways of retaining structure in the supernodes > (i.e., avoid unstructured dense matrices), at the expense of some error. > These methods "compress" the Schur complement using low-rank > representations for long-range interaction. 
These are typically combined > with an iterative method. > > Multigrid and multilevel DD methods can be thought of as an alternate way > to compress (approximately) the long-range interaction coming from inexact > elimination (dimensional reduction of interfaces). > > On Wed, Dec 19, 2012 at 10:25 AM, Stefan Kurzbach < > stefan.kurzbach at tuhh.de> wrote: > >> Hello everybody, >> >> >> >> in my recent research on parallelization of a 2D unstructured flow model >> code I came upon a question on domain decomposition techniques in ?grids?. >> Maybe someone knows of any previous results on this? >> >> >> >> Typically, when doing large simulations with many unknowns, the problem >> is distributed to many computer nodes and solved in parallel by some >> iterative method. Many of these iterative methods boil down to a large >> number of distributed matrix-vector multiplications (in the order of the >> number of iterations). This means there are many synchronization points in >> the algorithms, which makes them tightly coupled. This has been found to >> work well on clusters with fast networks. >> >> >> >> Now my question: >> >> What if there is a small number of very powerful nodes (say less than >> 10), which are connected by a slow network, e.g. several computer clusters >> connected over the internet (some people call this ?grid computing?). I >> expect that the traditional iterative methods will not be as efficient here >> (any references?). >> >> >> >> My guess is that a solution method with fewer synchronization points will >> work better, even though that method may be computationally more expensive >> than traditional methods. An example would be a domain composition approach >> with direct solution of the Schur complement on the interface. This >> requires that the interface size has to be small compared to the subdomain >> size. As this algorithm basically works in three decoupled phases (solve >> the subdomains for several right hand sides, assemble and solve the Schur >> complement system, correct the subdomain results) it should be suited well, >> but I have no idea how to test or otherwise prove it. Has anybody made any >> thoughts on this before, possibly dating back to the 80ies and 90ies, where >> slow networks were more common? >> >> >> >> Best regards >> >> Stefan >> >> >> >> >> > > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From balay at mcs.anl.gov Tue Jan 1 17:40:20 2013 From: balay at mcs.anl.gov (Satish Balay) Date: Tue, 1 Jan 2013 17:40:20 -0600 (CST) Subject: [petsc-users] An error comes up while updating In-Reply-To: <50E37058.8060409@gmail.com> References: <50E37058.8060409@gmail.com> Message-ID: fixed now satish On Tue, 1 Jan 2013, Zhenglun (Alan) Wei wrote: > Dear folks, > I had a problem when I was trying to update the PETSc. It says that: > ******************************************************************************* > UNABLE to CONFIGURE with GIVEN OPTIONS (see configure.log for > details): > ------------------------------------------------------------------------------- > PETSc makefiles contain mistakes or files are missing on filesystem. > Makefile contains directory not on filesystem: src/snes/utils: ['mesh'] > Possible reasons: > 1. Files were deleted locally, try "hg update". > 2. Files were deleted from mercurial, but were not removed from > makefile. Send mail to petsc-maint at mcs.anl.gov. > 3. Someone forgot "hg add" new files. Send mail to > petsc-maint at mcs.anl.gov. 
> ******************************************************************************* > And, here is the configure.log attached. > > Thank you so much and Happy New Year !!! > Alan > > From zhenglun.wei at gmail.com Tue Jan 1 19:25:24 2013 From: zhenglun.wei at gmail.com (Alan) Date: Tue, 01 Jan 2013 19:25:24 -0600 Subject: [petsc-users] An error comes up while updating In-Reply-To: References: <50E37058.8060409@gmail.com> Message-ID: <50E38C84.6060008@gmail.com> Thanks, However, here I have another little question on the mercurial: Traceback (most recent call last): File "/home/zlwei/soft/mercurial/mercurial-1.8.3/hg", line 38, in ? mercurial.dispatch.run() File "/home/zlwei/soft/mercurial/mercurial-1.8.3/mercurial/dispatch.py", line 16, in run sys.exit(dispatch(sys.argv[1:])) File "/home/zlwei/soft/mercurial/mercurial-1.8.3/mercurial/dispatch.py", line 21, in dispatch u = uimod.ui() File "/home/zlwei/soft/mercurial/mercurial-1.8.3/mercurial/ui.py", line 35, in __init__ for f in util.rcpath(): File "/home/zlwei/soft/mercurial/mercurial-1.8.3/mercurial/util.py", line 1346, in rcpath _rcpath = os_rcpath() File "/home/zlwei/soft/mercurial/mercurial-1.8.3/mercurial/util.py", line 1320, in os_rcpath path = system_rcpath() File "/home/zlwei/soft/mercurial/mercurial-1.8.3/mercurial/posix.py", line 47, in system_rcpath path.extend(rcfiles(os.path.dirname(sys.argv[0]) + File "/home/zlwei/soft/mercurial/mercurial-1.8.3/mercurial/posix.py", line 36, in rcfiles rcs.extend([os.path.join(rcdir, f) File "/home/zlwei/soft/mercurial/mercurial-1.8.3/mercurial/demandimport.py", line 75, in __getattribute__ self._load() File "/home/zlwei/soft/mercurial/mercurial-1.8.3/mercurial/demandimport.py", line 47, in _load mod = _origimport(head, globals, locals) ImportError: No module named osutil It happens when I'm trying to do 'hg pull -u'. Do you have any idea on this. thanks, Alan > fixed now > > satish > > On Tue, 1 Jan 2013, Zhenglun (Alan) Wei wrote: > >> Dear folks, >> I had a problem when I was trying to update the PETSc. It says that: >> ******************************************************************************* >> UNABLE to CONFIGURE with GIVEN OPTIONS (see configure.log for >> details): >> ------------------------------------------------------------------------------- >> PETSc makefiles contain mistakes or files are missing on filesystem. >> Makefile contains directory not on filesystem: src/snes/utils: ['mesh'] >> Possible reasons: >> 1. Files were deleted locally, try "hg update". >> 2. Files were deleted from mercurial, but were not removed from >> makefile. Send mail to petsc-maint at mcs.anl.gov. >> 3. Someone forgot "hg add" new files. Send mail to >> petsc-maint at mcs.anl.gov. >> ******************************************************************************* >> And, here is the configure.log attached. >> >> Thank you so much and Happy New Year !!! >> Alan >> >> From balay at mcs.anl.gov Tue Jan 1 19:37:28 2013 From: balay at mcs.anl.gov (Satish Balay) Date: Tue, 1 Jan 2013 19:37:28 -0600 (CST) Subject: [petsc-users] An error comes up while updating In-Reply-To: <50E38C84.6060008@gmail.com> References: <50E37058.8060409@gmail.com> <50E38C84.6060008@gmail.com> Message-ID: looks like your mercurial install is broken [and needs a reinstall?] 
My system install of mercurial [2.4.1] has 'osutil' at: >>>>>>> asterix:/home/balay>rpm -ql mercurial |grep osutil /usr/lib64/python2.7/site-packages/mercurial/osutil.so asterix:/home/balay> <<<<<<< Satish On Tue, 1 Jan 2013, Alan wrote: > Thanks, > However, here I have another little question on the mercurial: > Traceback (most recent call last): > File "/home/zlwei/soft/mercurial/mercurial-1.8.3/hg", line 38, in ? > mercurial.dispatch.run() > File "/home/zlwei/soft/mercurial/mercurial-1.8.3/mercurial/dispatch.py", > line 16, in run > sys.exit(dispatch(sys.argv[1:])) > File "/home/zlwei/soft/mercurial/mercurial-1.8.3/mercurial/dispatch.py", > line 21, in dispatch > u = uimod.ui() > File "/home/zlwei/soft/mercurial/mercurial-1.8.3/mercurial/ui.py", line 35, > in __init__ > for f in util.rcpath(): > File "/home/zlwei/soft/mercurial/mercurial-1.8.3/mercurial/util.py", line > 1346, in rcpath > _rcpath = os_rcpath() > File "/home/zlwei/soft/mercurial/mercurial-1.8.3/mercurial/util.py", line > 1320, in os_rcpath > path = system_rcpath() > File "/home/zlwei/soft/mercurial/mercurial-1.8.3/mercurial/posix.py", line > 47, in system_rcpath > path.extend(rcfiles(os.path.dirname(sys.argv[0]) + > File "/home/zlwei/soft/mercurial/mercurial-1.8.3/mercurial/posix.py", line > 36, in rcfiles > rcs.extend([os.path.join(rcdir, f) > File "/home/zlwei/soft/mercurial/mercurial-1.8.3/mercurial/demandimport.py", > line 75, in __getattribute__ > self._load() > File "/home/zlwei/soft/mercurial/mercurial-1.8.3/mercurial/demandimport.py", > line 47, in _load > mod = _origimport(head, globals, locals) > ImportError: No module named osutil > > It happens when I'm trying to do 'hg pull -u'. Do you have any idea on > this. > > thanks, > Alan > > fixed now > > > > satish > > > > On Tue, 1 Jan 2013, Zhenglun (Alan) Wei wrote: > > > > > Dear folks, > > > I had a problem when I was trying to update the PETSc. It says that: > > > ******************************************************************************* > > > UNABLE to CONFIGURE with GIVEN OPTIONS (see configure.log for > > > details): > > > > > > ------------------------------------------------------------------------------- > > > PETSc makefiles contain mistakes or files are missing on filesystem. > > > Makefile contains directory not on filesystem: src/snes/utils: ['mesh'] > > > Possible reasons: > > > 1. Files were deleted locally, try "hg update". > > > 2. Files were deleted from mercurial, but were not removed from > > > makefile. Send mail to petsc-maint at mcs.anl.gov. > > > 3. Someone forgot "hg add" new files. Send mail to > > > petsc-maint at mcs.anl.gov. > > > ******************************************************************************* > > > And, here is the configure.log attached. > > > > > > Thank you so much and Happy New Year !!! > > > Alan > > > > > > > > From zonexo at gmail.com Wed Jan 2 02:45:46 2013 From: zonexo at gmail.com (TAY wee-beng) Date: Wed, 02 Jan 2013 09:45:46 +0100 Subject: [petsc-users] Version of HYPRE in current or dev PETSc Message-ID: <50E3F3BA.1020103@gmail.com> Hi, May I know the version of HYPRE to be downloaded in the current or dev PETSc? I understand that the newest HYPRE is 2.9b. Thanks and happy new year! 
-- Yours sincerely, TAY wee-beng From hgbk2008 at gmail.com Wed Jan 2 08:09:01 2013 From: hgbk2008 at gmail.com (Hoang Giang Bui) Date: Wed, 02 Jan 2013 15:09:01 +0100 Subject: [petsc-users] problem running petsc4py Message-ID: <50E43F7D.50707@gmail.com> Hi When I ran the standard example of petsc4py, I got the error below [0]PETSC ERROR: ------------------------------------------------------------------------ [0]PETSC ERROR: Caught signal number 11 SEGV: Segmentation Violation, probably memory access out of range [0]PETSC ERROR: Try option -start_in_debugger or -on_error_attach_debugger [0]PETSC ERROR: or see http://www.mcs.anl.gov/petsc/documentation/faq.html#valgrind[0]PETSC ERROR: or try http://valgrind.org on GNU/linux and Apple Mac OS X to find memory corruption errors [0]PETSC ERROR: configure using --with-debugging=yes, recompile, link, and run [0]PETSC ERROR: to get more information on the crash. -------------------------------------------------------------------------- MPI_ABORT was invoked on rank 0 in communicator MPI_COMM_WORLD with errorcode 59. NOTE: invoking MPI_ABORT causes Open MPI to kill all MPI processes. You may or may not see output from other processes, depending on exactly when Open MPI kills them. Moreover, printing out the value of Istart and Iend returns 0 (at A.getOwnershipRange()) Please advise the root cause of this problem. I compiled petsc4py-dev against petsc-3.3-p5 in debug mode. BR Giang Bui -------------- next part -------------- A non-text attachment was scrubbed... Name: ex1.py Type: text/x-python Size: 1701 bytes Desc: not available URL: From knepley at gmail.com Wed Jan 2 09:02:06 2013 From: knepley at gmail.com (Matthew Knepley) Date: Wed, 2 Jan 2013 09:02:06 -0600 Subject: [petsc-users] Version of HYPRE in current or dev PETSc In-Reply-To: <50E3F3BA.1020103@gmail.com> References: <50E3F3BA.1020103@gmail.com> Message-ID: On Wed, Jan 2, 2013 at 2:45 AM, TAY wee-beng wrote: > Hi, > > May I know the version of HYPRE to be downloaded in the current or dev > PETSc? I understand that the newest HYPRE is 2.9b. > 2.8.0 in dev. Look in config/PETSc/packages/hypre.py Matt > Thanks and happy new year! > > -- > Yours sincerely, > > TAY wee-beng > > -- What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. -- Norbert Wiener -------------- next part -------------- An HTML attachment was scrubbed... URL: From jedbrown at mcs.anl.gov Wed Jan 2 09:33:15 2013 From: jedbrown at mcs.anl.gov (Jed Brown) Date: Wed, 2 Jan 2013 09:33:15 -0600 Subject: [petsc-users] Version of HYPRE in current or dev PETSc In-Reply-To: References: <50E3F3BA.1020103@gmail.com> Message-ID: On Wed, Jan 2, 2013 at 9:02 AM, Matthew Knepley wrote: > On Wed, Jan 2, 2013 at 2:45 AM, TAY wee-beng wrote: > >> Hi, >> >> May I know the version of HYPRE to be downloaded in the current or dev >> PETSc? I understand that the newest HYPRE is 2.9b. >> > > 2.8.0 in dev. Look in config/PETSc/packages/hypre.py > The 2.9.0b build system appears to be broken. Emailing the hypre devs. -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From jedbrown at mcs.anl.gov Wed Jan 2 10:41:05 2013 From: jedbrown at mcs.anl.gov (Jed Brown) Date: Wed, 2 Jan 2013 10:41:05 -0600 Subject: [petsc-users] problem running petsc4py In-Reply-To: <50E43F7D.50707@gmail.com> References: <50E43F7D.50707@gmail.com> Message-ID: On Wed, Jan 2, 2013 at 8:09 AM, Hoang Giang Bui wrote: > > Hi > > When I ran the standard example of petsc4py, I got the error below > > [0]PETSC ERROR: ------------------------------** > ------------------------------**------------ > [0]PETSC ERROR: Caught signal number 11 SEGV: Segmentation Violation, > probably memory access out of range > [0]PETSC ERROR: Try option -start_in_debugger or -on_error_attach_debugger > [0]PETSC ERROR: or see http://www.mcs.anl.gov/petsc/** > documentation/faq.html#**valgrind[0]PETSCERROR: or try > http://valgrind.org on GNU/linux and Apple Mac OS X to find memory > corruption errors > [0]PETSC ERROR: configure using --with-debugging=yes, recompile, link, and > run > [0]PETSC ERROR: to get more information on the crash. > If you followed these directions, you'd have gotten this error message: Traceback (most recent call last): File "/home/jed/dl/ex1.py", line 27, in Istart, Iend = A.getOwnershipRange() File "Mat.pyx", line 453, in petsc4py.PETSc.Mat.getOwnershipRange (src/petsc4py.PETSc.c:83937) petsc4py.PETSc.Error: error code 73 [0] MatGetOwnershipRange() line 6025 in /home/jed/petsc/src/mat/interface/matrix.c [0] Object is in wrong state [0] Must call MatXXXSetPreallocation() or MatSetUp() on argument 1 "mat" before MatGetOwnershipRange() If you add A.setUp() or use a preallocation routine, you'll get the picture you wanted. > ------------------------------**------------------------------** > -------------- > MPI_ABORT was invoked on rank 0 in communicator MPI_COMM_WORLD > with errorcode 59. > > NOTE: invoking MPI_ABORT causes Open MPI to kill all MPI processes. > You may or may not see output from other processes, depending on > exactly when Open MPI kills them. > > Moreover, printing out the value of Istart and Iend returns 0 (at > A.getOwnershipRange()) > > Please advise the root cause of this problem. I compiled petsc4py-dev > against petsc-3.3-p5 in debug mode. > > BR > Giang Bui > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From hgbk2008 at gmail.com Wed Jan 2 10:47:16 2013 From: hgbk2008 at gmail.com (Hoang Giang Bui) Date: Wed, 02 Jan 2013 17:47:16 +0100 Subject: [petsc-users] problem running petsc4py In-Reply-To: References: <50E43F7D.50707@gmail.com> Message-ID: <50E46494.1000503@gmail.com> On 01/02/13 17:41, Jed Brown wrote: > On Wed, Jan 2, 2013 at 8:09 AM, Hoang Giang Bui > wrote: > > > Hi > > When I ran the standard example of petsc4py, I got the error below > > [0]PETSC ERROR: > ------------------------------------------------------------------------ > [0]PETSC ERROR: Caught signal number 11 SEGV: Segmentation > Violation, probably memory access out of range > [0]PETSC ERROR: Try option -start_in_debugger or > -on_error_attach_debugger > [0]PETSC ERROR: or see > http://www.mcs.anl.gov/petsc/documentation/faq.html#valgrind[0]PETSC > ERROR: or try http://valgrind.org on GNU/linux and Apple Mac OS X > to find memory corruption errors > [0]PETSC ERROR: configure using --with-debugging=yes, recompile, > link, and run > [0]PETSC ERROR: to get more information on the crash. 
> > > If you followed these directions, you'd have gotten this error message: > > Traceback (most recent call last): > File "/home/jed/dl/ex1.py", line 27, in > Istart, Iend = A.getOwnershipRange() > File "Mat.pyx", line 453, in petsc4py.PETSc.Mat.getOwnershipRange > (src/petsc4py.PETSc.c:83937) > petsc4py.PETSc.Error: error code 73 > [0] MatGetOwnershipRange() line 6025 in > /home/jed/petsc/src/mat/interface/matrix.c > [0] Object is in wrong state > [0] Must call MatXXXSetPreallocation() or MatSetUp() on argument 1 > "mat" before MatGetOwnershipRange() > > If you add A.setUp() or use a preallocation routine, you'll get the > picture you wanted. > > -------------------------------------------------------------------------- > MPI_ABORT was invoked on rank 0 in communicator MPI_COMM_WORLD > with errorcode 59. > > NOTE: invoking MPI_ABORT causes Open MPI to kill all MPI processes. > You may or may not see output from other processes, depending on > exactly when Open MPI kills them. > > Moreover, printing out the value of Istart and Iend returns 0 (at > A.getOwnershipRange()) > > Please advise the root cause of this problem. I compiled > petsc4py-dev against petsc-3.3-p5 in debug mode. > > BR > Giang Bui > > That's great. Thank you very much. Anyway. How do you have the Traceback functionality? I already compiled petsc --with-debugging=1 but the error still shown as I haven't set it. BR Giang Bui -------------- next part -------------- An HTML attachment was scrubbed... URL: From jedbrown at mcs.anl.gov Wed Jan 2 10:53:31 2013 From: jedbrown at mcs.anl.gov (Jed Brown) Date: Wed, 2 Jan 2013 10:53:31 -0600 Subject: [petsc-users] problem running petsc4py In-Reply-To: <50E46494.1000503@gmail.com> References: <50E43F7D.50707@gmail.com> <50E46494.1000503@gmail.com> Message-ID: On Wed, Jan 2, 2013 at 10:47 AM, Hoang Giang Bui wrote: > That's great. Thank you very much. > > Anyway. How do you have the Traceback functionality? I already compiled > petsc --with-debugging=1 but the error still shown as I haven't set it. > You probably didn't configure petsc4py to use the debugging PETSC_ARCH. You can use ./setup.py build --petsc-arch=arch1:arch2:arch3; ./setup.py install --prefix=...; and then PETSC_ARCH=arch2 ./ex1.py to select at run-time. (Normally arch1 might be debugging while arch2 is optimized, or maybe a different compiler or MPI implementation.) -------------- next part -------------- An HTML attachment was scrubbed... URL: From hgbk2008 at gmail.com Wed Jan 2 11:26:07 2013 From: hgbk2008 at gmail.com (Hoang Giang Bui) Date: Wed, 02 Jan 2013 18:26:07 +0100 Subject: [petsc-users] problem running petsc4py In-Reply-To: References: <50E43F7D.50707@gmail.com> <50E46494.1000503@gmail.com> Message-ID: <50E46DAF.5070505@gmail.com> On 01/02/13 17:53, Jed Brown wrote: > On Wed, Jan 2, 2013 at 10:47 AM, Hoang Giang Bui > wrote: > > That's great. Thank you very much. > > Anyway. How do you have the Traceback functionality? I already > compiled petsc --with-debugging=1 but the error still shown as I > haven't set it. > > > You probably didn't configure petsc4py to use the debugging > PETSC_ARCH. You can use ./setup.py build > --petsc-arch=arch1:arch2:arch3; ./setup.py install --prefix=...; and then > > PETSC_ARCH=arch2 ./ex1.py > > to select at run-time. (Normally arch1 might be debugging while arch2 > is optimized, or maybe a different compiler or MPI implementation.) Works nicely, thanks. Giang -------------- next part -------------- An HTML attachment was scrubbed... 
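For reference, petsc4py's A.setUp() is a thin wrapper around MatSetUp() in the C library, so the same ordering rule applies at the C level. A minimal C sketch (the helper name and sizes are made up for illustration) of the sequence PETSc expects before the ownership range can be queried:

#include <petscmat.h>

PetscErrorCode create_and_query(PetscInt n, Mat *A, PetscInt *rstart, PetscInt *rend)
{
  PetscErrorCode ierr;

  PetscFunctionBegin;
  ierr = MatCreate(PETSC_COMM_WORLD, A);CHKERRQ(ierr);
  ierr = MatSetSizes(*A, PETSC_DECIDE, PETSC_DECIDE, n, n);CHKERRQ(ierr);
  ierr = MatSetFromOptions(*A);CHKERRQ(ierr);
  /* Without this (or a MatXXXSetPreallocation call) the layout does not exist
     yet and MatGetOwnershipRange() raises the "wrong state" error quoted above */
  ierr = MatSetUp(*A);CHKERRQ(ierr);
  ierr = MatGetOwnershipRange(*A, rstart, rend);CHKERRQ(ierr);
  PetscFunctionReturn(0);
}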
URL: From gokhalen at gmail.com Wed Jan 2 13:31:54 2013 From: gokhalen at gmail.com (Nachiket Gokhale) Date: Wed, 2 Jan 2013 14:31:54 -0500 Subject: [petsc-users] MUMPS in serial Message-ID: Does MUMPS work with PETSC in serial i.e. one MPI process? I need to run in serial because I have to perform certain dense matrix multiplications which do not work in parallel. If mumps does not work, I think I will try superlu. -Nachiket From jedbrown at mcs.anl.gov Wed Jan 2 13:33:18 2013 From: jedbrown at mcs.anl.gov (Jed Brown) Date: Wed, 2 Jan 2013 13:33:18 -0600 Subject: [petsc-users] MUMPS in serial In-Reply-To: References: Message-ID: Did you try it? Yes, it works. On Wed, Jan 2, 2013 at 1:31 PM, Nachiket Gokhale wrote: > Does MUMPS work with PETSC in serial i.e. one MPI process? I need to > run in serial because I have to perform certain dense matrix > multiplications which do not work in parallel. If mumps does not > work, I think I will try superlu. > > -Nachiket > -------------- next part -------------- An HTML attachment was scrubbed... URL: From gokhalen at gmail.com Wed Jan 2 13:35:01 2013 From: gokhalen at gmail.com (Nachiket Gokhale) Date: Wed, 2 Jan 2013 14:35:01 -0500 Subject: [petsc-users] MUMPS in serial In-Reply-To: References: Message-ID: Yes, I did. I got this error message - the configure log shows that I installed mumps. 0]PETSC ERROR: --------------------- Error Message ------------------------------------ [0]PETSC ERROR: No support for this operation for this object type! [0]PETSC ERROR: Matrix format seqdense does not have a solver package mumps for LU. Perhaps you must ./configure with --download-mumps! [0]PETSC ERROR: ------------------------------------------------------------------------ [0]PETSC ERROR: Petsc Development HG revision: cc5f6de4d644fb53ec2bbf114fa776073e3e8534 HG Date: Fri Dec 21 11:22:24 2012 -0600 [0]PETSC ERROR: See docs/changes/index.html for recent updates. [0]PETSC ERROR: See docs/faq.html for hints about trouble shooting. [0]PETSC ERROR: See docs/index.html for manual pages. 
[0]PETSC ERROR: ------------------------------------------------------------------------ [0]PETSC ERROR: /home/gokhale/WAIGEN/GDEB-WAIGEN2012/bin/waiproj on a linux-gcc named asd1.wai.com by gokhale Wed Jan 2 14:29:19 2013 [0]PETSC ERROR: Libraries linked from /opt/petsc/petsc-dev/linux-gcc-gpp-mpich-mumps-complex-elemental/lib [0]PETSC ERROR: Configure run at Fri Dec 21 14:30:56 2012 [0]PETSC ERROR: Configure options --with-x=0 --with-mpi=1 --download-mpich=yes --with-x11=0 --with-debugging=0 --with-clanguage=C++ --with-shared-libraries=1 --download-mumps=yes --download-f-blas-lapack=1 --download-parmetis=1 --download-metis --download-scalapack=1 --download-blacs=1 --with-cmake=/usr/bin/cmake28 --with-scalar-type=complex --download-elemental [0]PETSC ERROR: ------------------------------------------------------------------------ [0]PETSC ERROR: MatGetFactor() line 3944 in /opt/petsc/petsc-dev/src/mat/interface/matrix.c [0]PETSC ERROR: PCSetUp_LU() line 133 in /opt/petsc/petsc-dev/src/ksp/pc/impls/factor/lu/lu.c [0]PETSC ERROR: PCSetUp() line 832 in /opt/petsc/petsc-dev/src/ksp/pc/interface/precon.c [0]PETSC ERROR: KSPSetUp() line 267 in /opt/petsc/petsc-dev/src/ksp/ksp/interface/itfunc.c [0]PETSC ERROR: KSPSolve() line 376 in /opt/petsc/petsc-dev/src/ksp/ksp/interface/itfunc.c [0]PETSC ERROR: main() line 169 in src/examples/waiproj.c application called MPI_Abort(MPI_COMM_WORLD, 56) - process 0 [cli_0]: aborting job: application called MPI_Abort(MPI_COMM_WORLD, 56) - process 0 -Nachiket On Wed, Jan 2, 2013 at 2:33 PM, Jed Brown wrote: > Did you try it? Yes, it works. > > > On Wed, Jan 2, 2013 at 1:31 PM, Nachiket Gokhale wrote: >> >> Does MUMPS work with PETSC in serial i.e. one MPI process? I need to >> run in serial because I have to perform certain dense matrix >> multiplications which do not work in parallel. If mumps does not >> work, I think I will try superlu. >> >> -Nachiket > > From jedbrown at mcs.anl.gov Wed Jan 2 13:35:53 2013 From: jedbrown at mcs.anl.gov (Jed Brown) Date: Wed, 2 Jan 2013 13:35:53 -0600 Subject: [petsc-users] MUMPS in serial In-Reply-To: References: Message-ID: On Wed, Jan 2, 2013 at 1:35 PM, Nachiket Gokhale wrote: > Yes, I did. I got this error message - the configure log shows that I > installed mumps. > > 0]PETSC ERROR: --------------------- Error Message > ------------------------------------ > [0]PETSC ERROR: No support for this operation for this object type! > [0]PETSC ERROR: Matrix format seqdense does not have a solver package > mumps for LU. MUMPS is not a dense solver. > Perhaps you must ./configure with --download-mumps! > [0]PETSC ERROR: > ------------------------------------------------------------------------ > [0]PETSC ERROR: Petsc Development HG revision: > cc5f6de4d644fb53ec2bbf114fa776073e3e8534 HG Date: Fri Dec 21 11:22:24 > 2012 -0600 > [0]PETSC ERROR: See docs/changes/index.html for recent updates. > [0]PETSC ERROR: See docs/faq.html for hints about trouble shooting. > [0]PETSC ERROR: See docs/index.html for manual pages. 
> [0]PETSC ERROR: > ------------------------------------------------------------------------ > [0]PETSC ERROR: /home/gokhale/WAIGEN/GDEB-WAIGEN2012/bin/waiproj on a > linux-gcc named asd1.wai.com by gokhale Wed Jan 2 14:29:19 2013 > [0]PETSC ERROR: Libraries linked from > /opt/petsc/petsc-dev/linux-gcc-gpp-mpich-mumps-complex-elemental/lib > [0]PETSC ERROR: Configure run at Fri Dec 21 14:30:56 2012 > [0]PETSC ERROR: Configure options --with-x=0 --with-mpi=1 > --download-mpich=yes --with-x11=0 --with-debugging=0 > --with-clanguage=C++ --with-shared-libraries=1 --download-mumps=yes > --download-f-blas-lapack=1 --download-parmetis=1 --download-metis > --download-scalapack=1 --download-blacs=1 > --with-cmake=/usr/bin/cmake28 --with-scalar-type=complex > --download-elemental > [0]PETSC ERROR: > ------------------------------------------------------------------------ > [0]PETSC ERROR: MatGetFactor() line 3944 in > /opt/petsc/petsc-dev/src/mat/interface/matrix.c > [0]PETSC ERROR: PCSetUp_LU() line 133 in > /opt/petsc/petsc-dev/src/ksp/pc/impls/factor/lu/lu.c > [0]PETSC ERROR: PCSetUp() line 832 in > /opt/petsc/petsc-dev/src/ksp/pc/interface/precon.c > [0]PETSC ERROR: KSPSetUp() line 267 in > /opt/petsc/petsc-dev/src/ksp/ksp/interface/itfunc.c > [0]PETSC ERROR: KSPSolve() line 376 in > /opt/petsc/petsc-dev/src/ksp/ksp/interface/itfunc.c > [0]PETSC ERROR: main() line 169 in src/examples/waiproj.c > application called MPI_Abort(MPI_COMM_WORLD, 56) - process 0 > [cli_0]: aborting job: > application called MPI_Abort(MPI_COMM_WORLD, 56) - process 0 > > -Nachiket > > On Wed, Jan 2, 2013 at 2:33 PM, Jed Brown wrote: > > Did you try it? Yes, it works. > > > > > > On Wed, Jan 2, 2013 at 1:31 PM, Nachiket Gokhale > wrote: > >> > >> Does MUMPS work with PETSC in serial i.e. one MPI process? I need to > >> run in serial because I have to perform certain dense matrix > >> multiplications which do not work in parallel. If mumps does not > >> work, I think I will try superlu. > >> > >> -Nachiket > > > > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From gokhalen at gmail.com Wed Jan 2 13:55:24 2013 From: gokhalen at gmail.com (Nachiket Gokhale) Date: Wed, 2 Jan 2013 14:55:24 -0500 Subject: [petsc-users] MUMPS in serial In-Reply-To: References: Message-ID: Sorry, I wasn't aware of that. Is there any thing that you particularly recommend for dense LU factorizations? Otherwise, I will fall back on the default factorizations in petsc - which seem to work, though I haven't investigated them thoroughly. -Nachiket On Wed, Jan 2, 2013 at 2:35 PM, Jed Brown wrote: > On Wed, Jan 2, 2013 at 1:35 PM, Nachiket Gokhale wrote: >> >> Yes, I did. I got this error message - the configure log shows that I >> installed mumps. >> >> 0]PETSC ERROR: --------------------- Error Message >> ------------------------------------ >> [0]PETSC ERROR: No support for this operation for this object type! >> [0]PETSC ERROR: Matrix format seqdense does not have a solver package >> mumps for LU. > > > MUMPS is not a dense solver. > >> >> Perhaps you must ./configure with --download-mumps! >> [0]PETSC ERROR: >> ------------------------------------------------------------------------ >> [0]PETSC ERROR: Petsc Development HG revision: >> cc5f6de4d644fb53ec2bbf114fa776073e3e8534 HG Date: Fri Dec 21 11:22:24 >> 2012 -0600 >> [0]PETSC ERROR: See docs/changes/index.html for recent updates. >> [0]PETSC ERROR: See docs/faq.html for hints about trouble shooting. >> [0]PETSC ERROR: See docs/index.html for manual pages. 
>> [0]PETSC ERROR: >> ------------------------------------------------------------------------ >> [0]PETSC ERROR: /home/gokhale/WAIGEN/GDEB-WAIGEN2012/bin/waiproj on a >> linux-gcc named asd1.wai.com by gokhale Wed Jan 2 14:29:19 2013 >> [0]PETSC ERROR: Libraries linked from >> /opt/petsc/petsc-dev/linux-gcc-gpp-mpich-mumps-complex-elemental/lib >> [0]PETSC ERROR: Configure run at Fri Dec 21 14:30:56 2012 >> [0]PETSC ERROR: Configure options --with-x=0 --with-mpi=1 >> --download-mpich=yes --with-x11=0 --with-debugging=0 >> --with-clanguage=C++ --with-shared-libraries=1 --download-mumps=yes >> --download-f-blas-lapack=1 --download-parmetis=1 --download-metis >> --download-scalapack=1 --download-blacs=1 >> --with-cmake=/usr/bin/cmake28 --with-scalar-type=complex >> --download-elemental >> [0]PETSC ERROR: >> ------------------------------------------------------------------------ >> [0]PETSC ERROR: MatGetFactor() line 3944 in >> /opt/petsc/petsc-dev/src/mat/interface/matrix.c >> [0]PETSC ERROR: PCSetUp_LU() line 133 in >> /opt/petsc/petsc-dev/src/ksp/pc/impls/factor/lu/lu.c >> [0]PETSC ERROR: PCSetUp() line 832 in >> /opt/petsc/petsc-dev/src/ksp/pc/interface/precon.c >> [0]PETSC ERROR: KSPSetUp() line 267 in >> /opt/petsc/petsc-dev/src/ksp/ksp/interface/itfunc.c >> [0]PETSC ERROR: KSPSolve() line 376 in >> /opt/petsc/petsc-dev/src/ksp/ksp/interface/itfunc.c >> [0]PETSC ERROR: main() line 169 in src/examples/waiproj.c >> application called MPI_Abort(MPI_COMM_WORLD, 56) - process 0 >> [cli_0]: aborting job: >> application called MPI_Abort(MPI_COMM_WORLD, 56) - process 0 >> >> -Nachiket >> >> On Wed, Jan 2, 2013 at 2:33 PM, Jed Brown wrote: >> > Did you try it? Yes, it works. >> > >> > >> > On Wed, Jan 2, 2013 at 1:31 PM, Nachiket Gokhale >> > wrote: >> >> >> >> Does MUMPS work with PETSC in serial i.e. one MPI process? I need to >> >> run in serial because I have to perform certain dense matrix >> >> multiplications which do not work in parallel. If mumps does not >> >> work, I think I will try superlu. >> >> >> >> -Nachiket >> > >> > > > From jedbrown at mcs.anl.gov Wed Jan 2 13:57:03 2013 From: jedbrown at mcs.anl.gov (Jed Brown) Date: Wed, 2 Jan 2013 13:57:03 -0600 Subject: [petsc-users] MUMPS in serial In-Reply-To: References: Message-ID: On Wed, Jan 2, 2013 at 1:55 PM, Nachiket Gokhale wrote: > Sorry, I wasn't aware of that. Is there any thing that you > particularly recommend for dense LU factorizations? Otherwise, I will > fall back on the default factorizations in petsc - which seem to work, > though I haven't investigated them thoroughly. > PETSc calls LAPACK which is the obvious thing for serial dense linear algebra. > > -Nachiket > > On Wed, Jan 2, 2013 at 2:35 PM, Jed Brown wrote: > > On Wed, Jan 2, 2013 at 1:35 PM, Nachiket Gokhale > wrote: > >> > >> Yes, I did. I got this error message - the configure log shows that I > >> installed mumps. > >> > >> 0]PETSC ERROR: --------------------- Error Message > >> ------------------------------------ > >> [0]PETSC ERROR: No support for this operation for this object type! > >> [0]PETSC ERROR: Matrix format seqdense does not have a solver package > >> mumps for LU. > > > > > > MUMPS is not a dense solver. > > > >> > >> Perhaps you must ./configure with --download-mumps! 
> >> [0]PETSC ERROR: > >> ------------------------------------------------------------------------ > >> [0]PETSC ERROR: Petsc Development HG revision: > >> cc5f6de4d644fb53ec2bbf114fa776073e3e8534 HG Date: Fri Dec 21 11:22:24 > >> 2012 -0600 > >> [0]PETSC ERROR: See docs/changes/index.html for recent updates. > >> [0]PETSC ERROR: See docs/faq.html for hints about trouble shooting. > >> [0]PETSC ERROR: See docs/index.html for manual pages. > >> [0]PETSC ERROR: > >> ------------------------------------------------------------------------ > >> [0]PETSC ERROR: /home/gokhale/WAIGEN/GDEB-WAIGEN2012/bin/waiproj on a > >> linux-gcc named asd1.wai.com by gokhale Wed Jan 2 14:29:19 2013 > >> [0]PETSC ERROR: Libraries linked from > >> /opt/petsc/petsc-dev/linux-gcc-gpp-mpich-mumps-complex-elemental/lib > >> [0]PETSC ERROR: Configure run at Fri Dec 21 14:30:56 2012 > >> [0]PETSC ERROR: Configure options --with-x=0 --with-mpi=1 > >> --download-mpich=yes --with-x11=0 --with-debugging=0 > >> --with-clanguage=C++ --with-shared-libraries=1 --download-mumps=yes > >> --download-f-blas-lapack=1 --download-parmetis=1 --download-metis > >> --download-scalapack=1 --download-blacs=1 > >> --with-cmake=/usr/bin/cmake28 --with-scalar-type=complex > >> --download-elemental > >> [0]PETSC ERROR: > >> ------------------------------------------------------------------------ > >> [0]PETSC ERROR: MatGetFactor() line 3944 in > >> /opt/petsc/petsc-dev/src/mat/interface/matrix.c > >> [0]PETSC ERROR: PCSetUp_LU() line 133 in > >> /opt/petsc/petsc-dev/src/ksp/pc/impls/factor/lu/lu.c > >> [0]PETSC ERROR: PCSetUp() line 832 in > >> /opt/petsc/petsc-dev/src/ksp/pc/interface/precon.c > >> [0]PETSC ERROR: KSPSetUp() line 267 in > >> /opt/petsc/petsc-dev/src/ksp/ksp/interface/itfunc.c > >> [0]PETSC ERROR: KSPSolve() line 376 in > >> /opt/petsc/petsc-dev/src/ksp/ksp/interface/itfunc.c > >> [0]PETSC ERROR: main() line 169 in src/examples/waiproj.c > >> application called MPI_Abort(MPI_COMM_WORLD, 56) - process 0 > >> [cli_0]: aborting job: > >> application called MPI_Abort(MPI_COMM_WORLD, 56) - process 0 > >> > >> -Nachiket > >> > >> On Wed, Jan 2, 2013 at 2:33 PM, Jed Brown wrote: > >> > Did you try it? Yes, it works. > >> > > >> > > >> > On Wed, Jan 2, 2013 at 1:31 PM, Nachiket Gokhale > >> > wrote: > >> >> > >> >> Does MUMPS work with PETSC in serial i.e. one MPI process? I need to > >> >> run in serial because I have to perform certain dense matrix > >> >> multiplications which do not work in parallel. If mumps does not > >> >> work, I think I will try superlu. > >> >> > >> >> -Nachiket > >> > > >> > > > > > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From zhenglun.wei at gmail.com Wed Jan 2 15:41:47 2013 From: zhenglun.wei at gmail.com (Zhenglun (Alan) Wei) Date: Wed, 02 Jan 2013 15:41:47 -0600 Subject: [petsc-users] A quick question on 'un-symmetric graph' Message-ID: <50E4A99B.3020003@gmail.com> Dear folks, Here I came across a problem. [0]PETSC ERROR: Petsc has generated inconsistent data! [0]PETSC ERROR: Have un-symmetric graph (apparently). Use '-pc_gamg_sym_graph true' to symetrize the graph or '-pc_gamg_threshold 0.0' if the matrix is structurally symmetric.! My code basically uses PETSc /src/ksp/ksp/example/tutorial/ex45.c to solve the Poisson equation with the Dirichlet BC in x-direction and the Periodic BC in y- and z- direction. 
The executable file is: mpiexec -f $PBS_NODEFILE -np 32 ./ex45 -pc_type gamg -ksp_type cg -pc_gamg_agg_nsmooths 1 -mg_levels_ksp_max_it 1 -mg_levels_ksp_type richardson -ksp_rtol 1.0e-7 There is no problem for my code when I use small computational domain (83*41*21 with single core and even 163*81*41 with 4 processes). However, when I increase the domain size (323*161*81 with 32 processes), the error comes up. I wonder what's the possible reason of this kind of problem. Do you need any of my output in order to do some further inspection? or I just need to blindly add '-pc_gamg_sym_graph true' in PETSc option? thanks, Alan From kenway at utias.utoronto.ca Wed Jan 2 15:57:46 2013 From: kenway at utias.utoronto.ca (Gaetan Kenway) Date: Wed, 2 Jan 2013 16:57:46 -0500 Subject: [petsc-users] Suggestion for KSPRichardson Message-ID: Hi Everyone I have a small suggestion for the KSPRichardson implementation. I often use a KSPRichardson object to perform a small number (2-4) applications of a preconditioner (Additive Schwartz for example). Since I'm using a fixed number of iterations, the object uses KSP_NORM_NONE to save the vecNorm. Near the end of the function KSPSolve_Richardson, (line 138 according to the current docs, link below), the new residual is evaluated. However, for the last iteration, when using KSP_NORM_NONE, the last matMult and VecAYPX is not necessary. This results in additional unnecessary matMults(). In one system I'm solving, this adds approximately 20% to the computation cost. (30 secs -> 36 secs) I think the following code should work: if (ksp->its < maxit || ksp->normtype != KSP_NORM_NONE){ ierr = KSP_MatMult(ksp,Amat,x,r);CHKERRQ(ierr); /* r <- b - Ax */ ierr = VecAYPX(r,-1.0,b);CHKERRQ(ierr); } Also, it makes the degenerate case of 1 iteration have same cost as if the KSPRichardson wasn't there. Hopefully this is useful. Gaetan Kenway Link to source code: http://www.mcs.anl.gov/petsc/petsc-current/src/ksp/ksp/impls/rich/rich.c.html#KSPRICHARDSON -------------- next part -------------- An HTML attachment was scrubbed... URL: From jedbrown at mcs.anl.gov Wed Jan 2 17:12:12 2013 From: jedbrown at mcs.anl.gov (Jed Brown) Date: Wed, 2 Jan 2013 17:12:12 -0600 Subject: [petsc-users] Suggestion for KSPRichardson In-Reply-To: References: Message-ID: Thanks, I've applied this to petsc-dev. On Wed, Jan 2, 2013 at 3:57 PM, Gaetan Kenway wrote: > Hi Everyone > > I have a small suggestion for the KSPRichardson implementation. I often > use a KSPRichardson object to perform a small number (2-4) applications of > a preconditioner (Additive Schwartz for example). Since I'm using a fixed > number of iterations, the object uses KSP_NORM_NONE to save the vecNorm. > > Near the end of the function KSPSolve_Richardson, (line 138 according to > the current docs, link below), the new residual is evaluated. However, for > the last iteration, when using KSP_NORM_NONE, the last matMult and VecAYPX > is not necessary. This results in additional unnecessary matMults(). In one > system I'm solving, this adds approximately 20% to the computation cost. > (30 secs -> 36 secs) > > I think the following code should work: > if (ksp->its < maxit || ksp->normtype != KSP_NORM_NONE){ > ierr = KSP_MatMult(ksp,Amat,x,r);CHKERRQ(ierr); /* r <- b > - Ax */ > ierr = VecAYPX(r,-1.0,b);CHKERRQ(ierr); > } > > Also, it makes the degenerate case of 1 iteration have same cost as if the > KSPRichardson wasn't there. > > Hopefully this is useful. 
> > Gaetan Kenway > > Link to source code: > > http://www.mcs.anl.gov/petsc/petsc-current/src/ksp/ksp/impls/rich/rich.c.html#KSPRICHARDSON > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From zhenglun.wei at gmail.com Wed Jan 2 17:27:30 2013 From: zhenglun.wei at gmail.com (Zhenglun (Alan) Wei) Date: Wed, 02 Jan 2013 17:27:30 -0600 Subject: [petsc-users] Fwd: A quick question on 'un-symmetric graph' In-Reply-To: <50E4A99B.3020003@gmail.com> References: <50E4A99B.3020003@gmail.com> Message-ID: <50E4C262.7030805@gmail.com> Hello all, I did some other tests. Now, it is narrow down to this situation: The domain and grid size is: 163*81*41 with 0.0125 = dx=dy=dz. The boundary condition is still the same. The code runs well with 4 processes while the same problem occurs when I was trying to use more than 4 processes (e.g. 6 or 8 processes). Hope this helps to detect the problem. thanks, Alan -------- Original Message -------- Subject: A quick question on 'un-symmetric graph' Date: Wed, 02 Jan 2013 15:41:47 -0600 From: Zhenglun (Alan) Wei To: PETSc users list Dear folks, Here I came across a problem. [0]PETSC ERROR: Petsc has generated inconsistent data! [0]PETSC ERROR: Have un-symmetric graph (apparently). Use '-pc_gamg_sym_graph true' to symetrize the graph or '-pc_gamg_threshold 0.0' if the matrix is structurally symmetric.! My code basically uses PETSc /src/ksp/ksp/example/tutorial/ex45.c to solve the Poisson equation with the Dirichlet BC in x-direction and the Periodic BC in y- and z- direction. The executable file is: mpiexec -f $PBS_NODEFILE -np 32 ./ex45 -pc_type gamg -ksp_type cg -pc_gamg_agg_nsmooths 1 -mg_levels_ksp_max_it 1 -mg_levels_ksp_type richardson -ksp_rtol 1.0e-7 There is no problem for my code when I use small computational domain (83*41*21 with single core and even 163*81*41 with 4 processes). However, when I increase the domain size (323*161*81 with 32 processes), the error comes up. I wonder what's the possible reason of this kind of problem. Do you need any of my output in order to do some further inspection? or I just need to blindly add '-pc_gamg_sym_graph true' in PETSc option? thanks, Alan -------------- next part -------------- An HTML attachment was scrubbed... URL: From aldo.bonfiglioli at unibas.it Thu Jan 3 06:50:10 2013 From: aldo.bonfiglioli at unibas.it (Aldo Bonfiglioli) Date: Thu, 03 Jan 2013 13:50:10 +0100 Subject: [petsc-users] More on BlockSize in release 3.3 Message-ID: <50E57E82.4070609@unibas.it> Dear developers, I have modified src/vec/vec/examples/tutorials/ex14f.F to test 4 different combinations of VecSetValues(Blocked) calls on the local/global representation of the ghosted vector. The runtime option -job now controls the behavior. Things work except when using VecSetValuesBlocked on the LOCAL representation of the ghosted Vector. Is this maybe because the local representation (lx) does not seem to inherit the blocksize from the global vector (gx) it comes from? With 3.3 I cannot any longer set it explicitly using VecSetBlockSize. The modified code is enclosed. Regards, Aldo -- Dr. Aldo Bonfiglioli Associate professor of Fluid Flow Machinery Scuola di Ingegneria Universita' della Basilicata V.le dell'Ateneo lucano, 10 85100 Potenza ITALY tel:+39.0971.205203 fax:+39.0971.205215 Publications list -------------- next part -------------- A non-text attachment was scrubbed... 
Name: ex14f.F Type: text/x-fortran Size: 6102 bytes Desc: not available URL: From knepley at gmail.com Thu Jan 3 08:57:23 2013 From: knepley at gmail.com (Matthew Knepley) Date: Thu, 3 Jan 2013 08:57:23 -0600 Subject: [petsc-users] More on BlockSize in release 3.3 In-Reply-To: <50E57E82.4070609@unibas.it> References: <50E57E82.4070609@unibas.it> Message-ID: On Thu, Jan 3, 2013 at 6:50 AM, Aldo Bonfiglioli wrote: > Dear developers, > I have modified > src/vec/vec/examples/tutorials/ex14f.F > > to test 4 different combinations of > VecSetValues(Blocked) calls on the > local/global representation of the ghosted vector. > > The runtime option -job now controls the behavior. > > Things work except when using VecSetValuesBlocked > on the LOCAL representation of the ghosted Vector. > > Is this maybe because the local representation (lx) > does not seem to inherit the blocksize from the global vector (gx) it > comes from? > With 3.3 I cannot any longer set it explicitly using VecSetBlockSize. > > The modified code is enclosed. > The problem is that VecDuplicate() does not preserve the local block size. Tracking it down. Matt > Regards, > Aldo > > > > -- > Dr. Aldo Bonfiglioli > Associate professor of Fluid Flow Machinery > Scuola di Ingegneria > Universita' della Basilicata > V.le dell'Ateneo lucano, 10 85100 Potenza ITALY > tel:+39.0971.205203 fax:+39.0971.205215 > > > Publications list > -- What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. -- Norbert Wiener -------------- next part -------------- An HTML attachment was scrubbed... URL: From knepley at gmail.com Thu Jan 3 09:07:15 2013 From: knepley at gmail.com (Matthew Knepley) Date: Thu, 3 Jan 2013 09:07:15 -0600 Subject: [petsc-users] More on BlockSize in release 3.3 In-Reply-To: References: <50E57E82.4070609@unibas.it> Message-ID: On Thu, Jan 3, 2013 at 8:57 AM, Matthew Knepley wrote: > On Thu, Jan 3, 2013 at 6:50 AM, Aldo Bonfiglioli < > aldo.bonfiglioli at unibas.it> wrote: > >> Dear developers, >> I have modified >> src/vec/vec/examples/tutorials/ex14f.F >> >> to test 4 different combinations of >> VecSetValues(Blocked) calls on the >> local/global representation of the ghosted vector. >> >> The runtime option -job now controls the behavior. >> >> Things work except when using VecSetValuesBlocked >> on the LOCAL representation of the ghosted Vector. >> >> Is this maybe because the local representation (lx) >> does not seem to inherit the blocksize from the global vector (gx) it >> comes from? >> With 3.3 I cannot any longer set it explicitly using VecSetBlockSize. >> >> The modified code is enclosed. >> > > The problem is that VecDuplicate() does not preserve the local block size. > Tracking it down. > Pushed a fix to 3.3 which will come out with the next patch release, or pull it (and its in dev). Matt > Matt > > >> Regards, >> Aldo >> >> >> >> -- >> Dr. Aldo Bonfiglioli >> Associate professor of Fluid Flow Machinery >> Scuola di Ingegneria >> Universita' della Basilicata >> V.le dell'Ateneo lucano, 10 85100 Potenza ITALY >> tel:+39.0971.205203 fax:+39.0971.205215 >> >> >> Publications list >> > > > > -- > What most experimenters take for granted before they begin their > experiments is infinitely more interesting than any results to which their > experiments lead. 
> -- Norbert Wiener > -- What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. -- Norbert Wiener -------------- next part -------------- An HTML attachment was scrubbed... URL: From d.scott at ed.ac.uk Thu Jan 3 10:17:15 2013 From: d.scott at ed.ac.uk (David Scott) Date: Thu, 03 Jan 2013 16:17:15 +0000 Subject: [petsc-users] Setting Ghosted Boundary Values Message-ID: <50E5AF0B.7020900@ed.ac.uk> I wish to set boundary values in a structured grid. How do I do this if I am using DMDA_BOUNDARY_GHOSTED? I thought that I should set the boundary values in the function specified by a call to DMSetInitialGuess but it appears to be passed a global vector. I would prefer any example code to be in Fortran but that is not essential. David -- The University of Edinburgh is a charitable body, registered in Scotland, with registration number SC005336. From jedbrown at mcs.anl.gov Thu Jan 3 10:38:06 2013 From: jedbrown at mcs.anl.gov (Jed Brown) Date: Thu, 3 Jan 2013 10:38:06 -0600 Subject: [petsc-users] Setting Ghosted Boundary Values In-Reply-To: <50E5AF0B.7020900@ed.ac.uk> References: <50E5AF0B.7020900@ed.ac.uk> Message-ID: On Thu, Jan 3, 2013 at 10:17 AM, David Scott wrote: > I wish to set boundary values in a structured grid. How do I do this if I > am using DMDA_BOUNDARY_GHOSTED? I thought that I should set the boundary > values in the function specified by a call to DMSetInitialGuess but it > appears to be passed a global vector. > The local vector has that extra space. Scatter to the local vector as usual, then fill them up. Alternatively, use a *SetFunctionLocal() so that you start out with a local vector. http://www.mcs.anl.gov/petsc/petsc-dev/docs/manualpages/SNES/DMDASNESSetFunctionLocal.html > > I would prefer any example code to be in Fortran but that is not essential. > -------------- next part -------------- An HTML attachment was scrubbed... URL: From w_ang_temp at 163.com Fri Jan 4 08:54:41 2013 From: w_ang_temp at 163.com (w_ang_temp) Date: Fri, 4 Jan 2013 22:54:41 +0800 (CST) Subject: [petsc-users] Use of superlu_dis Message-ID: <3e9dc331.1ac14.13c060e87d8.Coremail.w_ang_temp@163.com> Hello, I have a problem about superlu_dis. The result of Ax=b is NaN. The error information is: ****** Warning from MC64A/AD. INFO(1) = 2 Linear solve converged due to CONVERGED_ITS iterations 1 [0]PETSC ERROR: --------------------- Error Message ------------------------------------ [0]PETSC ERROR: Floating point exception! [0]PETSC ERROR: Infinite or not-a-number generated in norm! [0]PETSC ERROR: ------------------------------------------------------------------------ [0]PETSC ERROR: Petsc Release Version 3.2.0, Patch 7, Thu Mar 15 09:30:51 CDT 2012 [0]PETSC ERROR: See docs/changes/index.html for recent updates. [0]PETSC ERROR: See docs/faq.html for hints about trouble shooting. [0]PETSC ERROR: See docs/index.html for manual pages. 
[0]PETSC ERROR: ------------------------------------------------------------------------ [0]PETSC ERROR: ./ex4f on a arch-linu named node4 by wang Fri Jan 4 20:48:16 2013 [0]PETSC ERROR: Libraries linked from /public/soft/ddc/soft/petsc/petsc-3.2-p7/arch-linux2-c-opt/lib [0]PETSC ERROR: Configure run at Tue Jan 1 22:23:45 2013 [0]PETSC ERROR: Configure options --with-mpi-dir=/public/soft/ddc/soft/mpich2/ --download-f-blas-lapack=1 --download-hypre=1 --with-x=1 --with-debugging=0 --download-superlu_dist --download-parmetis --download-mumps --download-scalapack --download-blacs=/public/soft/ddc/soft/petsc/blacs-dev.tar.gz [0]PETSC ERROR: ------------------------------------------------------------------------ [0]PETSC ERROR: VecNorm() line 167 in src/vec/vec/interface/rvector.c Does it mean that the matrix is singular? Or the size of the matrix (in the project, it is 250000X250000) is too large? Thanks. Jim -------------- next part -------------- An HTML attachment was scrubbed... URL: From jedbrown at mcs.anl.gov Fri Jan 4 09:03:28 2013 From: jedbrown at mcs.anl.gov (Jed Brown) Date: Fri, 4 Jan 2013 09:03:28 -0600 Subject: [petsc-users] Use of superlu_dis In-Reply-To: <3e9dc331.1ac14.13c060e87d8.Coremail.w_ang_temp@163.com> References: <3e9dc331.1ac14.13c060e87d8.Coremail.w_ang_temp@163.com> Message-ID: 0. ALWAYS send the ENTIRE error message. 1. Use a --with-debugging=1 (the default) build, which will likely check earlier for NaN 2. Run in a debugger with -fp_trap. On Fri, Jan 4, 2013 at 8:54 AM, w_ang_temp wrote: > Hello, > I have a problem about superlu_dis. > The result of Ax=b is NaN. The error information is: > > ****** Warning from MC64A/AD. INFO(1) = 2 > Linear solve converged due to CONVERGED_ITS iterations 1 > [0]PETSC ERROR: --------------------- Error Message > ------------------------------------ > [0]PETSC ERROR: Floating point exception! > [0]PETSC ERROR: Infinite or not-a-number generated in norm! > [0]PETSC ERROR: > ------------------------------------------------------------------------ > [0]PETSC ERROR: Petsc Release Version 3.2.0, Patch 7, Thu Mar 15 09:30:51 > CDT 2012 > [0]PETSC ERROR: See docs/changes/index.html for recent updates. > [0]PETSC ERROR: See docs/faq.html for hints about trouble shooting. > [0]PETSC ERROR: See docs/index.html for manual pages. > [0]PETSC ERROR: > ------------------------------------------------------------------------ > [0]PETSC ERROR: ./ex4f on a arch-linu named node4 by wang Fri Jan 4 > 20:48:16 2013 > [0]PETSC ERROR: Libraries linked from > /public/soft/ddc/soft/petsc/petsc-3.2-p7/arch-linux2-c-opt/lib > [0]PETS C ERROR: Configure run at Tue Jan 1 22:23:45 2013 > [0]PETSC ERROR: Configure options > --with-mpi-dir=/public/soft/ddc/soft/mpich2/ --download-f-blas-lapack=1 > --download-hypre=1 --with-x=1 --with-debugging=0 --download-superlu_dist > --download-parmetis --download-mumps --download-scalapack > --download-blacs=/public/soft/ddc/soft/petsc/blacs-dev.tar.gz > [0]PETSC ERROR: > ------------------------------------------------------------------------ > [0]PETSC ERROR: VecNorm() line 167 in src/vec/vec/interface/rvector.c > > > Does it mean that the matrix is singular? Or the size of the matrix > (in the project, it is 250000X250000) is too large? > Thanks. > Jim > > > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From ppetrovic573 at gmail.com Fri Jan 4 12:01:05 2013 From: ppetrovic573 at gmail.com (Petar Petrovic) Date: Fri, 4 Jan 2013 19:01:05 +0100 Subject: [petsc-users] PETSc and threads Message-ID: Hello, I have read that PETSc isn't thread safe, however I am not sure I understand in which way. What I would like to do is execute parts of my code that make calls to PETSc routines in parallel, for example, if I have matices A, B, C and D, is it possible to call MatMatMult(A,B) on one thread and MatMatMult(C,D) on the other thread in parallel? Is there an example which shows how to do something this, e.g. combine PETSc with mine MPI calls or something similar? Thank you very much for your help. -------------- next part -------------- An HTML attachment was scrubbed... URL: From jedbrown at mcs.anl.gov Fri Jan 4 13:05:50 2013 From: jedbrown at mcs.anl.gov (Jed Brown) Date: Fri, 4 Jan 2013 13:05:50 -0600 Subject: [petsc-users] PETSc and threads In-Reply-To: References: Message-ID: This is not supported due to logging/profiling/debugging features. Can you explain more about your problem and why you don't want to use MPI processes? On Fri, Jan 4, 2013 at 12:01 PM, Petar Petrovic wrote: > Hello, > I have read that PETSc isn't thread safe, however I am not sure I > understand in which way. What I would like to do is execute parts of my > code that make calls to PETSc routines in parallel, for example, > if I have matices A, B, C and D, is it possible to call MatMatMult(A,B) on > one thread and MatMatMult(C,D) on the other thread in parallel? Is there an > example which shows how to do something this, e.g. combine PETSc with mine > MPI calls or something similar? > Thank you very much for your help. > -------------- next part -------------- An HTML attachment was scrubbed... URL: From ppetrovic573 at gmail.com Fri Jan 4 13:34:36 2013 From: ppetrovic573 at gmail.com (Petar Petrovic) Date: Fri, 4 Jan 2013 20:34:36 +0100 Subject: [petsc-users] PETSc and threads In-Reply-To: References: Message-ID: MPI processes are fine, I didn't express myself clearly. Let me try with another example. Lets say I need to solve n linear systems Ax = b_i, i=1..n and I want to do this in parallel. I would like to employ n processes to do this, so each i-th process does A\b_i, and they all run in parallel. Something like: for(i=0; i wrote: > This is not supported due to logging/profiling/debugging features. Can you > explain more about your problem and why you don't want to use MPI processes? > > > On Fri, Jan 4, 2013 at 12:01 PM, Petar Petrovic wrote: > >> Hello, >> I have read that PETSc isn't thread safe, however I am not sure I >> understand in which way. What I would like to do is execute parts of my >> code that make calls to PETSc routines in parallel, for example, >> if I have matices A, B, C and D, is it possible to call MatMatMult(A,B) >> on one thread and MatMatMult(C,D) on the other thread in parallel? Is there >> an example which shows how to do something this, e.g. combine PETSc with >> mine MPI calls or something similar? >> Thank you very much for your help. >> > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From jedbrown at mcs.anl.gov Fri Jan 4 13:38:27 2013 From: jedbrown at mcs.anl.gov (Jed Brown) Date: Fri, 4 Jan 2013 13:38:27 -0600 Subject: [petsc-users] PETSc and threads In-Reply-To: References: Message-ID: Yes, you can do anything independently on different communicators. 
For example, each process can create objects on PETSC_COMM_SELF and solve them independently. You can also do each solve in parallel on distinct subcommunicators. On Fri, Jan 4, 2013 at 1:34 PM, Petar Petrovic wrote: > MPI processes are fine, I didn't express myself clearly. Let me try with > another example. > Lets say I need to solve n linear systems Ax = b_i, i=1..n and I want to > do this in parallel. I would like to employ n processes to do this, so each > i-th process does A\b_i, and they all run in parallel. Something like: > > for(i=0; i solve(Ax=b_i) > > where I would like to run this for loop in parallel. Can I do this? > > Let me just note that it doesn't necessarily need to be a A\b operation. > There are parts of my program that are embarrassingly parallel which I > would like to exploit. > > > On Fri, Jan 4, 2013 at 8:05 PM, Jed Brown wrote: > >> This is not supported due to logging/profiling/debugging features. Can >> you explain more about your problem and why you don't want to use MPI >> processes? >> >> >> On Fri, Jan 4, 2013 at 12:01 PM, Petar Petrovic wrote: >> >>> Hello, >>> I have read that PETSc isn't thread safe, however I am not sure I >>> understand in which way. What I would like to do is execute parts of my >>> code that make calls to PETSc routines in parallel, for example, >>> if I have matices A, B, C and D, is it possible to call MatMatMult(A,B) >>> on one thread and MatMatMult(C,D) on the other thread in parallel? Is there >>> an example which shows how to do something this, e.g. combine PETSc with >>> mine MPI calls or something similar? >>> Thank you very much for your help. >>> >> >> > -------------- next part -------------- An HTML attachment was scrubbed... URL: From ppetrovic573 at gmail.com Fri Jan 4 14:01:23 2013 From: ppetrovic573 at gmail.com (Petar Petrovic) Date: Fri, 4 Jan 2013 21:01:23 +0100 Subject: [petsc-users] PETSc and threads In-Reply-To: References: Message-ID: Thank you very much. Just, how do you suggest I do this, do you think its best to write something like an MPI master-slave program where master would read matrix A and the set of vectors b_i, i=1..n and send A and b_i to i-th slave to do the job? What is the way to send a PETSc data structure like a matrix or a vector from a master process to a concrete slave process? More generally, I am not sure what is the way to state that the for loop can be done in parallel and not sequentially. Can you perhaps point me to an example that does something similar? -------------- next part -------------- An HTML attachment was scrubbed... URL: From jedbrown at mcs.anl.gov Fri Jan 4 14:43:09 2013 From: jedbrown at mcs.anl.gov (Jed Brown) Date: Fri, 4 Jan 2013 14:43:09 -0600 Subject: [petsc-users] PETSc and threads In-Reply-To: References: Message-ID: On Fri, Jan 4, 2013 at 2:01 PM, Petar Petrovic wrote: > Thank you very much. > > Just, how do you suggest I do this, do you think its best to write > something like an MPI master-slave program where master would read matrix A > and the set of vectors b_i, i=1..n and send A and b_i to i-th slave to do > the job? > What is the way to send a PETSc data structure like a matrix or a vector > from a master process to a concrete slave process? > More generally, I am not sure what is the way to state that the for loop > can be done in parallel and not sequentially. Can you perhaps point me to > an example that does something similar? > You'll have to learn the basics of MPI programming. 
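A minimal sketch of that per-rank pattern (untested, written from memory; every rank assembles its own sequential copy of A and its own b_i on PETSC_COMM_SELF, so all the solves run concurrently with no communication between ranks):

#include <petscksp.h>
static char help[] = "Each rank solves its own A x = b_i on PETSC_COMM_SELF.\n";

int main(int argc,char **argv)
{
  Mat            A;
  Vec            b,x;
  KSP            ksp;
  PetscMPIInt    rank;
  PetscInt       i,n=10,col[3];
  PetscScalar    v[3];
  PetscErrorCode ierr;

  ierr = PetscInitialize(&argc,&argv,(char*)0,help);CHKERRQ(ierr);
  ierr = MPI_Comm_rank(PETSC_COMM_WORLD,&rank);CHKERRQ(ierr);

  /* every rank builds the same small 1D Laplacian, purely sequentially */
  ierr = MatCreateSeqAIJ(PETSC_COMM_SELF,n,n,3,PETSC_NULL,&A);CHKERRQ(ierr);
  for (i=0; i<n; i++) {
    col[0]=i-1; col[1]=i; col[2]=i+1;
    v[0]=-1.0;  v[1]=2.0; v[2]=-1.0;
    if (i==0)        {ierr = MatSetValues(A,1,&i,2,&col[1],&v[1],INSERT_VALUES);CHKERRQ(ierr);}
    else if (i==n-1) {ierr = MatSetValues(A,1,&i,2,col,v,INSERT_VALUES);CHKERRQ(ierr);}
    else             {ierr = MatSetValues(A,1,&i,3,col,v,INSERT_VALUES);CHKERRQ(ierr);}
  }
  ierr = MatAssemblyBegin(A,MAT_FINAL_ASSEMBLY);CHKERRQ(ierr);
  ierr = MatAssemblyEnd(A,MAT_FINAL_ASSEMBLY);CHKERRQ(ierr);

  /* b_i differs from rank to rank; here it is just a constant equal to rank+1 */
  ierr = VecCreateSeq(PETSC_COMM_SELF,n,&b);CHKERRQ(ierr);
  ierr = VecDuplicate(b,&x);CHKERRQ(ierr);
  ierr = VecSet(b,(PetscScalar)(rank+1));CHKERRQ(ierr);

  /* the KSP lives on PETSC_COMM_SELF, so each rank solves independently */
  ierr = KSPCreate(PETSC_COMM_SELF,&ksp);CHKERRQ(ierr);
  ierr = KSPSetOperators(ksp,A,A,DIFFERENT_NONZERO_PATTERN);CHKERRQ(ierr);
  ierr = KSPSetFromOptions(ksp);CHKERRQ(ierr);
  ierr = KSPSolve(ksp,b,x);CHKERRQ(ierr);

  ierr = KSPDestroy(&ksp);CHKERRQ(ierr);
  ierr = VecDestroy(&x);CHKERRQ(ierr);
  ierr = VecDestroy(&b);CHKERRQ(ierr);
  ierr = MatDestroy(&A);CHKERRQ(ierr);
  ierr = PetscFinalize();
  return 0;
}

To give each rank a different right-hand side read from disk, you would replace the VecSet() with your own per-rank loading code.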
We have several tutorials on our site, some of which include video, that you could look at. You also might want to look at a book such as Using MPI. -------------- next part -------------- An HTML attachment was scrubbed... URL: From jedbrown at mcs.anl.gov Sat Jan 5 20:32:57 2013 From: jedbrown at mcs.anl.gov (Jed Brown) Date: Sat, 5 Jan 2013 20:32:57 -0600 Subject: [petsc-users] Fwd: A quick question on 'un-symmetric graph' In-Reply-To: <50E4C262.7030805@gmail.com> References: <50E4A99B.3020003@gmail.com> <50E4C262.7030805@gmail.com> Message-ID: Mark, this is a reasonably reduced test case. mpiexec -n 3 ./ex45 -da_grid_x 19 -da_grid_y 13 -da_grid_z 23 -pc_type gamg -pc_gamg_agg_nsmooths 1 -ksp_monitor This is a symmetric graph, but not a symmetric matrix, though the asymmetry is only due to boundary conditions (boundary rows have only a diagonal nonzero; the other zeros are still stored explicitly). Do you have a preference about how this is handled? (I don't think we should need extra options to handle this extremely common case.) On Wed, Jan 2, 2013 at 5:27 PM, Zhenglun (Alan) Wei wrote: > Hello all, > I did some other tests. Now, it is narrow down to this situation: > The domain and grid size is: 163*81*41 with 0.0125 = dx=dy=dz. The > boundary condition is still the same. > The code runs well with 4 processes while the same problem occurs when > I was trying to use more than 4 processes (e.g. 6 or 8 processes). > > Hope this helps to detect the problem. > > thanks, > Alan > > > > -------- Original Message -------- Subject: A quick question on > 'un-symmetric graph' Date: Wed, 02 Jan 2013 15:41:47 -0600 From: Zhenglun > (Alan) Wei To: PETSc > users list > > Dear folks, > Here I came across a problem. > > [0]PETSC ERROR: Petsc has generated inconsistent data! > [0]PETSC ERROR: Have un-symmetric graph (apparently). Use > '-pc_gamg_sym_graph true' to symetrize the graph or '-pc_gamg_threshold > 0.0' if the matrix is structurally symmetric.! > > My code basically uses PETSc /src/ksp/ksp/example/tutorial/ex45.c > to solve the Poisson equation with the Dirichlet BC in x-direction and > the Periodic BC in y- and z- direction. > The executable file is: > > mpiexec -f $PBS_NODEFILE -np 32 ./ex45 -pc_type gamg -ksp_type cg > -pc_gamg_agg_nsmooths 1 -mg_levels_ksp_max_it 1 -mg_levels_ksp_type > richardson -ksp_rtol 1.0e-7 > > > There is no problem for my code when I use small computational > domain (83*41*21 with single core and even 163*81*41 with 4 processes). > However, when I increase the domain size (323*161*81 with 32 processes), > the error comes up. I wonder what's the possible reason of this kind of > problem. Do you need any of my output in order to do some further > inspection? or I just need to blindly add '-pc_gamg_sym_graph true' in > PETSc option? > > thanks, > Alan > > > > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From mark.adams at columbia.edu Sat Jan 5 22:19:29 2013 From: mark.adams at columbia.edu (Mark F. Adams) Date: Sat, 5 Jan 2013 23:19:29 -0500 Subject: [petsc-users] A quick question on 'un-symmetric graph' In-Reply-To: References: <50E4A99B.3020003@gmail.com> <50E4C262.7030805@gmail.com> Message-ID: <88FF23CD-1743-4EBE-911F-9F24C89CD08C@columbia.edu> > > This is a symmetric graph, but not a symmetric matrix, though the asymmetry is only due to boundary conditions (boundary rows have only a diagonal nonzero; the other zeros are still stored explicitly). Do you have a preference about how this is handled? 
(I don't think we should need extra options to handle this extremely common case.) > If the graph is symmetric then the only problem is that after thresholding the graph can be unsymmetric if the values are not symmetric. So the solution is to: [0]PETSC ERROR: Have un-symmetric graph (apparently). Use '-pc_gamg_sym_graph true' to symetrize the graph or '-pc_gamg_threshold 0.0' if the matrix is structurally symmetric.! There is no simple way to make fix this, that I can think of. If I threshold I need to have the transpose data to threshold symmetrically and if the graph is not symmetric then the MIS algorithm would need to be reworked. The best way that I can think of to rework MIS is to symmetrize the graph, which puts us back to where we started. As the user noticed, this is not a problem is serial and it can often work in small scale parallel, but it gets you eventually. symetrizing the graph is not that expensive (compared to the rest of setup) -- we could make '-pc_gamg_sym_graph true' the default ? I'm thinking this is a good idea. Better to be robust and deal with optimizing for users that can not amortize setup. From jedbrown at mcs.anl.gov Sat Jan 5 22:36:31 2013 From: jedbrown at mcs.anl.gov (Jed Brown) Date: Sat, 5 Jan 2013 22:36:31 -0600 Subject: [petsc-users] LU factorization and solution of independent matrices does not scale, why? In-Reply-To: <20121221220521.qbp4io8kws040o8g@mail.zih.tu-dresden.de> References: <50D37234.2040205@tu-dresden.de> <4F2AF113-B369-42AD-95B9-3D4C1E8F5CEE@mcs.anl.gov> <20121220213950.nyu4ddy1og0kkw8c@mail.zih.tu-dresden.de> <50D42D82.10603@tu-dresden.de> <20121221165112.h5x9cere68sgc488@mail.zih.tu-dresden.de> <20121221220521.qbp4io8kws040o8g@mail.zih.tu-dresden.de> Message-ID: I don't seem able to read this matrix file because it appears to be corrupt. Our mail programs might be messing with it. Can you encode it as binary or provide it some other way? Can you compare timing with all the other ranks dropped out. That is, instead of MPI_Comm_split(MPI_COMM_WORLD, rank / 4, rank, &coarseMpiComm); have all procs with rank >= 4 pass MPI_UNDEFINED and then skip over the solve itself (they'll get MPI_COMM_NULL returned). On Fri, Dec 21, 2012 at 3:05 PM, Thomas Witkowski < Thomas.Witkowski at tu-dresden.de> wrote: > So, here it is. Just compile and run with > > mpiexec -np 64 ./ex10 -ksp_type preonly -pc_type lu > -pc_factor_mat_solver_package superlu_dist -log_summary > > 64 cores: 0.09 seconds for solving > 1024 cores: 2.6 seconds for solving > > > Thomas > > > Zitat von Jed Brown : > > Can you reproduce this in a simpler environment so that we can report it? >> As I understand your statement, it sounds like you could reproduce by >> changing src/ksp/ksp/examples/**tutorials/ex10.c to create a subcomm of >> size >> 4 and the using that everywhere, then compare log_summary running on 4 >> cores to running on more (despite everything really being independent) >> >> It would also be worth using an MPI profiler to see if it's really >> spending >> a lot of time in MPI_Iprobe. Since SuperLU_DIST does not use MPI_Iprobe, >> it >> may be something else. >> >> On Fri, Dec 21, 2012 at 8:51 AM, Thomas Witkowski < >> Thomas.Witkowski at tu-dresden.de**> wrote: >> >> I use a modified MPICH version. On the system I use for these benchmarks >>> I >>> cannot use another MPI library. >>> >>> I'm not fixed to MUMPS. Superlu_dist, for example, works also perfectly >>> for this. 
But there is still the following problem I cannot solve: When I >>> increase the number of coarse space matrices, there seems to be no >>> scaling >>> direct solver for this. Just to summaries: >>> - one coarse space matrix is created always by one "cluster" consisting >>> of >>> four subdomanins/MPI tasks >>> - the four tasks are always local to one node, thus inter-node network >>> communication is not required for computing factorization and solve >>> - independent of the number of cluster, the coarse space matrices are the >>> same, have the same number of rows, nnz structure but possibly different >>> values >>> - there is NO load unbalancing >>> - the matrices must be factorized and there are a lot of solves (> 100) >>> with them >>> >>> It should be pretty clear, that computing LU factorization and solving >>> with it should scale perfectly. But at the moment, all direct solver I >>> tried (mumps, superlu_dist, pastix) are not able to scale. The loos of >>> scale is really worse, as you can see from the numbers I send before. >>> >>> Any ideas? Suggestions? Without a scaling solver method for these kind of >>> systems, my multilevel FETI-DP code is just more or less a joke, only >>> some >>> orders of magnitude slower than standard FETI-DP method :) >>> >>> Thomas >>> >>> Zitat von Jed Brown : >>> >>> MUMPS uses MPI_Iprobe on MPI_COMM_WORLD (hard-coded). What MPI >>> >>>> implementation have you been using? Is the behavior different with a >>>> different implementation? >>>> >>>> >>>> On Fri, Dec 21, 2012 at 2:36 AM, Thomas Witkowski < >>>> thomas.witkowski at tu-dresden.de****> wrote: >>>> >>>> Okay, I did a similar benchmark now with PETSc's event logging: >>>> >>>>> >>>>> UMFPACK >>>>> 16p: Local solve 350 1.0 2.3025e+01 1.1 5.00e+04 1.0 0.0e+00 >>>>> 0.0e+00 7.0e+02 63 0 0 0 52 63 0 0 0 51 0 >>>>> 64p: Local solve 350 1.0 2.3208e+01 1.1 5.00e+04 1.0 0.0e+00 >>>>> 0.0e+00 7.0e+02 60 0 0 0 52 60 0 0 0 51 0 >>>>> 256p: Local solve 350 1.0 2.3373e+01 1.1 5.00e+04 1.0 0.0e+00 >>>>> 0.0e+00 7.0e+02 49 0 0 0 52 49 0 0 0 51 1 >>>>> >>>>> MUMPS >>>>> 16p: Local solve 350 1.0 4.7183e+01 1.1 5.00e+04 1.0 0.0e+00 >>>>> 0.0e+00 7.0e+02 75 0 0 0 52 75 0 0 0 51 0 >>>>> 64p: Local solve 350 1.0 7.1409e+01 1.1 5.00e+04 1.0 0.0e+00 >>>>> 0.0e+00 7.0e+02 78 0 0 0 52 78 0 0 0 51 0 >>>>> 256p: Local solve 350 1.0 2.6079e+02 1.1 5.00e+04 1.0 0.0e+00 >>>>> 0.0e+00 7.0e+02 82 0 0 0 52 82 0 0 0 51 0 >>>>> >>>>> >>>>> As you see, the local solves with UMFPACK have nearly constant time >>>>> with >>>>> increasing number of subdomains. This is what I expect. The I replace >>>>> UMFPACK by MUMPS and I see increasing time for local solves. In the >>>>> last >>>>> columns, UMFPACK has a decreasing value from 63 to 49, while MUMPS's >>>>> column >>>>> increases here from 75 to 82. What does this mean? >>>>> >>>>> Thomas >>>>> >>>>> Am 21.12.2012 02:19, schrieb Matthew Knepley: >>>>> >>>>> On Thu, Dec 20, 2012 at 3:39 PM, Thomas Witkowski >>>>> >>>>> >>>>> ***de >>>>>> > >>>>>> >>>>>> >> >>>>>> >>>>>> wrote: >>>>>> >>>>>> I cannot use the information from log_summary, as I have three >>>>>> >>>>>>> different >>>>>>> LU >>>>>>> factorizations and solve (local matrices and two hierarchies of >>>>>>> coarse >>>>>>> grids). Therefore, I use the following work around to get the timing >>>>>>> of >>>>>>> the >>>>>>> solve I'm intrested in: >>>>>>> >>>>>>> You misunderstand how to use logging. You just put these thing in >>>>>>> >>>>>> separate stages. 
Stages represent >>>>>> parts of the code over which events are aggregated. >>>>>> >>>>>> Matt >>>>>> >>>>>> MPI::COMM_WORLD.Barrier(); >>>>>> >>>>>> wtime = MPI::Wtime(); >>>>>>> KSPSolve(*(data->ksp_schur_******primal_local), tmp_primal, >>>>>>> >>>>>>> >>>>>>> tmp_primal); >>>>>>> FetiTimings::fetiSolve03 += (MPI::Wtime() - wtime); >>>>>>> >>>>>>> The factorization is done explicitly before with "KSPSetUp", so I can >>>>>>> measure the time for LU factorization. It also does not scale! For 64 >>>>>>> cores, >>>>>>> I takes 0.05 seconds, for 1024 cores 1.2 seconds. In all >>>>>>> calculations, >>>>>>> the >>>>>>> local coarse space matrices defined on four cores have exactly the >>>>>>> same >>>>>>> number of rows and exactly the same number of non zero entries. So, >>>>>>> from >>>>>>> my >>>>>>> point of view, the time should be absolutely constant. >>>>>>> >>>>>>> Thomas >>>>>>> >>>>>>> Zitat von Barry Smith : >>>>>>> >>>>>>> >>>>>>> Are you timing ONLY the time to factor and solve the subproblems? >>>>>>> Or >>>>>>> >>>>>>> also the time to get the data to the collection of 4 cores at a >>>>>>>> time? >>>>>>>> >>>>>>>> If you are only using LU for these problems and not elsewhere in >>>>>>>> the >>>>>>>> code you can get the factorization and time from MatLUFactor() and >>>>>>>> MatSolve() or you can use stages to put this calculation in its own >>>>>>>> stage >>>>>>>> and use the MatLUFactor() and MatSolve() time from that stage. >>>>>>>> Also look at the load balancing column for the factorization and >>>>>>>> solve >>>>>>>> stage, it is well balanced? >>>>>>>> >>>>>>>> Barry >>>>>>>> >>>>>>>> On Dec 20, 2012, at 2:16 PM, Thomas Witkowski >>>>>>>> >>>>>>> >>>>>>> dresden.de > >>>>>>>> >>>>>>>> >> >>>>>>>> >>>>>>>> wrote: >>>>>>>> >>>>>>>> In my multilevel FETI-DP code, I have localized course matrices, >>>>>>>> which >>>>>>>> >>>>>>>> are defined on only a subset of all MPI tasks, typically between 4 >>>>>>>>> and 64 >>>>>>>>> tasks. The MatAIJ and the KSP objects are both defined on a MPI >>>>>>>>> communicator, which is a subset of MPI::COMM_WORLD. The LU >>>>>>>>> factorization of >>>>>>>>> the matrices is computed with either MUMPS or superlu_dist, but >>>>>>>>> both >>>>>>>>> show >>>>>>>>> some scaling property I really wonder of: When the overall problem >>>>>>>>> size is >>>>>>>>> increased, the solve with the LU factorization of the local >>>>>>>>> matrices >>>>>>>>> does >>>>>>>>> not scale! But why not? I just increase the number of local >>>>>>>>> matrices, >>>>>>>>> but >>>>>>>>> all of them are independent of each other. Some example: I use 64 >>>>>>>>> cores, >>>>>>>>> each coarse matrix is spanned by 4 cores so there are 16 MPI >>>>>>>>> communicators >>>>>>>>> with 16 coarse space matrices. The problem need to solve 192 times >>>>>>>>> with the >>>>>>>>> coarse space systems, and this takes together 0.09 seconds. Now I >>>>>>>>> increase >>>>>>>>> the number of cores to 256, but let the local coarse space be >>>>>>>>> defined >>>>>>>>> again >>>>>>>>> on only 4 cores. Again, 192 solutions with these coarse spaces are >>>>>>>>> required, but now this takes 0.24 seconds. The same for 1024 cores, >>>>>>>>> and we >>>>>>>>> are at 1.7 seconds for the local coarse space solver! >>>>>>>>> >>>>>>>>> For me, this is a total mystery! Any idea how to explain, debug and >>>>>>>>> eventually how to resolve this problem? 
>>>>>>>>> >>>>>>>>> Thomas >>>>>>>>> >>>>>>>>> >>>>>>>>> >>>>>>>> >>>>>>>> >>>>>>>> >>>>>>> -- >>>>>> What most experimenters take for granted before they begin their >>>>>> experiments is infinitely more interesting than any results to which >>>>>> their experiments lead. >>>>>> -- Norbert Wiener >>>>>> >>>>>> >>>>>> >>>>> >>>>> >>>> >>> >>> >> > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From jedbrown at mcs.anl.gov Sat Jan 5 22:38:42 2013 From: jedbrown at mcs.anl.gov (Jed Brown) Date: Sat, 5 Jan 2013 22:38:42 -0600 Subject: [petsc-users] A quick question on 'un-symmetric graph' In-Reply-To: <88FF23CD-1743-4EBE-911F-9F24C89CD08C@columbia.edu> References: <50E4A99B.3020003@gmail.com> <50E4C262.7030805@gmail.com> <88FF23CD-1743-4EBE-911F-9F24C89CD08C@columbia.edu> Message-ID: On Sat, Jan 5, 2013 at 10:19 PM, Mark F. Adams wrote: > There is no simple way to make fix this, that I can think of. If I > threshold I need to have the transpose data to threshold symmetrically and > if the graph is not symmetric then the MIS algorithm would need to be > reworked. The best way that I can think of to rework MIS is to symmetrize > the graph, which puts us back to where we started. What about marking isolated nodes and leaving them out entirely? -------------- next part -------------- An HTML attachment was scrubbed... URL: From mark.adams at columbia.edu Sat Jan 5 22:59:22 2013 From: mark.adams at columbia.edu (Mark F. Adams) Date: Sat, 5 Jan 2013 23:59:22 -0500 Subject: [petsc-users] A quick question on 'un-symmetric graph' In-Reply-To: References: <50E4A99B.3020003@gmail.com> <50E4C262.7030805@gmail.com> <88FF23CD-1743-4EBE-911F-9F24C89CD08C@columbia.edu> Message-ID: <205F0AF3-9040-426B-8996-3BC6C46AAB8C@columbia.edu> > > What about marking isolated nodes and leaving them out entirely? Isolated nodes are removed (so I do not actually do a true MIS) but if the graph not symmetric then other other processors think they are talking to a BC node and expect it to talk back to them. Maybe we are not understanding each other. From jedbrown at mcs.anl.gov Sat Jan 5 23:02:21 2013 From: jedbrown at mcs.anl.gov (Jed Brown) Date: Sat, 5 Jan 2013 23:02:21 -0600 Subject: [petsc-users] A quick question on 'un-symmetric graph' In-Reply-To: <205F0AF3-9040-426B-8996-3BC6C46AAB8C@columbia.edu> References: <50E4A99B.3020003@gmail.com> <50E4C262.7030805@gmail.com> <88FF23CD-1743-4EBE-911F-9F24C89CD08C@columbia.edu> <205F0AF3-9040-426B-8996-3BC6C46AAB8C@columbia.edu> Message-ID: On Sat, Jan 5, 2013 at 10:59 PM, Mark F. Adams wrote: > Isolated nodes are removed (so I do not actually do a true MIS) but if the > graph not symmetric then other other processors think they are talking to a > BC node and expect it to talk back to them. Maybe we are not understanding > each other. I meant to mark boundary nodes (by looking at rows) and communicate that status so that other rows don't expect it to "talk back". -------------- next part -------------- An HTML attachment was scrubbed... URL: From s_g at berkeley.edu Mon Jan 7 00:55:03 2013 From: s_g at berkeley.edu (Sanjay Govindjee) Date: Sun, 06 Jan 2013 22:55:03 -0800 Subject: [petsc-users] ML options Message-ID: <50EA7147.3060304@berkeley.edu> I am adding ML as an option to our FEA code and was looking for a bit of guidance on options. Generally we solve 1,2, and 3D solids problems (nonlinear elasticity) but we also treat shells, thermal, problems, coupled problems, etc. etc. 
My basic run line looks like: -@${MPIEXEC} -n $(NPROC) $(MY_PROGRAM) -ksp_type cg -ksp_monitor -pc_type ml -log_summary -ksp_view -options_left but this does not work very well at all with 3D elasticity for example -- in fact it fails to converge after 10K iterations on a rather modest problem. However following ex26 in the ksp tutorials I also tried: -@${MPIEXEC} -n $(NPROC) $(FEAPRUN) -ksp_type cg -ksp_monitor -pc_type ml -mat_no_inode -log_summary -ksp_view -options_left And this worked very very much better -- converged in about 10 iterations. What exactly is -mat_no_inode doing for me? and are there other 'important' options that I should be aware of when using ML. -sanjay From jedbrown at mcs.anl.gov Mon Jan 7 07:49:39 2013 From: jedbrown at mcs.anl.gov (Jed Brown) Date: Mon, 7 Jan 2013 07:49:39 -0600 Subject: [petsc-users] ML options In-Reply-To: <50EA7147.3060304@berkeley.edu> References: <50EA7147.3060304@berkeley.edu> Message-ID: Could we get an example matrix exhibiting this behavior? If you run with -ksp_view_binary, the solver will write out the matrix to a file called 'binaryoutput' (and 'binaryoutput.info') when KSPSolve() returns. I suppose it could be a "math" reason of the inodes somehow causing an incorrect near-null space to be passed to ML, but the interface is not supposed to work like this. If you are serious about smoothed aggregation for elasticity, you should use MatSetNearNullSpace() to provide the rigid body modes. As a related matter, does -pc_type gamg -pc_gamg_agg_nsmooths 1 -mg_levels_ksp_type richardson -mg_levels_pc_type sor converge well? On Mon, Jan 7, 2013 at 12:55 AM, Sanjay Govindjee wrote: > > I am adding ML as an option to our FEA code and was looking for a bit of > guidance on > options. Generally we solve 1,2, and 3D solids problems (nonlinear > elasticity) but > we also treat shells, thermal, problems, coupled problems, etc. etc. > > My basic run line looks like: > > -@${MPIEXEC} -n $(NPROC) $(MY_PROGRAM) -ksp_type cg -ksp_monitor -pc_type > ml -log_summary -ksp_view -options_left > > but this does not work very well at all with 3D elasticity for example -- > in fact it fails to converge after 10K iterations on a rather > modest problem. However following ex26 in the ksp tutorials I also tried: > > -@${MPIEXEC} -n $(NPROC) $(FEAPRUN) -ksp_type cg -ksp_monitor -pc_type ml > -mat_no_inode -log_summary -ksp_view -options_left > > And this worked very very much better -- converged in about 10 iterations. > What exactly is -mat_no_inode doing for me? and are there other > 'important' options > that I should be aware of when using ML. > > -sanjay > -------------- next part -------------- An HTML attachment was scrubbed... URL: From Eric.Chamberland at giref.ulaval.ca Mon Jan 7 08:41:39 2013 From: Eric.Chamberland at giref.ulaval.ca (Eric Chamberland) Date: Mon, 07 Jan 2013 09:41:39 -0500 Subject: [petsc-users] -pc_factor_mat_solver_package wish_list Message-ID: <50EADEA3.9080905@giref.ulaval.ca> Hi, I have just compiled Petsc with "--with-64-bit-indices" and I see that MUMPS is missing but superLu still there. Ok like that. Now I would like to have something like this on my command line: -pc_factor_mat_solver_package mumps_if_available_otherwise_superlu_dist How can I add or register something in petsc to have this behavior? Thank you! 
Eric From jedbrown at mcs.anl.gov Mon Jan 7 08:46:49 2013 From: jedbrown at mcs.anl.gov (Jed Brown) Date: Mon, 7 Jan 2013 08:46:49 -0600 Subject: [petsc-users] -pc_factor_mat_solver_package wish_list In-Reply-To: <50EADEA3.9080905@giref.ulaval.ca> References: <50EADEA3.9080905@giref.ulaval.ca> Message-ID: It would be very easy to make PetscFListFind interpret choice1:choice2:choice3 as searching for items in that order, in which case this sort of option would work anywhere. This sounds convenient to me, though if you have impl-specific suboptions, you'll be notified that those were not used in -options_left. Is this okay? On Mon, Jan 7, 2013 at 8:41 AM, Eric Chamberland < Eric.Chamberland at giref.ulaval.ca> wrote: > Hi, > > I have just compiled Petsc with "--with-64-bit-indices" and I see that > MUMPS is missing but superLu still there. Ok like that. > > Now I would like to have something like this on my command line: > > -pc_factor_mat_solver_package mumps_if_available_otherwise_**superlu_dist > > How can I add or register something in petsc to have this behavior? > > Thank you! > > Eric > -------------- next part -------------- An HTML attachment was scrubbed... URL: From mark.adams at columbia.edu Mon Jan 7 09:09:49 2013 From: mark.adams at columbia.edu (Mark F. Adams) Date: Mon, 7 Jan 2013 10:09:49 -0500 Subject: [petsc-users] ML options In-Reply-To: References: <50EA7147.3060304@berkeley.edu> Message-ID: <69F9EFAE-5E06-4AFA-9AD6-E54E8192819D@columbia.edu> ex56 is a simple 3D elasticity problem. There is a runex56 target that uses GAMG and a runex56_ml. These have a generic parameters and ML and GAMG work well. The eigen estimates could be bad. This can cause death. I've found that CG converges to the largest eigenvalue faster than the default GMRES so I use: -gamg_est_ksp_max_it 10 # this is the default, you could increase this to test -gamg_est_ksp_type cg Jed could tell you how to set this for ML. On Jan 7, 2013, at 8:49 AM, Jed Brown wrote: > Could we get an example matrix exhibiting this behavior? If you run with -ksp_view_binary, the solver will write out the matrix to a file called 'binaryoutput' (and 'binaryoutput.info') when KSPSolve() returns. I suppose it could be a "math" reason of the inodes somehow causing an incorrect near-null space to be passed to ML, but the interface is not supposed to work like this. If you are serious about smoothed aggregation for elasticity, you should use MatSetNearNullSpace() to provide the rigid body modes. > > As a related matter, does -pc_type gamg -pc_gamg_agg_nsmooths 1 -mg_levels_ksp_type richardson -mg_levels_pc_type sor converge well? > > > On Mon, Jan 7, 2013 at 12:55 AM, Sanjay Govindjee wrote: > > I am adding ML as an option to our FEA code and was looking for a bit of guidance on > options. Generally we solve 1,2, and 3D solids problems (nonlinear elasticity) but > we also treat shells, thermal, problems, coupled problems, etc. etc. > > My basic run line looks like: > > -@${MPIEXEC} -n $(NPROC) $(MY_PROGRAM) -ksp_type cg -ksp_monitor -pc_type ml -log_summary -ksp_view -options_left > > but this does not work very well at all with 3D elasticity for example -- in fact it fails to converge after 10K iterations on a rather > modest problem. However following ex26 in the ksp tutorials I also tried: > > -@${MPIEXEC} -n $(NPROC) $(FEAPRUN) -ksp_type cg -ksp_monitor -pc_type ml -mat_no_inode -log_summary -ksp_view -options_left > > And this worked very very much better -- converged in about 10 iterations. 
What exactly is -mat_no_inode doing for me? and are there other 'important' options > that I should be aware of when using ML. > > -sanjay > -------------- next part -------------- An HTML attachment was scrubbed... URL: From jedbrown at mcs.anl.gov Mon Jan 7 09:36:05 2013 From: jedbrown at mcs.anl.gov (Jed Brown) Date: Mon, 7 Jan 2013 09:36:05 -0600 Subject: [petsc-users] ML options In-Reply-To: <69F9EFAE-5E06-4AFA-9AD6-E54E8192819D@columbia.edu> References: <50EA7147.3060304@berkeley.edu> <69F9EFAE-5E06-4AFA-9AD6-E54E8192819D@columbia.edu> Message-ID: On Mon, Jan 7, 2013 at 9:09 AM, Mark F. Adams wrote: > ex56 is a simple 3D elasticity problem. There is a runex56 target that > uses GAMG and a runex56_ml. These have a generic parameters and ML and > GAMG work well. > > The eigen estimates could be bad. This can cause death. I've found that > CG converges to the largest eigenvalue faster than the default GMRES so I > use: > > -gamg_est_ksp_max_it 10 # this is the default, you could increase this to > test > -gamg_est_ksp_type cg > > Jed could tell you how to set this for ML. > ML isn't using eigenvalue estimation (doesn't expose the algorithm). Sanjay is using the default smoother (Richardson + SOR) rather than chebyshev/pbjacobi. > > > > On Jan 7, 2013, at 8:49 AM, Jed Brown wrote: > > Could we get an example matrix exhibiting this behavior? If you run with > -ksp_view_binary, the solver will write out the matrix to a file called > 'binaryoutput' (and 'binaryoutput.info') when KSPSolve() returns. I > suppose it could be a "math" reason of the inodes somehow causing an > incorrect near-null space to be passed to ML, but the interface is not > supposed to work like this. If you are serious about smoothed aggregation > for elasticity, you should use MatSetNearNullSpace() to provide the rigid > body modes. > > As a related matter, does -pc_type gamg -pc_gamg_agg_nsmooths 1 > -mg_levels_ksp_type richardson -mg_levels_pc_type sor converge well? > > > On Mon, Jan 7, 2013 at 12:55 AM, Sanjay Govindjee wrote: > >> >> I am adding ML as an option to our FEA code and was looking for a bit of >> guidance on >> options. Generally we solve 1,2, and 3D solids problems (nonlinear >> elasticity) but >> we also treat shells, thermal, problems, coupled problems, etc. etc. >> >> My basic run line looks like: >> >> -@${MPIEXEC} -n $(NPROC) $(MY_PROGRAM) -ksp_type cg -ksp_monitor -pc_type >> ml -log_summary -ksp_view -options_left >> >> but this does not work very well at all with 3D elasticity for example -- >> in fact it fails to converge after 10K iterations on a rather >> modest problem. However following ex26 in the ksp tutorials I also tried: >> >> -@${MPIEXEC} -n $(NPROC) $(FEAPRUN) -ksp_type cg -ksp_monitor -pc_type ml >> -mat_no_inode -log_summary -ksp_view -options_left >> >> And this worked very very much better -- converged in about 10 >> iterations. What exactly is -mat_no_inode doing for me? and are there >> other 'important' options >> that I should be aware of when using ML. >> >> -sanjay >> > > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From mark.adams at columbia.edu Mon Jan 7 10:36:19 2013 From: mark.adams at columbia.edu (Mark F. Adams) Date: Mon, 7 Jan 2013 11:36:19 -0500 Subject: [petsc-users] ML options In-Reply-To: References: <50EA7147.3060304@berkeley.edu> <69F9EFAE-5E06-4AFA-9AD6-E54E8192819D@columbia.edu> Message-ID: <4DA1C887-ADD6-4757-8708-8A7393AD60E7@columbia.edu> > > ML isn't using eigenvalue estimation (doesn't expose the algorithm). 
Sanjay is using the default smoother (Richardson + SOR) rather than chebyshev/pbjacobi. > The only other problem that I can think of is that ML is not getting the null space correctly. Unfortunately I do not see anything in MLs verbose output that says what it is doing for the null space. It would be good to know if GAMG has the same problem. From Eric.Chamberland at giref.ulaval.ca Mon Jan 7 16:02:38 2013 From: Eric.Chamberland at giref.ulaval.ca (Eric Chamberland) Date: Mon, 07 Jan 2013 17:02:38 -0500 Subject: [petsc-users] -pc_factor_mat_solver_package wish_list In-Reply-To: References: <50EADEA3.9080905@giref.ulaval.ca> Message-ID: <50EB45FE.2060700@giref.ulaval.ca> On 01/07/2013 09:46 AM, Jed Brown wrote: > It would be very easy to make PetscFListFind interpret > choice1:choice2:choice3 as searching for items in that order, in which > case this sort of option would work anywhere. This sounds convenient to > me, though if you have impl-specific suboptions, you'll be notified that > those were not used in -options_left. Is this okay? > Yes, excellent! This sounds great for me! Eric From mark.adams at columbia.edu Mon Jan 7 16:23:47 2013 From: mark.adams at columbia.edu (Mark F. Adams) Date: Mon, 7 Jan 2013 17:23:47 -0500 Subject: [petsc-users] ML options In-Reply-To: <4DA1C887-ADD6-4757-8708-8A7393AD60E7@columbia.edu> References: <50EA7147.3060304@berkeley.edu> <69F9EFAE-5E06-4AFA-9AD6-E54E8192819D@columbia.edu> <4DA1C887-ADD6-4757-8708-8A7393AD60E7@columbia.edu> Message-ID: <6BA180BD-FE1E-4446-8B42-411A25D46AA2@columbia.edu> I've added a flag: -pc_gamg_reuse_interpolation which can be set to false to redo the setup in GAMG every time. '-pc_ml_reuse_interpolation true' does seem to get ML to reuse some mesh setup. The setup time goes from .3 to .1 sec on one of my tests from the first to the second solve. Hong: this looks like a way to infer ML's RAP times. I think the second solves are just redoing the RAP with this flag, like what GAMG does by default. On Jan 7, 2013, at 11:36 AM, Mark F. Adams wrote: >> >> ML isn't using eigenvalue estimation (doesn't expose the algorithm). Sanjay is using the default smoother (Richardson + SOR) rather than chebyshev/pbjacobi. >> > > The only other problem that I can think of is that ML is not getting the null space correctly. Unfortunately I do not see anything in MLs verbose output that says what it is doing for the null space. > > It would be good to know if GAMG has the same problem. From bsmith at mcs.anl.gov Mon Jan 7 17:02:44 2013 From: bsmith at mcs.anl.gov (Barry Smith) Date: Mon, 7 Jan 2013 17:02:44 -0600 Subject: [petsc-users] -pc_factor_mat_solver_package wish_list In-Reply-To: References: <50EADEA3.9080905@giref.ulaval.ca> Message-ID: On Jan 7, 2013, at 8:46 AM, Jed Brown wrote: > It would be very easy to make PetscFListFind interpret choice1:choice2:choice3 as searching for items in that order, in which case this sort of option would work anywhere. This sounds convenient to me, though if you have impl-specific suboptions, you'll be notified that those were not used in -options_left. Is this okay? > Jed, How would this be done in the code? Ideally there is a functional interface with a corresponding OptionsData base interface. Something like the equivalent of PCFactorSetMatSolverPackages(pc,{MATSOLVERSUPERLU_DIST,MATSOLVERMUMPS, MATSOLVERPETSC,PETSC_NULL}) Then follow this paradigm for all set types and packages? 
But then since there is a PCFactorSetMatSolverPackage() and a PCFactorSetMatSolverPackages() should there be a -pc_factor_mat_solver_package package AND -pc_factor_mat_solver_packages package1:package2 ? Barry > > On Mon, Jan 7, 2013 at 8:41 AM, Eric Chamberland wrote: > Hi, > > I have just compiled Petsc with "--with-64-bit-indices" and I see that MUMPS is missing but superLu still there. Ok like that. > > Now I would like to have something like this on my command line: > > -pc_factor_mat_solver_package mumps_if_available_otherwise_superlu_dist > > How can I add or register something in petsc to have this behavior? > > Thank you! > > Eric > From jedbrown at mcs.anl.gov Mon Jan 7 17:09:59 2013 From: jedbrown at mcs.anl.gov (Jed Brown) Date: Mon, 7 Jan 2013 17:09:59 -0600 Subject: [petsc-users] -pc_factor_mat_solver_package wish_list In-Reply-To: References: <50EADEA3.9080905@giref.ulaval.ca> Message-ID: On Mon, Jan 7, 2013 at 5:02 PM, Barry Smith wrote: > How would this be done in the code? Ideally there is a functional > interface with a corresponding OptionsData base interface. Something like > the equivalent of > PCFactorSetMatSolverPackages(pc,{MATSOLVERSUPERLU_DIST,MATSOLVERMUMPS, > MATSOLVERPETSC,PETSC_NULL}) > It could, but I'd prefer to have a generic routine that created an alternative from the array of types (by joining them with ':'). Otherwise it's just more boilerplate for every object. > Then follow this paradigm for all set types and packages? But then > since there is a PCFactorSetMatSolverPackage() and a > PCFactorSetMatSolverPackages() should there be > a -pc_factor_mat_solver_package package AND > -pc_factor_mat_solver_packages package1:package2 ? > No need, if it containts a ':' then it's an alternative. The implementation of PCFactorSetMatSolverPackages() would just join the strings and call PCFactorSetMatSolverPackage(). -------------- next part -------------- An HTML attachment was scrubbed... URL: From bsmith at mcs.anl.gov Mon Jan 7 17:30:00 2013 From: bsmith at mcs.anl.gov (Barry Smith) Date: Mon, 7 Jan 2013 17:30:00 -0600 Subject: [petsc-users] -pc_factor_mat_solver_package wish_list In-Reply-To: References: <50EADEA3.9080905@giref.ulaval.ca> Message-ID: <2FBDB2C6-2B73-479B-BC6E-900048295F46@mcs.anl.gov> Ok, so you are suggesting the same functions/options database as today except that : separated strings for alternatives? Note that PetscFListGetPathAndFunction() which is used by all the checkers handles the form [/path/libname[.so.1.0]:]functionname[()] so : is already reserved. I would suggest | but the damn shell would require always protecting the arguments with "". Barry On Jan 7, 2013, at 5:09 PM, Jed Brown wrote: > On Mon, Jan 7, 2013 at 5:02 PM, Barry Smith wrote: > How would this be done in the code? Ideally there is a functional interface with a corresponding OptionsData base interface. Something like the equivalent of PCFactorSetMatSolverPackages(pc,{MATSOLVERSUPERLU_DIST,MATSOLVERMUMPS, MATSOLVERPETSC,PETSC_NULL}) > > It could, but I'd prefer to have a generic routine that created an alternative from the array of types (by joining them with ':'). > > Otherwise it's just more boilerplate for every object. > > Then follow this paradigm for all set types and packages? But then since there is a PCFactorSetMatSolverPackage() and a PCFactorSetMatSolverPackages() should there be > a -pc_factor_mat_solver_package package AND -pc_factor_mat_solver_packages package1:package2 ? > > No need, if it containts a ':' then it's an alternative. 
The implementation of PCFactorSetMatSolverPackages() would just join the strings and call PCFactorSetMatSolverPackage(). From jedbrown at mcs.anl.gov Mon Jan 7 17:32:21 2013 From: jedbrown at mcs.anl.gov (Jed Brown) Date: Mon, 7 Jan 2013 17:32:21 -0600 Subject: [petsc-users] -pc_factor_mat_solver_package wish_list In-Reply-To: <2FBDB2C6-2B73-479B-BC6E-900048295F46@mcs.anl.gov> References: <50EADEA3.9080905@giref.ulaval.ca> <2FBDB2C6-2B73-479B-BC6E-900048295F46@mcs.anl.gov> Message-ID: On Mon, Jan 7, 2013 at 5:30 PM, Barry Smith wrote: > Ok, so you are suggesting the same functions/options database as today > except that : separated strings for alternatives? > Yes > > Note that PetscFListGetPathAndFunction() which is used by all the > checkers handles the form [/path/libname[.so.1.0]:]functionname[()] so : is > already reserved. I would suggest | but the damn shell would require always > protecting the arguments with "". > Maybe comma since semicolon is also taken. Pipe isn't that bad because usually people would be using this notation from an options file. -------------- next part -------------- An HTML attachment was scrubbed... URL: From bsmith at mcs.anl.gov Mon Jan 7 17:44:53 2013 From: bsmith at mcs.anl.gov (Barry Smith) Date: Mon, 7 Jan 2013 17:44:53 -0600 Subject: [petsc-users] -pc_factor_mat_solver_package wish_list In-Reply-To: References: <50EADEA3.9080905@giref.ulaval.ca> <2FBDB2C6-2B73-479B-BC6E-900048295F46@mcs.anl.gov> Message-ID: <07F4F923-4660-4101-A2CE-65D8CC9B74D3@mcs.anl.gov> On Jan 7, 2013, at 5:32 PM, Jed Brown wrote: > On Mon, Jan 7, 2013 at 5:30 PM, Barry Smith wrote: > Ok, so you are suggesting the same functions/options database as today except that : separated strings for alternatives? > > Yes > > > Note that PetscFListGetPathAndFunction() which is used by all the checkers handles the form [/path/libname[.so.1.0]:]functionname[()] so : is already reserved. I would suggest | but the damn shell would require always protecting the arguments with "". > > Maybe comma since semicolon is also taken. Pipe isn't that bad because usually people would be using this notation from an options file. I thought of comma, but comma is used for arguments to PetscOptionsGetXXXArray() so I don't like the reuse in a slightly different way I can live with pipe and forcing the quotes; Barry From w_ang_temp at 163.com Tue Jan 8 07:36:33 2013 From: w_ang_temp at 163.com (w_ang_temp) Date: Tue, 8 Jan 2013 21:36:33 +0800 (CST) Subject: [petsc-users] DIVERGED_DTOL Message-ID: <18a9408d.272c4.13c1a60725e.Coremail.w_ang_temp@163.com> Hello, I use the default dtol(1.0E+5), in my view, only when residual norm is greater than dtol, the DIVERGED_DTOL occurs(||rk||>dtol*||b||). But in my project, it is not. The information is: 349 KSP Residual norm 3.697892503779e-01 350 KSP Residual norm 1.104685840662e+02 351 KSP Residual norm 1.199213228986e+02 352 KSP Residual norm 1.183644579434e+02 353 KSP Residual norm 1.234968225554e+02 354 KSP Residual norm 2.882557881065e-01 355 KSP Residual norm 2.170676916299e+02 356 KSP Residual norm 5.764266225925e+00 357 KSP Residual norm 1.701448294063e+04 Linear solve did not converge due to DIVERGED_DTOL iterations 357 When iteration=357, the residual norm is 1.7e+4,it is less than dtol. Is it right? Thanks. Jim -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From jedbrown at mcs.anl.gov Tue Jan 8 07:50:26 2013 From: jedbrown at mcs.anl.gov (Jed Brown) Date: Tue, 8 Jan 2013 07:50:26 -0600 Subject: [petsc-users] DIVERGED_DTOL In-Reply-To: <18a9408d.272c4.13c1a60725e.Coremail.w_ang_temp@163.com> References: <18a9408d.272c4.13c1a60725e.Coremail.w_ang_temp@163.com> Message-ID: Compare to the initial residual. It should be obvious if you run with -ksp_monitor_true_residual On Tue, Jan 8, 2013 at 7:36 AM, w_ang_temp wrote: > Hello, > > I use the default dtol(1.0E+5), in my view, only when residual norm is > greater than > > dtol, the DIVERGED_DTOL occurs(||rk||>dtol*||b||). But in my project, it > is not. The information is: > > 349 KSP Residual norm 3.697892503779e-01 > 350 KSP Residual norm 1.104685840662e+02 > 351 KSP Residual norm 1.199213228986e+02 > 352 KSP Residual norm 1.183644579434e+02 > 353 KSP Residual norm 1.234968225554e+02 > 354 KSP Residual norm 2.882557881065e-01 > 355 KSP Residual norm 2.170676916299e+02 > 356 KSP Residual norm 5.764266225925e+00 > 357 KSP Residual norm 1.701448294063e+04 > Linear solve did not converge due to DIVERGED_DTOL iterations 357 > > When iteration=357, the residual norm is 1.7e+4,it is less than dtol. > Is it right? > > Thanks. > > Jim > > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From jedbrown at mcs.anl.gov Tue Jan 8 08:07:36 2013 From: jedbrown at mcs.anl.gov (Jed Brown) Date: Tue, 8 Jan 2013 08:07:36 -0600 Subject: [petsc-users] ML options In-Reply-To: <6BA180BD-FE1E-4446-8B42-411A25D46AA2@columbia.edu> References: <50EA7147.3060304@berkeley.edu> <69F9EFAE-5E06-4AFA-9AD6-E54E8192819D@columbia.edu> <4DA1C887-ADD6-4757-8708-8A7393AD60E7@columbia.edu> <6BA180BD-FE1E-4446-8B42-411A25D46AA2@columbia.edu> Message-ID: On Mon, Jan 7, 2013 at 4:23 PM, Mark F. Adams wrote: > '-pc_ml_reuse_interpolation true' does seem to get ML to reuse some mesh > setup. The setup time goes from .3 to .1 sec on one of my tests from the > first to the second solve. > > Hong: this looks like a way to infer ML's RAP times. I think the second > solves are just redoing the RAP with this flag, like what GAMG does by > default. > Huh, it's changing the sparsity of the coarse grid. $ ./ex15 -da_grid_x 20 -da_grid_y 20 -p 1.2 -ksp_converged_reason -pc_type ml -pc_ml_reuse_interpolation Linear solve converged due to CONVERGED_RTOL iterations 6 [0]PETSC ERROR: --------------------- Error Message ------------------------------------ [0]PETSC ERROR: Argument out of range! [0]PETSC ERROR: New nonzero at (0,8) caused a malloc! [0]PETSC ERROR: ------------------------------------------------------------------------ [0]PETSC ERROR: Petsc Development HG revision: cb29460836d903f43276d687c4ba9f5917bf6651 HG Date: Sun Jan 06 14:49:14 2013 -0600 [0]PETSC ERROR: See docs/changes/index.html for recent updates. [0]PETSC ERROR: See docs/faq.html for hints about trouble shooting. [0]PETSC ERROR: See docs/index.html for manual pages. 
[0]PETSC ERROR: ------------------------------------------------------------------------
[0]PETSC ERROR: ./ex15 on a mpich named batura by jed Tue Jan 8 08:06:37 2013
[0]PETSC ERROR: Libraries linked from /home/jed/petsc/mpich/lib
[0]PETSC ERROR: Configure run at Sun Jan 6 15:19:56 2013
[0]PETSC ERROR: Configure options --download-ams --download-blacs --download-chaco --download-generator --download-hypre --download-ml --download-spai --download-spooles --download-sundials --download-superlu --download-superlu_dist --download-triangle --with-blas-lapack=/usr --with-c2html --with-cholmod-dir=/usr --with-clique-dir=/home/jed/usr/clique-mpich --with-elemental-dir=/home/jed/usr/clique-mpich --with-exodusii-dir=/usr --with-hdf5-dir=/opt/mpich --with-lgrind --with-metis-dir=/home/jed/usr/clique-mpich --with-mpi-dir=/opt/mpich --with-netcdf-dir=/usr --with-openmp --with-parmetis-dir=/home/jed/usr/clique-mpich --with-pcbddc --with-pthreadclasses --with-shared-libraries --with-single-library=0 --with-sowing --with-threadcomm --with-umfpack-dir=/usr --with-x -PETSC_ARCH=mpich
[0]PETSC ERROR: ------------------------------------------------------------------------
[0]PETSC ERROR: MatSetValues_SeqAIJ() line 352 in /home/jed/petsc/src/mat/impls/aij/seq/aij.c
[0]PETSC ERROR: MatSetValues() line 1083 in /home/jed/petsc/src/mat/interface/matrix.c
[0]PETSC ERROR: MatWrapML_SeqAIJ() line 348 in /home/jed/petsc/src/ksp/pc/impls/ml/ml.c
[0]PETSC ERROR: PCSetUp_ML() line 639 in /home/jed/petsc/src/ksp/pc/impls/ml/ml.c
[0]PETSC ERROR: PCSetUp() line 832 in /home/jed/petsc/src/ksp/pc/interface/precon.c
[0]PETSC ERROR: KSPSetUp() line 267 in /home/jed/petsc/src/ksp/ksp/interface/itfunc.c
[0]PETSC ERROR: KSPSolve() line 376 in /home/jed/petsc/src/ksp/ksp/interface/itfunc.c
[0]PETSC ERROR: SNES_KSPSolve() line 4460 in /home/jed/petsc/src/snes/interface/snes.c
[0]PETSC ERROR: SNESSolve_NEWTONLS() line 216 in /home/jed/petsc/src/snes/impls/ls/ls.c
[0]PETSC ERROR: SNESSolve() line 3678 in /home/jed/petsc/src/snes/interface/snes.c
[0]PETSC ERROR: main() line 221 in src/snes/examples/tutorials/ex15.c
application called MPI_Abort(MPI_COMM_WORLD, 63) - process 0

-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From johannes.huber at unibas.ch Tue Jan 8 08:26:44 2013
From: johannes.huber at unibas.ch (Johannes Huber)
Date: Tue, 8 Jan 2013 14:26:44 +0000
Subject: [petsc-users] Example for MatMatMultSymbolic
Message-ID: <73644807AA76E34EA8A5562CA279021E05C54CE3@urz-mbx-3.urz.unibas.ch>

Hi,
I have a valgrind "invalid read" in MatGetBrowsOfAoCols_MPIAIJ in the lines
rvalues = gen_from->values; /* holds the length of receiving row */
svalues = gen_to->values; /* holds the length of sending row */
(and others, but those are the first). This function is called indirectly via MatMatMultSymbolic.
All invalid reads appear in MatGetBrowsOfAoCols_MPIAIJ.
The calling sequence for the matrix factors is
MatCreate(PETSC_COMM_WORLD,A);
MatSetSizes(A,NumLocRows,NumLocCols,PETSC_DETERMINE,PETSC_DETERMINE);
MatSetType(A,MATMPIAIJ);
MatMPIAIJSetPreallocation(A,PETSC_NULL,nnz,PETSC_NULL,onz);
MatSetValues(...);
MatAssemblyBegin(...)
MatAssemblyEnd(...)
and then a call to MatMatMultSymbolic.
I want the result matrix to be created. In the documentation I see that this happens if I pass MAT_REUSE_MATRIX as scall, but how can I pass that value? I don't see the parameter.
Where can I find an example for MatMatMultSymbolic / MatMatMultNumeric?
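(Schematically, what I am after is something like the following sketch, where the product matrix is created for me; the variable names here are just mine:

    Mat C;
    ierr = MatMatMult(A,B,MAT_INITIAL_MATRIX,PETSC_DEFAULT,&C);CHKERRQ(ierr); /* creates C = A*B */

but split into the separate Symbolic and Numeric calls so that only the numeric part has to be redone later.)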
Thanks, Hannes -------------- next part -------------- An HTML attachment was scrubbed... URL: From w_ang_temp at 163.com Tue Jan 8 11:08:27 2013 From: w_ang_temp at 163.com (w_ang_temp) Date: Wed, 9 Jan 2013 01:08:27 +0800 (CST) Subject: [petsc-users] DIVERGED_DTOL In-Reply-To: References: <18a9408d.272c4.13c1a60725e.Coremail.w_ang_temp@163.com> Message-ID: <3a0d2766.3b.13c1b2272aa.Coremail.w_ang_temp@163.com> It is: 349 KSP preconditioned resid norm 3.697892503779e-01 true resid norm 3.963274534823e+04 ||r(i)||/||b|| 5.903069761695e-01 350 KSP preconditioned resid norm 1.104685840662e+02 true resid norm 1.183964449407e+07 ||r(i)||/||b|| 1.763447038252e+02 351 KSP preconditioned resid norm 1.199213228986e+02 true resid norm 1.285275666774e+07 ||r(i)||/||b|| 1.914344277014e+02 352 KSP preconditioned resid norm 1.183644579434e+02 true resid norm 1.268589721439e+07 ||r(i)||/||b|| 1.889491519909e+02 353 KSP preconditioned resid norm 1.234968225554e+02 true resid norm 1.323596647557e+07 ||r(i)||/||b|| 1.971421176662e+02 354 KSP preconditioned resid norm 2.882557881065e-01 true resid norm 3.089426814670e+04 ||r(i)||/||b|| 4.601523777979e-01 355 KSP preconditioned resid norm 2.170676916299e+02 true resid norm 2.326457175403e+07 ||r(i)||/||b|| 3.465124326697e+02 356 KSP preconditioned resid norm 5.764266225925e+00 true resid norm 6.177943120636e+05 ||r(i)||/||b|| 9.201691405543e+00 357 KSP preconditioned resid norm 1.701448294063e+04 true resid norm 1.823554008687e+09 ||r(i)||/||b|| 2.716078947576e+04 Linear solve did not converge due to DIVERGED_DTOL iterations 357 I cannot understand it. Which one means that the DIVERGED_DTOL occures:preconditioned resid norm, true resid norm or ||r(i)||/||b||? What is the difference between preconditioned resid norm and true resid norm? Thanks. Jim >At 2013-01-08 21:50:26,"Jed Brown" wrote: >Compare to the initial residual. It should be obvious if you run with -ksp_monitor_true_residual >>On Tue, Jan 8, 2013 at 7:36 AM, w_ang_temp wrote: >>Hello, >> I use the default dtol(1.0E+5), in my view, only when residual norm is greater than >>dtol, the DIVERGED_DTOL occurs(||rk||>dtol*||b||). But in my project, it is not. The information is: >>349 KSP Residual norm 3.697892503779e-01 >>350 KSP Residual norm 1.104685840662e+02 >>351 KSP Residual norm 1.199213228986e+02 >>352 KSP Residual norm 1.183644579434e+02 >>353 KSP Residual norm 1.234968225554e+02 >>354 KSP Residual norm 2.882557881065e-01 >>355 KSP Residual norm 2.170676916299e+02 >>356 KSP Residual norm 5.764266225925e+00 >>357 KSP Residual norm 1.701448294063e+04 >>Linear solve did not converge due to DIVERGED_DTOL iterations 357 >> When iteration=357, the residual norm is 1.7e+4,it is less than dtol. Is it right? >> Thanks. >> Jim -------------- next part -------------- An HTML attachment was scrubbed... URL: From jedbrown at mcs.anl.gov Tue Jan 8 11:13:44 2013 From: jedbrown at mcs.anl.gov (Jed Brown) Date: Tue, 8 Jan 2013 11:13:44 -0600 Subject: [petsc-users] DIVERGED_DTOL In-Reply-To: <3a0d2766.3b.13c1b2272aa.Coremail.w_ang_temp@163.com> References: <18a9408d.272c4.13c1a60725e.Coremail.w_ang_temp@163.com> <3a0d2766.3b.13c1b2272aa.Coremail.w_ang_temp@163.com> Message-ID: On Tue, Jan 8, 2013 at 11:08 AM, w_ang_temp wrote: > 354 KSP preconditioned resid norm 2.882557881065e-01 true resid norm > 3.089426814670e+04 ||r(i)||/||b|| 4.601523777979e-01 > ^^ Notice how this ratio is less than 1.0? DTOL is defined in terms of the smallest residual norm seen. 
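Schematically, the check is roughly (a sketch of the test as described above, not the exact PETSc source; rnorm_min stands for the smallest preconditioned residual norm seen so far):

    if (rnorm > dtol * rnorm_min) reason = KSP_DIVERGED_DTOL;

so a later jump is measured against the best residual seen, not against ||b||.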
> 355 KSP preconditioned resid norm 2.170676916299e+02 true resid norm > 2.326457175403e+07 ||r(i)||/||b|| 3.465124326697e+02 > 356 KSP preconditioned resid norm 5.764266225925e+00 true resid norm > 6.177943120636e+05 ||r(i)||/||b|| 9.20169 1405543e+00 > 357 KSP preconditioned resid norm 1.701448294063e+04 true resid norm > 1.823554008687e+09 ||r(i)||/||b|| 2.716078947576e+04 > > Linear solve did not converge due to DIVERGED_DTOL iterations 357 > > I cannot understand it. Which one means that the DIVERGED_DTOL > occures:preconditioned resid norm, true resid norm or ||r(i)||/||b||? > What is the difference between preconditioned resid norm and true resid > norm? > It uses the norm that your method is running. In this case, it's the preconditioned norm. -------------- next part -------------- An HTML attachment was scrubbed... URL: From ppetrovic573 at gmail.com Tue Jan 8 11:21:10 2013 From: ppetrovic573 at gmail.com (Petar Petrovic) Date: Tue, 8 Jan 2013 12:21:10 -0500 Subject: [petsc-users] Cholesky factorization Message-ID: Hello, Can you please tell me for which type of matrices can I run Cholesky factorization (MatCholeskyFactor) ? Can it be applied on sparse matrices, e.g. MATAIJ ? Many thanks -------------- next part -------------- An HTML attachment was scrubbed... URL: From knepley at gmail.com Tue Jan 8 11:24:26 2013 From: knepley at gmail.com (Matthew Knepley) Date: Tue, 8 Jan 2013 11:24:26 -0600 Subject: [petsc-users] Cholesky factorization In-Reply-To: References: Message-ID: http://www.mcs.anl.gov/petsc/documentation/linearsolvertable.html Matt On Tue, Jan 8, 2013 at 11:21 AM, Petar Petrovic wrote: > Hello, > Can you please tell me for which type of matrices can I run Cholesky > factorization (MatCholeskyFactor) ? Can it be applied on sparse matrices, > e.g. MATAIJ ? > Many thanks > -- What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. -- Norbert Wiener -------------- next part -------------- An HTML attachment was scrubbed... URL: From w_ang_temp at 163.com Tue Jan 8 11:50:17 2013 From: w_ang_temp at 163.com (w_ang_temp) Date: Wed, 9 Jan 2013 01:50:17 +0800 (CST) Subject: [petsc-users] DIVERGED_DTOL In-Reply-To: References: <18a9408d.272c4.13c1a60725e.Coremail.w_ang_temp@163.com> <3a0d2766.3b.13c1b2272aa.Coremail.w_ang_temp@163.com> Message-ID: <2cb6e1d.179.13c1b48bd19.Coremail.w_ang_temp@163.com> I am sorry. In my view, preconditioned resid norm:||rp||=||Bb-BAx||(B is the preconditioned matrix); true resid norm:||rt||=||b-Ax||; ||r(i)||/||b||: ||rt||/||b||. Is it right? (1) Divergence is detected if ||rp||/||b|| > dtol or ||rt||/||b|| > dtol ? Both of them (rt/b:1.701448294063e+04 / 6.7139E+4; rt/b:2.716078947576e+04; dtol=1.0E+5 ) are not in this example, but it is divergent? 
(2) Convergence is detected at iteration k if : ||rp||/||b|| < rtol But I find that when "preconditioned resid norm" is less than rtol, it begins to be convergent(another example,rtol=1.0E-15): 19 KSP preconditioned resid norm 4.964358598559e-15 true resid norm 1.076736705942e-08 ||r(i)||/||b|| 1.603737473724e-13 120 KSP preconditioned resid norm 1.045516340849e-14 true resid norm 1.089531944048e-08 ||r(i)||/||b|| 1.622795245901e-13 121 KSP preconditioned resid norm 1.209016864072e-14 true resid norm 1.096191254361e-08 ||r(i)||/||b|| 1.632713906089e-13 122 KSP preconditioned resid norm 1.568004225873e-15 true resid norm 1.073893120243e-08 ||r(i)||/||b|| 1.599502116167e-13 123 KSP preconditioned resid norm 5.066448468788e-15 true resid norm 1.078375214589e-08 ||r(i)||/||b|| 1.606177938235e-13 124 KSP preconditioned resid norm 3.619818305395e-16 true resid norm 1.073887987132e-08 ||r(i)||/||b|| 1.599494470692e-13 Linear solve converged due to CONVERGED_RTOL iterations 124 In iteration 124, preconditioned resid norm begins to be smaller than rtol. Thanks. Jim >On 2013-01-09 01:13:44?"Jed Brown" ??? >>On Tue, Jan 8, 2013 at 11:08 AM, w_ang_temp wrote: >>354 KSP preconditioned resid norm 2.882557881065e-01 true resid norm 3.089426814670e+04 ||r(i)||/||b|| 4.601523777979e-01 >^^ Notice how this ratio is less than 1.0? DTOL is defined in terms of the smallest residual norm seen. >>355 KSP preconditioned resid norm 2.170676916299e+02 true resid norm 2.326457175403e+07 ||r(i)||/||b|| 3.465124326697e+02 >>356 KSP preconditioned resid norm 5.764266225925e+00 true resid norm 6.177943120636e+05 ||r(i)||/||b|| 9.20169 1405543e+00 >>357 KSP preconditioned resid norm 1.701448294063e+04 true resid norm 1.823554008687e+09 ||r(i)||/||b|| 2.716078947576e+04 >>Linear solve did not converge due to DIVERGED_DTOL iterations 357 >>I cannot understand it. Which one means that the DIVERGED_DTOL occures:preconditioned resid norm, true resid norm or ||r(i)||/||b||? >>What is the difference between preconditioned resid norm and true resid norm? >It uses the norm that your method is running. In this case, it's the preconditioned norm. -------------- next part -------------- An HTML attachment was scrubbed... URL: From jedbrown at mcs.anl.gov Tue Jan 8 11:58:24 2013 From: jedbrown at mcs.anl.gov (Jed Brown) Date: Tue, 8 Jan 2013 11:58:24 -0600 Subject: [petsc-users] DIVERGED_DTOL In-Reply-To: <2cb6e1d.179.13c1b48bd19.Coremail.w_ang_temp@163.com> References: <18a9408d.272c4.13c1a60725e.Coremail.w_ang_temp@163.com> <3a0d2766.3b.13c1b2272aa.Coremail.w_ang_temp@163.com> <2cb6e1d.179.13c1b48bd19.Coremail.w_ang_temp@163.com> Message-ID: On Tue, Jan 8, 2013 at 11:50 AM, w_ang_temp wrote: > > I am sorry. > In my view, preconditioned resid norm:||rp||=||Bb-BAx||(B is the > preconditioned matrix); > -ksp_norm_type preconditioned is the default for GMRES, so it's using preconditioned residual. > true resid norm:||rt||=||b-Ax||; ||r(i)||/||b||: ||rt||/||b||. Is it > right? > (1) Divergence is detected if > > ||rp||/||b|| > dtol or ||rt||/||b|| > dtol ? > Neither, it's |rp|/|min(rp0,rp1,rp2,rp3,...)|. Your solver "converges" a bit at some iteration and then jumps a lot so the denominator is smaller than rp0. > Both of them (rt/b:1.701448294063e+04 / 6.7139E+4; > rt/b:2.716078947576e+04; dtol=1.0E+5 ) > are not in this example, but it is divergent? 
> > (2) Convergence is detected at iteration k if : > ||rp||/||b|| < rtol > But I find that when "preconditioned resid norm" is less than rtol, it > begins to be convergent(another example,rtol=1.0E-15): > 19 KSP preconditioned resid norm 4.964358598559e-15 true resid norm > 1.076736705942e-08 ||r(i)||/||b|| 1.603737473724e-13 > 120 KSP preconditioned resid norm 1.045516340849e-14 true resid norm > 1.089531944048e-08 ||r(i)||/||b|| 1.622795245901e-13 > 121 KSP preconditioned resid norm 1.209016864072e-14 true resid norm > 1.096191254361e-08 ||r(i)||/||b|| 1.632713906089e-13 > 122 KSP preconditioned resid norm 1.568004225873e-15 true resid norm > 1.073893120243e-08 ||r(i)||/||b|| 1.599502116167e-13 > 123 KSP preconditioned resid norm 5.066448468788e-15 true resid norm > 1.078375214589e-08 ||r(i)||/||b|| 1.606177938235e-13 > 124 KSP preconditioned resid norm 3.619818305395e-16 true resid norm > 1.073887987132e-08 ||r(i)||/||b|| 1.599494470692e-13 > Linear solve converged due to CONVERGED_RTOL iterations 124 > In iteration 124, preconditioned resid norm begins to be smaller than > rtol. > > Thanks. Jim > > > > > >On 2013-01-09 01:13:44?"Jed Brown" ??? > > >>On Tue, Jan 8, 2013 at 11:08 AM, w_ang_temp wrote: > >> >>354 KSP preconditioned resid norm 2.882557881065e-01 true resid norm >> 3.089426814670e+04 ||r(i)||/||b|| 4.601523777979e-01 >> > > >^^ Notice how this ratio is less than 1.0? DTOL is defined in terms of > the smallest residual norm seen. > > >> >>355 KSP preconditioned resid norm 2.170676916299e+02 true resid norm >> 2.326457175403e+07 ||r(i)||/||b|| 3.465124326697e+02 >> >>356 KSP preconditioned resid norm 5.764266225925e+00 true resid norm >> 6.177943120636e+05 ||r(i)||/||b|| 9.20169 1405543e+00 >> >>357 KSP preconditioned resid norm 1.701448294063e+04 true resid norm >> 1.823554008687e+09 ||r(i)||/||b|| 2.716078947576e+04 >> >> >>Linear solve did not converge due to DIVERGED_DTOL iterations 357 >> >> >>I cannot understand it. Which one means that the DIVERGED_DTOL >> occures:preconditioned resid norm, true resid norm or ||r(i)||/||b||? >> >>What is the difference between preconditioned resid norm and true >> resid norm? >> > > >It uses the norm that your method is running. In this case, it's the > preconditioned norm. > > > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From ppetrovic573 at gmail.com Tue Jan 8 12:33:16 2013 From: ppetrovic573 at gmail.com (Petar Petrovic) Date: Tue, 8 Jan 2013 13:33:16 -0500 Subject: [petsc-users] Cholesky factorization In-Reply-To: References: Message-ID: Thank you very much. Can you tell me how to set the package that is used for MatCholeskyFactor? I need it to run for a sparse matrix so I have tried using: #ifdef PETSC_HAVE_CHOLMOD ... #endif but the code inside doesn't run. Does this mean that the package is not installed or am I supposed to do something else, use some command line options or something? Sorry these are very basic questions, but I am new to PETSc and I cannot seem to find examples for this. On Tue, Jan 8, 2013 at 12:24 PM, Matthew Knepley wrote: > http://www.mcs.anl.gov/petsc/documentation/linearsolvertable.html > > Matt > > > On Tue, Jan 8, 2013 at 11:21 AM, Petar Petrovic wrote: > >> Hello, >> Can you please tell me for which type of matrices can I run Cholesky >> factorization (MatCholeskyFactor) ? Can it be applied on sparse matrices, >> e.g. MATAIJ ? 
>> Many thanks >> > > > > -- > What most experimenters take for granted before they begin their > experiments is infinitely more interesting than any results to which their > experiments lead. > -- Norbert Wiener > -------------- next part -------------- An HTML attachment was scrubbed... URL: From knepley at gmail.com Tue Jan 8 12:35:26 2013 From: knepley at gmail.com (Matthew Knepley) Date: Tue, 8 Jan 2013 12:35:26 -0600 Subject: [petsc-users] Cholesky factorization In-Reply-To: References: Message-ID: On Tue, Jan 8, 2013 at 12:33 PM, Petar Petrovic wrote: > Thank you very much. > Can you tell me how to set the package that is used for MatCholeskyFactor? > I need it to run for a sparse matrix so I have tried using: > > #ifdef PETSC_HAVE_CHOLMOD > ... > #endif > > but the code inside doesn't run. Does this mean that the package is not > installed or am I supposed to do something else, use some command line > options or something? Sorry these are very basic questions, but I am new to > PETSc and I cannot seem to find examples for this. > You need to activate 3rd party packages during configure, e.g. --download-umfpack. http://www.mcs.anl.gov/petsc/documentation/installation.html Matt > On Tue, Jan 8, 2013 at 12:24 PM, Matthew Knepley wrote: > >> http://www.mcs.anl.gov/petsc/documentation/linearsolvertable.html >> >> Matt >> >> >> On Tue, Jan 8, 2013 at 11:21 AM, Petar Petrovic wrote: >> >>> Hello, >>> Can you please tell me for which type of matrices can I run Cholesky >>> factorization (MatCholeskyFactor) ? Can it be applied on sparse matrices, >>> e.g. MATAIJ ? >>> Many thanks >>> >> >> >> >> -- >> What most experimenters take for granted before they begin their >> experiments is infinitely more interesting than any results to which their >> experiments lead. >> -- Norbert Wiener >> > > -- What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. -- Norbert Wiener -------------- next part -------------- An HTML attachment was scrubbed... URL: From bsmith at mcs.anl.gov Tue Jan 8 13:02:08 2013 From: bsmith at mcs.anl.gov (Barry Smith) Date: Tue, 8 Jan 2013 13:02:08 -0600 Subject: [petsc-users] DIVERGED_DTOL In-Reply-To: References: <18a9408d.272c4.13c1a60725e.Coremail.w_ang_temp@163.com> <3a0d2766.3b.13c1b2272aa.Coremail.w_ang_temp@163.com> <2cb6e1d.179.13c1b48bd19.Coremail.w_ang_temp@163.com> Message-ID: <09E11A13-A818-42C8-A369-21E68B497608@mcs.anl.gov> If the solver residual norms jump up a great deal during the run and then eventually settle down then you can set a very large dtol to prevent the termination BUT I don't recommend using solvers where the residual norms jump up that much, it would better to use a solver where the residual is mostly monotonically decreasing. Barry On Jan 8, 2013, at 11:58 AM, Jed Brown wrote: > On Tue, Jan 8, 2013 at 11:50 AM, w_ang_temp wrote: > > I am sorry. > In my view, preconditioned resid norm:||rp||=||Bb-BAx||(B is the preconditioned matrix); > > -ksp_norm_type preconditioned is the default for GMRES, so it's using preconditioned residual. > > true resid norm:||rt||=||b-Ax||; ||r(i)||/||b||: ||rt||/||b||. Is it right? > (1) Divergence is detected if > > ||rp||/||b|| > dtol or ||rt||/||b|| > dtol ? > > Neither, it's |rp|/|min(rp0,rp1,rp2,rp3,...)|. Your solver "converges" a bit at some iteration and then jumps a lot so the denominator is smaller than rp0. 
> > Both of them (rt/b:1.701448294063e+04 / 6.7139E+4; rt/b:2.716078947576e+04; dtol=1.0E+5 ) > are not in this example, but it is divergent? > > (2) Convergence is detected at iteration k if : > ||rp||/||b|| < rtol > But I find that when "preconditioned resid norm" is less than rtol, it begins to be convergent(another example,rtol=1.0E-15): > 19 KSP preconditioned resid norm 4.964358598559e-15 true resid norm 1.076736705942e-08 ||r(i)||/||b|| 1.603737473724e-13 > 120 KSP preconditioned resid norm 1.045516340849e-14 true resid norm 1.089531944048e-08 ||r(i)||/||b|| 1.622795245901e-13 > 121 KSP preconditioned resid norm 1.209016864072e-14 true resid norm 1.096191254361e-08 ||r(i)||/||b|| 1.632713906089e-13 > 122 KSP preconditioned resid norm 1.568004225873e-15 true resid norm 1.073893120243e-08 ||r(i)||/||b|| 1.599502116167e-13 > 123 KSP preconditioned resid norm 5.066448468788e-15 true resid norm 1.078375214589e-08 ||r(i)||/||b|| 1.606177938235e-13 > 124 KSP preconditioned resid norm 3.619818305395e-16 true resid norm 1.073887987132e-08 ||r(i)||/||b|| 1.599494470692e-13 > Linear solve converged due to CONVERGED_RTOL iterations 124 > In iteration 124, preconditioned resid norm begins to be smaller than rtol. > > Thanks. Jim > > > > > >On 2013-01-09 01:13:44?"Jed Brown" ??? > > >>On Tue, Jan 8, 2013 at 11:08 AM, w_ang_temp wrote: > >>354 KSP preconditioned resid norm 2.882557881065e-01 true resid norm 3.089426814670e+04 ||r(i)||/||b|| 4.601523777979e-01 > > >^^ Notice how this ratio is less than 1.0? DTOL is defined in terms of the smallest residual norm seen. > > >>355 KSP preconditioned resid norm 2.170676916299e+02 true resid norm 2.326457175403e+07 ||r(i)||/||b|| 3.465124326697e+02 > >>356 KSP preconditioned resid norm 5.764266225925e+00 true resid norm 6.177943120636e+05 ||r(i)||/||b|| 9.20169 1405543e+00 > >>357 KSP preconditioned resid norm 1.701448294063e+04 true resid norm 1.823554008687e+09 ||r(i)||/||b|| 2.716078947576e+04 > > >>Linear solve did not converge due to DIVERGED_DTOL iterations 357 > > >>I cannot understand it. Which one means that the DIVERGED_DTOL occures:preconditioned resid norm, true resid norm or ||r(i)||/||b||? > >>What is the difference between preconditioned resid norm and true resid norm? > > >It uses the norm that your method is running. In this case, it's the preconditioned norm. > > > From ppetrovic573 at gmail.com Tue Jan 8 14:25:44 2013 From: ppetrovic573 at gmail.com (Petar Petrovic) Date: Tue, 8 Jan 2013 15:25:44 -0500 Subject: [petsc-users] Cholesky factorization In-Reply-To: References: Message-ID: Thanks, but is there any way to check if the package is already installed? My problem is that I run the PETSc installation someone already set up and I don't have the access to change it. I have tried running ./configure --help in the directory PETSc is installed, but it just reports the error because I don't have the access permition. Also, sorry I didn't understand, are the directives in the code enough or do I have to run my mpiexec command with additional options? On Tue, Jan 8, 2013 at 1:35 PM, Matthew Knepley wrote: > On Tue, Jan 8, 2013 at 12:33 PM, Petar Petrovic wrote: > >> Thank you very much. >> Can you tell me how to set the package that is used for >> MatCholeskyFactor? >> I need it to run for a sparse matrix so I have tried using: >> >> #ifdef PETSC_HAVE_CHOLMOD >> ... >> #endif >> >> but the code inside doesn't run. 
Does this mean that the package is not >> installed or am I supposed to do something else, use some command line >> options or something? Sorry these are very basic questions, but I am new to >> PETSc and I cannot seem to find examples for this. >> > > You need to activate 3rd party packages during configure, e.g. > --download-umfpack. > > http://www.mcs.anl.gov/petsc/documentation/installation.html > > Matt > > >> On Tue, Jan 8, 2013 at 12:24 PM, Matthew Knepley wrote: >> >>> http://www.mcs.anl.gov/petsc/documentation/linearsolvertable.html >>> >>> Matt >>> >>> >>> On Tue, Jan 8, 2013 at 11:21 AM, Petar Petrovic wrote: >>> >>>> Hello, >>>> Can you please tell me for which type of matrices can I run Cholesky >>>> factorization (MatCholeskyFactor) ? Can it be applied on sparse matrices, >>>> e.g. MATAIJ ? >>>> Many thanks >>>> >>> >>> >>> >>> -- >>> What most experimenters take for granted before they begin their >>> experiments is infinitely more interesting than any results to which their >>> experiments lead. >>> -- Norbert Wiener >>> >> >> > > > -- > What most experimenters take for granted before they begin their > experiments is infinitely more interesting than any results to which their > experiments lead. > -- Norbert Wiener > -------------- next part -------------- An HTML attachment was scrubbed... URL: From jedbrown at mcs.anl.gov Tue Jan 8 14:33:37 2013 From: jedbrown at mcs.anl.gov (Jed Brown) Date: Tue, 8 Jan 2013 14:33:37 -0600 Subject: [petsc-users] Cholesky factorization In-Reply-To: References: Message-ID: On Tue, Jan 8, 2013 at 2:25 PM, Petar Petrovic wrote: > Thanks, but is there any way to check if the package is already installed? > My problem is that I run the PETSc installation someone already set up and > I don't have the access to change it. I have tried running ./configure > --help in the directory PETSc is installed, but it just reports the error > because I don't have the access permition. > Look at the (a) the end of configure.log or (b) $PETSC_DIR/$PETSC_ARCH/include/petscconf.h > Also, sorry I didn't understand, are the directives in the code enough or > do I have to run my mpiexec command with additional options? > You don't need command line flags. > > > On Tue, Jan 8, 2013 at 1:35 PM, Matthew Knepley wrote: > >> On Tue, Jan 8, 2013 at 12:33 PM, Petar Petrovic wrote: >> >>> Thank you very much. >>> Can you tell me how to set the package that is used for >>> MatCholeskyFactor? >>> I need it to run for a sparse matrix so I have tried using: >>> >>> #ifdef PETSC_HAVE_CHOLMOD >>> ... >>> #endif >>> >>> but the code inside doesn't run. Does this mean that the package is not >>> installed or am I supposed to do something else, use some command line >>> options or something? Sorry these are very basic questions, but I am new to >>> PETSc and I cannot seem to find examples for this. >>> >> >> You need to activate 3rd party packages during configure, e.g. >> --download-umfpack. >> >> http://www.mcs.anl.gov/petsc/documentation/installation.html >> >> Matt >> >> >>> On Tue, Jan 8, 2013 at 12:24 PM, Matthew Knepley wrote: >>> >>>> http://www.mcs.anl.gov/petsc/documentation/linearsolvertable.html >>>> >>>> Matt >>>> >>>> >>>> On Tue, Jan 8, 2013 at 11:21 AM, Petar Petrovic >>> > wrote: >>>> >>>>> Hello, >>>>> Can you please tell me for which type of matrices can I run Cholesky >>>>> factorization (MatCholeskyFactor) ? Can it be applied on sparse matrices, >>>>> e.g. MATAIJ ? 
>>>>> Many thanks >>>>> >>>> >>>> >>>> >>>> -- >>>> What most experimenters take for granted before they begin their >>>> experiments is infinitely more interesting than any results to which their >>>> experiments lead. >>>> -- Norbert Wiener >>>> >>> >>> >> >> >> -- >> What most experimenters take for granted before they begin their >> experiments is infinitely more interesting than any results to which their >> experiments lead. >> -- Norbert Wiener >> > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From jedbrown at mcs.anl.gov Tue Jan 8 16:58:05 2013 From: jedbrown at mcs.anl.gov (Jed Brown) Date: Tue, 8 Jan 2013 16:58:05 -0600 Subject: [petsc-users] Example for MatMatMultSymbolic In-Reply-To: <73644807AA76E34EA8A5562CA279021E05C54CE3@urz-mbx-3.urz.unibas.ch> References: <73644807AA76E34EA8A5562CA279021E05C54CE3@urz-mbx-3.urz.unibas.ch> Message-ID: Typically you would use this, for which there are a few examples (at the bottom of the page). http://www.mcs.anl.gov/petsc/petsc-dev/docs/manualpages/Mat/MatMatMult.html If that doesn't fix your problem, please set up a test case or at least give us verbose diagnostics. On Tue, Jan 8, 2013 at 8:26 AM, Johannes Huber wrote: > Hi, > I have a valgrind "invalid read" in MatGetBrowsOfAoCols_MPIAIJ in the lines > rvalues = gen_from->values; /* holds the length of receiving row */ > svalues = gen_to->values; /* holds the length of sending row */ > (and others, but that are the first). This function is called indirectly > via MatMatMultSymbolic. > All invalid reads apear in MatGetBrowsOfAoCols_MPIAIJ. > The calling sequence for the matrix factors are > MatCreate(PETSC_COMM_WORLD,A); > MatSetSizes(A,NumLocRows,NumLocCols,PETSC_DETERMINE,PETSC_DETERMINE); > MatSetType(A,MATMPIAIJ); > MatMPIAIJSetPreallocation(A,PETSC_NULL,nnz,PETSC_NULL,onz); > MatSetValues(...); > MatAssemblyBegin(...) > MatAssemblyEnd(...) > and then a call to MatMatMultSymbolic. > I want the result matrix to be created. In the documentation I see, that > this happens, if I pass MAT_REUSE_MATRIX as scall, but how can I pass the > value, I don't the parameter. > Where can I find an example for MatMatMultSymbolic / MatMatMultNumeric? > Thanks, > Hannes > -------------- next part -------------- An HTML attachment was scrubbed... URL: From dominik at itis.ethz.ch Wed Jan 9 02:24:36 2013 From: dominik at itis.ethz.ch (Dominik Szczerba) Date: Wed, 9 Jan 2013 09:24:36 +0100 Subject: [petsc-users] Petsc 3.3.p5 with HYPRE 2.9.0b Message-ID: Hi, I _have_ to use hypre-2.9.0b with petsc 3.3.x (because 2.8.0b compilation on Windows is broken due to issues with the MSVC 10 compiler). I am aware of both 2.8.0b being officially used by petsc as well as of Jed mentioning building problems in hypre-2.9.0b. I indeed can not build 2.9.0b on linux due to errors but - ironically - it is the only version I can currently build natively on Windows. Therefore I would like to know if the decision to officially use 2.8.0b in petsc 3.3.x is dictated exclusively by building infrastructure issues or also by some code / API / functionality related issues. Thanks for any comments. Dominik From jedbrown at mcs.anl.gov Wed Jan 9 06:40:39 2013 From: jedbrown at mcs.anl.gov (Jed Brown) Date: Wed, 9 Jan 2013 06:40:39 -0600 Subject: [petsc-users] Petsc 3.3.p5 with HYPRE 2.9.0b In-Reply-To: References: Message-ID: Petsc-3.3 was released half a year before hypre-2.9. The hypre developers say they noticed the bug shortly after publishing the tarball, but did not make a patch release. 
You can either (a) talk them into giving you/us the patch or (b) publishing a patch release with a fixed build system, or (c) fix the build system yourself and send us the patch. As soon as we have a working hypre-2.9, we'll include it in petsc-dev (and petsc-3.4, which we hope to release in a couple months). On Wed, Jan 9, 2013 at 2:24 AM, Dominik Szczerba wrote: > Hi, > > I _have_ to use hypre-2.9.0b with petsc 3.3.x (because 2.8.0b > compilation on Windows is broken due to issues with the MSVC 10 > compiler). I am aware of both 2.8.0b being officially used by petsc as > well as of Jed mentioning building problems in hypre-2.9.0b. I indeed > can not build 2.9.0b on linux due to errors but - ironically - it is > the only version I can currently build natively on Windows. Therefore > I would like to know if the decision to officially use 2.8.0b in petsc > 3.3.x is dictated exclusively by building infrastructure issues or > also by some code / API / functionality related issues. Thanks for any > comments. > > Dominik > -------------- next part -------------- An HTML attachment was scrubbed... URL: From dominik at itis.ethz.ch Wed Jan 9 07:51:03 2013 From: dominik at itis.ethz.ch (Dominik Szczerba) Date: Wed, 9 Jan 2013 14:51:03 +0100 Subject: [petsc-users] Petsc 3.3.p5 with HYPRE 2.9.0b In-Reply-To: References: Message-ID: On Wed, Jan 9, 2013 at 1:40 PM, Jed Brown wrote: > Petsc-3.3 was released half a year before hypre-2.9. The hypre developers > say they noticed the bug shortly after publishing the tarball, but did not > make a patch release. You can either (a) talk them into giving you/us the > patch or (b) publishing a patch release with a fixed build system, or (c) > fix the build system yourself and send us the patch. As soon as we have a > working hypre-2.9, we'll include it in petsc-dev (and petsc-3.4, which we > hope to release in a couple months). I will look into solving the problem, see how far I can get. But do I get it right that the only problem is building, not any code/functionality issues? D. From jedbrown at mcs.anl.gov Wed Jan 9 08:02:00 2013 From: jedbrown at mcs.anl.gov (Jed Brown) Date: Wed, 9 Jan 2013 08:02:00 -0600 Subject: [petsc-users] Petsc 3.3.p5 with HYPRE 2.9.0b In-Reply-To: References: Message-ID: On Wed, Jan 9, 2013 at 7:51 AM, Dominik Szczerba wrote: > I will look into solving the problem, see how far I can get. > But do I get it right that the only problem is building, not any > code/functionality issues? > I don't know. If we need to update something in PETSc's interface to Hypre, we'll do it. I reported the build problem and they said they had patched it, but did not send me a patch. I encourage more people to ask them to publish (read-only) their source repository. Then we could stop guessing. -------------- next part -------------- An HTML attachment was scrubbed... URL: From dominik at itis.ethz.ch Wed Jan 9 08:06:38 2013 From: dominik at itis.ethz.ch (Dominik Szczerba) Date: Wed, 9 Jan 2013 15:06:38 +0100 Subject: [petsc-users] Petsc 3.3.p5 with HYPRE 2.9.0b In-Reply-To: References: Message-ID: On Wed, Jan 9, 2013 at 3:02 PM, Jed Brown wrote: > On Wed, Jan 9, 2013 at 7:51 AM, Dominik Szczerba > wrote: >> >> I will look into solving the problem, see how far I can get. >> But do I get it right that the only problem is building, not any >> code/functionality issues? > > > I don't know. If we need to update something in PETSc's interface to Hypre, > we'll do it. 
I reported the build problem and they said they had patched it, > but did not send me a patch. I encourage more people to ask them to publish > (read-only) their source repository. Then we could stop guessing. Yes, I have contacted them already on that issue. Dominik From rui.silva at uam.es Wed Jan 9 09:04:55 2013 From: rui.silva at uam.es (Rui Emanuel Ferreira da Silva) Date: Wed, 09 Jan 2013 16:04:55 +0100 Subject: [petsc-users] Convergence problem with Krylov subspace methods Message-ID: <20130109160455.Horde.3mIgTyKzwZxQ7YcXOr0U9WA@webmail.uam.es> To whom it may concern, I am writing to you to ask some technical problems that I am dealing with the use of PETSc. The problem that I need to solve is a system of linear equations (Ax=b). The matrix A is a banded matrix (five-point matrix) resulting from the discretization of a second derivative in a 2D space. In other words, it is a pentadiagonal matrix, but the two outer bands are separated from the three central bands. This matrix is complex and is not hermitian (its actual shape is A= H - E - i*delta, where H is a hermitian five-point matrix and E and delta a real scalar). Its size is 1.8e7 x 1.8e7, thus the problem cannot be solved with direct methods but with iterative methods. For negative values of E, I have been able to solve the system using PETSc with a Krylov subspace method, with no problems. But for positive values, where the spectrum is quasi-degenerate, I cannot solve it. I have tried the following iterative methods: --> GMRES with the ILU preconditioner --> BICG --> BCGS and convergence was not reached in any of the cases. I have run out of ideas, so my question is: is it possible that you suggest me any method which I could use to deal with such a problem? Please forgive the intrussion if this question is not adequate in this email list. Thank you very much in advance, Rui Silva. ------------------- Rui Silva EMTCCM (European Master in Theoretical Chemistry and Computational Modelling) UAM, Departamento de Qu?mica, M?dulo 13 CAMPUS http://www.uam.es/departamentos/ciencias/quimica/spline/index.html ------------------- From bourdin at lsu.edu Wed Jan 9 09:37:21 2013 From: bourdin at lsu.edu (Blaise A Bourdin) Date: Wed, 9 Jan 2013 15:37:21 +0000 Subject: [petsc-users] binary vtk viewer DMDA Message-ID: <6C4AE5741C2B874282F58DA136A2CCB8D26AF7@BL2PRD0612MB663.namprd06.prod.outlook.com> Hi, I am looking at the documentation and the examples looking for a simple illustration of how to use the new vtk binary viewers for structured data defined by a DMDA, but can't find any straightforward example. Is there a simple example that I am missing? When I try PetscViewerVTKAddField(VTKviewer,(PetscObject) dmda1,DMDAVTKWriteAll,PETSC_VTK_POINT_FIELD,(PetscObject) p);CHKERRQ(ierr); I get a compilation time error: TestVTK.c:53: error: ?DMDAVTKWriteAll? was not declared in this scope indeed, DMDAVTKWriteAll is defined in a private header. Is this the way it is supposed to be? Is the xml file describing the content of the binary files generated automatically or do I need to take care of it by myself? I am using petsc-3.3, latest changeset. Regards, Blaise -- Department of Mathematics and Center for Computation & Technology Louisiana State University, Baton Rouge, LA 70803, USA Tel. 
+1 (225) 578 1612, Fax +1 (225) 578 4276 http://www.math.lsu.edu/~bourdin From jedbrown at mcs.anl.gov Wed Jan 9 09:51:24 2013 From: jedbrown at mcs.anl.gov (Jed Brown) Date: Wed, 9 Jan 2013 09:51:24 -0600 Subject: [petsc-users] binary vtk viewer DMDA In-Reply-To: <6C4AE5741C2B874282F58DA136A2CCB8D26AF7@BL2PRD0612MB663.namprd06.prod.outlook.com> References: <6C4AE5741C2B874282F58DA136A2CCB8D26AF7@BL2PRD0612MB663.namprd06.prod.outlook.com> Message-ID: None of that crazy "developer" nonsense is need for users. Just do this: PetscViewer viewer; /* file name extension sets format by default, see also PetscViewerSetFormat(viewer,PETSC_VIEWER_VTK_VTS) */ ierr = PetscViewerVTKOpen(comm,"yourfile.vts",FILE_MODE_WRITE,&viewer);CHKERRQ(ierr); ierr = VecView(X,viewer);CHKERRQ(ierr); ierr = PetscViewerDestroy(&viewer);CHKERRQ(ierr); When using TS, you can do -ts_monitor_draw_solution_vtk 'filename-%03D.vts' to save each time step to a numbered binary file (ready to animate in paraview or visit). On Wed, Jan 9, 2013 at 9:37 AM, Blaise A Bourdin wrote: > Hi, > > I am looking at the documentation and the examples looking for a simple > illustration of how to use the new vtk binary viewers for structured data > defined by a DMDA, but can't find any straightforward example. Is there a > simple example that I am missing? > > When I try > PetscViewerVTKAddField(VTKviewer,(PetscObject) > dmda1,DMDAVTKWriteAll,PETSC_VTK_POINT_FIELD,(PetscObject) p);CHKERRQ(ierr); > I get a compilation time error: > TestVTK.c:53: error: ?DMDAVTKWriteAll? was not declared in this scope > indeed, DMDAVTKWriteAll is defined in a private header. Is this the way it > is supposed to be? > > Is the xml file describing the content of the binary files generated > automatically or do I need to take care of it by myself? > > I am using petsc-3.3, latest changeset. > > Regards, > Blaise > -- > Department of Mathematics and Center for Computation & Technology > Louisiana State University, Baton Rouge, LA 70803, USA > Tel. +1 (225) 578 1612, Fax +1 (225) 578 4276 > http://www.math.lsu.edu/~bourdin > > > > > > > > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From knepley at gmail.com Wed Jan 9 10:10:17 2013 From: knepley at gmail.com (Matthew Knepley) Date: Wed, 9 Jan 2013 10:10:17 -0600 Subject: [petsc-users] Convergence problem with Krylov subspace methods In-Reply-To: <20130109160455.Horde.3mIgTyKzwZxQ7YcXOr0U9WA@webmail.uam.es> References: <20130109160455.Horde.3mIgTyKzwZxQ7YcXOr0U9WA@webmail.uam.es> Message-ID: On Wed, Jan 9, 2013 at 9:04 AM, Rui Emanuel Ferreira da Silva < rui.silva at uam.es> wrote: > To whom it may concern, > > I am writing to you to ask some technical problems that I am dealing with > the use of PETSc. > > The problem that I need to solve is a system of linear equations (Ax=b). > The matrix A is a banded matrix (five-point matrix) resulting from the > discretization of a second derivative in a 2D space. In other words, it is > a pentadiagonal matrix, but the two outer bands are separated from the > three central bands. > > This matrix is complex and is not hermitian (its actual shape is A= H - E > - i*delta, where H is a hermitian five-point matrix and E and delta a real > scalar). Its size is 1.8e7 x 1.8e7, thus the problem cannot be solved with > direct methods but with iterative methods. > > For negative values of E, I have been able to solve the system using PETSc > with a Krylov subspace method, with no problems. 
> But for positive values, where the spectrum is quasi-degenerate, I cannot > solve it. I have tried the following iterative methods: > > --> GMRES with the ILU preconditioner > --> BICG > --> BCGS > > and convergence was not reached in any of the cases. > > I have run out of ideas, so my question is: is it possible that you > suggest me any method which I could use to deal with such a problem? > Helmholtz is very difficult for iterative methods: http://www.maths.dur.ac.uk/events/Meetings/LMS/2010/NAMP/Talks/gander.pdf https://www.google.com/url?sa=t&rct=j&q=&esrc=s&source=web&cd=1&ved=0CDUQFjAA&url=http%3A%2F%2Fwww.unige.ch%2F~gander%2FPreprints%2FHelmholtzReview.ps.gz&ei=rZXtUJqwFaOs2wXsoIHYBw&usg=AFQjCNEhaFnD7CrzZKF16IZWWQ1nVtdUGg&bvm=bv.1357316858,d.b2I First, you ought to try MUMPS for your problem in case it fits in the memory you have. Thanks, Matt > Please forgive the intrussion if this question is not adequate in this > email list. > > Thank you very much in advance, > Rui Silva. > > > ------------------- > Rui Silva > EMTCCM (European Master in Theoretical Chemistry and Computational > Modelling) > UAM, Departamento de Qu?mica, M?dulo 13 > CAMPUS http://www.uam.es/**departamentos/ciencias/** > quimica/spline/index.html > ------------------- > > -- What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. -- Norbert Wiener -------------- next part -------------- An HTML attachment was scrubbed... URL: From bourdin at lsu.edu Wed Jan 9 10:10:50 2013 From: bourdin at lsu.edu (Blaise A Bourdin) Date: Wed, 9 Jan 2013 16:10:50 +0000 Subject: [petsc-users] binary vtk viewer DMDA In-Reply-To: References: <6C4AE5741C2B874282F58DA136A2CCB8D26AF7@BL2PRD0612MB663.namprd06.prod.outlook.com> Message-ID: <6C4AE5741C2B874282F58DA136A2CCB8D26C9E@BL2PRD0612MB663.namprd06.prod.outlook.com> That's a good start indeed. Is there any way to save files defined on different DMDA (same grid but different number of dof). When I try to do that, I get the following error message: MacBook-Pro:VTK blaise$ ./TestVTK [0]PETSC ERROR: --------------------- Error Message ------------------------------------ [0]PETSC ERROR: Arguments are incompatible! [0]PETSC ERROR: Cannot write a field from more than one grid to the same VTK file! [0]PETSC ERROR: ------------------------------------------------------------------------ [0]PETSC ERROR: Petsc Release Version 3.3.0, Patch 5, unknown [0]PETSC ERROR: See docs/changes/index.html for recent updates. [0]PETSC ERROR: See docs/faq.html for hints about trouble shooting. [0]PETSC ERROR: See docs/index.html for manual pages. 
[0]PETSC ERROR: ------------------------------------------------------------------------ [0]PETSC ERROR: ./TestVTK on a Darwin-gc named MacBook-Pro.local by blaise Wed Jan 9 10:09:37 2013 [0]PETSC ERROR: Libraries linked from /opt/HPC/petsc-3.3/Darwin-gcc4.2-mef90-g/lib [0]PETSC ERROR: Configure run at Thu Jan 3 15:42:04 2013 [0]PETSC ERROR: Configure options --download-boost=1 --download-chaco=1 --download-exodusii=/opt/HPC/src/tarball/exodusii-5.22b.tgz --download-hdf5=1 --download-metis=1 --download-netcdf=1 --download-parmetis=1 --download-sowing=1 --download-triangle=1 --download-yaml=1 --with-clanguage=C++ --with-cmake=cmake --with-debugging=1 --with-fortran-datatypes --with-gnu-compilers=1 --with-mpi-dir=/opt/HPC/mpich2-1.4.1p1-gcc4.2 --with-pic --with-shared-libraries=1 --with-sieve --with-sieve-memory-logging --with-x11=1 PETSC_ARCH=Darwin-gcc4.2-mef90-g [0]PETSC ERROR: ------------------------------------------------------------------------ [0]PETSC ERROR: PetscViewerVTKAddField_VTK() line 126 in /opt/HPC/petsc-3.3/src/sys/viewer/impls/vtk/vtkv.c [0]PETSC ERROR: PetscViewerVTKAddField() line 32 in /opt/HPC/petsc-3.3/src/sys/viewer/impls/vtk/vtkv.c [0]PETSC ERROR: VecView_MPI_DA() line 531 in /opt/HPC/petsc-3.3/src/dm/impls/da/gr2.c [0]PETSC ERROR: VecView() line 776 in /opt/HPC/petsc-3.3/src/vec/vec/interface/vector.c [0]PETSC ERROR: main() line 46 in TestVTK.c application called MPI_Abort(MPI_COMM_WORLD, 75) - process 0 [unset]: aborting job: application called MPI_Abort(MPI_COMM_WORLD, 75) - process 0 Blaise On Jan 9, 2013, at 9:51 AM, Jed Brown > wrote: None of that crazy "developer" nonsense is need for users. Just do this: PetscViewer viewer; /* file name extension sets format by default, see also PetscViewerSetFormat(viewer,PETSC_VIEWER_VTK_VTS) */ ierr = PetscViewerVTKOpen(comm,"yourfile.vts",FILE_MODE_WRITE,&viewer);CHKERRQ(ierr); ierr = VecView(X,viewer);CHKERRQ(ierr); ierr = PetscViewerDestroy(&viewer);CHKERRQ(ierr); When using TS, you can do -ts_monitor_draw_solution_vtk 'filename-%03D.vts' to save each time step to a numbered binary file (ready to animate in paraview or visit). On Wed, Jan 9, 2013 at 9:37 AM, Blaise A Bourdin > wrote: Hi, I am looking at the documentation and the examples looking for a simple illustration of how to use the new vtk binary viewers for structured data defined by a DMDA, but can't find any straightforward example. Is there a simple example that I am missing? When I try PetscViewerVTKAddField(VTKviewer,(PetscObject) dmda1,DMDAVTKWriteAll,PETSC_VTK_POINT_FIELD,(PetscObject) p);CHKERRQ(ierr); I get a compilation time error: TestVTK.c:53: error: ?DMDAVTKWriteAll? was not declared in this scope indeed, DMDAVTKWriteAll is defined in a private header. Is this the way it is supposed to be? Is the xml file describing the content of the binary files generated automatically or do I need to take care of it by myself? I am using petsc-3.3, latest changeset. Regards, Blaise -- Department of Mathematics and Center for Computation & Technology Louisiana State University, Baton Rouge, LA 70803, USA Tel. +1 (225) 578 1612, Fax +1 (225) 578 4276 http://www.math.lsu.edu/~bourdin -- Department of Mathematics and Center for Computation & Technology Louisiana State University, Baton Rouge, LA 70803, USA Tel. +1 (225) 578 1612, Fax +1 (225) 578 4276 http://www.math.lsu.edu/~bourdin -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From jedbrown at mcs.anl.gov Wed Jan 9 10:14:17 2013 From: jedbrown at mcs.anl.gov (Jed Brown) Date: Wed, 9 Jan 2013 10:14:17 -0600 Subject: [petsc-users] binary vtk viewer DMDA In-Reply-To: <6C4AE5741C2B874282F58DA136A2CCB8D26C9E@BL2PRD0612MB663.namprd06.prod.outlook.com> References: <6C4AE5741C2B874282F58DA136A2CCB8D26AF7@BL2PRD0612MB663.namprd06.prod.outlook.com> <6C4AE5741C2B874282F58DA136A2CCB8D26C9E@BL2PRD0612MB663.namprd06.prod.outlook.com> Message-ID: Unfortunately, the VTK format does not support multiple entries in the same file. I have no idea why they chose to make the format that way. You can ask them to remove this ridiculous restriction, but until then, you have to write separate files or use a different format. On Wed, Jan 9, 2013 at 10:10 AM, Blaise A Bourdin wrote: > That's a good start indeed. Is there any way to save files defined on > different DMDA (same grid but different number of dof). When I try to do > that, I get the following error message: > MacBook-Pro:VTK blaise$ ./TestVTK > [0]PETSC ERROR: --------------------- Error Message > ------------------------------------ > [0]PETSC ERROR: Arguments are incompatible! > [0]PETSC ERROR: Cannot write a field from more than one grid to the same > VTK file! > [0]PETSC ERROR: > ------------------------------------------------------------------------ > [0]PETSC ERROR: Petsc Release Version 3.3.0, Patch 5, unknown > [0]PETSC ERROR: See docs/changes/index.html for recent updates. > [0]PETSC ERROR: See docs/faq.html for hints about trouble shooting. > [0]PETSC ERROR: See docs/index.html for manual pages. > [0]PETSC ERROR: > ------------------------------------------------------------------------ > [0]PETSC ERROR: ./TestVTK on a Darwin-gc named MacBook-Pro.local by blaise > Wed Jan 9 10:09:37 2013 > [0]PETSC ERROR: Libraries linked from > /opt/HPC/petsc-3.3/Darwin-gcc4.2-mef90-g/lib > [0]PETSC ERROR: Configure run at Thu Jan 3 15:42:04 2013 > [0]PETSC ERROR: Configure options --download-boost=1 --download-chaco=1 > --download-exodusii=/opt/HPC/src/tarball/exodusii-5.22b.tgz > --download-hdf5=1 --download-metis=1 --download-netcdf=1 > --download-parmetis=1 --download-sowing=1 --download-triangle=1 > --download-yaml=1 --with-clanguage=C++ --with-cmake=cmake > --with-debugging=1 --with-fortran-datatypes --with-gnu-compilers=1 > --with-mpi-dir=/opt/HPC/mpich2-1.4.1p1-gcc4.2 --with-pic > --with-shared-libraries=1 --with-sieve --with-sieve-memory-logging > --with-x11=1 PETSC_ARCH=Darwin-gcc4.2-mef90-g > [0]PETSC ERROR: > ------------------------------------------------------------------------ > [0]PETSC ERROR: PetscViewerVTKAddField_VTK() line 126 in > /opt/HPC/petsc-3.3/src/sys/viewer/impls/vtk/vtkv.c > [0]PETSC ERROR: PetscViewerVTKAddField() line 32 in > /opt/HPC/petsc-3.3/src/sys/viewer/impls/vtk/vtkv.c > [0]PETSC ERROR: VecView_MPI_DA() line 531 in > /opt/HPC/petsc-3.3/src/dm/impls/da/gr2.c > [0]PETSC ERROR: VecView() line 776 in > /opt/HPC/petsc-3.3/src/vec/vec/interface/vector.c > [0]PETSC ERROR: main() line 46 in TestVTK.c > application called MPI_Abort(MPI_COMM_WORLD, 75) - process 0 > [unset]: aborting job: > application called MPI_Abort(MPI_COMM_WORLD, 75) - process 0 > > Blaise > > On Jan 9, 2013, at 9:51 AM, Jed Brown > wrote: > > None of that crazy "developer" nonsense is need for users. 
Just do this: > > PetscViewer viewer; > /* file name extension sets format by default, see also > PetscViewerSetFormat(viewer,PETSC_VIEWER_VTK_VTS) */ > ierr = > PetscViewerVTKOpen(comm,"yourfile.vts",FILE_MODE_WRITE,&viewer);CHKERRQ(ierr); > ierr = VecView(X,viewer);CHKERRQ(ierr); > ierr = PetscViewerDestroy(&viewer);CHKERRQ(ierr); > > When using TS, you can do -ts_monitor_draw_solution_vtk > 'filename-%03D.vts' to save each time step to a numbered binary file (ready > to animate in paraview or visit). > > > On Wed, Jan 9, 2013 at 9:37 AM, Blaise A Bourdin wrote: > >> Hi, >> >> I am looking at the documentation and the examples looking for a simple >> illustration of how to use the new vtk binary viewers for structured data >> defined by a DMDA, but can't find any straightforward example. Is there a >> simple example that I am missing? >> >> When I try >> PetscViewerVTKAddField(VTKviewer,(PetscObject) >> dmda1,DMDAVTKWriteAll,PETSC_VTK_POINT_FIELD,(PetscObject) p);CHKERRQ(ierr); >> I get a compilation time error: >> TestVTK.c:53: error: ?DMDAVTKWriteAll? was not declared in this scope >> indeed, DMDAVTKWriteAll is defined in a private header. Is this the way >> it is supposed to be? >> >> Is the xml file describing the content of the binary files generated >> automatically or do I need to take care of it by myself? >> >> I am using petsc-3.3, latest changeset. >> >> Regards, >> Blaise >> -- >> Department of Mathematics and Center for Computation & Technology >> Louisiana State University, Baton Rouge, LA 70803, USA >> Tel. +1 (225) 578 1612, Fax +1 (225) 578 4276 >> http://www.math.lsu.edu/~bourdin >> >> >> >> >> >> >> >> >> > > -- > Department of Mathematics and Center for Computation & Technology > Louisiana State University, Baton Rouge, LA 70803, USA > Tel. +1 (225) 578 1612, Fax +1 (225) 578 4276 > http://www.math.lsu.edu/~bourdin > > > > > > > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From knepley at gmail.com Wed Jan 9 10:20:22 2013 From: knepley at gmail.com (Matthew Knepley) Date: Wed, 9 Jan 2013 10:20:22 -0600 Subject: [petsc-users] binary vtk viewer DMDA In-Reply-To: <6C4AE5741C2B874282F58DA136A2CCB8D26C9E@BL2PRD0612MB663.namprd06.prod.outlook.com> References: <6C4AE5741C2B874282F58DA136A2CCB8D26AF7@BL2PRD0612MB663.namprd06.prod.outlook.com> <6C4AE5741C2B874282F58DA136A2CCB8D26C9E@BL2PRD0612MB663.namprd06.prod.outlook.com> Message-ID: On Wed, Jan 9, 2013 at 10:10 AM, Blaise A Bourdin wrote: > That's a good start indeed. Is there any way to save files defined on > different DMDA (same grid but different number of dof). When I try to do > that, I get the following error message: > Hmm, that would mean introducing a structural comparison rather than an identity. Matt > MacBook-Pro:VTK blaise$ ./TestVTK > [0]PETSC ERROR: --------------------- Error Message > ------------------------------------ > [0]PETSC ERROR: Arguments are incompatible! > [0]PETSC ERROR: Cannot write a field from more than one grid to the same > VTK file! > [0]PETSC ERROR: > ------------------------------------------------------------------------ > [0]PETSC ERROR: Petsc Release Version 3.3.0, Patch 5, unknown > [0]PETSC ERROR: See docs/changes/index.html for recent updates. > [0]PETSC ERROR: See docs/faq.html for hints about trouble shooting. > [0]PETSC ERROR: See docs/index.html for manual pages. 
> [0]PETSC ERROR: > ------------------------------------------------------------------------ > [0]PETSC ERROR: ./TestVTK on a Darwin-gc named MacBook-Pro.local by blaise > Wed Jan 9 10:09:37 2013 > [0]PETSC ERROR: Libraries linked from > /opt/HPC/petsc-3.3/Darwin-gcc4.2-mef90-g/lib > [0]PETSC ERROR: Configure run at Thu Jan 3 15:42:04 2013 > [0]PETSC ERROR: Configure options --download-boost=1 --download-chaco=1 > --download-exodusii=/opt/HPC/src/tarball/exodusii-5.22b.tgz > --download-hdf5=1 --download-metis=1 --download-netcdf=1 > --download-parmetis=1 --download-sowing=1 --download-triangle=1 > --download-yaml=1 --with-clanguage=C++ --with-cmake=cmake > --with-debugging=1 --with-fortran-datatypes --with-gnu-compilers=1 > --with-mpi-dir=/opt/HPC/mpich2-1.4.1p1-gcc4.2 --with-pic > --with-shared-libraries=1 --with-sieve --with-sieve-memory-logging > --with-x11=1 PETSC_ARCH=Darwin-gcc4.2-mef90-g > [0]PETSC ERROR: > ------------------------------------------------------------------------ > [0]PETSC ERROR: PetscViewerVTKAddField_VTK() line 126 in > /opt/HPC/petsc-3.3/src/sys/viewer/impls/vtk/vtkv.c > [0]PETSC ERROR: PetscViewerVTKAddField() line 32 in > /opt/HPC/petsc-3.3/src/sys/viewer/impls/vtk/vtkv.c > [0]PETSC ERROR: VecView_MPI_DA() line 531 in > /opt/HPC/petsc-3.3/src/dm/impls/da/gr2.c > [0]PETSC ERROR: VecView() line 776 in > /opt/HPC/petsc-3.3/src/vec/vec/interface/vector.c > [0]PETSC ERROR: main() line 46 in TestVTK.c > application called MPI_Abort(MPI_COMM_WORLD, 75) - process 0 > [unset]: aborting job: > application called MPI_Abort(MPI_COMM_WORLD, 75) - process 0 > > Blaise > > On Jan 9, 2013, at 9:51 AM, Jed Brown > wrote: > > None of that crazy "developer" nonsense is need for users. Just do this: > > PetscViewer viewer; > /* file name extension sets format by default, see also > PetscViewerSetFormat(viewer,PETSC_VIEWER_VTK_VTS) */ > ierr = > PetscViewerVTKOpen(comm,"yourfile.vts",FILE_MODE_WRITE,&viewer);CHKERRQ(ierr); > ierr = VecView(X,viewer);CHKERRQ(ierr); > ierr = PetscViewerDestroy(&viewer);CHKERRQ(ierr); > > When using TS, you can do -ts_monitor_draw_solution_vtk > 'filename-%03D.vts' to save each time step to a numbered binary file (ready > to animate in paraview or visit). > > > On Wed, Jan 9, 2013 at 9:37 AM, Blaise A Bourdin wrote: > >> Hi, >> >> I am looking at the documentation and the examples looking for a simple >> illustration of how to use the new vtk binary viewers for structured data >> defined by a DMDA, but can't find any straightforward example. Is there a >> simple example that I am missing? >> >> When I try >> PetscViewerVTKAddField(VTKviewer,(PetscObject) >> dmda1,DMDAVTKWriteAll,PETSC_VTK_POINT_FIELD,(PetscObject) p);CHKERRQ(ierr); >> I get a compilation time error: >> TestVTK.c:53: error: ?DMDAVTKWriteAll? was not declared in this scope >> indeed, DMDAVTKWriteAll is defined in a private header. Is this the way >> it is supposed to be? >> >> Is the xml file describing the content of the binary files generated >> automatically or do I need to take care of it by myself? >> >> I am using petsc-3.3, latest changeset. >> >> Regards, >> Blaise >> -- >> Department of Mathematics and Center for Computation & Technology >> Louisiana State University, Baton Rouge, LA 70803, USA >> Tel. +1 (225) 578 1612, Fax +1 (225) 578 4276 >> http://www.math.lsu.edu/~bourdin >> >> >> >> >> >> >> >> >> > > -- > Department of Mathematics and Center for Computation & Technology > Louisiana State University, Baton Rouge, LA 70803, USA > Tel. 
+1 (225) 578 1612, Fax +1 (225) 578 4276 > http://www.math.lsu.edu/~bourdin > > > > > > > > -- What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. -- Norbert Wiener -------------- next part -------------- An HTML attachment was scrubbed... URL: From bourdin at lsu.edu Wed Jan 9 10:21:55 2013 From: bourdin at lsu.edu (Blaise A Bourdin) Date: Wed, 9 Jan 2013 16:21:55 +0000 Subject: [petsc-users] binary vtk viewer DMDA In-Reply-To: References: <6C4AE5741C2B874282F58DA136A2CCB8D26AF7@BL2PRD0612MB663.namprd06.prod.outlook.com> <6C4AE5741C2B874282F58DA136A2CCB8D26C9E@BL2PRD0612MB663.namprd06.prod.outlook.com> Message-ID: <6C4AE5741C2B874282F58DA136A2CCB8D26D79@BL2PRD0612MB663.namprd06.prod.outlook.com> I can scatter each dof in a separate vec and save it, then. Thanks, Blaise On Jan 9, 2013, at 10:14 AM, Jed Brown > wrote: Unfortunately, the VTK format does not support multiple entries in the same file. I have no idea why they chose to make the format that way. You can ask them to remove this ridiculous restriction, but until then, you have to write separate files or use a different format. On Wed, Jan 9, 2013 at 10:10 AM, Blaise A Bourdin > wrote: That's a good start indeed. Is there any way to save files defined on different DMDA (same grid but different number of dof). When I try to do that, I get the following error message: MacBook-Pro:VTK blaise$ ./TestVTK [0]PETSC ERROR: --------------------- Error Message ------------------------------------ [0]PETSC ERROR: Arguments are incompatible! [0]PETSC ERROR: Cannot write a field from more than one grid to the same VTK file! [0]PETSC ERROR: ------------------------------------------------------------------------ [0]PETSC ERROR: Petsc Release Version 3.3.0, Patch 5, unknown [0]PETSC ERROR: See docs/changes/index.html for recent updates. [0]PETSC ERROR: See docs/faq.html for hints about trouble shooting. [0]PETSC ERROR: See docs/index.html for manual pages. 
[0]PETSC ERROR: ------------------------------------------------------------------------ [0]PETSC ERROR: ./TestVTK on a Darwin-gc named MacBook-Pro.local by blaise Wed Jan 9 10:09:37 2013 [0]PETSC ERROR: Libraries linked from /opt/HPC/petsc-3.3/Darwin-gcc4.2-mef90-g/lib [0]PETSC ERROR: Configure run at Thu Jan 3 15:42:04 2013 [0]PETSC ERROR: Configure options --download-boost=1 --download-chaco=1 --download-exodusii=/opt/HPC/src/tarball/exodusii-5.22b.tgz --download-hdf5=1 --download-metis=1 --download-netcdf=1 --download-parmetis=1 --download-sowing=1 --download-triangle=1 --download-yaml=1 --with-clanguage=C++ --with-cmake=cmake --with-debugging=1 --with-fortran-datatypes --with-gnu-compilers=1 --with-mpi-dir=/opt/HPC/mpich2-1.4.1p1-gcc4.2 --with-pic --with-shared-libraries=1 --with-sieve --with-sieve-memory-logging --with-x11=1 PETSC_ARCH=Darwin-gcc4.2-mef90-g [0]PETSC ERROR: ------------------------------------------------------------------------ [0]PETSC ERROR: PetscViewerVTKAddField_VTK() line 126 in /opt/HPC/petsc-3.3/src/sys/viewer/impls/vtk/vtkv.c [0]PETSC ERROR: PetscViewerVTKAddField() line 32 in /opt/HPC/petsc-3.3/src/sys/viewer/impls/vtk/vtkv.c [0]PETSC ERROR: VecView_MPI_DA() line 531 in /opt/HPC/petsc-3.3/src/dm/impls/da/gr2.c [0]PETSC ERROR: VecView() line 776 in /opt/HPC/petsc-3.3/src/vec/vec/interface/vector.c [0]PETSC ERROR: main() line 46 in TestVTK.c application called MPI_Abort(MPI_COMM_WORLD, 75) - process 0 [unset]: aborting job: application called MPI_Abort(MPI_COMM_WORLD, 75) - process 0 Blaise On Jan 9, 2013, at 9:51 AM, Jed Brown > wrote: None of that crazy "developer" nonsense is need for users. Just do this: PetscViewer viewer; /* file name extension sets format by default, see also PetscViewerSetFormat(viewer,PETSC_VIEWER_VTK_VTS) */ ierr = PetscViewerVTKOpen(comm,"yourfile.vts",FILE_MODE_WRITE,&viewer);CHKERRQ(ierr); ierr = VecView(X,viewer);CHKERRQ(ierr); ierr = PetscViewerDestroy(&viewer);CHKERRQ(ierr); When using TS, you can do -ts_monitor_draw_solution_vtk 'filename-%03D.vts' to save each time step to a numbered binary file (ready to animate in paraview or visit). On Wed, Jan 9, 2013 at 9:37 AM, Blaise A Bourdin > wrote: Hi, I am looking at the documentation and the examples looking for a simple illustration of how to use the new vtk binary viewers for structured data defined by a DMDA, but can't find any straightforward example. Is there a simple example that I am missing? When I try PetscViewerVTKAddField(VTKviewer,(PetscObject) dmda1,DMDAVTKWriteAll,PETSC_VTK_POINT_FIELD,(PetscObject) p);CHKERRQ(ierr); I get a compilation time error: TestVTK.c:53: error: ?DMDAVTKWriteAll? was not declared in this scope indeed, DMDAVTKWriteAll is defined in a private header. Is this the way it is supposed to be? Is the xml file describing the content of the binary files generated automatically or do I need to take care of it by myself? I am using petsc-3.3, latest changeset. Regards, Blaise -- Department of Mathematics and Center for Computation & Technology Louisiana State University, Baton Rouge, LA 70803, USA Tel. +1 (225) 578 1612, Fax +1 (225) 578 4276 http://www.math.lsu.edu/~bourdin -- Department of Mathematics and Center for Computation & Technology Louisiana State University, Baton Rouge, LA 70803, USA Tel. +1 (225) 578 1612, Fax +1 (225) 578 4276 http://www.math.lsu.edu/~bourdin -- Department of Mathematics and Center for Computation & Technology Louisiana State University, Baton Rouge, LA 70803, USA Tel. 
+1 (225) 578 1612, Fax +1 (225) 578 4276 http://www.math.lsu.edu/~bourdin -------------- next part -------------- An HTML attachment was scrubbed... URL: From jedbrown at mcs.anl.gov Wed Jan 9 10:23:15 2013 From: jedbrown at mcs.anl.gov (Jed Brown) Date: Wed, 9 Jan 2013 10:23:15 -0600 Subject: [petsc-users] binary vtk viewer DMDA In-Reply-To: References: <6C4AE5741C2B874282F58DA136A2CCB8D26AF7@BL2PRD0612MB663.namprd06.prod.outlook.com> <6C4AE5741C2B874282F58DA136A2CCB8D26C9E@BL2PRD0612MB663.namprd06.prod.outlook.com> Message-ID: On Wed, Jan 9, 2013 at 10:20 AM, Matthew Knepley wrote: > > On Wed, Jan 9, 2013 at 10:10 AM, Blaise A Bourdin wrote: > >> That's a good start indeed. Is there any way to save files defined on >> different DMDA (same grid but different number of dof). When I try to do >> that, I get the following error message: >> > > Hmm, that would mean introducing a structural comparison rather than an > identity. > Oh, good point. If someone implements a test for DM congruence, I can generalize the viewer. > > Matt > > >> MacBook-Pro:VTK blaise$ ./TestVTK >> [0]PETSC ERROR: --------------------- Error Message >> ------------------------------------ >> [0]PETSC ERROR: Arguments are incompatible! >> [0]PETSC ERROR: Cannot write a field from more than one grid to the same >> VTK file! >> [0]PETSC ERROR: >> ------------------------------------------------------------------------ >> [0]PETSC ERROR: Petsc Release Version 3.3.0, Patch 5, unknown >> [0]PETSC ERROR: See docs/changes/index.html for recent updates. >> [0]PETSC ERROR: See docs/faq.html for hints about trouble shooting. >> [0]PETSC ERROR: See docs/index.html for manual pages. >> [0]PETSC ERROR: >> ------------------------------------------------------------------------ >> [0]PETSC ERROR: ./TestVTK on a Darwin-gc named MacBook-Pro.local by >> blaise Wed Jan 9 10:09:37 2013 >> [0]PETSC ERROR: Libraries linked from >> /opt/HPC/petsc-3.3/Darwin-gcc4.2-mef90-g/lib >> [0]PETSC ERROR: Configure run at Thu Jan 3 15:42:04 2013 >> [0]PETSC ERROR: Configure options --download-boost=1 --download-chaco=1 >> --download-exodusii=/opt/HPC/src/tarball/exodusii-5.22b.tgz >> --download-hdf5=1 --download-metis=1 --download-netcdf=1 >> --download-parmetis=1 --download-sowing=1 --download-triangle=1 >> --download-yaml=1 --with-clanguage=C++ --with-cmake=cmake >> --with-debugging=1 --with-fortran-datatypes --with-gnu-compilers=1 >> --with-mpi-dir=/opt/HPC/mpich2-1.4.1p1-gcc4.2 --with-pic >> --with-shared-libraries=1 --with-sieve --with-sieve-memory-logging >> --with-x11=1 PETSC_ARCH=Darwin-gcc4.2-mef90-g >> [0]PETSC ERROR: >> ------------------------------------------------------------------------ >> [0]PETSC ERROR: PetscViewerVTKAddField_VTK() line 126 in >> /opt/HPC/petsc-3.3/src/sys/viewer/impls/vtk/vtkv.c >> [0]PETSC ERROR: PetscViewerVTKAddField() line 32 in >> /opt/HPC/petsc-3.3/src/sys/viewer/impls/vtk/vtkv.c >> [0]PETSC ERROR: VecView_MPI_DA() line 531 in >> /opt/HPC/petsc-3.3/src/dm/impls/da/gr2.c >> [0]PETSC ERROR: VecView() line 776 in >> /opt/HPC/petsc-3.3/src/vec/vec/interface/vector.c >> [0]PETSC ERROR: main() line 46 in TestVTK.c >> application called MPI_Abort(MPI_COMM_WORLD, 75) - process 0 >> [unset]: aborting job: >> application called MPI_Abort(MPI_COMM_WORLD, 75) - process 0 >> >> Blaise >> >> On Jan 9, 2013, at 9:51 AM, Jed Brown >> wrote: >> >> None of that crazy "developer" nonsense is need for users. 
Just do >> this: >> >> PetscViewer viewer; >> /* file name extension sets format by default, see also >> PetscViewerSetFormat(viewer,PETSC_VIEWER_VTK_VTS) */ >> ierr = >> PetscViewerVTKOpen(comm,"yourfile.vts",FILE_MODE_WRITE,&viewer);CHKERRQ(ierr); >> ierr = VecView(X,viewer);CHKERRQ(ierr); >> ierr = PetscViewerDestroy(&viewer);CHKERRQ(ierr); >> >> When using TS, you can do -ts_monitor_draw_solution_vtk >> 'filename-%03D.vts' to save each time step to a numbered binary file (ready >> to animate in paraview or visit). >> >> >> On Wed, Jan 9, 2013 at 9:37 AM, Blaise A Bourdin wrote: >> >>> Hi, >>> >>> I am looking at the documentation and the examples looking for a simple >>> illustration of how to use the new vtk binary viewers for structured data >>> defined by a DMDA, but can't find any straightforward example. Is there a >>> simple example that I am missing? >>> >>> When I try >>> PetscViewerVTKAddField(VTKviewer,(PetscObject) >>> dmda1,DMDAVTKWriteAll,PETSC_VTK_POINT_FIELD,(PetscObject) p);CHKERRQ(ierr); >>> I get a compilation time error: >>> TestVTK.c:53: error: ?DMDAVTKWriteAll? was not declared in this scope >>> indeed, DMDAVTKWriteAll is defined in a private header. Is this the way >>> it is supposed to be? >>> >>> Is the xml file describing the content of the binary files generated >>> automatically or do I need to take care of it by myself? >>> >>> I am using petsc-3.3, latest changeset. >>> >>> Regards, >>> Blaise >>> -- >>> Department of Mathematics and Center for Computation & Technology >>> Louisiana State University, Baton Rouge, LA 70803, USA >>> Tel. +1 (225) 578 1612, Fax +1 (225) 578 4276 >>> http://www.math.lsu.edu/~bourdin >>> >>> >>> >>> >>> >>> >>> >>> >>> >> >> -- >> Department of Mathematics and Center for Computation & Technology >> Louisiana State University, Baton Rouge, LA 70803, USA >> Tel. +1 (225) 578 1612, Fax +1 (225) 578 4276 >> http://www.math.lsu.edu/~bourdin >> >> >> >> >> >> >> >> > > > -- > What most experimenters take for granted before they begin their > experiments is infinitely more interesting than any results to which their > experiments lead. > -- Norbert Wiener > -------------- next part -------------- An HTML attachment was scrubbed... URL: From hzhang at mcs.anl.gov Wed Jan 9 10:50:10 2013 From: hzhang at mcs.anl.gov (Hong Zhang) Date: Wed, 9 Jan 2013 10:50:10 -0600 Subject: [petsc-users] Example for MatMatMultSymbolic In-Reply-To: References: <73644807AA76E34EA8A5562CA279021E05C54CE3@urz-mbx-3.urz.unibas.ch> Message-ID: Hannes, Do you use petsc-3.3? MatMtMult() might be buggy. We have updated MatMtMult() in petsc-dev. Suggest testing your code using petsc-dev (http://www.mcs.anl.gov/petsc/developers/index.html). As Jed suggested, you would use MatMatMult() instead of MatMatMultSymbolic() and MatMatMultNumeric(). However, we do have examples using the latter. See petsc-dev/src/mat/examples/tests/ex93.c which I updated to run in parallel. https://bitbucket.org/petsc/petsc-dev/commits/37e8170f9464171541ac1e801921bf58 I tested ex93 with valgrind and got clean output. If your code shows valgrind error with petsc-dev, please let us know. Hong On Tue, Jan 8, 2013 at 4:58 PM, Jed Brown wrote: > Typically you would use this, for which there are a few examples (at the > bottom of the page). > > http://www.mcs.anl.gov/petsc/petsc-dev/docs/manualpages/Mat/MatMatMult.html > > If that doesn't fix your problem, please set up a test case or at least give > us verbose diagnostics. 
> > > > On Tue, Jan 8, 2013 at 8:26 AM, Johannes Huber > wrote: >> >> Hi, >> I have a valgrind "invalid read" in MatGetBrowsOfAoCols_MPIAIJ in the >> lines >> rvalues = gen_from->values; /* holds the length of receiving row */ >> svalues = gen_to->values; /* holds the length of sending row */ >> (and others, but that are the first). This function is called indirectly >> via MatMatMultSymbolic. >> All invalid reads apear in MatGetBrowsOfAoCols_MPIAIJ. >> The calling sequence for the matrix factors are >> MatCreate(PETSC_COMM_WORLD,A); >> MatSetSizes(A,NumLocRows,NumLocCols,PETSC_DETERMINE,PETSC_DETERMINE); >> MatSetType(A,MATMPIAIJ); >> MatMPIAIJSetPreallocation(A,PETSC_NULL,nnz,PETSC_NULL,onz); >> MatSetValues(...); >> MatAssemblyBegin(...) >> MatAssemblyEnd(...) >> and then a call to MatMatMultSymbolic. >> I want the result matrix to be created. In the documentation I see, that >> this happens, if I pass MAT_REUSE_MATRIX as scall, but how can I pass the >> value, I don't the parameter. >> Where can I find an example for MatMatMultSymbolic / MatMatMultNumeric? >> Thanks, >> Hannes > > From erocha.ssa at gmail.com Wed Jan 9 11:18:25 2013 From: erocha.ssa at gmail.com (Eduardo) Date: Wed, 9 Jan 2013 15:18:25 -0200 Subject: [petsc-users] Assembling a symmetric block into a matrix Message-ID: Hi all, Is there any way to assemble a block that is symmetric to a matrix (also symmetric)? I mean, as far as I know, the MatSetValues assumes a full block, i.e the parameter v in: PetscErrorCode MatSetValues(Mat mat,PetscInt m,const PetscInt idxm[],PetscInt n,const PetscInt idxn[],const PetscScalar v[],InsertMode addv) is a full block (local matrix) that is assembled into the global matrix mat. Thanks in advance, Eduardo From jedbrown at mcs.anl.gov Wed Jan 9 11:24:19 2013 From: jedbrown at mcs.anl.gov (Jed Brown) Date: Wed, 9 Jan 2013 11:24:19 -0600 Subject: [petsc-users] Assembling a symmetric block into a matrix In-Reply-To: References: Message-ID: When using SBAIJ, you only set the upper-triangular part. If you want to be able to set all entries anyway, run with -mat_ignore_lower_triangular or call MatSetOption(mat,MAT_IGNORE_LOWER_TRIANGULAR,PETSC_TRUE). There is no automatic way to symmetrize, but you're welcome to compute half and symmetrize before calling MatSetValues(). On Wed, Jan 9, 2013 at 11:18 AM, Eduardo wrote: > Hi all, > > Is there any way to assemble a block that is symmetric to a matrix > (also symmetric)? I mean, as far as I know, the MatSetValues assumes a > full block, i.e the parameter v in: > > PetscErrorCode MatSetValues(Mat mat,PetscInt m,const PetscInt > idxm[],PetscInt n,const PetscInt idxn[],const PetscScalar > v[],InsertMode addv) > > is a full block (local matrix) that is assembled into the global matrix > mat. > > Thanks in advance, > Eduardo > -------------- next part -------------- An HTML attachment was scrubbed... URL: From erocha.ssa at gmail.com Wed Jan 9 11:39:19 2013 From: erocha.ssa at gmail.com (Eduardo) Date: Wed, 9 Jan 2013 15:39:19 -0200 Subject: [petsc-users] Assembling a symmetric block into a matrix In-Reply-To: References: Message-ID: So, does the v block (the logically two-dimensional input array of values) still have memory positions for the lower-triangular? I mean do I still have to allocate a full v block even if the lower-triangular is never touched? Thanks a lot, Eduardo On Wed, Jan 9, 2013 at 3:24 PM, Jed Brown wrote: > When using SBAIJ, you only set the upper-triangular part. 
If you want to be > able to set all entries anyway, run with -mat_ignore_lower_triangular or > call MatSetOption(mat,MAT_IGNORE_LOWER_TRIANGULAR,PETSC_TRUE). There is no > automatic way to symmetrize, but you're welcome to compute half and > symmetrize before calling MatSetValues(). > > > On Wed, Jan 9, 2013 at 11:18 AM, Eduardo wrote: >> >> Hi all, >> >> Is there any way to assemble a block that is symmetric to a matrix >> (also symmetric)? I mean, as far as I know, the MatSetValues assumes a >> full block, i.e the parameter v in: >> >> PetscErrorCode MatSetValues(Mat mat,PetscInt m,const PetscInt >> idxm[],PetscInt n,const PetscInt idxn[],const PetscScalar >> v[],InsertMode addv) >> >> is a full block (local matrix) that is assembled into the global matrix >> mat. >> >> Thanks in advance, >> Eduardo > > From knepley at gmail.com Wed Jan 9 11:40:03 2013 From: knepley at gmail.com (Matthew Knepley) Date: Wed, 9 Jan 2013 11:40:03 -0600 Subject: [petsc-users] Assembling a symmetric block into a matrix In-Reply-To: References: Message-ID: On Wed, Jan 9, 2013 at 11:39 AM, Eduardo wrote: > So, does the v block (the logically two-dimensional input array of > values) still have memory positions for the lower-triangular? I mean > do I still have to allocate a full v block even if the > lower-triangular is never touched? > Yes. Matt > Thanks a lot, > Eduardo > > On Wed, Jan 9, 2013 at 3:24 PM, Jed Brown wrote: > > When using SBAIJ, you only set the upper-triangular part. If you want to > be > > able to set all entries anyway, run with -mat_ignore_lower_triangular or > > call MatSetOption(mat,MAT_IGNORE_LOWER_TRIANGULAR,PETSC_TRUE). There is > no > > automatic way to symmetrize, but you're welcome to compute half and > > symmetrize before calling MatSetValues(). > > > > > > On Wed, Jan 9, 2013 at 11:18 AM, Eduardo wrote: > >> > >> Hi all, > >> > >> Is there any way to assemble a block that is symmetric to a matrix > >> (also symmetric)? I mean, as far as I know, the MatSetValues assumes a > >> full block, i.e the parameter v in: > >> > >> PetscErrorCode MatSetValues(Mat mat,PetscInt m,const PetscInt > >> idxm[],PetscInt n,const PetscInt idxn[],const PetscScalar > >> v[],InsertMode addv) > >> > >> is a full block (local matrix) that is assembled into the global matrix > >> mat. > >> > >> Thanks in advance, > >> Eduardo > > > > > -- What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. -- Norbert Wiener -------------- next part -------------- An HTML attachment was scrubbed... URL: From fd.kong at siat.ac.cn Wed Jan 9 12:45:42 2013 From: fd.kong at siat.ac.cn (Fande Kong) Date: Wed, 9 Jan 2013 11:45:42 -0700 Subject: [petsc-users] while solving the linear system with half billion unknowns on the super computer with 1020 cores, the preconditioner pcmg couldn't set up Message-ID: Hi all, I want to try to solve a problem with half billion unknowns with preconditioner pcmg (Of course, I have successfully provided the interpolation matrix and the coarse matrix). When the size of the unknowns is 1e7 level, the solve work very well with 1020 cores on the super computer. But when the size of the unknowns increases to 1e8 level, the preconditioner setup stage break down. The following is my run script that I use to set the solver and the preconditioner. 
-pc_type mg -ksp_type fgmres -pc_mg_levels 2 -pc_mg_cycle_type v -pc_mg_type multiplicative -mg_levels_1_ksp_type richardson -mg_levels_1_ksp_max_it 1 -mg_levels_1_pc_type asm -mg_levels_1_sub_ksp_type preonly -mg_levels_1_sub_pc_type ilu -mg_levels_1_sub_pc_factor_levels 1 -mg_levels_1_sub_pc_factor_mat_ordering_type rcm -mg_coarse_ksp_type gmres -mg_coarse_ksp_rtol 0.1 -mg_coarse_ksp_max_it 2 -mg_coarse_pc_type asm -mg_coarse_sub_ksp_type preonly -mg_coarse_sub_pc_type ilu -mg_coarse_sub_pc_factor_levels 1 -mg_coarse_sub_pc_factor_mat_ordering_type rcm -ksp_view My question is weather the linear system with half billion unknowns is too big to solve. Or are there some bugs in preconditioner pcmg? -- Fande Kong ShenZhen Institutes of Advanced Technology Chinese Academy of Sciences -------------- next part -------------- An HTML attachment was scrubbed... URL: From knepley at gmail.com Wed Jan 9 13:01:18 2013 From: knepley at gmail.com (Matthew Knepley) Date: Wed, 9 Jan 2013 13:01:18 -0600 Subject: [petsc-users] while solving the linear system with half billion unknowns on the super computer with 1020 cores, the preconditioner pcmg couldn't set up In-Reply-To: References: Message-ID: On Wed, Jan 9, 2013 at 12:45 PM, Fande Kong wrote: > Hi all, > > I want to try to solve a problem with half billion unknowns with > preconditioner pcmg (Of course, I have successfully provided the > interpolation matrix and the coarse matrix). When the size of the unknowns > is 1e7 level, the solve work very well with 1020 cores on the super > computer. But when the size of the unknowns increases to 1e8 level, the > preconditioner setup stage break down. The following is my run script that > I use to set the solver and the preconditioner. > You can see, I am sure, that this report is completely useless. What does "the preconditioner setup stage break down" mean? Did it hang? Did it crash? Did it output an error message that you did not send? Also, you did not even send the configure.log or make.log. How do we have any idea what you are doing? Matt > -pc_type mg -ksp_type fgmres -pc_mg_levels 2 -pc_mg_cycle_type v > -pc_mg_type multiplicative -mg_levels_1_ksp_type richardson > -mg_levels_1_ksp_max_it 1 -mg_levels_1_pc_type asm > -mg_levels_1_sub_ksp_type preonly -mg_levels_1_sub_pc_type ilu > -mg_levels_1_sub_pc_factor_levels 1 > -mg_levels_1_sub_pc_factor_mat_ordering_type rcm -mg_coarse_ksp_type gmres > -mg_coarse_ksp_rtol 0.1 -mg_coarse_ksp_max_it 2 -mg_coarse_pc_type asm > -mg_coarse_sub_ksp_type preonly -mg_coarse_sub_pc_type ilu > -mg_coarse_sub_pc_factor_levels 1 > -mg_coarse_sub_pc_factor_mat_ordering_type rcm -ksp_view > > My question is weather the linear system with half billion unknowns is too > big to solve. Or are there some bugs in preconditioner pcmg? > > -- > Fande Kong > ShenZhen Institutes of Advanced Technology > Chinese Academy of Sciences > -- What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. -- Norbert Wiener -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From bourdin at lsu.edu Wed Jan 9 15:36:49 2013 From: bourdin at lsu.edu (Blaise A Bourdin) Date: Wed, 9 Jan 2013 21:36:49 +0000 Subject: [petsc-users] binary vtk viewer DMDA In-Reply-To: References: <6C4AE5741C2B874282F58DA136A2CCB8D26AF7@BL2PRD0612MB663.namprd06.prod.outlook.com> <6C4AE5741C2B874282F58DA136A2CCB8D26C9E@BL2PRD0612MB663.namprd06.prod.outlook.com> Message-ID: <6C4AE5741C2B874282F58DA136A2CCB8D27586@BL2PRD0612MB663.namprd06.prod.outlook.com> On Jan 9, 2013, at 10:23 AM, Jed Brown > wrote: On Wed, Jan 9, 2013 at 10:20 AM, Matthew Knepley > wrote: On Wed, Jan 9, 2013 at 10:10 AM, Blaise A Bourdin > wrote: That's a good start indeed. Is there any way to save files defined on different DMDA (same grid but different number of dof). When I try to do that, I get the following error message: Hmm, that would mean introducing a structural comparison rather than an identity. Oh, good point. If someone implements a test for DM congruence, I can generalize the viewer. Wouldn't it be trivial for a DMDA? Blaise Matt MacBook-Pro:VTK blaise$ ./TestVTK [0]PETSC ERROR: --------------------- Error Message ------------------------------------ [0]PETSC ERROR: Arguments are incompatible! [0]PETSC ERROR: Cannot write a field from more than one grid to the same VTK file! [0]PETSC ERROR: ------------------------------------------------------------------------ [0]PETSC ERROR: Petsc Release Version 3.3.0, Patch 5, unknown [0]PETSC ERROR: See docs/changes/index.html for recent updates. [0]PETSC ERROR: See docs/faq.html for hints about trouble shooting. [0]PETSC ERROR: See docs/index.html for manual pages. [0]PETSC ERROR: ------------------------------------------------------------------------ [0]PETSC ERROR: ./TestVTK on a Darwin-gc named MacBook-Pro.local by blaise Wed Jan 9 10:09:37 2013 [0]PETSC ERROR: Libraries linked from /opt/HPC/petsc-3.3/Darwin-gcc4.2-mef90-g/lib [0]PETSC ERROR: Configure run at Thu Jan 3 15:42:04 2013 [0]PETSC ERROR: Configure options --download-boost=1 --download-chaco=1 --download-exodusii=/opt/HPC/src/tarball/exodusii-5.22b.tgz --download-hdf5=1 --download-metis=1 --download-netcdf=1 --download-parmetis=1 --download-sowing=1 --download-triangle=1 --download-yaml=1 --with-clanguage=C++ --with-cmake=cmake --with-debugging=1 --with-fortran-datatypes --with-gnu-compilers=1 --with-mpi-dir=/opt/HPC/mpich2-1.4.1p1-gcc4.2 --with-pic --with-shared-libraries=1 --with-sieve --with-sieve-memory-logging --with-x11=1 PETSC_ARCH=Darwin-gcc4.2-mef90-g [0]PETSC ERROR: ------------------------------------------------------------------------ [0]PETSC ERROR: PetscViewerVTKAddField_VTK() line 126 in /opt/HPC/petsc-3.3/src/sys/viewer/impls/vtk/vtkv.c [0]PETSC ERROR: PetscViewerVTKAddField() line 32 in /opt/HPC/petsc-3.3/src/sys/viewer/impls/vtk/vtkv.c [0]PETSC ERROR: VecView_MPI_DA() line 531 in /opt/HPC/petsc-3.3/src/dm/impls/da/gr2.c [0]PETSC ERROR: VecView() line 776 in /opt/HPC/petsc-3.3/src/vec/vec/interface/vector.c [0]PETSC ERROR: main() line 46 in TestVTK.c application called MPI_Abort(MPI_COMM_WORLD, 75) - process 0 [unset]: aborting job: application called MPI_Abort(MPI_COMM_WORLD, 75) - process 0 Blaise On Jan 9, 2013, at 9:51 AM, Jed Brown > wrote: None of that crazy "developer" nonsense is need for users. 
Just do this: PetscViewer viewer; /* file name extension sets format by default, see also PetscViewerSetFormat(viewer,PETSC_VIEWER_VTK_VTS) */ ierr = PetscViewerVTKOpen(comm,"yourfile.vts",FILE_MODE_WRITE,&viewer);CHKERRQ(ierr); ierr = VecView(X,viewer);CHKERRQ(ierr); ierr = PetscViewerDestroy(&viewer);CHKERRQ(ierr); When using TS, you can do -ts_monitor_draw_solution_vtk 'filename-%03D.vts' to save each time step to a numbered binary file (ready to animate in paraview or visit). On Wed, Jan 9, 2013 at 9:37 AM, Blaise A Bourdin > wrote: Hi, I am looking at the documentation and the examples looking for a simple illustration of how to use the new vtk binary viewers for structured data defined by a DMDA, but can't find any straightforward example. Is there a simple example that I am missing? When I try PetscViewerVTKAddField(VTKviewer,(PetscObject) dmda1,DMDAVTKWriteAll,PETSC_VTK_POINT_FIELD,(PetscObject) p);CHKERRQ(ierr); I get a compilation time error: TestVTK.c:53: error: ?DMDAVTKWriteAll? was not declared in this scope indeed, DMDAVTKWriteAll is defined in a private header. Is this the way it is supposed to be? Is the xml file describing the content of the binary files generated automatically or do I need to take care of it by myself? I am using petsc-3.3, latest changeset. Regards, Blaise -- Department of Mathematics and Center for Computation & Technology Louisiana State University, Baton Rouge, LA 70803, USA Tel. +1 (225) 578 1612, Fax +1 (225) 578 4276 http://www.math.lsu.edu/~bourdin -- Department of Mathematics and Center for Computation & Technology Louisiana State University, Baton Rouge, LA 70803, USA Tel. +1 (225) 578 1612, Fax +1 (225) 578 4276 http://www.math.lsu.edu/~bourdin -- What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. -- Norbert Wiener -- Department of Mathematics and Center for Computation & Technology Louisiana State University, Baton Rouge, LA 70803, USA Tel. +1 (225) 578 1612, Fax +1 (225) 578 4276 http://www.math.lsu.edu/~bourdin -------------- next part -------------- An HTML attachment was scrubbed... URL: From bsmith at mcs.anl.gov Wed Jan 9 22:34:46 2013 From: bsmith at mcs.anl.gov (Barry Smith) Date: Wed, 9 Jan 2013 22:34:46 -0600 Subject: [petsc-users] while solving the linear system with half billion unknowns on the super computer with 1020 cores, the preconditioner pcmg couldn't set up In-Reply-To: References: Message-ID: <8832157F-F8B1-4460-A640-C28CA9800E49@mcs.anl.gov> The only thing specifically related to the number of unknowns that is not due to a bug in either your code or our code is if 1) the total problem size is greater than 2^31 - 1 (which it doesn't sound like in your case) or 2) the number of non zeros in a matrix on a single MPI process is more than 2^31 -1 (which could be if each process has a huge amount of memory) These limits are because by default PETSc uses 32 bit integers to hold indices and sizes. If you configure PETSc with --with-64-bit-indices then PETSc uses 64 bit integers to hold indices and sizes and you can solver any size problem. Barry On Jan 9, 2013, at 12:45 PM, Fande Kong wrote: > Hi all, > > I want to try to solve a problem with half billion unknowns with preconditioner pcmg (Of course, I have successfully provided the interpolation matrix and the coarse matrix). When the size of the unknowns is 1e7 level, the solve work very well with 1020 cores on the super computer. 
But when the size of the unknowns increases to 1e8 level, the preconditioner setup stage break down. The following is my run script that I use to set the solver and the preconditioner. > > -pc_type mg -ksp_type fgmres -pc_mg_levels 2 -pc_mg_cycle_type v -pc_mg_type multiplicative -mg_levels_1_ksp_type richardson -mg_levels_1_ksp_max_it 1 -mg_levels_1_pc_type asm -mg_levels_1_sub_ksp_type preonly -mg_levels_1_sub_pc_type ilu -mg_levels_1_sub_pc_factor_levels 1 -mg_levels_1_sub_pc_factor_mat_ordering_type rcm -mg_coarse_ksp_type gmres -mg_coarse_ksp_rtol 0.1 -mg_coarse_ksp_max_it 2 -mg_coarse_pc_type asm -mg_coarse_sub_ksp_type preonly -mg_coarse_sub_pc_type ilu -mg_coarse_sub_pc_factor_levels 1 -mg_coarse_sub_pc_factor_mat_ordering_type rcm -ksp_view > > My question is weather the linear system with half billion unknowns is too big to solve. Or are there some bugs in preconditioner pcmg? > > -- > Fande Kong > ShenZhen Institutes of Advanced Technology > Chinese Academy of Sciences From jedbrown at mcs.anl.gov Thu Jan 10 00:12:03 2013 From: jedbrown at mcs.anl.gov (Jed Brown) Date: Thu, 10 Jan 2013 00:12:03 -0600 Subject: [petsc-users] binary vtk viewer DMDA In-Reply-To: <6C4AE5741C2B874282F58DA136A2CCB8D27586@BL2PRD0612MB663.namprd06.prod.outlook.com> References: <6C4AE5741C2B874282F58DA136A2CCB8D26AF7@BL2PRD0612MB663.namprd06.prod.outlook.com> <6C4AE5741C2B874282F58DA136A2CCB8D26C9E@BL2PRD0612MB663.namprd06.prod.outlook.com> <6C4AE5741C2B874282F58DA136A2CCB8D27586@BL2PRD0612MB663.namprd06.prod.outlook.com> Message-ID: On Wed, Jan 9, 2013 at 3:36 PM, Blaise A Bourdin wrote: > Wouldn't it be trivial for a DMDA? It should be, but I was trying to buy some time because I was picking up projects at a seemingly exponential rate this week, despite prior obligations. ;-) -------------- next part -------------- An HTML attachment was scrubbed... URL: From dominik at itis.ethz.ch Thu Jan 10 03:53:42 2013 From: dominik at itis.ethz.ch (Dominik Szczerba) Date: Thu, 10 Jan 2013 10:53:42 +0100 Subject: [petsc-users] Petsc 3.3.p5 with HYPRE 2.9.0b In-Reply-To: References: Message-ID: >> I don't know. If we need to update something in PETSc's interface to Hypre, >> we'll do it. I reported the build problem and they said they had patched it, >> but did not send me a patch. I encourage more people to ask them to publish >> (read-only) their source repository. Then we could stop guessing. > > Yes, I have contacted them already on that issue. I got an answer for them that the needed fixes are scheduled for the next release perhaps within the next couple of weeks. Regards, Dominik From dominik at itis.ethz.ch Thu Jan 10 08:04:11 2013 From: dominik at itis.ethz.ch (Dominik Szczerba) Date: Thu, 10 Jan 2013 15:04:11 +0100 Subject: [petsc-users] Petsc 3.3.p5 with HYPRE 2.9.0b In-Reply-To: References: Message-ID: On Thu, Jan 10, 2013 at 10:53 AM, Dominik Szczerba wrote: >>> I don't know. If we need to update something in PETSc's interface to Hypre, >>> we'll do it. I reported the build problem and they said they had patched it, >>> but did not send me a patch. I encourage more people to ask them to publish >>> (read-only) their source repository. Then we could stop guessing. >> >> Yes, I have contacted them already on that issue. > > I got an answer for them that the needed fixes are scheduled for the > next release perhaps within the next couple of weeks. Meanwhile, I easily managed to compile hypre 2.9.0b myself using cmake on Windows and built petsc with it (using --with-hypre switches). 
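Roughly along these lines (the paths below are placeholders for wherever the cmake build put the hypre headers and library, and the exact library file name depends on that build):

./configure --with-hypre=1 --with-hypre-include=/path/to/hypre/include --with-hypre-lib=/path/to/hypre/lib/HYPRE.lib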
Not tested runtime or linking to it yet though. Dominik From d.scott at ed.ac.uk Thu Jan 10 09:03:14 2013 From: d.scott at ed.ac.uk (David Scott) Date: Thu, 10 Jan 2013 15:03:14 +0000 Subject: [petsc-users] Dealing with 2nd Derivatives on Boundaries Message-ID: <50EED832.1010604@ed.ac.uk> Hello, I am solving Poisson's equation (actually Laplace's equation in this simple test case) on a 3D structured grid. The boundary condition in the first dimension is periodic. In the others there are Von Neumann conditions except for one surface where the second derivative is zero. I have specified DMDA_BOUNDARY_NONE in these two dimensions and deal with the boundary conditions by constructing an appropriate matrix. Here is an extract from the Fortran code: if (j==0) then ! Von Neumann boundary conditions on y=0 boundary. v(1) = 1 col(MatStencil_i, 1) = i col(MatStencil_j, 1) = j col(MatStencil_k, 1) = k v(2) = -1 col(MatStencil_i, 2) = i col(MatStencil_j, 2) = j+1 col(MatStencil_k, 2) = k call MatSetValuesStencil(B, 1, row, 2, col, v, INSERT_VALUES, ierr) else if (j==maxl) then ! Boundary condition on y=maxl boundary. v(1) = 1 col(MatStencil_i, 1) = i col(MatStencil_j, 1) = j col(MatStencil_k, 1) = k v(2) = -2 col(MatStencil_i, 2) = i col(MatStencil_j, 2) = j-1 col(MatStencil_k, 2) = k v(3) = 1 col(MatStencil_i, 3) = i col(MatStencil_j, 3) = j-2 col(MatStencil_k, 3) = k call MatSetValuesStencil(B, 1, row, 3, col, v, INSERT_VALUES, ierr) else if (k==0) then Here the second clause deals with the second derivative on the boundary. In order for this code to work I have to set the stencil width to 2 even though 'j-2' refers to an interior, non-halo point in the grid. This leads to larger halo swaps than would be required if a stencil width of 1 could be used. Is there a better way to encode the problem? David -- The University of Edinburgh is a charitable body, registered in Scotland, with registration number SC005336. From jedbrown at mcs.anl.gov Thu Jan 10 09:28:21 2013 From: jedbrown at mcs.anl.gov (Jed Brown) Date: Thu, 10 Jan 2013 09:28:21 -0600 Subject: [petsc-users] Dealing with 2nd Derivatives on Boundaries In-Reply-To: <50EED832.1010604@ed.ac.uk> References: <50EED832.1010604@ed.ac.uk> Message-ID: Second derivative is not a boundary condition for Poisson; that is the equation satisfied in the interior. Unless you are intentionally attempting to apply a certain kind of outflow boundary condition (i.e., you're NOT solving Laplace) then there is a problem with your formulation. I suggest you revisit the continuum problem and establish that it is well-posed before concerning yourself with implementation details. On Thu, Jan 10, 2013 at 9:03 AM, David Scott wrote: > Hello, > > I am solving Poisson's equation (actually Laplace's equation in this > simple test case) on a 3D structured grid. The boundary condition in the > first dimension is periodic. In the others there are Von Neumann conditions > except for one surface where the second derivative is zero. I have > specified DMDA_BOUNDARY_NONE in these two dimensions and deal with the > boundary conditions by constructing an appropriate matrix. Here is an > extract from the Fortran code: > > if (j==0) then > ! Von Neumann boundary conditions on y=0 boundary. > v(1) = 1 > col(MatStencil_i, 1) = i > col(MatStencil_j, 1) = j > col(MatStencil_k, 1) = k > v(2) = -1 > col(MatStencil_i, 2) = i > col(MatStencil_j, 2) = j+1 > col(MatStencil_k, 2) = k > call MatSetValuesStencil(B, 1, row, 2, col, v, > INSERT_VALUES, ierr) > else if (j==maxl) then > ! 
Boundary condition on y=maxl boundary. > v(1) = 1 > col(MatStencil_i, 1) = i > col(MatStencil_j, 1) = j > col(MatStencil_k, 1) = k > v(2) = -2 > col(MatStencil_i, 2) = i > col(MatStencil_j, 2) = j-1 > col(MatStencil_k, 2) = k > v(3) = 1 > col(MatStencil_i, 3) = i > col(MatStencil_j, 3) = j-2 > col(MatStencil_k, 3) = k > call MatSetValuesStencil(B, 1, row, 3, col, v, > INSERT_VALUES, ierr) > else if (k==0) then > > > Here the second clause deals with the second derivative on the boundary. > > In order for this code to work I have to set the stencil width to 2 even > though 'j-2' refers to an interior, non-halo > point in the grid. This leads to larger halo swaps than would be required > if a stencil width of 1 could be used. > > Is there a better way to encode the problem? > > David > > -- > The University of Edinburgh is a charitable body, registered in > Scotland, with registration number SC005336. > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From d.scott at ed.ac.uk Thu Jan 10 09:44:35 2013 From: d.scott at ed.ac.uk (David Scott) Date: Thu, 10 Jan 2013 15:44:35 +0000 Subject: [petsc-users] Dealing with 2nd Derivatives on Boundaries In-Reply-To: References: <50EED832.1010604@ed.ac.uk> Message-ID: <50EEE1E3.4020602@ed.ac.uk> All right, I'll say it differently. I wish to solve div.grad phi = 0 with the boundary conditions that I have described. On 10/01/2013 15:28, Jed Brown wrote: > Second derivative is not a boundary condition for Poisson; that is the > equation satisfied in the interior. Unless you are intentionally > attempting to apply a certain kind of outflow boundary condition (i.e., > you're NOT solving Laplace) then there is a problem with your > formulation. I suggest you revisit the continuum problem and establish > that it is well-posed before concerning yourself with implementation > details. > > > On Thu, Jan 10, 2013 at 9:03 AM, David Scott > wrote: > > Hello, > > I am solving Poisson's equation (actually Laplace's equation in this > simple test case) on a 3D structured grid. The boundary condition in > the first dimension is periodic. In the others there are Von Neumann > conditions except for one surface where the second derivative is > zero. I have specified DMDA_BOUNDARY_NONE in these two dimensions > and deal with the boundary conditions by constructing an appropriate > matrix. Here is an extract from the Fortran code: > > if (j==0) then > ! Von Neumann boundary conditions on y=0 boundary. > v(1) = 1 > col(MatStencil_i, 1) = i > col(MatStencil_j, 1) = j > col(MatStencil_k, 1) = k > v(2) = -1 > col(MatStencil_i, 2) = i > col(MatStencil_j, 2) = j+1 > col(MatStencil_k, 2) = k > call MatSetValuesStencil(B, 1, row, 2, col, v, > INSERT_VALUES, ierr) > else if (j==maxl) then > ! Boundary condition on y=maxl boundary. > v(1) = 1 > col(MatStencil_i, 1) = i > col(MatStencil_j, 1) = j > col(MatStencil_k, 1) = k > v(2) = -2 > col(MatStencil_i, 2) = i > col(MatStencil_j, 2) = j-1 > col(MatStencil_k, 2) = k > v(3) = 1 > col(MatStencil_i, 3) = i > col(MatStencil_j, 3) = j-2 > col(MatStencil_k, 3) = k > call MatSetValuesStencil(B, 1, row, 3, col, v, > INSERT_VALUES, ierr) > else if (k==0) then > > > Here the second clause deals with the second derivative on the boundary. > > In order for this code to work I have to set the stencil width to 2 > even though 'j-2' refers to an interior, non-halo > point in the grid. This leads to larger halo swaps than would be > required if a stencil width of 1 could be used. 
> > Is there a better way to encode the problem? > > David > > -- > The University of Edinburgh is a charitable body, registered in > Scotland, with registration number SC005336. > > -- Dr. D. M. Scott Applications Consultant Edinburgh Parallel Computing Centre Tel. 0131 650 5921 The University of Edinburgh is a charitable body, registered in Scotland, with registration number SC005336. From john.fettig at gmail.com Thu Jan 10 10:15:02 2013 From: john.fettig at gmail.com (John Fettig) Date: Thu, 10 Jan 2013 11:15:02 -0500 Subject: [petsc-users] Petsc 3.3.p5 with HYPRE 2.9.0b In-Reply-To: References: Message-ID: FWIW, hypre 2.8.0b can be made to compile in windows with MSVC 10 in cygwin. John On Wed, Jan 9, 2013 at 3:24 AM, Dominik Szczerba wrote: > Hi, > > I _have_ to use hypre-2.9.0b with petsc 3.3.x (because 2.8.0b > compilation on Windows is broken due to issues with the MSVC 10 > compiler). I am aware of both 2.8.0b being officially used by petsc as > well as of Jed mentioning building problems in hypre-2.9.0b. I indeed > can not build 2.9.0b on linux due to errors but - ironically - it is > the only version I can currently build natively on Windows. Therefore > I would like to know if the decision to officially use 2.8.0b in petsc > 3.3.x is dictated exclusively by building infrastructure issues or > also by some code / API / functionality related issues. Thanks for any > comments. > > Dominik > -------------- next part -------------- An HTML attachment was scrubbed... URL: From jedbrown at mcs.anl.gov Thu Jan 10 10:27:25 2013 From: jedbrown at mcs.anl.gov (Jed Brown) Date: Thu, 10 Jan 2013 10:27:25 -0600 Subject: [petsc-users] Dealing with 2nd Derivatives on Boundaries In-Reply-To: <50EEE1E3.4020602@ed.ac.uk> References: <50EED832.1010604@ed.ac.uk> <50EEE1E3.4020602@ed.ac.uk> Message-ID: ... and you describe a "boundary condition" as "second derivative is zero", which is not a boundary condition, making your problem ill-posed. (Indeed, consider the family of problems in which you extend the domain in that patch and apply _any_ boundary conditions in the extended domain. All of those solutions are also solutions of your problem with "second derivative is zero" on your "boundary".) On Thu, Jan 10, 2013 at 9:44 AM, David Scott wrote: > All right, I'll say it differently. I wish to solve > div.grad phi = 0 > with the boundary conditions that I have described. > > > On 10/01/2013 15:28, Jed Brown wrote: > >> Second derivative is not a boundary condition for Poisson; that is the >> equation satisfied in the interior. Unless you are intentionally >> attempting to apply a certain kind of outflow boundary condition (i.e., >> you're NOT solving Laplace) then there is a problem with your >> formulation. I suggest you revisit the continuum problem and establish >> that it is well-posed before concerning yourself with implementation >> details. >> >> >> On Thu, Jan 10, 2013 at 9:03 AM, David Scott > > wrote: >> >> Hello, >> >> I am solving Poisson's equation (actually Laplace's equation in this >> simple test case) on a 3D structured grid. The boundary condition in >> the first dimension is periodic. In the others there are Von Neumann >> conditions except for one surface where the second derivative is >> zero. I have specified DMDA_BOUNDARY_NONE in these two dimensions >> and deal with the boundary conditions by constructing an appropriate >> matrix. Here is an extract from the Fortran code: >> >> if (j==0) then >> ! Von Neumann boundary conditions on y=0 boundary. 
>> v(1) = 1 >> col(MatStencil_i, 1) = i >> col(MatStencil_j, 1) = j >> col(MatStencil_k, 1) = k >> v(2) = -1 >> col(MatStencil_i, 2) = i >> col(MatStencil_j, 2) = j+1 >> col(MatStencil_k, 2) = k >> call MatSetValuesStencil(B, 1, row, 2, col, v, >> INSERT_VALUES, ierr) >> else if (j==maxl) then >> ! Boundary condition on y=maxl boundary. >> v(1) = 1 >> col(MatStencil_i, 1) = i >> col(MatStencil_j, 1) = j >> col(MatStencil_k, 1) = k >> v(2) = -2 >> col(MatStencil_i, 2) = i >> col(MatStencil_j, 2) = j-1 >> col(MatStencil_k, 2) = k >> v(3) = 1 >> col(MatStencil_i, 3) = i >> col(MatStencil_j, 3) = j-2 >> col(MatStencil_k, 3) = k >> call MatSetValuesStencil(B, 1, row, 3, col, v, >> INSERT_VALUES, ierr) >> else if (k==0) then >> >> >> Here the second clause deals with the second derivative on the >> boundary. >> >> In order for this code to work I have to set the stencil width to 2 >> even though 'j-2' refers to an interior, non-halo >> point in the grid. This leads to larger halo swaps than would be >> required if a stencil width of 1 could be used. >> >> Is there a better way to encode the problem? >> >> David >> >> -- >> The University of Edinburgh is a charitable body, registered in >> Scotland, with registration number SC005336. >> >> >> > > -- > Dr. D. M. Scott > Applications Consultant > Edinburgh Parallel Computing Centre > Tel. 0131 650 5921 > > > The University of Edinburgh is a charitable body, registered in > Scotland, with registration number SC005336. > -------------- next part -------------- An HTML attachment was scrubbed... URL: From u.tabak at tudelft.nl Thu Jan 10 10:33:42 2013 From: u.tabak at tudelft.nl (Umut Tabak) Date: Thu, 10 Jan 2013 17:33:42 +0100 Subject: [petsc-users] Hypre and IC preconditioning Message-ID: <50EEED66.5060307@tudelft.nl> Dear all, I would like to test some ideas of mine over the CG method with incomplete cholesky preconditioner with a drop tolerance. I checked the Hypre manual for the preconditioner generation, however there is not a direct IC preconditioner but and ILUT which results in an unsymmetric preconditioner, is it wise to use this as a preconditioner for CG in PETSc? Best regards, Umut From knepley at gmail.com Thu Jan 10 10:37:16 2013 From: knepley at gmail.com (Matthew Knepley) Date: Thu, 10 Jan 2013 10:37:16 -0600 Subject: [petsc-users] Hypre and IC preconditioning In-Reply-To: <50EEED66.5060307@tudelft.nl> References: <50EEED66.5060307@tudelft.nl> Message-ID: http://www.mcs.anl.gov/petsc/petsc-dev/docs/linearsolvertable.html Matt On Thu, Jan 10, 2013 at 10:33 AM, Umut Tabak wrote: > Dear all, > > I would like to test some ideas of mine over the CG method with incomplete > cholesky preconditioner with a drop tolerance. > I checked the Hypre manual for the preconditioner generation, however > there is not a direct IC preconditioner but and ILUT which results in an > unsymmetric preconditioner, is it wise to use this as a preconditioner for > CG in PETSc? > > Best regards, > Umut > -- What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. -- Norbert Wiener -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From w_ang_temp at 163.com Thu Jan 10 10:44:05 2013 From: w_ang_temp at 163.com (w_ang_temp) Date: Fri, 11 Jan 2013 00:44:05 +0800 (CST) Subject: [petsc-users] Determine the positive definiteness Message-ID: <7d751366.27a62.13c2558d93c.Coremail.w_ang_temp@163.com> Hello, I want to determine the positive definiteness. So I want to know if it is a right way. I use "-ksp_type cg -pc_type none". You know, CG does not work for indefinite matrices. I get "Linear solve did not converge due to DIVERGED_INDEFINITE_MAT". From the information, I think it is indefinite. Is it a right way? Thanks. Jim -------------- next part -------------- An HTML attachment was scrubbed... URL: From kevin.richard.green at gmail.com Thu Jan 10 10:57:04 2013 From: kevin.richard.green at gmail.com (Kevin Green) Date: Thu, 10 Jan 2013 11:57:04 -0500 Subject: [petsc-users] binary vtk viewer DMDA In-Reply-To: References: <6C4AE5741C2B874282F58DA136A2CCB8D26AF7@BL2PRD0612MB663.namprd06.prod.outlook.com> <6C4AE5741C2B874282F58DA136A2CCB8D26C9E@BL2PRD0612MB663.namprd06.prod.outlook.com> <6C4AE5741C2B874282F58DA136A2CCB8D27586@BL2PRD0612MB663.namprd06.prod.outlook.com> Message-ID: I was just playing around with this in the recent petsc-dev, and noticed the following: It works fine when provided from the command line, but when used in a file with -options_file, I get the error: [0]PETSC ERROR: --------------------- Error Message ------------------------------------ [0]PETSC ERROR: ! [0]PETSC ERROR: -ts_monitor_draw_solution_vtk requires a file template, e.g. filename-%03d.vts! [0]PETSC ERROR: ------------------------------------------------------------------------ [0]PETSC ERROR: Petsc Development HG revision: 117b19d7e9971ca4c134171ccd46df7d0d54c1e1 HG Date: Thu Jan 10 09:41:10 2013 -0600 [0]PETSC ERROR: See docs/changes/index.html for recent updates. [0]PETSC ERROR: See docs/faq.html for hints about trouble shooting. [0]PETSC ERROR: See docs/index.html for manual pages. [0]PETSC ERROR: ------------------------------------------------------------------------ [0]PETSC ERROR: ./cortex on a arch-linux2-c-debug named kg-ThinkPad-X220-Tablet by kg Thu Jan 10 11:44:39 2013 [0]PETSC ERROR: Libraries linked from /home/kg/libs/petsc-dev/arch-linux2-c-debug/lib [0]PETSC ERROR: Configure run at Thu Jan 10 11:05:06 2013 [0]PETSC ERROR: Configure options --with-debugging=1 --with-x=1 PETSC_ARCH=arch-linux2-c-debug [0]PETSC ERROR: ------------------------------------------------------------------------ [0]PETSC ERROR: TSSetFromOptions() line 220 in src/ts/interface/ts.c [0]PETSC ERROR: main() line 225 in src/main.c -------------------------------------------------------------------------- This is the first time I've ever had a difference between command line and -options_file behaviour... Is it a bug, or is there something special about using string commands in -options_file that I just don't know about? Cheers, Kevin On Thu, Jan 10, 2013 at 1:12 AM, Jed Brown wrote: > On Wed, Jan 9, 2013 at 3:36 PM, Blaise A Bourdin wrote: > >> Wouldn't it be trivial for a DMDA? > > > It should be, but I was trying to buy some time because I was picking up > projects at a seemingly exponential rate this week, despite prior > obligations. ;-) > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From jedbrown at mcs.anl.gov Thu Jan 10 11:00:36 2013 From: jedbrown at mcs.anl.gov (Jed Brown) Date: Thu, 10 Jan 2013 11:00:36 -0600 Subject: [petsc-users] binary vtk viewer DMDA In-Reply-To: References: <6C4AE5741C2B874282F58DA136A2CCB8D26AF7@BL2PRD0612MB663.namprd06.prod.outlook.com> <6C4AE5741C2B874282F58DA136A2CCB8D26C9E@BL2PRD0612MB663.namprd06.prod.outlook.com> <6C4AE5741C2B874282F58DA136A2CCB8D27586@BL2PRD0612MB663.namprd06.prod.outlook.com> Message-ID: Oh, this is the problem with allowing % for comments, but not parsing quotes. Last time we discussed this, I thought we decided that % should not be a comment character. Should we disable it? On Thu, Jan 10, 2013 at 10:57 AM, Kevin Green wrote: > I was just playing around with this in the recent petsc-dev, and noticed > the following: > > It works fine when provided from the command line, but when used in a file > with -options_file, I get the error: > > [0]PETSC ERROR: --------------------- Error Message > ------------------------------------ > [0]PETSC ERROR: ! > [0]PETSC ERROR: -ts_monitor_draw_solution_vtk requires a file template, > e.g. filename-%03d.vts! > [0]PETSC ERROR: > ------------------------------------------------------------------------ > [0]PETSC ERROR: Petsc Development HG revision: > 117b19d7e9971ca4c134171ccd46df7d0d54c1e1 HG Date: Thu Jan 10 09:41:10 2013 > -0600 > [0]PETSC ERROR: See docs/changes/index.html for recent updates. > [0]PETSC ERROR: See docs/faq.html for hints about trouble shooting. > [0]PETSC ERROR: See docs/index.html for manual pages. > [0]PETSC ERROR: > ------------------------------------------------------------------------ > [0]PETSC ERROR: ./cortex on a arch-linux2-c-debug named > kg-ThinkPad-X220-Tablet by kg Thu Jan 10 11:44:39 2013 > [0]PETSC ERROR: Libraries linked from > /home/kg/libs/petsc-dev/arch-linux2-c-debug/lib > [0]PETSC ERROR: Configure run at Thu Jan 10 11:05:06 2013 > [0]PETSC ERROR: Configure options --with-debugging=1 --with-x=1 > PETSC_ARCH=arch-linux2-c-debug > [0]PETSC ERROR: > ------------------------------------------------------------------------ > [0]PETSC ERROR: TSSetFromOptions() line 220 in src/ts/interface/ts.c > [0]PETSC ERROR: main() line 225 in src/main.c > -------------------------------------------------------------------------- > > > This is the first time I've ever had a difference between command line and > -options_file behaviour... Is it a bug, or is there something special about > using string commands in -options_file that I just don't know about? > > Cheers, > Kevin > > > On Thu, Jan 10, 2013 at 1:12 AM, Jed Brown wrote: > >> On Wed, Jan 9, 2013 at 3:36 PM, Blaise A Bourdin wrote: >> >>> Wouldn't it be trivial for a DMDA? >> >> >> It should be, but I was trying to buy some time because I was picking up >> projects at a seemingly exponential rate this week, despite prior >> obligations. ;-) >> > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From kevin.richard.green at gmail.com Thu Jan 10 11:05:02 2013 From: kevin.richard.green at gmail.com (Kevin Green) Date: Thu, 10 Jan 2013 12:05:02 -0500 Subject: [petsc-users] binary vtk viewer DMDA In-Reply-To: References: <6C4AE5741C2B874282F58DA136A2CCB8D26AF7@BL2PRD0612MB663.namprd06.prod.outlook.com> <6C4AE5741C2B874282F58DA136A2CCB8D26C9E@BL2PRD0612MB663.namprd06.prod.outlook.com> <6C4AE5741C2B874282F58DA136A2CCB8D27586@BL2PRD0612MB663.namprd06.prod.outlook.com> Message-ID: Personally, I think it should probably be disabled... 
AFAIK # works for comments, and what are the advantages of having multiple comment characters? On Thu, Jan 10, 2013 at 12:00 PM, Jed Brown wrote: > Oh, this is the problem with allowing % for comments, but not parsing > quotes. Last time we discussed this, I thought we decided that % should not > be a comment character. Should we disable it? > > > On Thu, Jan 10, 2013 at 10:57 AM, Kevin Green < > kevin.richard.green at gmail.com> wrote: > >> I was just playing around with this in the recent petsc-dev, and noticed >> the following: >> >> It works fine when provided from the command line, but when used in a >> file with -options_file, I get the error: >> >> [0]PETSC ERROR: --------------------- Error Message >> ------------------------------------ >> [0]PETSC ERROR: ! >> [0]PETSC ERROR: -ts_monitor_draw_solution_vtk requires a file template, >> e.g. filename-%03d.vts! >> [0]PETSC ERROR: >> ------------------------------------------------------------------------ >> [0]PETSC ERROR: Petsc Development HG revision: >> 117b19d7e9971ca4c134171ccd46df7d0d54c1e1 HG Date: Thu Jan 10 09:41:10 2013 >> -0600 >> [0]PETSC ERROR: See docs/changes/index.html for recent updates. >> [0]PETSC ERROR: See docs/faq.html for hints about trouble shooting. >> [0]PETSC ERROR: See docs/index.html for manual pages. >> [0]PETSC ERROR: >> ------------------------------------------------------------------------ >> [0]PETSC ERROR: ./cortex on a arch-linux2-c-debug named >> kg-ThinkPad-X220-Tablet by kg Thu Jan 10 11:44:39 2013 >> [0]PETSC ERROR: Libraries linked from >> /home/kg/libs/petsc-dev/arch-linux2-c-debug/lib >> [0]PETSC ERROR: Configure run at Thu Jan 10 11:05:06 2013 >> [0]PETSC ERROR: Configure options --with-debugging=1 --with-x=1 >> PETSC_ARCH=arch-linux2-c-debug >> [0]PETSC ERROR: >> ------------------------------------------------------------------------ >> [0]PETSC ERROR: TSSetFromOptions() line 220 in src/ts/interface/ts.c >> [0]PETSC ERROR: main() line 225 in src/main.c >> -------------------------------------------------------------------------- >> >> >> This is the first time I've ever had a difference between command line >> and -options_file behaviour... Is it a bug, or is there something special >> about using string commands in -options_file that I just don't know about? >> >> Cheers, >> Kevin >> >> >> On Thu, Jan 10, 2013 at 1:12 AM, Jed Brown wrote: >> >>> On Wed, Jan 9, 2013 at 3:36 PM, Blaise A Bourdin wrote: >>> >>>> Wouldn't it be trivial for a DMDA? >>> >>> >>> It should be, but I was trying to buy some time because I was picking up >>> projects at a seemingly exponential rate this week, despite prior >>> obligations. ;-) >>> >> >> > -------------- next part -------------- An HTML attachment was scrubbed... URL: From hzhang at mcs.anl.gov Thu Jan 10 11:22:17 2013 From: hzhang at mcs.anl.gov (Hong Zhang) Date: Thu, 10 Jan 2013 11:22:17 -0600 Subject: [petsc-users] Hypre and IC preconditioning In-Reply-To: <50EEED66.5060307@tudelft.nl> References: <50EEED66.5060307@tudelft.nl> Message-ID: Umut: We have incomplete cholesky preconditioner (icc) in petsc, but do not have support for drop tolerance. Hyper and superlu support ILUT, which is an unsymmetric preconditioner, thus cannot be used for cg. Several years ago, I tried to write ilut for petsc, but did not see any convincing benefit of dropping small matrix entries compared with ilu. The development was then abandoned. 
Note, ilut requires almost full matrix numerical factorization for dropping entries; the only trade-off is controlled memory for matrix factors. For numerical stability, row/col pivot is needed in general. There is no math theory that supports dropping small entries would still maintain good approximation of full matrix factorizations. I would suggest using icc instead. Hong > Dear all, > > I would like to test some ideas of mine over the CG method with incomplete > cholesky preconditioner with a drop tolerance. > I checked the Hypre manual for the preconditioner generation, however there > is not a direct IC preconditioner but and ILUT which results in an > unsymmetric preconditioner, is it wise to use this as a preconditioner for > CG in PETSc? > > Best regards, > Umut From knepley at gmail.com Thu Jan 10 11:46:36 2013 From: knepley at gmail.com (Matthew Knepley) Date: Thu, 10 Jan 2013 11:46:36 -0600 Subject: [petsc-users] Determine the positive definiteness In-Reply-To: <7d751366.27a62.13c2558d93c.Coremail.w_ang_temp@163.com> References: <7d751366.27a62.13c2558d93c.Coremail.w_ang_temp@163.com> Message-ID: On Thu, Jan 10, 2013 at 10:44 AM, w_ang_temp wrote: > Hello, > > I want to determine the positive definiteness. So I want to know > > if it is a right way. > > I use "-ksp_type cg -pc_type none". You know, CG does not work for > > indefinite matrices. I get "Linear solve did not converge due to > > DIVERGED_INDEFINITE_MAT". From the information, I think it is indefinite. > > Is it a right way? > Its not full proof, but it could be a diagnostic. Matt > Thanks. > > Jim > > > -- What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. -- Norbert Wiener -------------- next part -------------- An HTML attachment was scrubbed... URL: From dominik at itis.ethz.ch Thu Jan 10 12:09:14 2013 From: dominik at itis.ethz.ch (Dominik Szczerba) Date: Thu, 10 Jan 2013 19:09:14 +0100 Subject: [petsc-users] Petsc 3.3.p5 with HYPRE 2.9.0b In-Reply-To: References: Message-ID: On Thu, Jan 10, 2013 at 5:15 PM, John Fettig wrote: > FWIW, hypre 2.8.0b can be made to compile in windows with MSVC 10 in cygwin. > > John Thanks, yes, I know, but I must compile it natively using MSVC, its beyond my control. PS/ I managed to compile 2.7.0b on my own with lots of hacks, 2.8.0b resists, as posted, 2.9.0b builds smoothly using cmake. D. From dominik at itis.ethz.ch Thu Jan 10 12:10:49 2013 From: dominik at itis.ethz.ch (Dominik Szczerba) Date: Thu, 10 Jan 2013 19:10:49 +0100 Subject: [petsc-users] Petsc 3.3.p5 with HYPRE 2.9.0b In-Reply-To: References: Message-ID: > On Thu, Jan 10, 2013 at 5:15 PM, John Fettig wrote: >> FWIW, hypre 2.8.0b can be made to compile in windows with MSVC 10 in cygwin. >> >> John > > Thanks, yes, I know, but I must compile it natively using MSVC, its > beyond my control. > PS/ I managed to compile 2.7.0b on my own with lots of hacks, 2.8.0b > resists, as posted, 2.9.0b builds smoothly using cmake. Maybe I was not precise enough: what you refer to as "can be made to compile in windows with MSVC 10 in cygwin" is not fully true: it means compiling with MSVC but linking with cygwin, resulting in a cygwin dependency, which is ruled out by the owner of the project that I am working on. D. 
From bsmith at mcs.anl.gov Thu Jan 10 12:42:01 2013 From: bsmith at mcs.anl.gov (Barry Smith) Date: Thu, 10 Jan 2013 12:42:01 -0600 Subject: [petsc-users] binary vtk viewer DMDA In-Reply-To: References: <6C4AE5741C2B874282F58DA136A2CCB8D26AF7@BL2PRD0612MB663.namprd06.prod.outlook.com> <6C4AE5741C2B874282F58DA136A2CCB8D26C9E@BL2PRD0612MB663.namprd06.prod.outlook.com> <6C4AE5741C2B874282F58DA136A2CCB8D27586@BL2PRD0612MB663.namprd06.prod.outlook.com> Message-ID: <5032E85E-B0B5-44D5-BD75-0CC65F19FCF3@mcs.anl.gov> On Jan 10, 2013, at 11:05 AM, Kevin Green wrote: > Personally, I think it should probably be disabled... AFAIK # works for comments, and what are the advantages of having multiple comment characters? Sorry, I thought we already fixed this. I have pushed to petsc-dev that only # are treated as comments > > > > On Thu, Jan 10, 2013 at 12:00 PM, Jed Brown wrote: > Oh, this is the problem with allowing % for comments, but not parsing quotes. Last time we discussed this, I thought we decided that % should not be a comment character. Should we disable it? > > > On Thu, Jan 10, 2013 at 10:57 AM, Kevin Green wrote: > I was just playing around with this in the recent petsc-dev, and noticed the following: > > It works fine when provided from the command line, but when used in a file with -options_file, I get the error: > > [0]PETSC ERROR: --------------------- Error Message ------------------------------------ > [0]PETSC ERROR: ! > [0]PETSC ERROR: -ts_monitor_draw_solution_vtk requires a file template, e.g. filename-%03d.vts! > [0]PETSC ERROR: ------------------------------------------------------------------------ > [0]PETSC ERROR: Petsc Development HG revision: 117b19d7e9971ca4c134171ccd46df7d0d54c1e1 HG Date: Thu Jan 10 09:41:10 2013 -0600 > [0]PETSC ERROR: See docs/changes/index.html for recent updates. > [0]PETSC ERROR: See docs/faq.html for hints about trouble shooting. > [0]PETSC ERROR: See docs/index.html for manual pages. > [0]PETSC ERROR: ------------------------------------------------------------------------ > [0]PETSC ERROR: ./cortex on a arch-linux2-c-debug named kg-ThinkPad-X220-Tablet by kg Thu Jan 10 11:44:39 2013 > [0]PETSC ERROR: Libraries linked from /home/kg/libs/petsc-dev/arch-linux2-c-debug/lib > [0]PETSC ERROR: Configure run at Thu Jan 10 11:05:06 2013 > [0]PETSC ERROR: Configure options --with-debugging=1 --with-x=1 PETSC_ARCH=arch-linux2-c-debug > [0]PETSC ERROR: ------------------------------------------------------------------------ > [0]PETSC ERROR: TSSetFromOptions() line 220 in src/ts/interface/ts.c > [0]PETSC ERROR: main() line 225 in src/main.c > -------------------------------------------------------------------------- > > > This is the first time I've ever had a difference between command line and -options_file behaviour... Is it a bug, or is there something special about using string commands in -options_file that I just don't know about? > > Cheers, > Kevin > > > On Thu, Jan 10, 2013 at 1:12 AM, Jed Brown wrote: > On Wed, Jan 9, 2013 at 3:36 PM, Blaise A Bourdin wrote: > Wouldn't it be trivial for a DMDA? > > It should be, but I was trying to buy some time because I was picking up projects at a seemingly exponential rate this week, despite prior obligations. ;-) > > > From dharmareddy84 at gmail.com Thu Jan 10 13:08:12 2013 From: dharmareddy84 at gmail.com (Dharmendar Reddy) Date: Thu, 10 Jan 2013 13:08:12 -0600 Subject: [petsc-users] Mat Error Message-ID: Hello, I am seeing an error in my code when i switched from pets 3.2-p5 to petsc 3.3-p5. 
Please have a look at the error information below. I do not pass any petsc specific command line options. The following is the subroutine which leads to error message. subroutine initBCSolver_solver(this,comm,SolverType,NumDof,ierr,EqnCtx) use SolverUtils_m implicit none #include "finclude/petsc.h" class(Solver_t), intent(inout) :: this MPI_Comm :: comm integer, intent(in) :: SolverType integer, intent(in) :: NumDof class(*),pointer,intent(inout),optional :: EqnCtx PetscErrorCode :: ierr PetscReal :: rtol PetscReal :: abstol PetscReal :: dtol PetscInt :: maxits ! Local Variables PetscInt :: psize ! problem size PetscScalar :: pfive ! ! Set solver type !if(SolverType /= LINEAR .and. SolverType /= NonLinear) then ! print*,'Error:: Solver must be linear or Nonlinear',solverType ! stop !end if this%comm = comm this%SolverType = SolverType ! Set problem size this%numDof = numDof psize = this%numDof ! 3 ! intiate Storage vectors call VecCreateSeq(this%comm,psize,this%sol,ierr) call VecSetOption(this%sol, VEC_IGNORE_NEGATIVE_INDICES,PETSC_TRUE,ierr) call VecDuplicate(this%sol,this%fxn,ierr) this%flag_vec_sol = .true. this%flag_vec_fxn = .true. select case(this%SolverType) case(Linear) ! create the linear operator call MatCreate(this%comm,this%A,ierr) call MatSetSizes(this%A,PETSC_DECIDE,PETSC_DECIDE,psize,psize,ierr) call MatSetFromOptions(this%A,ierr) ! create storage for rhs call VecDuplicate(this%sol,this%rhs,ierr) ! Create KSP context call KSPCreate(this%comm,this%ksp,ierr) this%flag_ksp_ksp = .true. rtol = PETSC_DEFAULT_DOUBLE_PRECISION abstol = PETSC_DEFAULT_DOUBLE_PRECISION dtol = PETSC_DEFAULT_DOUBLE_PRECISION maxIts = PETSC_DEFAULT_INTEGER call KSPSetTolerances(this%ksp,rtol,abstol,dtol,maxIts,ierr); call KSPSetFromOptions(this%ksp,ierr) call KSPSetOperators(this%ksp,this%A,this%A,DIFFERENT_NONZERO_PATTERN,ierr) case(nonLinear) call MatCreate(this%comm,this%Jac,ierr) this%flag_mat_Jac = .true. call MatSetSizes(this%Jac,PETSC_DECIDE,PETSC_DECIDE,psize,psize,ierr) call MatSetFromOptions(this%Jac,ierr) ! create snes context call SNESCreate(this%comm,this%snes,ierr) this%flag_snes_snes = .true. ! Set function evaluation routine and vector ! use the defualt FormFunction_snes call SNESSetFunction(this%snes,this%fxn,FormFunction_snes,this,ierr) ! Set Jacobian matrix data structure and Jacobian evaluation routine call SNESSetJacobian(this%snes,this%Jac,this%Jac,FormJacobian_snes,this,ierr) !call SNESSetJacobian(this%snes,this%Jac,this%Jac,PETSC_NULL_FUNCTION,PETSC_NULL_OBJECT,ierr) call SNESSetTolerances(this%snes,1e-15*25.6E-3,PETSC_DEFAULT_DOUBLE_PRECISION,PETSC_DEFAULT_DOUBLE_PRECISION,25000,100000,ierr); call SNESSetFromOptions(this%snes,ierr) end select !call VecDuplicate(this%sol,this%bOhmic,ierr) pfive = 0.0 call VecSet(this%sol,pfive,ierr) ! set Intial Guess end subroutine initBCSolver_solver end module BoundaryConditionUtils_m [0]PETSC ERROR: --------------------- Error Message ------------------------------------ [0]PETSC ERROR: Object is in wrong state! [0]PETSC ERROR: Must call MatXXXSetPreallocation() or MatSetUp() on argument 1 "mat" before MatGetFactorAvailable()! [0]PETSC ERROR: ------------------------------------------------------------------------ [0]PETSC ERROR: Petsc Release Version 3.3.0, Patch 5, Sat Dec 1 15:10:41 CST 2012 [0]PETSC ERROR: See docs/changes/index.html for recent updates. [0]PETSC ERROR: See docs/faq.html for hints about trouble shooting. [0]PETSC ERROR: See docs/index.html for manual pages. 
[0]PETSC ERROR: ------------------------------------------------------------------------ [0]PETSC ERROR: /home1/00924/Reddy135/projects/utgds/test/IIIVTunnelFET2D/PoisTest on a sandybrid named login2.stampede.tacc.utexas.edu by Reddy135 Thu Jan 10 12:50:19 2013 [0]PETSC ERROR: Libraries linked from /opt/apps/intel13/mvapich2_1_9/petsc/3.3/sandybridge-debug/lib [0]PETSC ERROR: Configure run at Mon Jan 7 14:42:13 2013 [0]PETSC ERROR: Configure options --with-x=0 -with-pic --with-external-packages-dir=/opt/apps/intel13/mvapich2_1_9/petsc/3.3/externalpackages --with-mpi-compilers=1 --with-mpi-dir=/opt/apps/intel13/mvapich2/1.9 --with-scalar-type=real --with-dynamic-loading=0 --with-shared-libraries=1 --with-spai=1 --download-spai --with-hypre=1 --download-hypre --with-mumps=1 --download-mumps --with-scalapack=1 --download-scalapack --with-blacs=1 --download-blacs --with-spooles=1 --download-spooles --with-superlu=1 --download-superlu --with-superlu_dist=1 --download-superlu_dist --with-parmetis=1 --download-parmetis --with-metis=1 --download-metis --with-debugging=yes --with-blas-lapack-dir=/opt/apps/intel/13/composer_xe_2013.1.117/mkl/lib/intel64 --with-mpiexec=mpirun_rsh --COPTFLAGS= --CXXOPTFLAGS= --FOPTFLAGS= [0]PETSC ERROR: ------------------------------------------------------------------------ [0]PETSC ERROR: MatGetFactorAvailable() line 3921 in /opt/apps/intel13/mvapich2_1_9/petsc/3.3/src/mat/interface/matrix.c [0]PETSC ERROR: PCGetDefaultType_Private() line 26 in /opt/apps/intel13/mvapich2_1_9/petsc/3.3/src/ksp/pc/interface/precon.c [0]PETSC ERROR: PCSetFromOptions() line 181 in /opt/apps/intel13/mvapich2_1_9/petsc/3.3/src/ksp/pc/interface/pcset.c [0]PETSC ERROR: KSPSetFromOptions() line 287 in /opt/apps/intel13/mvapich2_1_9/petsc/3.3/src/ksp/ksp/interface/itcl.c [0]PETSC ERROR: SNESSetFromOptions() line 678 in /opt/apps/intel13/mvapich2_1_9/petsc/3.3/src/snes/interface/snes.c -- ----------------------------------------------------- Dharmendar Reddy Palle Graduate Student Microelectronics Research center, University of Texas at Austin, 10100 Burnet Road, Bldg. 160 MER 2.608F, TX 78758-4445 e-mail: dharmareddy84 at gmail.com Phone: +1-512-350-9082 United States of America. Homepage: https://webspace.utexas.edu/~dpr342 -------------- next part -------------- An HTML attachment was scrubbed... URL: From jedbrown at mcs.anl.gov Thu Jan 10 13:10:15 2013 From: jedbrown at mcs.anl.gov (Jed Brown) Date: Thu, 10 Jan 2013 13:10:15 -0600 Subject: [petsc-users] Mat Error In-Reply-To: References: Message-ID: * You MUST now call MatXXXSetPreallocation() or MatSetUp() on any matrix you create directly (not using DMCreateMatrix()) before calling MatSetValues(), MatSetValuesBlocked() etc. http://www.mcs.anl.gov/petsc/documentation/changes/33.html On Thu, Jan 10, 2013 at 1:08 PM, Dharmendar Reddy wrote: > Hello, > I am seeing an error in my code when i switched from pets 3.2-p5 > to petsc 3.3-p5. Please have a look at the error information below. I do > not pass any petsc specific command line options. > > The following is the subroutine which leads to error message. > > subroutine initBCSolver_solver(this,comm,SolverType,NumDof,ierr,EqnCtx) > use SolverUtils_m > implicit none > #include "finclude/petsc.h" > class(Solver_t), intent(inout) :: this > MPI_Comm :: comm > integer, intent(in) :: SolverType > integer, intent(in) :: NumDof > class(*),pointer,intent(inout),optional :: EqnCtx > PetscErrorCode :: ierr > PetscReal :: rtol > PetscReal :: abstol > PetscReal :: dtol > PetscInt :: maxits > ! 
Local Variables > PetscInt :: psize ! problem size > PetscScalar :: pfive ! > ! Set solver type > !if(SolverType /= LINEAR .and. SolverType /= NonLinear) then > ! print*,'Error:: Solver must be linear or Nonlinear',solverType > ! stop > !end if > this%comm = comm > this%SolverType = SolverType > ! Set problem size > this%numDof = numDof > psize = this%numDof ! 3 > ! intiate Storage vectors > call VecCreateSeq(this%comm,psize,this%sol,ierr) > call VecSetOption(this%sol, > VEC_IGNORE_NEGATIVE_INDICES,PETSC_TRUE,ierr) > call VecDuplicate(this%sol,this%fxn,ierr) > this%flag_vec_sol = .true. > this%flag_vec_fxn = .true. > > select case(this%SolverType) > case(Linear) > ! create the linear operator > call MatCreate(this%comm,this%A,ierr) > call MatSetSizes(this%A,PETSC_DECIDE,PETSC_DECIDE,psize,psize,ierr) > call MatSetFromOptions(this%A,ierr) > ! create storage for rhs > call VecDuplicate(this%sol,this%rhs,ierr) > ! Create KSP context > call KSPCreate(this%comm,this%ksp,ierr) > this%flag_ksp_ksp = .true. > rtol = PETSC_DEFAULT_DOUBLE_PRECISION > abstol = PETSC_DEFAULT_DOUBLE_PRECISION > dtol = PETSC_DEFAULT_DOUBLE_PRECISION > maxIts = PETSC_DEFAULT_INTEGER > call KSPSetTolerances(this%ksp,rtol,abstol,dtol,maxIts,ierr); > call KSPSetFromOptions(this%ksp,ierr) > call > KSPSetOperators(this%ksp,this%A,this%A,DIFFERENT_NONZERO_PATTERN,ierr) > case(nonLinear) > call MatCreate(this%comm,this%Jac,ierr) > this%flag_mat_Jac = .true. > call > MatSetSizes(this%Jac,PETSC_DECIDE,PETSC_DECIDE,psize,psize,ierr) > call MatSetFromOptions(this%Jac,ierr) > ! create snes context > call SNESCreate(this%comm,this%snes,ierr) > this%flag_snes_snes = .true. > ! Set function evaluation routine and vector > ! use the defualt FormFunction_snes > call > SNESSetFunction(this%snes,this%fxn,FormFunction_snes,this,ierr) > ! Set Jacobian matrix data structure and Jacobian evaluation > routine > > call > SNESSetJacobian(this%snes,this%Jac,this%Jac,FormJacobian_snes,this,ierr) > !call > SNESSetJacobian(this%snes,this%Jac,this%Jac,PETSC_NULL_FUNCTION,PETSC_NULL_OBJECT,ierr) > call > SNESSetTolerances(this%snes,1e-15*25.6E-3,PETSC_DEFAULT_DOUBLE_PRECISION,PETSC_DEFAULT_DOUBLE_PRECISION,25000,100000,ierr); > call SNESSetFromOptions(this%snes,ierr) > end select > > !call VecDuplicate(this%sol,this%bOhmic,ierr) > > pfive = 0.0 > call VecSet(this%sol,pfive,ierr) ! set Intial Guess > end subroutine initBCSolver_solver > > end module BoundaryConditionUtils_m > > > > > [0]PETSC ERROR: --------------------- Error Message > ------------------------------------ > [0]PETSC ERROR: Object is in wrong state! > [0]PETSC ERROR: Must call MatXXXSetPreallocation() or MatSetUp() on > argument 1 "mat" before MatGetFactorAvailable()! > [0]PETSC ERROR: > ------------------------------------------------------------------------ > [0]PETSC ERROR: Petsc Release Version 3.3.0, Patch 5, Sat Dec 1 15:10:41 > CST 2012 > [0]PETSC ERROR: See docs/changes/index.html for recent updates. > [0]PETSC ERROR: See docs/faq.html for hints about trouble shooting. > [0]PETSC ERROR: See docs/index.html for manual pages. 
> [0]PETSC ERROR: > ------------------------------------------------------------------------ > [0]PETSC ERROR: > /home1/00924/Reddy135/projects/utgds/test/IIIVTunnelFET2D/PoisTest on a > sandybrid named login2.stampede.tacc.utexas.edu by Reddy135 Thu Jan 10 > 12:50:19 2013 > [0]PETSC ERROR: Libraries linked from > /opt/apps/intel13/mvapich2_1_9/petsc/3.3/sandybridge-debug/lib > [0]PETSC ERROR: Configure run at Mon Jan 7 14:42:13 2013 > [0]PETSC ERROR: Configure options --with-x=0 -with-pic > --with-external-packages-dir=/opt/apps/intel13/mvapich2_1_9/petsc/3.3/externalpackages > --with-mpi-compilers=1 --with-mpi-dir=/opt/apps/intel13/mvapich2/1.9 > --with-scalar-type=real --with-dynamic-loading=0 --with-shared-libraries=1 > --with-spai=1 --download-spai --with-hypre=1 --download-hypre > --with-mumps=1 --download-mumps --with-scalapack=1 --download-scalapack > --with-blacs=1 --download-blacs --with-spooles=1 --download-spooles > --with-superlu=1 --download-superlu --with-superlu_dist=1 > --download-superlu_dist --with-parmetis=1 --download-parmetis > --with-metis=1 --download-metis --with-debugging=yes > --with-blas-lapack-dir=/opt/apps/intel/13/composer_xe_2013.1.117/mkl/lib/intel64 > --with-mpiexec=mpirun_rsh --COPTFLAGS= --CXXOPTFLAGS= --FOPTFLAGS= > [0]PETSC ERROR: > ------------------------------------------------------------------------ > [0]PETSC ERROR: MatGetFactorAvailable() line 3921 in > /opt/apps/intel13/mvapich2_1_9/petsc/3.3/src/mat/interface/matrix.c > [0]PETSC ERROR: PCGetDefaultType_Private() line 26 in > /opt/apps/intel13/mvapich2_1_9/petsc/3.3/src/ksp/pc/interface/precon.c > [0]PETSC ERROR: PCSetFromOptions() line 181 in > /opt/apps/intel13/mvapich2_1_9/petsc/3.3/src/ksp/pc/interface/pcset.c > [0]PETSC ERROR: KSPSetFromOptions() line 287 in > /opt/apps/intel13/mvapich2_1_9/petsc/3.3/src/ksp/ksp/interface/itcl.c > [0]PETSC ERROR: SNESSetFromOptions() line 678 in > /opt/apps/intel13/mvapich2_1_9/petsc/3.3/src/snes/interface/snes.c > > > > -- > ----------------------------------------------------- > Dharmendar Reddy Palle > Graduate Student > Microelectronics Research center, > University of Texas at Austin, > 10100 Burnet Road, Bldg. 160 > MER 2.608F, TX 78758-4445 > e-mail: dharmareddy84 at gmail.com > Phone: +1-512-350-9082 > United States of America. > Homepage: https://webspace.utexas.edu/~dpr342 > -------------- next part -------------- An HTML attachment was scrubbed... URL: From john.fettig at gmail.com Thu Jan 10 13:09:57 2013 From: john.fettig at gmail.com (John Fettig) Date: Thu, 10 Jan 2013 14:09:57 -0500 Subject: [petsc-users] Petsc 3.3.p5 with HYPRE 2.9.0b In-Reply-To: References: Message-ID: On Thu, Jan 10, 2013 at 1:10 PM, Dominik Szczerba wrote: > > On Thu, Jan 10, 2013 at 5:15 PM, John Fettig > wrote: > >> FWIW, hypre 2.8.0b can be made to compile in windows with MSVC 10 in > cygwin. > >> > >> John > > > > Thanks, yes, I know, but I must compile it natively using MSVC, its > > beyond my control. > > PS/ I managed to compile 2.7.0b on my own with lots of hacks, 2.8.0b > > resists, as posted, 2.9.0b builds smoothly using cmake. > > Maybe I was not precise enough: what you refer to as "can be made to > compile in windows with MSVC 10 in cygwin" is not fully true: it means > compiling with MSVC but linking with cygwin, resulting in a cygwin > dependency, which is ruled out by the owner of the project that I am > working on. > I'm not sure I understand. The resulting library has no dependency on cygwin, and is compiled entirely by cl. 
The library itself is built by "lib", the microsoft library tool. Are you saying that requiring cygwin as the build environment for hypre is unacceptable? John -------------- next part -------------- An HTML attachment was scrubbed... URL: From s_g at berkeley.edu Thu Jan 10 13:18:09 2013 From: s_g at berkeley.edu (Sanjay Govindjee) Date: Thu, 10 Jan 2013 11:18:09 -0800 Subject: [petsc-users] Dealing with 2nd Derivatives on Boundaries In-Reply-To: References: <50EED832.1010604@ed.ac.uk> <50EEE1E3.4020602@ed.ac.uk> Message-ID: <50EF13F1.7070209@berkeley.edu> Perhaps more simply, consider the 1D version of your problem: u''=0 on [0,1] with "Boundary Conditions" u'(0)=a and u'(1) =b The solution is u(x) = a x + c (for any! value of c). This is not going to solve happily :( On 1/10/13 8:27 AM, Jed Brown wrote: > ... and you describe a "boundary condition" as "second derivative is > zero", which is not a boundary condition, making your problem > ill-posed. (Indeed, consider the family of problems in which you > extend the domain in that patch and apply _any_ boundary conditions in > the extended domain. All of those solutions are also solutions of your > problem with "second derivative is zero" on your "boundary".) > > > On Thu, Jan 10, 2013 at 9:44 AM, David Scott > wrote: > > All right, I'll say it differently. I wish to solve > div.grad phi = 0 > with the boundary conditions that I have described. > > > On 10/01/2013 15:28, Jed Brown wrote: > > Second derivative is not a boundary condition for Poisson; > that is the > equation satisfied in the interior. Unless you are intentionally > attempting to apply a certain kind of outflow boundary > condition (i.e., > you're NOT solving Laplace) then there is a problem with your > formulation. I suggest you revisit the continuum problem and > establish > that it is well-posed before concerning yourself with > implementation > details. > > > On Thu, Jan 10, 2013 at 9:03 AM, David Scott > >> wrote: > > Hello, > > I am solving Poisson's equation (actually Laplace's > equation in this > simple test case) on a 3D structured grid. The boundary > condition in > the first dimension is periodic. In the others there are > Von Neumann > conditions except for one surface where the second > derivative is > zero. I have specified DMDA_BOUNDARY_NONE in these two > dimensions > and deal with the boundary conditions by constructing an > appropriate > matrix. Here is an extract from the Fortran code: > > if (j==0) then > ! Von Neumann boundary conditions on y=0 > boundary. > v(1) = 1 > col(MatStencil_i, 1) = i > col(MatStencil_j, 1) = j > col(MatStencil_k, 1) = k > v(2) = -1 > col(MatStencil_i, 2) = i > col(MatStencil_j, 2) = j+1 > col(MatStencil_k, 2) = k > call MatSetValuesStencil(B, 1, row, 2, > col, v, > INSERT_VALUES, ierr) > else if (j==maxl) then > ! Boundary condition on y=maxl boundary. > v(1) = 1 > col(MatStencil_i, 1) = i > col(MatStencil_j, 1) = j > col(MatStencil_k, 1) = k > v(2) = -2 > col(MatStencil_i, 2) = i > col(MatStencil_j, 2) = j-1 > col(MatStencil_k, 2) = k > v(3) = 1 > col(MatStencil_i, 3) = i > col(MatStencil_j, 3) = j-2 > col(MatStencil_k, 3) = k > call MatSetValuesStencil(B, 1, row, 3, > col, v, > INSERT_VALUES, ierr) > else if (k==0) then > > > Here the second clause deals with the second derivative on > the boundary. > > In order for this code to work I have to set the stencil > width to 2 > even though 'j-2' refers to an interior, non-halo > point in the grid. 
This leads to larger halo swaps than > would be > required if a stencil width of 1 could be used. > > Is there a better way to encode the problem? > > David > > -- > The University of Edinburgh is a charitable body, > registered in > Scotland, with registration number SC005336. > > > > > -- > Dr. D. M. Scott > Applications Consultant > Edinburgh Parallel Computing Centre > Tel. 0131 650 5921 > > > The University of Edinburgh is a charitable body, registered in > Scotland, with registration number SC005336. > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From s_g at berkeley.edu Thu Jan 10 13:46:57 2013 From: s_g at berkeley.edu (Sanjay Govindjee) Date: Thu, 10 Jan 2013 11:46:57 -0800 Subject: [petsc-users] Dealing with 2nd Derivatives on Boundaries In-Reply-To: <50EF13F1.7070209@berkeley.edu> References: <50EED832.1010604@ed.ac.uk> <50EEE1E3.4020602@ed.ac.uk> <50EF13F1.7070209@berkeley.edu> Message-ID: <50EF1AB1.3040105@berkeley.edu> That should have been u''(1)=b for the second "BC" On 1/10/13 11:18 AM, Sanjay Govindjee wrote: > Perhaps more simply, consider the 1D version of your problem: > > u''=0 on [0,1] with "Boundary Conditions" u'(0)=a and u'(1) =b > > The solution is u(x) = a x + c (for any! value of c). > This is not going to solve happily :( > > > On 1/10/13 8:27 AM, Jed Brown wrote: >> ... and you describe a "boundary condition" as "second derivative is >> zero", which is not a boundary condition, making your problem >> ill-posed. (Indeed, consider the family of problems in which you >> extend the domain in that patch and apply _any_ boundary conditions >> in the extended domain. All of those solutions are also solutions of >> your problem with "second derivative is zero" on your "boundary".) >> >> >> On Thu, Jan 10, 2013 at 9:44 AM, David Scott > > wrote: >> >> All right, I'll say it differently. I wish to solve >> div.grad phi = 0 >> with the boundary conditions that I have described. >> >> >> On 10/01/2013 15:28, Jed Brown wrote: >> >> Second derivative is not a boundary condition for Poisson; >> that is the >> equation satisfied in the interior. Unless you are intentionally >> attempting to apply a certain kind of outflow boundary >> condition (i.e., >> you're NOT solving Laplace) then there is a problem with your >> formulation. I suggest you revisit the continuum problem and >> establish >> that it is well-posed before concerning yourself with >> implementation >> details. >> >> >> On Thu, Jan 10, 2013 at 9:03 AM, David Scott >> >> >> wrote: >> >> Hello, >> >> I am solving Poisson's equation (actually Laplace's >> equation in this >> simple test case) on a 3D structured grid. The boundary >> condition in >> the first dimension is periodic. In the others there are >> Von Neumann >> conditions except for one surface where the second >> derivative is >> zero. I have specified DMDA_BOUNDARY_NONE in these two >> dimensions >> and deal with the boundary conditions by constructing an >> appropriate >> matrix. Here is an extract from the Fortran code: >> >> if (j==0) then >> ! Von Neumann boundary conditions on y=0 >> boundary. >> v(1) = 1 >> col(MatStencil_i, 1) = i >> col(MatStencil_j, 1) = j >> col(MatStencil_k, 1) = k >> v(2) = -1 >> col(MatStencil_i, 2) = i >> col(MatStencil_j, 2) = j+1 >> col(MatStencil_k, 2) = k >> call MatSetValuesStencil(B, 1, row, 2, >> col, v, >> INSERT_VALUES, ierr) >> else if (j==maxl) then >> ! Boundary condition on y=maxl boundary. 
>> v(1) = 1 >> col(MatStencil_i, 1) = i >> col(MatStencil_j, 1) = j >> col(MatStencil_k, 1) = k >> v(2) = -2 >> col(MatStencil_i, 2) = i >> col(MatStencil_j, 2) = j-1 >> col(MatStencil_k, 2) = k >> v(3) = 1 >> col(MatStencil_i, 3) = i >> col(MatStencil_j, 3) = j-2 >> col(MatStencil_k, 3) = k >> call MatSetValuesStencil(B, 1, row, 3, >> col, v, >> INSERT_VALUES, ierr) >> else if (k==0) then >> >> >> Here the second clause deals with the second derivative >> on the boundary. >> >> In order for this code to work I have to set the stencil >> width to 2 >> even though 'j-2' refers to an interior, non-halo >> point in the grid. This leads to larger halo swaps than >> would be >> required if a stencil width of 1 could be used. >> >> Is there a better way to encode the problem? >> >> David >> >> -- >> The University of Edinburgh is a charitable body, >> registered in >> Scotland, with registration number SC005336. >> >> >> >> >> -- >> Dr. D. M. Scott >> Applications Consultant >> Edinburgh Parallel Computing Centre >> Tel. 0131 650 5921 >> >> >> The University of Edinburgh is a charitable body, registered in >> Scotland, with registration number SC005336. >> >> > -- ----------------------------------------------- Sanjay Govindjee, PhD, PE Professor of Civil Engineering Vice Chair for Academic Affairs 779 Davis Hall Structural Engineering, Mechanics and Materials Department of Civil Engineering University of California Berkeley, CA 94720-1710 Voice: +1 510 642 6060 FAX: +1 510 643 5264 s_g at berkeley.edu http://www.ce.berkeley.edu/~sanjay ----------------------------------------------- New Books: Engineering Mechanics of Deformable Solids: A Presentation with Exercises http://www.oup.com/us/catalog/general/subject/Physics/MaterialsScience/?view=usa&ci=9780199651641 http://ukcatalogue.oup.com/product/9780199651641.do http://amzn.com/0199651647 Engineering Mechanics 3 (Dynamics) http://www.springer.com/materials/mechanics/book/978-3-642-14018-1 http://amzn.com/3642140181 ----------------------------------------------- -------------- next part -------------- An HTML attachment was scrubbed... URL: From dominik at itis.ethz.ch Thu Jan 10 13:49:43 2013 From: dominik at itis.ethz.ch (Dominik Szczerba) Date: Thu, 10 Jan 2013 20:49:43 +0100 Subject: [petsc-users] Petsc 3.3.p5 with HYPRE 2.9.0b In-Reply-To: References: Message-ID: Please run dumpbin on the library to verify if this is a native or cygwin object. D> On Thu, Jan 10, 2013 at 8:09 PM, John Fettig wrote: > On Thu, Jan 10, 2013 at 1:10 PM, Dominik Szczerba > wrote: >> >> > On Thu, Jan 10, 2013 at 5:15 PM, John Fettig >> > wrote: >> >> FWIW, hypre 2.8.0b can be made to compile in windows with MSVC 10 in >> >> cygwin. >> >> >> >> John >> > >> > Thanks, yes, I know, but I must compile it natively using MSVC, its >> > beyond my control. >> > PS/ I managed to compile 2.7.0b on my own with lots of hacks, 2.8.0b >> > resists, as posted, 2.9.0b builds smoothly using cmake. >> >> Maybe I was not precise enough: what you refer to as "can be made to >> compile in windows with MSVC 10 in cygwin" is not fully true: it means >> compiling with MSVC but linking with cygwin, resulting in a cygwin >> dependency, which is ruled out by the owner of the project that I am >> working on. > > > I'm not sure I understand. The resulting library has no dependency on > cygwin, and is compiled entirely by cl. The library itself is built by > "lib", the microsoft library tool. Are you saying that requiring cygwin as > the build environment for hypre is unacceptable? 
> > John > From bsmith at mcs.anl.gov Thu Jan 10 13:56:25 2013 From: bsmith at mcs.anl.gov (Barry Smith) Date: Thu, 10 Jan 2013 13:56:25 -0600 Subject: [petsc-users] Determine the positive definiteness In-Reply-To: References: <7d751366.27a62.13c2558d93c.Coremail.w_ang_temp@163.com> Message-ID: <8671C38B-AA8A-4153-A8B7-05BD71B6FF02@mcs.anl.gov> If cg says indefinite then it is indefinite if cg runs and does not say it is indefinite it may still be indefinite (though probably not all that likely). Barry On Jan 10, 2013, at 11:46 AM, Matthew Knepley wrote: > On Thu, Jan 10, 2013 at 10:44 AM, w_ang_temp wrote: > Hello, > > I want to determine the positive definiteness. So I want to know > > if it is a right way. > > I use "-ksp_type cg -pc_type none". You know, CG does not work for > > indefinite matrices. I get "Linear solve did not converge due to > > DIVERGED_INDEFINITE_MAT". From the information, I think it is indefinite. > > Is it a right way? > > > Its not full proof, but it could be a diagnostic. > > Matt > > Thanks. > > Jim > > > > > > > -- > What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. > -- Norbert Wiener From d.scott at ed.ac.uk Thu Jan 10 14:35:28 2013 From: d.scott at ed.ac.uk (David Scott) Date: Thu, 10 Jan 2013 20:35:28 +0000 Subject: [petsc-users] Dealing with 2nd Derivatives on Boundaries In-Reply-To: <50EF13F1.7070209@berkeley.edu> References: <50EED832.1010604@ed.ac.uk> <50EEE1E3.4020602@ed.ac.uk> <50EF13F1.7070209@berkeley.edu> Message-ID: <50EF2610.4040204@ed.ac.uk> Thanks for the replies. I obviously need to think about the problem more. I was aware that the solution was not fully determined and I had created a MatNullSpace to deal with the arbitrary constant. I have contacted the original author of the code (I am just modifying it to use PETSc) to find out more about the formulation of the problem. I should point out that the the actual problem I want to deal with does not have a zero RHS. This systems of equations is only part of the complete system and only the gradient of the solution is required elsewhere so the arbitrary constant has no physical significance. David On 10/01/2013 19:18, Sanjay Govindjee wrote: > Perhaps more simply, consider the 1D version of your problem: > > u''=0 on [0,1] with "Boundary Conditions" u'(0)=a and u'(1) =b > > The solution is u(x) = a x + c (for any! value of c). > This is not going to solve happily :( > > > On 1/10/13 8:27 AM, Jed Brown wrote: >> ... and you describe a "boundary condition" as "second derivative is >> zero", which is not a boundary condition, making your problem >> ill-posed. (Indeed, consider the family of problems in which you >> extend the domain in that patch and apply _any_ boundary conditions in >> the extended domain. All of those solutions are also solutions of your >> problem with "second derivative is zero" on your "boundary".) >> >> >> On Thu, Jan 10, 2013 at 9:44 AM, David Scott > > wrote: >> >> All right, I'll say it differently. I wish to solve >> div.grad phi = 0 >> with the boundary conditions that I have described. >> >> >> On 10/01/2013 15:28, Jed Brown wrote: >> >> Second derivative is not a boundary condition for Poisson; >> that is the >> equation satisfied in the interior. Unless you are intentionally >> attempting to apply a certain kind of outflow boundary >> condition (i.e., >> you're NOT solving Laplace) then there is a problem with your >> formulation. 
I suggest you revisit the continuum problem and >> establish >> that it is well-posed before concerning yourself with >> implementation >> details. >> >> >> On Thu, Jan 10, 2013 at 9:03 AM, David Scott > >> >> wrote: >> >> Hello, >> >> I am solving Poisson's equation (actually Laplace's >> equation in this >> simple test case) on a 3D structured grid. The boundary >> condition in >> the first dimension is periodic. In the others there are >> Von Neumann >> conditions except for one surface where the second >> derivative is >> zero. I have specified DMDA_BOUNDARY_NONE in these two >> dimensions >> and deal with the boundary conditions by constructing an >> appropriate >> matrix. Here is an extract from the Fortran code: >> >> if (j==0) then >> ! Von Neumann boundary conditions on y=0 >> boundary. >> v(1) = 1 >> col(MatStencil_i, 1) = i >> col(MatStencil_j, 1) = j >> col(MatStencil_k, 1) = k >> v(2) = -1 >> col(MatStencil_i, 2) = i >> col(MatStencil_j, 2) = j+1 >> col(MatStencil_k, 2) = k >> call MatSetValuesStencil(B, 1, row, 2, >> col, v, >> INSERT_VALUES, ierr) >> else if (j==maxl) then >> ! Boundary condition on y=maxl boundary. >> v(1) = 1 >> col(MatStencil_i, 1) = i >> col(MatStencil_j, 1) = j >> col(MatStencil_k, 1) = k >> v(2) = -2 >> col(MatStencil_i, 2) = i >> col(MatStencil_j, 2) = j-1 >> col(MatStencil_k, 2) = k >> v(3) = 1 >> col(MatStencil_i, 3) = i >> col(MatStencil_j, 3) = j-2 >> col(MatStencil_k, 3) = k >> call MatSetValuesStencil(B, 1, row, 3, >> col, v, >> INSERT_VALUES, ierr) >> else if (k==0) then >> >> >> Here the second clause deals with the second derivative on >> the boundary. >> >> In order for this code to work I have to set the stencil >> width to 2 >> even though 'j-2' refers to an interior, non-halo >> point in the grid. This leads to larger halo swaps than >> would be >> required if a stencil width of 1 could be used. >> >> Is there a better way to encode the problem? >> >> David >> >> -- >> The University of Edinburgh is a charitable body, >> registered in >> Scotland, with registration number SC005336. >> >> >> >> >> -- >> Dr. D. M. Scott >> Applications Consultant >> Edinburgh Parallel Computing Centre >> Tel. 0131 650 5921 >> >> >> The University of Edinburgh is a charitable body, registered in >> Scotland, with registration number SC005336. >> >> > -- Dr. D. M. Scott Applications Consultant Edinburgh Parallel Computing Centre Tel. 0131 650 5921 The University of Edinburgh is a charitable body, registered in Scotland, with registration number SC005336. From jedbrown at mcs.anl.gov Thu Jan 10 14:39:17 2013 From: jedbrown at mcs.anl.gov (Jed Brown) Date: Thu, 10 Jan 2013 14:39:17 -0600 Subject: [petsc-users] Dealing with 2nd Derivatives on Boundaries In-Reply-To: <50EF2610.4040204@ed.ac.uk> References: <50EED832.1010604@ed.ac.uk> <50EEE1E3.4020602@ed.ac.uk> <50EF13F1.7070209@berkeley.edu> <50EF2610.4040204@ed.ac.uk> Message-ID: On Thu, Jan 10, 2013 at 2:35 PM, David Scott wrote: > Thanks for the replies. I obviously need to think about the problem more. > I was aware that the solution was not fully determined and I had created a > MatNullSpace to deal with the arbitrary constant. > > I have contacted the original author of the code (I am just modifying it > to use PETSc) to find out more about the formulation of the problem. I > should point out that the the actual problem I want to deal with does not > have a zero RHS. 
> > This systems of equations is only part of the complete system and only the > gradient of the solution is required elsewhere so the arbitrary constant > has no physical significance. > You are in 2D and the dimension of your null space is the number of grid points on that "non-boundary". It can change your solution in a myriad of ways. -------------- next part -------------- An HTML attachment was scrubbed... URL: From john.fettig at gmail.com Thu Jan 10 15:44:36 2013 From: john.fettig at gmail.com (John Fettig) Date: Thu, 10 Jan 2013 16:44:36 -0500 Subject: [petsc-users] Petsc 3.3.p5 with HYPRE 2.9.0b In-Reply-To: References: Message-ID: $ dumpbin.exe libHYPRE.lib Microsoft (R) COFF/PE Dumper Version 10.00.40219.01 Copyright (C) Microsoft Corporation. All rights reserved. Dump of file libHYPRE.lib File Type: LIBRARY Summary A198 .bss 18C .data 11B28 .debug$S 135E7 .drectve CE64 .pdata F829 .rdata 2146A6 .text 12720 .xdata On Thu, Jan 10, 2013 at 2:49 PM, Dominik Szczerba wrote: > Please run dumpbin on the library to verify if this is a native or > cygwin object. > > D> > > On Thu, Jan 10, 2013 at 8:09 PM, John Fettig > wrote: > > On Thu, Jan 10, 2013 at 1:10 PM, Dominik Szczerba > > wrote: > >> > >> > On Thu, Jan 10, 2013 at 5:15 PM, John Fettig > >> > wrote: > >> >> FWIW, hypre 2.8.0b can be made to compile in windows with MSVC 10 in > >> >> cygwin. > >> >> > >> >> John > >> > > >> > Thanks, yes, I know, but I must compile it natively using MSVC, its > >> > beyond my control. > >> > PS/ I managed to compile 2.7.0b on my own with lots of hacks, 2.8.0b > >> > resists, as posted, 2.9.0b builds smoothly using cmake. > >> > >> Maybe I was not precise enough: what you refer to as "can be made to > >> compile in windows with MSVC 10 in cygwin" is not fully true: it means > >> compiling with MSVC but linking with cygwin, resulting in a cygwin > >> dependency, which is ruled out by the owner of the project that I am > >> working on. > > > > > > I'm not sure I understand. The resulting library has no dependency on > > cygwin, and is compiled entirely by cl. The library itself is built by > > "lib", the microsoft library tool. Are you saying that requiring cygwin > as > > the build environment for hypre is unacceptable? > > > > John > > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From tisaac at ices.utexas.edu Thu Jan 10 19:26:39 2013 From: tisaac at ices.utexas.edu (Tobin Isaac) Date: Thu, 10 Jan 2013 19:26:39 -0600 Subject: [petsc-users] ML options In-Reply-To: References: <50EA7147.3060304@berkeley.edu> <69F9EFAE-5E06-4AFA-9AD6-E54E8192819D@columbia.edu> Message-ID: <20130111012639.GA21767@ices.utexas.edu> On Mon, Jan 07, 2013 at 09:36:05AM -0600, Jed Brown wrote: > On Mon, Jan 7, 2013 at 9:09 AM, Mark F. Adams wrote: > > > ex56 is a simple 3D elasticity problem. There is a runex56 target that > > uses GAMG and a runex56_ml. These have a generic parameters and ML and > > GAMG work well. > > > > The eigen estimates could be bad. This can cause death. I've found that > > CG converges to the largest eigenvalue faster than the default GMRES so I > > use: > > > > -gamg_est_ksp_max_it 10 # this is the default, you could increase this to > > test > > -gamg_est_ksp_type cg > > > > Jed could tell you how to set this for ML. > > > > ML isn't using eigenvalue estimation (doesn't expose the algorithm). Sanjay > is using the default smoother (Richardson + SOR) rather than > chebyshev/pbjacobi. Which version of petsc is being used? 
I submitted a bug fix for Richardson + SOR as a smoother for inodes, but I don't know which versions of petsc have integrated it. It could be the same bug. > > > > > > > > > > On Jan 7, 2013, at 8:49 AM, Jed Brown wrote: > > > > Could we get an example matrix exhibiting this behavior? If you run with > > -ksp_view_binary, the solver will write out the matrix to a file called > > 'binaryoutput' (and 'binaryoutput.info') when KSPSolve() returns. I > > suppose it could be a "math" reason of the inodes somehow causing an > > incorrect near-null space to be passed to ML, but the interface is not > > supposed to work like this. If you are serious about smoothed aggregation > > for elasticity, you should use MatSetNearNullSpace() to provide the rigid > > body modes. > > > > As a related matter, does -pc_type gamg -pc_gamg_agg_nsmooths 1 > > -mg_levels_ksp_type richardson -mg_levels_pc_type sor converge well? > > > > > > On Mon, Jan 7, 2013 at 12:55 AM, Sanjay Govindjee wrote: > > > >> > >> I am adding ML as an option to our FEA code and was looking for a bit of > >> guidance on > >> options. Generally we solve 1,2, and 3D solids problems (nonlinear > >> elasticity) but > >> we also treat shells, thermal, problems, coupled problems, etc. etc. > >> > >> My basic run line looks like: > >> > >> -@${MPIEXEC} -n $(NPROC) $(MY_PROGRAM) -ksp_type cg -ksp_monitor -pc_type > >> ml -log_summary -ksp_view -options_left > >> > >> but this does not work very well at all with 3D elasticity for example -- > >> in fact it fails to converge after 10K iterations on a rather > >> modest problem. However following ex26 in the ksp tutorials I also tried: > >> > >> -@${MPIEXEC} -n $(NPROC) $(FEAPRUN) -ksp_type cg -ksp_monitor -pc_type ml > >> -mat_no_inode -log_summary -ksp_view -options_left > >> > >> And this worked very very much better -- converged in about 10 > >> iterations. What exactly is -mat_no_inode doing for me? and are there > >> other 'important' options > >> that I should be aware of when using ML. > >> > >> -sanjay > >> > > > > > > From tisaac at ices.utexas.edu Thu Jan 10 19:37:52 2013 From: tisaac at ices.utexas.edu (Tobin Isaac) Date: Thu, 10 Jan 2013 19:37:52 -0600 Subject: [petsc-users] ML options In-Reply-To: References: <50EA7147.3060304@berkeley.edu> <69F9EFAE-5E06-4AFA-9AD6-E54E8192819D@columbia.edu> <4DA1C887-ADD6-4757-8708-8A7393AD60E7@columbia.edu> <6BA180BD-FE1E-4446-8B42-411A25D46AA2@columbia.edu> Message-ID: <20130111013752.GB21767@ices.utexas.edu> On Tue, Jan 08, 2013 at 08:07:36AM -0600, Jed Brown wrote: > On Mon, Jan 7, 2013 at 4:23 PM, Mark F. Adams wrote: > > > '-pc_ml_reuse_interpolation true' does seem to get ML to reuse some mesh > > setup. The setup time goes from .3 to .1 sec on one of my tests from the > > first to the second solve. > > > > Hong: this looks like a way to infer ML's RAP times. I think the second > > solves are just redoing the RAP with this flag, like what GAMG does by > > default. > > > > Huh, it's changing the sparsity of the coarse grid. There's at least one part of grid generation in ml, I forget where exactly, that computed zeros are eliminated. > > $ ./ex15 -da_grid_x 20 -da_grid_y 20 -p 1.2 -ksp_converged_reason -pc_type > ml -pc_ml_reuse_interpolation > Linear solve converged due to CONVERGED_RTOL iterations 6 > [0]PETSC ERROR: --------------------- Error Message > ------------------------------------ > [0]PETSC ERROR: Argument out of range! > [0]PETSC ERROR: New nonzero at (0,8) caused a malloc! 
> [0]PETSC ERROR: > ------------------------------------------------------------------------ > [0]PETSC ERROR: Petsc Development HG revision: > cb29460836d903f43276d687c4ba9f5917bf6651 HG Date: Sun Jan 06 14:49:14 2013 > -0600 > [0]PETSC ERROR: See docs/changes/index.html for recent updates. > [0]PETSC ERROR: See docs/faq.html for hints about trouble shooting. > [0]PETSC ERROR: See docs/index.html for manual pages. > [0]PETSC ERROR: > ------------------------------------------------------------------------ > [0]PETSC ERROR: ./ex15 on a mpich named batura by jed Tue Jan 8 08:06:37 > 2013 > [0]PETSC ERROR: Libraries linked from /home/jed/petsc/mpich/lib > [0]PETSC ERROR: Configure run at Sun Jan 6 15:19:56 2013 > [0]PETSC ERROR: Configure options --download-ams --download-blacs > --download-chaco --download-generator --download-hypre --download-ml > --download-spai --download-spooles --download-sundials --download-superlu > --download-superlu_dist --download-triangle --with-blas-lapack=/usr > --with-c2html --with-cholmod-dir=/usr > --with-clique-dir=/home/jed/usr/clique-mpich > --with-elemental-dir=/home/jed/usr/clique-mpich --with-exodusii-dir=/usr > --with-hdf5-dir=/opt/mpich --with-lgrind > --with-metis-dir=/home/jed/usr/clique-mpich --with-mpi-dir=/opt/mpich > --with-netcdf-dir=/usr --with-openmp > --with-parmetis-dir=/home/jed/usr/clique-mpich --with-pcbddc > --with-pthreadclasses --with-shared-libraries --with-single-library=0 > --with-sowing --with-threadcomm --with-umfpack-dir=/usr --with-x > -PETSC_ARCH=mpich > [0]PETSC ERROR: > ------------------------------------------------------------------------ > [0]PETSC ERROR: MatSetValues_SeqAIJ() line 352 in > /home/jed/petsc/src/mat/impls/aij/seq/aij.c > [0]PETSC ERROR: MatSetValues() line 1083 in > /home/jed/petsc/src/mat/interface/matrix.c > [0]PETSC ERROR: MatWrapML_SeqAIJ() line 348 in > /home/jed/petsc/src/ksp/pc/impls/ml/ml.c > [0]PETSC ERROR: PCSetUp_ML() line 639 in > /home/jed/petsc/src/ksp/pc/impls/ml/ml.c > [0]PETSC ERROR: PCSetUp() line 832 in > /home/jed/petsc/src/ksp/pc/interface/precon.c > [0]PETSC ERROR: KSPSetUp() line 267 in > /home/jed/petsc/src/ksp/ksp/interface/itfunc.c > [0]PETSC ERROR: KSPSolve() line 376 in > /home/jed/petsc/src/ksp/ksp/interface/itfunc.c > [0]PETSC ERROR: SNES_KSPSolve() line 4460 in > /home/jed/petsc/src/snes/interface/snes.c > [0]PETSC ERROR: SNESSolve_NEWTONLS() line 216 in > /home/jed/petsc/src/snes/impls/ls/ls.c > [0]PETSC ERROR: SNESSolve() line 3678 in > /home/jed/petsc/src/snes/interface/snes.c > [0]PETSC ERROR: main() line 221 in src/snes/examples/tutorials/ex15.c > application called MPI_Abort(MPI_COMM_WORLD, 63) - process 0 From s_g at berkeley.edu Thu Jan 10 21:08:29 2013 From: s_g at berkeley.edu (Sanjay Govindjee) Date: Thu, 10 Jan 2013 19:08:29 -0800 Subject: [petsc-users] ML options In-Reply-To: <20130111012639.GA21767@ices.utexas.edu> References: <50EA7147.3060304@berkeley.edu> <69F9EFAE-5E06-4AFA-9AD6-E54E8192819D@columbia.edu> <20130111012639.GA21767@ices.utexas.edu> Message-ID: <50EF822D.3040602@berkeley.edu> It was version 3.3(-p5) A fix was backported by Jed: https://bitbucket.org/petsc/petsc-3.3/commits/93bbec421cbaa0b3efc445fb992fecd53db60b61 On 1/10/13 5:26 PM, Tobin Isaac wrote: > On Mon, Jan 07, 2013 at 09:36:05AM -0600, Jed Brown wrote: >> On Mon, Jan 7, 2013 at 9:09 AM, Mark F. Adams wrote: >> >>> ex56 is a simple 3D elasticity problem. There is a runex56 target that >>> uses GAMG and a runex56_ml. These have a generic parameters and ML and >>> GAMG work well. 
>>> >>> The eigen estimates could be bad. This can cause death. I've found that >>> CG converges to the largest eigenvalue faster than the default GMRES so I >>> use: >>> >>> -gamg_est_ksp_max_it 10 # this is the default, you could increase this to >>> test >>> -gamg_est_ksp_type cg >>> >>> Jed could tell you how to set this for ML. >>> >> ML isn't using eigenvalue estimation (doesn't expose the algorithm). Sanjay >> is using the default smoother (Richardson + SOR) rather than >> chebyshev/pbjacobi. > Which version of petsc is being used? I submitted a bug fix for > Richardson + SOR as a smoother for inodes, but I don't know which > versions of petsc have integrated it. It could be the same bug. > >> >>> >>> >>> On Jan 7, 2013, at 8:49 AM, Jed Brown wrote: >>> >>> Could we get an example matrix exhibiting this behavior? If you run with >>> -ksp_view_binary, the solver will write out the matrix to a file called >>> 'binaryoutput' (and 'binaryoutput.info') when KSPSolve() returns. I >>> suppose it could be a "math" reason of the inodes somehow causing an >>> incorrect near-null space to be passed to ML, but the interface is not >>> supposed to work like this. If you are serious about smoothed aggregation >>> for elasticity, you should use MatSetNearNullSpace() to provide the rigid >>> body modes. >>> >>> As a related matter, does -pc_type gamg -pc_gamg_agg_nsmooths 1 >>> -mg_levels_ksp_type richardson -mg_levels_pc_type sor converge well? >>> >>> >>> On Mon, Jan 7, 2013 at 12:55 AM, Sanjay Govindjee wrote: >>> >>>> I am adding ML as an option to our FEA code and was looking for a bit of >>>> guidance on >>>> options. Generally we solve 1,2, and 3D solids problems (nonlinear >>>> elasticity) but >>>> we also treat shells, thermal, problems, coupled problems, etc. etc. >>>> >>>> My basic run line looks like: >>>> >>>> -@${MPIEXEC} -n $(NPROC) $(MY_PROGRAM) -ksp_type cg -ksp_monitor -pc_type >>>> ml -log_summary -ksp_view -options_left >>>> >>>> but this does not work very well at all with 3D elasticity for example -- >>>> in fact it fails to converge after 10K iterations on a rather >>>> modest problem. However following ex26 in the ksp tutorials I also tried: >>>> >>>> -@${MPIEXEC} -n $(NPROC) $(FEAPRUN) -ksp_type cg -ksp_monitor -pc_type ml >>>> -mat_no_inode -log_summary -ksp_view -options_left >>>> >>>> And this worked very very much better -- converged in about 10 >>>> iterations. What exactly is -mat_no_inode doing for me? and are there >>>> other 'important' options >>>> that I should be aware of when using ML. 
>>>> >>>> -sanjay >>>> >>> >>> -- ----------------------------------------------- Sanjay Govindjee, PhD, PE Professor of Civil Engineering Vice Chair for Academic Affairs 779 Davis Hall Structural Engineering, Mechanics and Materials Department of Civil Engineering University of California Berkeley, CA 94720-1710 Voice: +1 510 642 6060 FAX: +1 510 643 5264 s_g at berkeley.edu http://www.ce.berkeley.edu/~sanjay ----------------------------------------------- New Books: Engineering Mechanics of Deformable Solids: A Presentation with Exercises http://www.oup.com/us/catalog/general/subject/Physics/MaterialsScience/?view=usa&ci=9780199651641 http://ukcatalogue.oup.com/product/9780199651641.do http://amzn.com/0199651647 Engineering Mechanics 3 (Dynamics) http://www.springer.com/materials/mechanics/book/978-3-642-14018-1 http://amzn.com/3642140181 ----------------------------------------------- From dominik at itis.ethz.ch Fri Jan 11 00:26:29 2013 From: dominik at itis.ethz.ch (Dominik Szczerba) Date: Fri, 11 Jan 2013 07:26:29 +0100 Subject: [petsc-users] Petsc 3.3.p5 with HYPRE 2.9.0b In-Reply-To: References: Message-ID: > Dump of file libHYPRE.lib > > File Type: LIBRARY > > Summary > > A198 .bss > 18C .data > 11B28 .debug$S > 135E7 .drectve > CE64 .pdata > F829 .rdata > 2146A6 .text > 12720 .xdata Yes, this looks correctly as a native library. So how did you bring it to compile? Which version of MSVC are you using? I get an error during compilation: cl -MD -nologo -DWIN32 -O2 -I/mpich2-1.3.2p1-win-x86-64/include -DHAVE_CONFIG_H -DMLI_SUPERLU -DMPICH_SKIP_MPICXX -I. -I../../.. -I./.. -I./../../../utilities - I./../../../IJ_mv -I./../../../krylov -I./../../../multivector -I./../../../parc sr_mv -I./../../../parcsr_ls -I./../../../seq_mv -I./../../../distributed_matrix -I./../../../distributed_ls -I./../../../FEI_mv/fei-hypre -I./../../../FEI_mv/f emli -I./../../../FEI_mv/SuperLU -c mli_method.cxx; mv -f mli_method.obj mli_m ethod.o mli_method.cxx C:\Program Files (x86)\Microsoft Visual Studio 10.0\VC\INCLUDE\string.h(142) : e rror C2375: '_stricmp' : redefinition; different linkage C:\Program Files (x86)\Microsoft Visual Studio 10.0\VC\INCLUDE\string.h( 111) : see declaration of '_stricmp' mli_method.cxx(146) : warning C4551: function call missing argument list mli_method.cxx(146) : error C3861: '_stricmp': identifier not found mli_method.cxx(150) : warning C4551: function call missing argument list mli_method.cxx(150) : error C3861: '_stricmp': identifier not found mli_method.cxx(156) : warning C4551: function call missing argument list mli_method.cxx(156) : error C3861: '_stricmp': identifier not found mli_method.cxx(164) : warning C4551: function call missing argument list mli_method.cxx(164) : error C3861: '_stricmp': identifier not found mli_method.cxx(174) : warning C4551: function call missing argument list mli_method.cxx(174) : error C3861: '_stricmp': identifier not found mli_method.cxx(178) : warning C4551: function call missing argument list mli_method.cxx(178) : error C3861: '_stricmp': identifier not found mv: cannot stat `mli_method.obj': No such file or directory ../../../config/Makefile.config:54: recipe for target `mli_method.o' failed make[3]: *** [mli_method.o] Error 1 make[3]: Leaving directory `/cygdrive/c/pack/hypre-2.8.0b/src/FEI_mv/femli/lib' Makefile:85: recipe for target `all' failed make[2]: *** [all] Error 2 make[2]: Leaving directory `/cygdrive/c/pack/hypre-2.8.0b/src/FEI_mv/femli' Making fei-hypre ... 
make[2]: Entering directory `/cygdrive/c/pack/hypre-2.8.0b/src/FEI_mv/fei-hypre' From john.fettig at gmail.com Fri Jan 11 09:20:53 2013 From: john.fettig at gmail.com (John Fettig) Date: Fri, 11 Jan 2013 10:20:53 -0500 Subject: [petsc-users] Petsc 3.3.p5 with HYPRE 2.9.0b In-Reply-To: References: Message-ID: On Fri, Jan 11, 2013 at 1:26 AM, Dominik Szczerba wrote: > > Dump of file libHYPRE.lib > > > > File Type: LIBRARY > > > > Summary > > > > A198 .bss > > 18C .data > > 11B28 .debug$S > > 135E7 .drtve > > CE64 .pdata > > F829 .rdata > > 2146A6 .text > > 12720 .xdata > > Yes, this looks correctly as a native library. So how did you bring it > to compile? Which version of MSVC are you using? I'm using Visual Studio 2010, which contains Microsoft (R) C/C++ Optimizing Compiler Version 16.00.40219.01 for x64. I had to make some modifications to get the distributed tarball to produce a .lib instead of a .a, but I believe otherwise I haven't made any changes. Here's what PETSc's configure produces when it configures hypre: ./configure --prefix=/tmp/petsc-3.3-p3/arch-mswin-c-opt --libdir=/tmp/petsc-3.3-p3/arch-mswin-c-opt/lib CC="win32fe cl" CFLAGS=" -MD -wd4996 -O2 -DWIN32" CXX="win32fe cl -MD -GR -EHsc -O2 -Zm200 -TP " F77="win32fe ifort -MD -O3 -QxW -fpp " --with-MPI-include="/usr/local/mpi-platform/include/64" --with-MPI-lib-dirs="/usr/local/mpi-platform/lib" --with-MPI-libs="pcmpi64.l" HYPRE_LIBSUFFIX=.lib RANLIB="/usr/bin/true" AR="/tmp/petsc-3.3-p3/bin/win32fe/win32fe lib -a" --with-blas-libs= --with-blas-lib-dir= --with-lapack-libs= --with-lapack-lib-dir= --with-blas=yes --with-lapack=yes --with-fmangle-blas=caps-no-underscores --with-fmangle-lapack=caps-no-underscores --without-babel --without-mli --without-fei --without-superlu Note that it is configuring --without-mli, which would explain why I don't encounter the problem you experience below. John I get an error during > compilation: > > cl -MD -nologo -DWIN32 -O2 -I/mpich2-1.3.2p1-win-x86-64/include > -DHAVE_CONFIG_H > -DMLI_SUPERLU -DMPICH_SKIP_MPICXX -I. -I../../.. -I./.. 
> -I./../../../utilities - > I./../../../IJ_mv -I./../../../krylov -I./../../../multivector > -I./../../../parc > sr_mv -I./../../../parcsr_ls -I./../../../seq_mv > -I./../../../distributed_matrix > -I./../../../distributed_ls -I./../../../FEI_mv/fei-hypre > -I./../../../FEI_mv/f > emli -I./../../../FEI_mv/SuperLU -c mli_method.cxx; mv -f mli_method.obj > mli_m > ethod.o > mli_method.cxx > C:\Program Files (x86)\Microsoft Visual Studio > 10.0\VC\INCLUDE\string.h(142) : e > rror C2375: '_stricmp' : redefinition; different linkage > C:\Program Files (x86)\Microsoft Visual Studio > 10.0\VC\INCLUDE\string.h( > 111) : see declaration of '_stricmp' > mli_method.cxx(146) : warning C4551: function call missing argument list > mli_method.cxx(146) : error C3861: '_stricmp': identifier not found > mli_method.cxx(150) : warning C4551: function call missing argument list > mli_method.cxx(150) : error C3861: '_stricmp': identifier not found > mli_method.cxx(156) : warning C4551: function call missing argument list > mli_method.cxx(156) : error C3861: '_stricmp': identifier not found > mli_method.cxx(164) : warning C4551: function call missing argument list > mli_method.cxx(164) : error C3861: '_stricmp': identifier not found > mli_method.cxx(174) : warning C4551: function call missing argument list > mli_method.cxx(174) : error C3861: '_stricmp': identifier not found > mli_method.cxx(178) : warning C4551: function call missing argument list > mli_method.cxx(178) : error C3861: '_stricmp': identifier not found > mv: cannot stat `mli_method.obj': No such file or directory > ../../../config/Makefile.config:54: recipe for target `mli_method.o' failed > make[3]: *** [mli_method.o] Error 1 > make[3]: Leaving directory > `/cygdrive/c/pack/hypre-2.8.0b/src/FEI_mv/femli/lib' > Makefile:85: recipe for target `all' failed > make[2]: *** [all] Error 2 > make[2]: Leaving directory `/cygdrive/c/pack/hypre-2.8.0b/src/FEI_mv/femli' > Making fei-hypre ... > make[2]: Entering directory > `/cygdrive/c/pack/hypre-2.8.0b/src/FEI_mv/fei-hypre' > -------------- next part -------------- An HTML attachment was scrubbed... URL: From dominik at itis.ethz.ch Fri Jan 11 09:44:50 2013 From: dominik at itis.ethz.ch (Dominik Szczerba) Date: Fri, 11 Jan 2013 16:44:50 +0100 Subject: [petsc-users] Petsc 3.3.p5 with HYPRE 2.9.0b In-Reply-To: References: Message-ID: > I'm using Visual Studio 2010, which contains Microsoft (R) C/C++ Optimizing > Compiler Version 16.00.40219.01 for x64. I had to make some modifications > to get the distributed tarball to produce a .lib instead of a .a, but I > believe otherwise I haven't made any changes. 
Here's what PETSc's configure > produces when it configures hypre: > > ./configure --prefix=/tmp/petsc-3.3-p3/arch-mswin-c-opt > --libdir=/tmp/petsc-3.3-p3/arch-mswin-c-opt/lib CC="win32fe cl" CFLAGS=" -MD > -wd4996 -O2 -DWIN32" CXX="win32fe cl -MD -GR -EHsc -O2 -Zm200 -TP " > F77="win32fe ifort -MD -O3 -QxW -fpp " > --with-MPI-include="/usr/local/mpi-platform/include/64" > --with-MPI-lib-dirs="/usr/local/mpi-platform/lib" > --with-MPI-libs="pcmpi64.l" HYPRE_LIBSUFFIX=.lib RANLIB="/usr/bin/true" > AR="/tmp/petsc-3.3-p3/bin/win32fe/win32fe lib -a" --with-blas-libs= > --with-blas-lib-dir= --with-lapack-libs= --with-lapack-lib-dir= > --with-blas=yes --with-lapack=yes --with-fmangle-blas=caps-no-underscores > --with-fmangle-lapack=caps-no-underscores --without-babel --without-mli > --without-fei --without-superlu > > Note that it is configuring --without-mli, which would explain why I don't > encounter the problem you experience below. > > John
Yes, indeed I usually had to make tweaks to get a Windows library at the end. Thanks for the tip about mli, I will look into it. But since 2.9.0b built just fine with no tricks, I will first test whether it works. Many thanks again for your mail, Dominik
From w_ang_temp at 163.com Fri Jan 11 11:51:43 2013 From: w_ang_temp at 163.com (w_ang_temp) Date: Sat, 12 Jan 2013 01:51:43 +0800 (CST) Subject: [petsc-users] Determine the positive definiteness In-Reply-To: <8671C38B-AA8A-4153-A8B7-05BD71B6FF02@mcs.anl.gov> References: <7d751366.27a62.13c2558d93c.Coremail.w_ang_temp@163.com> <8671C38B-AA8A-4153-A8B7-05BD71B6FF02@mcs.anl.gov> Message-ID: <49d07aae.f5ee.13c2abd1fcc.Coremail.w_ang_temp@163.com>
Yes, it is a necessary but not a sufficient condition. At 2013-01-11 03:56:25,"Barry Smith" wrote: > > If cg says indefinite then it is indefinite > if cg runs and does not say it is indefinite it may still be indefinite (though probably not all that likely). > > Barry > >On Jan 10, 2013, at 11:46 AM, Matthew Knepley wrote: > >> On Thu, Jan 10, 2013 at 10:44 AM, w_ang_temp wrote: >> Hello, >> >> I want to determine the positive definiteness. So I want to know >> >> if it is a right way. >> >> I use "-ksp_type cg -pc_type none". You know, CG does not work for >> >> indefinite matrices. I get "Linear solve did not converge due to >> >> DIVERGED_INDEFINITE_MAT". From the information, I think it is indefinite. >> >> Is it a right way? >> >> >> Its not full proof, but it could be a diagnostic. >> >> Matt >> >> Thanks. >> >> Jim >> >> >> >> >> >> >> -- >> What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. >> -- Norbert Wiener >
-------------- next part -------------- An HTML attachment was scrubbed... URL:
From abarua at iit.edu Sat Jan 12 09:40:42 2013 From: abarua at iit.edu (amlan barua) Date: Sat, 12 Jan 2013 09:40:42 -0600 Subject: [petsc-users] vecview command Message-ID: Hi, How to set the precision of the output in the vecview command? I want to see more digits. Amlan
-------------- next part -------------- An HTML attachment was scrubbed... URL:
From knepley at gmail.com Sat Jan 12 09:42:21 2013 From: knepley at gmail.com (Matthew Knepley) Date: Sat, 12 Jan 2013 09:42:21 -0600 Subject: [petsc-users] vecview command In-Reply-To: References: Message-ID: On Sat, Jan 12, 2013 at 9:40 AM, amlan barua wrote: > Hi, > How to set the precision of the output in the vecview command? I want to > see more digits. > That is fixed in VecView.
From stali at geology.wisc.edu Mon Jan 14 15:27:40 2013
From: stali at geology.wisc.edu (Tabrez Ali)
Date: Mon, 14 Jan 2013 15:27:40 -0600
Subject: [petsc-users] submatrix times subvector
Message-ID: <50F4784C.4010403@geology.wisc.edu>

Hello

I am solving a system of equations of the form:

|A C| |u1| = |f1|
|C'B| |u2|   |f2|

After each solve, I need to perform B*f2 before updating f. Should I use
MatGetSubMatrix/VecGetSubVector followed by MatMult or is there something
simpler.

Thanks in advance.

T

--
No one trusts a model except the one who wrote it; Everyone trusts an
observation except the one who made it- Harlow Shapley

From jedbrown at mcs.anl.gov Mon Jan 14 15:38:53 2013
From: jedbrown at mcs.anl.gov (Jed Brown)
Date: Mon, 14 Jan 2013 15:38:53 -0600
Subject: [petsc-users] submatrix times subvector
In-Reply-To: <50F4784C.4010403@geology.wisc.edu>
References: <50F4784C.4010403@geology.wisc.edu>
Message-ID: 

On Mon, Jan 14, 2013 at 3:27 PM, Tabrez Ali wrote:

> Hello
>
> I am solving a system of equations of the form:
>
> |A C| |u1| = |f1|
> |C'B| |u2|   |f2|
>
> After each solve, I need to perform B*f2 before updating f. Should I use
> MatGetSubMatrix/VecGetSubVector followed by MatMult or is there
> something simpler.
>
Yes, or let PCFIELDSPLIT do all the block solver stuff for you.
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 
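For what it's worth, the MatGetSubMatrix()/VecGetSubVector() route asked about above looks roughly like the following sketch (the index set is2 selecting the second block is assumed to exist already, the names are placeholders, and error checking is abbreviated):

#include <petscmat.h>

/* Sketch: form y2 = B*f2, where B is the (2,2) block of the assembled matrix K
   and f2 is the corresponding block of the vector f. */
PetscErrorCode ApplyBlock22(Mat K, Vec f, IS is2, Vec *y2)
{
  Mat            B;
  Vec            f2;
  PetscErrorCode ierr;

  ierr = MatGetSubMatrix(K, is2, is2, MAT_INITIAL_MATRIX, &B);CHKERRQ(ierr);
  ierr = VecGetSubVector(f, is2, &f2);CHKERRQ(ierr);
  ierr = VecDuplicate(f2, y2);CHKERRQ(ierr);
  ierr = MatMult(B, f2, *y2);CHKERRQ(ierr);          /* y2 = B*f2 */
  ierr = VecRestoreSubVector(f, is2, &f2);CHKERRQ(ierr);
  ierr = MatDestroy(&B);CHKERRQ(ierr);
  return 0;
}

If the blocks ultimately feed a solver, the same index sets can instead be handed to PCFIELDSPLIT (via PCFieldSplitSetIS()), which is the simpler route Jed points to above.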
From fengwang85 at gmail.com Mon Jan 14 20:44:20 2013
From: fengwang85 at gmail.com (Wang Feng)
Date: Tue, 15 Jan 2013 10:44:20 +0800
Subject: [petsc-users] Petsc crashes with intel compiler
Message-ID: 

That's quite a similar problem to mine, though in my case it happens when I
configure PETSc with --CFLAGS=-O3. If I don't use that CFLAGS, it works well.

--
Feng Wang
School of Physics and Optoelectronic Technology, Dalian University of Technology
Princeton Plasma Physics Lab.
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From balay at mcs.anl.gov Tue Jan 15 09:39:06 2013
From: balay at mcs.anl.gov (Satish Balay)
Date: Tue, 15 Jan 2013 09:39:06 -0600 (CST)
Subject: [petsc-users] Petsc crashes with intel compiler
In-Reply-To: 
References: 
Message-ID: 

Some versions of the Intel compiler are buggy when IPA optimization is
enabled [which -O3 can do]. They don't return proper error codes [when
there are errors] - thus confusing/breaking configure.

satish

On Tue, 15 Jan 2013, Wang Feng wrote:

> That's quite a similar problem to mine, though in my case it happens when I
> configure PETSc with --CFLAGS=-O3. If I don't use that CFLAGS, it works well.
>

From w_ang_temp at 163.com Tue Jan 15 11:41:14 2013
From: w_ang_temp at 163.com (w_ang_temp)
Date: Wed, 16 Jan 2013 01:41:14 +0800 (CST)
Subject: [petsc-users] DIVERGED_DTOL
In-Reply-To: 
References: <18a9408d.272c4.13c1a60725e.Coremail.w_ang_temp@163.com>
 <3a0d2766.3b.13c1b2272aa.Coremail.w_ang_temp@163.com>
 <2cb6e1d.179.13c1b48bd19.Coremail.w_ang_temp@163.com>
Message-ID: <2d12f2c6.11c04.13c3f4cf9e6.Coremail.w_ang_temp@163.com>

Hello,
    I am not sure about it. The following is the information under
-ksp_monitor_true_residual.
    So can you tell me how the DIVERGED_DTOL occurs (||rk|| > dtol*||b||)?
    PS: dtol uses the default value; normb: 67139.2122204160.
    Thanks.
Jim 0 KSP preconditioned resid norm 1.145582415879e+00 true resid norm 6.713921222042e+04 ||r(i)||/||b|| 1.000000000000e+00 1 KSP preconditioned resid norm 1.668442371722e-01 true resid norm 9.315411570816e+03 ||r(i)||/||b|| 1.387477044001e-01 2 KSP preconditioned resid norm 7.332643142215e-02 true resid norm 4.580901760869e+03 ||r(i)||/||b|| 6.822990037223e-02 3 KSP preconditioned resid norm 4.350407110218e-02 true resid norm 2.969077057634e+03 ||r(i)||/||b|| 4.422269727990e-02 4 KSP preconditioned resid norm 2.967861353379e-02 true resid norm 2.171406152803e+03 ||r(i)||/||b|| 3.234184734957e-02 5 KSP preconditioned resid norm 2.194027213667e-02 true resid norm 1.697287121259e+03 ||r(i)||/||b|| 2.528011671759e-02 6 KSP preconditioned resid norm 1.709062414900e-02 true resid norm 1.385369331879e+03 ||r(i)||/||b|| 2.063428041621e-02 7 KSP preconditioned resid norm 1.381432438160e-02 true resid norm 1.166961199876e+03 ||r(i)||/||b|| 1.738121674775e-02 8 KSP preconditioned resid norm 1.147659811931e-02 true resid norm 1.004464430978e+03 ||r(i)||/||b|| 1.496092071620e-02 9 KSP preconditioned resid norm 9.735929665267e-03 true resid norm 8.766474922321e+02 ||r(i)||/||b|| 1.305716083403e-02 10 KSP preconditioned resid norm 8.401139127073e-03 true resid norm 7.735891175651e+02 ||r(i)||/||b|| 1.152216554203e-02 11 KSP preconditioned resid norm 7.365394582494e-03 true resid norm 6.915224037997e+02 ||r(i)||/||b|| 1.029982898115e-02 12 KSP preconditioned resid norm 6.581540116011e-03 true resid norm 6.301038366131e+02 ||r(i)||/||b|| 9.385034702887e-03 13 KSP preconditioned resid norm 6.074644880442e-03 true resid norm 5.941485646876e+02 ||r(i)||/||b|| 8.849501580939e-03 14 KSP preconditioned resid norm 6.000465973365e-03 true resid norm 6.039460304962e+02 ||r(i)||/||b|| 8.995429206310e-03 15 KSP preconditioned resid norm 6.700641680203e-03 true resid norm 7.024109463517e+02 ||r(i)||/||b|| 1.046200756788e-02 16 KSP preconditioned resid norm 8.572956854817e-03 true resid norm 9.345547794474e+02 ||r(i)||/||b|| 1.391965661407e-02 17 KSP preconditioned resid norm 1.171098947054e-02 true resid norm 1.308106003451e+03 ||r(i)||/||b|| 1.948348752078e-02 18 KSP preconditioned resid norm 1.553731077786e-02 true resid norm 1.756744729914e+03 ||r(i)||/||b|| 2.616570364494e-02 19 KSP preconditioned resid norm 1.854806156796e-02 true resid norm 2.108867138509e+03 ||r(i)||/||b|| 3.141036465524e-02 20 KSP preconditioned resid norm 1.882735116093e-02 true resid norm 2.144875683978e+03 ||r(i)||/||b|| 3.194669125603e-02 21 KSP preconditioned resid norm 1.581371157031e-02 true resid norm 1.800227274907e+03 ||r(i)||/||b|| 2.681335117542e-02 22 KSP preconditioned resid norm 1.116066908962e-02 true resid norm 1.264488331079e+03 ||r(i)||/||b|| 1.883382734561e-02 23 KSP preconditioned resid norm 6.935989655893e-03 true resid norm 7.781370409044e+02 ||r(i)||/||b|| 1.158990424776e-02 24 KSP preconditioned resid norm 4.056542969364e-03 true resid norm 4.491311011610e+02 ||r(i)||/||b|| 6.689549762462e-03 25 KSP preconditioned resid norm 2.459663949493e-03 true resid norm 2.758154676638e+02 ||r(i)||/||b|| 4.108112957273e-03 26 KSP preconditioned resid norm 1.781038036990e-03 true resid norm 2.198771325293e+02 ||r(i)||/||b|| 3.274943587474e-03 27 KSP preconditioned resid norm 1.686781020289e-03 true resid norm 2.433346354927e+02 ||r(i)||/||b|| 3.624329619685e-03 28 KSP preconditioned resid norm 2.076540169200e-03 true resid norm 3.382587656684e+02 ||r(i)||/||b|| 5.038170012450e-03 29 KSP preconditioned resid norm 2.912174325946e-03 
true resid norm 4.994106293188e+02 ||r(i)||/||b|| 7.438434452868e-03 30 KSP preconditioned resid norm 3.668418500885e-03 true resid norm 6.368223901851e+02 ||r(i)||/||b|| 9.485103699079e-03 31 KSP preconditioned resid norm 3.471520993065e-03 true resid norm 6.005845512437e+02 ||r(i)||/||b|| 8.945361903740e-03 32 KSP preconditioned resid norm 2.400628046233e-03 true resid norm 4.087910715821e+02 ||r(i)||/||b|| 6.088708193955e-03 33 KSP preconditioned resid norm 1.392978496225e-03 true resid norm 2.329031165345e+02 ||r(i)||/||b|| 3.468958136862e-03 34 KSP preconditioned resid norm 1.013807019427e-03 true resid norm 1.877543843971e+02 ||r(i)||/||b|| 2.796493705954e-03 35 KSP preconditioned resid norm 1.261815551662e-03 true resid norm 2.614230430124e+02 ||r(i)||/||b|| 3.893746059369e-03 36 KSP preconditioned resid norm 1.853746656434e-03 true resid norm 3.922570169729e+02 ||r(i)||/||b|| 5.842442948022e-03 37 KSP preconditioned resid norm 2.657914774769e-03 true resid norm 5.613289970100e+02 ||r(i)||/||b|| 8.360672972556e-03 38 KSP preconditioned resid norm 3.436994681718e-03 true resid norm 7.215058669001e+02 ||r(i)||/||b|| 1.074641544097e-02 39 KSP preconditioned resid norm 3.614431832954e-03 true resid norm 7.538884668185e+02 ||r(i)||/||b|| 1.122873566558e-02 40 KSP preconditioned resid norm 2.766407868570e-03 true resid norm 5.733943971401e+02 ||r(i)||/||b|| 8.540380176902e-03 41 KSP preconditioned resid norm 1.467719776874e-03 true resid norm 3.023387886260e+02 ||r(i)||/||b|| 4.503162587512e-03 42 KSP preconditioned resid norm 5.524746404481e-04 true resid norm 1.130986298210e+02 ||r(i)||/||b|| 1.684539125209e-03 43 KSP preconditioned resid norm 1.546370264138e-04 true resid norm 3.145301955484e+01 ||r(i)||/||b|| 4.684746590648e-04 44 KSP preconditioned resid norm 3.362252620216e-05 true resid norm 6.790570379860e+00 ||r(i)||/||b|| 1.011416451770e-04 45 KSP preconditioned resid norm 6.273756038479e-06 true resid norm 1.247817148548e+00 ||r(i)||/||b|| 1.858551965805e-05 46 KSP preconditioned resid norm 1.802876108688e-06 true resid norm 3.246014610548e-01 ||r(i)||/||b|| 4.834752305242e-06 47 KSP preconditioned resid norm 1.955901149713e-06 true resid norm 3.605076707485e-01 ||r(i)||/||b|| 5.369554673429e-06 48 KSP preconditioned resid norm 2.146815227031e-06 true resid norm 4.034225428527e-01 ||r(i)||/||b|| 6.008747042314e-06 49 KSP preconditioned resid norm 7.835209417034e-07 true resid norm 1.092344842431e-01 ||r(i)||/||b|| 1.626984896464e-06 50 KSP preconditioned resid norm 5.949209762592e-07 true resid norm 6.093308467935e-02 ||r(i)||/||b|| 9.075632951920e-07 51 KSP preconditioned resid norm 4.843097958216e-07 true resid norm 3.907306790091e-02 ||r(i)||/||b|| 5.819709020808e-07 52 KSP preconditioned resid norm 4.015928439388e-07 true resid norm 2.047742276673e-02 ||r(i)||/||b|| 3.049994494946e-07 53 KSP preconditioned resid norm 3.831231784925e-07 true resid norm 3.214301153383e-02 ||r(i)||/||b|| 4.787516932475e-07 54 KSP preconditioned resid norm 3.936949175538e-07 true resid norm 5.284803093534e-02 ||r(i)||/||b|| 7.871410638815e-07 55 KSP preconditioned resid norm 3.162241186775e-07 true resid norm 3.587463976088e-02 ||r(i)||/||b|| 5.343321521721e-07 56 KSP preconditioned resid norm 3.171835200858e-07 true resid norm 4.396913730536e-02 ||r(i)||/||b|| 6.548950434660e-07 57 KSP preconditioned resid norm 2.173392548323e-07 true resid norm 1.845471373372e-02 ||r(i)||/||b|| 2.748723603299e-07 58 KSP preconditioned resid norm 1.842063440560e-07 true resid norm 1.135583609373e-02 
||r(i)||/||b|| 1.691386556109e-07 59 KSP preconditioned resid norm 1.562305233760e-07 true resid norm 7.915226925674e-03 ||r(i)||/||b|| 1.178927584031e-07 60 KSP preconditioned resid norm 1.395168870151e-07 true resid norm 7.381518126662e-03 ||r(i)||/||b|| 1.099434724141e-07 61 KSP preconditioned resid norm 1.323458776323e-07 true resid norm 1.236767010915e-02 ||r(i)||/||b|| 1.842093420541e-07 62 KSP preconditioned resid norm 1.210573318494e-07 true resid norm 1.246351751392e-02 ||r(i)||/||b|| 1.856369340916e-07 63 KSP preconditioned resid norm 1.518242462597e-07 true resid norm 2.453397483619e-02 ||r(i)||/||b|| 3.654194624097e-07 64 KSP preconditioned resid norm 1.723839793842e-07 true resid norm 3.078519652295e-02 ||r(i)||/||b|| 4.585278186149e-07 65 KSP preconditioned resid norm 6.991080261238e-07 true resid norm 1.418507987285e-01 ||r(i)||/||b|| 2.112786165301e-06 66 KSP preconditioned resid norm 1.030788328966e-05 true resid norm 2.103063209272e+00 ||r(i)||/||b|| 3.132391846314e-05 67 KSP preconditioned resid norm 1.229992774455e-07 true resid norm 2.097942598994e-02 ||r(i)||/||b|| 3.124764991443e-07 68 KSP preconditioned resid norm 6.581561333514e-08 true resid norm 3.405626207360e-03 ||r(i)||/||b|| 5.072484610304e-08 69 KSP preconditioned resid norm 6.343808367654e-08 true resid norm 2.730918192381e-03 ||r(i)||/||b|| 4.067545778488e-08 70 KSP preconditioned resid norm 6.319435025925e-08 true resid norm 2.718071769813e-03 ||r(i)||/||b|| 4.048411769995e-08 71 KSP preconditioned resid norm 6.446816119437e-08 true resid norm 2.784759189575e-03 ||r(i)||/||b|| 4.147738851080e-08 72 KSP preconditioned resid norm 6.716418150551e-08 true resid norm 2.909287512040e-03 ||r(i)||/||b|| 4.333216634250e-08 73 KSP preconditioned resid norm 7.131730758571e-08 true resid norm 3.126610397256e-03 ||r(i)||/||b|| 4.656906588345e-08 74 KSP preconditioned resid norm 7.792929027648e-08 true resid norm 3.511383015577e-03 ||r(i)||/||b|| 5.230003301273e-08 75 KSP preconditioned resid norm 9.337691950640e-08 true resid norm 6.318359239434e-03 ||r(i)||/||b|| 9.410833148728e-08 76 KSP preconditioned resid norm 1.479957854534e-07 true resid norm 1.818912212739e-02 ||r(i)||/||b|| 2.709165259144e-07 77 KSP preconditioned resid norm 6.682161192641e-07 true resid norm 1.212127978501e-01 ||r(i)||/||b|| 1.805394997071e-06 78 KSP preconditioned resid norm 5.663368166860e-05 true resid norm 1.075143624402e+01 ||r(i)||/||b|| 1.601364670279e-04 79 KSP preconditioned resid norm 8.776500145772e-07 true resid norm 1.663312859850e-01 ||r(i)||/||b|| 2.477408960935e-06 80 KSP preconditioned resid norm 1.630840126402e-07 true resid norm 2.931897201954e-02 ||r(i)||/||b|| 4.366892468635e-07 81 KSP preconditioned resid norm 5.046755275791e-08 true resid norm 7.225871257706e-03 ||r(i)||/||b|| 1.076252017075e-07 82 KSP preconditioned resid norm 2.756629667916e-08 true resid norm 2.674186926924e-03 ||r(i)||/||b|| 3.983047817339e-08 83 KSP preconditioned resid norm 1.948064945258e-08 true resid norm 1.002338825836e-03 ||r(i)||/||b|| 1.492926104859e-08 84 KSP preconditioned resid norm 1.686033880678e-08 true resid norm 7.336515948282e-04 ||r(i)||/||b|| 1.092731908173e-08 85 KSP preconditioned resid norm 1.627099807112e-08 true resid norm 7.004183219758e-04 ||r(i)||/||b|| 1.043232857241e-08 86 KSP preconditioned resid norm 1.815047764168e-08 true resid norm 8.235488547093e-04 ||r(i)||/||b|| 1.226628712898e-08 87 KSP preconditioned resid norm 2.850439847332e-08 true resid norm 2.214968823226e-03 ||r(i)||/||b|| 3.299068830231e-08 88 KSP 
preconditioned resid norm 1.519490344670e-07 true resid norm 1.992168037360e-02 ||r(i)||/||b|| 2.967219857778e-07 89 KSP preconditioned resid norm 6.189262765590e-07 true resid norm 1.040718853983e-01 ||r(i)||/||b|| 1.550090952164e-06 90 KSP preconditioned resid norm 5.042710252729e-08 true resid norm 8.652148341175e-03 ||r(i)||/||b|| 1.288687795855e-07 91 KSP preconditioned resid norm 2.122997506948e-08 true resid norm 3.738624152724e-03 ||r(i)||/||b|| 5.568465921898e-08 92 KSP preconditioned resid norm 1.416290557086e-08 true resid norm 2.531358077791e-03 ||r(i)||/||b|| 3.770312450912e-08 93 KSP preconditioned resid norm 1.113840681888e-08 true resid norm 2.004079623183e-03 ||r(i)||/||b|| 2.984961480637e-08 94 KSP preconditioned resid norm 5.052832639519e-08 true resid norm 9.596273544776e-03 ||r(i)||/||b|| 1.429309821699e-07 95 KSP preconditioned resid norm 3.103683240415e-09 true resid norm 1.340636499506e-04 ||r(i)||/||b|| 1.996801057338e-09 96 KSP preconditioned resid norm 3.429857119447e-09 true resid norm 2.674501337153e-04 ||r(i)||/||b|| 3.983516113316e-09 97 KSP preconditioned resid norm 5.170276146377e-09 true resid norm 6.719440839436e-04 ||r(i)||/||b|| 1.000822115305e-08 98 KSP preconditioned resid norm 1.148497437573e-08 true resid norm 1.829675810596e-03 ||r(i)||/||b|| 2.725197019872e-08 99 KSP preconditioned resid norm 6.490313757291e-08 true resid norm 1.133744740971e-02 ||r(i)||/||b|| 1.688647667252e-07 100 KSP preconditioned resid norm 1.033700268099e-06 true resid norm 1.744999476835e-01 ||r(i)||/||b|| 2.599076484702e-06 101 KSP preconditioned resid norm 2.023863503132e-08 true resid norm 3.274357033310e-03 ||r(i)||/||b|| 4.876966715905e-08 102 KSP preconditioned resid norm 4.803328753117e-09 true resid norm 6.927971746924e-04 ||r(i)||/||b|| 1.031881596135e-08 103 KSP preconditioned resid norm 1.962342036005e-09 true resid norm 1.445569202511e-04 ||r(i)||/||b|| 2.153092290934e-09 104 KSP preconditioned resid norm 1.498768208313e-09 true resid norm 6.802150842044e-05 ||r(i)||/||b|| 1.013141295092e-09 105 KSP preconditioned resid norm 1.395549026495e-09 true resid norm 6.005884495747e-05 ||r(i)||/||b|| 8.945419967142e-10 106 KSP preconditioned resid norm 1.457410891473e-09 true resid norm 6.410049873543e-05 ||r(i)||/||b|| 9.547401081351e-10 107 KSP preconditioned resid norm 1.780342425895e-09 true resid norm 1.209740535326e-04 ||r(i)||/||b|| 1.801839037603e-09 108 KSP preconditioned resid norm 3.312859958719e-09 true resid norm 4.525233578504e-04 ||r(i)||/||b|| 6.740075477275e-09 109 KSP preconditioned resid norm 9.207121903791e-09 true resid norm 1.584434833387e-03 ||r(i)||/||b|| 2.359924671420e-08 110 KSP preconditioned resid norm 8.739767895565e-08 true resid norm 1.687246640666e-02 ||r(i)||/||b|| 2.513056952659e-07 111 KSP preconditioned resid norm 1.156065363139e-06 true resid norm 2.263019811534e-01 ||r(i)||/||b|| 3.370638017177e-06 112 KSP preconditioned resid norm 7.306873252299e-08 true resid norm 1.428339270295e-02 ||r(i)||/||b|| 2.127429296617e-07 113 KSP preconditioned resid norm 3.669015190800e-08 true resid norm 7.144007229935e-03 ||r(i)||/||b|| 1.064058840381e-07 114 KSP preconditioned resid norm 2.027376209045e-08 true resid norm 3.927954323325e-03 ||r(i)||/||b|| 5.850462335527e-08 115 KSP preconditioned resid norm 1.150536971084e-08 true resid norm 2.217667544765e-03 ||r(i)||/||b|| 3.303088420943e-08 116 KSP preconditioned resid norm 4.817493968709e-09 true resid norm 9.229121300958e-04 ||r(i)||/||b|| 1.374624603973e-08 117 KSP preconditioned resid norm 
1.318062597149e-09 true resid norm 2.502759879830e-04 ||r(i)||/||b|| 3.727717077784e-09 118 KSP preconditioned resid norm 2.565229681360e-10 true resid norm 4.554155253246e-05 ||r(i)||/||b|| 6.783152650489e-10 119 KSP preconditioned resid norm 1.220621552659e-10 true resid norm 1.887267706055e-05 ||r(i)||/||b|| 2.810976839972e-10 120 KSP preconditioned resid norm 3.007389263037e-10 true resid norm 5.965755276433e-05 ||r(i)||/||b|| 8.885649800071e-10 121 KSP preconditioned resid norm 7.687988210487e-10 true resid norm 1.555121445636e-04 ||r(i)||/||b|| 2.316264064181e-09 122 KSP preconditioned resid norm 3.818221035410e-09 true resid norm 7.723490689110e-04 ||r(i)||/||b|| 1.150369572963e-08 123 KSP preconditioned resid norm 1.126628704389e-07 true resid norm 2.277477391889e-02 ||r(i)||/||b|| 3.392171752645e-07 124 KSP preconditioned resid norm 2.091915962841e-09 true resid norm 4.232853495368e-04 ||r(i)||/||b|| 6.304592138305e-09 125 KSP preconditioned resid norm 6.498729874824e-10 true resid norm 1.317063018752e-04 ||r(i)||/||b|| 1.961689711860e-09 126 KSP preconditioned resid norm 2.326148226133e-10 true resid norm 4.710921144409e-05 ||r(i)||/||b|| 7.016646440449e-10 127 KSP preconditioned resid norm 6.419933487001e-11 true resid norm 1.192761364491e-05 ||r(i)||/||b|| 1.776549537959e-10 128 KSP preconditioned resid norm 3.990889550665e-11 true resid norm 4.736495989458e-06 ||r(i)||/||b|| 7.054738703082e-11 129 KSP preconditioned resid norm 1.449761398531e-10 true resid norm 2.794402799777e-05 ||r(i)||/||b|| 4.162102454528e-10 130 KSP preconditioned resid norm 1.621600598079e-10 true resid norm 3.151012544337e-05 ||r(i)||/||b|| 4.693252184718e-10 131 KSP preconditioned resid norm 8.656752727415e-11 true resid norm 1.541111379317e-05 ||r(i)||/||b|| 2.295396875164e-10 132 KSP preconditioned resid norm 5.162222539791e-11 true resid norm 7.009942258504e-06 ||r(i)||/||b|| 1.044090632981e-10 133 KSP preconditioned resid norm 3.781167023504e-11 true resid norm 3.012802382737e-06 ||r(i)||/||b|| 4.487396088066e-11 134 KSP preconditioned resid norm 3.198899307844e-11 true resid norm 1.735828409010e-06 ||r(i)||/||b|| 2.585416705979e-11 135 KSP preconditioned resid norm 2.800813815582e-11 true resid norm 1.267385630623e-06 ||r(i)||/||b|| 1.887698095805e-11 136 KSP preconditioned resid norm 2.478323040026e-11 true resid norm 1.093735119127e-06 ||r(i)||/||b|| 1.629055633743e-11 137 KSP preconditioned resid norm 2.094802859126e-11 true resid norm 9.461872584172e-07 ||r(i)||/||b|| 1.409291570641e-11 138 KSP preconditioned resid norm 1.475156452329e-11 true resid norm 1.291911076055e-06 ||r(i)||/||b|| 1.924227337987e-11 139 KSP preconditioned resid norm 2.079412078050e-10 true resid norm 3.438188849064e-05 ||r(i)||/||b|| 5.120984794664e-10 140 KSP preconditioned resid norm 1.776813945343e-10 true resid norm 1.130664701479e-05 ||r(i)||/||b|| 1.684060125352e-10 141 KSP preconditioned resid norm 9.791545045192e-11 true resid norm 4.353141785980e-06 ||r(i)||/||b|| 6.483754637586e-11 142 KSP preconditioned resid norm 9.022678471638e-11 true resid norm 3.899337928100e-06 ||r(i)||/||b|| 5.807839858618e-11 143 KSP preconditioned resid norm 9.321246955436e-11 true resid norm 4.122786827556e-06 ||r(i)||/||b|| 6.140654159035e-11 144 KSP preconditioned resid norm 1.010522679933e-10 true resid norm 6.735251544406e-06 ||r(i)||/||b|| 1.003177028991e-10 145 KSP preconditioned resid norm 1.190099575669e-10 true resid norm 1.412264052735e-05 ||r(i)||/||b|| 2.103486183452e-10 146 KSP preconditioned resid norm 
1.559766947287e-10 true resid norm 2.516382044354e-05 ||r(i)||/||b|| 3.748006509360e-10 147 KSP preconditioned resid norm 1.549935482860e-10 true resid norm 2.591211842556e-05 ||r(i)||/||b|| 3.859461195417e-10 148 KSP preconditioned resid norm 1.144296191694e-10 true resid norm 1.784161868459e-05 ||r(i)||/||b|| 2.657406617465e-10 149 KSP preconditioned resid norm 6.371964918270e-11 true resid norm 6.643697384486e-06 ||r(i)||/||b|| 9.895405627750e-11 150 KSP preconditioned resid norm 4.845064670450e-11 true resid norm 2.593417735211e-06 ||r(i)||/||b|| 3.862746745817e-11 151 KSP preconditioned resid norm 4.470906007756e-11 true resid norm 1.934674135353e-06 ||r(i)||/||b|| 2.881585993297e-11 152 KSP preconditioned resid norm 5.319998529107e-11 true resid norm 3.264222737453e-06 ||r(i)||/||b|| 4.861872264358e-11 153 KSP preconditioned resid norm 2.352067006596e-10 true resid norm 2.492527941922e-05 ||r(i)||/||b|| 3.712477194012e-10 154 KSP preconditioned resid norm 3.473141928722e-09 true resid norm 3.763914613551e-04 ||r(i)||/||b|| 5.606134610567e-09 155 KSP preconditioned resid norm 1.833222104680e-08 true resid norm 1.977164847191e-03 ||r(i)||/||b|| 2.944873467832e-08 156 KSP preconditioned resid norm 1.545363805860e-09 true resid norm 1.660725936026e-04 ||r(i)||/||b|| 2.473555886497e-09 157 KSP preconditioned resid norm 5.975372365436e-10 true resid norm 6.408112870416e-05 ||r(i)||/||b|| 9.544516026460e-10 158 KSP preconditioned resid norm 2.271151314350e-10 true resid norm 2.430517352175e-05 ||r(i)||/||b|| 3.620115982588e-10 159 KSP preconditioned resid norm 9.284094104344e-11 true resid norm 9.904055955270e-06 ||r(i)||/||b|| 1.475152243782e-10 160 KSP preconditioned resid norm 2.941687346568e-11 true resid norm 3.088975114702e-06 ||r(i)||/||b|| 4.600850996823e-11 161 KSP preconditioned resid norm 6.416123537136e-12 true resid norm 6.096752141640e-07 ||r(i)||/||b|| 9.080762106091e-12 162 KSP preconditioned resid norm 4.077333656404e-12 true resid norm 5.049107356604e-07 ||r(i)||/||b|| 7.520355377462e-12 163 KSP preconditioned resid norm 5.087791292533e-12 true resid norm 7.009318964057e-07 ||r(i)||/||b|| 1.043997796853e-11 164 KSP preconditioned resid norm 7.749563397094e-12 true resid norm 9.156004927546e-07 ||r(i)||/||b|| 1.363734340148e-11 165 KSP preconditioned resid norm 1.052344064381e-11 true resid norm 1.169820734369e-06 ||r(i)||/||b|| 1.742380787145e-11 166 KSP preconditioned resid norm 1.104395410109e-11 true resid norm 1.200080231537e-06 ||r(i)||/||b|| 1.787450570014e-11 167 KSP preconditioned resid norm 1.007023895764e-11 true resid norm 1.084060757795e-06 ||r(i)||/||b|| 1.614646228252e-11 168 KSP preconditioned resid norm 8.344424405547e-12 true resid norm 8.950906940095e-07 ||r(i)||/||b|| 1.333186173038e-11 169 KSP preconditioned resid norm 5.763569089910e-12 true resid norm 6.172251688691e-07 ||r(i)||/||b|| 9.193214344588e-12 170 KSP preconditioned resid norm 3.424168629795e-12 true resid norm 3.659829134192e-07 ||r(i)||/||b|| 5.451105267927e-12 171 KSP preconditioned resid norm 9.348832909193e-13 true resid norm 9.664575613677e-08 ||r(i)||/||b|| 1.439483022522e-12 172 KSP preconditioned resid norm 3.482816687679e-13 true resid norm 2.647798359576e-08 ||r(i)||/||b|| 3.943743562083e-13 173 KSP preconditioned resid norm 1.008488994165e-11 true resid norm 1.080536439819e-06 ||r(i)||/||b|| 1.609396959070e-11 174 KSP preconditioned resid norm 9.378977735316e-11 true resid norm 1.005208011236e-05 ||r(i)||/||b|| 1.497199591702e-10 175 KSP preconditioned resid norm 
8.737224485597e-10 true resid norm 9.364289005956e-05 ||r(i)||/||b|| 1.394757057204e-09 176 KSP preconditioned resid norm 1.536767823776e-08 true resid norm 1.647058907589e-03 ||r(i)||/||b|| 2.453199632700e-08 177 KSP preconditioned resid norm 1.839952929174e-07 true resid norm 1.972001883268e-02 ||r(i)||/||b|| 2.937183529640e-07 178 KSP preconditioned resid norm 4.438575380976e-08 true resid norm 4.757119581523e-03 ||r(i)||/||b|| 7.085456358805e-08 179 KSP preconditioned resid norm 4.796782862052e-07 true resid norm 5.141033952542e-02 ||r(i)||/||b|| 7.657274761676e-07 180 KSP preconditioned resid norm 9.921079426648e-09 true resid norm 1.063308787741e-03 ||r(i)||/||b|| 1.583737360889e-08 181 KSP preconditioned resid norm 2.536576210124e-10 true resid norm 2.718624644920e-05 ||r(i)||/||b|| 4.049235245709e-10 182 KSP preconditioned resid norm 1.047157364780e-11 true resid norm 1.122313966041e-06 ||r(i)||/||b|| 1.671622184598e-11 183 KSP preconditioned resid norm 1.244906673812e-10 true resid norm 1.334248080723e-05 ||r(i)||/||b|| 1.987285874524e-10 184 KSP preconditioned resid norm 1.959956371324e-10 true resid norm 2.100614044675e-05 ||r(i)||/||b|| 3.128743956332e-10 185 KSP preconditioned resid norm 2.704883349749e-10 true resid norm 2.899001474168e-05 ||r(i)||/||b|| 4.317896171690e-10 186 KSP preconditioned resid norm 3.521080224859e-10 true resid norm 3.773773546731e-05 ||r(i)||/||b|| 5.620818925224e-10 187 KSP preconditioned resid norm 5.585664451775e-10 true resid norm 5.986524496711e-05 ||r(i)||/||b|| 8.916584360652e-10 188 KSP preconditioned resid norm 8.373951317446e-10 true resid norm 8.974914849925e-05 ||r(i)||/||b|| 1.336762013302e-09 189 KSP preconditioned resid norm 1.025188904578e-09 true resid norm 1.098762320723e-04 ||r(i)||/||b|| 1.636543361747e-09 190 KSP preconditioned resid norm 8.804939847806e-10 true resid norm 9.436831188889e-05 ||r(i)||/||b|| 1.405561798656e-09 191 KSP preconditioned resid norm 3.943144046396e-10 true resid norm 4.226124296393e-05 ||r(i)||/||b|| 6.294569382969e-10 192 KSP preconditioned resid norm 5.355541541046e-11 true resid norm 5.739853081002e-06 ||r(i)||/||b|| 8.549181456224e-11 193 KSP preconditioned resid norm 1.299526839866e-09 true resid norm 1.392788189132e-04 ||r(i)||/||b|| 2.074478003345e-09 194 KSP preconditioned resid norm 2.882900653170e-08 true resid norm 3.089794419764e-03 ||r(i)||/||b|| 4.602071304651e-08 195 KSP preconditioned resid norm 3.502493774106e-09 true resid norm 3.753853139688e-04 ||r(i)||/||b|| 5.591148623198e-09 196 KSP preconditioned resid norm 2.001525286264e-10 true resid norm 2.145165783125e-05 ||r(i)||/||b|| 3.195101211617e-10 197 KSP preconditioned resid norm 1.647406102516e-10 true resid norm 1.765632792834e-05 ||r(i)||/||b|| 2.629808623666e-10 198 KSP preconditioned resid norm 2.535219507377e-10 true resid norm 2.717160853620e-05 ||r(i)||/||b|| 4.047055012650e-10 199 KSP preconditioned resid norm 3.552020174756e-10 true resid norm 3.806933251915e-05 ||r(i)||/||b|| 5.670208401339e-10 200 KSP preconditioned resid norm 3.473561803799e-10 true resid norm 3.722844483670e-05 ||r(i)||/||b|| 5.544963011256e-10 201 KSP preconditioned resid norm 3.040362870159e-10 true resid norm 3.258556793500e-05 ||r(i)||/||b|| 4.853433166302e-10 202 KSP preconditioned resid norm 2.238603169023e-10 true resid norm 2.399258218026e-05 ||r(i)||/||b|| 3.573557297856e-10 203 KSP preconditioned resid norm 2.794466174289e-10 true resid norm 2.995013150895e-05 ||r(i)||/||b|| 4.460900049084e-10 204 KSP preconditioned resid norm 
5.892481453566e-10 true resid norm 6.315359820006e-05 ||r(i)||/||b|| 9.406365685783e-10 205 KSP preconditioned resid norm 1.796869246546e-09 true resid norm 1.925822916338e-04 ||r(i)||/||b|| 2.868402611004e-09 206 KSP preconditioned resid norm 5.383703166757e-09 true resid norm 5.770068661128e-04 ||r(i)||/||b|| 8.594185827181e-09 207 KSP preconditioned resid norm 1.513140030683e-08 true resid norm 1.621731659303e-03 ||r(i)||/||b|| 2.415476151223e-08 208 KSP preconditioned resid norm 2.664830499802e-08 true resid norm 2.856074057521e-03 ||r(i)||/||b|| 4.253958250426e-08 209 KSP preconditioned resid norm 1.744796115928e-08 true resid norm 1.870012714072e-03 ||r(i)||/||b|| 2.785276520572e-08 210 KSP preconditioned resid norm 1.709105056825e-09 true resid norm 1.831760214160e-04 ||r(i)||/||b|| 2.728301619248e-09 211 KSP preconditioned resid norm 5.824652325203e-09 true resid norm 6.242662821250e-04 ||r(i)||/||b|| 9.298087681987e-09 212 KSP preconditioned resid norm 1.112672284142e-07 true resid norm 1.192524045420e-02 ||r(i)||/||b|| 1.776196064834e-07 213 KSP preconditioned resid norm 1.643443975106e-06 true resid norm 1.761386966729e-01 ||r(i)||/||b|| 2.623484709571e-06 214 KSP preconditioned resid norm 7.564685658242e-08 true resid norm 8.107571011682e-03 ||r(i)||/||b|| 1.207576130781e-07 215 KSP preconditioned resid norm 2.392757332232e-08 true resid norm 2.564475361971e-03 ||r(i)||/||b|| 3.819638743380e-08 216 KSP preconditioned resid norm 1.178197751769e-08 true resid norm 1.262752000134e-03 ||r(i)||/||b|| 1.880796569355e-08 217 KSP preconditioned resid norm 6.804809693774e-09 true resid norm 7.293161986200e-04 ||r(i)||/||b|| 1.086274584554e-08 218 KSP preconditioned resid norm 4.979204750271e-09 true resid norm 5.336541131997e-04 ||r(i)||/||b|| 7.948471475175e-09 219 KSP preconditioned resid norm 4.201414891575e-09 true resid norm 4.502932602210e-04 ||r(i)||/||b|| 6.706859454096e-09 220 KSP preconditioned resid norm 2.892374518861e-09 true resid norm 3.099947965425e-04 ||r(i)||/||b|| 4.617194427674e-09 221 KSP preconditioned resid norm 7.824978176254e-10 true resid norm 8.386543747579e-05 ||r(i)||/||b|| 1.249127517321e-09 222 KSP preconditioned resid norm 4.025045270489e-12 true resid norm 4.313920255055e-07 ||r(i)||/||b|| 6.425336420232e-12 223 KSP preconditioned resid norm 7.270310185183e-10 true resid norm 7.792069485626e-05 ||r(i)||/||b|| 1.160583990775e-09 224 KSP preconditioned resid norm 2.285667940369e-09 true resid norm 2.449700616763e-04 ||r(i)||/||b|| 3.648688353269e-09 225 KSP preconditioned resid norm 4.666494852894e-09 true resid norm 5.001389357709e-04 ||r(i)||/||b|| 7.449282159120e-09 226 KSP preconditioned resid norm 7.718252461926e-09 true resid norm 8.272158639732e-04 ||r(i)||/||b|| 1.232090512557e-08 227 KSP preconditioned resid norm 9.133913525747e-09 true resid norm 9.789415681285e-04 ||r(i)||/||b|| 1.458077233487e-08 228 KSP preconditioned resid norm 9.380662429943e-09 true resid norm 1.005387270614e-03 ||r(i)||/||b|| 1.497466588249e-08 229 KSP preconditioned resid norm 7.153889400651e-09 true resid norm 7.667293640037e-04 ||r(i)||/||b|| 1.141999345310e-08 230 KSP preconditioned resid norm 3.686222155648e-09 true resid norm 3.950766653994e-04 ||r(i)||/||b|| 5.884439991676e-09 231 KSP preconditioned resid norm 1.553980021933e-09 true resid norm 1.665502564985e-04 ||r(i)||/||b|| 2.480670400953e-09 232 KSP preconditioned resid norm 4.135131754480e-10 true resid norm 4.431892592037e-05 ||r(i)||/||b|| 6.601049439614e-10 233 KSP preconditioned resid norm 
3.408443088536e-11 true resid norm 3.653052704994e-06 ||r(i)||/||b|| 5.441012165888e-11 234 KSP preconditioned resid norm 7.336512309518e-11 true resid norm 7.863022750472e-06 ||r(i)||/||b|| 1.171152072005e-10 235 KSP preconditioned resid norm 1.711388154684e-09 true resid norm 1.834207210137e-04 ||r(i)||/||b|| 2.731946279196e-09 236 KSP preconditioned resid norm 1.159702499545e-07 true resid norm 1.242929419260e-02 ||r(i)||/||b|| 1.851271973790e-07 237 KSP preconditioned resid norm 3.857597718932e-09 true resid norm 4.134441113034e-04 ||r(i)||/||b|| 6.158012547810e-09 238 KSP preconditioned resid norm 1.645996729438e-09 true resid norm 1.764122920283e-04 ||r(i)||/||b|| 2.627559755231e-09 239 KSP preconditioned resid norm 3.308806112000e-12 true resid norm 3.546274450365e-07 ||r(i)||/||b|| 5.281972089161e-12 240 KSP preconditioned resid norm 7.382914922234e-08 true resid norm 7.912755349202e-03 ||r(i)||/||b|| 1.178559456912e-07 241 KSP preconditioned resid norm 1.809365457089e-08 true resid norm 1.939215926278e-03 ||r(i)||/||b|| 2.888350729990e-08 242 KSP preconditioned resid norm 6.669136386139e-09 true resid norm 7.147751961904e-04 ||r(i)||/||b|| 1.064616596697e-08 243 KSP preconditioned resid norm 9.709051870072e-10 true resid norm 1.040582926825e-04 ||r(i)||/||b|| 1.549888496470e-09 244 KSP preconditioned resid norm 1.658711636824e-07 true resid norm 1.777750321545e-02 ||r(i)||/||b|| 2.647856986628e-07 245 KSP preconditioned resid norm 4.372029935281e-05 true resid norm 4.685791942789e+00 ||r(i)||/||b|| 6.979217938105e-05 246 KSP preconditioned resid norm 1.108728206741e-06 true resid norm 1.188296918090e-01 ||r(i)||/||b|| 1.769900001491e-06 247 KSP preconditioned resid norm 6.312367692314e-07 true resid norm 6.765379494300e-02 ||r(i)||/||b|| 1.007664414067e-06 248 KSP preconditioned resid norm 4.356226723159e-07 true resid norm 4.668854601307e-02 ||r(i)||/||b|| 6.953990740879e-07 249 KSP preconditioned resid norm 2.372265656235e-07 true resid norm 2.542513080365e-02 ||r(i)||/||b|| 3.786927186482e-07 250 KSP preconditioned resid norm 8.566943433538e-08 true resid norm 9.181756554699e-03 ||r(i)||/||b|| 1.367569897090e-07 251 KSP preconditioned resid norm 2.811298063268e-08 true resid norm 3.013052977489e-03 ||r(i)||/||b|| 4.487769334555e-08 252 KSP preconditioned resid norm 9.927933000520e-09 true resid norm 1.064041855973e-03 ||r(i)||/||b|| 1.584829223911e-08 253 KSP preconditioned resid norm 5.465425608817e-09 true resid norm 5.857655977788e-04 ||r(i)||/||b|| 8.724642104167e-09 254 KSP preconditioned resid norm 2.802709112126e-09 true resid norm 3.003847634638e-04 ||r(i)||/||b|| 4.474058505150e-09 255 KSP preconditioned resid norm 8.749035597488e-10 true resid norm 9.376916709178e-05 ||r(i)||/||b|| 1.396637881063e-09 256 KSP preconditioned resid norm 2.525241256290e-12 true resid norm 2.706479113805e-07 ||r(i)||/||b|| 4.031145174775e-12 257 KSP preconditioned resid norm 1.954487205473e-10 true resid norm 2.094752459268e-05 ||r(i)||/||b|| 3.120013461568e-10 258 KSP preconditioned resid norm 3.558554122752e-10 true resid norm 3.813936440469e-05 ||r(i)||/||b|| 5.680639248413e-10 259 KSP preconditioned resid norm 3.327616628106e-10 true resid norm 3.566425546219e-05 ||r(i)||/||b|| 5.311985988919e-10 260 KSP preconditioned resid norm 3.138738006439e-10 true resid norm 3.363991905729e-05 ||r(i)||/||b|| 5.010472709577e-10 261 KSP preconditioned resid norm 5.105712063176e-10 true resid norm 5.472127327668e-05 ||r(i)||/||b|| 8.150419325301e-10 262 KSP preconditioned resid norm 
1.562664105193e-09 true resid norm 1.674809867731e-04 ||r(i)||/||b|| 2.494533093765e-09 263 KSP preconditioned resid norm 2.160588142357e-09 true resid norm 2.315644371948e-04 ||r(i)||/||b|| 3.449019277059e-09 264 KSP preconditioned resid norm 2.958294939588e-08 true resid norm 3.170599194897e-03 ||r(i)||/||b|| 4.722425375632e-08 265 KSP preconditioned resid norm 4.249142745113e-09 true resid norm 4.554085662232e-04 ||r(i)||/||b|| 6.783048998670e-09 266 KSP preconditioned resid norm 3.648294184577e-10 true resid norm 3.910116757905e-05 ||r(i)||/||b|| 5.823894306457e-10 267 KSP preconditioned resid norm 4.149339357988e-08 true resid norm 4.447119809209e-03 ||r(i)||/||b|| 6.623729504912e-08 268 KSP preconditioned resid norm 3.843343031428e-06 true resid norm 4.119163426746e-01 ||r(i)||/||b|| 6.135257311663e-06 269 KSP preconditioned resid norm 6.080537781060e-08 true resid norm 6.516912135594e-03 ||r(i)||/||b|| 9.706566282308e-08 270 KSP preconditioned resid norm 4.757307440420e-09 true resid norm 5.098719177265e-04 ||r(i)||/||b|| 7.594249334542e-09 271 KSP preconditioned resid norm 5.436376140969e-06 true resid norm 5.826521752233e-01 ||r(i)||/||b|| 8.678269463611e-06 272 KSP preconditioned resid norm 6.392279306681e-10 true resid norm 6.851026028837e-05 ||r(i)||/||b|| 1.020420973416e-09 273 KSP preconditioned resid norm 2.599896339650e-06 true resid norm 2.786479850495e-01 ||r(i)||/||b|| 4.150301676682e-06 274 KSP preconditioned resid norm 6.217134530094e-04 true resid norm 6.663311852755e+01 ||r(i)||/||b|| 9.924620251545e-04 275 KSP preconditioned resid norm 1.906648612009e-06 true resid norm 2.043480679715e-01 ||r(i)||/||b|| 3.043647091072e-06 276 KSP preconditioned resid norm 8.789054443529e-06 true resid norm 9.419807527825e-01 ||r(i)||/||b|| 1.403026222128e-05 277 KSP preconditioned resid norm 2.890658496307e-05 true resid norm 3.098108771408e+00 ||r(i)||/||b|| 4.614455053833e-05 278 KSP preconditioned resid norm 2.700565878072e-05 true resid norm 2.894374013848e+00 ||r(i)||/||b|| 4.311003835354e-05 279 KSP preconditioned resid norm 2.127015678842e-05 true resid norm 2.279662554372e+00 ||r(i)||/||b|| 3.395426426643e-05 280 KSP preconditioned resid norm 6.022114971533e-04 true resid norm 6.454296569280e+01 ||r(i)||/||b|| 9.613303993039e-04 281 KSP preconditioned resid norm 4.191696888290e-04 true resid norm 4.492517159409e+01 ||r(i)||/||b|| 6.691346250326e-04 282 KSP preconditioned resid norm 2.128575355843e-02 true resid norm 2.281334162770e+03 ||r(i)||/||b|| 3.397916191332e-02 283 KSP preconditioned resid norm 5.566202222495e-03 true resid norm 5.965664899852e+02 ||r(i)||/||b|| 8.885515189345e-03 284 KSP preconditioned resid norm 1.253993707281e-01 true resid norm 1.343987506229e+04 ||r(i)||/||b|| 2.001792189364e-01 285 KSP preconditioned resid norm 1.655806619282e-02 true resid norm 1.774636823235e+03 ||r(i)||/||b|| 2.643219609740e-02 286 KSP preconditioned resid norm 1.065113628903e-01 true resid norm 1.141552307358e+04 ||r(i)||/||b|| 1.700276588903e-01 287 KSP preconditioned resid norm 6.805321243488e+00 true resid norm 7.293710226785e+05 ||r(i)||/||b|| 1.086356241840e+01 288 KSP preconditioned resid norm 1.283123255580e-01 true resid norm 1.375207558409e+04 ||r(i)||/||b|| 2.048292663748e-01 289 KSP preconditioned resid norm 1.145625946046e+00 true resid norm 1.227842651328e+05 ||r(i)||/||b|| 1.828801099567e+00 290 KSP preconditioned resid norm 4.728734309042e+00 true resid norm 5.068095473462e+05 ||r(i)||/||b|| 7.548637086810e+00 291 KSP preconditioned resid norm 
1.345611312328e+00 true resid norm 1.442180117418e+05 ||r(i)||/||b|| 2.148044443362e+00 292 KSP preconditioned resid norm 2.067182052275e+02 true resid norm 2.215534922722e+07 ||r(i)||/||b|| 3.299912002912e+02 293 KSP preconditioned resid norm 2.819056223629e-01 true resid norm 3.021367907919e+04 ||r(i)||/||b|| 4.500153945805e-01 294 KSP preconditioned resid norm 2.473762204291e+00 true resid norm 2.651293604299e+05 ||r(i)||/||b|| 3.948949528325e+00 295 KSP preconditioned resid norm 8.824314765228e-02 true resid norm 9.457598332934e+03 ||r(i)||/||b|| 1.408654945471e-01 296 KSP preconditioned resid norm 3.547759169453e-03 true resid norm 3.802366767064e+02 ||r(i)||/||b|| 5.663406884461e-03 297 KSP preconditioned resid norm 6.629254109013e-04 true resid norm 7.105007502579e+01 ||r(i)||/||b|| 1.058250055013e-03 298 KSP preconditioned resid norm 8.335816942804e-03 true resid norm 8.934043097547e+02 ||r(i)||/||b|| 1.330674400560e-02 299 KSP preconditioned resid norm 2.280133581537e-03 true resid norm 2.443769078227e+02 ||r(i)||/||b|| 3.639853667338e-03 300 KSP preconditioned resid norm 3.042948047775e-02 true resid norm 3.261327492118e+03 ||r(i)||/||b|| 4.857559962741e-02 301 KSP preconditioned resid norm 3.641340291287e-03 true resid norm 3.902663808886e+02 ||r(i)||/||b|| 5.812793567005e-03 302 KSP preconditioned resid norm 5.262062292809e-03 true resid norm 5.639698140863e+02 ||r(i)||/||b|| 8.400006425973e-03 303 KSP preconditioned resid norm 2.767651815672e-04 true resid norm 2.966274423730e+01 ||r(i)||/||b|| 4.418095365778e-04 304 KSP preconditioned resid norm 2.769145142443e-04 true resid norm 2.967874922780e+01 ||r(i)||/||b|| 4.420479217177e-04 305 KSP preconditioned resid norm 2.669690509777e-03 true resid norm 2.861282853351e+02 ||r(i)||/||b|| 4.261716452611e-03 306 KSP preconditioned resid norm 5.477323157568e-03 true resid norm 5.870407366268e+02 ||r(i)||/||b|| 8.743634564843e-03 307 KSP preconditioned resid norm 1.187253595512e-06 true resid norm 1.272457750950e-01 ||r(i)||/||b|| 1.895252727680e-06 308 KSP preconditioned resid norm 2.168305092952e+00 true resid norm 2.323915137191e+05 ||r(i)||/||b|| 3.461338106800e+00 309 KSP preconditioned resid norm 3.674108569258e-03 true resid norm 3.937783732259e+02 ||r(i)||/||b|| 5.865102675514e-03 310 KSP preconditioned resid norm 9.663808831088e-04 true resid norm 1.035733933599e+02 ||r(i)||/||b|| 1.542666199595e-03 311 KSP preconditioned resid norm 2.905881247679e-04 true resid norm 3.114423999927e+01 ||r(i)||/||b|| 4.638755649534e-04 312 KSP preconditioned resid norm 2.276520242446e-04 true resid norm 2.439896429079e+01 ||r(i)||/||b|| 3.634085578884e-04 313 KSP preconditioned resid norm 5.909203310746e-06 true resid norm 6.333281888478e-01 ||r(i)||/||b|| 9.433059577294e-06 314 KSP preconditioned resid norm 1.132991640772e-04 true resid norm 1.214301635826e+01 ||r(i)||/||b|| 1.808632534798e-04 315 KSP preconditioned resid norm 1.149655320396e-01 true resid norm 1.232161196723e+04 ||r(i)||/||b|| 1.835233324869e-01 316 KSP preconditioned resid norm 1.445642724369e-02 true resid norm 1.549390358719e+03 ||r(i)||/||b|| 2.307727939423e-02 317 KSP preconditioned resid norm 1.748966842893e+00 true resid norm 1.874482760043e+05 ||r(i)||/||b|| 2.791934397278e+00 318 KSP preconditioned resid norm 3.200837397862e-03 true resid norm 3.430547894864e+02 ||r(i)||/||b|| 5.109604032292e-03 319 KSP preconditioned resid norm 3.363929224528e+00 true resid norm 3.605344128712e+05 ||r(i)||/||b|| 5.369952981985e+00 320 KSP preconditioned resid norm 
6.693942426451e-01 true resid norm 7.174338225998e+04 ||r(i)||/||b|| 1.068576468018e+00 321 KSP preconditioned resid norm 4.239885892382e-02 true resid norm 4.544164483869e+03 ||r(i)||/||b|| 6.768271973389e-02 322 KSP preconditioned resid norm 7.716682076793e-01 true resid norm 8.270475554445e+04 ||r(i)||/||b|| 1.231839826671e+00 323 KSP preconditioned resid norm 8.829139791489e-02 true resid norm 9.462769631425e+03 ||r(i)||/||b|| 1.409425180677e-01 324 KSP preconditioned resid norm 4.184745081336e+00 true resid norm 4.485066451151e+05 ||r(i)||/||b|| 6.680248848357e+00 325 KSP preconditioned resid norm 7.244650997288e+01 true resid norm 7.764568810616e+06 ||r(i)||/||b|| 1.156487923201e+02 326 KSP preconditioned resid norm 6.173923463866e+03 true resid norm 6.616999712610e+08 ||r(i)||/||b|| 9.855640979055e+03 327 KSP preconditioned resid norm 1.959276674036e+00 true resid norm 2.099885634282e+05 ||r(i)||/||b|| 3.127659030892e+00 328 KSP preconditioned resid norm 1.121351920525e+01 true resid norm 1.201826582282e+06 ||r(i)||/||b|| 1.790051659135e+01 329 KSP preconditioned resid norm 1.130971168979e+03 true resid norm 1.212136163323e+08 ||r(i)||/||b|| 1.805407187894e+03 330 KSP preconditioned resid norm 4.420103087306e+00 true resid norm 4.737315101142e+05 ||r(i)||/||b|| 7.055958722883e+00 331 KSP preconditioned resid norm 4.088057471206e+02 true resid norm 4.381439982323e+07 ||r(i)||/||b|| 6.525903175538e+02 332 KSP preconditioned resid norm 3.402449682428e-01 true resid norm 3.646629036299e+04 ||r(i)||/||b|| 5.431444480354e-01 333 KSP preconditioned resid norm 1.669642304015e+00 true resid norm 1.789465436480e+05 ||r(i)||/||b|| 2.665305977385e+00 334 KSP preconditioned resid norm 4.807832112080e+01 true resid norm 5.152869790310e+06 ||r(i)||/||b|| 7.674903562158e+01 335 KSP preconditioned resid norm 1.806983009801e+01 true resid norm 1.936662501047e+06 ||r(i)||/||b|| 2.884547549782e+01 336 KSP preconditioned resid norm 5.609893970183e+03 true resid norm 6.012492219418e+08 ||r(i)||/||b|| 8.955261791990e+03 337 KSP preconditioned resid norm 6.497291386334e+00 true resid norm 6.963574376650e+05 ||r(i)||/||b|| 1.037184403324e+01 338 KSP preconditioned resid norm 1.795335659929e-01 true resid norm 1.924179270349e+04 ||r(i)||/||b|| 2.865954494717e-01 339 KSP preconditioned resid norm 8.220070254050e+00 true resid norm 8.809989762712e+05 ||r(i)||/||b|| 1.312197368922e+01 340 KSP preconditioned resid norm 2.258978929963e-02 true resid norm 2.421096247060e+03 ||r(i)||/||b|| 3.606083787686e-02 341 KSP preconditioned resid norm 3.878103577901e+00 true resid norm 4.156418591179e+05 ||r(i)||/||b|| 6.190746739079e+00 342 KSP preconditioned resid norm 5.952599113346e+00 true resid norm 6.379791855371e+05 ||r(i)||/||b|| 9.502333501362e+00 343 KSP preconditioned resid norm 7.222045044346e+02 true resid norm 7.740340526223e+07 ||r(i)||/||b|| 1.152879259413e+03 344 KSP preconditioned resid norm 6.364182209877e+01 true resid norm 6.820912521576e+06 ||r(i)||/||b|| 1.015935739488e+02 345 KSP preconditioned resid norm 3.827037043269e+02 true resid norm 4.101687228369e+07 ||r(i)||/||b|| 6.109227518045e+02 346 KSP preconditioned resid norm 6.309451684929e+00 true resid norm 6.762254219609e+05 ||r(i)||/||b|| 1.007198922354e+01 347 KSP preconditioned resid norm 2.044414998384e+00 true resid norm 2.191133974822e+05 ||r(i)||/||b|| 3.263568192651e+00 348 KSP preconditioned resid norm 1.457585290772e-02 true resid norm 1.562190008524e+03 ||r(i)||/||b|| 2.326792282571e-02 349 KSP preconditioned resid norm 
3.697892503779e-01 true resid norm 3.963274534823e+04 ||r(i)||/||b|| 5.903069761695e-01 350 KSP preconditioned resid norm 1.104685840662e+02 true resid norm 1.183964449407e+07 ||r(i)||/||b|| 1.763447038252e+02 351 KSP preconditioned resid norm 1.199213228986e+02 true resid norm 1.285275666774e+07 ||r(i)||/||b|| 1.914344277014e+02 352 KSP preconditioned resid norm 1.183644579434e+02 true resid norm 1.268589721439e+07 ||r(i)||/||b|| 1.889491519909e+02 353 KSP preconditioned resid norm 1.234968225554e+02 true resid norm 1.323596647557e+07 ||r(i)||/||b|| 1.971421176662e+02 354 KSP preconditioned resid norm 2.882557881065e-01 true resid norm 3.089426814670e+04 ||r(i)||/||b|| 4.601523777979e-01 355 KSP preconditioned resid norm 2.170676916299e+02 true resid norm 2.326457175403e+07 ||r(i)||/||b|| 3.465124326697e+02 356 KSP preconditioned resid norm 5.764266225925e+00 true resid norm 6.177943120636e+05 ||r(i)||/||b|| 9.201691405543e+00 357 KSP preconditioned resid norm 1.701448294063e+04 true resid norm 1.823554008687e+09 ||r(i)||/||b|| 2.716078947576e+04 Linear solve did not converge due to DIVERGED_DTOL iterations 357 >On 2013-01-09 01:58:24?"Jed Brown" ??? >>On Tue, Jan 8, 2013 at 11:50 AM, w_ang_temp wrote: >>I am sorry. >>In my view, preconditioned resid norm:||rp||=||Bb-BAx||(B is the preconditioned matrix); >-ksp_norm_type preconditioned is the default for GMRES, so it's using preconditioned residual. >>true resid norm:||rt||=||b-Ax||; ||r(i)||/||b||: ||rt||/||b||. Is it right? >>(1) Divergence is detected if >> ||rp||/||b|| > dtol or ||rt||/||b|| > dtol ? >Neither, it's |rp|/|min(rp0,rp1,rp2,rp3,...)|. Your solver "converges" a bit at some iteration and then jumps a lot so the denominator is smaller than rp0. >> Both of them (rt/b:1.701448294063e+04 / 6.7139E+4; rt/b:2.716078947576e+04; dtol=1.0E+5 ) >>are not in this example, but it is divergent? -------------- next part -------------- An HTML attachment was scrubbed... URL: From knepley at gmail.com Tue Jan 15 11:47:34 2013 From: knepley at gmail.com (Matthew Knepley) Date: Tue, 15 Jan 2013 11:47:34 -0600 Subject: [petsc-users] DIVERGED_DTOL In-Reply-To: <2d12f2c6.11c04.13c3f4cf9e6.Coremail.w_ang_temp@163.com> References: <18a9408d.272c4.13c1a60725e.Coremail.w_ang_temp@163.com> <3a0d2766.3b.13c1b2272aa.Coremail.w_ang_temp@163.com> <2cb6e1d.179.13c1b48bd19.Coremail.w_ang_temp@163.com> <2d12f2c6.11c04.13c3f4cf9e6.Coremail.w_ang_temp@163.com> Message-ID: On Tue, Jan 15, 2013 at 11:41 AM, w_ang_temp wrote: > Hello, > I am not sure about it. The following is the information under > -ksp_monitor_true_residual. > So can you tell me that how the DIVERGED_DTOL occurs(||rk||>dtol*||b||). > PS: dtol use the default parameter; normb:67139.2122204160. > Its r_0, not b, so its preconditioned. Matt > Thanks. 
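Putting the two answers just above together: the default test watches the preconditioned residual, not ||b||, so divergence is flagged roughly when

    ||r_prec(k)|| > dtol * ||r_prec(0)||

with the denominator effectively being the smallest preconditioned residual seen so far (per Jed's earlier note), and dtol = 1.0e5 by default. If the late blow-up in the log above were believed to be transient, dtol could be raised with -ksp_divtol 1e7 or, as a sketch (the 1e7 is only an example value),

    ierr = KSPSetTolerances(ksp, PETSC_DEFAULT, PETSC_DEFAULT, 1.0e7, PETSC_DEFAULT);CHKERRQ(ierr);

though a jump of this size usually points at a problem with the preconditioned operator itself.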
Jim > > 0 KSP preconditioned resid norm 1.145582415879e+00 true resid norm > 6.713921222042e+04 ||r(i)||/||b|| 1.000000000000e+00 > 1 KSP preconditioned resid norm 1.668442371722e-01 true resid norm > 9.315411570816e+03 ||r(i)||/||b|| 1.387477044001e-01 > 2 KSP preconditioned resid norm 7.332643142215e-02 true resid norm > 4.580901760869e+03 ||r(i)||/||b|| 6.822990037223e-02 > 3 KSP preconditioned resid norm 4.350407110218e-02 true resid norm > 2.969077057634e+03 ||r(i)||/||b|| 4.422269727990e-02 > 4 KSP preconditioned resid norm 2.967861353379e-02 true resid norm > 2.171406152803e+03 ||r(i)||/||b|| 3.234184734957e-02 > 5 KSP preconditioned resid norm 2.194027213667e-02 true resid norm > 1.697287121259e+03 ||r(i)||/||b|| 2.528011671759e-02 > 6 KSP preconditioned resid norm 1.709062414900e-02 true resid norm > 1.385369331879e+03 ||r(i)||/||b|| 2.063428041621e-02 > 7 KSP preconditioned resid norm 1.381432438160e-02 true resid norm > 1.166961199876e+03 ||r(i)||/||b|| 1.738121674775e-02 > 8 KSP preconditioned resid norm 1.147659811931e-02 true resid norm > 1.004464430978e+03 ||r(i)||/||b|| 1.496092071620e-02 > 9 KSP preconditioned resid norm 9.735929665267e-03 true resid norm > 8.766474922321e+02 ||r(i)||/||b|| 1.305716083403e-02 > 10 KSP preconditioned resid norm 8.401139127073e-03 true resid norm > 7.735891175651e+02 ||r(i)||/||b|| 1.152216554203e-02 > 11 KSP preconditioned resid norm 7.365394582494e-03 true resid norm > 6.915224037997e+02 ||r(i)||/||b|| 1.029982898115e-02 > 12 KSP preconditioned resid norm 6.581540116011e-03 true resid norm > 6.301038366131e+02 ||r(i)||/||b|| 9.385034702887e-03 > 13 KSP preconditioned resid norm 6.074644880442e-03 true resid norm > 5.941485646876e+02 ||r(i)||/||b|| 8.849501580939e-03 > 14 KSP preconditioned resid norm 6.000465973365e-03 true resid norm > 6.039460304962e+02 ||r(i)||/||b|| 8.995429206310e-03 > 15 KSP preconditioned resid norm 6.700641680203e-03 true resid norm > 7.024109463517e+02 ||r(i)||/||b|| 1.046200756788e-02 > 16 KSP preconditioned resid norm 8.572956854817e-03 true resid norm > 9.345547794474e+02 ||r(i)||/||b|| 1.391965661407e-02 > 17 KSP preconditioned resid norm 1.171098947054e-02 true resid norm > 1.308106003451e+03 ||r(i)||/||b|| 1.948348752078e-02 > 18 KSP preconditioned resid norm 1.553731077786e-02 true resid norm > 1.756744729914e+03 ||r(i)||/||b|| 2.616570364494e-02 > 19 KSP preconditioned resid norm 1.854806156796e-02 true resid norm > 2.108867138509e+03 ||r(i)||/||b|| 3.141036465524e-02 > 20 KSP preconditioned resid norm 1.882735116093e-02 true resid norm > 2.144875683978e+03 ||r(i)||/||b|| 3.194669125603e-02 > 21 KSP preconditioned resid norm 1.581371157031e-02 true resid norm > 1.800227274907e+03 ||r(i)||/||b|| 2.681335117542e-02 > 22 KSP preconditioned resid norm 1.116066908962e-02 true resid norm > 1.264488331079e+03 ||r(i)||/||b|| 1.883382734561e-02 > 23 KSP preconditioned resid norm 6.935989655893e-03 true resid norm > 7.781370409044e+02 ||r(i)||/||b|| 1.158990424776e-02 > 24 KSP preconditioned resid norm 4.056542969364e-03 true resid norm > 4.491311011610e+02 ||r(i)||/||b|| 6.689549762462e-03 > 25 KSP preconditioned resid norm 2.459663949493e-03 true resid norm > 2.758154676638e+02 ||r(i)||/||b|| 4.108112957273e-03 > 26 KSP preconditioned resid norm 1.781038036990e-03 true resid norm > 2.198771325293e+02 ||r(i)||/||b|| 3.274943587474e-03 > 27 KSP preconditioned resid norm 1.686781020289e-03 true resid norm > 2.433346354927e+02 ||r(i)||/||b|| 3.624329619685e-03 > 28 KSP preconditioned resid norm 2.076540169200e-03 true 
resid norm > 3.382587656684e+02 ||r(i)||/||b|| 5.038170012450e-03 > 29 KSP preconditioned resid norm 2.912174325946e-03 true resid norm > 4.994106293188e+02 ||r(i)||/||b|| 7.438434452868e-03 > 30 KSP preconditioned resid norm 3.668418500885e-03 true resid norm > 6.368223901851e+02 ||r(i)||/||b|| 9.485103699079e-03 > 31 KSP preconditioned resid norm 3.471520993065e-03 true resid norm > 6.005845512437e+02 ||r(i)||/||b|| 8.945361903740e-03 > 32 KSP preconditioned resid norm 2.400628046233e-03 true resid norm > 4.087910715821e+02 ||r(i)||/||b|| 6.088708193955e-03 > 33 KSP preconditioned resid norm 1.392978496225e-03 true resid norm > 2.329031165345e+02 ||r(i)||/||b|| 3.468958136862e-03 > 34 KSP preconditioned resid norm 1.013807019427e-03 true resid norm > 1.877543843971e+02 ||r(i)||/||b|| 2.796493705954e-03 > 35 KSP preconditioned resid norm 1.261815551662e-03 true resid norm > 2.614230430124e+02 ||r(i)||/||b|| 3.893746059369e-03 > 36 KSP preconditioned resid norm 1.853746656434e-03 true resid norm > 3.922570169729e+02 ||r(i)||/||b|| 5.842442948022e-03 > 37 KSP preconditioned resid norm 2.657914774769e-03 true resid norm > 5.613289970100e+02 ||r(i)||/||b|| 8.360672972556e-03 > 38 KSP preconditioned resid norm 3.436994681718e-03 true resid norm > 7.215058669001e+02 ||r(i)||/||b|| 1.074641544097e-02 > 39 KSP preconditioned resid norm 3.614431832954e-03 true resid norm > 7.538884668185e+02 ||r(i)||/||b|| 1.122873566558e-02 > 40 KSP preconditioned resid norm 2.766407868570e-03 true resid norm > 5.733943971401e+02 ||r(i)||/||b|| 8.540380176902e-03 > 41 KSP preconditioned resid norm 1.467719776874e-03 true resid norm > 3.023387886260e+02 ||r(i)||/||b|| 4.503162587512e-03 > 42 KSP preconditioned resid norm 5.524746404481e-04 true resid norm > 1.130986298210e+02 ||r(i)||/||b|| 1.684539125209e-03 > 43 KSP preconditioned resid norm 1.546370264138e-04 true resid norm > 3.145301955484e+01 ||r(i)||/||b|| 4.684746590648e-04 > 44 KSP preconditioned resid norm 3.362252620216e-05 true resid norm > 6.790570379860e+00 ||r(i)||/||b|| 1.011416451770e-04 > 45 KSP preconditioned resid norm 6.273756038479e-06 true resid norm > 1.247817148548e+00 ||r(i)||/||b|| 1.858551965805e-05 > 46 KSP preconditioned resid norm 1.802876108688e-06 true resid norm > 3.246014610548e-01 ||r(i)||/||b|| 4.834752305242e-06 > 47 KSP preconditioned resid norm 1.955901149713e-06 true resid norm > 3.605076707485e-01 ||r(i)||/||b|| 5.369554673429e-06 > 48 KSP preconditioned resid norm 2.146815227031e-06 true resid norm > 4.034225428527e-01 ||r(i)||/||b|| 6.008747042314e-06 > 49 KSP preconditioned resid norm 7.835209417034e-07 true resid norm > 1.092344842431e-01 ||r(i)||/||b|| 1.626984896464e-06 > 50 KSP preconditioned resid norm 5.949209762592e-07 true resid norm > 6.093308467935e-02 ||r(i)||/||b|| 9.075632951920e-07 > 51 KSP preconditioned resid norm 4.843097958216e-07 true resid norm > 3.907306790091e-02 ||r(i)||/||b|| 5.819709020808e-07 > 52 KSP preconditioned resid norm 4.015928439388e-07 true resid norm > 2.047742276673e-02 ||r(i)||/||b|| 3.049994494946e-07 > 53 KSP preconditioned resid norm 3.831231784925e-07 true resid norm > 3.214301153383e-02 ||r(i)||/||b|| 4.787516932475e-07 > 54 KSP preconditioned resid norm 3.936949175538e-07 true resid norm > 5.284803093534e-02 ||r(i)||/||b|| 7.871410638815e-07 > 55 KSP preconditioned resid norm 3.162241186775e-07 true resid norm > 3.587463976088e-02 ||r(i)||/||b|| 5.343321521721e-07 > 56 KSP preconditioned resid norm 3.171835200858e-07 true resid norm > 4.396913730536e-02 ||r(i)||/||b|| 
6.548950434660e-07 > 57 KSP preconditioned resid norm 2.173392548323e-07 true resid norm > 1.845471373372e-02 ||r(i)||/||b|| 2.748723603299e-07 > 58 KSP preconditioned resid norm 1.842063440560e-07 true resid norm > 1.135583609373e-02 ||r(i)||/||b|| 1.691386556109e-07 > 59 KSP preconditioned resid norm 1.562305233760e-07 true resid norm > 7.915226925674e-03 ||r(i)||/||b|| 1.178927584031e-07 > 60 KSP preconditioned resid norm 1.395168870151e-07 true resid norm > 7.381518126662e-03 ||r(i)||/||b|| 1.099434724141e-07 > 61 KSP preconditioned resid norm 1.323458776323e-07 true resid norm > 1.236767010915e-02 ||r(i)||/||b|| 1.842093420541e-07 > 62 KSP preconditioned resid norm 1.210573318494e-07 true resid norm > 1.246351751392e-02 ||r(i)||/||b|| 1.856369340916e-07 > 63 KSP preconditioned resid norm 1.518242462597e-07 true resid norm > 2.453397483619e-02 ||r(i)||/||b|| 3.654194624097e-07 > 64 KSP preconditioned resid norm 1.723839793842e-07 true resid norm > 3.078519652295e-02 ||r(i)||/||b|| 4.585278186149e-07 > 65 KSP preconditioned resid norm 6.991080261238e-07 true resid norm > 1.418507987285e-01 ||r(i)||/||b|| 2.112786165301e-06 > 66 KSP preconditioned resid norm 1.030788328966e-05 true resid norm > 2.103063209272e+00 ||r(i)||/||b|| 3.132391846314e-05 > 67 KSP preconditioned resid norm 1.229992774455e-07 true resid norm > 2.097942598994e-02 ||r(i)||/||b|| 3.124764991443e-07 > 68 KSP preconditioned resid norm 6.581561333514e-08 true resid norm > 3.405626207360e-03 ||r(i)||/||b|| 5.072484610304e-08 > 69 KSP preconditioned resid norm 6.343808367654e-08 true resid norm > 2.730918192381e-03 ||r(i)||/||b|| 4.067545778488e-08 > 70 KSP preconditioned resid norm 6.319435025925e-08 true resid norm > 2.718071769813e-03 ||r(i)||/||b|| 4.048411769995e-08 > 71 KSP preconditioned resid norm 6.446816119437e-08 true resid norm > 2.784759189575e-03 ||r(i)||/||b|| 4.147738851080e-08 > 72 KSP preconditioned resid norm 6.716418150551e-08 true resid norm > 2.909287512040e-03 ||r(i)||/||b|| 4.333216634250e-08 > 73 KSP preconditioned resid norm 7.131730758571e-08 true resid norm > 3.126610397256e-03 ||r(i)||/||b|| 4.656906588345e-08 > 74 KSP preconditioned resid norm 7.792929027648e-08 true resid norm > 3.511383015577e-03 ||r(i)||/||b|| 5.230003301273e-08 > 75 KSP preconditioned resid norm 9.337691950640e-08 true resid norm > 6.318359239434e-03 ||r(i)||/||b|| 9.410833148728e-08 > 76 KSP preconditioned resid norm 1.479957854534e-07 true resid norm > 1.818912212739e-02 ||r(i)||/||b|| 2.709165259144e-07 > 77 KSP preconditioned resid norm 6.682161192641e-07 true resid norm > 1.212127978501e-01 ||r(i)||/||b|| 1.805394997071e-06 > 78 KSP preconditioned resid norm 5.663368166860e-05 true resid norm > 1.075143624402e+01 ||r(i)||/||b|| 1.601364670279e-04 > 79 KSP preconditioned resid norm 8.776500145772e-07 true resid norm > 1.663312859850e-01 ||r(i)||/||b|| 2.477408960935e-06 > 80 KSP preconditioned resid norm 1.630840126402e-07 true resid norm > 2.931897201954e-02 ||r(i)||/||b|| 4.366892468635e-07 > 81 KSP preconditioned resid norm 5.046755275791e-08 true resid norm > 7.225871257706e-03 ||r(i)||/||b|| 1.076252017075e-07 > 82 KSP preconditioned resid norm 2.756629667916e-08 true resid norm > 2.674186926924e-03 ||r(i)||/||b|| 3.983047817339e-08 > 83 KSP preconditioned resid norm 1.948064945258e-08 true resid norm > 1.002338825836e-03 ||r(i)||/||b|| 1.492926104859e-08 > 84 KSP preconditioned resid norm 1.686033880678e-08 true resid norm > 7.336515948282e-04 ||r(i)||/||b|| 1.092731908173e-08 > 85 KSP preconditioned resid norm 
1.627099807112e-08 true resid norm > 7.004183219758e-04 ||r(i)||/||b|| 1.043232857241e-08 > 86 KSP preconditioned resid norm 1.815047764168e-08 true resid norm > 8.235488547093e-04 ||r(i)||/||b|| 1.226628712898e-08 > 87 KSP preconditioned resid norm 2.850439847332e-08 true resid norm > 2.214968823226e-03 ||r(i)||/||b|| 3.299068830231e-08 > 88 KSP preconditioned resid norm 1.519490344670e-07 true resid norm > 1.992168037360e-02 ||r(i)||/||b|| 2.967219857778e-07 > 89 KSP preconditioned resid norm 6.189262765590e-07 true resid norm > 1.040718853983e-01 ||r(i)||/||b|| 1.550090952164e-06 > 90 KSP preconditioned resid norm 5.042710252729e-08 true resid norm > 8.652148341175e-03 ||r(i)||/||b|| 1.288687795855e-07 > 91 KSP preconditioned resid norm 2.122997506948e-08 true resid norm > 3.738624152724e-03 ||r(i)||/||b|| 5.568465921898e-08 > 92 KSP preconditioned resid norm 1.416290557086e-08 true resid norm > 2.531358077791e-03 ||r(i)||/||b|| 3.770312450912e-08 > 93 KSP preconditioned resid norm 1.113840681888e-08 true resid norm > 2.004079623183e-03 ||r(i)||/||b|| 2.984961480637e-08 > 94 KSP preconditioned resid norm 5.052832639519e-08 true resid norm > 9.596273544776e-03 ||r(i)||/||b|| 1.429309821699e-07 > 95 KSP preconditioned resid norm 3.103683240415e-09 true resid norm > 1.340636499506e-04 ||r(i)||/||b|| 1.996801057338e-09 > 96 KSP preconditioned resid norm 3.429857119447e-09 true resid norm > 2.674501337153e-04 ||r(i)||/||b|| 3.983516113316e-09 > 97 KSP preconditioned resid norm 5.170276146377e-09 true resid norm > 6.719440839436e-04 ||r(i)||/||b|| 1.000822115305e-08 > 98 KSP preconditioned resid norm 1.148497437573e-08 true resid norm > 1.829675810596e-03 ||r(i)||/||b|| 2.725197019872e-08 > 99 KSP preconditioned resid norm 6.490313757291e-08 true resid norm > 1.133744740971e-02 ||r(i)||/||b|| 1.688647667252e-07 > 100 KSP preconditioned resid norm 1.033700268099e-06 true resid norm > 1.744999476835e-01 ||r(i)||/||b|| 2.599076484702e-06 > 101 KSP preconditioned resid norm 2.023863503132e-08 true resid norm > 3.274357033310e-03 ||r(i)||/||b|| 4.876966715905e-08 > 102 KSP preconditioned resid norm 4.803328753117e-09 true resid norm > 6.927971746924e-04 ||r(i)||/||b|| 1.031881596135e-08 > 103 KSP preconditioned resid norm 1.962342036005e-09 true resid norm > 1.445569202511e-04 ||r(i)||/||b|| 2.153092290934e-09 > 104 KSP preconditioned resid norm 1.498768208313e-09 true resid norm > 6.802150842044e-05 ||r(i)||/||b|| 1.013141295092e-09 > 105 KSP preconditioned resid norm 1.395549026495e-09 true resid norm > 6.005884495747e-05 ||r(i)||/||b|| 8.945419967142e-10 > 106 KSP preconditioned resid norm 1.457410891473e-09 true resid norm > 6.410049873543e-05 ||r(i)||/||b|| 9.547401081351e-10 > 107 KSP preconditioned resid norm 1.780342425895e-09 true resid norm > 1.209740535326e-04 ||r(i)||/||b|| 1.801839037603e-09 > 108 KSP preconditioned resid norm 3.312859958719e-09 true resid norm > 4.525233578504e-04 ||r(i)||/||b|| 6.740075477275e-09 > 109 KSP preconditioned resid norm 9.207121903791e-09 true resid norm > 1.584434833387e-03 ||r(i)||/||b|| 2.359924671420e-08 > 110 KSP preconditioned resid norm 8.739767895565e-08 true resid norm > 1.687246640666e-02 ||r(i)||/||b|| 2.513056952659e-07 > 111 KSP preconditioned resid norm 1.156065363139e-06 true resid norm > 2.263019811534e-01 ||r(i)||/||b|| 3.370638017177e-06 > 112 KSP preconditioned resid norm 7.306873252299e-08 true resid norm > 1.428339270295e-02 ||r(i)||/||b|| 2.127429296617e-07 > 113 KSP preconditioned resid norm 3.669015190800e-08 true resid norm > 
7.144007229935e-03 ||r(i)||/||b|| 1.064058840381e-07 > 114 KSP preconditioned resid norm 2.027376209045e-08 true resid norm > 3.927954323325e-03 ||r(i)||/||b|| 5.850462335527e-08 > 115 KSP preconditioned resid norm 1.150536971084e-08 true resid norm > 2.217667544765e-03 ||r(i)||/||b|| 3.303088420943e-08 > 116 KSP preconditioned resid norm 4.817493968709e-09 true resid norm > 9.229121300958e-04 ||r(i)||/||b|| 1.374624603973e-08 > 117 KSP preconditioned resid norm 1.318062597149e-09 true resid norm > 2.502759879830e-04 ||r(i)||/||b|| 3.727717077784e-09 > 118 KSP preconditioned resid norm 2.565229681360e-10 true resid norm > 4.554155253246e-05 ||r(i)||/||b|| 6.783152650489e-10 > 119 KSP preconditioned resid norm 1.220621552659e-10 true resid norm > 1.887267706055e-05 ||r(i)||/||b|| 2.810976839972e-10 > 120 KSP preconditioned resid norm 3.007389263037e-10 true resid norm > 5.965755276433e-05 ||r(i)||/||b|| 8.885649800071e-10 > 121 KSP preconditioned resid norm 7.687988210487e-10 true resid norm > 1.555121445636e-04 ||r(i)||/||b|| 2.316264064181e-09 > 122 KSP preconditioned resid norm 3.818221035410e-09 true resid norm > 7.723490689110e-04 ||r(i)||/||b|| 1.150369572963e-08 > 123 KSP preconditioned resid norm 1.126628704389e-07 true resid norm > 2.277477391889e-02 ||r(i)||/||b|| 3.392171752645e-07 > 124 KSP preconditioned resid norm 2.091915962841e-09 true resid norm > 4.232853495368e-04 ||r(i)||/||b|| 6.304592138305e-09 > 125 KSP preconditioned resid norm 6.498729874824e-10 true resid norm > 1.317063018752e-04 ||r(i)||/||b|| 1.961689711860e-09 > 126 KSP preconditioned resid norm 2.326148226133e-10 true resid norm > 4.710921144409e-05 ||r(i)||/||b|| 7.016646440449e-10 > 127 KSP preconditioned resid norm 6.419933487001e-11 true resid norm > 1.192761364491e-05 ||r(i)||/||b|| 1.776549537959e-10 > 128 KSP preconditioned resid norm 3.990889550665e-11 true resid norm > 4.736495989458e-06 ||r(i)||/||b|| 7.054738703082e-11 > 129 KSP preconditioned resid norm 1.449761398531e-10 true resid norm > 2.794402799777e-05 ||r(i)||/||b|| 4.162102454528e-10 > 130 KSP preconditioned resid norm 1.621600598079e-10 true resid norm > 3.151012544337e-05 ||r(i)||/||b|| 4.693252184718e-10 > 131 KSP preconditioned resid norm 8.656752727415e-11 true resid norm > 1.541111379317e-05 ||r(i)||/||b|| 2.295396875164e-10 > 132 KSP preconditioned resid norm 5.162222539791e-11 true resid norm > 7.009942258504e-06 ||r(i)||/||b|| 1.044090632981e-10 > 133 KSP preconditioned resid norm 3.781167023504e-11 true resid norm > 3.012802382737e-06 ||r(i)||/||b|| 4.487396088066e-11 > 134 KSP preconditioned resid norm 3.198899307844e-11 true resid norm > 1.735828409010e-06 ||r(i)||/||b|| 2.585416705979e-11 > 135 KSP preconditioned resid norm 2.800813815582e-11 true resid norm > 1.267385630623e-06 ||r(i)||/||b|| 1.887698095805e-11 > 136 KSP preconditioned resid norm 2.478323040026e-11 true resid norm > 1.093735119127e-06 ||r(i)||/||b|| 1.629055633743e-11 > 137 KSP preconditioned resid norm 2.094802859126e-11 true resid norm > 9.461872584172e-07 ||r(i)||/||b|| 1.409291570641e-11 > 138 KSP preconditioned resid norm 1.475156452329e-11 true resid norm > 1.291911076055e-06 ||r(i)||/||b|| 1.924227337987e-11 > 139 KSP preconditioned resid norm 2.079412078050e-10 true resid norm > 3.438188849064e-05 ||r(i)||/||b|| 5.120984794664e-10 > 140 KSP preconditioned resid norm 1.776813945343e-10 true resid norm > 1.130664701479e-05 ||r(i)||/||b|| 1.684060125352e-10 > 141 KSP preconditioned resid norm 9.791545045192e-11 true resid norm > 4.353141785980e-06 
||r(i)||/||b|| 6.483754637586e-11 > 142 KSP preconditioned resid norm 9.022678471638e-11 true resid norm > 3.899337928100e-06 ||r(i)||/||b|| 5.807839858618e-11 > 143 KSP preconditioned resid norm 9.321246955436e-11 true resid norm > 4.122786827556e-06 ||r(i)||/||b|| 6.140654159035e-11 > 144 KSP preconditioned resid norm 1.010522679933e-10 true resid norm > 6.735251544406e-06 ||r(i)||/||b|| 1.003177028991e-10 > 145 KSP preconditioned resid norm 1.190099575669e-10 true resid norm > 1.412264052735e-05 ||r(i)||/||b|| 2.103486183452e-10 > 146 KSP preconditioned resid norm 1.559766947287e-10 true resid norm > 2.516382044354e-05 ||r(i)||/||b|| 3.748006509360e-10 > 147 KSP preconditioned resid norm 1.549935482860e-10 true resid norm > 2.591211842556e-05 ||r(i)||/||b|| 3.859461195417e-10 > 148 KSP preconditioned resid norm 1.144296191694e-10 true resid norm > 1.784161868459e-05 ||r(i)||/||b|| 2.657406617465e-10 > 149 KSP preconditioned resid norm 6.371964918270e-11 true resid norm > 6.643697384486e-06 ||r(i)||/||b|| 9.895405627750e-11 > 150 KSP preconditioned resid norm 4.845064670450e-11 true resid norm > 2.593417735211e-06 ||r(i)||/||b|| 3.862746745817e-11 > 151 KSP preconditioned resid norm 4.470906007756e-11 true resid norm > 1.934674135353e-06 ||r(i)||/||b|| 2.881585993297e-11 > 152 KSP preconditioned resid norm 5.319998529107e-11 true resid norm > 3.264222737453e-06 ||r(i)||/||b|| 4.861872264358e-11 > 153 KSP preconditioned resid norm 2.352067006596e-10 true resid norm > 2.492527941922e-05 ||r(i)||/||b|| 3.712477194012e-10 > 154 KSP preconditioned resid norm 3.473141928722e-09 true resid norm > 3.763914613551e-04 ||r(i)||/||b|| 5.606134610567e-09 > 155 KSP preconditioned resid norm 1.833222104680e-08 true resid norm > 1.977164847191e-03 ||r(i)||/||b|| 2.944873467832e-08 > 156 KSP preconditioned resid norm 1.545363805860e-09 true resid norm > 1.660725936026e-04 ||r(i)||/||b|| 2.473555886497e-09 > 157 KSP preconditioned resid norm 5.975372365436e-10 true resid norm > 6.408112870416e-05 ||r(i)||/||b|| 9.544516026460e-10 > 158 KSP preconditioned resid norm 2.271151314350e-10 true resid norm > 2.430517352175e-05 ||r(i)||/||b|| 3.620115982588e-10 > 159 KSP preconditioned resid norm 9.284094104344e-11 true resid norm > 9.904055955270e-06 ||r(i)||/||b|| 1.475152243782e-10 > 160 KSP preconditioned resid norm 2.941687346568e-11 true resid norm > 3.088975114702e-06 ||r(i)||/||b|| 4.600850996823e-11 > 161 KSP preconditioned resid norm 6.416123537136e-12 true resid norm > 6.096752141640e-07 ||r(i)||/||b|| 9.080762106091e-12 > 162 KSP preconditioned resid norm 4.077333656404e-12 true resid norm > 5.049107356604e-07 ||r(i)||/||b|| 7.520355377462e-12 > 163 KSP preconditioned resid norm 5.087791292533e-12 true resid norm > 7.009318964057e-07 ||r(i)||/||b|| 1.043997796853e-11 > 164 KSP preconditioned resid norm 7.749563397094e-12 true resid norm > 9.156004927546e-07 ||r(i)||/||b|| 1.363734340148e-11 > 165 KSP preconditioned resid norm 1.052344064381e-11 true resid norm > 1.169820734369e-06 ||r(i)||/||b|| 1.742380787145e-11 > 166 KSP preconditioned resid norm 1.104395410109e-11 true resid norm > 1.200080231537e-06 ||r(i)||/||b|| 1.787450570014e-11 > 167 KSP preconditioned resid norm 1.007023895764e-11 true resid norm > 1.084060757795e-06 ||r(i)||/||b|| 1.614646228252e-11 > 168 KSP preconditioned resid norm 8.344424405547e-12 true resid norm > 8.950906940095e-07 ||r(i)||/||b|| 1.333186173038e-11 > 169 KSP preconditioned resid norm 5.763569089910e-12 true resid norm > 6.172251688691e-07 ||r(i)||/||b|| 
9.193214344588e-12 > 170 KSP preconditioned resid norm 3.424168629795e-12 true resid norm > 3.659829134192e-07 ||r(i)||/||b|| 5.451105267927e-12 > 171 KSP preconditioned resid norm 9.348832909193e-13 true resid norm > 9.664575613677e-08 ||r(i)||/||b|| 1.439483022522e-12 > 172 KSP preconditioned resid norm 3.482816687679e-13 true resid norm > 2.647798359576e-08 ||r(i)||/||b|| 3.943743562083e-13 > 173 KSP preconditioned resid norm 1.008488994165e-11 true resid norm > 1.080536439819e-06 ||r(i)||/||b|| 1.609396959070e-11 > 174 KSP preconditioned resid norm 9.378977735316e-11 true resid norm > 1.005208011236e-05 ||r(i)||/||b|| 1.497199591702e-10 > 175 KSP preconditioned resid norm 8.737224485597e-10 true resid norm > 9.364289005956e-05 ||r(i)||/||b|| 1.394757057204e-09 > 176 KSP preconditioned resid norm 1.536767823776e-08 true resid norm > 1.647058907589e-03 ||r(i)||/||b|| 2.453199632700e-08 > 177 KSP preconditioned resid norm 1.839952929174e-07 true resid norm > 1.972001883268e-02 ||r(i)||/||b|| 2.937183529640e-07 > 178 KSP preconditioned resid norm 4.438575380976e-08 true resid norm > 4.757119581523e-03 ||r(i)||/||b|| 7.085456358805e-08 > 179 KSP preconditioned resid norm 4.796782862052e-07 true resid norm > 5.141033952542e-02 ||r(i)||/||b|| 7.657274761676e-07 > 180 KSP preconditioned resid norm 9.921079426648e-09 true resid norm > 1.063308787741e-03 ||r(i)||/||b|| 1.583737360889e-08 > 181 KSP preconditioned resid norm 2.536576210124e-10 true resid norm > 2.718624644920e-05 ||r(i)||/||b|| 4.049235245709e-10 > 182 KSP preconditioned resid norm 1.047157364780e-11 true resid norm > 1.122313966041e-06 ||r(i)||/||b|| 1.671622184598e-11 > 183 KSP preconditioned resid norm 1.244906673812e-10 true resid norm > 1.334248080723e-05 ||r(i)||/||b|| 1.987285874524e-10 > 184 KSP preconditioned resid norm 1.959956371324e-10 true resid norm > 2.100614044675e-05 ||r(i)||/||b|| 3.128743956332e-10 > 185 KSP preconditioned resid norm 2.704883349749e-10 true resid norm > 2.899001474168e-05 ||r(i)||/||b|| 4.317896171690e-10 > 186 KSP preconditioned resid norm 3.521080224859e-10 true resid norm > 3.773773546731e-05 ||r(i)||/||b|| 5.620818925224e-10 > 187 KSP preconditioned resid norm 5.585664451775e-10 true resid norm > 5.986524496711e-05 ||r(i)||/||b|| 8.916584360652e-10 > 188 KSP preconditioned resid norm 8.373951317446e-10 true resid norm > 8.974914849925e-05 ||r(i)||/||b|| 1.336762013302e-09 > 189 KSP preconditioned resid norm 1.025188904578e-09 true resid norm > 1.098762320723e-04 ||r(i)||/||b|| 1.636543361747e-09 > 190 KSP preconditioned resid norm 8.804939847806e-10 true resid norm > 9.436831188889e-05 ||r(i)||/||b|| 1.405561798656e-09 > 191 KSP preconditioned resid norm 3.943144046396e-10 true resid norm > 4.226124296393e-05 ||r(i)||/||b|| 6.294569382969e-10 > 192 KSP preconditioned resid norm 5.355541541046e-11 true resid norm > 5.739853081002e-06 ||r(i)||/||b|| 8.549181456224e-11 > 193 KSP preconditioned resid norm 1.299526839866e-09 true resid norm > 1.392788189132e-04 ||r(i)||/||b|| 2.074478003345e-09 > 194 KSP preconditioned resid norm 2.882900653170e-08 true resid norm > 3.089794419764e-03 ||r(i)||/||b|| 4.602071304651e-08 > 195 KSP preconditioned resid norm 3.502493774106e-09 true resid norm > 3.753853139688e-04 ||r(i)||/||b|| 5.591148623198e-09 > 196 KSP preconditioned resid norm 2.001525286264e-10 true resid norm > 2.145165783125e-05 ||r(i)||/||b|| 3.195101211617e-10 > 197 KSP preconditioned resid norm 1.647406102516e-10 true resid norm > 1.765632792834e-05 ||r(i)||/||b|| 2.629808623666e-10 > 198 
KSP preconditioned resid norm 2.535219507377e-10 true resid norm > 2.717160853620e-05 ||r(i)||/||b|| 4.047055012650e-10 > 199 KSP preconditioned resid norm 3.552020174756e-10 true resid norm > 3.806933251915e-05 ||r(i)||/||b|| 5.670208401339e-10 > 200 KSP preconditioned resid norm 3.473561803799e-10 true resid norm > 3.722844483670e-05 ||r(i)||/||b|| 5.544963011256e-10 > 201 KSP preconditioned resid norm 3.040362870159e-10 true resid norm > 3.258556793500e-05 ||r(i)||/||b|| 4.853433166302e-10 > 202 KSP preconditioned resid norm 2.238603169023e-10 true resid norm > 2.399258218026e-05 ||r(i)||/||b|| 3.573557297856e-10 > 203 KSP preconditioned resid norm 2.794466174289e-10 true resid norm > 2.995013150895e-05 ||r(i)||/||b|| 4.460900049084e-10 > 204 KSP preconditioned resid norm 5.892481453566e-10 true resid norm > 6.315359820006e-05 ||r(i)||/||b|| 9.406365685783e-10 > 205 KSP preconditioned resid norm 1.796869246546e-09 true resid norm > 1.925822916338e-04 ||r(i)||/||b|| 2.868402611004e-09 > 206 KSP preconditioned resid norm 5.383703166757e-09 true resid norm > 5.770068661128e-04 ||r(i)||/||b|| 8.594185827181e-09 > 207 KSP preconditioned resid norm 1.513140030683e-08 true resid norm > 1.621731659303e-03 ||r(i)||/||b|| 2.415476151223e-08 > 208 KSP preconditioned resid norm 2.664830499802e-08 true resid norm > 2.856074057521e-03 ||r(i)||/||b|| 4.253958250426e-08 > 209 KSP preconditioned resid norm 1.744796115928e-08 true resid norm > 1.870012714072e-03 ||r(i)||/||b|| 2.785276520572e-08 > 210 KSP preconditioned resid norm 1.709105056825e-09 true resid norm > 1.831760214160e-04 ||r(i)||/||b|| 2.728301619248e-09 > 211 KSP preconditioned resid norm 5.824652325203e-09 true resid norm > 6.242662821250e-04 ||r(i)||/||b|| 9.298087681987e-09 > 212 KSP preconditioned resid norm 1.112672284142e-07 true resid norm > 1.192524045420e-02 ||r(i)||/||b|| 1.776196064834e-07 > 213 KSP preconditioned resid norm 1.643443975106e-06 true resid norm > 1.761386966729e-01 ||r(i)||/||b|| 2.623484709571e-06 > 214 KSP preconditioned resid norm 7.564685658242e-08 true resid norm > 8.107571011682e-03 ||r(i)||/||b|| 1.207576130781e-07 > 215 KSP preconditioned resid norm 2.392757332232e-08 true resid norm > 2.564475361971e-03 ||r(i)||/||b|| 3.819638743380e-08 > 216 KSP preconditioned resid norm 1.178197751769e-08 true resid norm > 1.262752000134e-03 ||r(i)||/||b|| 1.880796569355e-08 > 217 KSP preconditioned resid norm 6.804809693774e-09 true resid norm > 7.293161986200e-04 ||r(i)||/||b|| 1.086274584554e-08 > 218 KSP preconditioned resid norm 4.979204750271e-09 true resid norm > 5.336541131997e-04 ||r(i)||/||b|| 7.948471475175e-09 > 219 KSP preconditioned resid norm 4.201414891575e-09 true resid norm > 4.502932602210e-04 ||r(i)||/||b|| 6.706859454096e-09 > 220 KSP preconditioned resid norm 2.892374518861e-09 true resid norm > 3.099947965425e-04 ||r(i)||/||b|| 4.617194427674e-09 > 221 KSP preconditioned resid norm 7.824978176254e-10 true resid norm > 8.386543747579e-05 ||r(i)||/||b|| 1.249127517321e-09 > 222 KSP preconditioned resid norm 4.025045270489e-12 true resid norm > 4.313920255055e-07 ||r(i)||/||b|| 6.425336420232e-12 > 223 KSP preconditioned resid norm 7.270310185183e-10 true resid norm > 7.792069485626e-05 ||r(i)||/||b|| 1.160583990775e-09 > 224 KSP preconditioned resid norm 2.285667940369e-09 true resid norm > 2.449700616763e-04 ||r(i)||/||b|| 3.648688353269e-09 > 225 KSP preconditioned resid norm 4.666494852894e-09 true resid norm > 5.001389357709e-04 ||r(i)||/||b|| 7.449282159120e-09 > 226 KSP preconditioned resid 
norm 7.718252461926e-09 true resid norm > 8.272158639732e-04 ||r(i)||/||b|| 1.232090512557e-08 > 227 KSP preconditioned resid norm 9.133913525747e-09 true resid norm > 9.789415681285e-04 ||r(i)||/||b|| 1.458077233487e-08 > 228 KSP preconditioned resid norm 9.380662429943e-09 true resid norm > 1.005387270614e-03 ||r(i)||/||b|| 1.497466588249e-08 > 229 KSP preconditioned resid norm 7.153889400651e-09 true resid norm > 7.667293640037e-04 ||r(i)||/||b|| 1.141999345310e-08 > 230 KSP preconditioned resid norm 3.686222155648e-09 true resid norm > 3.950766653994e-04 ||r(i)||/||b|| 5.884439991676e-09 > 231 KSP preconditioned resid norm 1.553980021933e-09 true resid norm > 1.665502564985e-04 ||r(i)||/||b|| 2.480670400953e-09 > 232 KSP preconditioned resid norm 4.135131754480e-10 true resid norm > 4.431892592037e-05 ||r(i)||/||b|| 6.601049439614e-10 > 233 KSP preconditioned resid norm 3.408443088536e-11 true resid norm > 3.653052704994e-06 ||r(i)||/||b|| 5.441012165888e-11 > 234 KSP preconditioned resid norm 7.336512309518e-11 true resid norm > 7.863022750472e-06 ||r(i)||/||b|| 1.171152072005e-10 > 235 KSP preconditioned resid norm 1.711388154684e-09 true resid norm > 1.834207210137e-04 ||r(i)||/||b|| 2.731946279196e-09 > 236 KSP preconditioned resid norm 1.159702499545e-07 true resid norm > 1.242929419260e-02 ||r(i)||/||b|| 1.851271973790e-07 > 237 KSP preconditioned resid norm 3.857597718932e-09 true resid norm > 4.134441113034e-04 ||r(i)||/||b|| 6.158012547810e-09 > 238 KSP preconditioned resid norm 1.645996729438e-09 true resid norm > 1.764122920283e-04 ||r(i)||/||b|| 2.627559755231e-09 > 239 KSP preconditioned resid norm 3.308806112000e-12 true resid norm > 3.546274450365e-07 ||r(i)||/||b|| 5.281972089161e-12 > 240 KSP preconditioned resid norm 7.382914922234e-08 true resid norm > 7.912755349202e-03 ||r(i)||/||b|| 1.178559456912e-07 > 241 KSP preconditioned resid norm 1.809365457089e-08 true resid norm > 1.939215926278e-03 ||r(i)||/||b|| 2.888350729990e-08 > 242 KSP preconditioned resid norm 6.669136386139e-09 true resid norm > 7.147751961904e-04 ||r(i)||/||b|| 1.064616596697e-08 > 243 KSP preconditioned resid norm 9.709051870072e-10 true resid norm > 1.040582926825e-04 ||r(i)||/||b|| 1.549888496470e-09 > 244 KSP preconditioned resid norm 1.658711636824e-07 true resid norm > 1.777750321545e-02 ||r(i)||/||b|| 2.647856986628e-07 > 245 KSP preconditioned resid norm 4.372029935281e-05 true resid norm > 4.685791942789e+00 ||r(i)||/||b|| 6.979217938105e-05 > 246 KSP preconditioned resid norm 1.108728206741e-06 true resid norm > 1.188296918090e-01 ||r(i)||/||b|| 1.769900001491e-06 > 247 KSP preconditioned resid norm 6.312367692314e-07 true resid norm > 6.765379494300e-02 ||r(i)||/||b|| 1.007664414067e-06 > 248 KSP preconditioned resid norm 4.356226723159e-07 true resid norm > 4.668854601307e-02 ||r(i)||/||b|| 6.953990740879e-07 > 249 KSP preconditioned resid norm 2.372265656235e-07 true resid norm > 2.542513080365e-02 ||r(i)||/||b|| 3.786927186482e-07 > 250 KSP preconditioned resid norm 8.566943433538e-08 true resid norm > 9.181756554699e-03 ||r(i)||/||b|| 1.367569897090e-07 > 251 KSP preconditioned resid norm 2.811298063268e-08 true resid norm > 3.013052977489e-03 ||r(i)||/||b|| 4.487769334555e-08 > 252 KSP preconditioned resid norm 9.927933000520e-09 true resid norm > 1.064041855973e-03 ||r(i)||/||b|| 1.584829223911e-08 > 253 KSP preconditioned resid norm 5.465425608817e-09 true resid norm > 5.857655977788e-04 ||r(i)||/||b|| 8.724642104167e-09 > 254 KSP preconditioned resid norm 2.802709112126e-09 
true resid norm > 3.003847634638e-04 ||r(i)||/||b|| 4.474058505150e-09 > 255 KSP preconditioned resid norm 8.749035597488e-10 true resid norm > 9.376916709178e-05 ||r(i)||/||b|| 1.396637881063e-09 > 256 KSP preconditioned resid norm 2.525241256290e-12 true resid norm > 2.706479113805e-07 ||r(i)||/||b|| 4.031145174775e-12 > 257 KSP preconditioned resid norm 1.954487205473e-10 true resid norm > 2.094752459268e-05 ||r(i)||/||b|| 3.120013461568e-10 > 258 KSP preconditioned resid norm 3.558554122752e-10 true resid norm > 3.813936440469e-05 ||r(i)||/||b|| 5.680639248413e-10 > 259 KSP preconditioned resid norm 3.327616628106e-10 true resid norm > 3.566425546219e-05 ||r(i)||/||b|| 5.311985988919e-10 > 260 KSP preconditioned resid norm 3.138738006439e-10 true resid norm > 3.363991905729e-05 ||r(i)||/||b|| 5.010472709577e-10 > 261 KSP preconditioned resid norm 5.105712063176e-10 true resid norm > 5.472127327668e-05 ||r(i)||/||b|| 8.150419325301e-10 > 262 KSP preconditioned resid norm 1.562664105193e-09 true resid norm > 1.674809867731e-04 ||r(i)||/||b|| 2.494533093765e-09 > 263 KSP preconditioned resid norm 2.160588142357e-09 true resid norm > 2.315644371948e-04 ||r(i)||/||b|| 3.449019277059e-09 > 264 KSP preconditioned resid norm 2.958294939588e-08 true resid norm > 3.170599194897e-03 ||r(i)||/||b|| 4.722425375632e-08 > 265 KSP preconditioned resid norm 4.249142745113e-09 true resid norm > 4.554085662232e-04 ||r(i)||/||b|| 6.783048998670e-09 > 266 KSP preconditioned resid norm 3.648294184577e-10 true resid norm > 3.910116757905e-05 ||r(i)||/||b|| 5.823894306457e-10 > 267 KSP preconditioned resid norm 4.149339357988e-08 true resid norm > 4.447119809209e-03 ||r(i)||/||b|| 6.623729504912e-08 > 268 KSP preconditioned resid norm 3.843343031428e-06 true resid norm > 4.119163426746e-01 ||r(i)||/||b|| 6.135257311663e-06 > 269 KSP preconditioned resid norm 6.080537781060e-08 true resid norm > 6.516912135594e-03 ||r(i)||/||b|| 9.706566282308e-08 > 270 KSP preconditioned resid norm 4.757307440420e-09 true resid norm > 5.098719177265e-04 ||r(i)||/||b|| 7.594249334542e-09 > 271 KSP preconditioned resid norm 5.436376140969e-06 true resid norm > 5.826521752233e-01 ||r(i)||/||b|| 8.678269463611e-06 > 272 KSP preconditioned resid norm 6.392279306681e-10 true resid norm > 6.851026028837e-05 ||r(i)||/||b|| 1.020420973416e-09 > 273 KSP preconditioned resid norm 2.599896339650e-06 true resid norm > 2.786479850495e-01 ||r(i)||/||b|| 4.150301676682e-06 > 274 KSP preconditioned resid norm 6.217134530094e-04 true resid norm > 6.663311852755e+01 ||r(i)||/||b|| 9.924620251545e-04 > 275 KSP preconditioned resid norm 1.906648612009e-06 true resid norm > 2.043480679715e-01 ||r(i)||/||b|| 3.043647091072e-06 > 276 KSP preconditioned resid norm 8.789054443529e-06 true resid norm > 9.419807527825e-01 ||r(i)||/||b|| 1.403026222128e-05 > 277 KSP preconditioned resid norm 2.890658496307e-05 true resid norm > 3.098108771408e+00 ||r(i)||/||b|| 4.614455053833e-05 > 278 KSP preconditioned resid norm 2.700565878072e-05 true resid norm > 2.894374013848e+00 ||r(i)||/||b|| 4.311003835354e-05 > 279 KSP preconditioned resid norm 2.127015678842e-05 true resid norm > 2.279662554372e+00 ||r(i)||/||b|| 3.395426426643e-05 > 280 KSP preconditioned resid norm 6.022114971533e-04 true resid norm > 6.454296569280e+01 ||r(i)||/||b|| 9.613303993039e-04 > 281 KSP preconditioned resid norm 4.191696888290e-04 true resid norm > 4.492517159409e+01 ||r(i)||/||b|| 6.691346250326e-04 > 282 KSP preconditioned resid norm 2.128575355843e-02 true resid norm > 
2.281334162770e+03 ||r(i)||/||b|| 3.397916191332e-02 > 283 KSP preconditioned resid norm 5.566202222495e-03 true resid norm > 5.965664899852e+02 ||r(i)||/||b|| 8.885515189345e-03 > 284 KSP preconditioned resid norm 1.253993707281e-01 true resid norm > 1.343987506229e+04 ||r(i)||/||b|| 2.001792189364e-01 > 285 KSP preconditioned resid norm 1.655806619282e-02 true resid norm > 1.774636823235e+03 ||r(i)||/||b|| 2.643219609740e-02 > 286 KSP preconditioned resid norm 1.065113628903e-01 true resid norm > 1.141552307358e+04 ||r(i)||/||b|| 1.700276588903e-01 > 287 KSP preconditioned resid norm 6.805321243488e+00 true resid norm > 7.293710226785e+05 ||r(i)||/||b|| 1.086356241840e+01 > 288 KSP preconditioned resid norm 1.283123255580e-01 true resid norm > 1.375207558409e+04 ||r(i)||/||b|| 2.048292663748e-01 > 289 KSP preconditioned resid norm 1.145625946046e+00 true resid norm > 1.227842651328e+05 ||r(i)||/||b|| 1.828801099567e+00 > 290 KSP preconditioned resid norm 4.728734309042e+00 true resid norm > 5.068095473462e+05 ||r(i)||/||b|| 7.548637086810e+00 > 291 KSP preconditioned resid norm 1.345611312328e+00 true resid norm > 1.442180117418e+05 ||r(i)||/||b|| 2.148044443362e+00 > 292 KSP preconditioned resid norm 2.067182052275e+02 true resid norm > 2.215534922722e+07 ||r(i)||/||b|| 3.299912002912e+02 > 293 KSP preconditioned resid norm 2.819056223629e-01 true resid norm > 3.021367907919e+04 ||r(i)||/||b|| 4.500153945805e-01 > 294 KSP preconditioned resid norm 2.473762204291e+00 true resid norm > 2.651293604299e+05 ||r(i)||/||b|| 3.948949528325e+00 > 295 KSP preconditioned resid norm 8.824314765228e-02 true resid norm > 9.457598332934e+03 ||r(i)||/||b|| 1.408654945471e-01 > 296 KSP preconditioned resid norm 3.547759169453e-03 true resid norm > 3.802366767064e+02 ||r(i)||/||b|| 5.663406884461e-03 > 297 KSP preconditioned resid norm 6.629254109013e-04 true resid norm > 7.105007502579e+01 ||r(i)||/||b|| 1.058250055013e-03 > 298 KSP preconditioned resid norm 8.335816942804e-03 true resid norm > 8.934043097547e+02 ||r(i)||/||b|| 1.330674400560e-02 > 299 KSP preconditioned resid norm 2.280133581537e-03 true resid norm > 2.443769078227e+02 ||r(i)||/||b|| 3.639853667338e-03 > 300 KSP preconditioned resid norm 3.042948047775e-02 true resid norm > 3.261327492118e+03 ||r(i)||/||b|| 4.857559962741e-02 > 301 KSP preconditioned resid norm 3.641340291287e-03 true resid norm > 3.902663808886e+02 ||r(i)||/||b|| 5.812793567005e-03 > 302 KSP preconditioned resid norm 5.262062292809e-03 true resid norm > 5.639698140863e+02 ||r(i)||/||b|| 8.400006425973e-03 > 303 KSP preconditioned resid norm 2.767651815672e-04 true resid norm > 2.966274423730e+01 ||r(i)||/||b|| 4.418095365778e-04 > 304 KSP preconditioned resid norm 2.769145142443e-04 true resid norm > 2.967874922780e+01 ||r(i)||/||b|| 4.420479217177e-04 > 305 KSP preconditioned resid norm 2.669690509777e-03 true resid norm > 2.861282853351e+02 ||r(i)||/||b|| 4.261716452611e-03 > 306 KSP preconditioned resid norm 5.477323157568e-03 true resid norm > 5.870407366268e+02 ||r(i)||/||b|| 8.743634564843e-03 > 307 KSP preconditioned resid norm 1.187253595512e-06 true resid norm > 1.272457750950e-01 ||r(i)||/||b|| 1.895252727680e-06 > 308 KSP preconditioned resid norm 2.168305092952e+00 true resid norm > 2.323915137191e+05 ||r(i)||/||b|| 3.461338106800e+00 > 309 KSP preconditioned resid norm 3.674108569258e-03 true resid norm > 3.937783732259e+02 ||r(i)||/||b|| 5.865102675514e-03 > 310 KSP preconditioned resid norm 9.663808831088e-04 true resid norm > 1.035733933599e+02 
||r(i)||/||b|| 1.542666199595e-03 > 311 KSP preconditioned resid norm 2.905881247679e-04 true resid norm > 3.114423999927e+01 ||r(i)||/||b|| 4.638755649534e-04 > 312 KSP preconditioned resid norm 2.276520242446e-04 true resid norm > 2.439896429079e+01 ||r(i)||/||b|| 3.634085578884e-04 > 313 KSP preconditioned resid norm 5.909203310746e-06 true resid norm > 6.333281888478e-01 ||r(i)||/||b|| 9.433059577294e-06 > 314 KSP preconditioned resid norm 1.132991640772e-04 true resid norm > 1.214301635826e+01 ||r(i)||/||b|| 1.808632534798e-04 > 315 KSP preconditioned resid norm 1.149655320396e-01 true resid norm > 1.232161196723e+04 ||r(i)||/||b|| 1.835233324869e-01 > 316 KSP preconditioned resid norm 1.445642724369e-02 true resid norm > 1.549390358719e+03 ||r(i)||/||b|| 2.307727939423e-02 > 317 KSP preconditioned resid norm 1.748966842893e+00 true resid norm > 1.874482760043e+05 ||r(i)||/||b|| 2.791934397278e+00 > 318 KSP preconditioned resid norm 3.200837397862e-03 true resid norm > 3.430547894864e+02 ||r(i)||/||b|| 5.109604032292e-03 > 319 KSP preconditioned resid norm 3.363929224528e+00 true resid norm > 3.605344128712e+05 ||r(i)||/||b|| 5.369952981985e+00 > 320 KSP preconditioned resid norm 6.693942426451e-01 true resid norm > 7.174338225998e+04 ||r(i)||/||b|| 1.068576468018e+00 > 321 KSP preconditioned resid norm 4.239885892382e-02 true resid norm > 4.544164483869e+03 ||r(i)||/||b|| 6.768271973389e-02 > 322 KSP preconditioned resid norm 7.716682076793e-01 true resid norm > 8.270475554445e+04 ||r(i)||/||b|| 1.231839826671e+00 > 323 KSP preconditioned resid norm 8.829139791489e-02 true resid norm > 9.462769631425e+03 ||r(i)||/||b|| 1.409425180677e-01 > 324 KSP preconditioned resid norm 4.184745081336e+00 true resid norm > 4.485066451151e+05 ||r(i)||/||b|| 6.680248848357e+00 > 325 KSP preconditioned resid norm 7.244650997288e+01 true resid norm > 7.764568810616e+06 ||r(i)||/||b|| 1.156487923201e+02 > 326 KSP preconditioned resid norm 6.173923463866e+03 true resid norm > 6.616999712610e+08 ||r(i)||/||b|| 9.855640979055e+03 > 327 KSP preconditioned resid norm 1.959276674036e+00 true resid norm > 2.099885634282e+05 ||r(i)||/||b|| 3.127659030892e+00 > 328 KSP preconditioned resid norm 1.121351920525e+01 true resid norm > 1.201826582282e+06 ||r(i)||/||b|| 1.790051659135e+01 > 329 KSP preconditioned resid norm 1.130971168979e+03 true resid norm > 1.212136163323e+08 ||r(i)||/||b|| 1.805407187894e+03 > 330 KSP preconditioned resid norm 4.420103087306e+00 true resid norm > 4.737315101142e+05 ||r(i)||/||b|| 7.055958722883e+00 > 331 KSP preconditioned resid norm 4.088057471206e+02 true resid norm > 4.381439982323e+07 ||r(i)||/||b|| 6.525903175538e+02 > 332 KSP preconditioned resid norm 3.402449682428e-01 true resid norm > 3.646629036299e+04 ||r(i)||/||b|| 5.431444480354e-01 > 333 KSP preconditioned resid norm 1.669642304015e+00 true resid norm > 1.789465436480e+05 ||r(i)||/||b|| 2.665305977385e+00 > 334 KSP preconditioned resid norm 4.807832112080e+01 true resid norm > 5.152869790310e+06 ||r(i)||/||b|| 7.674903562158e+01 > 335 KSP preconditioned resid norm 1.806983009801e+01 true resid norm > 1.936662501047e+06 ||r(i)||/||b|| 2.884547549782e+01 > 336 KSP preconditioned resid norm 5.609893970183e+03 true resid norm > 6.012492219418e+08 ||r(i)||/||b|| 8.955261791990e+03 > 337 KSP preconditioned resid norm 6.497291386334e+00 true resid norm > 6.963574376650e+05 ||r(i)||/||b|| 1.037184403324e+01 > 338 KSP preconditioned resid norm 1.795335659929e-01 true resid norm > 1.924179270349e+04 ||r(i)||/||b|| 
2.865954494717e-01 > 339 KSP preconditioned resid norm 8.220070254050e+00 true resid norm > 8.809989762712e+05 ||r(i)||/||b|| 1.312197368922e+01 > 340 KSP preconditioned resid norm 2.258978929963e-02 true resid norm > 2.421096247060e+03 ||r(i)||/||b|| 3.606083787686e-02 > 341 KSP preconditioned resid norm 3.878103577901e+00 true resid norm > 4.156418591179e+05 ||r(i)||/||b|| 6.190746739079e+00 > 342 KSP preconditioned resid norm 5.952599113346e+00 true resid norm > 6.379791855371e+05 ||r(i)||/||b|| 9.502333501362e+00 > 343 KSP preconditioned resid norm 7.222045044346e+02 true resid norm > 7.740340526223e+07 ||r(i)||/||b|| 1.152879259413e+03 > 344 KSP preconditioned resid norm 6.364182209877e+01 true resid norm > 6.820912521576e+06 ||r(i)||/||b|| 1.015935739488e+02 > 345 KSP preconditioned resid norm 3.827037043269e+02 true resid norm > 4.101687228369e+07 ||r(i)||/||b|| 6.109227518045e+02 > 346 KSP preconditioned resid norm 6.309451684929e+00 true resid norm > 6.762254219609e+05 ||r(i)||/||b|| 1.007198922354e+01 > 347 KSP preconditioned resid norm 2.044414998384e+00 true resid norm > 2.191133974822e+05 ||r(i)||/||b|| 3.263568192651e+00 > 348 KSP preconditioned resid norm 1.457585290772e-02 true resid norm > 1.562190008524e+03 ||r(i)||/||b|| 2.326792282571e-02 > 349 KSP preconditioned resid norm 3.697892503779e-01 true resid norm > 3.963274534823e+04 ||r(i)||/||b|| 5.903069761695e-01 > 350 KSP preconditioned resid norm 1.104685840662e+02 true resid norm > 1.183964449407e+07 ||r(i)||/||b|| 1.763447038252e+02 > 351 KSP preconditioned resid norm 1.199213228986e+02 true resid norm > 1.285275666774e+07 ||r(i)||/||b|| 1.914344277014e+02 > 352 KSP preconditioned resid norm 1.183644579434e+02 true resid norm > 1.268589721439e+07 ||r(i)||/||b|| 1.889491519909e+02 > 353 KSP preconditioned resid norm 1.234968225554e+02 true resid norm > 1.323596647557e+07 ||r(i)||/||b|| 1.971421176662e+02 > 354 KSP preconditioned resid norm 2.882557881065e-01 true resid norm > 3.089426814670e+04 ||r(i)||/||b|| 4.601523777979e-01 > 355 KSP preconditioned resid norm 2.170676916299e+02 true resid norm > 2.326457175403e+07 ||r(i)||/||b|| 3.465124326697e+02 > 356 KSP preconditioned resid norm 5.764266225925e+00 true resid norm > 6.177943120636e+05 ||r(i)||/||b|| 9.201691405543e+00 > 357 KSP preconditioned resid norm 1.701448294063e+04 true resid norm > 1.823554008687e+09 ||r(i)||/||b|| 2.716078947576e+04 > Linear solve did not converge due to DIVERGED_DTOL iterations 357 > > > > > > > >On 2013-01-09 01:58:24, "Jed Brown" wrote: > > >>On Tue, Jan 8, 2013 at 11:50 AM, w_ang_temp wrote: > >> >> >>I am sorry. >> >>In my view, preconditioned resid norm:||rp||=||Bb-BAx||(B is the >> preconditioned matrix); >> > > >-ksp_norm_type preconditioned is the default for GMRES, so it's using > preconditioned residual. > > >> >>true resid norm:||rt||=||b-Ax||; ||r(i)||/||b||: ||rt||/||b||. Is >> it right? >> >>(1) Divergence is detected if >> >> >> ||rp||/||b|| > dtol or ||rt||/||b|| > dtol ? >> > > >Neither, it's |rp|/|min(rp0,rp1,rp2,rp3,...)|. Your solver "converges" a > bit at some iteration and then jumps a lot so the denominator is smaller > than rp0. > > >> >> Both of them (rt/b:1.701448294063e+04 / 6.7139E+4; >> rt/b:2.716078947576e+04; dtol=1.0E+5 ) >> >>are not in this example, but it is divergent? >> >> >> >> > > > -- What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead.
-- Norbert Wiener -------------- next part -------------- An HTML attachment was scrubbed... URL: From jefonseca at gmail.com Tue Jan 15 11:54:23 2013 From: jefonseca at gmail.com (Jim Fonseca) Date: Tue, 15 Jan 2013 12:54:23 -0500 Subject: [petsc-users] MatMatMult size error Message-ID: Hi, We are in the process of upgrading from Petsc 3.2 to 3.3p5. We are creating matrices A and B in this way. petsc_matrix = new Mat; ierr = MatCreateDense(comm, m, num_cols ,num_rows,num_cols,data,A); Elsewhere, we have this. It gets called a few times, and on the 4th time, the size of matrix is C is wrong. Please see the output below. What could be the problem? C = new Mat; double fill = PETSC_DEFAULT; MatMatMult(A,B,MAT_INITIAL_MATRIX, fill, C); { int m,n; MatGetOwnershipRange(A, &m, &n); cerr << "A.m = " << m << "\n"; cerr << "A.n = " << n << "\n"; MatGetSize(A,&m,&n); cerr << "A global rows = " << m << "\n"; cerr << "A global cols = " << n << "\n"; MatGetOwnershipRange(B, &m, &n); cerr << "B.m = " << m << "\n"; cerr << "B.n = " << n << "\n"; MatGetSize(B,&m,&n); cerr << "B global rows = " << m << "\n"; cerr << "B global cols = " << n << "\n"; MatGetOwnershipRange(*C, &m, &n); cerr << "C.m = " << m << "\n"; cerr << "C.n = " << n << "\n"; MatGetSize(*C,&m,&n); cerr << "C global rows = " << m << "\n"; cerr << "C global cols = " << n << "\n"; } A.m = 0 A.n = 59 A global rows = 59 A global cols = 320 B.m = 0 B.n = 320 B global rows = 320 B global cols = 320 C.m = 0 C.n = 59 C global rows = 59 C global cols = 320 A.m = 0 A.n = 59 A global rows = 59 A global cols = 320 B.m = 0 B.n = 320 B global rows = 320 B global cols = 59 C.m = 10922 C.n = -1389327096 C global rows = -1389327112 C global cols = -1389327112 Thanks, Jim -- Jim Fonseca, PhD Research Scientist Network for Computational Nanotechnology Purdue University 765-496-6495 www.jimfonseca.com -------------- next part -------------- An HTML attachment was scrubbed... URL: From jedbrown at mcs.anl.gov Tue Jan 15 12:00:30 2013 From: jedbrown at mcs.anl.gov (Jed Brown) Date: Tue, 15 Jan 2013 12:00:30 -0600 Subject: [petsc-users] MatMatMult size error In-Reply-To: References: Message-ID: This looks like memory corruption. Can you run in valgrind? On Tue, Jan 15, 2013 at 11:54 AM, Jim Fonseca wrote: > Hi, > We are in the process of upgrading from Petsc 3.2 to 3.3p5. > > We are creating matrices A and B in this way. > petsc_matrix = new Mat; > ierr = MatCreateDense(comm, m, num_cols ,num_rows,num_cols,data,A); > > Elsewhere, we have this. It gets called a few times, and on the 4th time, > the size of matrix is C is wrong. Please see the output below. What could > be the problem? 
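A reduced, stand-alone test is often the quickest way to act on the valgrind suggestion above. The sketch below is not the code from this thread: the file name reduced_matmatmult.c is made up, the 59x320 and 320x320 sizes are simply copied from the printed output, and it assumes the petsc-3.3 calling sequence MatCreateDense(comm,m,n,M,N,data,&mat) with data=NULL so that PETSc allocates the dense storage itself (and that dense MatMatMult is available in the build being used). If a test like this runs cleanly, the corruption is almost certainly coming from the surrounding application code rather than from MatMatMult.

/* reduced_matmatmult.c (hypothetical): create A (59x320) and B (320x320) as dense
   matrices, form C = A*B, and report C's size.  A typical way to run it under
   valgrind is something like:
       mpiexec -n 2 valgrind --tool=memcheck -q ./reduced_matmatmult            */
#include <petscmat.h>

int main(int argc, char **argv)
{
  Mat            A, B, C;
  PetscInt       M = 59, K = 320, m, n;
  PetscErrorCode ierr;

  ierr = PetscInitialize(&argc, &argv, NULL, NULL); if (ierr) return ierr;

  /* data = NULL: PETSc allocates the dense storage itself */
  ierr = MatCreateDense(PETSC_COMM_WORLD, PETSC_DECIDE, PETSC_DECIDE, M, K, NULL, &A);CHKERRQ(ierr);
  ierr = MatCreateDense(PETSC_COMM_WORLD, PETSC_DECIDE, PETSC_DECIDE, K, K, NULL, &B);CHKERRQ(ierr);
  ierr = MatAssemblyBegin(A, MAT_FINAL_ASSEMBLY);CHKERRQ(ierr);
  ierr = MatAssemblyEnd(A, MAT_FINAL_ASSEMBLY);CHKERRQ(ierr);
  ierr = MatAssemblyBegin(B, MAT_FINAL_ASSEMBLY);CHKERRQ(ierr);
  ierr = MatAssemblyEnd(B, MAT_FINAL_ASSEMBLY);CHKERRQ(ierr);

  /* With MAT_INITIAL_MATRIX, MatMatMult creates C itself; no separate allocation is needed */
  ierr = MatMatMult(A, B, MAT_INITIAL_MATRIX, PETSC_DEFAULT, &C);CHKERRQ(ierr);
  ierr = MatGetSize(C, &m, &n);CHKERRQ(ierr);
  ierr = PetscPrintf(PETSC_COMM_WORLD, "C is %D x %D (expected %D x %D)\n", m, n, M, K);CHKERRQ(ierr);

  ierr = MatDestroy(&C);CHKERRQ(ierr);
  ierr = MatDestroy(&B);CHKERRQ(ierr);
  ierr = MatDestroy(&A);CHKERRQ(ierr);
  ierr = PetscFinalize();
  return ierr;
}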
> C = new Mat; > double fill = PETSC_DEFAULT; > MatMatMult(A,B,MAT_INITIAL_MATRIX, fill, C); > { > int m,n; > MatGetOwnershipRange(A, &m, &n); > cerr << "A.m = " << m << "\n"; > cerr << "A.n = " << n << "\n"; > MatGetSize(A,&m,&n); > cerr << "A global rows = " << m << "\n"; > cerr << "A global cols = " << n << "\n"; > > MatGetOwnershipRange(B, &m, &n); > cerr << "B.m = " << m << "\n"; > cerr << "B.n = " << n << "\n"; > MatGetSize(B,&m,&n); > cerr << "B global rows = " << m << "\n"; > cerr << "B global cols = " << n << "\n"; > > MatGetOwnershipRange(*C, &m, &n); > cerr << "C.m = " << m << "\n"; > cerr << "C.n = " << n << "\n"; > > MatGetSize(*C,&m,&n); > cerr << "C global rows = " << m << "\n"; > cerr << "C global cols = " << n << "\n"; > > } > > A.m = 0 > A.n = 59 > A global rows = 59 > A global cols = 320 > B.m = 0 > B.n = 320 > B global rows = 320 > B global cols = 320 > C.m = 0 > C.n = 59 > C global rows = 59 > C global cols = 320 > A.m = 0 > A.n = 59 > A global rows = 59 > A global cols = 320 > B.m = 0 > B.n = 320 > B global rows = 320 > B global cols = 59 > C.m = 10922 > C.n = -1389327096 > C global rows = -1389327112 > C global cols = -1389327112 > > > Thanks, > Jim > -- > Jim Fonseca, PhD > Research Scientist > Network for Computational Nanotechnology > Purdue University > 765-496-6495 > www.jimfonseca.com > > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From stali at geology.wisc.edu Tue Jan 15 12:09:46 2013 From: stali at geology.wisc.edu (Tabrez Ali) Date: Tue, 15 Jan 2013 12:09:46 -0600 Subject: [petsc-users] submatrix times subvector In-Reply-To: References: <50F4784C.4010403@geology.wisc.edu> Message-ID: <50F59B6A.9000804@geology.wisc.edu> Jed The problem with MatGetSubMatrix is that iscol isn't available easily. I want to get all associated columns but according to the man page it is not possible in Fortran. "If iscol is PETSC_NULL then all columns are obtained (not supported in Fortran)." Is there a workaround? Tabrez On 01/14/2013 03:38 PM, Jed Brown wrote: > On Mon, Jan 14, 2013 at 3:27 PM, Tabrez Ali > wrote: > > Hello > > I am solving a system of equations of the form: > > |A C| |u1| = |f1| > |C'B| |u2| |f2| > > After each solve, I need to perform B*f2 before updating f. Should > I use MatGetSubMatrix/VecGetSubVector followed by MatMult or is > there something simpler. > > > Yes, or let PCFIELDSPLIT do all the block solver stuff for you. -- No one trusts a model except the one who wrote it; Everyone trusts an observation except the one who made it- Harlow Shapley -------------- next part -------------- An HTML attachment was scrubbed... URL: From jedbrown at mcs.anl.gov Tue Jan 15 12:23:04 2013 From: jedbrown at mcs.anl.gov (Jed Brown) Date: Tue, 15 Jan 2013 12:23:04 -0600 Subject: [petsc-users] submatrix times subvector In-Reply-To: <50F59B6A.9000804@geology.wisc.edu> References: <50F4784C.4010403@geology.wisc.edu> <50F59B6A.9000804@geology.wisc.edu> Message-ID: On Tue, Jan 15, 2013 at 12:09 PM, Tabrez Ali wrote: > Jed > > The problem with MatGetSubMatrix is that iscol isn't available easily. I > want to get all associated columns but according to the man page it is not > possible in Fortran. > > "If iscol is PETSC_NULL then all columns are obtained (not supported in > Fortran)." 
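Coming back to the original question in this thread about forming B*f2 after each solve: a minimal sketch of the MatGetSubMatrix/VecGetSubVector/MatMult sequence is below, written in C for brevity (the poster works in Fortran). The names K, f, is2 and ApplyBlock22 are invented for the sketch and do not come from the thread; is2 is assumed to be a parallel IS holding each rank's locally owned global indices of the u2 unknowns.

#include <petscmat.h>

/* Sketch: pull out B = K(is2,is2), the (2,2) block of the assembled operator K,
   and compute y = B*f2, where f2 is the part of the right-hand side f selected by is2. */
PetscErrorCode ApplyBlock22(Mat K, Vec f, IS is2, Vec *y)
{
  Mat            B;
  Vec            f2;
  PetscErrorCode ierr;

  PetscFunctionBegin;
  ierr = MatGetSubMatrix(K, is2, is2, MAT_INITIAL_MATRIX, &B);CHKERRQ(ierr);
  ierr = VecGetSubVector(f, is2, &f2);CHKERRQ(ierr);
  ierr = VecDuplicate(f2, y);CHKERRQ(ierr);
  ierr = MatMult(B, f2, *y);CHKERRQ(ierr);               /* y = B*f2 */
  ierr = VecRestoreSubVector(f, is2, &f2);CHKERRQ(ierr);
  ierr = MatDestroy(&B);CHKERRQ(ierr);                   /* or keep B and pass MAT_REUSE_MATRIX next time */
  PetscFunctionReturn(0);
}

If this has to happen after every solve, keeping B and re-extracting it with MAT_REUSE_MATRIX avoids rebuilding it from scratch, and, as suggested earlier in the thread, PCFIELDSPLIT can take over this block bookkeeping entirely.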
> Well, we could add Fortran support for that case (it currently needs a custom binding to do that) or you can call MatGetColumnOwnershipRange(A,colstart,colend,ierr) ncols = colend - colstart call ISCreateStride(comm,ncols,colstart,one,iscol,ierr) and pass iscol to MatGetSubMatrix(). This is what's done internally when you pass iscol=NULL: if (!iscol) { ierr = ISCreateStride(((PetscObject)mat)->comm,mat->cmap->n,mat->cmap->rstart,1,&iscoltmp);CHKERRQ(ierr); > > Is there a workaround? > > Tabrez > > > On 01/14/2013 03:38 PM, Jed Brown wrote: > > On Mon, Jan 14, 2013 at 3:27 PM, Tabrez Ali wrote: > >> Hello >> >> I am solving a system of equations of the form: >> >> |A C| |u1| = |f1| >> |C'B| |u2| |f2| >> >> After each solve, I need to perform B*f2 before updating f. Should I use >> MatGetSubMatrix/VecGetSubVector followed by MatMult or is there something >> simpler. >> > > Yes, or let PCFIELDSPLIT do all the block solver stuff for you. > > > > -- > No one trusts a model except the one who wrote it; Everyone trusts an observation except the one who made it- Harlow Shapley > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From stali at geology.wisc.edu Tue Jan 15 12:35:22 2013 From: stali at geology.wisc.edu (Tabrez Ali) Date: Tue, 15 Jan 2013 12:35:22 -0600 Subject: [petsc-users] submatrix times subvector In-Reply-To: References: <50F4784C.4010403@geology.wisc.edu> <50F59B6A.9000804@geology.wisc.edu> Message-ID: <50F5A16A.3000104@geology.wisc.edu> Thanks! I will try this. Tabrez On 01/15/2013 12:23 PM, Jed Brown wrote: > On Tue, Jan 15, 2013 at 12:09 PM, Tabrez Ali > wrote: > > Jed > > The problem with MatGetSubMatrix is that iscol isn't available > easily. I want to get all associated columns but according to the > man page it is not possible in Fortran. > > "If iscol is PETSC_NULL then all columns are obtained (not > supported in Fortran)." > > > Well, we could add Fortran support for that case (it currently needs a > custom binding to do that) or you can > > call MatGetColumnOwnershipRange(A,colstart,colend,ierr) > ncols = colend - colstart > call ISCreateStride(comm,ncols,colstart,one,iscol,ierr) > > and pass iscol to MatGetSubMatrix(). This is what's done internally > when you pass iscol=NULL: > > if (!iscol) { > ierr = > ISCreateStride(((PetscObject)mat)->comm,mat->cmap->n,mat->cmap->rstart,1,&iscoltmp);CHKERRQ(ierr); > > > Is there a workaround? > > Tabrez > > > On 01/14/2013 03:38 PM, Jed Brown wrote: >> On Mon, Jan 14, 2013 at 3:27 PM, Tabrez Ali >> > wrote: >> >> Hello >> >> I am solving a system of equations of the form: >> >> |A C| |u1| = |f1| >> |C'B| |u2| |f2| >> >> After each solve, I need to perform B*f2 before updating f. >> Should I use MatGetSubMatrix/VecGetSubVector followed by >> MatMult or is there something simpler. >> >> >> Yes, or let PCFIELDSPLIT do all the block solver stuff for you. > > > -- > No one trusts a model except the one who wrote it; Everyone trusts an observation except the one who made it- Harlow Shapley > > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From bsmith at mcs.anl.gov Tue Jan 15 13:02:32 2013 From: bsmith at mcs.anl.gov (Barry Smith) Date: Tue, 15 Jan 2013 13:02:32 -0600 Subject: [petsc-users] DIVERGED_DTOL In-Reply-To: References: <18a9408d.272c4.13c1a60725e.Coremail.w_ang_temp@163.com> <3a0d2766.3b.13c1b2272aa.Coremail.w_ang_temp@163.com> <2cb6e1d.179.13c1b48bd19.Coremail.w_ang_temp@163.com> <2d12f2c6.11c04.13c3f4cf9e6.Coremail.w_ang_temp@163.com> Message-ID: > 1.701448294063e+04 > 1.e4*1.145582415879e+00 hence it declares divergence Note that at iteration 171 the preconditioned residual is 9.348832909193e-13 < 1.e-12 * 1.145582415879e+00 very good convergence. You seem to have set an unreasonably tight convergence criteria. In double precision you can never realistically expect to use a rtol smaller than e-12. In fact normally it is not reasonable to use more than like 1.e-8. Those extra digits don't mean anything. Barry On Jan 15, 2013, at 11:47 AM, Matthew Knepley wrote: > On Tue, Jan 15, 2013 at 11:41 AM, w_ang_temp wrote: > Hello, > I am not sure about it. The following is the information under -ksp_monitor_true_residual. > So can you tell me that how the DIVERGED_DTOL occurs(||rk||>dtol*||b||). > PS: dtol use the default parameter; normb:67139.2122204160. > > Its r_0, not b, so its preconditioned. > > Matt > > Thanks. Jim > > 0 KSP preconditioned resid norm 1.145582415879e+00 true resid norm 6.713921222042e+04 ||r(i)||/||b|| 1.000000000000e+00 > 1 KSP preconditioned resid norm 1.668442371722e-01 true resid norm 9.315411570816e+03 ||r(i)||/||b|| 1.387477044001e-01 > 2 KSP preconditioned resid norm 7.332643142215e-02 true resid norm 4.580901760869e+03 ||r(i)||/||b|| 6.822990037223e-02 > 3 KSP preconditioned resid norm 4.350407110218e-02 true resid norm 2.969077057634e+03 ||r(i)||/||b|| 4.422269727990e-02 > 4 KSP preconditioned resid norm 2.967861353379e-02 true resid norm 2.171406152803e+03 ||r(i)||/||b|| 3.234184734957e-02 > 5 KSP preconditioned resid norm 2.194027213667e-02 true resid norm 1.697287121259e+03 ||r(i)||/||b|| 2.528011671759e-02 > 6 KSP preconditioned resid norm 1.709062414900e-02 true resid norm 1.385369331879e+03 ||r(i)||/||b|| 2.063428041621e-02 > 7 KSP preconditioned resid norm 1.381432438160e-02 true resid norm 1.166961199876e+03 ||r(i)||/||b|| 1.738121674775e-02 > 8 KSP preconditioned resid norm 1.147659811931e-02 true resid norm 1.004464430978e+03 ||r(i)||/||b|| 1.496092071620e-02 > 9 KSP preconditioned resid norm 9.735929665267e-03 true resid norm 8.766474922321e+02 ||r(i)||/||b|| 1.305716083403e-02 > 10 KSP preconditioned resid norm 8.401139127073e-03 true resid norm 7.735891175651e+02 ||r(i)||/||b|| 1.152216554203e-02 > 11 KSP preconditioned resid norm 7.365394582494e-03 true resid norm 6.915224037997e+02 ||r(i)||/||b|| 1.029982898115e-02 > 12 KSP preconditioned resid norm 6.581540116011e-03 true resid norm 6.301038366131e+02 ||r(i)||/||b|| 9.385034702887e-03 > 13 KSP preconditioned resid norm 6.074644880442e-03 true resid norm 5.941485646876e+02 ||r(i)||/||b|| 8.849501580939e-03 > 14 KSP preconditioned resid norm 6.000465973365e-03 true resid norm 6.039460304962e+02 ||r(i)||/||b|| 8.995429206310e-03 > 15 KSP preconditioned resid norm 6.700641680203e-03 true resid norm 7.024109463517e+02 ||r(i)||/||b|| 1.046200756788e-02 > 16 KSP preconditioned resid norm 8.572956854817e-03 true resid norm 9.345547794474e+02 ||r(i)||/||b|| 1.391965661407e-02 > 17 KSP preconditioned resid norm 1.171098947054e-02 true resid norm 1.308106003451e+03 ||r(i)||/||b|| 1.948348752078e-02 > 18 KSP preconditioned resid 
norm 1.553731077786e-02 true resid norm 1.756744729914e+03 ||r(i)||/||b|| 2.616570364494e-02 > 19 KSP preconditioned resid norm 1.854806156796e-02 true resid norm 2.108867138509e+03 ||r(i)||/||b|| 3.141036465524e-02 > 20 KSP preconditioned resid norm 1.882735116093e-02 true resid norm 2.144875683978e+03 ||r(i)||/||b|| 3.194669125603e-02 > 21 KSP preconditioned resid norm 1.581371157031e-02 true resid norm 1.800227274907e+03 ||r(i)||/||b|| 2.681335117542e-02 > 22 KSP preconditioned resid norm 1.116066908962e-02 true resid norm 1.264488331079e+03 ||r(i)||/||b|| 1.883382734561e-02 > 23 KSP preconditioned resid norm 6.935989655893e-03 true resid norm 7.781370409044e+02 ||r(i)||/||b|| 1.158990424776e-02 > 24 KSP preconditioned resid norm 4.056542969364e-03 true resid norm 4.491311011610e+02 ||r(i)||/||b|| 6.689549762462e-03 > 25 KSP preconditioned resid norm 2.459663949493e-03 true resid norm 2.758154676638e+02 ||r(i)||/||b|| 4.108112957273e-03 > 26 KSP preconditioned resid norm 1.781038036990e-03 true resid norm 2.198771325293e+02 ||r(i)||/||b|| 3.274943587474e-03 > 27 KSP preconditioned resid norm 1.686781020289e-03 true resid norm 2.433346354927e+02 ||r(i)||/||b|| 3.624329619685e-03 > 28 KSP preconditioned resid norm 2.076540169200e-03 true resid norm 3.382587656684e+02 ||r(i)||/||b|| 5.038170012450e-03 > 29 KSP preconditioned resid norm 2.912174325946e-03 true resid norm 4.994106293188e+02 ||r(i)||/||b|| 7.438434452868e-03 > 30 KSP preconditioned resid norm 3.668418500885e-03 true resid norm 6.368223901851e+02 ||r(i)||/||b|| 9.485103699079e-03 > 31 KSP preconditioned resid norm 3.471520993065e-03 true resid norm 6.005845512437e+02 ||r(i)||/||b|| 8.945361903740e-03 > 32 KSP preconditioned resid norm 2.400628046233e-03 true resid norm 4.087910715821e+02 ||r(i)||/||b|| 6.088708193955e-03 > 33 KSP preconditioned resid norm 1.392978496225e-03 true resid norm 2.329031165345e+02 ||r(i)||/||b|| 3.468958136862e-03 > 34 KSP preconditioned resid norm 1.013807019427e-03 true resid norm 1.877543843971e+02 ||r(i)||/||b|| 2.796493705954e-03 > 35 KSP preconditioned resid norm 1.261815551662e-03 true resid norm 2.614230430124e+02 ||r(i)||/||b|| 3.893746059369e-03 > 36 KSP preconditioned resid norm 1.853746656434e-03 true resid norm 3.922570169729e+02 ||r(i)||/||b|| 5.842442948022e-03 > 37 KSP preconditioned resid norm 2.657914774769e-03 true resid norm 5.613289970100e+02 ||r(i)||/||b|| 8.360672972556e-03 > 38 KSP preconditioned resid norm 3.436994681718e-03 true resid norm 7.215058669001e+02 ||r(i)||/||b|| 1.074641544097e-02 > 39 KSP preconditioned resid norm 3.614431832954e-03 true resid norm 7.538884668185e+02 ||r(i)||/||b|| 1.122873566558e-02 > 40 KSP preconditioned resid norm 2.766407868570e-03 true resid norm 5.733943971401e+02 ||r(i)||/||b|| 8.540380176902e-03 > 41 KSP preconditioned resid norm 1.467719776874e-03 true resid norm 3.023387886260e+02 ||r(i)||/||b|| 4.503162587512e-03 > 42 KSP preconditioned resid norm 5.524746404481e-04 true resid norm 1.130986298210e+02 ||r(i)||/||b|| 1.684539125209e-03 > 43 KSP preconditioned resid norm 1.546370264138e-04 true resid norm 3.145301955484e+01 ||r(i)||/||b|| 4.684746590648e-04 > 44 KSP preconditioned resid norm 3.362252620216e-05 true resid norm 6.790570379860e+00 ||r(i)||/||b|| 1.011416451770e-04 > 45 KSP preconditioned resid norm 6.273756038479e-06 true resid norm 1.247817148548e+00 ||r(i)||/||b|| 1.858551965805e-05 > 46 KSP preconditioned resid norm 1.802876108688e-06 true resid norm 3.246014610548e-01 ||r(i)||/||b|| 4.834752305242e-06 > 47 KSP 
preconditioned resid norm 1.955901149713e-06 true resid norm 3.605076707485e-01 ||r(i)||/||b|| 5.369554673429e-06 > 48 KSP preconditioned resid norm 2.146815227031e-06 true resid norm 4.034225428527e-01 ||r(i)||/||b|| 6.008747042314e-06 > 49 KSP preconditioned resid norm 7.835209417034e-07 true resid norm 1.092344842431e-01 ||r(i)||/||b|| 1.626984896464e-06 > 50 KSP preconditioned resid norm 5.949209762592e-07 true resid norm 6.093308467935e-02 ||r(i)||/||b|| 9.075632951920e-07 > 51 KSP preconditioned resid norm 4.843097958216e-07 true resid norm 3.907306790091e-02 ||r(i)||/||b|| 5.819709020808e-07 > 52 KSP preconditioned resid norm 4.015928439388e-07 true resid norm 2.047742276673e-02 ||r(i)||/||b|| 3.049994494946e-07 > 53 KSP preconditioned resid norm 3.831231784925e-07 true resid norm 3.214301153383e-02 ||r(i)||/||b|| 4.787516932475e-07 > 54 KSP preconditioned resid norm 3.936949175538e-07 true resid norm 5.284803093534e-02 ||r(i)||/||b|| 7.871410638815e-07 > 55 KSP preconditioned resid norm 3.162241186775e-07 true resid norm 3.587463976088e-02 ||r(i)||/||b|| 5.343321521721e-07 > 56 KSP preconditioned resid norm 3.171835200858e-07 true resid norm 4.396913730536e-02 ||r(i)||/||b|| 6.548950434660e-07 > 57 KSP preconditioned resid norm 2.173392548323e-07 true resid norm 1.845471373372e-02 ||r(i)||/||b|| 2.748723603299e-07 > 58 KSP preconditioned resid norm 1.842063440560e-07 true resid norm 1.135583609373e-02 ||r(i)||/||b|| 1.691386556109e-07 > 59 KSP preconditioned resid norm 1.562305233760e-07 true resid norm 7.915226925674e-03 ||r(i)||/||b|| 1.178927584031e-07 > 60 KSP preconditioned resid norm 1.395168870151e-07 true resid norm 7.381518126662e-03 ||r(i)||/||b|| 1.099434724141e-07 > 61 KSP preconditioned resid norm 1.323458776323e-07 true resid norm 1.236767010915e-02 ||r(i)||/||b|| 1.842093420541e-07 > 62 KSP preconditioned resid norm 1.210573318494e-07 true resid norm 1.246351751392e-02 ||r(i)||/||b|| 1.856369340916e-07 > 63 KSP preconditioned resid norm 1.518242462597e-07 true resid norm 2.453397483619e-02 ||r(i)||/||b|| 3.654194624097e-07 > 64 KSP preconditioned resid norm 1.723839793842e-07 true resid norm 3.078519652295e-02 ||r(i)||/||b|| 4.585278186149e-07 > 65 KSP preconditioned resid norm 6.991080261238e-07 true resid norm 1.418507987285e-01 ||r(i)||/||b|| 2.112786165301e-06 > 66 KSP preconditioned resid norm 1.030788328966e-05 true resid norm 2.103063209272e+00 ||r(i)||/||b|| 3.132391846314e-05 > 67 KSP preconditioned resid norm 1.229992774455e-07 true resid norm 2.097942598994e-02 ||r(i)||/||b|| 3.124764991443e-07 > 68 KSP preconditioned resid norm 6.581561333514e-08 true resid norm 3.405626207360e-03 ||r(i)||/||b|| 5.072484610304e-08 > 69 KSP preconditioned resid norm 6.343808367654e-08 true resid norm 2.730918192381e-03 ||r(i)||/||b|| 4.067545778488e-08 > 70 KSP preconditioned resid norm 6.319435025925e-08 true resid norm 2.718071769813e-03 ||r(i)||/||b|| 4.048411769995e-08 > 71 KSP preconditioned resid norm 6.446816119437e-08 true resid norm 2.784759189575e-03 ||r(i)||/||b|| 4.147738851080e-08 > 72 KSP preconditioned resid norm 6.716418150551e-08 true resid norm 2.909287512040e-03 ||r(i)||/||b|| 4.333216634250e-08 > 73 KSP preconditioned resid norm 7.131730758571e-08 true resid norm 3.126610397256e-03 ||r(i)||/||b|| 4.656906588345e-08 > 74 KSP preconditioned resid norm 7.792929027648e-08 true resid norm 3.511383015577e-03 ||r(i)||/||b|| 5.230003301273e-08 > 75 KSP preconditioned resid norm 9.337691950640e-08 true resid norm 6.318359239434e-03 ||r(i)||/||b|| 
9.410833148728e-08 > 76 KSP preconditioned resid norm 1.479957854534e-07 true resid norm 1.818912212739e-02 ||r(i)||/||b|| 2.709165259144e-07 > 77 KSP preconditioned resid norm 6.682161192641e-07 true resid norm 1.212127978501e-01 ||r(i)||/||b|| 1.805394997071e-06 > 78 KSP preconditioned resid norm 5.663368166860e-05 true resid norm 1.075143624402e+01 ||r(i)||/||b|| 1.601364670279e-04 > 79 KSP preconditioned resid norm 8.776500145772e-07 true resid norm 1.663312859850e-01 ||r(i)||/||b|| 2.477408960935e-06 > 80 KSP preconditioned resid norm 1.630840126402e-07 true resid norm 2.931897201954e-02 ||r(i)||/||b|| 4.366892468635e-07 > 81 KSP preconditioned resid norm 5.046755275791e-08 true resid norm 7.225871257706e-03 ||r(i)||/||b|| 1.076252017075e-07 > 82 KSP preconditioned resid norm 2.756629667916e-08 true resid norm 2.674186926924e-03 ||r(i)||/||b|| 3.983047817339e-08 > 83 KSP preconditioned resid norm 1.948064945258e-08 true resid norm 1.002338825836e-03 ||r(i)||/||b|| 1.492926104859e-08 > 84 KSP preconditioned resid norm 1.686033880678e-08 true resid norm 7.336515948282e-04 ||r(i)||/||b|| 1.092731908173e-08 > 85 KSP preconditioned resid norm 1.627099807112e-08 true resid norm 7.004183219758e-04 ||r(i)||/||b|| 1.043232857241e-08 > 86 KSP preconditioned resid norm 1.815047764168e-08 true resid norm 8.235488547093e-04 ||r(i)||/||b|| 1.226628712898e-08 > 87 KSP preconditioned resid norm 2.850439847332e-08 true resid norm 2.214968823226e-03 ||r(i)||/||b|| 3.299068830231e-08 > 88 KSP preconditioned resid norm 1.519490344670e-07 true resid norm 1.992168037360e-02 ||r(i)||/||b|| 2.967219857778e-07 > 89 KSP preconditioned resid norm 6.189262765590e-07 true resid norm 1.040718853983e-01 ||r(i)||/||b|| 1.550090952164e-06 > 90 KSP preconditioned resid norm 5.042710252729e-08 true resid norm 8.652148341175e-03 ||r(i)||/||b|| 1.288687795855e-07 > 91 KSP preconditioned resid norm 2.122997506948e-08 true resid norm 3.738624152724e-03 ||r(i)||/||b|| 5.568465921898e-08 > 92 KSP preconditioned resid norm 1.416290557086e-08 true resid norm 2.531358077791e-03 ||r(i)||/||b|| 3.770312450912e-08 > 93 KSP preconditioned resid norm 1.113840681888e-08 true resid norm 2.004079623183e-03 ||r(i)||/||b|| 2.984961480637e-08 > 94 KSP preconditioned resid norm 5.052832639519e-08 true resid norm 9.596273544776e-03 ||r(i)||/||b|| 1.429309821699e-07 > 95 KSP preconditioned resid norm 3.103683240415e-09 true resid norm 1.340636499506e-04 ||r(i)||/||b|| 1.996801057338e-09 > 96 KSP preconditioned resid norm 3.429857119447e-09 true resid norm 2.674501337153e-04 ||r(i)||/||b|| 3.983516113316e-09 > 97 KSP preconditioned resid norm 5.170276146377e-09 true resid norm 6.719440839436e-04 ||r(i)||/||b|| 1.000822115305e-08 > 98 KSP preconditioned resid norm 1.148497437573e-08 true resid norm 1.829675810596e-03 ||r(i)||/||b|| 2.725197019872e-08 > 99 KSP preconditioned resid norm 6.490313757291e-08 true resid norm 1.133744740971e-02 ||r(i)||/||b|| 1.688647667252e-07 > 100 KSP preconditioned resid norm 1.033700268099e-06 true resid norm 1.744999476835e-01 ||r(i)||/||b|| 2.599076484702e-06 > 101 KSP preconditioned resid norm 2.023863503132e-08 true resid norm 3.274357033310e-03 ||r(i)||/||b|| 4.876966715905e-08 > 102 KSP preconditioned resid norm 4.803328753117e-09 true resid norm 6.927971746924e-04 ||r(i)||/||b|| 1.031881596135e-08 > 103 KSP preconditioned resid norm 1.962342036005e-09 true resid norm 1.445569202511e-04 ||r(i)||/||b|| 2.153092290934e-09 > 104 KSP preconditioned resid norm 1.498768208313e-09 true resid norm 
6.802150842044e-05 ||r(i)||/||b|| 1.013141295092e-09 > 105 KSP preconditioned resid norm 1.395549026495e-09 true resid norm 6.005884495747e-05 ||r(i)||/||b|| 8.945419967142e-10 > 106 KSP preconditioned resid norm 1.457410891473e-09 true resid norm 6.410049873543e-05 ||r(i)||/||b|| 9.547401081351e-10 > 107 KSP preconditioned resid norm 1.780342425895e-09 true resid norm 1.209740535326e-04 ||r(i)||/||b|| 1.801839037603e-09 > 108 KSP preconditioned resid norm 3.312859958719e-09 true resid norm 4.525233578504e-04 ||r(i)||/||b|| 6.740075477275e-09 > 109 KSP preconditioned resid norm 9.207121903791e-09 true resid norm 1.584434833387e-03 ||r(i)||/||b|| 2.359924671420e-08 > 110 KSP preconditioned resid norm 8.739767895565e-08 true resid norm 1.687246640666e-02 ||r(i)||/||b|| 2.513056952659e-07 > 111 KSP preconditioned resid norm 1.156065363139e-06 true resid norm 2.263019811534e-01 ||r(i)||/||b|| 3.370638017177e-06 > 112 KSP preconditioned resid norm 7.306873252299e-08 true resid norm 1.428339270295e-02 ||r(i)||/||b|| 2.127429296617e-07 > 113 KSP preconditioned resid norm 3.669015190800e-08 true resid norm 7.144007229935e-03 ||r(i)||/||b|| 1.064058840381e-07 > 114 KSP preconditioned resid norm 2.027376209045e-08 true resid norm 3.927954323325e-03 ||r(i)||/||b|| 5.850462335527e-08 > 115 KSP preconditioned resid norm 1.150536971084e-08 true resid norm 2.217667544765e-03 ||r(i)||/||b|| 3.303088420943e-08 > 116 KSP preconditioned resid norm 4.817493968709e-09 true resid norm 9.229121300958e-04 ||r(i)||/||b|| 1.374624603973e-08 > 117 KSP preconditioned resid norm 1.318062597149e-09 true resid norm 2.502759879830e-04 ||r(i)||/||b|| 3.727717077784e-09 > 118 KSP preconditioned resid norm 2.565229681360e-10 true resid norm 4.554155253246e-05 ||r(i)||/||b|| 6.783152650489e-10 > 119 KSP preconditioned resid norm 1.220621552659e-10 true resid norm 1.887267706055e-05 ||r(i)||/||b|| 2.810976839972e-10 > 120 KSP preconditioned resid norm 3.007389263037e-10 true resid norm 5.965755276433e-05 ||r(i)||/||b|| 8.885649800071e-10 > 121 KSP preconditioned resid norm 7.687988210487e-10 true resid norm 1.555121445636e-04 ||r(i)||/||b|| 2.316264064181e-09 > 122 KSP preconditioned resid norm 3.818221035410e-09 true resid norm 7.723490689110e-04 ||r(i)||/||b|| 1.150369572963e-08 > 123 KSP preconditioned resid norm 1.126628704389e-07 true resid norm 2.277477391889e-02 ||r(i)||/||b|| 3.392171752645e-07 > 124 KSP preconditioned resid norm 2.091915962841e-09 true resid norm 4.232853495368e-04 ||r(i)||/||b|| 6.304592138305e-09 > 125 KSP preconditioned resid norm 6.498729874824e-10 true resid norm 1.317063018752e-04 ||r(i)||/||b|| 1.961689711860e-09 > 126 KSP preconditioned resid norm 2.326148226133e-10 true resid norm 4.710921144409e-05 ||r(i)||/||b|| 7.016646440449e-10 > 127 KSP preconditioned resid norm 6.419933487001e-11 true resid norm 1.192761364491e-05 ||r(i)||/||b|| 1.776549537959e-10 > 128 KSP preconditioned resid norm 3.990889550665e-11 true resid norm 4.736495989458e-06 ||r(i)||/||b|| 7.054738703082e-11 > 129 KSP preconditioned resid norm 1.449761398531e-10 true resid norm 2.794402799777e-05 ||r(i)||/||b|| 4.162102454528e-10 > 130 KSP preconditioned resid norm 1.621600598079e-10 true resid norm 3.151012544337e-05 ||r(i)||/||b|| 4.693252184718e-10 > 131 KSP preconditioned resid norm 8.656752727415e-11 true resid norm 1.541111379317e-05 ||r(i)||/||b|| 2.295396875164e-10 > 132 KSP preconditioned resid norm 5.162222539791e-11 true resid norm 7.009942258504e-06 ||r(i)||/||b|| 1.044090632981e-10 > 133 KSP preconditioned 
resid norm 3.781167023504e-11 true resid norm 3.012802382737e-06 ||r(i)||/||b|| 4.487396088066e-11 > 134 KSP preconditioned resid norm 3.198899307844e-11 true resid norm 1.735828409010e-06 ||r(i)||/||b|| 2.585416705979e-11 > 135 KSP preconditioned resid norm 2.800813815582e-11 true resid norm 1.267385630623e-06 ||r(i)||/||b|| 1.887698095805e-11 > 136 KSP preconditioned resid norm 2.478323040026e-11 true resid norm 1.093735119127e-06 ||r(i)||/||b|| 1.629055633743e-11 > 137 KSP preconditioned resid norm 2.094802859126e-11 true resid norm 9.461872584172e-07 ||r(i)||/||b|| 1.409291570641e-11 > 138 KSP preconditioned resid norm 1.475156452329e-11 true resid norm 1.291911076055e-06 ||r(i)||/||b|| 1.924227337987e-11 > 139 KSP preconditioned resid norm 2.079412078050e-10 true resid norm 3.438188849064e-05 ||r(i)||/||b|| 5.120984794664e-10 > 140 KSP preconditioned resid norm 1.776813945343e-10 true resid norm 1.130664701479e-05 ||r(i)||/||b|| 1.684060125352e-10 > 141 KSP preconditioned resid norm 9.791545045192e-11 true resid norm 4.353141785980e-06 ||r(i)||/||b|| 6.483754637586e-11 > 142 KSP preconditioned resid norm 9.022678471638e-11 true resid norm 3.899337928100e-06 ||r(i)||/||b|| 5.807839858618e-11 > 143 KSP preconditioned resid norm 9.321246955436e-11 true resid norm 4.122786827556e-06 ||r(i)||/||b|| 6.140654159035e-11 > 144 KSP preconditioned resid norm 1.010522679933e-10 true resid norm 6.735251544406e-06 ||r(i)||/||b|| 1.003177028991e-10 > 145 KSP preconditioned resid norm 1.190099575669e-10 true resid norm 1.412264052735e-05 ||r(i)||/||b|| 2.103486183452e-10 > 146 KSP preconditioned resid norm 1.559766947287e-10 true resid norm 2.516382044354e-05 ||r(i)||/||b|| 3.748006509360e-10 > 147 KSP preconditioned resid norm 1.549935482860e-10 true resid norm 2.591211842556e-05 ||r(i)||/||b|| 3.859461195417e-10 > 148 KSP preconditioned resid norm 1.144296191694e-10 true resid norm 1.784161868459e-05 ||r(i)||/||b|| 2.657406617465e-10 > 149 KSP preconditioned resid norm 6.371964918270e-11 true resid norm 6.643697384486e-06 ||r(i)||/||b|| 9.895405627750e-11 > 150 KSP preconditioned resid norm 4.845064670450e-11 true resid norm 2.593417735211e-06 ||r(i)||/||b|| 3.862746745817e-11 > 151 KSP preconditioned resid norm 4.470906007756e-11 true resid norm 1.934674135353e-06 ||r(i)||/||b|| 2.881585993297e-11 > 152 KSP preconditioned resid norm 5.319998529107e-11 true resid norm 3.264222737453e-06 ||r(i)||/||b|| 4.861872264358e-11 > 153 KSP preconditioned resid norm 2.352067006596e-10 true resid norm 2.492527941922e-05 ||r(i)||/||b|| 3.712477194012e-10 > 154 KSP preconditioned resid norm 3.473141928722e-09 true resid norm 3.763914613551e-04 ||r(i)||/||b|| 5.606134610567e-09 > 155 KSP preconditioned resid norm 1.833222104680e-08 true resid norm 1.977164847191e-03 ||r(i)||/||b|| 2.944873467832e-08 > 156 KSP preconditioned resid norm 1.545363805860e-09 true resid norm 1.660725936026e-04 ||r(i)||/||b|| 2.473555886497e-09 > 157 KSP preconditioned resid norm 5.975372365436e-10 true resid norm 6.408112870416e-05 ||r(i)||/||b|| 9.544516026460e-10 > 158 KSP preconditioned resid norm 2.271151314350e-10 true resid norm 2.430517352175e-05 ||r(i)||/||b|| 3.620115982588e-10 > 159 KSP preconditioned resid norm 9.284094104344e-11 true resid norm 9.904055955270e-06 ||r(i)||/||b|| 1.475152243782e-10 > 160 KSP preconditioned resid norm 2.941687346568e-11 true resid norm 3.088975114702e-06 ||r(i)||/||b|| 4.600850996823e-11 > 161 KSP preconditioned resid norm 6.416123537136e-12 true resid norm 6.096752141640e-07 ||r(i)||/||b|| 
9.080762106091e-12 > 162 KSP preconditioned resid norm 4.077333656404e-12 true resid norm 5.049107356604e-07 ||r(i)||/||b|| 7.520355377462e-12 > 163 KSP preconditioned resid norm 5.087791292533e-12 true resid norm 7.009318964057e-07 ||r(i)||/||b|| 1.043997796853e-11 > 164 KSP preconditioned resid norm 7.749563397094e-12 true resid norm 9.156004927546e-07 ||r(i)||/||b|| 1.363734340148e-11 > 165 KSP preconditioned resid norm 1.052344064381e-11 true resid norm 1.169820734369e-06 ||r(i)||/||b|| 1.742380787145e-11 > 166 KSP preconditioned resid norm 1.104395410109e-11 true resid norm 1.200080231537e-06 ||r(i)||/||b|| 1.787450570014e-11 > 167 KSP preconditioned resid norm 1.007023895764e-11 true resid norm 1.084060757795e-06 ||r(i)||/||b|| 1.614646228252e-11 > 168 KSP preconditioned resid norm 8.344424405547e-12 true resid norm 8.950906940095e-07 ||r(i)||/||b|| 1.333186173038e-11 > 169 KSP preconditioned resid norm 5.763569089910e-12 true resid norm 6.172251688691e-07 ||r(i)||/||b|| 9.193214344588e-12 > 170 KSP preconditioned resid norm 3.424168629795e-12 true resid norm 3.659829134192e-07 ||r(i)||/||b|| 5.451105267927e-12 > 171 KSP preconditioned resid norm 9.348832909193e-13 true resid norm 9.664575613677e-08 ||r(i)||/||b|| 1.439483022522e-12 > 172 KSP preconditioned resid norm 3.482816687679e-13 true resid norm 2.647798359576e-08 ||r(i)||/||b|| 3.943743562083e-13 > 173 KSP preconditioned resid norm 1.008488994165e-11 true resid norm 1.080536439819e-06 ||r(i)||/||b|| 1.609396959070e-11 > 174 KSP preconditioned resid norm 9.378977735316e-11 true resid norm 1.005208011236e-05 ||r(i)||/||b|| 1.497199591702e-10 > 175 KSP preconditioned resid norm 8.737224485597e-10 true resid norm 9.364289005956e-05 ||r(i)||/||b|| 1.394757057204e-09 > 176 KSP preconditioned resid norm 1.536767823776e-08 true resid norm 1.647058907589e-03 ||r(i)||/||b|| 2.453199632700e-08 > 177 KSP preconditioned resid norm 1.839952929174e-07 true resid norm 1.972001883268e-02 ||r(i)||/||b|| 2.937183529640e-07 > 178 KSP preconditioned resid norm 4.438575380976e-08 true resid norm 4.757119581523e-03 ||r(i)||/||b|| 7.085456358805e-08 > 179 KSP preconditioned resid norm 4.796782862052e-07 true resid norm 5.141033952542e-02 ||r(i)||/||b|| 7.657274761676e-07 > 180 KSP preconditioned resid norm 9.921079426648e-09 true resid norm 1.063308787741e-03 ||r(i)||/||b|| 1.583737360889e-08 > 181 KSP preconditioned resid norm 2.536576210124e-10 true resid norm 2.718624644920e-05 ||r(i)||/||b|| 4.049235245709e-10 > 182 KSP preconditioned resid norm 1.047157364780e-11 true resid norm 1.122313966041e-06 ||r(i)||/||b|| 1.671622184598e-11 > 183 KSP preconditioned resid norm 1.244906673812e-10 true resid norm 1.334248080723e-05 ||r(i)||/||b|| 1.987285874524e-10 > 184 KSP preconditioned resid norm 1.959956371324e-10 true resid norm 2.100614044675e-05 ||r(i)||/||b|| 3.128743956332e-10 > 185 KSP preconditioned resid norm 2.704883349749e-10 true resid norm 2.899001474168e-05 ||r(i)||/||b|| 4.317896171690e-10 > 186 KSP preconditioned resid norm 3.521080224859e-10 true resid norm 3.773773546731e-05 ||r(i)||/||b|| 5.620818925224e-10 > 187 KSP preconditioned resid norm 5.585664451775e-10 true resid norm 5.986524496711e-05 ||r(i)||/||b|| 8.916584360652e-10 > 188 KSP preconditioned resid norm 8.373951317446e-10 true resid norm 8.974914849925e-05 ||r(i)||/||b|| 1.336762013302e-09 > 189 KSP preconditioned resid norm 1.025188904578e-09 true resid norm 1.098762320723e-04 ||r(i)||/||b|| 1.636543361747e-09 > 190 KSP preconditioned resid norm 8.804939847806e-10 true 
resid norm 9.436831188889e-05 ||r(i)||/||b|| 1.405561798656e-09 > 191 KSP preconditioned resid norm 3.943144046396e-10 true resid norm 4.226124296393e-05 ||r(i)||/||b|| 6.294569382969e-10 > 192 KSP preconditioned resid norm 5.355541541046e-11 true resid norm 5.739853081002e-06 ||r(i)||/||b|| 8.549181456224e-11 > 193 KSP preconditioned resid norm 1.299526839866e-09 true resid norm 1.392788189132e-04 ||r(i)||/||b|| 2.074478003345e-09 > 194 KSP preconditioned resid norm 2.882900653170e-08 true resid norm 3.089794419764e-03 ||r(i)||/||b|| 4.602071304651e-08 > 195 KSP preconditioned resid norm 3.502493774106e-09 true resid norm 3.753853139688e-04 ||r(i)||/||b|| 5.591148623198e-09 > 196 KSP preconditioned resid norm 2.001525286264e-10 true resid norm 2.145165783125e-05 ||r(i)||/||b|| 3.195101211617e-10 > 197 KSP preconditioned resid norm 1.647406102516e-10 true resid norm 1.765632792834e-05 ||r(i)||/||b|| 2.629808623666e-10 > 198 KSP preconditioned resid norm 2.535219507377e-10 true resid norm 2.717160853620e-05 ||r(i)||/||b|| 4.047055012650e-10 > 199 KSP preconditioned resid norm 3.552020174756e-10 true resid norm 3.806933251915e-05 ||r(i)||/||b|| 5.670208401339e-10 > 200 KSP preconditioned resid norm 3.473561803799e-10 true resid norm 3.722844483670e-05 ||r(i)||/||b|| 5.544963011256e-10 > 201 KSP preconditioned resid norm 3.040362870159e-10 true resid norm 3.258556793500e-05 ||r(i)||/||b|| 4.853433166302e-10 > 202 KSP preconditioned resid norm 2.238603169023e-10 true resid norm 2.399258218026e-05 ||r(i)||/||b|| 3.573557297856e-10 > 203 KSP preconditioned resid norm 2.794466174289e-10 true resid norm 2.995013150895e-05 ||r(i)||/||b|| 4.460900049084e-10 > 204 KSP preconditioned resid norm 5.892481453566e-10 true resid norm 6.315359820006e-05 ||r(i)||/||b|| 9.406365685783e-10 > 205 KSP preconditioned resid norm 1.796869246546e-09 true resid norm 1.925822916338e-04 ||r(i)||/||b|| 2.868402611004e-09 > 206 KSP preconditioned resid norm 5.383703166757e-09 true resid norm 5.770068661128e-04 ||r(i)||/||b|| 8.594185827181e-09 > 207 KSP preconditioned resid norm 1.513140030683e-08 true resid norm 1.621731659303e-03 ||r(i)||/||b|| 2.415476151223e-08 > 208 KSP preconditioned resid norm 2.664830499802e-08 true resid norm 2.856074057521e-03 ||r(i)||/||b|| 4.253958250426e-08 > 209 KSP preconditioned resid norm 1.744796115928e-08 true resid norm 1.870012714072e-03 ||r(i)||/||b|| 2.785276520572e-08 > 210 KSP preconditioned resid norm 1.709105056825e-09 true resid norm 1.831760214160e-04 ||r(i)||/||b|| 2.728301619248e-09 > 211 KSP preconditioned resid norm 5.824652325203e-09 true resid norm 6.242662821250e-04 ||r(i)||/||b|| 9.298087681987e-09 > 212 KSP preconditioned resid norm 1.112672284142e-07 true resid norm 1.192524045420e-02 ||r(i)||/||b|| 1.776196064834e-07 > 213 KSP preconditioned resid norm 1.643443975106e-06 true resid norm 1.761386966729e-01 ||r(i)||/||b|| 2.623484709571e-06 > 214 KSP preconditioned resid norm 7.564685658242e-08 true resid norm 8.107571011682e-03 ||r(i)||/||b|| 1.207576130781e-07 > 215 KSP preconditioned resid norm 2.392757332232e-08 true resid norm 2.564475361971e-03 ||r(i)||/||b|| 3.819638743380e-08 > 216 KSP preconditioned resid norm 1.178197751769e-08 true resid norm 1.262752000134e-03 ||r(i)||/||b|| 1.880796569355e-08 > 217 KSP preconditioned resid norm 6.804809693774e-09 true resid norm 7.293161986200e-04 ||r(i)||/||b|| 1.086274584554e-08 > 218 KSP preconditioned resid norm 4.979204750271e-09 true resid norm 5.336541131997e-04 ||r(i)||/||b|| 7.948471475175e-09 > 219 KSP 
preconditioned resid norm 4.201414891575e-09 true resid norm 4.502932602210e-04 ||r(i)||/||b|| 6.706859454096e-09 > 220 KSP preconditioned resid norm 2.892374518861e-09 true resid norm 3.099947965425e-04 ||r(i)||/||b|| 4.617194427674e-09 > 221 KSP preconditioned resid norm 7.824978176254e-10 true resid norm 8.386543747579e-05 ||r(i)||/||b|| 1.249127517321e-09 > 222 KSP preconditioned resid norm 4.025045270489e-12 true resid norm 4.313920255055e-07 ||r(i)||/||b|| 6.425336420232e-12 > 223 KSP preconditioned resid norm 7.270310185183e-10 true resid norm 7.792069485626e-05 ||r(i)||/||b|| 1.160583990775e-09 > 224 KSP preconditioned resid norm 2.285667940369e-09 true resid norm 2.449700616763e-04 ||r(i)||/||b|| 3.648688353269e-09 > 225 KSP preconditioned resid norm 4.666494852894e-09 true resid norm 5.001389357709e-04 ||r(i)||/||b|| 7.449282159120e-09 > 226 KSP preconditioned resid norm 7.718252461926e-09 true resid norm 8.272158639732e-04 ||r(i)||/||b|| 1.232090512557e-08 > 227 KSP preconditioned resid norm 9.133913525747e-09 true resid norm 9.789415681285e-04 ||r(i)||/||b|| 1.458077233487e-08 > 228 KSP preconditioned resid norm 9.380662429943e-09 true resid norm 1.005387270614e-03 ||r(i)||/||b|| 1.497466588249e-08 > 229 KSP preconditioned resid norm 7.153889400651e-09 true resid norm 7.667293640037e-04 ||r(i)||/||b|| 1.141999345310e-08 > 230 KSP preconditioned resid norm 3.686222155648e-09 true resid norm 3.950766653994e-04 ||r(i)||/||b|| 5.884439991676e-09 > 231 KSP preconditioned resid norm 1.553980021933e-09 true resid norm 1.665502564985e-04 ||r(i)||/||b|| 2.480670400953e-09 > 232 KSP preconditioned resid norm 4.135131754480e-10 true resid norm 4.431892592037e-05 ||r(i)||/||b|| 6.601049439614e-10 > 233 KSP preconditioned resid norm 3.408443088536e-11 true resid norm 3.653052704994e-06 ||r(i)||/||b|| 5.441012165888e-11 > 234 KSP preconditioned resid norm 7.336512309518e-11 true resid norm 7.863022750472e-06 ||r(i)||/||b|| 1.171152072005e-10 > 235 KSP preconditioned resid norm 1.711388154684e-09 true resid norm 1.834207210137e-04 ||r(i)||/||b|| 2.731946279196e-09 > 236 KSP preconditioned resid norm 1.159702499545e-07 true resid norm 1.242929419260e-02 ||r(i)||/||b|| 1.851271973790e-07 > 237 KSP preconditioned resid norm 3.857597718932e-09 true resid norm 4.134441113034e-04 ||r(i)||/||b|| 6.158012547810e-09 > 238 KSP preconditioned resid norm 1.645996729438e-09 true resid norm 1.764122920283e-04 ||r(i)||/||b|| 2.627559755231e-09 > 239 KSP preconditioned resid norm 3.308806112000e-12 true resid norm 3.546274450365e-07 ||r(i)||/||b|| 5.281972089161e-12 > 240 KSP preconditioned resid norm 7.382914922234e-08 true resid norm 7.912755349202e-03 ||r(i)||/||b|| 1.178559456912e-07 > 241 KSP preconditioned resid norm 1.809365457089e-08 true resid norm 1.939215926278e-03 ||r(i)||/||b|| 2.888350729990e-08 > 242 KSP preconditioned resid norm 6.669136386139e-09 true resid norm 7.147751961904e-04 ||r(i)||/||b|| 1.064616596697e-08 > 243 KSP preconditioned resid norm 9.709051870072e-10 true resid norm 1.040582926825e-04 ||r(i)||/||b|| 1.549888496470e-09 > 244 KSP preconditioned resid norm 1.658711636824e-07 true resid norm 1.777750321545e-02 ||r(i)||/||b|| 2.647856986628e-07 > 245 KSP preconditioned resid norm 4.372029935281e-05 true resid norm 4.685791942789e+00 ||r(i)||/||b|| 6.979217938105e-05 > 246 KSP preconditioned resid norm 1.108728206741e-06 true resid norm 1.188296918090e-01 ||r(i)||/||b|| 1.769900001491e-06 > 247 KSP preconditioned resid norm 6.312367692314e-07 true resid norm 6.765379494300e-02 
||r(i)||/||b|| 1.007664414067e-06 > 248 KSP preconditioned resid norm 4.356226723159e-07 true resid norm 4.668854601307e-02 ||r(i)||/||b|| 6.953990740879e-07 > 249 KSP preconditioned resid norm 2.372265656235e-07 true resid norm 2.542513080365e-02 ||r(i)||/||b|| 3.786927186482e-07 > 250 KSP preconditioned resid norm 8.566943433538e-08 true resid norm 9.181756554699e-03 ||r(i)||/||b|| 1.367569897090e-07 > 251 KSP preconditioned resid norm 2.811298063268e-08 true resid norm 3.013052977489e-03 ||r(i)||/||b|| 4.487769334555e-08 > 252 KSP preconditioned resid norm 9.927933000520e-09 true resid norm 1.064041855973e-03 ||r(i)||/||b|| 1.584829223911e-08 > 253 KSP preconditioned resid norm 5.465425608817e-09 true resid norm 5.857655977788e-04 ||r(i)||/||b|| 8.724642104167e-09 > 254 KSP preconditioned resid norm 2.802709112126e-09 true resid norm 3.003847634638e-04 ||r(i)||/||b|| 4.474058505150e-09 > 255 KSP preconditioned resid norm 8.749035597488e-10 true resid norm 9.376916709178e-05 ||r(i)||/||b|| 1.396637881063e-09 > 256 KSP preconditioned resid norm 2.525241256290e-12 true resid norm 2.706479113805e-07 ||r(i)||/||b|| 4.031145174775e-12 > 257 KSP preconditioned resid norm 1.954487205473e-10 true resid norm 2.094752459268e-05 ||r(i)||/||b|| 3.120013461568e-10 > 258 KSP preconditioned resid norm 3.558554122752e-10 true resid norm 3.813936440469e-05 ||r(i)||/||b|| 5.680639248413e-10 > 259 KSP preconditioned resid norm 3.327616628106e-10 true resid norm 3.566425546219e-05 ||r(i)||/||b|| 5.311985988919e-10 > 260 KSP preconditioned resid norm 3.138738006439e-10 true resid norm 3.363991905729e-05 ||r(i)||/||b|| 5.010472709577e-10 > 261 KSP preconditioned resid norm 5.105712063176e-10 true resid norm 5.472127327668e-05 ||r(i)||/||b|| 8.150419325301e-10 > 262 KSP preconditioned resid norm 1.562664105193e-09 true resid norm 1.674809867731e-04 ||r(i)||/||b|| 2.494533093765e-09 > 263 KSP preconditioned resid norm 2.160588142357e-09 true resid norm 2.315644371948e-04 ||r(i)||/||b|| 3.449019277059e-09 > 264 KSP preconditioned resid norm 2.958294939588e-08 true resid norm 3.170599194897e-03 ||r(i)||/||b|| 4.722425375632e-08 > 265 KSP preconditioned resid norm 4.249142745113e-09 true resid norm 4.554085662232e-04 ||r(i)||/||b|| 6.783048998670e-09 > 266 KSP preconditioned resid norm 3.648294184577e-10 true resid norm 3.910116757905e-05 ||r(i)||/||b|| 5.823894306457e-10 > 267 KSP preconditioned resid norm 4.149339357988e-08 true resid norm 4.447119809209e-03 ||r(i)||/||b|| 6.623729504912e-08 > 268 KSP preconditioned resid norm 3.843343031428e-06 true resid norm 4.119163426746e-01 ||r(i)||/||b|| 6.135257311663e-06 > 269 KSP preconditioned resid norm 6.080537781060e-08 true resid norm 6.516912135594e-03 ||r(i)||/||b|| 9.706566282308e-08 > 270 KSP preconditioned resid norm 4.757307440420e-09 true resid norm 5.098719177265e-04 ||r(i)||/||b|| 7.594249334542e-09 > 271 KSP preconditioned resid norm 5.436376140969e-06 true resid norm 5.826521752233e-01 ||r(i)||/||b|| 8.678269463611e-06 > 272 KSP preconditioned resid norm 6.392279306681e-10 true resid norm 6.851026028837e-05 ||r(i)||/||b|| 1.020420973416e-09 > 273 KSP preconditioned resid norm 2.599896339650e-06 true resid norm 2.786479850495e-01 ||r(i)||/||b|| 4.150301676682e-06 > 274 KSP preconditioned resid norm 6.217134530094e-04 true resid norm 6.663311852755e+01 ||r(i)||/||b|| 9.924620251545e-04 > 275 KSP preconditioned resid norm 1.906648612009e-06 true resid norm 2.043480679715e-01 ||r(i)||/||b|| 3.043647091072e-06 > 276 KSP preconditioned resid norm 
8.789054443529e-06 true resid norm 9.419807527825e-01 ||r(i)||/||b|| 1.403026222128e-05 > 277 KSP preconditioned resid norm 2.890658496307e-05 true resid norm 3.098108771408e+00 ||r(i)||/||b|| 4.614455053833e-05 > 278 KSP preconditioned resid norm 2.700565878072e-05 true resid norm 2.894374013848e+00 ||r(i)||/||b|| 4.311003835354e-05 > 279 KSP preconditioned resid norm 2.127015678842e-05 true resid norm 2.279662554372e+00 ||r(i)||/||b|| 3.395426426643e-05 > 280 KSP preconditioned resid norm 6.022114971533e-04 true resid norm 6.454296569280e+01 ||r(i)||/||b|| 9.613303993039e-04 > 281 KSP preconditioned resid norm 4.191696888290e-04 true resid norm 4.492517159409e+01 ||r(i)||/||b|| 6.691346250326e-04 > 282 KSP preconditioned resid norm 2.128575355843e-02 true resid norm 2.281334162770e+03 ||r(i)||/||b|| 3.397916191332e-02 > 283 KSP preconditioned resid norm 5.566202222495e-03 true resid norm 5.965664899852e+02 ||r(i)||/||b|| 8.885515189345e-03 > 284 KSP preconditioned resid norm 1.253993707281e-01 true resid norm 1.343987506229e+04 ||r(i)||/||b|| 2.001792189364e-01 > 285 KSP preconditioned resid norm 1.655806619282e-02 true resid norm 1.774636823235e+03 ||r(i)||/||b|| 2.643219609740e-02 > 286 KSP preconditioned resid norm 1.065113628903e-01 true resid norm 1.141552307358e+04 ||r(i)||/||b|| 1.700276588903e-01 > 287 KSP preconditioned resid norm 6.805321243488e+00 true resid norm 7.293710226785e+05 ||r(i)||/||b|| 1.086356241840e+01 > 288 KSP preconditioned resid norm 1.283123255580e-01 true resid norm 1.375207558409e+04 ||r(i)||/||b|| 2.048292663748e-01 > 289 KSP preconditioned resid norm 1.145625946046e+00 true resid norm 1.227842651328e+05 ||r(i)||/||b|| 1.828801099567e+00 > 290 KSP preconditioned resid norm 4.728734309042e+00 true resid norm 5.068095473462e+05 ||r(i)||/||b|| 7.548637086810e+00 > 291 KSP preconditioned resid norm 1.345611312328e+00 true resid norm 1.442180117418e+05 ||r(i)||/||b|| 2.148044443362e+00 > 292 KSP preconditioned resid norm 2.067182052275e+02 true resid norm 2.215534922722e+07 ||r(i)||/||b|| 3.299912002912e+02 > 293 KSP preconditioned resid norm 2.819056223629e-01 true resid norm 3.021367907919e+04 ||r(i)||/||b|| 4.500153945805e-01 > 294 KSP preconditioned resid norm 2.473762204291e+00 true resid norm 2.651293604299e+05 ||r(i)||/||b|| 3.948949528325e+00 > 295 KSP preconditioned resid norm 8.824314765228e-02 true resid norm 9.457598332934e+03 ||r(i)||/||b|| 1.408654945471e-01 > 296 KSP preconditioned resid norm 3.547759169453e-03 true resid norm 3.802366767064e+02 ||r(i)||/||b|| 5.663406884461e-03 > 297 KSP preconditioned resid norm 6.629254109013e-04 true resid norm 7.105007502579e+01 ||r(i)||/||b|| 1.058250055013e-03 > 298 KSP preconditioned resid norm 8.335816942804e-03 true resid norm 8.934043097547e+02 ||r(i)||/||b|| 1.330674400560e-02 > 299 KSP preconditioned resid norm 2.280133581537e-03 true resid norm 2.443769078227e+02 ||r(i)||/||b|| 3.639853667338e-03 > 300 KSP preconditioned resid norm 3.042948047775e-02 true resid norm 3.261327492118e+03 ||r(i)||/||b|| 4.857559962741e-02 > 301 KSP preconditioned resid norm 3.641340291287e-03 true resid norm 3.902663808886e+02 ||r(i)||/||b|| 5.812793567005e-03 > 302 KSP preconditioned resid norm 5.262062292809e-03 true resid norm 5.639698140863e+02 ||r(i)||/||b|| 8.400006425973e-03 > 303 KSP preconditioned resid norm 2.767651815672e-04 true resid norm 2.966274423730e+01 ||r(i)||/||b|| 4.418095365778e-04 > 304 KSP preconditioned resid norm 2.769145142443e-04 true resid norm 2.967874922780e+01 ||r(i)||/||b|| 
4.420479217177e-04 > 305 KSP preconditioned resid norm 2.669690509777e-03 true resid norm 2.861282853351e+02 ||r(i)||/||b|| 4.261716452611e-03 > 306 KSP preconditioned resid norm 5.477323157568e-03 true resid norm 5.870407366268e+02 ||r(i)||/||b|| 8.743634564843e-03 > 307 KSP preconditioned resid norm 1.187253595512e-06 true resid norm 1.272457750950e-01 ||r(i)||/||b|| 1.895252727680e-06 > 308 KSP preconditioned resid norm 2.168305092952e+00 true resid norm 2.323915137191e+05 ||r(i)||/||b|| 3.461338106800e+00 > 309 KSP preconditioned resid norm 3.674108569258e-03 true resid norm 3.937783732259e+02 ||r(i)||/||b|| 5.865102675514e-03 > 310 KSP preconditioned resid norm 9.663808831088e-04 true resid norm 1.035733933599e+02 ||r(i)||/||b|| 1.542666199595e-03 > 311 KSP preconditioned resid norm 2.905881247679e-04 true resid norm 3.114423999927e+01 ||r(i)||/||b|| 4.638755649534e-04 > 312 KSP preconditioned resid norm 2.276520242446e-04 true resid norm 2.439896429079e+01 ||r(i)||/||b|| 3.634085578884e-04 > 313 KSP preconditioned resid norm 5.909203310746e-06 true resid norm 6.333281888478e-01 ||r(i)||/||b|| 9.433059577294e-06 > 314 KSP preconditioned resid norm 1.132991640772e-04 true resid norm 1.214301635826e+01 ||r(i)||/||b|| 1.808632534798e-04 > 315 KSP preconditioned resid norm 1.149655320396e-01 true resid norm 1.232161196723e+04 ||r(i)||/||b|| 1.835233324869e-01 > 316 KSP preconditioned resid norm 1.445642724369e-02 true resid norm 1.549390358719e+03 ||r(i)||/||b|| 2.307727939423e-02 > 317 KSP preconditioned resid norm 1.748966842893e+00 true resid norm 1.874482760043e+05 ||r(i)||/||b|| 2.791934397278e+00 > 318 KSP preconditioned resid norm 3.200837397862e-03 true resid norm 3.430547894864e+02 ||r(i)||/||b|| 5.109604032292e-03 > 319 KSP preconditioned resid norm 3.363929224528e+00 true resid norm 3.605344128712e+05 ||r(i)||/||b|| 5.369952981985e+00 > 320 KSP preconditioned resid norm 6.693942426451e-01 true resid norm 7.174338225998e+04 ||r(i)||/||b|| 1.068576468018e+00 > 321 KSP preconditioned resid norm 4.239885892382e-02 true resid norm 4.544164483869e+03 ||r(i)||/||b|| 6.768271973389e-02 > 322 KSP preconditioned resid norm 7.716682076793e-01 true resid norm 8.270475554445e+04 ||r(i)||/||b|| 1.231839826671e+00 > 323 KSP preconditioned resid norm 8.829139791489e-02 true resid norm 9.462769631425e+03 ||r(i)||/||b|| 1.409425180677e-01 > 324 KSP preconditioned resid norm 4.184745081336e+00 true resid norm 4.485066451151e+05 ||r(i)||/||b|| 6.680248848357e+00 > 325 KSP preconditioned resid norm 7.244650997288e+01 true resid norm 7.764568810616e+06 ||r(i)||/||b|| 1.156487923201e+02 > 326 KSP preconditioned resid norm 6.173923463866e+03 true resid norm 6.616999712610e+08 ||r(i)||/||b|| 9.855640979055e+03 > 327 KSP preconditioned resid norm 1.959276674036e+00 true resid norm 2.099885634282e+05 ||r(i)||/||b|| 3.127659030892e+00 > 328 KSP preconditioned resid norm 1.121351920525e+01 true resid norm 1.201826582282e+06 ||r(i)||/||b|| 1.790051659135e+01 > 329 KSP preconditioned resid norm 1.130971168979e+03 true resid norm 1.212136163323e+08 ||r(i)||/||b|| 1.805407187894e+03 > 330 KSP preconditioned resid norm 4.420103087306e+00 true resid norm 4.737315101142e+05 ||r(i)||/||b|| 7.055958722883e+00 > 331 KSP preconditioned resid norm 4.088057471206e+02 true resid norm 4.381439982323e+07 ||r(i)||/||b|| 6.525903175538e+02 > 332 KSP preconditioned resid norm 3.402449682428e-01 true resid norm 3.646629036299e+04 ||r(i)||/||b|| 5.431444480354e-01 > 333 KSP preconditioned resid norm 1.669642304015e+00 true 
resid norm 1.789465436480e+05 ||r(i)||/||b|| 2.665305977385e+00 > 334 KSP preconditioned resid norm 4.807832112080e+01 true resid norm 5.152869790310e+06 ||r(i)||/||b|| 7.674903562158e+01 > 335 KSP preconditioned resid norm 1.806983009801e+01 true resid norm 1.936662501047e+06 ||r(i)||/||b|| 2.884547549782e+01 > 336 KSP preconditioned resid norm 5.609893970183e+03 true resid norm 6.012492219418e+08 ||r(i)||/||b|| 8.955261791990e+03 > 337 KSP preconditioned resid norm 6.497291386334e+00 true resid norm 6.963574376650e+05 ||r(i)||/||b|| 1.037184403324e+01 > 338 KSP preconditioned resid norm 1.795335659929e-01 true resid norm 1.924179270349e+04 ||r(i)||/||b|| 2.865954494717e-01 > 339 KSP preconditioned resid norm 8.220070254050e+00 true resid norm 8.809989762712e+05 ||r(i)||/||b|| 1.312197368922e+01 > 340 KSP preconditioned resid norm 2.258978929963e-02 true resid norm 2.421096247060e+03 ||r(i)||/||b|| 3.606083787686e-02 > 341 KSP preconditioned resid norm 3.878103577901e+00 true resid norm 4.156418591179e+05 ||r(i)||/||b|| 6.190746739079e+00 > 342 KSP preconditioned resid norm 5.952599113346e+00 true resid norm 6.379791855371e+05 ||r(i)||/||b|| 9.502333501362e+00 > 343 KSP preconditioned resid norm 7.222045044346e+02 true resid norm 7.740340526223e+07 ||r(i)||/||b|| 1.152879259413e+03 > 344 KSP preconditioned resid norm 6.364182209877e+01 true resid norm 6.820912521576e+06 ||r(i)||/||b|| 1.015935739488e+02 > 345 KSP preconditioned resid norm 3.827037043269e+02 true resid norm 4.101687228369e+07 ||r(i)||/||b|| 6.109227518045e+02 > 346 KSP preconditioned resid norm 6.309451684929e+00 true resid norm 6.762254219609e+05 ||r(i)||/||b|| 1.007198922354e+01 > 347 KSP preconditioned resid norm 2.044414998384e+00 true resid norm 2.191133974822e+05 ||r(i)||/||b|| 3.263568192651e+00 > 348 KSP preconditioned resid norm 1.457585290772e-02 true resid norm 1.562190008524e+03 ||r(i)||/||b|| 2.326792282571e-02 > 349 KSP preconditioned resid norm 3.697892503779e-01 true resid norm 3.963274534823e+04 ||r(i)||/||b|| 5.903069761695e-01 > 350 KSP preconditioned resid norm 1.104685840662e+02 true resid norm 1.183964449407e+07 ||r(i)||/||b|| 1.763447038252e+02 > 351 KSP preconditioned resid norm 1.199213228986e+02 true resid norm 1.285275666774e+07 ||r(i)||/||b|| 1.914344277014e+02 > 352 KSP preconditioned resid norm 1.183644579434e+02 true resid norm 1.268589721439e+07 ||r(i)||/||b|| 1.889491519909e+02 > 353 KSP preconditioned resid norm 1.234968225554e+02 true resid norm 1.323596647557e+07 ||r(i)||/||b|| 1.971421176662e+02 > 354 KSP preconditioned resid norm 2.882557881065e-01 true resid norm 3.089426814670e+04 ||r(i)||/||b|| 4.601523777979e-01 > 355 KSP preconditioned resid norm 2.170676916299e+02 true resid norm 2.326457175403e+07 ||r(i)||/||b|| 3.465124326697e+02 > 356 KSP preconditioned resid norm 5.764266225925e+00 true resid norm 6.177943120636e+05 ||r(i)||/||b|| 9.201691405543e+00 > 357 KSP preconditioned resid norm 1.701448294063e+04 true resid norm 1.823554008687e+09 ||r(i)||/||b|| 2.716078947576e+04 > Linear solve did not converge due to DIVERGED_DTOL iterations 357 > > > > > > > >On 2013-01-09 01:58:24?"Jed Brown" ??? > >>On Tue, Jan 8, 2013 at 11:50 AM, w_ang_temp wrote: > > >>I am sorry. > >>In my view, preconditioned resid norm:||rp||=||Bb-BAx||(B is the preconditioned matrix); > > >-ksp_norm_type preconditioned is the default for GMRES, so it's using preconditioned residual. > > >>true resid norm:||rt||=||b-Ax||; ||r(i)||/||b||: ||rt||/||b||. Is it right? 
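For reference, the dtol in this test is the third argument to KSPSetTolerances(), and its default is 1.0e5. A minimal sketch of setting it and querying the outcome, assuming a KSP object ksp and vectors b, x already exist (the numeric values are placeholders only):

    PetscErrorCode     ierr;
    KSPConvergedReason reason;
    ierr = KSPSetTolerances(ksp, 1.e-8 /* rtol */, 1.e-50 /* abstol */, 1.e5 /* dtol */, 10000 /* maxits */);CHKERRQ(ierr);
    ierr = KSPSolve(ksp, b, x);CHKERRQ(ierr);
    ierr = KSPGetConvergedReason(ksp, &reason);CHKERRQ(ierr);
    if (reason == KSP_DIVERGED_DTOL) {
      /* the monitored residual norm grew past dtol times its smallest value so far */
    }

Running with -ksp_converged_reason prints the same information on the command line.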
> >>(1) Divergence is detected if > > >> ||rp||/||b|| > dtol or ||rt||/||b|| > dtol ? > > >Neither, it's |rp|/|min(rp0,rp1,rp2,rp3,...)|. Your solver "converges" a bit at some iteration and then jumps a lot so the denominator is smaller than rp0. > > >> Both of them (rt/b:1.701448294063e+04 / 6.7139E+4; rt/b:2.716078947576e+04; dtol=1.0E+5 ) > >>are not in this example, but it is divergent? > > > > > > > > > -- > What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. > -- Norbert Wiener From jefonseca at gmail.com Tue Jan 15 15:05:40 2013 From: jefonseca at gmail.com (Jim Fonseca) Date: Tue, 15 Jan 2013 16:05:40 -0500 Subject: [petsc-users] MatMatMult size error In-Reply-To: References: Message-ID: Hi Jed, We didn't see obvious memory corruption with valgrind. We printed matrix A and B and they both appear correct. We think it may be related to the matrices being non-square. Could it be related to that? Also, if we run outside the debugger, we get the following error, but I'm not sure which line that is coming from. terminate called after throwing an instance of 'std::runtime_error' what(): [PetscMatrixNemo] PETSc gave error with code 73: Object is in wrong state Thanks, Jim On Tue, Jan 15, 2013 at 1:00 PM, Jed Brown wrote: > This looks like memory corruption. Can you run in valgrind? > > > On Tue, Jan 15, 2013 at 11:54 AM, Jim Fonseca wrote: > >> Hi, >> We are in the process of upgrading from Petsc 3.2 to 3.3p5. >> >> We are creating matrices A and B in this way. >> petsc_matrix = new Mat; >> ierr = MatCreateDense(comm, m, num_cols ,num_rows,num_cols,data,A); >> >> Elsewhere, we have this. It gets called a few times, and on the 4th time, >> the size of matrix is C is wrong. Please see the output below. What could >> be the problem? >> C = new Mat; >> double fill = PETSC_DEFAULT; >> MatMatMult(A,B,MAT_INITIAL_MATRIX, fill, C); >> { >> int m,n; >> MatGetOwnershipRange(A, &m, &n); >> cerr << "A.m = " << m << "\n"; >> cerr << "A.n = " << n << "\n"; >> MatGetSize(A,&m,&n); >> cerr << "A global rows = " << m << "\n"; >> cerr << "A global cols = " << n << "\n"; >> >> MatGetOwnershipRange(B, &m, &n); >> cerr << "B.m = " << m << "\n"; >> cerr << "B.n = " << n << "\n"; >> MatGetSize(B,&m,&n); >> cerr << "B global rows = " << m << "\n"; >> cerr << "B global cols = " << n << "\n"; >> >> MatGetOwnershipRange(*C, &m, &n); >> cerr << "C.m = " << m << "\n"; >> cerr << "C.n = " << n << "\n"; >> >> MatGetSize(*C,&m,&n); >> cerr << "C global rows = " << m << "\n"; >> cerr << "C global cols = " << n << "\n"; >> >> } >> >> A.m = 0 >> A.n = 59 >> A global rows = 59 >> A global cols = 320 >> B.m = 0 >> B.n = 320 >> B global rows = 320 >> B global cols = 320 >> C.m = 0 >> C.n = 59 >> C global rows = 59 >> C global cols = 320 >> A.m = 0 >> A.n = 59 >> A global rows = 59 >> A global cols = 320 >> B.m = 0 >> B.n = 320 >> B global rows = 320 >> B global cols = 59 >> C.m = 10922 >> C.n = -1389327096 >> C global rows = -1389327112 >> C global cols = -1389327112 >> >> >> Thanks, >> Jim >> -- >> Jim Fonseca, PhD >> Research Scientist >> Network for Computational Nanotechnology >> Purdue University >> 765-496-6495 >> www.jimfonseca.com >> >> >> > -- Jim Fonseca, PhD Research Scientist Network for Computational Nanotechnology Purdue University 765-496-6495 www.jimfonseca.com -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From jedbrown at mcs.anl.gov Tue Jan 15 15:12:00 2013 From: jedbrown at mcs.anl.gov (Jed Brown) Date: Tue, 15 Jan 2013 15:12:00 -0600 Subject: [petsc-users] MatMatMult size error In-Reply-To: References: Message-ID: On Tue, Jan 15, 2013 at 3:05 PM, Jim Fonseca wrote: > Hi Jed, > We didn't see obvious memory corruption with valgrind. Does this mean it runs without any valgrind errors? > We printed matrix A and B and they both appear correct. > We think it may be related to the matrices being non-square. Could it be > related to that? > > Also, if we run outside the debugger, we get the following error, but I'm > not sure which line that is coming from. > terminate called after throwing an instance of 'std::runtime_error' > what(): [PetscMatrixNemo] PETSc gave error with code 73: > Object is in wrong state > If you have a debug version of PETSc, it should give you a trace. Maybe you aren't checking error codes? Are you using MAT_INITIAL_MATRIX in all cases? Nonsquareness shouldn't be a problem. Can you set a breakpoint in PetscError or use -on_error_abort (or -on_error_attach_debugger) to get a trace from the place that raises "Object is in wrong state"? > > Thanks, > Jim > > On Tue, Jan 15, 2013 at 1:00 PM, Jed Brown wrote: > >> This looks like memory corruption. Can you run in valgrind? >> >> >> On Tue, Jan 15, 2013 at 11:54 AM, Jim Fonseca wrote: >> >>> Hi, >>> We are in the process of upgrading from Petsc 3.2 to 3.3p5. >>> >>> We are creating matrices A and B in this way. >>> petsc_matrix = new Mat; >>> ierr = MatCreateDense(comm, m, num_cols ,num_rows,num_cols,data,A); >>> >>> Elsewhere, we have this. It gets called a few times, and on the 4th >>> time, the size of matrix is C is wrong. Please see the output below. What >>> could be the problem? >>> C = new Mat; >>> double fill = PETSC_DEFAULT; >>> MatMatMult(A,B,MAT_INITIAL_MATRIX, fill, C); >>> { >>> int m,n; >>> MatGetOwnershipRange(A, &m, &n); >>> cerr << "A.m = " << m << "\n"; >>> cerr << "A.n = " << n << "\n"; >>> MatGetSize(A,&m,&n); >>> cerr << "A global rows = " << m << "\n"; >>> cerr << "A global cols = " << n << "\n"; >>> >>> MatGetOwnershipRange(B, &m, &n); >>> cerr << "B.m = " << m << "\n"; >>> cerr << "B.n = " << n << "\n"; >>> MatGetSize(B,&m,&n); >>> cerr << "B global rows = " << m << "\n"; >>> cerr << "B global cols = " << n << "\n"; >>> >>> MatGetOwnershipRange(*C, &m, &n); >>> cerr << "C.m = " << m << "\n"; >>> cerr << "C.n = " << n << "\n"; >>> >>> MatGetSize(*C,&m,&n); >>> cerr << "C global rows = " << m << "\n"; >>> cerr << "C global cols = " << n << "\n"; >>> >>> } >>> >>> A.m = 0 >>> A.n = 59 >>> A global rows = 59 >>> A global cols = 320 >>> B.m = 0 >>> B.n = 320 >>> B global rows = 320 >>> B global cols = 320 >>> C.m = 0 >>> C.n = 59 >>> C global rows = 59 >>> C global cols = 320 >>> A.m = 0 >>> A.n = 59 >>> A global rows = 59 >>> A global cols = 320 >>> B.m = 0 >>> B.n = 320 >>> B global rows = 320 >>> B global cols = 59 >>> C.m = 10922 >>> C.n = -1389327096 >>> C global rows = -1389327112 >>> C global cols = -1389327112 >>> >>> >>> Thanks, >>> Jim >>> -- >>> Jim Fonseca, PhD >>> Research Scientist >>> Network for Computational Nanotechnology >>> Purdue University >>> 765-496-6495 >>> www.jimfonseca.com >>> >>> >>> >> > > > -- > Jim Fonseca, PhD > Research Scientist > Network for Computational Nanotechnology > Purdue University > 765-496-6495 > www.jimfonseca.com > > > -------------- next part -------------- An HTML attachment was scrubbed... 
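To make the error-code suggestion concrete, a minimal sketch of the same product is below. It is only an illustration: it assumes the calls sit in a function returning PetscErrorCode, uses PETSC_COMM_WORLD, takes the global sizes 59 x 320 and 320 x 59 from the output above, and lets PETSc allocate the dense storage (data = NULL) instead of wrapping an existing array:

    PetscErrorCode ierr;
    Mat            A, B, C;
    PetscInt       m, n;
    ierr = MatCreateDense(PETSC_COMM_WORLD, PETSC_DECIDE, PETSC_DECIDE, 59, 320, NULL, &A);CHKERRQ(ierr);
    ierr = MatCreateDense(PETSC_COMM_WORLD, PETSC_DECIDE, PETSC_DECIDE, 320, 59, NULL, &B);CHKERRQ(ierr);
    ierr = MatAssemblyBegin(A, MAT_FINAL_ASSEMBLY);CHKERRQ(ierr);
    ierr = MatAssemblyEnd(A, MAT_FINAL_ASSEMBLY);CHKERRQ(ierr);
    ierr = MatAssemblyBegin(B, MAT_FINAL_ASSEMBLY);CHKERRQ(ierr);
    ierr = MatAssemblyEnd(B, MAT_FINAL_ASSEMBLY);CHKERRQ(ierr);
    ierr = MatMatMult(A, B, MAT_INITIAL_MATRIX, PETSC_DEFAULT, &C);CHKERRQ(ierr);
    ierr = MatGetSize(C, &m, &n);CHKERRQ(ierr);   /* expect 59 x 59 here */
    ierr = MatDestroy(&C);CHKERRQ(ierr);
    ierr = MatDestroy(&B);CHKERRQ(ierr);
    ierr = MatDestroy(&A);CHKERRQ(ierr);

With CHKERRQ on every call and a debug build of PETSc, the first failing call produces a full traceback instead of letting corrupted sizes surface later.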
URL: From tisaac at ices.utexas.edu Tue Jan 15 15:35:11 2013 From: tisaac at ices.utexas.edu (Tobin Isaac) Date: Tue, 15 Jan 2013 15:35:11 -0600 Subject: [petsc-users] make dist + hg Message-ID: <20130115213511.GA19867@ices.utexas.edu> I'm sorry if this is more of a mercurial question, but here goes: I'm keeping track of petsc-dev with mercurial. I'm trying to make a tarball of my source, because I have some local tweaks that I'd like to keep when I build on a new system. Two issues: 1) "make dist" doesn't seem to run correctly. It makes tarballs that don't include the src/ directory. Output below. 2) mercurial wants to revert my changes before it builds the tarball. How do I stop this? Thanks, Toby make dist output: /org/centers/ccgo/local/ubuntu/lucid/apps/petsc/dev/bin/maint/builddist /org/centers/ccgo/local/ubuntu/lucid/apps/petsc/dev Starting date: Tue Jan 15 15:27:39 CST 2013 cd: 56: can't cd to /org/centers/ccgo/local/ubuntu/lucid/apps/petsc/dev/config/BuildSystem saving current version of src/ksp/pc/impls/asm/asm.c as src/ksp/pc/impls/asm/asm.c.orig reverting src/ksp/pc/impls/asm/asm.c saving current version of src/ksp/pc/impls/ml/ml.c as src/ksp/pc/impls/ml/ml.c.orig reverting src/ksp/pc/impls/ml/ml.c saving current version of src/ksp/pc/impls/sor/sor.c as src/ksp/pc/impls/sor/sor.c.orig reverting src/ksp/pc/impls/sor/sor.c saving current version of src/mat/impls/aij/mpi/mmaij.c as src/mat/impls/aij/mpi/mmaij.c.orig reverting src/mat/impls/aij/mpi/mmaij.c saving current version of src/mat/impls/baij/seq/baij.c as src/mat/impls/baij/seq/baij.c.orig reverting src/mat/impls/baij/seq/baij.c saving current version of src/mat/impls/sbaij/seq/sbaijfact.c as src/mat/impls/sbaij/seq/sbaijfact.c.orig reverting src/mat/impls/sbaij/seq/sbaijfact.c hg: unknown command 'clean' Mercurial Distributed SCM (version 1.4.3) Copyright (C) 2005-2010 Matt Mackall and others This is free software; see the source for copying conditions. There is NO warranty; not even for MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. 
basic commands: add: add the specified files on the next commit annotate, blame: show changeset information by line for each file clone: make a copy of an existing repository commit, ci: commit the specified files or all outstanding changes diff: diff repository (or selected files) export: dump the header and diffs for one or more changesets forget: forget the specified files on the next commit init: create a new repository in the given directory log, history: show revision history of entire repository or files merge: merge working directory with another revision pull: pull changes from the specified source push: push changes to the specified destination remove, rm: remove the specified files on the next commit serve: export the repository via HTTP status, st: show changed files in the working directory summary, sum: summarize working directory state update, up, checkout, co: update working directory view: start interactive history viewer global options: -R --repository repository root directory or name of overlay bundle file --cwd change working directory -y --noninteractive do not prompt, assume 'yes' for any required answers -q --quiet suppress output -v --verbose enable additional output --config set/override config option --debug enable debugging output --debugger start debugger --encoding set the charset encoding (default: UTF-8) --encodingmode set the charset encoding mode (default: strict) --traceback always print a traceback on exception --time time how long the command takes --profile print command execution profile --version output version information and exit -h --help display help and exit use "hg help" for the full list of commands abort: There is no Mercurial repository here (.hg not found)! hg: unknown command 'clean' Mercurial Distributed SCM (version 1.4.3) Copyright (C) 2005-2010 Matt Mackall and others This is free software; see the source for copying conditions. There is NO warranty; not even for MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. 
basic commands: add: add the specified files on the next commit annotate, blame: show changeset information by line for each file clone: make a copy of an existing repository commit, ci: commit the specified files or all outstanding changes diff: diff repository (or selected files) export: dump the header and diffs for one or more changesets forget: forget the specified files on the next commit init: create a new repository in the given directory log, history: show revision history of entire repository or files merge: merge working directory with another revision pull: pull changes from the specified source push: push changes to the specified destination remove, rm: remove the specified files on the next commit serve: export the repository via HTTP status, st: show changed files in the working directory summary, sum: summarize working directory state update, up, checkout, co: update working directory view: start interactive history viewer global options: -R --repository repository root directory or name of overlay bundle file --cwd change working directory -y --noninteractive do not prompt, assume 'yes' for any required answers -q --quiet suppress output -v --verbose enable additional output --config set/override config option --debug enable debugging output --debugger start debugger --encoding set the charset encoding (default: UTF-8) --encodingmode set the charset encoding mode (default: strict) --traceback always print a traceback on exception --time time how long the command takes --profile print command execution profile --version output version information and exit -h --help display help and exit use "hg help" for the full list of commands abort: There is no Mercurial repository here (.hg not found)! abort: There is no Mercurial repository here (.hg not found)! abort: There is no Mercurial repository here (.hg not found)! Building ~/petsc-dev.tar.gz and ~/petsc-lite-dev.tar.gz /org/centers/ccgo/local/ubuntu/lucid/apps/petsc/dev/bin/maint/builddist: 82: ./config/configure.py: not found make[1]: Entering directory `/org/centers/ccgo/local/ubuntu/lucid/apps/petsc/dev' make[1]: *** No rule to make target `allfortranstubs'. Stop. make[1]: Leaving directory `/org/centers/ccgo/local/ubuntu/lucid/apps/petsc/dev' make[1]: Entering directory `/org/centers/ccgo/local/ubuntu/lucid/apps/petsc/dev' make[1]: *** No rule to make target `alldoc'. Stop. make[1]: Leaving directory `/org/centers/ccgo/local/ubuntu/lucid/apps/petsc/dev' make[1]: Entering directory `/org/centers/ccgo/local/ubuntu/lucid/apps/petsc/dev' make[1]: *** No rule to make target `tree_basic'. Stop. make[1]: Leaving directory `/org/centers/ccgo/local/ubuntu/lucid/apps/petsc/dev' /bin/mv: cannot stat `makefile': No such file or directory /bin/grep: makefile.bak: No such file or directory Using PETSC_VERSION_PATCH_DATE: Tue Jan 15 15:27:41 CST 2013 Using PETSC_VERSION_HG: /usr/bin/find: `src/contrib': No such file or directory Ending date: Tue Jan 15 15:27:44 CST 2013 From balay at mcs.anl.gov Tue Jan 15 15:40:22 2013 From: balay at mcs.anl.gov (Satish Balay) Date: Tue, 15 Jan 2013 15:40:22 -0600 (CST) Subject: [petsc-users] make dist + hg In-Reply-To: <20130115213511.GA19867@ices.utexas.edu> References: <20130115213511.GA19867@ices.utexas.edu> Message-ID: On Tue, 15 Jan 2013, Tobin Isaac wrote: > > I'm sorry if this is more of a mercurial question, but here goes: > > I'm keeping track of petsc-dev with mercurial. 
I'm trying to make a > tarball of my source, because I have some local tweaks that I'd like > to keep when I build on a new system. Two issues: > > 1) "make dist" doesn't seem to run correctly. It makes tarballs that > don't include the src/ directory. Output below. 'make dist' had quiet a few prerequisites [and it does revert all changes aswell'] - so this is not something you want.. > 2) mercurial wants to revert my changes before it builds the tarball. > How do I stop this? Why not commit your changes, and always do 'hg pull --rebase' to get latest petsc-dev stuff? Also - for uncommited stuff - I would get a patchfile with 'hg diff' and apply it to the remote source tree. Satish > > Thanks, > Toby > > make dist output: > > /org/centers/ccgo/local/ubuntu/lucid/apps/petsc/dev/bin/maint/builddist /org/centers/ccgo/local/ubuntu/lucid/apps/petsc/dev > Starting date: Tue Jan 15 15:27:39 CST 2013 > cd: 56: can't cd to /org/centers/ccgo/local/ubuntu/lucid/apps/petsc/dev/config/BuildSystem > saving current version of src/ksp/pc/impls/asm/asm.c as src/ksp/pc/impls/asm/asm.c.orig > reverting src/ksp/pc/impls/asm/asm.c > saving current version of src/ksp/pc/impls/ml/ml.c as src/ksp/pc/impls/ml/ml.c.orig > reverting src/ksp/pc/impls/ml/ml.c > saving current version of src/ksp/pc/impls/sor/sor.c as src/ksp/pc/impls/sor/sor.c.orig > reverting src/ksp/pc/impls/sor/sor.c > saving current version of src/mat/impls/aij/mpi/mmaij.c as src/mat/impls/aij/mpi/mmaij.c.orig > reverting src/mat/impls/aij/mpi/mmaij.c > saving current version of src/mat/impls/baij/seq/baij.c as src/mat/impls/baij/seq/baij.c.orig > reverting src/mat/impls/baij/seq/baij.c > saving current version of src/mat/impls/sbaij/seq/sbaijfact.c as src/mat/impls/sbaij/seq/sbaijfact.c.orig > reverting src/mat/impls/sbaij/seq/sbaijfact.c > hg: unknown command 'clean' > Mercurial Distributed SCM (version 1.4.3) > > Copyright (C) 2005-2010 Matt Mackall and others > This is free software; see the source for copying conditions. There is NO > warranty; not even for MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. 
> > basic commands: > > add: > add the specified files on the next commit > annotate, blame: > show changeset information by line for each file > clone: > make a copy of an existing repository > commit, ci: > commit the specified files or all outstanding changes > diff: > diff repository (or selected files) > export: > dump the header and diffs for one or more changesets > forget: > forget the specified files on the next commit > init: > create a new repository in the given directory > log, history: > show revision history of entire repository or files > merge: > merge working directory with another revision > pull: > pull changes from the specified source > push: > push changes to the specified destination > remove, rm: > remove the specified files on the next commit > serve: > export the repository via HTTP > status, st: > show changed files in the working directory > summary, sum: > summarize working directory state > update, up, checkout, co: > update working directory > view: > start interactive history viewer > > global options: > -R --repository repository root directory or name of overlay bundle file > --cwd change working directory > -y --noninteractive do not prompt, assume 'yes' for any required answers > -q --quiet suppress output > -v --verbose enable additional output > --config set/override config option > --debug enable debugging output > --debugger start debugger > --encoding set the charset encoding (default: UTF-8) > --encodingmode set the charset encoding mode (default: strict) > --traceback always print a traceback on exception > --time time how long the command takes > --profile print command execution profile > --version output version information and exit > -h --help display help and exit > > use "hg help" for the full list of commands > abort: There is no Mercurial repository here (.hg not found)! > abort: There is no Mercurial repository here (.hg not found)! > abort: There is no Mercurial repository here (.hg not found)! > Building ~/petsc-dev.tar.gz and ~/petsc-lite-dev.tar.gz > /org/centers/ccgo/local/ubuntu/lucid/apps/petsc/dev/bin/maint/builddist: 82: ./config/configure.py: not found > make[1]: Entering directory `/org/centers/ccgo/local/ubuntu/lucid/apps/petsc/dev' > make[1]: *** No rule to make target `allfortranstubs'. Stop. > make[1]: Leaving directory `/org/centers/ccgo/local/ubuntu/lucid/apps/petsc/dev' > make[1]: Entering directory `/org/centers/ccgo/local/ubuntu/lucid/apps/petsc/dev' > make[1]: *** No rule to make target `alldoc'. Stop. > make[1]: Leaving directory `/org/centers/ccgo/local/ubuntu/lucid/apps/petsc/dev' > make[1]: Entering directory `/org/centers/ccgo/local/ubuntu/lucid/apps/petsc/dev' > make[1]: *** No rule to make target `tree_basic'. Stop. > make[1]: Leaving directory `/org/centers/ccgo/local/ubuntu/lucid/apps/petsc/dev' > /bin/mv: cannot stat `makefile': No such file or directory > /bin/grep: makefile.bak: No such file or directory > Using PETSC_VERSION_PATCH_DATE: Tue Jan 15 15:27:41 CST 2013 > Using PETSC_VERSION_HG: > /usr/bin/find: `src/contrib': No such file or directory > Ending date: Tue Jan 15 15:27:44 CST 2013 > > From hzhang at mcs.anl.gov Tue Jan 15 15:43:49 2013 From: hzhang at mcs.anl.gov (Hong Zhang) Date: Tue, 15 Jan 2013 15:43:49 -0600 Subject: [petsc-users] MatMatMult size error In-Reply-To: References: Message-ID: Jim : Can you switch to petsc-dev? MatMatMult() has been updated significantly in petsc-dev If you still see problem in petsc-dev, send us a short code that produce the error. We'll check it. 
Hong > Hi, > We are in the process of upgrading from Petsc 3.2 to 3.3p5. > > We are creating matrices A and B in this way. > petsc_matrix = new Mat; > ierr = MatCreateDense(comm, m, num_cols ,num_rows,num_cols,data,A); > > Elsewhere, we have this. It gets called a few times, and on the 4th time, > the size of matrix is C is wrong. Please see the output below. What could be > the problem? > C = new Mat; > double fill = PETSC_DEFAULT; > MatMatMult(A,B,MAT_INITIAL_MATRIX, fill, C); > { > int m,n; > MatGetOwnershipRange(A, &m, &n); > cerr << "A.m = " << m << "\n"; > cerr << "A.n = " << n << "\n"; > MatGetSize(A,&m,&n); > cerr << "A global rows = " << m << "\n"; > cerr << "A global cols = " << n << "\n"; > > MatGetOwnershipRange(B, &m, &n); > cerr << "B.m = " << m << "\n"; > cerr << "B.n = " << n << "\n"; > MatGetSize(B,&m,&n); > cerr << "B global rows = " << m << "\n"; > cerr << "B global cols = " << n << "\n"; > > MatGetOwnershipRange(*C, &m, &n); > cerr << "C.m = " << m << "\n"; > cerr << "C.n = " << n << "\n"; > > MatGetSize(*C,&m,&n); > cerr << "C global rows = " << m << "\n"; > cerr << "C global cols = " << n << "\n"; > > } > > A.m = 0 > A.n = 59 > A global rows = 59 > A global cols = 320 > B.m = 0 > B.n = 320 > B global rows = 320 > B global cols = 320 > C.m = 0 > C.n = 59 > C global rows = 59 > C global cols = 320 > A.m = 0 > A.n = 59 > A global rows = 59 > A global cols = 320 > B.m = 0 > B.n = 320 > B global rows = 320 > B global cols = 59 > C.m = 10922 > C.n = -1389327096 > C global rows = -1389327112 > C global cols = -1389327112 > > > Thanks, > Jim > -- > Jim Fonseca, PhD > Research Scientist > Network for Computational Nanotechnology > Purdue University > 765-496-6495 > www.jimfonseca.com > > From jedbrown at mcs.anl.gov Tue Jan 15 15:45:28 2013 From: jedbrown at mcs.anl.gov (Jed Brown) Date: Tue, 15 Jan 2013 15:45:28 -0600 Subject: [petsc-users] make dist + hg In-Reply-To: References: <20130115213511.GA19867@ices.utexas.edu> Message-ID: On Tue, Jan 15, 2013 at 3:40 PM, Satish Balay wrote: > > Why not commit your changes, and always do 'hg pull --rebase' to get > latest petsc-dev stuff? > > Also - for uncommited stuff - I would get a patchfile with 'hg diff' > and apply it to the remote source tree. I would keep an hg clone on each machine. Then you can push and pull your changes. I'd put all your changes in a bookmark that you can rebase. Note that when you rebase, you'll get new commits when you pull from one of the other machines, and you should get rid of the old patches. (You can also merge each time, but then you get lots of merge commits that don't really mean anything; and it's harder to send your patches upstream because the merge commits end up in the wrong place.) -------------- next part -------------- An HTML attachment was scrubbed... URL: From balay at mcs.anl.gov Tue Jan 15 15:49:51 2013 From: balay at mcs.anl.gov (Satish Balay) Date: Tue, 15 Jan 2013 15:49:51 -0600 (CST) Subject: [petsc-users] make dist + hg In-Reply-To: References: <20130115213511.GA19867@ices.utexas.edu> Message-ID: On Tue, 15 Jan 2013, Jed Brown wrote: > On Tue, Jan 15, 2013 at 3:40 PM, Satish Balay wrote: > > > > > Why not commit your changes, and always do 'hg pull --rebase' to get > > latest petsc-dev stuff? > > > > Also - for uncommited stuff - I would get a patchfile with 'hg diff' > > and apply it to the remote source tree. > > > I would keep an hg clone on each machine. Then you can push and pull your > changes. I'd put all your changes in a bookmark that you can rebase. 
Note > that when you rebase, you'll get new commits when you pull from one of the > other machines, and you should get rid of the old patches. How does one get rid of the 'old patches' in this workflow? Satish > (You can also > merge each time, but then you get lots of merge commits that don't really > mean anything; and it's harder to send your patches upstream because the > merge commits end up in the wrong place.) From jedbrown at mcs.anl.gov Tue Jan 15 15:51:59 2013 From: jedbrown at mcs.anl.gov (Jed Brown) Date: Tue, 15 Jan 2013 15:51:59 -0600 Subject: [petsc-users] make dist + hg In-Reply-To: References: <20130115213511.GA19867@ices.utexas.edu> Message-ID: On Tue, Jan 15, 2013 at 3:49 PM, Satish Balay wrote: > How does one get rid of the 'old patches' in this workflow? hg strip -------------- next part -------------- An HTML attachment was scrubbed... URL: From sean.michael.farley at gmail.com Tue Jan 15 15:59:04 2013 From: sean.michael.farley at gmail.com (Sean Farley) Date: Tue, 15 Jan 2013 15:59:04 -0600 Subject: [petsc-users] make dist + hg In-Reply-To: References: <20130115213511.GA19867@ices.utexas.edu> Message-ID: On Tue, Jan 15, 2013 at 3:49 PM, Satish Balay wrote: > On Tue, 15 Jan 2013, Jed Brown wrote: > >> On Tue, Jan 15, 2013 at 3:40 PM, Satish Balay wrote: >> >> > >> > Why not commit your changes, and always do 'hg pull --rebase' to get >> > latest petsc-dev stuff? >> > >> > Also - for uncommited stuff - I would get a patchfile with 'hg diff' >> > and apply it to the remote source tree. >> >> >> I would keep an hg clone on each machine. Then you can push and pull your >> changes. I'd put all your changes in a bookmark that you can rebase. Note >> that when you rebase, you'll get new commits when you pull from one of the >> other machines, and you should get rid of the old patches. > > How does one get rid of the 'old patches' in this workflow? If you want to see how mercurial will (automatically) handle this in the future, you can install a dev version of mercurial and this extension: https://bitbucket.org/marmoute/mutable-history/ Then `hg rebase` will just mark the old commits as obsolete and the pushing / pulling to the other clones will propagate the obsolete markers. When Bitbucket fixes this issue: https://bitbucket.org/site/master/issue/4560/provide-a-method-for-setting-the-phase-of it'd be possible to use that workflow in the wild. From jedbrown at mcs.anl.gov Tue Jan 15 16:05:34 2013 From: jedbrown at mcs.anl.gov (Jed Brown) Date: Tue, 15 Jan 2013 16:05:34 -0600 Subject: [petsc-users] make dist + hg In-Reply-To: References: <20130115213511.GA19867@ices.utexas.edu> Message-ID: On Tue, Jan 15, 2013 at 3:51 PM, Jed Brown wrote: > On Tue, Jan 15, 2013 at 3:49 PM, Satish Balay wrote: > >> How does one get rid of the 'old patches' in this workflow? > > > hg strip > FWIW, when you fetch modified changes with git, you get a note like this + b67207b...68aa12d master -> origin/master (forced update) and the remote is updated without the user needing to do anything. This also applies if you have refactored the patches, perhaps merging, reordering, or dropping. At that point, you can either "git reset --hard origin" which explicitly sets your local branch to match what you just fetched, or the usual "git rebase" which discards the patches that are effectively present (while rebasing) and keeps any new work you had done in the clone (which is probably nothing in Toby's case). -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From wlowrie at uw.edu Tue Jan 15 16:12:24 2013 From: wlowrie at uw.edu (Weston Lowrie) Date: Tue, 15 Jan 2013 17:12:24 -0500 Subject: [petsc-users] PetscLayout and GetRange Message-ID: Hi, I have a problem where I want to grow the size of a Vec (or Mat) many times during program execution. I think for efficiency purposes I would just want to allocate a maximum size, and then only use the portion that I need. In the vector case, it is rather simple, just use the beginning of the vector, and add values to the end. This leads to me to the problem of processor ownership ranges. From a previous email I noticed one could use the PetscLayout object and keep adjusting it as the useful part of the vector grows. Does this sound like a good approach? I noticed the PetscLayout is not available in Fortran bindings. Any workarounds for this? I suppose I can just manually calculate the processor ranges? Thanks for the help, Wes -------------- next part -------------- An HTML attachment was scrubbed... URL: From jedbrown at mcs.anl.gov Tue Jan 15 16:15:35 2013 From: jedbrown at mcs.anl.gov (Jed Brown) Date: Tue, 15 Jan 2013 16:15:35 -0600 Subject: [petsc-users] PetscLayout and GetRange In-Reply-To: References: Message-ID: VecGetOwnershipRange() You can use VecCreateMPIWithArray() using your own array preallocated to be as long as you want. If you profile, you'll probably find this is not a meaningful optimization. On Tue, Jan 15, 2013 at 4:12 PM, Weston Lowrie wrote: > Hi, > > I have a problem where I want to grow the size of a Vec (or Mat) many > times during program execution. I think for efficiency purposes I would > just want to allocate a maximum size, and then only use the portion that I > need. In the vector case, it is rather simple, just use the beginning of > the vector, and add values to the end. > > This leads to me to the problem of processor ownership ranges. From a > previous email I noticed one could use the PetscLayout object and keep > adjusting it as the useful part of the vector grows. Does this sound like > a good approach? > > I noticed the PetscLayout is not available in Fortran bindings. Any > workarounds for this? I suppose I can just manually calculate the > processor ranges? > > Thanks for the help, > Wes > > > > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From wlowrie at u.washington.edu Tue Jan 15 16:21:07 2013 From: wlowrie at u.washington.edu (Weston Lowrie) Date: Tue, 15 Jan 2013 17:21:07 -0500 Subject: [petsc-users] PetscLayout and GetRange In-Reply-To: References: Message-ID: That's interesting. If I understand you correctly, I would create a vector of the size I want specifically for calculating the ownership range, then use that on the real vectors. Sounds like that would work. In my case, with many vectors, it does not make sense to copy them to a resized vector every time I want them to grow leading to many creates and destroys. Wes On Tue, Jan 15, 2013 at 5:15 PM, Jed Brown wrote: > VecGetOwnershipRange() > > You can use VecCreateMPIWithArray() using your own array preallocated to > be as long as you want. If you profile, you'll probably find this is not a > meaningful optimization. > > > On Tue, Jan 15, 2013 at 4:12 PM, Weston Lowrie wrote: > >> Hi, >> >> I have a problem where I want to grow the size of a Vec (or Mat) many >> times during program execution. I think for efficiency purposes I would >> just want to allocate a maximum size, and then only use the portion that I >> need. 
In the vector case, it is rather simple, just use the beginning of >> the vector, and add values to the end. >> >> This leads to me to the problem of processor ownership ranges. From a >> previous email I noticed one could use the PetscLayout object and keep >> adjusting it as the useful part of the vector grows. Does this sound like >> a good approach? >> >> I noticed the PetscLayout is not available in Fortran bindings. Any >> workarounds for this? I suppose I can just manually calculate the >> processor ranges? >> >> Thanks for the help, >> Wes >> >> >> >> >> > -------------- next part -------------- An HTML attachment was scrubbed... URL: From jedbrown at mcs.anl.gov Tue Jan 15 16:26:15 2013 From: jedbrown at mcs.anl.gov (Jed Brown) Date: Tue, 15 Jan 2013 16:26:15 -0600 Subject: [petsc-users] PetscLayout and GetRange In-Reply-To: References: Message-ID: On Tue, Jan 15, 2013 at 4:21 PM, Weston Lowrie wrote: > That's interesting. If I understand you correctly, I would create a > vector of the size I want specifically for calculating the ownership range, > then use that on the real vectors. Sounds like that would work. > > In my case, with many vectors, it does not make sense to copy them to a > resized vector every time I want them to grow leading to many creates and > destroys. > You can't dynamically resize vectors like that, and the global offsets change when you resize. Please profile before jumping to the conclusion that there is some terrible inefficiency here. Unless all your loop does is create Vecs of different sizes, chances are that the VecCreate is insignificant. If you have a profile in which it's a big deal, please send the profile and explain what you are doing and why. > > Wes > > > On Tue, Jan 15, 2013 at 5:15 PM, Jed Brown wrote: > >> VecGetOwnershipRange() >> >> You can use VecCreateMPIWithArray() using your own array preallocated to >> be as long as you want. If you profile, you'll probably find this is not a >> meaningful optimization. >> >> >> On Tue, Jan 15, 2013 at 4:12 PM, Weston Lowrie wrote: >> >>> Hi, >>> >>> I have a problem where I want to grow the size of a Vec (or Mat) many >>> times during program execution. I think for efficiency purposes I would >>> just want to allocate a maximum size, and then only use the portion that I >>> need. In the vector case, it is rather simple, just use the beginning of >>> the vector, and add values to the end. >>> >>> This leads to me to the problem of processor ownership ranges. From a >>> previous email I noticed one could use the PetscLayout object and keep >>> adjusting it as the useful part of the vector grows. Does this sound like >>> a good approach? >>> >>> I noticed the PetscLayout is not available in Fortran bindings. Any >>> workarounds for this? I suppose I can just manually calculate the >>> processor ranges? >>> >>> Thanks for the help, >>> Wes >>> >>> >>> >>> >>> >> > -------------- next part -------------- An HTML attachment was scrubbed... URL: From wlowrie at uw.edu Tue Jan 15 16:31:53 2013 From: wlowrie at uw.edu (Weston Lowrie) Date: Tue, 15 Jan 2013 17:31:53 -0500 Subject: [petsc-users] PetscLayout and GetRange In-Reply-To: References: Message-ID: I don't think there is a inefficiency here. It will be just one extra empty vector since all the other "real" ones are identical size. Sounds like a good strategy to me. I'm not quite at the stage where I can profile yet. I will send if I can get it to a point where it might be significant. 
Thanks for the help, Wes On Tue, Jan 15, 2013 at 5:26 PM, Jed Brown wrote: > On Tue, Jan 15, 2013 at 4:21 PM, Weston Lowrie wrote: > >> That's interesting. If I understand you correctly, I would create a >> vector of the size I want specifically for calculating the ownership range, >> then use that on the real vectors. Sounds like that would work. >> >> In my case, with many vectors, it does not make sense to copy them to a >> resized vector every time I want them to grow leading to many creates and >> destroys. >> > > You can't dynamically resize vectors like that, and the global offsets > change when you resize. > > Please profile before jumping to the conclusion that there is some > terrible inefficiency here. Unless all your loop does is create Vecs of > different sizes, chances are that the VecCreate is insignificant. If you > have a profile in which it's a big deal, please send the profile and > explain what you are doing and why. > > >> >> Wes >> >> >> On Tue, Jan 15, 2013 at 5:15 PM, Jed Brown wrote: >> >>> VecGetOwnershipRange() >>> >>> You can use VecCreateMPIWithArray() using your own array preallocated to >>> be as long as you want. If you profile, you'll probably find this is not a >>> meaningful optimization. >>> >>> >>> On Tue, Jan 15, 2013 at 4:12 PM, Weston Lowrie wrote: >>> >>>> Hi, >>>> >>>> I have a problem where I want to grow the size of a Vec (or Mat) many >>>> times during program execution. I think for efficiency purposes I would >>>> just want to allocate a maximum size, and then only use the portion that I >>>> need. In the vector case, it is rather simple, just use the beginning of >>>> the vector, and add values to the end. >>>> >>>> This leads to me to the problem of processor ownership ranges. From a >>>> previous email I noticed one could use the PetscLayout object and keep >>>> adjusting it as the useful part of the vector grows. Does this sound like >>>> a good approach? >>>> >>>> I noticed the PetscLayout is not available in Fortran bindings. Any >>>> workarounds for this? I suppose I can just manually calculate the >>>> processor ranges? >>>> >>>> Thanks for the help, >>>> Wes >>>> >>>> >>>> >>>> >>>> >>> >> > -------------- next part -------------- An HTML attachment was scrubbed... URL: From w_ang_temp at 163.com Wed Jan 16 07:55:14 2013 From: w_ang_temp at 163.com (w_ang_temp) Date: Wed, 16 Jan 2013 21:55:14 +0800 (CST) Subject: [petsc-users] DIVERGED_DTOL In-Reply-To: References: <18a9408d.272c4.13c1a60725e.Coremail.w_ang_temp@163.com> <3a0d2766.3b.13c1b2272aa.Coremail.w_ang_temp@163.com> <2cb6e1d.179.13c1b48bd19.Coremail.w_ang_temp@163.com> <2d12f2c6.11c04.13c3f4cf9e6.Coremail.w_ang_temp@163.com> Message-ID: <7a6fb3d2.1c53f.13c43a46ae3.Coremail.w_ang_temp@163.com> At 2013-01-16 03:02:32,"Barry Smith" wrote: > >> 1.701448294063e+04 > 1.e4*1.145582415879e+00 hence it declares divergence Hello, Barry I made some tests and it is true. Thanks. But in the mannual, both version 3.2 and 3.3, the rule is:||rk||>dtol*||b||. It is ||b||, not r_0. My misunderstanding? Or the error in the mannual? Besides, in the mannual, both version 3.2 and 3.3, the default dtol=1.0E+5. But from the results in the example, it is 1.0E+4. I donot know the reason. My petsc version is 3.2-p7. Thanks. Jim >Note that at iteration 171 the preconditioned residual is 9.348832909193e-13 < 1.e-12 * 1.145582415879e+00 very good convergence. > >You seem to have set an unreasonably tight convergence criteria. 
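For reference, the tolerances being discussed are the ones controlled by KSPSetTolerances(); a sketch with more conventional values follows (the numbers are only examples, not taken from the run above):

  KSP            ksp;
  PetscErrorCode ierr;

  ierr = KSPCreate(PETSC_COMM_WORLD,&ksp);CHKERRQ(ierr);
  /* rtol 1e-8, abstol 1e-50, dtol 1e+4 (divergence declared when the residual
     norm exceeds dtol times the initial residual norm), maxits 10000;
     PETSC_DEFAULT keeps the built-in value for any argument */
  ierr = KSPSetTolerances(ksp,1.e-8,1.e-50,1.e+4,10000);CHKERRQ(ierr);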
In double precision you can never realistically expect to use a rtol smaller than e-12. In fact normally it is not reasonable to use more than like 1.e-8. Those extra digits don't mean anything. > > Barry > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From wlowrie at uw.edu Wed Jan 16 08:01:50 2013 From: wlowrie at uw.edu (Weston Lowrie) Date: Wed, 16 Jan 2013 09:01:50 -0500 Subject: [petsc-users] PetscLayout and GetRange In-Reply-To: References: Message-ID: On a related note: When using VecGetArray(), it uses the info from VecSetSizes(), which is set on Vec creation. Can this be updated after the Vec has been used? My issue is that even if I know the proper local size that I want, when I call VecGetArray() it's going to use the local size determined on Vec creation. To avoid this problem I could use VecSetValues() for the range I am interested in, rather then using the pointer generated from VecGetArray()? Wes On Tue, Jan 15, 2013 at 5:31 PM, Weston Lowrie wrote: > I don't think there is a inefficiency here. It will be just one extra > empty vector since all the other "real" ones are identical size. Sounds > like a good strategy to me. > > I'm not quite at the stage where I can profile yet. I will send if I can > get it to a point where it might be significant. > > Thanks for the help, > Wes > > > > On Tue, Jan 15, 2013 at 5:26 PM, Jed Brown wrote: > >> On Tue, Jan 15, 2013 at 4:21 PM, Weston Lowrie wrote: >> >>> That's interesting. If I understand you correctly, I would create a >>> vector of the size I want specifically for calculating the ownership range, >>> then use that on the real vectors. Sounds like that would work. >>> >>> In my case, with many vectors, it does not make sense to copy them to a >>> resized vector every time I want them to grow leading to many creates and >>> destroys. >>> >> >> You can't dynamically resize vectors like that, and the global offsets >> change when you resize. >> >> Please profile before jumping to the conclusion that there is some >> terrible inefficiency here. Unless all your loop does is create Vecs of >> different sizes, chances are that the VecCreate is insignificant. If you >> have a profile in which it's a big deal, please send the profile and >> explain what you are doing and why. >> >> >>> >>> Wes >>> >>> >>> On Tue, Jan 15, 2013 at 5:15 PM, Jed Brown wrote: >>> >>>> VecGetOwnershipRange() >>>> >>>> You can use VecCreateMPIWithArray() using your own array preallocated >>>> to be as long as you want. If you profile, you'll probably find this is not >>>> a meaningful optimization. >>>> >>>> >>>> On Tue, Jan 15, 2013 at 4:12 PM, Weston Lowrie wrote: >>>> >>>>> Hi, >>>>> >>>>> I have a problem where I want to grow the size of a Vec (or Mat) many >>>>> times during program execution. I think for efficiency purposes I would >>>>> just want to allocate a maximum size, and then only use the portion that I >>>>> need. In the vector case, it is rather simple, just use the beginning of >>>>> the vector, and add values to the end. >>>>> >>>>> This leads to me to the problem of processor ownership ranges. From a >>>>> previous email I noticed one could use the PetscLayout object and keep >>>>> adjusting it as the useful part of the vector grows. Does this sound like >>>>> a good approach? >>>>> >>>>> I noticed the PetscLayout is not available in Fortran bindings. Any >>>>> workarounds for this? I suppose I can just manually calculate the >>>>> processor ranges? 
>>>>> >>>>> Thanks for the help, >>>>> Wes >>>>> >>>>> >>>>> >>>>> >>>>> >>>> >>> >> > -------------- next part -------------- An HTML attachment was scrubbed... URL: From jedbrown at mcs.anl.gov Wed Jan 16 08:07:51 2013 From: jedbrown at mcs.anl.gov (Jed Brown) Date: Wed, 16 Jan 2013 08:07:51 -0600 Subject: [petsc-users] PetscLayout and GetRange In-Reply-To: References: Message-ID: You can't change the size of the vector after it has been created. You can reuse the memory if you manage it separately (by calling VecCreateMPIWithArray()). Don't try to "trick" the Vec. I'm pretty sure your performance anxiety is premature. Just call VecDestroy() and create a new Vec for the next iteration. Note: it's inappropriate to think of a Vec as a dynamic array that you're using to accumulate an unknown amount of data. The Vec is meant to be used for linear algebra (including collective semantics). If you're building something dynamically, build it with a dynamic data structure and then put into a Vec once the structure is built. On Wed, Jan 16, 2013 at 8:01 AM, Weston Lowrie wrote: > On a related note: > When using VecGetArray(), it uses the info from VecSetSizes(), which is > set on Vec creation. Can this be updated after the Vec has been used? > > My issue is that even if I know the proper local size that I want, when I > call VecGetArray() it's going to use the local size determined on Vec > creation. > > To avoid this problem I could use VecSetValues() for the range I am > interested in, rather then using the pointer generated from VecGetArray()? > > Wes > > On Tue, Jan 15, 2013 at 5:31 PM, Weston Lowrie wrote: > >> I don't think there is a inefficiency here. It will be just one extra >> empty vector since all the other "real" ones are identical size. Sounds >> like a good strategy to me. >> >> I'm not quite at the stage where I can profile yet. I will send if I can >> get it to a point where it might be significant. >> >> Thanks for the help, >> Wes >> >> >> >> On Tue, Jan 15, 2013 at 5:26 PM, Jed Brown wrote: >> >>> On Tue, Jan 15, 2013 at 4:21 PM, Weston Lowrie >> > wrote: >>> >>>> That's interesting. If I understand you correctly, I would create a >>>> vector of the size I want specifically for calculating the ownership range, >>>> then use that on the real vectors. Sounds like that would work. >>>> >>>> In my case, with many vectors, it does not make sense to copy them to a >>>> resized vector every time I want them to grow leading to many creates and >>>> destroys. >>>> >>> >>> You can't dynamically resize vectors like that, and the global offsets >>> change when you resize. >>> >>> Please profile before jumping to the conclusion that there is some >>> terrible inefficiency here. Unless all your loop does is create Vecs of >>> different sizes, chances are that the VecCreate is insignificant. If you >>> have a profile in which it's a big deal, please send the profile and >>> explain what you are doing and why. >>> >>> >>>> >>>> Wes >>>> >>>> >>>> On Tue, Jan 15, 2013 at 5:15 PM, Jed Brown wrote: >>>> >>>>> VecGetOwnershipRange() >>>>> >>>>> You can use VecCreateMPIWithArray() using your own array preallocated >>>>> to be as long as you want. If you profile, you'll probably find this is not >>>>> a meaningful optimization. >>>>> >>>>> >>>>> On Tue, Jan 15, 2013 at 4:12 PM, Weston Lowrie wrote: >>>>> >>>>>> Hi, >>>>>> >>>>>> I have a problem where I want to grow the size of a Vec (or Mat) many >>>>>> times during program execution. 
I think for efficiency purposes I would >>>>>> just want to allocate a maximum size, and then only use the portion that I >>>>>> need. In the vector case, it is rather simple, just use the beginning of >>>>>> the vector, and add values to the end. >>>>>> >>>>>> This leads to me to the problem of processor ownership ranges. From >>>>>> a previous email I noticed one could use the PetscLayout object and keep >>>>>> adjusting it as the useful part of the vector grows. Does this sound like >>>>>> a good approach? >>>>>> >>>>>> I noticed the PetscLayout is not available in Fortran bindings. Any >>>>>> workarounds for this? I suppose I can just manually calculate the >>>>>> processor ranges? >>>>>> >>>>>> Thanks for the help, >>>>>> Wes >>>>>> >>>>>> >>>>>> >>>>>> >>>>>> >>>>> >>>> >>> >> > -------------- next part -------------- An HTML attachment was scrubbed... URL: From wlowrie at u.washington.edu Wed Jan 16 08:21:54 2013 From: wlowrie at u.washington.edu (Weston Lowrie) Date: Wed, 16 Jan 2013 09:21:54 -0500 Subject: [petsc-users] PetscLayout and GetRange In-Reply-To: References: Message-ID: Jed, Thank you for the good advice! On Wed, Jan 16, 2013 at 9:07 AM, Jed Brown wrote: > You can't change the size of the vector after it has been created. You can > reuse the memory if you manage it separately (by calling > VecCreateMPIWithArray()). Don't try to "trick" the Vec. > > I'm pretty sure your performance anxiety is premature. Just call > VecDestroy() and create a new Vec for the next iteration. > > Note: it's inappropriate to think of a Vec as a dynamic array that you're > using to accumulate an unknown amount of data. The Vec is meant to be used > for linear algebra (including collective semantics). If you're building > something dynamically, build it with a dynamic data structure and then put > into a Vec once the structure is built. > > > On Wed, Jan 16, 2013 at 8:01 AM, Weston Lowrie wrote: > >> On a related note: >> When using VecGetArray(), it uses the info from VecSetSizes(), which is >> set on Vec creation. Can this be updated after the Vec has been used? >> >> My issue is that even if I know the proper local size that I want, when I >> call VecGetArray() it's going to use the local size determined on Vec >> creation. >> >> To avoid this problem I could use VecSetValues() for the range I am >> interested in, rather then using the pointer generated from VecGetArray()? >> >> Wes >> >> On Tue, Jan 15, 2013 at 5:31 PM, Weston Lowrie wrote: >> >>> I don't think there is a inefficiency here. It will be just one extra >>> empty vector since all the other "real" ones are identical size. Sounds >>> like a good strategy to me. >>> >>> I'm not quite at the stage where I can profile yet. I will send if I >>> can get it to a point where it might be significant. >>> >>> Thanks for the help, >>> Wes >>> >>> >>> >>> On Tue, Jan 15, 2013 at 5:26 PM, Jed Brown wrote: >>> >>>> On Tue, Jan 15, 2013 at 4:21 PM, Weston Lowrie < >>>> wlowrie at u.washington.edu> wrote: >>>> >>>>> That's interesting. If I understand you correctly, I would create a >>>>> vector of the size I want specifically for calculating the ownership range, >>>>> then use that on the real vectors. Sounds like that would work. >>>>> >>>>> In my case, with many vectors, it does not make sense to copy them to >>>>> a resized vector every time I want them to grow leading to many creates and >>>>> destroys. >>>>> >>>> >>>> You can't dynamically resize vectors like that, and the global offsets >>>> change when you resize. 
>>>> >>>> Please profile before jumping to the conclusion that there is some >>>> terrible inefficiency here. Unless all your loop does is create Vecs of >>>> different sizes, chances are that the VecCreate is insignificant. If you >>>> have a profile in which it's a big deal, please send the profile and >>>> explain what you are doing and why. >>>> >>>> >>>>> >>>>> Wes >>>>> >>>>> >>>>> On Tue, Jan 15, 2013 at 5:15 PM, Jed Brown wrote: >>>>> >>>>>> VecGetOwnershipRange() >>>>>> >>>>>> You can use VecCreateMPIWithArray() using your own array preallocated >>>>>> to be as long as you want. If you profile, you'll probably find this is not >>>>>> a meaningful optimization. >>>>>> >>>>>> >>>>>> On Tue, Jan 15, 2013 at 4:12 PM, Weston Lowrie wrote: >>>>>> >>>>>>> Hi, >>>>>>> >>>>>>> I have a problem where I want to grow the size of a Vec (or Mat) >>>>>>> many times during program execution. I think for efficiency purposes I >>>>>>> would just want to allocate a maximum size, and then only use the portion >>>>>>> that I need. In the vector case, it is rather simple, just use the >>>>>>> beginning of the vector, and add values to the end. >>>>>>> >>>>>>> This leads to me to the problem of processor ownership ranges. From >>>>>>> a previous email I noticed one could use the PetscLayout object and keep >>>>>>> adjusting it as the useful part of the vector grows. Does this sound like >>>>>>> a good approach? >>>>>>> >>>>>>> I noticed the PetscLayout is not available in Fortran bindings. Any >>>>>>> workarounds for this? I suppose I can just manually calculate the >>>>>>> processor ranges? >>>>>>> >>>>>>> Thanks for the help, >>>>>>> Wes >>>>>>> >>>>>>> >>>>>>> >>>>>>> >>>>>>> >>>>>> >>>>> >>>> >>> >> > -------------- next part -------------- An HTML attachment was scrubbed... URL: From bsmith at mcs.anl.gov Wed Jan 16 11:32:25 2013 From: bsmith at mcs.anl.gov (Barry Smith) Date: Wed, 16 Jan 2013 11:32:25 -0600 Subject: [petsc-users] DIVERGED_DTOL In-Reply-To: <7a6fb3d2.1c53f.13c43a46ae3.Coremail.w_ang_temp@163.com> References: <18a9408d.272c4.13c1a60725e.Coremail.w_ang_temp@163.com> <3a0d2766.3b.13c1b2272aa.Coremail.w_ang_temp@163.com> <2cb6e1d.179.13c1b48bd19.Coremail.w_ang_temp@163.com> <2d12f2c6.11c04.13c3f4cf9e6.Coremail.w_ang_temp@163.com> <7a6fb3d2.1c53f.13c43a46ae3.Coremail.w_ang_temp@163.com> Message-ID: <217EC119-925D-4C5C-8610-62A3DB51FD07@mcs.anl.gov> On Jan 16, 2013, at 7:55 AM, w_ang_temp wrote: > At 2013-01-16 03:02:32,"Barry Smith" wrote: > > > >> 1.701448294063e+04 > 1.e4*1.145582415879e+00 hence it declares divergence > Hello, Barry > I made some tests and it is true. Thanks. > > But in the mannual, both version 3.2 and 3.3, the rule is:||rk||>dtol*||b||. It is ||b||, not r_0. > My misunderstanding? Or the error in the manual? Error in the manual. Depending on the solver it is either the preconditioned norm or the unpreconditioned norm > Besides, in the mannual, both version 3.2 and 3.3, the default dtol=1.0E+5. But from the results > in the example, it is 1.0E+4. Another error in the manual > I donot know the reason. My petsc version is 3.2-p7. None of this should really matter. The solver has gone bad at this point regardless of specifics of how you measure it. Barry > Thanks. Jim > > > >Note that at iteration 171 the preconditioned residual is 9.348832909193e-13 < 1.e-12 * 1.145582415879e+00 very good convergence. > > > >You seem to have set an unreasonably tight convergence criteria. In double precision you can never realistically expect to use a rtol smaller than e-12. 
In fact normally it is not reasonable to use more than like 1.e-8. Those extra digits don't mean anything. > > > > Barry > > > > > > From w_ang_temp at 163.com Wed Jan 16 12:09:09 2013 From: w_ang_temp at 163.com (w_ang_temp) Date: Thu, 17 Jan 2013 02:09:09 +0800 (CST) Subject: [petsc-users] Is there something to be paid attention to about MatIsSymmetric? In-Reply-To: References: <7c30a630.9645.13b5c1487f1.Coremail.w_ang_temp@163.com> Message-ID: <4205f3f4.6e.13c448ce2dd.Coremail.w_ang_temp@163.com> >At 2012-12-02 23:10:51,"Jed Brown" wrote: >The test for symmetry is not implemented for all matrix types. Looking at the code, it seems to only be SeqAIJ, but MatIsTranspose(A,A,...) would also work >for MPIAIJ. I made a test. I get all the elements of the matrix and compare the element A(i,j) with A(j,i). The result shows that the matrix is symmetric. But the result of MatIsTranspose(A,A,0.0,matsysflg,ierr) shows that it is unsymmetric(matsysflg=0). I think there is something that I donot know about using the function. >>On Sun, Dec 2, 2012 at 8:45 AM, w_ang_temp wrote: >>Hello, >> I use MatIsSymmetric to know if the matrix A is symmetric. >>According to my model, it should be symmetric due to the theory. >>But I always get the result 'PetscBool *flg = 0', although I >>set 'tol' a large value(0.001). >> Because the matrix is of 20000 dimension, I can not output the >>matrix to the txt. So I want to konw if there is something to be paid attention to >>about the function 'MatIsSymmetric' in version 3.2. Or do I have some other ways >>to determine the symmetry.I think symmetry is one of the most important thing >>in my analysis. >> Thanks. >> Jim -------------- next part -------------- An HTML attachment was scrubbed... URL: From knepley at gmail.com Wed Jan 16 12:15:59 2013 From: knepley at gmail.com (Matthew Knepley) Date: Wed, 16 Jan 2013 12:15:59 -0600 Subject: [petsc-users] Is there something to be paid attention to about MatIsSymmetric? In-Reply-To: <4205f3f4.6e.13c448ce2dd.Coremail.w_ang_temp@163.com> References: <7c30a630.9645.13b5c1487f1.Coremail.w_ang_temp@163.com> <4205f3f4.6e.13c448ce2dd.Coremail.w_ang_temp@163.com> Message-ID: On Wed, Jan 16, 2013 at 12:09 PM, w_ang_temp wrote: > > > > > > > >At 2012-12-02 23:10:51,"Jed Brown" wrote: > > >The test for symmetry is not implemented for all matrix types. Looking at > the code, it seems to only be SeqAIJ, but MatIsTranspose(A,A,...) would > also work >for MPIAIJ. > > > I made a test. I get all the elements of the matrix and compare the > element A(i,j) with > A(j,i). The result shows that the matrix is symmetric. But the result of > MatIsTranspose(A,A,0.0,matsysflg,ierr) > shows that it is unsymmetric(matsysflg=0). I think there is something that > I donot know about using the function. > > Obviously your comparison has a tolerance. Matt > >>On Sun, Dec 2, 2012 at 8:45 AM, w_ang_temp wrote: > >> >>Hello, >> >> >> I use MatIsSymmetric to know if the matrix A is symmetric. >> >> >>According to my model, it should be symmetric due to the theory. >> >> >>But I always get the result 'PetscBool *flg = 0', although I >> >> >>set 'tol' a large value(0.001). >> >> >> Because the matrix is of 20000 dimension, I can not output the >> >> >>matrix to the txt. So I want to konw if there is something to be paid >> attention to >> >> >>about the function 'MatIsSymmetric' in version 3.2. Or do I have some >> other ways >> >> >>to determine the symmetry.I think symmetry is one of the most important >> thing >> >> >>in my analysis. >> >> >> Thanks. 
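A sketch of the same check with a small nonzero tolerance instead of 0.0 (the 1.e-10 value is only an example; A stands for the already assembled matrix from the discussion):

  PetscBool      flg;
  PetscErrorCode ierr;

  /* with tol = 0.0 the entries must match bit for bit; a small tolerance
     absorbs round-off introduced during assembly */
  ierr = MatIsTranspose(A,A,1.e-10,&flg);CHKERRQ(ierr);
  if (flg) {
    ierr = PetscPrintf(PETSC_COMM_WORLD,"A is numerically symmetric\n");CHKERRQ(ierr);
  }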
>> >> >> Jim >> >> >> > > > -- What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. -- Norbert Wiener -------------- next part -------------- An HTML attachment was scrubbed... URL: From jefonseca at gmail.com Wed Jan 16 12:59:40 2013 From: jefonseca at gmail.com (Jim Fonseca) Date: Wed, 16 Jan 2013 13:59:40 -0500 Subject: [petsc-users] MatMatMult size error In-Reply-To: References: Message-ID: Hi, Thanks for the suggestion about PETSc-dev regarding the MatMatMult issue. I was able to build PETSc-dev, but slepc-dev gives me errors I haven't seen before. Logs are attached. Any ideas? BEGINNING TO COMPILE LIBRARIES IN ALL DIRECTORIES ========================================= libfast in: /home/jfonseca/NEMOdevcarter/libs/slepc/build-real/src libfast in: /home/jfonseca/NEMOdevcarter/libs/slepc/build-real/src/eps libfast in: /home/jfonseca/NEMOdevcarter/libs/slepc/build-real/src/eps/interface basic.c(560): error: argument of type "PetscFList" is incompatible with parameter of type "MPI_Comm={int}" ierr = PetscFListFind(EPSList,((PetscObject)eps)->comm,type,PETSC_TRUE,(void (**)(void)) &r);CHKERRQ(ierr); Thanks, Jim On Tue, Jan 15, 2013 at 4:43 PM, Hong Zhang wrote: > Jim : > Can you switch to petsc-dev? MatMatMult() > has been updated significantly in petsc-dev > If you still see problem in petsc-dev, send us a short code that > produce the error. We'll check it. > > Hong > > > Hi, > > We are in the process of upgrading from Petsc 3.2 to 3.3p5. > > > > We are creating matrices A and B in this way. > > petsc_matrix = new Mat; > > ierr = MatCreateDense(comm, m, num_cols ,num_rows,num_cols,data,A); > > > > Elsewhere, we have this. It gets called a few times, and on the 4th time, > > the size of matrix is C is wrong. Please see the output below. What > could be > > the problem? 
> > C = new Mat; > > double fill = PETSC_DEFAULT; > > MatMatMult(A,B,MAT_INITIAL_MATRIX, fill, C); > > { > > int m,n; > > MatGetOwnershipRange(A, &m, &n); > > cerr << "A.m = " << m << "\n"; > > cerr << "A.n = " << n << "\n"; > > MatGetSize(A,&m,&n); > > cerr << "A global rows = " << m << "\n"; > > cerr << "A global cols = " << n << "\n"; > > > > MatGetOwnershipRange(B, &m, &n); > > cerr << "B.m = " << m << "\n"; > > cerr << "B.n = " << n << "\n"; > > MatGetSize(B,&m,&n); > > cerr << "B global rows = " << m << "\n"; > > cerr << "B global cols = " << n << "\n"; > > > > MatGetOwnershipRange(*C, &m, &n); > > cerr << "C.m = " << m << "\n"; > > cerr << "C.n = " << n << "\n"; > > > > MatGetSize(*C,&m,&n); > > cerr << "C global rows = " << m << "\n"; > > cerr << "C global cols = " << n << "\n"; > > > > } > > > > A.m = 0 > > A.n = 59 > > A global rows = 59 > > A global cols = 320 > > B.m = 0 > > B.n = 320 > > B global rows = 320 > > B global cols = 320 > > C.m = 0 > > C.n = 59 > > C global rows = 59 > > C global cols = 320 > > A.m = 0 > > A.n = 59 > > A global rows = 59 > > A global cols = 320 > > B.m = 0 > > B.n = 320 > > B global rows = 320 > > B global cols = 59 > > C.m = 10922 > > C.n = -1389327096 > > C global rows = -1389327112 > > C global cols = -1389327112 > > > > > > Thanks, > > Jim > > -- > > Jim Fonseca, PhD > > Research Scientist > > Network for Computational Nanotechnology > > Purdue University > > 765-496-6495 > > www.jimfonseca.com > > > > > -- Jim Fonseca, PhD Research Scientist Network for Computational Nanotechnology Purdue University 765-496-6495 www.jimfonseca.com -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: make.log Type: application/octet-stream Size: 31307 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: error.log Type: application/octet-stream Size: 4601 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: configure.log Type: application/octet-stream Size: 25674 bytes Desc: not available URL: From irving at naml.us Wed Jan 16 13:05:45 2013 From: irving at naml.us (Geoffrey Irving) Date: Wed, 16 Jan 2013 11:05:45 -0800 Subject: [petsc-users] reparsing command line options Message-ID: Is there a convenient way to reparse PETSc options from (a fake) argv without doing PetscFinalize followed by PetscInitialize? I'd like to be able to change options while a program is running. I realize that any previously created objects would not automatically refresh, so doing such a reparse may be a bit sketchy anyways. Thanks, Geoffrey From knepley at gmail.com Wed Jan 16 13:15:34 2013 From: knepley at gmail.com (Matthew Knepley) Date: Wed, 16 Jan 2013 13:15:34 -0600 Subject: [petsc-users] reparsing command line options In-Reply-To: References: Message-ID: I think you can just recall PetscOptionsInsert(). Matt On Wed, Jan 16, 2013 at 1:05 PM, Geoffrey Irving wrote: > Is there a convenient way to reparse PETSc options from (a fake) argv > without doing PetscFinalize followed by PetscInitialize? I'd like to > be able to change options while a program is running. I realize that > any previously created objects would not automatically refresh, so > doing such a reparse may be a bit sketchy anyways. 
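A sketch of what re-calling PetscOptionsInsert() with a synthetic argv can look like (petsc-3.3 calling sequence assumed; the option names are only examples):

  int            fargc   = 3;
  char          *fargv[] = {(char*)"prog",(char*)"-ksp_type",(char*)"gmres",0};
  char         **fargs   = fargv;
  PetscErrorCode ierr;

  /* re-parse options at runtime; objects created (or configured with
     *SetFromOptions) after this call will see the new options */
  ierr = PetscOptionsInsert(&fargc,&fargs,PETSC_NULL);CHKERRQ(ierr);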
> > Thanks, > Geoffrey > -- What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. -- Norbert Wiener -------------- next part -------------- An HTML attachment was scrubbed... URL: From irving at naml.us Wed Jan 16 13:16:36 2013 From: irving at naml.us (Geoffrey Irving) Date: Wed, 16 Jan 2013 11:16:36 -0800 Subject: [petsc-users] reparsing command line options In-Reply-To: References: Message-ID: Perfect, thanks. Geoffrey On Wed, Jan 16, 2013 at 11:15 AM, Matthew Knepley wrote: > I think you can just recall PetscOptionsInsert(). > > Matt > > > On Wed, Jan 16, 2013 at 1:05 PM, Geoffrey Irving wrote: >> >> Is there a convenient way to reparse PETSc options from (a fake) argv >> without doing PetscFinalize followed by PetscInitialize? I'd like to >> be able to change options while a program is running. I realize that >> any previously created objects would not automatically refresh, so >> doing such a reparse may be a bit sketchy anyways. >> >> Thanks, >> Geoffrey > > > > > -- > What most experimenters take for granted before they begin their experiments > is infinitely more interesting than any results to which their experiments > lead. > -- Norbert Wiener From jroman at dsic.upv.es Wed Jan 16 13:18:31 2013 From: jroman at dsic.upv.es (Jose E. Roman) Date: Wed, 16 Jan 2013 20:18:31 +0100 Subject: [petsc-users] MatMatMult size error In-Reply-To: References: Message-ID: <8C77279B-0627-40C1-AE4F-3A23B1D3CA89@dsic.upv.es> El 16/01/2013, a las 19:59, Jim Fonseca escribi?: > Hi, > Thanks for the suggestion about PETSc-dev regarding the MatMatMult issue. > I was able to build PETSc-dev, but slepc-dev gives me errors I haven't seen before. Logs are attached. Any ideas? > > BEGINNING TO COMPILE LIBRARIES IN ALL DIRECTORIES > ========================================= > libfast in: /home/jfonseca/NEMOdevcarter/libs/slepc/build-real/src > libfast in: /home/jfonseca/NEMOdevcarter/libs/slepc/build-real/src/eps > libfast in: /home/jfonseca/NEMOdevcarter/libs/slepc/build-real/src/eps/interface > basic.c(560): error: argument of type "PetscFList" is incompatible with parameter of type "MPI_Comm={int}" > ierr = PetscFListFind(EPSList,((PetscObject)eps)->comm,type,PETSC_TRUE,(void (**)(void)) &r);CHKERRQ(ierr); > > Thanks, > Jim I know slepc-dev is broken but haven't had time to fix. Try updating in a few hours. Jose From fd.kong at siat.ac.cn Wed Jan 16 13:53:51 2013 From: fd.kong at siat.ac.cn (Fande Kong) Date: Wed, 16 Jan 2013 12:53:51 -0700 Subject: [petsc-users] How to reorder the local part of the parallel mat? Message-ID: Hi all, I create a parallel mat with type MATMPIAIJ, and use the MatGetOrdering to reorder it. But it doesn't work. Who can tell me how to do that? I just want to reorder the local part of the matrix. Thanks, -- Fande Kong ShenZhen Institutes of Advanced Technology Chinese Academy of Sciences -------------- next part -------------- An HTML attachment was scrubbed... URL: From bsmith at mcs.anl.gov Wed Jan 16 14:10:40 2013 From: bsmith at mcs.anl.gov (Barry Smith) Date: Wed, 16 Jan 2013 14:10:40 -0600 Subject: [petsc-users] How to reorder the local part of the parallel mat? In-Reply-To: References: Message-ID: <79B09D75-4C01-4D9E-BADB-DC82F3B30FA8@mcs.anl.gov> On Jan 16, 2013, at 1:53 PM, Fande Kong wrote: > Hi all, > > I create a parallel mat with type MATMPIAIJ, and use the MatGetOrdering to reorder it. But it doesn't work. Who can tell me how to do that? 
I just want to reorder the local part of the matrix. We don't have code in place for this type of thing. Maybe if you explain why you would like it we could suggest alternatives. Generally we recommend ordering the original mesh appropriately so that the resulting matrices have good orderings and don't recommend reordering the matrices directly (except for direct solvers). Barry > > Thanks, > > -- > Fande Kong > ShenZhen Institutes of Advanced Technology > Chinese Academy of Sciences From fd.kong at siat.ac.cn Wed Jan 16 14:26:46 2013 From: fd.kong at siat.ac.cn (Fande Kong) Date: Wed, 16 Jan 2013 13:26:46 -0700 Subject: [petsc-users] How to reorder the local part of the parallel mat? In-Reply-To: <79B09D75-4C01-4D9E-BADB-DC82F3B30FA8@mcs.anl.gov> References: <79B09D75-4C01-4D9E-BADB-DC82F3B30FA8@mcs.anl.gov> Message-ID: Barry, Thank you very much. What I want to do is to order the original mesh. I have partitioned the mesh. But I want to reorder the vertices. Thus I create a matrix that represent the relationships of the vertices, and then I want to use the ordering methods in petsc to reorder the vertices by using matgetordering. What should I do if I want to use the ordering methods (e.g. MATORDERINGRCM ) in petsc to reorder the mesh vertices? On Wed, Jan 16, 2013 at 1:10 PM, Barry Smith wrote: > > On Jan 16, 2013, at 1:53 PM, Fande Kong wrote: > > > Hi all, > > > > I create a parallel mat with type MATMPIAIJ, and use the MatGetOrdering > to reorder it. But it doesn't work. Who can tell me how to do that? I just > want to reorder the local part of the matrix. > > We don't have code in place for this type of thing. Maybe if you > explain why you would like it we could suggest alternatives. > > Generally we recommend ordering the original mesh appropriately so > that the resulting matrices have good orderings and don't recommend > reordering the matrices directly (except for direct solvers). > > Barry > > > > > Thanks, > > > > -- > > Fande Kong > > ShenZhen Institutes of Advanced Technology > > Chinese Academy of Sciences > > > -- Fande Kong ShenZhen Institutes of Advanced Technology Chinese Academy of Sciences -------------- next part -------------- An HTML attachment was scrubbed... URL: From jedbrown at mcs.anl.gov Wed Jan 16 15:32:51 2013 From: jedbrown at mcs.anl.gov (Jed Brown) Date: Wed, 16 Jan 2013 15:32:51 -0600 Subject: [petsc-users] How to reorder the local part of the parallel mat? In-Reply-To: References: <79B09D75-4C01-4D9E-BADB-DC82F3B30FA8@mcs.anl.gov> Message-ID: 1. Partition the mesh 2. Create a sequential matrix representing local mesh adjacency 3. Use MATORDERINGRCM to get a new ordering for that sequential matrix 4. Broadcast the new global indices to neighbors On Wed, Jan 16, 2013 at 2:26 PM, Fande Kong wrote: > Barry, Thank you very much. > > What I want to do is to order the original mesh. I have partitioned the > mesh. But I want to reorder the vertices. Thus I create a matrix that > represent the relationships of the vertices, and then I want to use the > ordering methods in petsc to reorder the vertices by using matgetordering. > > What should I do if I want to use the ordering methods > (e.g. MATORDERINGRCM ) in petsc to reorder the mesh vertices? > > On Wed, Jan 16, 2013 at 1:10 PM, Barry Smith wrote: > >> >> On Jan 16, 2013, at 1:53 PM, Fande Kong wrote: >> >> > Hi all, >> > >> > I create a parallel mat with type MATMPIAIJ, and use the MatGetOrdering >> to reorder it. But it doesn't work. Who can tell me how to do that? 
I just >> want to reorder the local part of the matrix. >> >> We don't have code in place for this type of thing. Maybe if you >> explain why you would like it we could suggest alternatives. >> >> Generally we recommend ordering the original mesh appropriately so >> that the resulting matrices have good orderings and don't recommend >> reordering the matrices directly (except for direct solvers). >> >> Barry >> >> > >> > Thanks, >> > >> > -- >> > Fande Kong >> > ShenZhen Institutes of Advanced Technology >> > Chinese Academy of Sciences >> >> >> > > > -- > Fande Kong > ShenZhen Institutes of Advanced Technology > Chinese Academy of Sciences > -------------- next part -------------- An HTML attachment was scrubbed... URL: From fd.kong at siat.ac.cn Wed Jan 16 15:51:20 2013 From: fd.kong at siat.ac.cn (Fande Kong) Date: Wed, 16 Jan 2013 14:51:20 -0700 Subject: [petsc-users] How to reorder the local part of the parallel mat? In-Reply-To: References: <79B09D75-4C01-4D9E-BADB-DC82F3B30FA8@mcs.anl.gov> Message-ID: Thanks, Jed, Could I directly extract a local sub matrix from the parallel matrix? The local part should be a sequential matrix. On Wed, Jan 16, 2013 at 2:32 PM, Jed Brown wrote: > 1. Partition the mesh > 2. Create a sequential matrix representing local mesh adjacency > 3. Use MATORDERINGRCM to get a new ordering for that sequential matrix > 4. Broadcast the new global indices to neighbors > > > On Wed, Jan 16, 2013 at 2:26 PM, Fande Kong wrote: > >> Barry, Thank you very much. >> >> What I want to do is to order the original mesh. I have partitioned the >> mesh. But I want to reorder the vertices. Thus I create a matrix that >> represent the relationships of the vertices, and then I want to use the >> ordering methods in petsc to reorder the vertices by using matgetordering. >> >> What should I do if I want to use the ordering methods >> (e.g. MATORDERINGRCM ) in petsc to reorder the mesh vertices? >> >> On Wed, Jan 16, 2013 at 1:10 PM, Barry Smith wrote: >> >>> >>> On Jan 16, 2013, at 1:53 PM, Fande Kong wrote: >>> >>> > Hi all, >>> > >>> > I create a parallel mat with type MATMPIAIJ, and use the >>> MatGetOrdering to reorder it. But it doesn't work. Who can tell me how to >>> do that? I just want to reorder the local part of the matrix. >>> >>> We don't have code in place for this type of thing. Maybe if you >>> explain why you would like it we could suggest alternatives. >>> >>> Generally we recommend ordering the original mesh appropriately so >>> that the resulting matrices have good orderings and don't recommend >>> reordering the matrices directly (except for direct solvers). >>> >>> Barry >>> >>> > >>> > Thanks, >>> > >>> > -- >>> > Fande Kong >>> > ShenZhen Institutes of Advanced Technology >>> > Chinese Academy of Sciences >>> >>> >>> >> >> >> -- >> Fande Kong >> ShenZhen Institutes of Advanced Technology >> Chinese Academy of Sciences >> > > -- Fande Kong ShenZhen Institutes of Advanced Technology Chinese Academy of Sciences -------------- next part -------------- An HTML attachment was scrubbed... URL: From jedbrown at mcs.anl.gov Wed Jan 16 15:54:33 2013 From: jedbrown at mcs.anl.gov (Jed Brown) Date: Wed, 16 Jan 2013 15:54:33 -0600 Subject: [petsc-users] How to reorder the local part of the parallel mat? 
In-Reply-To: References: <79B09D75-4C01-4D9E-BADB-DC82F3B30FA8@mcs.anl.gov> Message-ID: http://www.mcs.anl.gov/petsc/petsc-current/docs/manualpages/Mat/MatGetDiagonalBlock.html On Wed, Jan 16, 2013 at 3:51 PM, Fande Kong wrote: > Thanks, Jed, > > Could I directly extract a local sub matrix from the parallel matrix? The > local part should be a sequential matrix. > > > On Wed, Jan 16, 2013 at 2:32 PM, Jed Brown wrote: > >> 1. Partition the mesh >> 2. Create a sequential matrix representing local mesh adjacency >> 3. Use MATORDERINGRCM to get a new ordering for that sequential matrix >> 4. Broadcast the new global indices to neighbors >> >> >> On Wed, Jan 16, 2013 at 2:26 PM, Fande Kong wrote: >> >>> Barry, Thank you very much. >>> >>> What I want to do is to order the original mesh. I have partitioned the >>> mesh. But I want to reorder the vertices. Thus I create a matrix that >>> represent the relationships of the vertices, and then I want to use the >>> ordering methods in petsc to reorder the vertices by using matgetordering. >>> >>> What should I do if I want to use the ordering methods >>> (e.g. MATORDERINGRCM ) in petsc to reorder the mesh vertices? >>> >>> On Wed, Jan 16, 2013 at 1:10 PM, Barry Smith wrote: >>> >>>> >>>> On Jan 16, 2013, at 1:53 PM, Fande Kong wrote: >>>> >>>> > Hi all, >>>> > >>>> > I create a parallel mat with type MATMPIAIJ, and use the >>>> MatGetOrdering to reorder it. But it doesn't work. Who can tell me how to >>>> do that? I just want to reorder the local part of the matrix. >>>> >>>> We don't have code in place for this type of thing. Maybe if you >>>> explain why you would like it we could suggest alternatives. >>>> >>>> Generally we recommend ordering the original mesh appropriately so >>>> that the resulting matrices have good orderings and don't recommend >>>> reordering the matrices directly (except for direct solvers). >>>> >>>> Barry >>>> >>>> > >>>> > Thanks, >>>> > >>>> > -- >>>> > Fande Kong >>>> > ShenZhen Institutes of Advanced Technology >>>> > Chinese Academy of Sciences >>>> >>>> >>>> >>> >>> >>> -- >>> Fande Kong >>> ShenZhen Institutes of Advanced Technology >>> Chinese Academy of Sciences >>> >> >> > > > -- > Fande Kong > ShenZhen Institutes of Advanced Technology > Chinese Academy of Sciences > -------------- next part -------------- An HTML attachment was scrubbed... URL: From jefonseca at gmail.com Wed Jan 16 22:47:05 2013 From: jefonseca at gmail.com (Jim Fonseca) Date: Wed, 16 Jan 2013 23:47:05 -0500 Subject: [petsc-users] MatMatMult size error In-Reply-To: <8C77279B-0627-40C1-AE4F-3A23B1D3CA89@dsic.upv.es> References: <8C77279B-0627-40C1-AE4F-3A23B1D3CA89@dsic.upv.es> Message-ID: Dear Jose, Thanks for looking into this. I seem to still be getting a error with the new change in revision 3170: (I was trying 3169 previously). /home/jfonseca/NEMOdevcarter/libs/slepc/build-real/include/slepcst.h(116): error: identifier "PetscFunctionList" is undefined PETSC_EXTERN PetscFunctionList STList; Thanks, Jim On Wed, Jan 16, 2013 at 2:18 PM, Jose E. Roman wrote: > > El 16/01/2013, a las 19:59, Jim Fonseca escribi?: > > > Hi, > > Thanks for the suggestion about PETSc-dev regarding the MatMatMult issue. > > I was able to build PETSc-dev, but slepc-dev gives me errors I haven't > seen before. Logs are attached. Any ideas? 
> > > > BEGINNING TO COMPILE LIBRARIES IN ALL DIRECTORIES > > ========================================= > > libfast in: /home/jfonseca/NEMOdevcarter/libs/slepc/build-real/src > > libfast in: /home/jfonseca/NEMOdevcarter/libs/slepc/build-real/src/eps > > libfast in: > /home/jfonseca/NEMOdevcarter/libs/slepc/build-real/src/eps/interface > > basic.c(560): error: argument of type "PetscFList" is incompatible with > parameter of type "MPI_Comm={int}" > > ierr = > PetscFListFind(EPSList,((PetscObject)eps)->comm,type,PETSC_TRUE,(void > (**)(void)) &r);CHKERRQ(ierr); > > > > Thanks, > > Jim > > I know slepc-dev is broken but haven't had time to fix. Try updating in a > few hours. > Jose > > -- Jim Fonseca, PhD Research Scientist Network for Computational Nanotechnology Purdue University 765-496-6495 www.jimfonseca.com -------------- next part -------------- An HTML attachment was scrubbed... URL: From jroman at dsic.upv.es Thu Jan 17 01:54:40 2013 From: jroman at dsic.upv.es (Jose E. Roman) Date: Thu, 17 Jan 2013 08:54:40 +0100 Subject: [petsc-users] MatMatMult size error In-Reply-To: References: <8C77279B-0627-40C1-AE4F-3A23B1D3CA89@dsic.upv.es> Message-ID: <0D88CB36-DF96-499E-B441-2A8D6616C732@dsic.upv.es> El 17/01/2013, a las 05:47, Jim Fonseca escribi?: > Dear Jose, > Thanks for looking into this. I seem to still be getting a error with the new change in revision 3170: (I was trying 3169 previously). > > /home/jfonseca/NEMOdevcarter/libs/slepc/build-real/include/slepcst.h(116): error: identifier "PetscFunctionList" is undefined > PETSC_EXTERN PetscFunctionList STList; > Thanks, > Jim > Maybe you have files remaining from a previous build. Try rm $PETSC_ARCH and rebuild again. Jose From ecoon at lanl.gov Fri Jan 18 11:55:09 2013 From: ecoon at lanl.gov (Ethan Coon) Date: Fri, 18 Jan 2013 10:55:09 -0700 Subject: [petsc-users] oddity in MatTranspose() under petsc4py Message-ID: <1358531709.14535.16.camel@kafka.lanl.gov> First, apologies for not looking into this more closely, but I wanted to make sure I wasn't missing something stupid before I rebuilt petsc-dev and petsc4py-dev and started writing PETSc-only versions to see what is going on. See the following odd result: In [1]: from petsc4py import PETSc In [2]: import numpy as np In [3]: n = [0,1] In [4]: vals = np.array([[0,1],[0,0]], 'd') In [5]: A = PETSc.Mat().createAIJ(2,2) In [6]: A.assemble() In [7]: B = A.duplicate() In [8]: B.setValues(n,n,vals, PETSc.InsertMode.ADD_VALUES) In [9]: B.assemble() In [10]: B.transpose(out=A) Out[10]: In [11]: A.view() Matrix Object: 1 MPI processes type: seqbaij row 0: (0, 0) (1, 0) row 1: (0, 1) (1, 0) In [12]: vals Out[12]: array([[ 0., 1.], [ 0., 0.]]) In [13]: A.setValues(n,n,-vals.transpose(), PETSc.InsertMode.ADD_VALUES) In [14]: A.assemble() In [15]: A.view() Matrix Object: 1 MPI processes type: seqbaij row 0: (0, 0) (1, -1) row 1: (0, 1) (1, 0) where I would have expected A to be all zeros... This is on petsc4py-3.3 and petsc-3.3-p5, and I know this code worked on older versions of petsc4py/petsc. If I'm not missing something dumb I'll look a little closer into whether it is PETSc or petsc4py... 
Thanks, Ethan -- ------------------------------------ Ethan Coon Post-Doctoral Researcher Applied Mathematics - T-5 Los Alamos National Laboratory 505-665-8289 http://www.ldeo.columbia.edu/~ecoon/ ------------------------------------ From ecoon at lanl.gov Fri Jan 18 12:16:28 2013 From: ecoon at lanl.gov (Ethan Coon) Date: Fri, 18 Jan 2013 11:16:28 -0700 Subject: [petsc-users] oddity in MatTranspose() under petsc4py In-Reply-To: <1358531709.14535.16.camel@kafka.lanl.gov> References: <1358531709.14535.16.camel@kafka.lanl.gov> Message-ID: <1358532988.14535.22.camel@kafka.lanl.gov> Nevermind... the issue is in numpy being smarter about limiting copies than it used to be, not petsc4py. Forcing numpy to copy the array after the transpose fixes the problem. Which brings up the question as to whether petsc4py should be smart enough to check vals.flags['OWNDATA'] and make a copy if not true before it grabs the underlying c-array? Ethan On Fri, 2013-01-18 at 10:55 -0700, Ethan Coon wrote: > First, apologies for not looking into this more closely, but I wanted to > make sure I wasn't missing something stupid before I rebuilt petsc-dev > and petsc4py-dev and started writing PETSc-only versions to see what is > going on. > > See the following odd result: > > > In [1]: from petsc4py import PETSc > > In [2]: import numpy as np > > In [3]: n = [0,1] > > In [4]: vals = np.array([[0,1],[0,0]], 'd') > > In [5]: A = PETSc.Mat().createAIJ(2,2) > > In [6]: A.assemble() > > In [7]: B = A.duplicate() > > In [8]: B.setValues(n,n,vals, PETSc.InsertMode.ADD_VALUES) > > In [9]: B.assemble() > > In [10]: B.transpose(out=A) > Out[10]: > > In [11]: A.view() > Matrix Object: 1 MPI processes > type: seqbaij > row 0: (0, 0) (1, 0) > row 1: (0, 1) (1, 0) > > In [12]: vals > Out[12]: > array([[ 0., 1.], > [ 0., 0.]]) > > In [13]: A.setValues(n,n,-vals.transpose(), PETSc.InsertMode.ADD_VALUES) > > In [14]: A.assemble() > > In [15]: A.view() > Matrix Object: 1 MPI processes > type: seqbaij > row 0: (0, 0) (1, -1) > row 1: (0, 1) (1, 0) > > where I would have expected A to be all zeros... > > This is on petsc4py-3.3 and petsc-3.3-p5, and I know this code worked on > older versions of petsc4py/petsc. If I'm not missing something dumb > I'll look a little closer into whether it is PETSc or petsc4py... > > Thanks, > > Ethan > > > -- ------------------------------------ Ethan Coon Post-Doctoral Researcher Applied Mathematics - T-5 Los Alamos National Laboratory 505-665-8289 http://www.ldeo.columbia.edu/~ecoon/ ------------------------------------ From jedbrown at mcs.anl.gov Fri Jan 18 12:17:04 2013 From: jedbrown at mcs.anl.gov (Jed Brown) Date: Fri, 18 Jan 2013 12:17:04 -0600 Subject: [petsc-users] oddity in MatTranspose() under petsc4py In-Reply-To: <1358531709.14535.16.camel@kafka.lanl.gov> References: <1358531709.14535.16.camel@kafka.lanl.gov> Message-ID: Ethan, can you make a deep copy of the transposed vals array? I would guess that petsc4py is not not using the transposed view correctly. On Jan 18, 2013 11:55 AM, "Ethan Coon" wrote: > First, apologies for not looking into this more closely, but I wanted to > make sure I wasn't missing something stupid before I rebuilt petsc-dev > and petsc4py-dev and started writing PETSc-only versions to see what is > going on. 
> > See the following odd result: > > > In [1]: from petsc4py import PETSc > > In [2]: import numpy as np > > In [3]: n = [0,1] > > In [4]: vals = np.array([[0,1],[0,0]], 'd') > > In [5]: A = PETSc.Mat().createAIJ(2,2) > > In [6]: A.assemble() > > In [7]: B = A.duplicate() > > In [8]: B.setValues(n,n,vals, PETSc.InsertMode.ADD_VALUES) > > In [9]: B.assemble() > > In [10]: B.transpose(out=A) > Out[10]: > > In [11]: A.view() > Matrix Object: 1 MPI processes > type: seqbaij > row 0: (0, 0) (1, 0) > row 1: (0, 1) (1, 0) > > In [12]: vals > Out[12]: > array([[ 0., 1.], > [ 0., 0.]]) > > In [13]: A.setValues(n,n,-vals.transpose(), PETSc.InsertMode.ADD_VALUES) > > In [14]: A.assemble() > > In [15]: A.view() > Matrix Object: 1 MPI processes > type: seqbaij > row 0: (0, 0) (1, -1) > row 1: (0, 1) (1, 0) > > where I would have expected A to be all zeros... > > This is on petsc4py-3.3 and petsc-3.3-p5, and I know this code worked on > older versions of petsc4py/petsc. If I'm not missing something dumb > I'll look a little closer into whether it is PETSc or petsc4py... > > Thanks, > > Ethan > > > > -- > ------------------------------------ > Ethan Coon > Post-Doctoral Researcher > Applied Mathematics - T-5 > Los Alamos National Laboratory > 505-665-8289 > > http://www.ldeo.columbia.edu/~ecoon/ > ------------------------------------ > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From jefonseca at gmail.com Fri Jan 18 13:58:49 2013 From: jefonseca at gmail.com (Jim Fonseca) Date: Fri, 18 Jan 2013 14:58:49 -0500 Subject: [petsc-users] MatMatMult size error In-Reply-To: <0D88CB36-DF96-499E-B441-2A8D6616C732@dsic.upv.es> References: <8C77279B-0627-40C1-AE4F-3A23B1D3CA89@dsic.upv.es> <0D88CB36-DF96-499E-B441-2A8D6616C732@dsic.upv.es> Message-ID: Hi Jose, I am still getting the same error with revision 3171 and the petsc-dev-6e0adfdd7dd1 from the afternoon of Jan 17th. Please tell me specifically what revisions I need to use. Thank you, Jim On Thu, Jan 17, 2013 at 2:54 AM, Jose E. Roman wrote: > > El 17/01/2013, a las 05:47, Jim Fonseca escribi?: > > > Dear Jose, > > Thanks for looking into this. I seem to still be getting a error with > the new change in revision 3170: (I was trying 3169 previously). > > > > > /home/jfonseca/NEMOdevcarter/libs/slepc/build-real/include/slepcst.h(116): > error: identifier "PetscFunctionList" is undefined > > PETSC_EXTERN PetscFunctionList STList; > > Thanks, > > Jim > > > > Maybe you have files remaining from a previous build. Try rm $PETSC_ARCH > and rebuild again. > Jose > > -- Jim Fonseca, PhD Research Scientist Network for Computational Nanotechnology Purdue University 765-496-6495 www.jimfonseca.com -------------- next part -------------- An HTML attachment was scrubbed... URL: From fd.kong at siat.ac.cn Fri Jan 18 14:20:16 2013 From: fd.kong at siat.ac.cn (Fande Kong) Date: Fri, 18 Jan 2013 13:20:16 -0700 Subject: [petsc-users] How to change indices in an IS object? Message-ID: Dear all, I want to change the indices in an IS object. How should I do ? http://www.mcs.anl.gov/petsc/petsc-current/docs/manualpages/IS/ISGetIndices.html. This function doesn't allow us modify the elements. Thank you in advance. Regards, -- Fande Kong ShenZhen Institutes of Advanced Technology Chinese Academy of Sciences -------------- next part -------------- An HTML attachment was scrubbed... 
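The one-line reply that follows suggests creating a new IS rather than modifying one in place (ISGetIndices only gives read access). A minimal sketch of that approach, assuming the IS lives on PETSC_COMM_WORLD and using PETSc 3.3's C API; the shift applied to each index is only a placeholder for whatever modification is actually needed:

    const PetscInt *old;
    PetscInt       *newidx, n, i, shift = 1;   /* shift stands in for the real change */
    IS             isnew;

    ierr = ISGetLocalSize(is,&n);CHKERRQ(ierr);
    ierr = ISGetIndices(is,&old);CHKERRQ(ierr);
    ierr = PetscMalloc(n*sizeof(PetscInt),&newidx);CHKERRQ(ierr);
    for (i=0; i<n; i++) newidx[i] = old[i] + shift;        /* build the modified indices */
    ierr = ISRestoreIndices(is,&old);CHKERRQ(ierr);
    ierr = ISCreateGeneral(PETSC_COMM_WORLD,n,newidx,PETSC_OWN_POINTER,&isnew);CHKERRQ(ierr);
    ierr = ISDestroy(&is);CHKERRQ(ierr);                   /* drop the old IS, use isnew from here on */

With PETSC_OWN_POINTER the new IS takes ownership of newidx, so it must not be freed separately.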
URL: From jedbrown at mcs.anl.gov Fri Jan 18 14:22:32 2013 From: jedbrown at mcs.anl.gov (Jed Brown) Date: Fri, 18 Jan 2013 14:22:32 -0600 Subject: [petsc-users] How to change indices in an IS object? In-Reply-To: References: Message-ID: Create a new IS. On Fri, Jan 18, 2013 at 2:20 PM, Fande Kong wrote: > Dear all, > > I want to change the indices in an IS object. How should I do ? > http://www.mcs.anl.gov/petsc/petsc-current/docs/manualpages/IS/ISGetIndices.html. > This function doesn't allow us modify the elements. > > Thank you in advance. > > Regards, > -- > Fande Kong > ShenZhen Institutes of Advanced Technology > Chinese Academy of Sciences > -------------- next part -------------- An HTML attachment was scrubbed... URL: From fd.kong at siat.ac.cn Fri Jan 18 14:28:29 2013 From: fd.kong at siat.ac.cn (Fande Kong) Date: Fri, 18 Jan 2013 13:28:29 -0700 Subject: [petsc-users] How to change indices in an IS object? In-Reply-To: References: Message-ID: Thanks. Got it. On Fri, Jan 18, 2013 at 1:22 PM, Jed Brown wrote: > Create a new IS. > > > On Fri, Jan 18, 2013 at 2:20 PM, Fande Kong wrote: > >> Dear all, >> >> I want to change the indices in an IS object. How should I do ? >> http://www.mcs.anl.gov/petsc/petsc-current/docs/manualpages/IS/ISGetIndices.html. >> This function doesn't allow us modify the elements. >> >> Thank you in advance. >> >> Regards, >> -- >> Fande Kong >> ShenZhen Institutes of Advanced Technology >> Chinese Academy of Sciences >> > > -- Fande Kong ShenZhen Institutes of Advanced Technology Chinese Academy of Sciences -------------- next part -------------- An HTML attachment was scrubbed... URL: From jroman at dsic.upv.es Fri Jan 18 14:53:38 2013 From: jroman at dsic.upv.es (Jose E. Roman) Date: Fri, 18 Jan 2013 21:53:38 +0100 Subject: [petsc-users] MatMatMult size error In-Reply-To: References: <8C77279B-0627-40C1-AE4F-3A23B1D3CA89@dsic.upv.es> <0D88CB36-DF96-499E-B441-2A8D6616C732@dsic.upv.es> Message-ID: El 18/01/2013, a las 20:58, Jim Fonseca escribi?: > Hi Jose, > I am still getting the same error with revision 3171 and the petsc-dev-6e0adfdd7dd1 from the afternoon of Jan 17th. Please tell me specifically what revisions I need to use. > Thank you, > Jim The latest petsc-dev and slepc-dev should work. Send logs to slepc-maint. Jose From dominik at itis.ethz.ch Sat Jan 19 02:07:27 2013 From: dominik at itis.ethz.ch (Dominik Szczerba) Date: Sat, 19 Jan 2013 09:07:27 +0100 Subject: [petsc-users] Petsc 3.3.p5 with HYPRE 2.9.0b In-Reply-To: References: Message-ID: > I don't know. If we need to update something in PETSc's interface to Hypre, > we'll do it. I reported the build problem and they said they had patched it, > but did not send me a patch. I encourage more people to ask them to publish > (read-only) their source repository. Then we could stop guessing. I have just been notified by the HYPRE developers that there is now a new version available that will work with PETSc. Thanks, Dominik From jedbrown at mcs.anl.gov Sat Jan 19 08:48:27 2013 From: jedbrown at mcs.anl.gov (Jed Brown) Date: Sat, 19 Jan 2013 08:48:27 -0600 Subject: [petsc-users] Petsc 3.3.p5 with HYPRE 2.9.0b In-Reply-To: References: Message-ID: On Sat, Jan 19, 2013 at 2:07 AM, Dominik Szczerba wrote: > > I don't know. If we need to update something in PETSc's interface to > Hypre, > > we'll do it. I reported the build problem and they said they had patched > it, > > but did not send me a patch. I encourage more people to ask them to > publish > > (read-only) their source repository. 
Then we could stop guessing. > > I have just been notified by the HYPRE developers that there is now a > new version available that will work with PETSc. Yes, that's the version used by --download-hypre in petsc-dev. -------------- next part -------------- An HTML attachment was scrubbed... URL: From zonexo at gmail.com Sat Jan 19 17:49:06 2013 From: zonexo at gmail.com (TAY wee-beng) Date: Sun, 20 Jan 2013 07:49:06 +0800 Subject: [petsc-users] Petsc 3.3.p5 with HYPRE 2.9.0b In-Reply-To: References: Message-ID: <50FB30F2.1080909@gmail.com> Hi, Am I right to say that the newest ver 2.9b is working well with PETSc dev? Thanks Yours sincerely, TAY wee-beng On 1/19/2013 10:48 PM, Jed Brown wrote: > On Sat, Jan 19, 2013 at 2:07 AM, Dominik Szczerba > > wrote: > > > I don't know. If we need to update something in PETSc's > interface to Hypre, > > we'll do it. I reported the build problem and they said they had > patched it, > > but did not send me a patch. I encourage more people to ask them > to publish > > (read-only) their source repository. Then we could stop guessing. > > I have just been notified by the HYPRE developers that there is now a > new version available that will work with PETSc. > > > Yes, that's the version used by --download-hypre in petsc-dev. -------------- next part -------------- An HTML attachment was scrubbed... URL: From jedbrown at mcs.anl.gov Sat Jan 19 18:06:59 2013 From: jedbrown at mcs.anl.gov (Jed Brown) Date: Sat, 19 Jan 2013 18:06:59 -0600 Subject: [petsc-users] Petsc 3.3.p5 with HYPRE 2.9.0b In-Reply-To: <50FB30F2.1080909@gmail.com> References: <50FB30F2.1080909@gmail.com> Message-ID: On Sat, Jan 19, 2013 at 5:49 PM, TAY wee-beng wrote: > Hi, > > Am I right to say that the newest ver 2.9b is working well with PETSc dev? > It's named 2.9.1a and is working fine. Please report any problems. > > Thanks > > Yours sincerely, > > TAY wee-beng > > On 1/19/2013 10:48 PM, Jed Brown wrote: > > On Sat, Jan 19, 2013 at 2:07 AM, Dominik Szczerba wrote: > >> > I don't know. If we need to update something in PETSc's interface to >> Hypre, >> > we'll do it. I reported the build problem and they said they had >> patched it, >> > but did not send me a patch. I encourage more people to ask them to >> publish >> > (read-only) their source repository. Then we could stop guessing. >> >> I have just been notified by the HYPRE developers that there is now a >> new version available that will work with PETSc. > > > Yes, that's the version used by --download-hypre in petsc-dev. > > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From mike.hui.zhang at hotmail.com Sun Jan 20 10:12:12 2013 From: mike.hui.zhang at hotmail.com (Hui Zhang) Date: Sun, 20 Jan 2013 17:12:12 +0100 Subject: [petsc-users] parallel Mat Mult a serial Vec or a serial Mat Message-ID: parallel Mat, multiplies a serial Vec or a serial Mat Is it supported directly? If yes, can the resulting Vec/Mat be serial or parallel? From jedbrown at mcs.anl.gov Sun Jan 20 10:13:41 2013 From: jedbrown at mcs.anl.gov (Jed Brown) Date: Sun, 20 Jan 2013 10:13:41 -0600 Subject: [petsc-users] parallel Mat Mult a serial Vec or a serial Mat In-Reply-To: References: Message-ID: No, MatMult and MatMatMult require all objects to be on the same communicator. On Sun, Jan 20, 2013 at 10:12 AM, Hui Zhang wrote: > parallel Mat, multiplies a serial Vec or a serial Mat > > Is it supported directly? If yes, can the resulting Vec/Mat be serial or > parallel? 
> > -------------- next part -------------- An HTML attachment was scrubbed... URL: From mike.hui.zhang at hotmail.com Sun Jan 20 10:22:03 2013 From: mike.hui.zhang at hotmail.com (Hui Zhang) Date: Sun, 20 Jan 2013 17:22:03 +0100 Subject: [petsc-users] parallel Mat Mult a serial Vec or a serial Mat In-Reply-To: References: Message-ID: On Jan 20, 2013, at 5:13 PM, Jed Brown wrote: > No, MatMult and MatMatMult require all objects to be on the same communicator. Thanks for the quick answer! It seems a general rule that all operands and results must be on the same communicator. Can I lift an object from a sub-communicator to the sup-communicator without changing the parallel layout of the object? > > > On Sun, Jan 20, 2013 at 10:12 AM, Hui Zhang wrote: > parallel Mat, multiplies a serial Vec or a serial Mat > > Is it supported directly? If yes, can the resulting Vec/Mat be serial or parallel? > > From jedbrown at mcs.anl.gov Sun Jan 20 10:28:57 2013 From: jedbrown at mcs.anl.gov (Jed Brown) Date: Sun, 20 Jan 2013 10:28:57 -0600 Subject: [petsc-users] parallel Mat Mult a serial Vec or a serial Mat In-Reply-To: References: Message-ID: On Sun, Jan 20, 2013 at 10:22 AM, Hui Zhang wrote: > > On Jan 20, 2013, at 5:13 PM, Jed Brown wrote: > > > No, MatMult and MatMatMult require all objects to be on the same > communicator. > > Thanks for the quick answer! It seems a general rule that all operands > and results must be > on the same communicator. Can I lift an object from a sub-communicator to > the sup-communicator > without changing the parallel layout of the object? > You can with a Vec (e.g., using VecPlaceArray), but you're responsible for the sharing so it's fragile. In general, I recommend creating the object on the largest communicator involved and then getting access more locally. In the first pass, just make a copy to a local subcomm unless a routine exists to do it automatically. If you profile and see that the copy is significant (remarkably rare in practice) you can sometimes optimize what is copied versus shared, or change the parent parallel data structure to make the local access faster. > > > > > > > On Sun, Jan 20, 2013 at 10:12 AM, Hui Zhang > wrote: > > parallel Mat, multiplies a serial Vec or a serial Mat > > > > Is it supported directly? If yes, can the resulting Vec/Mat be serial or > parallel? > > > > > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From mike.hui.zhang at hotmail.com Sun Jan 20 12:26:01 2013 From: mike.hui.zhang at hotmail.com (Hui Zhang) Date: Sun, 20 Jan 2013 19:26:01 +0100 Subject: [petsc-users] parallel Mat Mult a serial Vec or a serial Mat In-Reply-To: References: Message-ID: On Jan 20, 2013, at 5:28 PM, Jed Brown wrote: > On Sun, Jan 20, 2013 at 10:22 AM, Hui Zhang wrote: > > On Jan 20, 2013, at 5:13 PM, Jed Brown wrote: > > > No, MatMult and MatMatMult require all objects to be on the same communicator. > > Thanks for the quick answer! It seems a general rule that all operands and results must be > on the same communicator. Can I lift an object from a sub-communicator to the sup-communicator > without changing the parallel layout of the object? > > You can with a Vec (e.g., using VecPlaceArray), but you're responsible for the sharing so it's fragile. > > In general, I recommend creating the object on the largest communicator involved and then getting access more locally. In the first pass, just make a copy to a local subcomm unless a routine exists to do it automatically. 
If you profile and see that the copy is significant (remarkably rare in practice) you can sometimes optimize what is copied versus shared, or change the parent parallel data structure to make the local access faster. I think about the method and find it is not as convenient as allowing inter-communicator operations. For example, VecScatter actually allows the two Vec in different communicators (or one must include the other?). I have more questions. (1) Does MatScatter supports MatMatMult? (2) Does MatConvert works on MATNEST? (3) Does MatGetLocalSubMat support localization to a sub-communicator instead of a process? Thanks! > > > > > > > On Sun, Jan 20, 2013 at 10:12 AM, Hui Zhang wrote: > > parallel Mat, multiplies a serial Vec or a serial Mat > > > > Is it supported directly? If yes, can the resulting Vec/Mat be serial or parallel? > > > > > > From knepley at gmail.com Sun Jan 20 12:38:46 2013 From: knepley at gmail.com (Matthew Knepley) Date: Sun, 20 Jan 2013 12:38:46 -0600 Subject: [petsc-users] parallel Mat Mult a serial Vec or a serial Mat In-Reply-To: References: Message-ID: On Sun, Jan 20, 2013 at 12:26 PM, Hui Zhang wrote: > > On Jan 20, 2013, at 5:28 PM, Jed Brown wrote: > > > On Sun, Jan 20, 2013 at 10:22 AM, Hui Zhang > wrote: > > > > On Jan 20, 2013, at 5:13 PM, Jed Brown wrote: > > > > > No, MatMult and MatMatMult require all objects to be on the same > communicator. > > > > Thanks for the quick answer! It seems a general rule that all operands > and results must be > > on the same communicator. Can I lift an object from a sub-communicator > to the sup-communicator > > without changing the parallel layout of the object? > > > > You can with a Vec (e.g., using VecPlaceArray), but you're responsible > for the sharing so it's fragile. > > > > In general, I recommend creating the object on the largest communicator > involved and then getting access more locally. In the first pass, just make > a copy to a local subcomm unless a routine exists to do it automatically. > If you profile and see that the copy is significant (remarkably rare in > practice) you can sometimes optimize what is copied versus shared, or > change the parent parallel data structure to make the local access faster. > > I think about the method and find it is not as convenient as allowing > inter-communicator operations. > For example, VecScatter actually allows the two Vec in different > communicators (or one must include > the other?). > This is about the design of MPI. > I have more questions. > (1) Does MatScatter supports MatMatMult? > No > (2) Does MatConvert works on MATNEST? > Yes, for AIJ > (3) Does MatGetLocalSubMat support localization to a sub-communicator > instead of a process? > No, use MatGetSubMatrix. Matt > Thanks! > > > > > > > > > > > > On Sun, Jan 20, 2013 at 10:12 AM, Hui Zhang < > mike.hui.zhang at hotmail.com> wrote: > > > parallel Mat, multiplies a serial Vec or a serial Mat > > > > > > Is it supported directly? If yes, can the resulting Vec/Mat be serial > or parallel? > > > > > > > > > > > > -- What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. -- Norbert Wiener -------------- next part -------------- An HTML attachment was scrubbed... 
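A minimal sketch of the MatGetSubMatrix() route mentioned above, assuming PETSc 3.3's C API. The index sets here simply select this process's owned rows and owned columns, purely to show the calling pattern; in practice isrow would list the rows you want and iscol the columns each process should own in the result:

    PetscInt rstart, rend, cstart, cend;
    IS       isrow, iscol;
    Mat      Asub;

    ierr = MatGetOwnershipRange(A,&rstart,&rend);CHKERRQ(ierr);
    ierr = MatGetOwnershipRangeColumn(A,&cstart,&cend);CHKERRQ(ierr);
    ierr = ISCreateStride(PETSC_COMM_WORLD,rend-rstart,rstart,1,&isrow);CHKERRQ(ierr);
    ierr = ISCreateStride(PETSC_COMM_WORLD,cend-cstart,cstart,1,&iscol);CHKERRQ(ierr);
    ierr = MatGetSubMatrix(A,isrow,iscol,MAT_INITIAL_MATRIX,&Asub);CHKERRQ(ierr);  /* Asub stays on A's communicator */
    /* ... use Asub ... */
    ierr = MatDestroy(&Asub);CHKERRQ(ierr);
    ierr = ISDestroy(&isrow);CHKERRQ(ierr);
    ierr = ISDestroy(&iscol);CHKERRQ(ierr);

Note that the submatrix lives on the same communicator as A; a truly sequential copy of the local block is what MatGetDiagonalBlock(), linked earlier in the thread, provides for square matrices.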
URL: From jedbrown at mcs.anl.gov Sun Jan 20 12:41:01 2013 From: jedbrown at mcs.anl.gov (Jed Brown) Date: Sun, 20 Jan 2013 12:41:01 -0600 Subject: [petsc-users] parallel Mat Mult a serial Vec or a serial Mat In-Reply-To: References: Message-ID: On Sun, Jan 20, 2013 at 12:26 PM, Hui Zhang wrote: > I think about the method and find it is not as convenient as allowing > inter-communicator operations. > Supporting general mixed-communicator operations is a non-starter because the semantics can be ambiguous, it's very easy to get deadlock, and you can't debug or check invariants. Also note that inter-communicators require MPI-2 and have historically had implementation bugs since they are not heavily used. For example, VecScatter actually allows the two Vec in different > communicators (or one must include > the other?). > > I have more questions. > (1) Does MatScatter supports MatMatMult? > No, and VecScatter does not keep enough information to support such an operation. If you have the index sets, you can MatPermute(), but moving matrix entries around is quite expensive so you should usually avoid it. > (2) Does MatConvert works on MATNEST? > No, the implementation was never quite finished. We still think it's a useful thing. > (3) Does MatGetLocalSubMat support localization to a sub-communicator > instead of a process? > No, you might be misunderstanding the purpose MatGetLocalSubMatrix(). It always returns a Mat object, but the communicator is not specified and it may not be "functional" in the sense of supporting MatMult, etc. It does, however, support MatSetValuesLocal() and related operations. See snes/examples/tutorials/ex28.c for an example of assembling a partitioned system into nested versus monolithic format without any knowledge of the representation. -------------- next part -------------- An HTML attachment was scrubbed... URL: From mike.hui.zhang at hotmail.com Sun Jan 20 13:06:53 2013 From: mike.hui.zhang at hotmail.com (Hui Zhang) Date: Sun, 20 Jan 2013 20:06:53 +0100 Subject: [petsc-users] parallel Mat Mult a serial Vec or a serial Mat In-Reply-To: References: Message-ID: On Jan 20, 2013, at 7:41 PM, Jed Brown wrote: > > On Sun, Jan 20, 2013 at 12:26 PM, Hui Zhang wrote: > I think about the method and find it is not as convenient as allowing inter-communicator operations. > > Supporting general mixed-communicator operations is a non-starter because the semantics can be ambiguous, it's very easy to get deadlock, and you can't debug or check invariants. > > Also note that inter-communicators require MPI-2 and have historically had implementation bugs since they are not heavily used. > > For example, VecScatter actually allows the two Vec in different communicators (or one must include > the other?). > > I have more questions. > (1) Does MatScatter supports MatMatMult? > > No, and VecScatter does not keep enough information to support such an operation. If you have the index sets, you can MatPermute(), but moving matrix entries around is quite expensive so you should usually avoid it. > > (2) Does MatConvert works on MATNEST? > > No, the implementation was never quite finished. We still think it's a useful thing. > > (3) Does MatGetLocalSubMat support localization to a sub-communicator instead of a process? > > No, you might be misunderstanding the purpose MatGetLocalSubMatrix(). It always returns a Mat object, but the communicator is not specified and it may not be "functional" in the sense of supporting MatMult, etc. 
It does, however, support MatSetValuesLocal() and related operations. See snes/examples/tutorials/ex28.c for an example of assembling a partitioned system into nested versus monolithic format without any knowledge of the representation. Thanks very much! I am taking a look of this example. But MATNEST does not support factorization. I understand now I must do it in a basic way using MatSetValuesLocal. From jedbrown at mcs.anl.gov Sun Jan 20 13:16:27 2013 From: jedbrown at mcs.anl.gov (Jed Brown) Date: Sun, 20 Jan 2013 13:16:27 -0600 Subject: [petsc-users] parallel Mat Mult a serial Vec or a serial Mat In-Reply-To: References: Message-ID: On Sun, Jan 20, 2013 at 1:06 PM, Hui Zhang wrote: > Thanks very much! I am taking a look of this example. But MATNEST does > not support factorization. > I understand now I must do it in a basic way using MatSetValuesLocal. > With that example, you run with -dm_mat_type nest if you want to use MATNEST, acknowledging that it means you'll be using a fieldsplit preconditioner. Note that the term "nest" does not appear anywhere in the code. Without that option, an AIJ format will be used, so you can use a direct solver or anything else. MATNEST here is a run-time optimization that is primarily only useful for fieldsplit (or shell) preconditioning. It's a very bad idea to make your code depend on MATNEST because it severely limits your choices of algorithms. -------------- next part -------------- An HTML attachment was scrubbed... URL: From ling.zou at inl.gov Mon Jan 21 18:41:31 2013 From: ling.zou at inl.gov (Zou (Non-US), Ling) Date: Mon, 21 Jan 2013 17:41:31 -0700 Subject: [petsc-users] PETSc 3.3 p5 installation newbie question Message-ID: Hi, all I downloaded the PETSc 3.3 p5 version and installed it. Everything seems to be working fine as I followed those instructions and eventually I did the test like: make PETSC_DIR=/opt/packages/petsc/petsc-3.3-p5 PETSC_ARCH=arch-darwin-c-debug test and I got: Running test examples to verify correct installation Using PETSC_DIR=/opt/packages/petsc/petsc-3.3-p5 and PETSC_ARCH=arch-darwin-c-debug C/C++ example src/snes/examples/tutorials/ex19 run *successfully* with 1 MPI process C/C++ example src/snes/examples/tutorials/ex19 run *successfully* with 2 MPI processes Fortran example src/snes/examples/tutorials/ex5f run *successfully* with 1 MPI process Completed test examples I guess everything so far is good. Question: When I went to petsc-3.3-p5/src/snes/examples/tests, trying to see how those example codes work, I did these and none of them worked. ============================================= ../petsc-3.3-p5/src/snes/examples/tests]> make makefile:12: /conf/variables: No such file or directory makefile:13: /conf/rules: No such file or directory makefile:259: /conf/test: No such file or directory make: *** No rule to make target `/conf/test'. Stop. ============================================= ../petsc-3.3-p5/src/snes/examples/tests]> make test makefile:12: /conf/variables: No such file or directory makefile:13: /conf/rules: No such file or directory makefile:259: /conf/test: No such file or directory make: *** No rule to make target `/conf/test'. Stop. 
============================================= ../petsc-3.3-p5/src/snes/examples/tests]> make PETSC_DIR=/opt/packages/petsc/petsc-3.3-p5 PETSC_ARCH=arch-darwin-c-debug (nothing happened, or at least nothing showed) ============================================= ../petsc-3.3-p5/src/snes/examples/tests]> make PETSC_DIR=/opt/packages/petsc/petsc-3.3-p5 PETSC_ARCH=arch-darwin-c-debug test make: *** No rule to make target `test'. Stop. ============================================= I'd greatly appreciate it if anybody could give me a hand on this. Best, Ling -------------- next part -------------- An HTML attachment was scrubbed... URL: From jedbrown at mcs.anl.gov Mon Jan 21 18:55:48 2013 From: jedbrown at mcs.anl.gov (Jed Brown) Date: Mon, 21 Jan 2013 18:55:48 -0600 Subject: [petsc-users] PETSc 3.3 p5 installation newbie question In-Reply-To: References: Message-ID: I think you're looking for export PETSC_DIR=/opt/packages/petsc/petsc-3.3-p5 PETSC_ARCH=arch-darwin-c-debug (or include in make command) followed by make ex1 make runex1 You can also make alltests but this is overkill. On Mon, Jan 21, 2013 at 6:41 PM, Zou (Non-US), Ling wrote: > Hi, all > > I downloaded the PETSc 3.3 p5 version and installed it. Everything seems > to be working fine as I followed those instructions and eventually I did > the test like: > > make PETSC_DIR=/opt/packages/petsc/petsc-3.3-p5 > PETSC_ARCH=arch-darwin-c-debug test > > and I got: > > Running test examples to verify correct installation > Using PETSC_DIR=/opt/packages/petsc/petsc-3.3-p5 and > PETSC_ARCH=arch-darwin-c-debug > C/C++ example src/snes/examples/tutorials/ex19 run *successfully* with 1 > MPI process > C/C++ example src/snes/examples/tutorials/ex19 run *successfully* with 2 > MPI processes > Fortran example src/snes/examples/tutorials/ex5f run *successfully* with > 1 MPI process > Completed test examples > > I guess everything so far is good. > > > Question: > When I went to petsc-3.3-p5/src/snes/examples/tests, trying to see how > those example codes work, I did these and none of them worked. > > ============================================= > ../petsc-3.3-p5/src/snes/examples/tests]> make > > makefile:12: /conf/variables: No such file or directory > makefile:13: /conf/rules: No such file or directory > makefile:259: /conf/test: No such file or directory > make: *** No rule to make target `/conf/test'. Stop. > ============================================= > ../petsc-3.3-p5/src/snes/examples/tests]> make test > > makefile:12: /conf/variables: No such file or directory > makefile:13: /conf/rules: No such file or directory > makefile:259: /conf/test: No such file or directory > make: *** No rule to make target `/conf/test'. Stop. > ============================================= > ../petsc-3.3-p5/src/snes/examples/tests]> make > PETSC_DIR=/opt/packages/petsc/petsc-3.3-p5 PETSC_ARCH=arch-darwin-c-debug > > (nothing happened, or at least nothing showed) > ============================================= > ../petsc-3.3-p5/src/snes/examples/tests]> make > PETSC_DIR=/opt/packages/petsc/petsc-3.3-p5 PETSC_ARCH=arch-darwin-c-debug > test > > make: *** No rule to make target `test'. Stop. > ============================================= > > I'd greatly appreciate it if anybody could give me a hand on this. > > Best, > > Ling > > > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From ling.zou at inl.gov Mon Jan 21 19:03:39 2013 From: ling.zou at inl.gov (Zou (Non-US), Ling) Date: Mon, 21 Jan 2013 18:03:39 -0700 Subject: [petsc-users] PETSc 3.3 p5 installation newbie question In-Reply-To: References: Message-ID: Thank you Jed. After 'export PETSC_DIR=/opt/packages/petsc/petsc-3.3-p5 PETSC_ARCH=arch-darwin-c-debug', 'make alltests' seems working well. This is what I got, not a big difference, should it be a concern? tee: arch-darwin-c-debug/conf/alltests.log: No such file or directory 4c4 < 3 SNES Function norm 5.48028e-09 --- > 3 SNES Function norm 5.48039e-09 /opt/packages/petsc/petsc-3.3-p5/src/snes/examples/tests Possible problem with with ex1_3, diffs above ========================================= 5c5 < 2 SNES Function norm 0.000452855 --- > 2 SNES Function norm 0.000452848 7c7 < 3 SNES Function norm 1.39154e-09 --- > 3 SNES Function norm 1.39443e-09 /opt/packages/petsc/petsc-3.3-p5/src/snes/examples/tests Possible problem with with ex7_1, diffs above ========================================= 5c5 < 2 SNES Function norm 0.000452855 --- > 2 SNES Function norm 0.000452848 7c7 < 3 SNES Function norm 1.39154e-09 --- > 3 SNES Function norm 1.39443e-09 /opt/packages/petsc/petsc-3.3-p5/src/snes/examples/tests Possible problem with with ex7_2, diffs above ========================================= 4c4 < 3 SNES Function norm 2.083e-10 --- > 3 SNES Function norm 2.081e-10 /opt/packages/petsc/petsc-3.3-p5/src/snes/examples/tests Possible problem with with ex9_1, diffs above ========================================= make: [alltests] Error 1 (ignored) Best, Ling On Mon, Jan 21, 2013 at 5:55 PM, Jed Brown wrote: > I think you're looking for > > export PETSC_DIR=/opt/packages/petsc/petsc-3.3-p5 > PETSC_ARCH=arch-darwin-c-debug > > (or include in make command) followed by > > make ex1 > make runex1 > > You can also > > make alltests > > but this is overkill. > > > On Mon, Jan 21, 2013 at 6:41 PM, Zou (Non-US), Ling wrote: > >> Hi, all >> >> I downloaded the PETSc 3.3 p5 version and installed it. Everything seems >> to be working fine as I followed those instructions and eventually I did >> the test like: >> >> make PETSC_DIR=/opt/packages/petsc/petsc-3.3-p5 >> PETSC_ARCH=arch-darwin-c-debug test >> >> and I got: >> >> Running test examples to verify correct installation >> Using PETSC_DIR=/opt/packages/petsc/petsc-3.3-p5 and >> PETSC_ARCH=arch-darwin-c-debug >> C/C++ example src/snes/examples/tutorials/ex19 run *successfully* with 1 >> MPI process >> C/C++ example src/snes/examples/tutorials/ex19 run *successfully* with 2 >> MPI processes >> Fortran example src/snes/examples/tutorials/ex5f run *successfully* with >> 1 MPI process >> Completed test examples >> >> I guess everything so far is good. >> >> >> Question: >> When I went to petsc-3.3-p5/src/snes/examples/tests, trying to see how >> those example codes work, I did these and none of them worked. >> >> ============================================= >> ../petsc-3.3-p5/src/snes/examples/tests]> make >> >> makefile:12: /conf/variables: No such file or directory >> makefile:13: /conf/rules: No such file or directory >> makefile:259: /conf/test: No such file or directory >> make: *** No rule to make target `/conf/test'. Stop. 
>> ============================================= >> ../petsc-3.3-p5/src/snes/examples/tests]> make test >> >> makefile:12: /conf/variables: No such file or directory >> makefile:13: /conf/rules: No such file or directory >> makefile:259: /conf/test: No such file or directory >> make: *** No rule to make target `/conf/test'. Stop. >> ============================================= >> ../petsc-3.3-p5/src/snes/examples/tests]> make >> PETSC_DIR=/opt/packages/petsc/petsc-3.3-p5 PETSC_ARCH=arch-darwin-c-debug >> >> (nothing happened, or at least nothing showed) >> ============================================= >> ../petsc-3.3-p5/src/snes/examples/tests]> make >> PETSC_DIR=/opt/packages/petsc/petsc-3.3-p5 PETSC_ARCH=arch-darwin-c-debug >> test >> >> make: *** No rule to make target `test'. Stop. >> ============================================= >> >> I'd greatly appreciate it if anybody could give me a hand on this. >> >> Best, >> >> Ling >> >> >> > -------------- next part -------------- An HTML attachment was scrubbed... URL: From jedbrown at mcs.anl.gov Mon Jan 21 19:05:19 2013 From: jedbrown at mcs.anl.gov (Jed Brown) Date: Mon, 21 Jan 2013 19:05:19 -0600 Subject: [petsc-users] PETSc 3.3 p5 installation newbie question In-Reply-To: References: Message-ID: On Mon, Jan 21, 2013 at 7:03 PM, Zou (Non-US), Ling wrote: > Thank you Jed. > > After 'export PETSC_DIR=/opt/packages/petsc/petsc-3.3-p5 > PETSC_ARCH=arch-darwin-c-debug', > > 'make alltests' seems working well. This is what I got, not a big > difference, should it be a concern? > It's fine, the default tolerances are just a hair too tight. (Hopefully we'll move to a better testing system so these false positives go away.) > > tee: arch-darwin-c-debug/conf/alltests.log: No such file or directory > 4c4 > < 3 SNES Function norm 5.48028e-09 > --- > > 3 SNES Function norm 5.48039e-09 > /opt/packages/petsc/petsc-3.3-p5/src/snes/examples/tests > Possible problem with with ex1_3, diffs above > ========================================= > 5c5 > < 2 SNES Function norm 0.000452855 > --- > > 2 SNES Function norm 0.000452848 > 7c7 > < 3 SNES Function norm 1.39154e-09 > --- > > 3 SNES Function norm 1.39443e-09 > /opt/packages/petsc/petsc-3.3-p5/src/snes/examples/tests > Possible problem with with ex7_1, diffs above > ========================================= > 5c5 > < 2 SNES Function norm 0.000452855 > --- > > 2 SNES Function norm 0.000452848 > 7c7 > < 3 SNES Function norm 1.39154e-09 > --- > > 3 SNES Function norm 1.39443e-09 > /opt/packages/petsc/petsc-3.3-p5/src/snes/examples/tests > Possible problem with with ex7_2, diffs above > ========================================= > 4c4 > < 3 SNES Function norm 2.083e-10 > --- > > 3 SNES Function norm 2.081e-10 > /opt/packages/petsc/petsc-3.3-p5/src/snes/examples/tests > Possible problem with with ex9_1, diffs above > ========================================= > make: [alltests] Error 1 (ignored) > > > > Best, > > Ling > > > On Mon, Jan 21, 2013 at 5:55 PM, Jed Brown wrote: > >> I think you're looking for >> >> export PETSC_DIR=/opt/packages/petsc/petsc-3.3-p5 >> PETSC_ARCH=arch-darwin-c-debug >> >> (or include in make command) followed by >> >> make ex1 >> make runex1 >> >> You can also >> >> make alltests >> >> but this is overkill. >> >> >> On Mon, Jan 21, 2013 at 6:41 PM, Zou (Non-US), Ling wrote: >> >>> Hi, all >>> >>> I downloaded the PETSc 3.3 p5 version and installed it. 
Everything seems >>> to be working fine as I followed those instructions and eventually I did >>> the test like: >>> >>> make PETSC_DIR=/opt/packages/petsc/petsc-3.3-p5 >>> PETSC_ARCH=arch-darwin-c-debug test >>> >>> and I got: >>> >>> Running test examples to verify correct installation >>> Using PETSC_DIR=/opt/packages/petsc/petsc-3.3-p5 and >>> PETSC_ARCH=arch-darwin-c-debug >>> C/C++ example src/snes/examples/tutorials/ex19 run *successfully* with >>> 1 MPI process >>> C/C++ example src/snes/examples/tutorials/ex19 run *successfully* with >>> 2 MPI processes >>> Fortran example src/snes/examples/tutorials/ex5f run *successfully*with 1 MPI process >>> Completed test examples >>> >>> I guess everything so far is good. >>> >>> >>> Question: >>> When I went to petsc-3.3-p5/src/snes/examples/tests, trying to see how >>> those example codes work, I did these and none of them worked. >>> >>> ============================================= >>> ../petsc-3.3-p5/src/snes/examples/tests]> make >>> >>> makefile:12: /conf/variables: No such file or directory >>> makefile:13: /conf/rules: No such file or directory >>> makefile:259: /conf/test: No such file or directory >>> make: *** No rule to make target `/conf/test'. Stop. >>> ============================================= >>> ../petsc-3.3-p5/src/snes/examples/tests]> make test >>> >>> makefile:12: /conf/variables: No such file or directory >>> makefile:13: /conf/rules: No such file or directory >>> makefile:259: /conf/test: No such file or directory >>> make: *** No rule to make target `/conf/test'. Stop. >>> ============================================= >>> ../petsc-3.3-p5/src/snes/examples/tests]> make >>> PETSC_DIR=/opt/packages/petsc/petsc-3.3-p5 PETSC_ARCH=arch-darwin-c-debug >>> >>> (nothing happened, or at least nothing showed) >>> ============================================= >>> ../petsc-3.3-p5/src/snes/examples/tests]> make >>> PETSC_DIR=/opt/packages/petsc/petsc-3.3-p5 PETSC_ARCH=arch-darwin-c-debug >>> test >>> >>> make: *** No rule to make target `test'. Stop. >>> ============================================= >>> >>> I'd greatly appreciate it if anybody could give me a hand on this. >>> >>> Best, >>> >>> Ling >>> >>> >>> >> > -------------- next part -------------- An HTML attachment was scrubbed... URL: From ling.zou at inl.gov Mon Jan 21 19:06:54 2013 From: ling.zou at inl.gov (Zou (Non-US), Ling) Date: Mon, 21 Jan 2013 18:06:54 -0700 Subject: [petsc-users] PETSc 3.3 p5 installation newbie question In-Reply-To: References: Message-ID: Jed, thanks again for your reply. Have a good night. Ling On Mon, Jan 21, 2013 at 6:05 PM, Jed Brown wrote: > On Mon, Jan 21, 2013 at 7:03 PM, Zou (Non-US), Ling wrote: > >> Thank you Jed. >> >> After 'export PETSC_DIR=/opt/packages/petsc/petsc-3.3-p5 >> PETSC_ARCH=arch-darwin-c-debug', >> >> 'make alltests' seems working well. This is what I got, not a big >> difference, should it be a concern? >> > > It's fine, the default tolerances are just a hair too tight. (Hopefully > we'll move to a better testing system so these false positives go away.) 
> > >> >> tee: arch-darwin-c-debug/conf/alltests.log: No such file or directory >> 4c4 >> < 3 SNES Function norm 5.48028e-09 >> --- >> > 3 SNES Function norm 5.48039e-09 >> /opt/packages/petsc/petsc-3.3-p5/src/snes/examples/tests >> Possible problem with with ex1_3, diffs above >> ========================================= >> 5c5 >> < 2 SNES Function norm 0.000452855 >> --- >> > 2 SNES Function norm 0.000452848 >> 7c7 >> < 3 SNES Function norm 1.39154e-09 >> --- >> > 3 SNES Function norm 1.39443e-09 >> /opt/packages/petsc/petsc-3.3-p5/src/snes/examples/tests >> Possible problem with with ex7_1, diffs above >> ========================================= >> 5c5 >> < 2 SNES Function norm 0.000452855 >> --- >> > 2 SNES Function norm 0.000452848 >> 7c7 >> < 3 SNES Function norm 1.39154e-09 >> --- >> > 3 SNES Function norm 1.39443e-09 >> /opt/packages/petsc/petsc-3.3-p5/src/snes/examples/tests >> Possible problem with with ex7_2, diffs above >> ========================================= >> 4c4 >> < 3 SNES Function norm 2.083e-10 >> --- >> > 3 SNES Function norm 2.081e-10 >> /opt/packages/petsc/petsc-3.3-p5/src/snes/examples/tests >> Possible problem with with ex9_1, diffs above >> ========================================= >> make: [alltests] Error 1 (ignored) >> >> >> >> Best, >> >> Ling >> >> >> On Mon, Jan 21, 2013 at 5:55 PM, Jed Brown wrote: >> >>> I think you're looking for >>> >>> export PETSC_DIR=/opt/packages/petsc/petsc-3.3-p5 >>> PETSC_ARCH=arch-darwin-c-debug >>> >>> (or include in make command) followed by >>> >>> make ex1 >>> make runex1 >>> >>> You can also >>> >>> make alltests >>> >>> but this is overkill. >>> >>> >>> On Mon, Jan 21, 2013 at 6:41 PM, Zou (Non-US), Ling wrote: >>> >>>> Hi, all >>>> >>>> I downloaded the PETSc 3.3 p5 version and installed it. Everything >>>> seems to be working fine as I followed those instructions and eventually I >>>> did the test like: >>>> >>>> make PETSC_DIR=/opt/packages/petsc/petsc-3.3-p5 >>>> PETSC_ARCH=arch-darwin-c-debug test >>>> >>>> and I got: >>>> >>>> Running test examples to verify correct installation >>>> Using PETSC_DIR=/opt/packages/petsc/petsc-3.3-p5 and >>>> PETSC_ARCH=arch-darwin-c-debug >>>> C/C++ example src/snes/examples/tutorials/ex19 run *successfully* with >>>> 1 MPI process >>>> C/C++ example src/snes/examples/tutorials/ex19 run *successfully* with >>>> 2 MPI processes >>>> Fortran example src/snes/examples/tutorials/ex5f run *successfully*with 1 MPI process >>>> Completed test examples >>>> >>>> I guess everything so far is good. >>>> >>>> >>>> Question: >>>> When I went to petsc-3.3-p5/src/snes/examples/tests, trying to see how >>>> those example codes work, I did these and none of them worked. >>>> >>>> ============================================= >>>> ../petsc-3.3-p5/src/snes/examples/tests]> make >>>> >>>> makefile:12: /conf/variables: No such file or directory >>>> makefile:13: /conf/rules: No such file or directory >>>> makefile:259: /conf/test: No such file or directory >>>> make: *** No rule to make target `/conf/test'. Stop. >>>> ============================================= >>>> ../petsc-3.3-p5/src/snes/examples/tests]> make test >>>> >>>> makefile:12: /conf/variables: No such file or directory >>>> makefile:13: /conf/rules: No such file or directory >>>> makefile:259: /conf/test: No such file or directory >>>> make: *** No rule to make target `/conf/test'. Stop. 
>>>> ============================================= >>>> ../petsc-3.3-p5/src/snes/examples/tests]> make >>>> PETSC_DIR=/opt/packages/petsc/petsc-3.3-p5 PETSC_ARCH=arch-darwin-c-debug >>>> >>>> (nothing happened, or at least nothing showed) >>>> ============================================= >>>> ../petsc-3.3-p5/src/snes/examples/tests]> make >>>> PETSC_DIR=/opt/packages/petsc/petsc-3.3-p5 PETSC_ARCH=arch-darwin-c-debug >>>> test >>>> >>>> make: *** No rule to make target `test'. Stop. >>>> ============================================= >>>> >>>> I'd greatly appreciate it if anybody could give me a hand on this. >>>> >>>> Best, >>>> >>>> Ling >>>> >>>> >>>> >>> >> > -------------- next part -------------- An HTML attachment was scrubbed... URL: From mike.hui.zhang at hotmail.com Tue Jan 22 03:34:15 2013 From: mike.hui.zhang at hotmail.com (Hui Zhang) Date: Tue, 22 Jan 2013 10:34:15 +0100 Subject: [petsc-users] DMDACreateSection Message-ID: Hi petsc-group, if I want to map a structured mesh onto processors by partition of mesh **elements**, can I use DMDACreate3d -> DMDACreateSection, then DMDACreateLocal(Global)Vector to give me the vector corresponding to my Q1-element d.o.f.? I ask because according to the manual, the usual use of DMDA is only for partition of **nodes** to processors and the local vector is not corresponding to partition of elements. For example, consider the Q1 element on the 1D mesh, (numbers for nodes) 1--2--3--4--5 Given two processors, I want the local vector after partition to reside on the partitioned mesh 1--2--3 3--4--5 proc 0 proc 1 The global value of d.o.f at the node 3 may be managed by proc 1, and proc 0 only ghosts it. If it is the case, proc1 does not ghost any value. This is different from the usual DMDA for finite difference stencils, in which the ghosting is symmetric, i.e. both proc0 and proc1 will ghost some values. Thanks in advance! From mike.hui.zhang at hotmail.com Tue Jan 22 04:04:10 2013 From: mike.hui.zhang at hotmail.com (Hui Zhang) Date: Tue, 22 Jan 2013 11:04:10 +0100 Subject: [petsc-users] DMDACreateSection In-Reply-To: References: Message-ID: One more question: why does not DMDACreateSection include d.o.f on edges in 3D? I think it is very useful for the sub-structuring methods on structured meshes. For example, the BPS preconditioner, the wire-basket preconditioner on a structured coarse mesh (one subdomain per element) involve interface d.o.f. on faces, edges and vertices. On Jan 22, 2013, at 10:34 AM, Hui Zhang wrote: > Hi petsc-group, > > if I want to map a structured mesh onto processors by partition of mesh **elements**, > can I use DMDACreate3d -> DMDACreateSection, then DMDACreateLocal(Global)Vector to > give me the vector corresponding to my Q1-element d.o.f.? > > I ask because according to the manual, the usual use of DMDA is only for partition of > **nodes** to processors and the local vector is not corresponding to partition of elements. > For example, consider the Q1 element on the 1D mesh, (numbers for nodes) > > 1--2--3--4--5 > > Given two processors, I want the local vector after partition to reside on the partitioned mesh > > 1--2--3 3--4--5 > proc 0 proc 1 > > The global value of d.o.f at the node 3 may be managed by proc 1, and proc 0 only ghosts it. > If it is the case, proc1 does not ghost any value. > > This is different from the usual DMDA for finite difference stencils, in which the ghosting > is symmetric, i.e. both proc0 and proc1 will ghost some values. > > Thanks in advance! 
> > > > From knepley at gmail.com Tue Jan 22 07:54:05 2013 From: knepley at gmail.com (Matthew Knepley) Date: Tue, 22 Jan 2013 07:54:05 -0600 Subject: [petsc-users] DMDACreateSection In-Reply-To: References: Message-ID: On Tue, Jan 22, 2013 at 3:34 AM, Hui Zhang wrote: > Hi petsc-group, > > if I want to map a structured mesh onto processors by partition of mesh > **elements**, > can I use DMDACreate3d -> DMDACreateSection, then > DMDACreateLocal(Global)Vector to > give me the vector corresponding to my Q1-element d.o.f.? > No, this is not finished yet. I have only implemented it in serial, mostly because no one is asking for it. Note that the data layout has nothing to do with partitioning. We would still have to rewrite those routines for an element-wise partition. This is on the list of things to do. Matt > I ask because according to the manual, the usual use of DMDA is only for > partition of > **nodes** to processors and the local vector is not corresponding to > partition of elements. > For example, consider the Q1 element on the 1D mesh, (numbers for nodes) > > 1--2--3--4--5 > > Given two processors, I want the local vector after partition to reside on > the partitioned mesh > > 1--2--3 3--4--5 > proc 0 proc 1 > > The global value of d.o.f at the node 3 may be managed by proc 1, and proc > 0 only ghosts it. > If it is the case, proc1 does not ghost any value. > > This is different from the usual DMDA for finite difference stencils, in > which the ghosting > is symmetric, i.e. both proc0 and proc1 will ghost some values. > > Thanks in advance! > > > > -- What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. -- Norbert Wiener -------------- next part -------------- An HTML attachment was scrubbed... URL: From fd.kong at siat.ac.cn Tue Jan 22 11:16:30 2013 From: fd.kong at siat.ac.cn (Fande Kong) Date: Tue, 22 Jan 2013 10:16:30 -0700 Subject: [petsc-users] How to produce a coarser mesh for a given 2d or 3d unstructured mesh Message-ID: Hi all, Are there any popular methods or algorithms which can be used to produce a coarser mesh for a given 2d or 3d unstructured mesh? -- Fande Kong ShenZhen Institutes of Advanced Technology Chinese Academy of Sciences -------------- next part -------------- An HTML attachment was scrubbed... URL: From john.fettig at gmail.com Tue Jan 22 18:54:05 2013 From: john.fettig at gmail.com (John Fettig) Date: Tue, 22 Jan 2013 19:54:05 -0500 Subject: [petsc-users] Reset the matrix Message-ID: Is it possible to reset a matrix to the state it was in before any values were set? I.e. removing all nonzero structure as well as values and returning it to just a pre-allocated state? Regards, John -------------- next part -------------- An HTML attachment was scrubbed... URL: From jedbrown at mcs.anl.gov Tue Jan 22 19:00:05 2013 From: jedbrown at mcs.anl.gov (Jed Brown) Date: Tue, 22 Jan 2013 19:00:05 -0600 Subject: [petsc-users] Reset the matrix In-Reply-To: References: Message-ID: Just call the appropriate preallocation routine. It can't very well be done in-place because the data structure gets repacked in Assembly. On Tue, Jan 22, 2013 at 6:54 PM, John Fettig wrote: > Is it possible to reset a matrix to the state it was in before any values > were set? I.e. removing all nonzero structure as well as values and > returning it to just a pre-allocated state? > > Regards, > John > > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From gaurish108 at gmail.com Wed Jan 23 00:28:26 2013 From: gaurish108 at gmail.com (Gaurish Telang) Date: Wed, 23 Jan 2013 01:28:26 -0500 Subject: [petsc-users] Reading a rectangular system from a file on disk and solving system in parallel Message-ID: Hi I want to solve a least squares problem min || Ax - b|| in parallel with PETSc. I have the matrix A and the column vector b stored in files which are in PETSc' s binary format. So far, I am able to read both these files and solve the least squares system with PETSc's lsqr routine on a single processor. To solve the least squares system in parallel, can someone guide me on how to distribute the matrix A and vector b among several processors after reading A and b from the corresponding files on disk? For reference, I have pasted my code below, which works well on a single processor. Thank you, Gaurish ----------------------------------------------------------------------------------------------------- static char help[] = "--"; #include #include #include #undef __FUNCT__ #define __FUNCT__ "main" int main(int argc,char **args) { Vec x, b, residue; /* approx solution, RHS, residual */ Mat A; /* linear system matrix */ KSP ksp; /* linear solver context */ PC pc; /* preconditioner context */ PetscErrorCode ierr; PetscInt m,n ; /* # number of rows and columns of the matrix read in*/ PetscViewer fd ; PetscInt size, its; PetscScalar norm, tol=1.e-5; PetscBool flg; char fmat[PETSC_MAX_PATH_LEN]; /* input file names */ char frhs[PETSC_MAX_PATH_LEN]; /* input file names */ PetscInitialize(&argc,&args,(char *)0,help); ierr = MPI_Comm_size(PETSC_COMM_WORLD,&size);CHKERRQ(ierr); if (size != 1) SETERRQ(PETSC_COMM_WORLD,1,"This is a uniprocessor example only!"); ierr = PetscOptionsGetString(PETSC_NULL,"-fmat",fmat,PETSC_MAX_PATH_LEN,&flg); ierr = PetscOptionsGetString(PETSC_NULL,"-frhs",frhs,PETSC_MAX_PATH_LEN,&flg); /* Read in the matrix */ ierr = PetscViewerBinaryOpen(PETSC_COMM_WORLD, fmat, FILE_MODE_READ, &fd ); CHKERRQ(ierr); ierr = MatCreate(PETSC_COMM_WORLD,&A); ierr = MatSetType(A,MATSEQAIJ); ierr = MatLoad(A,fd); ierr = PetscViewerDestroy(&fd); /*Read in the right hand side*/ ierr = PetscViewerBinaryOpen(PETSC_COMM_WORLD, frhs, FILE_MODE_READ, &fd ); CHKERRQ(ierr); ierr = VecCreate(PETSC_COMM_WORLD,&b); ierr = VecSetType(b,VECSEQ); ierr = VecLoad(b,fd); ierr = PetscViewerDestroy(&fd); /* Get the matrix size. 
Used for setting the vector sizes*/ ierr = MatGetSize(A , &m, &n); printf("The size of the matrix read in is %d x %d\n", m , n); ierr = VecCreateSeq(PETSC_COMM_WORLD,n, &x); ierr = VecCreateSeq(PETSC_COMM_WORLD,m, &residue); /* Set the solver type at the command-line */ ierr = KSPCreate(PETSC_COMM_WORLD,&ksp);CHKERRQ(ierr); ierr = KSPSetOperators(ksp,A,A,DIFFERENT_NONZERO_PATTERN);CHKERRQ(ierr); ierr = KSPGetPC(ksp,&pc);CHKERRQ(ierr); ierr = PCSetType(pc,PCNONE);CHKERRQ(ierr); ierr = KSPSetTolerances(ksp,1.e-5,PETSC_DEFAULT,PETSC_DEFAULT,PETSC_DEFAULT);CHKERRQ(ierr); /* Set runtime options, e.g., -ksp_type -pc_type -ksp_monitor -ksp_rtol These options will override those specified above as long as KSPSetFromOptions() is called _after_ any other customization routines */ ierr = KSPSetFromOptions(ksp);CHKERRQ(ierr); /* Initial guess for the krylov method set to zero*/ PetscScalar p = 0; ierr = VecSet(x,p);CHKERRQ(ierr); ierr = KSPSetInitialGuessNonzero(ksp,PETSC_TRUE);CHKERRQ(ierr); /* Solve linear system */ ierr = KSPSolve(ksp,b,x);CHKERRQ(ierr); /* View solver info; we could instead use the option -ksp_view to print this info to the screen at the conclusion of KSPSolve().*/ ierr = KSPView(ksp,PETSC_VIEWER_STDOUT_WORLD);CHKERRQ(ierr); /*View the numerical solution and the residue vector*/ printf("-------------------\nThe numerical solution is x = \n"); ierr = VecView(x,PETSC_VIEWER_STDOUT_SELF); printf("-------------------\nThe residue vector Ax - b is = \n"); ierr = MatMult(A,x,residue);CHKERRQ(ierr); ierr = VecAXPY(residue,-1.0,b);CHKERRQ(ierr); ierr = VecView(residue,PETSC_VIEWER_STDOUT_SELF); /* Clean up */ ierr = VecDestroy(&x);CHKERRQ(ierr); ierr = VecDestroy(&residue);CHKERRQ(ierr); ierr = VecDestroy(&b);CHKERRQ(ierr); ierr = MatDestroy(&A);CHKERRQ(ierr); ierr = KSPDestroy(&ksp);CHKERRQ(ierr); ierr = PetscFinalize(); return 0; } -------------- next part -------------- An HTML attachment was scrubbed... URL: From jedbrown at mcs.anl.gov Wed Jan 23 07:00:44 2013 From: jedbrown at mcs.anl.gov (Jed Brown) Date: Wed, 23 Jan 2013 07:00:44 -0600 Subject: [petsc-users] Reading a rectangular system from a file on disk and solving system in parallel In-Reply-To: References: Message-ID: On Wed, Jan 23, 2013 at 12:28 AM, Gaurish Telang wrote: > Hi > > I want to solve a least squares problem min || Ax - b|| in parallel with > PETSc. > > I have the matrix A and the column vector b stored in files which are in > PETSc' s binary format. > > So far, I am able to read both these files and solve the least squares > system with PETSc's lsqr routine on a single processor. > > To solve the least squares system in parallel, can someone guide me on how > to distribute the matrix A and vector b among several processors after > reading A and b from the corresponding files on disk? > Did you try just not VecSetType and MatSetType to force SEQ implementations? It's better to call MatSetFromOptions() and VecSetFromOptions() so you can change the type at run-time. src/ksp/ksp/examples/tutorials/ex10.c automatically loads the matrix in parallel and solves non-square systems (use -ksp_type lsqr). > > > For reference, I have pasted my code below, which works well on a single > processor. 
> > Thank you, > > Gaurish > > > > > ----------------------------------------------------------------------------------------------------- > > static char help[] = "--"; > #include > #include > #include > #undef __FUNCT__ > #define __FUNCT__ "main" > int main(int argc,char **args) > { > Vec x, b, residue; /* approx solution, RHS, residual */ > Mat A; /* linear system matrix */ > KSP ksp; /* linear solver context */ > PC pc; /* preconditioner context */ > PetscErrorCode ierr; > PetscInt m,n ; /* # number of rows and columns of the > matrix read in*/ > PetscViewer fd ; > PetscInt size, its; > PetscScalar norm, tol=1.e-5; > PetscBool flg; > char fmat[PETSC_MAX_PATH_LEN]; /* input file names */ > char frhs[PETSC_MAX_PATH_LEN]; /* input file names */ > > PetscInitialize(&argc,&args,(char *)0,help); > ierr = MPI_Comm_size(PETSC_COMM_WORLD,&size);CHKERRQ(ierr); > if (size != 1) SETERRQ(PETSC_COMM_WORLD,1,"This is a uniprocessor > example only!"); > > > ierr = > PetscOptionsGetString(PETSC_NULL,"-fmat",fmat,PETSC_MAX_PATH_LEN,&flg); > ierr = > PetscOptionsGetString(PETSC_NULL,"-frhs",frhs,PETSC_MAX_PATH_LEN,&flg); > > > /* Read in the matrix */ > ierr = PetscViewerBinaryOpen(PETSC_COMM_WORLD, fmat, FILE_MODE_READ, &fd > ); CHKERRQ(ierr); > ierr = MatCreate(PETSC_COMM_WORLD,&A); > ierr = MatSetType(A,MATSEQAIJ); > ierr = MatLoad(A,fd); > ierr = PetscViewerDestroy(&fd); > > > /*Read in the right hand side*/ > ierr = PetscViewerBinaryOpen(PETSC_COMM_WORLD, frhs, FILE_MODE_READ, &fd > ); CHKERRQ(ierr); > ierr = VecCreate(PETSC_COMM_WORLD,&b); > ierr = VecSetType(b,VECSEQ); > ierr = VecLoad(b,fd); > ierr = PetscViewerDestroy(&fd); > > > /* Get the matrix size. Used for setting the vector sizes*/ > ierr = MatGetSize(A , &m, &n); > printf("The size of the matrix read in is %d x %d\n", m , n); > > ierr = VecCreateSeq(PETSC_COMM_WORLD,n, &x); > ierr = VecCreateSeq(PETSC_COMM_WORLD,m, &residue); > > > /* Set the solver type at the command-line */ > ierr = KSPCreate(PETSC_COMM_WORLD,&ksp);CHKERRQ(ierr); > ierr = KSPSetOperators(ksp,A,A,DIFFERENT_NONZERO_PATTERN);CHKERRQ(ierr); > ierr = KSPGetPC(ksp,&pc);CHKERRQ(ierr); > ierr = PCSetType(pc,PCNONE);CHKERRQ(ierr); > ierr = > KSPSetTolerances(ksp,1.e-5,PETSC_DEFAULT,PETSC_DEFAULT,PETSC_DEFAULT);CHKERRQ(ierr); > /* Set runtime options, e.g., > -ksp_type -pc_type -ksp_monitor -ksp_rtol > These options will override those specified above as long as > KSPSetFromOptions() is called _after_ any other customization > routines */ > ierr = KSPSetFromOptions(ksp);CHKERRQ(ierr); > > /* Initial guess for the krylov method set to zero*/ > PetscScalar p = 0; > ierr = VecSet(x,p);CHKERRQ(ierr); > ierr = KSPSetInitialGuessNonzero(ksp,PETSC_TRUE);CHKERRQ(ierr); > /* Solve linear system */ > ierr = KSPSolve(ksp,b,x);CHKERRQ(ierr); > /* View solver info; we could instead use the option -ksp_view to print > this info to the screen at the conclusion of KSPSolve().*/ > ierr = KSPView(ksp,PETSC_VIEWER_STDOUT_WORLD);CHKERRQ(ierr); > > > /*View the numerical solution and the residue vector*/ > printf("-------------------\nThe numerical solution is x = \n"); > ierr = VecView(x,PETSC_VIEWER_STDOUT_SELF); > > > printf("-------------------\nThe residue vector Ax - b is = \n"); > ierr = MatMult(A,x,residue);CHKERRQ(ierr); > ierr = VecAXPY(residue,-1.0,b);CHKERRQ(ierr); > > ierr = VecView(residue,PETSC_VIEWER_STDOUT_SELF); > > /* Clean up */ > ierr = VecDestroy(&x);CHKERRQ(ierr); > ierr = VecDestroy(&residue);CHKERRQ(ierr); > ierr = VecDestroy(&b);CHKERRQ(ierr); > ierr = 
MatDestroy(&A);CHKERRQ(ierr); > ierr = KSPDestroy(&ksp);CHKERRQ(ierr); > > ierr = PetscFinalize(); > return 0; > } > > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From jedbrown at mcs.anl.gov Wed Jan 23 08:35:14 2013 From: jedbrown at mcs.anl.gov (Jed Brown) Date: Wed, 23 Jan 2013 08:35:14 -0600 Subject: [petsc-users] How to produce a coarser mesh for a given 2d or 3d unstructured mesh In-Reply-To: References: Message-ID: This is a big topic. One is node-nested re-triangularization, popular in finite element methods. There is some experimental code for this in PCGAMG, but it's not production code and due for a refactor. Another is element agglomeration, which is more popular with finite volume methods (and generally for problems where conservation is paramount). On Tue, Jan 22, 2013 at 11:16 AM, Fande Kong wrote: > Hi all, > > Are there any popular methods or algorithms which can be used to produce a > coarser mesh for a given 2d or 3d unstructured mesh? > > -- > Fande Kong > ShenZhen Institutes of Advanced Technology > Chinese Academy of Sciences > -------------- next part -------------- An HTML attachment was scrubbed... URL: From gokhalen at gmail.com Wed Jan 23 10:16:37 2013 From: gokhalen at gmail.com (Nachiket Gokhale) Date: Wed, 23 Jan 2013 11:16:37 -0500 Subject: [petsc-users] MatGetDiagonal Message-ID: Any chance of making this work in serial? http://www.mcs.anl.gov/petsc/petsc-current/docs/manualpages/Mat/MatGetDiagonal.html Not a show stopper, I am trying to get the diagonal of some small projected, dense matrices (which come from large sparse matrices). I am running in serial because 1) Since my projected matrices are small, and 2) PETSc does not do certain matrix multiplications involving a dense matrix in parallel, -Nachiket From jedbrown at mcs.anl.gov Wed Jan 23 10:20:29 2013 From: jedbrown at mcs.anl.gov (Jed Brown) Date: Wed, 23 Jan 2013 10:20:29 -0600 Subject: [petsc-users] MatGetDiagonal In-Reply-To: References: Message-ID: It works in serial. In parallel, it currently gives the diagonal of the "diagonal blocks" induced by the row and column distributions. That only matches the true diagonal for square matrices, though an actual diagonal doesn't typically make algorithmic sense for a non-square parallel matrix. On Wed, Jan 23, 2013 at 10:16 AM, Nachiket Gokhale wrote: > Any chance of making this work in serial? > > > http://www.mcs.anl.gov/petsc/petsc-current/docs/manualpages/Mat/MatGetDiagonal.html > > Not a show stopper, I am trying to get the diagonal of some small > projected, dense matrices (which come from large sparse matrices). I > am running in serial because 1) Since my projected matrices are small, > and 2) PETSc does not do certain matrix multiplications involving a > dense matrix in parallel, > > -Nachiket > -------------- next part -------------- An HTML attachment was scrubbed... URL: From gokhalen at gmail.com Wed Jan 23 10:23:53 2013 From: gokhalen at gmail.com (Nachiket Gokhale) Date: Wed, 23 Jan 2013 11:23:53 -0500 Subject: [petsc-users] MatGetDiagonal In-Reply-To: References: Message-ID: Thanks, I guess I misinterpreted the manual. -Nachiket On Wed, Jan 23, 2013 at 11:20 AM, Jed Brown wrote: > It works in serial. In parallel, it currently gives the diagonal of the > "diagonal blocks" induced by the row and column distributions. That only > matches the true diagonal for square matrices, though an actual diagonal > doesn't typically make algorithmic sense for a non-square parallel matrix. 
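For the small dense projected matrices mentioned above, a minimal serial sketch; the size n and the matrix contents are placeholders:

/* Sketch: extract the diagonal of a small sequential dense matrix. */
PetscInt       n = 10;
Mat            P;                 /* small projected dense matrix */
Vec            diag;
PetscErrorCode ierr;

ierr = MatCreateSeqDense(PETSC_COMM_SELF,n,n,PETSC_NULL,&P);CHKERRQ(ierr);
/* ... fill P with MatSetValues() and MatAssemblyBegin/End() ... */
ierr = VecCreateSeq(PETSC_COMM_SELF,n,&diag);CHKERRQ(ierr);
ierr = MatGetDiagonal(P,diag);CHKERRQ(ierr);
ierr = VecView(diag,PETSC_VIEWER_STDOUT_SELF);CHKERRQ(ierr);
ierr = VecDestroy(&diag);CHKERRQ(ierr);
ierr = MatDestroy(&P);CHKERRQ(ierr);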
> > > On Wed, Jan 23, 2013 at 10:16 AM, Nachiket Gokhale > wrote: >> >> Any chance of making this work in serial? >> >> >> http://www.mcs.anl.gov/petsc/petsc-current/docs/manualpages/Mat/MatGetDiagonal.html >> >> Not a show stopper, I am trying to get the diagonal of some small >> projected, dense matrices (which come from large sparse matrices). I >> am running in serial because 1) Since my projected matrices are small, >> and 2) PETSc does not do certain matrix multiplications involving a >> dense matrix in parallel, >> >> -Nachiket > > From Sanjay.Kharche at liverpool.ac.uk Wed Jan 23 14:18:36 2013 From: Sanjay.Kharche at liverpool.ac.uk (Kharche, Sanjay) Date: Wed, 23 Jan 2013 20:18:36 +0000 Subject: [petsc-users] output Message-ID: <04649ABFF695C94F8E6CF3BBBA9B1665598A6C78@BHEXMBX1.livad.liv.ac.uk> Dear All I am an absolute beginner to PetSc. I am trying to output PetSc vectors in a specific format. This may have been discussed before, but I have so far not found a solution. For 2 or 3 vectors, I do this: /* now write it to file. I would like row 1 to be u, row 2 to be b, and row 3 to be u. that way I can use gnuplots surf, and also all my existin matlab code for plotting/analysis. */ PetscViewer viewer; // a Petsc file pointer. PetscViewerASCIIOpen(PETSC_COMM_WORLD,"ubx.dat",&viewer); VecView(u,viewer); // this comes out with information that I dont want, and in a column - I need to put it as a row. VecView(b,viewer); VecView(x,viewer); PetscViewerDestroy( &viewer ); However, I have not been able to get rid of the information about procs, and also the vectors u, b, and u need to be in row format as: 1 2 1 1 1 0 0 0 1 0 -1 2 -1 0 rather than what I have now: Vector Object: 1 MPI processes type: seq 1 2 1 1 1 etc.... Any suggestions on how to do this will be appreciated. thanks Sanjay From bsmith at mcs.anl.gov Wed Jan 23 14:53:13 2013 From: bsmith at mcs.anl.gov (Barry Smith) Date: Wed, 23 Jan 2013 14:53:13 -0600 Subject: [petsc-users] output In-Reply-To: <04649ABFF695C94F8E6CF3BBBA9B1665598A6C78@BHEXMBX1.livad.liv.ac.uk> References: <04649ABFF695C94F8E6CF3BBBA9B1665598A6C78@BHEXMBX1.livad.liv.ac.uk> Message-ID: Sanjay, Since you are wanting ASCII output you don't need to worry about absolute scalability. Thus what I would do is in your parallel PETSc application code save the vectors with VecView() to binary format. Then write a stand-alone sequential program in C, Matlab, Python that reads in the binary vectors with VecLoad() and outputs them in any way you want. Trying to do fancy ASCII output in parallel is not worth spending time. Barry On Jan 23, 2013, at 2:18 PM, "Kharche, Sanjay" wrote: > > Dear All > > I am an absolute beginner to PetSc. I am trying to output PetSc vectors in a specific format. This may have been discussed before, but I have so far not found a solution. > > For 2 or 3 vectors, I do this: > > /* > now write it to file. > I would like row 1 to be u, row 2 to be b, and row 3 to be u. that way I can use > gnuplots surf, and also all my existin matlab code for plotting/analysis. > */ > PetscViewer viewer; // a Petsc file pointer. > PetscViewerASCIIOpen(PETSC_COMM_WORLD,"ubx.dat",&viewer); > VecView(u,viewer); // this comes out with information that I dont want, and in a column - I need to put it as a row. 
> VecView(b,viewer); > VecView(x,viewer); > PetscViewerDestroy( &viewer ); > > However, I have not been able to get rid of the information about procs, and also the vectors u, b, and u need to be in row format as: > 1 2 1 1 1 > 0 0 0 1 0 > -1 2 -1 0 > > rather than what I have now: > > Vector Object: 1 MPI processes > type: seq > 1 > 2 > 1 > 1 > 1 > etc.... > > > Any suggestions on how to do this will be appreciated. > > thanks > Sanjay From Sanjay.Kharche at liverpool.ac.uk Thu Jan 24 13:17:17 2013 From: Sanjay.Kharche at liverpool.ac.uk (Kharche, Sanjay) Date: Thu, 24 Jan 2013 19:17:17 +0000 Subject: [petsc-users] output In-Reply-To: <04649ABFF695C94F8E6CF3BBBA9B1665598A6DC9@BHEXMBX1.livad.liv.ac.uk> References: <04649ABFF695C94F8E6CF3BBBA9B1665598A6C78@BHEXMBX1.livad.liv.ac.uk>, , <04649ABFF695C94F8E6CF3BBBA9B1665598A6DC9@BHEXMBX1.livad.liv.ac.uk> Message-ID: <04649ABFF695C94F8E6CF3BBBA9B1665598A6E15@BHEXMBX1.livad.liv.ac.uk> Further to my posting of this afternoon, the solution to my current issue is the function: VecGetArray. thanks. Sanjay ________________________________________ From: Kharche, Sanjay Sent: 24 January 2013 16:03 To: PETSc users list Subject: RE: [petsc-users] output Hi Barry, All I am still taking the first few steps towards getting started. Yesterday, I was trying to output a vector in a specific format, essentially a row of 10 numbers only rather than a column of numbers+metadata. The suggestion for that was to output the vector as a binary, and then read it in using VecLoad from another standalone program and do whatever I wanted. Both my programs are serial, so there is no issue of parallel. In my standalone, I do something like: PetscViewer viewer; PetscViewerBinaryOpen(PETSC_COMM_WORLD,"ubu.bin",FILE_MODE_READ,&viewer); VecLoad(u, viewer); PetscViewerDestroy(&viewer); // check that u has the values you think it should have. VecView(u, PETSC_VIEWER_STDOUT_WORLD); // yes it does. // Now test if u now visible to printf so I can do this: for(i=0;i<10;i++) fprintf(my_non_petscfile,"%f ",u[i]); fprintf(my_non_petscfile,"\n"); // but this does not work - the values of u are not as shown by VecView! And I still cannot output the vector u as a row into the non_petscfile. Can you help? thanks Sanjay ________________________________________ From: petsc-users-bounces at mcs.anl.gov [petsc-users-bounces at mcs.anl.gov] on behalf of Barry Smith [bsmith at mcs.anl.gov] Sent: 23 January 2013 20:53 To: PETSc users list Subject: Re: [petsc-users] output Sanjay, Since you are wanting ASCII output you don't need to worry about absolute scalability. Thus what I would do is in your parallel PETSc application code save the vectors with VecView() to binary format. Then write a stand-alone sequential program in C, Matlab, Python that reads in the binary vectors with VecLoad() and outputs them in any way you want. Trying to do fancy ASCII output in parallel is not worth spending time. Barry On Jan 23, 2013, at 2:18 PM, "Kharche, Sanjay" wrote: > > Dear All > > I am an absolute beginner to PetSc. I am trying to output PetSc vectors in a specific format. This may have been discussed before, but I have so far not found a solution. > > For 2 or 3 vectors, I do this: > > /* > now write it to file. > I would like row 1 to be u, row 2 to be b, and row 3 to be u. that way I can use > gnuplots surf, and also all my existin matlab code for plotting/analysis. > */ > PetscViewer viewer; // a Petsc file pointer. 
> PetscViewerASCIIOpen(PETSC_COMM_WORLD,"ubx.dat",&viewer); > VecView(u,viewer); // this comes out with information that I dont want, and in a column - I need to put it as a row. > VecView(b,viewer); > VecView(x,viewer); > PetscViewerDestroy( &viewer ); > > However, I have not been able to get rid of the information about procs, and also the vectors u, b, and u need to be in row format as: > 1 2 1 1 1 > 0 0 0 1 0 > -1 2 -1 0 > > rather than what I have now: > > Vector Object: 1 MPI processes > type: seq > 1 > 2 > 1 > 1 > 1 > etc.... > > > Any suggestions on how to do this will be appreciated. > > thanks > Sanjay From Sanjay.Kharche at liverpool.ac.uk Thu Jan 24 10:03:33 2013 From: Sanjay.Kharche at liverpool.ac.uk (Kharche, Sanjay) Date: Thu, 24 Jan 2013 16:03:33 +0000 Subject: [petsc-users] output In-Reply-To: References: <04649ABFF695C94F8E6CF3BBBA9B1665598A6C78@BHEXMBX1.livad.liv.ac.uk>, Message-ID: <04649ABFF695C94F8E6CF3BBBA9B1665598A6DC9@BHEXMBX1.livad.liv.ac.uk> Hi Barry, All I am still taking the first few steps towards getting started. Yesterday, I was trying to output a vector in a specific format, essentially a row of 10 numbers only rather than a column of numbers+metadata. The suggestion for that was to output the vector as a binary, and then read it in using VecLoad from another standalone program and do whatever I wanted. Both my programs are serial, so there is no issue of parallel. In my standalone, I do something like: PetscViewer viewer; PetscViewerBinaryOpen(PETSC_COMM_WORLD,"ubu.bin",FILE_MODE_READ,&viewer); VecLoad(u, viewer); PetscViewerDestroy(&viewer); // check that u has the values you think it should have. VecView(u, PETSC_VIEWER_STDOUT_WORLD); // yes it does. // Now test if u now visible to printf so I can do this: for(i=0;i<10;i++) fprintf(my_non_petscfile,"%f ",u[i]); fprintf(my_non_petscfile,"\n"); // but this does not work - the values of u are not as shown by VecView! And I still cannot output the vector u as a row into the non_petscfile. Can you help? thanks Sanjay ________________________________________ From: petsc-users-bounces at mcs.anl.gov [petsc-users-bounces at mcs.anl.gov] on behalf of Barry Smith [bsmith at mcs.anl.gov] Sent: 23 January 2013 20:53 To: PETSc users list Subject: Re: [petsc-users] output Sanjay, Since you are wanting ASCII output you don't need to worry about absolute scalability. Thus what I would do is in your parallel PETSc application code save the vectors with VecView() to binary format. Then write a stand-alone sequential program in C, Matlab, Python that reads in the binary vectors with VecLoad() and outputs them in any way you want. Trying to do fancy ASCII output in parallel is not worth spending time. Barry On Jan 23, 2013, at 2:18 PM, "Kharche, Sanjay" wrote: > > Dear All > > I am an absolute beginner to PetSc. I am trying to output PetSc vectors in a specific format. This may have been discussed before, but I have so far not found a solution. > > For 2 or 3 vectors, I do this: > > /* > now write it to file. > I would like row 1 to be u, row 2 to be b, and row 3 to be u. that way I can use > gnuplots surf, and also all my existin matlab code for plotting/analysis. > */ > PetscViewer viewer; // a Petsc file pointer. > PetscViewerASCIIOpen(PETSC_COMM_WORLD,"ubx.dat",&viewer); > VecView(u,viewer); // this comes out with information that I dont want, and in a column - I need to put it as a row. 
> VecView(b,viewer); > VecView(x,viewer); > PetscViewerDestroy( &viewer ); > > However, I have not been able to get rid of the information about procs, and also the vectors u, b, and u need to be in row format as: > 1 2 1 1 1 > 0 0 0 1 0 > -1 2 -1 0 > > rather than what I have now: > > Vector Object: 1 MPI processes > type: seq > 1 > 2 > 1 > 1 > 1 > etc.... > > > Any suggestions on how to do this will be appreciated. > > thanks > Sanjay From knepley at gmail.com Thu Jan 24 14:35:20 2013 From: knepley at gmail.com (Matthew Knepley) Date: Thu, 24 Jan 2013 14:35:20 -0600 Subject: [petsc-users] output In-Reply-To: <04649ABFF695C94F8E6CF3BBBA9B1665598A6DC9@BHEXMBX1.livad.liv.ac.uk> References: <04649ABFF695C94F8E6CF3BBBA9B1665598A6C78@BHEXMBX1.livad.liv.ac.uk> <04649ABFF695C94F8E6CF3BBBA9B1665598A6DC9@BHEXMBX1.livad.liv.ac.uk> Message-ID: On Thu, Jan 24, 2013 at 10:03 AM, Kharche, Sanjay < Sanjay.Kharche at liverpool.ac.uk> wrote: > > Hi Barry, All > > I am still taking the first few steps towards getting started. > > Yesterday, I was trying to output a vector in a specific format, > essentially a row of 10 numbers only rather than a column of > numbers+metadata. The suggestion for that was to output the vector as a > binary, and then read it in using VecLoad from another standalone program > and do whatever I wanted. Both my programs are serial, so there is no issue > of parallel. In my standalone, I do something like: > > PetscViewer viewer; > PetscViewerBinaryOpen(PETSC_COMM_WORLD,"ubu.bin",FILE_MODE_READ,&viewer); > VecLoad(u, viewer); > PetscViewerDestroy(&viewer); > > // check that u has the values you think it should have. > VecView(u, PETSC_VIEWER_STDOUT_WORLD); // yes it does. > > // Now test if u now visible to printf so I can do this: > for(i=0;i<10;i++) > fprintf(my_non_petscfile,"%f ",u[i]); > fprintf(my_non_petscfile,"\n"); > // but this does not work - the values of u are not as shown by VecView! > > And I still cannot output the vector u as a row into the non_petscfile. > Can you help? > This is not the whole code. Where is the call to VecGetArray()? Matt > thanks > Sanjay > > > ________________________________________ > From: petsc-users-bounces at mcs.anl.gov [petsc-users-bounces at mcs.anl.gov] > on behalf of Barry Smith [bsmith at mcs.anl.gov] > Sent: 23 January 2013 20:53 > To: PETSc users list > Subject: Re: [petsc-users] output > > Sanjay, > > Since you are wanting ASCII output you don't need to worry about > absolute scalability. Thus what I would do is in your parallel PETSc > application code save the vectors with VecView() to binary format. Then > write a stand-alone sequential program in C, Matlab, Python that reads in > the binary vectors with VecLoad() and outputs them in any way you want. > Trying to do fancy ASCII output in parallel is not worth spending time. > > Barry > > On Jan 23, 2013, at 2:18 PM, "Kharche, Sanjay" < > Sanjay.Kharche at liverpool.ac.uk> wrote: > > > > > Dear All > > > > I am an absolute beginner to PetSc. I am trying to output PetSc vectors > in a specific format. This may have been discussed before, but I have so > far not found a solution. > > > > For 2 or 3 vectors, I do this: > > > > /* > > now write it to file. > > I would like row 1 to be u, row 2 to be b, and row 3 to be u. that way I > can use > > gnuplots surf, and also all my existin matlab code for plotting/analysis. > > */ > > PetscViewer viewer; // a Petsc file pointer. 
> > PetscViewerASCIIOpen(PETSC_COMM_WORLD,"ubx.dat",&viewer); > > VecView(u,viewer); // this comes out with information that I dont want, > and in a column - I need to put it as a row. > > VecView(b,viewer); > > VecView(x,viewer); > > PetscViewerDestroy( &viewer ); > > > > However, I have not been able to get rid of the information about procs, > and also the vectors u, b, and u need to be in row format as: > > 1 2 1 1 1 > > 0 0 0 1 0 > > -1 2 -1 0 > > > > rather than what I have now: > > > > Vector Object: 1 MPI processes > > type: seq > > 1 > > 2 > > 1 > > 1 > > 1 > > etc.... > > > > > > Any suggestions on how to do this will be appreciated. > > > > thanks > > Sanjay > > -- What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. -- Norbert Wiener -------------- next part -------------- An HTML attachment was scrubbed... URL: From lyh03259.aps at gmail.com Thu Jan 24 15:38:53 2013 From: lyh03259.aps at gmail.com (Yonghui) Date: Thu, 24 Jan 2013 15:38:53 -0600 Subject: [petsc-users] A little help for running config Message-ID: <000901cdfa7b$35134870$9f39d950$@gmail.com> Dear PETSc users, I am trying to compile PETSc, but there seems to be some problems when I run config. Here is my config command: ./config --prefix=/opt/PETSc --with-c++-support=1 --with-c-support=1 --with-fortran=1 --with-mpi-dir=/opt/mpich2 And here is the error message: TESTING: checkCLibraries from config.compilers(/home/biu/Projects/petsc-3.3-p5/config/BuildSystem/c onfig/compilers.py:161) **************************************************************************** *** UNABLE to EXECUTE BINARIES for ./configure ---------------------------------------------------------------------------- --- Cannot run executables created with FC. If this machine uses a batch system to submit jobs you will need to configure using ./configure with the additional option --with-batch. Otherwise there is problem with the compilers. Can you compile and run code with your C/C++ (and maybe Fortran) compilers? I just installed intel compiler on the machine this morning without any error. Then I compiled mpich2 (details below) with the intel compiler. Then sudo make install in /opt/mpich2. There is no error when compiling mpich2. More details: OS: Ubuntu 12.04 LTS (32-bit). CPU: Intel? Core? i7-2630QM CPU @ 2.00GHz ? 4 RAM: 2GB Compiler: icc (ICC) 13.0.1 20121010, ifort (IFORT) 13.0.1 20121010 (Parallel studio 2013 update1). MPICH2: 1.4.1p1 (compiled with compilers above without error). Any suggestion will be appreciated. Thanks, Yonghui -------------- next part -------------- An HTML attachment was scrubbed... URL: From balay at mcs.anl.gov Thu Jan 24 15:42:41 2013 From: balay at mcs.anl.gov (Satish Balay) Date: Thu, 24 Jan 2013 15:42:41 -0600 (CST) Subject: [petsc-users] A little help for running config In-Reply-To: <000901cdfa7b$35134870$9f39d950$@gmail.com> References: <000901cdfa7b$35134870$9f39d950$@gmail.com> Message-ID: send configure.log to petsc-maint Satish On Thu, 24 Jan 2013, Yonghui wrote: > Dear PETSc users, > > > > I am trying to compile PETSc, but there seems to be some problems when I run > config. 
> > Here is my config command: ./config --prefix=/opt/PETSc --with-c++-support=1 > --with-c-support=1 --with-fortran=1 --with-mpi-dir=/opt/mpich2 > > > > And here is the error message: > > TESTING: checkCLibraries from > config.compilers(/home/biu/Projects/petsc-3.3-p5/config/BuildSystem/c > > onfig/compilers.py:161) > > **************************************************************************** > *** > > UNABLE to EXECUTE BINARIES for ./configure > > ---------------------------------------------------------------------------- > --- > > Cannot run executables created with FC. If this machine uses a batch system > to submit jobs you will need to configure using ./configure with the > additional option --with-batch. > > Otherwise there is problem with the compilers. Can you compile and run code > with your C/C++ (and maybe Fortran) compilers? > > > > I just installed intel compiler on the machine this morning without any > error. Then I compiled mpich2 (details below) with the intel compiler. > > Then sudo make install in /opt/mpich2. There is no error when compiling > mpich2. > > More details: > > OS: Ubuntu 12.04 LTS (32-bit). > > CPU: Intel? Core? i7-2630QM CPU @ 2.00GHz ? 4 > > RAM: 2GB > > Compiler: icc (ICC) 13.0.1 20121010, ifort (IFORT) 13.0.1 20121010 (Parallel > studio 2013 update1). > > MPICH2: 1.4.1p1 (compiled with compilers above without error). > > > > Any suggestion will be appreciated. > > > > Thanks, > > Yonghui > > From Sanjay.Kharche at liverpool.ac.uk Thu Jan 24 10:03:33 2013 From: Sanjay.Kharche at liverpool.ac.uk (Kharche, Sanjay) Date: Thu, 24 Jan 2013 16:03:33 +0000 Subject: [petsc-users] output In-Reply-To: References: <04649ABFF695C94F8E6CF3BBBA9B1665598A6C78@BHEXMBX1.livad.liv.ac.uk>, Message-ID: <04649ABFF695C94F8E6CF3BBBA9B1665598A6DC9@BHEXMBX1.livad.liv.ac.uk> Hi Barry, All I am still taking the first few steps towards getting started. Yesterday, I was trying to output a vector in a specific format, essentially a row of 10 numbers only rather than a column of numbers+metadata. The suggestion for that was to output the vector as a binary, and then read it in using VecLoad from another standalone program and do whatever I wanted. Both my programs are serial, so there is no issue of parallel. In my standalone, I do something like: PetscViewer viewer; PetscViewerBinaryOpen(PETSC_COMM_WORLD,"ubu.bin",FILE_MODE_READ,&viewer); VecLoad(u, viewer); PetscViewerDestroy(&viewer); // check that u has the values you think it should have. VecView(u, PETSC_VIEWER_STDOUT_WORLD); // yes it does. // Now test if u now visible to printf so I can do this: for(i=0;i<10;i++) fprintf(my_non_petscfile,"%f ",u[i]); fprintf(my_non_petscfile,"\n"); // but this does not work - the values of u are not as shown by VecView! And I still cannot output the vector u as a row into the non_petscfile. Can you help? thanks Sanjay ________________________________________ From: petsc-users-bounces at mcs.anl.gov [petsc-users-bounces at mcs.anl.gov] on behalf of Barry Smith [bsmith at mcs.anl.gov] Sent: 23 January 2013 20:53 To: PETSc users list Subject: Re: [petsc-users] output Sanjay, Since you are wanting ASCII output you don't need to worry about absolute scalability. Thus what I would do is in your parallel PETSc application code save the vectors with VecView() to binary format. Then write a stand-alone sequential program in C, Matlab, Python that reads in the binary vectors with VecLoad() and outputs them in any way you want. Trying to do fancy ASCII output in parallel is not worth spending time. 
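To make that concrete, a rough sketch of such a stand-alone sequential reader, using VecGetArray() (the function mentioned earlier in this thread) to reach the entries; the file names are placeholders and the vector is assumed to have been written earlier with VecView() on a binary viewer:

/* Sketch: read a vector from a PETSc binary file and print it as one row. */
#include <petscvec.h>
#include <stdio.h>

int main(int argc,char **argv)
{
  Vec            u;
  PetscViewer    viewer;
  PetscScalar    *a;
  PetscInt       i,n;
  FILE           *fp;
  PetscErrorCode ierr;

  PetscInitialize(&argc,&argv,(char *)0,(char *)0);
  ierr = PetscViewerBinaryOpen(PETSC_COMM_WORLD,"ubu.bin",FILE_MODE_READ,&viewer);CHKERRQ(ierr);
  ierr = VecCreate(PETSC_COMM_WORLD,&u);CHKERRQ(ierr);
  ierr = VecSetFromOptions(u);CHKERRQ(ierr);
  ierr = VecLoad(u,viewer);CHKERRQ(ierr);
  ierr = PetscViewerDestroy(&viewer);CHKERRQ(ierr);

  ierr = VecGetSize(u,&n);CHKERRQ(ierr);
  ierr = VecGetArray(u,&a);CHKERRQ(ierr);        /* direct access to the entries */
  fp = fopen("u_row.dat","w");
  for (i=0; i<n; i++) fprintf(fp,"%g ",(double)PetscRealPart(a[i]));
  fprintf(fp,"\n");
  fclose(fp);
  ierr = VecRestoreArray(u,&a);CHKERRQ(ierr);

  ierr = VecDestroy(&u);CHKERRQ(ierr);
  ierr = PetscFinalize();
  return 0;
}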
Barry On Jan 23, 2013, at 2:18 PM, "Kharche, Sanjay" wrote: > > Dear All > > I am an absolute beginner to PetSc. I am trying to output PetSc vectors in a specific format. This may have been discussed before, but I have so far not found a solution. > > For 2 or 3 vectors, I do this: > > /* > now write it to file. > I would like row 1 to be u, row 2 to be b, and row 3 to be u. that way I can use > gnuplots surf, and also all my existin matlab code for plotting/analysis. > */ > PetscViewer viewer; // a Petsc file pointer. > PetscViewerASCIIOpen(PETSC_COMM_WORLD,"ubx.dat",&viewer); > VecView(u,viewer); // this comes out with information that I dont want, and in a column - I need to put it as a row. > VecView(b,viewer); > VecView(x,viewer); > PetscViewerDestroy( &viewer ); > > However, I have not been able to get rid of the information about procs, and also the vectors u, b, and u need to be in row format as: > 1 2 1 1 1 > 0 0 0 1 0 > -1 2 -1 0 > > rather than what I have now: > > Vector Object: 1 MPI processes > type: seq > 1 > 2 > 1 > 1 > 1 > etc.... > > > Any suggestions on how to do this will be appreciated. > > thanks > Sanjay From Wadud.Miah at awe.co.uk Fri Jan 25 08:56:19 2013 From: Wadud.Miah at awe.co.uk (Wadud.Miah at awe.co.uk) Date: Fri, 25 Jan 2013 14:56:19 +0000 Subject: [petsc-users] using hybrid chebyshev Message-ID: <201301251456.r0PEuRVM001471@msw1.awe.co.uk> The changelog of PETSc 3.3-p5 says that it now provides a hybrid Chebyshev solver. What parameter do you pass into the Fortran KSPSetType subroutine to use this solver? Regards, -------------------------- Wadud Miah HPC, Design Physics Division Direct: 0118 98 56220 AWE, Aldermaston, Reading, RG7 4PR ___________________________________________________ ____________________________ The information in this email and in any attachment(s) is commercial in confidence. If you are not the named addressee(s) or if you receive this email in error then any distribution, copying or use of this communication or the information in it is strictly prohibited. Please notify us immediately by email at admin.internet(at)awe.co.uk, and then delete this message from your computer. While attachments are virus checked, AWE plc does not accept any liability in respect of any virus which is not detected. AWE Plc Registered in England and Wales Registration No 02763902 AWE, Aldermaston, Reading, RG7 4PR -------------- next part -------------- An HTML attachment was scrubbed... URL: From jedbrown at mcs.anl.gov Fri Jan 25 09:16:52 2013 From: jedbrown at mcs.anl.gov (Jed Brown) Date: Fri, 25 Jan 2013 09:16:52 -0600 Subject: [petsc-users] using hybrid chebyshev In-Reply-To: <201301251456.r0PEuRVM001471@msw1.awe.co.uk> References: <201301251456.r0PEuRVM001471@msw1.awe.co.uk> Message-ID: KSPCHEBYSHEV (or -ksp_type chebyshev). The hybrid option is -ksp_chebyshev_hybrid (see -help for related options). There is currently no functional interface in code, just the options database. On Fri, Jan 25, 2013 at 8:56 AM, wrote: > ****** > > The changelog of PETSc 3.3-p5 says that it now provides a hybrid Chebyshev > solver. What parameter do you pass into the Fortran KSPSetType subroutine > to use this solver? 
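As a rough sketch (ksp is assumed to be an already created KSP; PetscOptionsSetValue() is just one way to set the flag from code, since there is no dedicated routine for the hybrid option), the C version would look something like the lines below; from Fortran the same calls apply with a trailing ierr argument, e.g. call KSPSetType(ksp,KSPCHEBYSHEV,ierr):

/* Sketch: select Chebyshev, then turn on the hybrid variant through the
   options database; equivalent to -ksp_type chebyshev -ksp_chebyshev_hybrid
   on the command line. */
ierr = KSPSetType(ksp,KSPCHEBYSHEV);CHKERRQ(ierr);
ierr = PetscOptionsSetValue("-ksp_chebyshev_hybrid",PETSC_NULL);CHKERRQ(ierr);
ierr = KSPSetFromOptions(ksp);CHKERRQ(ierr);   /* reads the option set above */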
**** > > ** ** > > Regards,**** > > ** ** > > *--------------------------***** > > *Wadud Miah* > *HPC, Design Physics Division** > *Direct: 0118 98 56220 > AWE, Aldermaston, ****Reading**, ** RG7 4PR******** > > **** > > ** ** > > ___________________________________________________ > ____________________________ The information in this email and in any > attachment(s) is commercial in confidence. If you are not the named > addressee(s) or if you receive this email in error then any distribution, > copying or use of this communication or the information in it is strictly > prohibited. Please notify us immediately by email at admin.internet(at) > awe.co.uk, and then delete this message from your computer. While > attachments are virus checked, AWE plc does not accept any liability in > respect of any virus which is not detected. AWE Plc Registered in England > and Wales Registration No 02763902 AWE, Aldermaston, Reading, RG7 4PR > -------------- next part -------------- An HTML attachment was scrubbed... URL: From lingzou80 at gmail.com Mon Jan 28 12:39:54 2013 From: lingzou80 at gmail.com (Ling Zou) Date: Mon, 28 Jan 2013 11:39:54 -0700 Subject: [petsc-users] newbie questions on preconditioner LU Message-ID: Hi, All I am trying to understand how the preconditioner works when using KSP. For example, when using KSP to solve the linear system problem, Ax = b with the default left preconditioning. We actually solve, M^(-1) * A x = M^(-1) * b where, M is the preconditioning matrix and in many cases, we just use A as the preconditioning matrix. Question: 1), Is the understanding above correct? 2), If the understanding above is correct, is it correct to state the different methods provided in PETSc (such as PCLU, PCILU, etc) are to calculate the inverse matrix M^(-1) from M? 3), How to understand this sentence in the manual (PETSc Users Manual, Reversion 3.3, page 78, under 4.4 Preconditioners) "The direct preconditioner, PCLU, is, in fact, a direct solver for the linear system that uses LU factorization. PCLU is included as a preconditioner so that PETSc has a consistent interface among direct and iterative linear solvers." Does this indicate when using PCLU, we solve Ax = b directly using LU factorization, or, we solve M^(-1) from M using LU factorization? As a beginner to the PETSc, all questions are probably too simple. I'd appreciate it if someone could answer my questions. Best, Ling -------------- next part -------------- An HTML attachment was scrubbed... URL: From knepley at gmail.com Mon Jan 28 13:01:30 2013 From: knepley at gmail.com (Matthew Knepley) Date: Mon, 28 Jan 2013 14:01:30 -0500 Subject: [petsc-users] newbie questions on preconditioner LU In-Reply-To: References: Message-ID: On Mon, Jan 28, 2013 at 1:39 PM, Ling Zou wrote: > Hi, All > > I am trying to understand how the preconditioner works when using KSP. > > For example, when using KSP to solve the linear system problem, > > Ax = b > > with the default left preconditioning. We actually solve, > > M^(-1) * A x = M^(-1) * b > > where, M is the preconditioning matrix and in many cases, we just use A as > the preconditioning matrix. > > > Question: > 1), Is the understanding above correct? > This is too simplistic. If you really mean M^{-1}, then no, you (almost) never use A as M. If you mean an approximate inverse to M, then yes. > 2), If the understanding above is correct, is it correct to state the > different methods provided in PETSc (such as PCLU, PCILU, etc) are to > calculate the inverse matrix M^(-1) from M? 
> An approximate inverse. > 3), How to understand this sentence in the manual (PETSc Users Manual, > Reversion 3.3, page 78, under 4.4 Preconditioners) > "The direct preconditioner, PCLU, is, in fact, a direct solver for the > linear system that uses LU factorization. PCLU is included as a > preconditioner so that PETSc has a consistent interface among direct and > iterative linear solvers." > Does this indicate when using PCLU, we solve Ax = b directly using LU > factorization, or, we solve M^(-1) from M using LU factorization? > Same thing, if M = A, M^{-1} A x = A^{-1} A x = x = A^{-1} b which is Gaussian elimination for the original problem. Matt > As a beginner to the PETSc, all questions are probably too simple. I'd > appreciate it if someone could answer my questions. > > Best, > > Ling > -- What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. -- Norbert Wiener -------------- next part -------------- An HTML attachment was scrubbed... URL: From jedbrown at mcs.anl.gov Mon Jan 28 13:12:01 2013 From: jedbrown at mcs.anl.gov (Jed Brown) Date: Mon, 28 Jan 2013 13:12:01 -0600 Subject: [petsc-users] newbie questions on preconditioner LU In-Reply-To: References: Message-ID: On Mon, Jan 28, 2013 at 1:01 PM, Matthew Knepley wrote: > M^(-1) * A x = M^(-1) * b >> >> where, M is the preconditioning matrix and in many cases, we just use A >> as the preconditioning matrix. >> >> >> Question: >> 1), Is the understanding above correct? >> > > This is too simplistic. If you really mean M^{-1}, then no, you (almost) > never use A as M. If you mean an > approximate inverse to M, then yes. > Specifically, the notation "M^{-1}" indicates a matrix M exists and that we are applying its inverse. In practice, M is never computed explicitly. In the case of incomplete factorization, M "exists" in that it is equal to L*U, but we still don't form it explicitly, we just apply it as M^{-1} x = U^{-1} (L^{-1} x). -------------- next part -------------- An HTML attachment was scrubbed... URL: From lingzou80 at gmail.com Mon Jan 28 13:27:31 2013 From: lingzou80 at gmail.com (Ling Zou) Date: Mon, 28 Jan 2013 12:27:31 -0700 Subject: [petsc-users] newbie questions on preconditioner LU In-Reply-To: References: Message-ID: On Mon, Jan 28, 2013 at 12:01 PM, Matthew Knepley wrote: > On Mon, Jan 28, 2013 at 1:39 PM, Ling Zou wrote: > >> Hi, All >> >> I am trying to understand how the preconditioner works when using KSP. >> >> For example, when using KSP to solve the linear system problem, >> >> Ax = b >> >> with the default left preconditioning. We actually solve, >> >> M^(-1) * A x = M^(-1) * b >> >> where, M is the preconditioning matrix and in many cases, we just use A >> as the preconditioning matrix. >> >> >> Question: >> 1), Is the understanding above correct? >> > > This is too simplistic. If you really mean M^{-1}, then no, you (almost) > never use A as M. If you mean an > approximate inverse to M, then yes. > > >> 2), If the understanding above is correct, is it correct to state the >> different methods provided in PETSc (such as PCLU, PCILU, etc) are to >> calculate the inverse matrix M^(-1) from M? >> > > An approximate inverse. > > >> 3), How to understand this sentence in the manual (PETSc Users Manual, >> Reversion 3.3, page 78, under 4.4 Preconditioners) >> "The direct preconditioner, PCLU, is, in fact, a direct solver for the >> linear system that uses LU factorization. 
PCLU is included as a >> preconditioner so that PETSc has a consistent interface among direct and >> iterative linear solvers." >> Does this indicate when using PCLU, we solve Ax = b directly using LU >> factorization, or, we solve M^(-1) from M using LU factorization? >> > > Same thing, if M = A, > > M^{-1} A x = A^{-1} A x = x = A^{-1} b > > which is Gaussian elimination for the original problem. > Ahh.. that's true! In case M is not A (as you pointed out earlier), does PCLU provide the approximated inverse matrix of M^{-1} using LU factorization on M? > Matt > > >> As a beginner to the PETSc, all questions are probably too simple. I'd >> appreciate it if someone could answer my questions. >> >> Best, >> >> Ling >> > > > > -- > What most experimenters take for granted before they begin their > experiments is infinitely more interesting than any results to which their > experiments lead. > -- Norbert Wiener > -------------- next part -------------- An HTML attachment was scrubbed... URL: From knepley at gmail.com Mon Jan 28 13:29:16 2013 From: knepley at gmail.com (Matthew Knepley) Date: Mon, 28 Jan 2013 14:29:16 -0500 Subject: [petsc-users] newbie questions on preconditioner LU In-Reply-To: References: Message-ID: On Mon, Jan 28, 2013 at 2:27 PM, Ling Zou wrote: > > > On Mon, Jan 28, 2013 at 12:01 PM, Matthew Knepley wrote: > >> On Mon, Jan 28, 2013 at 1:39 PM, Ling Zou wrote: >> >>> Hi, All >>> >>> I am trying to understand how the preconditioner works when using KSP. >>> >>> For example, when using KSP to solve the linear system problem, >>> >>> Ax = b >>> >>> with the default left preconditioning. We actually solve, >>> >>> M^(-1) * A x = M^(-1) * b >>> >>> where, M is the preconditioning matrix and in many cases, we just use A >>> as the preconditioning matrix. >>> >>> >>> Question: >>> 1), Is the understanding above correct? >>> >> >> This is too simplistic. If you really mean M^{-1}, then no, you (almost) >> never use A as M. If you mean an >> approximate inverse to M, then yes. >> >> >>> 2), If the understanding above is correct, is it correct to state the >>> different methods provided in PETSc (such as PCLU, PCILU, etc) are to >>> calculate the inverse matrix M^(-1) from M? >>> >> >> An approximate inverse. >> >> >>> 3), How to understand this sentence in the manual (PETSc Users Manual, >>> Reversion 3.3, page 78, under 4.4 Preconditioners) >>> "The direct preconditioner, PCLU, is, in fact, a direct solver for the >>> linear system that uses LU factorization. PCLU is included as a >>> preconditioner so that PETSc has a consistent interface among direct and >>> iterative linear solvers." >>> Does this indicate when using PCLU, we solve Ax = b directly using LU >>> factorization, or, we solve M^(-1) from M using LU factorization? >>> >> >> Same thing, if M = A, >> >> M^{-1} A x = A^{-1} A x = x = A^{-1} b >> >> which is Gaussian elimination for the original problem. >> > > Ahh.. that's true! > In case M is not A (as you pointed out earlier), does PCLU provide the > approximated inverse matrix of M^{-1} using LU factorization on M? > Yes. Matt > >> Matt >> >> >>> As a beginner to the PETSc, all questions are probably too simple. I'd >>> appreciate it if someone could answer my questions. >>> >>> Best, >>> >>> Ling >>> >> >> >> >> -- >> What most experimenters take for granted before they begin their >> experiments is infinitely more interesting than any results to which their >> experiments lead. 
>> -- Norbert Wiener >> > > -- What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. -- Norbert Wiener -------------- next part -------------- An HTML attachment was scrubbed... URL: From jedbrown at mcs.anl.gov Mon Jan 28 13:33:26 2013 From: jedbrown at mcs.anl.gov (Jed Brown) Date: Mon, 28 Jan 2013 13:33:26 -0600 Subject: [petsc-users] newbie questions on preconditioner LU In-Reply-To: References: Message-ID: On Mon, Jan 28, 2013 at 1:27 PM, Ling Zou wrote: > Ahh.. that's true! > In case M is not A (as you pointed out earlier), does PCLU provide the > approximated inverse matrix of M^{-1} using LU factorization on M? > Not really, preconditioners are based on inexact algorithms applied to A, not explicit formation of an M that is easier to factor exactly. Since the preconditioner P ("=M^{-1}") is non-singular, there _exists_ an M such that P=M^{-1}, but M is not explicitly computed and it's not used in the solve. Only P is used, and only in special cases (like incomplete factorization) is there even a practical algorithm available to compute M if you wanted to. (For many interesting algorithms, M is dense even though A is sparse.) -------------- next part -------------- An HTML attachment was scrubbed... URL: From knepley at gmail.com Mon Jan 28 13:35:11 2013 From: knepley at gmail.com (Matthew Knepley) Date: Mon, 28 Jan 2013 14:35:11 -0500 Subject: [petsc-users] newbie questions on preconditioner LU In-Reply-To: References: Message-ID: On Mon, Jan 28, 2013 at 2:33 PM, Jed Brown wrote: > > On Mon, Jan 28, 2013 at 1:27 PM, Ling Zou wrote: > >> Ahh.. that's true! >> In case M is not A (as you pointed out earlier), does PCLU provide the >> approximated inverse matrix of M^{-1} using LU factorization on M? >> > > Not really, preconditioners are based on inexact algorithms applied to A, > not explicit formation of an M that is easier to factor exactly. Since the > preconditioner P ("=M^{-1}") is non-singular, there _exists_ an M such that > P=M^{-1}, but M is not explicitly computed and it's not used in the solve. > Only P is used, and only in special cases (like incomplete factorization) > is there even a practical algorithm available to compute M if you wanted > to. (For many interesting algorithms, M is dense even though A is sparse.) > Jed, I don't think that was the question. He was asking, does LU always use A as the matrix to factorize, or can I use M? to which the answer is plainly Yes. Matt -- What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. -- Norbert Wiener -------------- next part -------------- An HTML attachment was scrubbed... URL: From lingzou80 at gmail.com Mon Jan 28 13:36:57 2013 From: lingzou80 at gmail.com (Ling Zou) Date: Mon, 28 Jan 2013 12:36:57 -0700 Subject: [petsc-users] newbie questions on preconditioner LU In-Reply-To: References: Message-ID: Thank you Matt. 
I guess I got the impression that M is generally the same as A from reading the manual.(PETSc Users Manual, Reversion 3.3, page 71, under 4.1 Using KSP) "Typically the preconditioning matrix (i.e., the matrix from which the preconditioner is to be constructed), Pmat, is the same as the matrix that defines the linear system, Amat; however, occasionally these matrices differ (for instance, when a preconditioning matrix is obtained from a lower order method than that employed to form the linear system matrix)." Best, Ling On Mon, Jan 28, 2013 at 12:29 PM, Matthew Knepley wrote: > On Mon, Jan 28, 2013 at 2:27 PM, Ling Zou wrote: > >> >> >> On Mon, Jan 28, 2013 at 12:01 PM, Matthew Knepley wrote: >> >>> On Mon, Jan 28, 2013 at 1:39 PM, Ling Zou wrote: >>> >>>> Hi, All >>>> >>>> I am trying to understand how the preconditioner works when using KSP. >>>> >>>> For example, when using KSP to solve the linear system problem, >>>> >>>> Ax = b >>>> >>>> with the default left preconditioning. We actually solve, >>>> >>>> M^(-1) * A x = M^(-1) * b >>>> >>>> where, M is the preconditioning matrix and in many cases, we just use A >>>> as the preconditioning matrix. >>>> >>>> >>>> Question: >>>> 1), Is the understanding above correct? >>>> >>> >>> This is too simplistic. If you really mean M^{-1}, then no, you (almost) >>> never use A as M. If you mean an >>> approximate inverse to M, then yes. >>> >>> >>>> 2), If the understanding above is correct, is it correct to state the >>>> different methods provided in PETSc (such as PCLU, PCILU, etc) are to >>>> calculate the inverse matrix M^(-1) from M? >>>> >>> >>> An approximate inverse. >>> >>> >>>> 3), How to understand this sentence in the manual (PETSc Users Manual, >>>> Reversion 3.3, page 78, under 4.4 Preconditioners) >>>> "The direct preconditioner, PCLU, is, in fact, a direct solver for the >>>> linear system that uses LU factorization. PCLU is included as a >>>> preconditioner so that PETSc has a consistent interface among direct and >>>> iterative linear solvers." >>>> Does this indicate when using PCLU, we solve Ax = b directly using LU >>>> factorization, or, we solve M^(-1) from M using LU factorization? >>>> >>> >>> Same thing, if M = A, >>> >>> M^{-1} A x = A^{-1} A x = x = A^{-1} b >>> >>> which is Gaussian elimination for the original problem. >>> >> >> Ahh.. that's true! >> In case M is not A (as you pointed out earlier), does PCLU provide the >> approximated inverse matrix of M^{-1} using LU factorization on M? >> > > Yes. > > Matt > > >> >>> Matt >>> >>> >>>> As a beginner to the PETSc, all questions are probably too simple. I'd >>>> appreciate it if someone could answer my questions. >>>> >>>> Best, >>>> >>>> Ling >>>> >>> >>> >>> >>> -- >>> What most experimenters take for granted before they begin their >>> experiments is infinitely more interesting than any results to which their >>> experiments lead. >>> -- Norbert Wiener >>> >> >> > > > -- > What most experimenters take for granted before they begin their > experiments is infinitely more interesting than any results to which their > experiments lead. > -- Norbert Wiener > -------------- next part -------------- An HTML attachment was scrubbed... URL: From jedbrown at mcs.anl.gov Mon Jan 28 13:40:20 2013 From: jedbrown at mcs.anl.gov (Jed Brown) Date: Mon, 28 Jan 2013 13:40:20 -0600 Subject: [petsc-users] newbie questions on preconditioner LU In-Reply-To: References: Message-ID: On Mon, Jan 28, 2013 at 1:35 PM, Matthew Knepley wrote: > Jed, I don't think that was the question. 
He was asking, does LU always > use A as the matrix to factorize, or can I use M? to which the > answer is plainly Yes. > Okay, but when using KSPSetOperators(ksp,A,M,...), the preconditioner is *not* M^{-1} (unless you use LU). -------------- next part -------------- An HTML attachment was scrubbed... URL: From jedbrown at mcs.anl.gov Mon Jan 28 13:43:36 2013 From: jedbrown at mcs.anl.gov (Jed Brown) Date: Mon, 28 Jan 2013 13:43:36 -0600 Subject: [petsc-users] newbie questions on preconditioner LU In-Reply-To: References: Message-ID: On Mon, Jan 28, 2013 at 1:36 PM, Ling Zou wrote: > I guess I got the impression that M is generally the same as A from > reading the manual.(PETSc Users Manual, Reversion 3.3, page 71, under 4.1 > Using KSP) > > "Typically the preconditioning matrix (i.e., the matrix from which the > preconditioner is to be constructed), Pmat, is the same as the matrix that > defines the linear system, Amat; however, occasionally these matrices > differ (for instance, when a preconditioning matrix is obtained from a > lower order method than that employed to form the linear system matrix)." > I think you are getting confused by mixed notation. A PC in PETSc is an algorithm that takes a matrix (Pmat in the docs) and does some work to be able to apply an operation (named "M^{-1}" in your first email). This does *not* imply that M=Pmat, or that M is ever available or used, there is just a a linear operation named "M^{-1}". -------------- next part -------------- An HTML attachment was scrubbed... URL: From lingzou80 at gmail.com Mon Jan 28 13:46:42 2013 From: lingzou80 at gmail.com (Ling Zou) Date: Mon, 28 Jan 2013 12:46:42 -0700 Subject: [petsc-users] newbie questions on preconditioner LU In-Reply-To: References: Message-ID: Thank you Jed. As you explained, M is not explicitly computed. However, when setup the ksp KSPSetOperators(KSP ksp,Mat Amat,Mat Pmat,MatStructure flag); We need a Pmat (the M we are talking about here). In the solver, we actually need M^{-1} as the preconditioning matrix, so we don't need to compute the M matrix but need PC to get the approximated M^{-1}, right? Even we don't explicitly compute M, but we still need provide non-zero entries for this M, is it correct? Ling On Mon, Jan 28, 2013 at 12:33 PM, Jed Brown wrote: > > On Mon, Jan 28, 2013 at 1:27 PM, Ling Zou wrote: > >> Ahh.. that's true! >> In case M is not A (as you pointed out earlier), does PCLU provide the >> approximated inverse matrix of M^{-1} using LU factorization on M? >> > > Not really, preconditioners are based on inexact algorithms applied to A, > not explicit formation of an M that is easier to factor exactly. Since the > preconditioner P ("=M^{-1}") is non-singular, there _exists_ an M such that > P=M^{-1}, but M is not explicitly computed and it's not used in the solve. > Only P is used, and only in special cases (like incomplete factorization) > is there even a practical algorithm available to compute M if you wanted > to. (For many interesting algorithms, M is dense even though A is sparse.) > -------------- next part -------------- An HTML attachment was scrubbed... URL: From jedbrown at mcs.anl.gov Mon Jan 28 13:50:12 2013 From: jedbrown at mcs.anl.gov (Jed Brown) Date: Mon, 28 Jan 2013 13:50:12 -0600 Subject: [petsc-users] newbie questions on preconditioner LU In-Reply-To: References: Message-ID: On Mon, Jan 28, 2013 at 1:46 PM, Ling Zou wrote: > As you explained, M is not explicitly computed. 
However, when setup the ksp > > KSPSetOperators(KSP ksp,Mat Amat,Mat Pmat,MatStructure flag); > > We need a Pmat (the M we are talking about here). In the solver, we > actually need M^{-1} > The two statements above contradict. You can *either* have M=Pmat or you can have "M^{-1}" is the preconditioning operation used in the Krylov method. > as the preconditioning matrix, so we don't need to compute the M matrix > but need PC to get the approximated M^{-1}, right? Even we don't explicitly > compute M, but we still need provide non-zero entries for this M, is it > correct? > You need to provide a matrix that will be used to compute a preconditioning operation. The preconditioning operation will *not* be the inverse of the matrix you provide, but it may be "close". -------------- next part -------------- An HTML attachment was scrubbed... URL: From lingzou80 at gmail.com Mon Jan 28 13:50:49 2013 From: lingzou80 at gmail.com (Ling Zou) Date: Mon, 28 Jan 2013 12:50:49 -0700 Subject: [petsc-users] newbie questions on preconditioner LU In-Reply-To: References: Message-ID: On Mon, Jan 28, 2013 at 12:43 PM, Jed Brown wrote: > > On Mon, Jan 28, 2013 at 1:36 PM, Ling Zou wrote: > >> I guess I got the impression that M is generally the same as A from >> reading the manual.(PETSc Users Manual, Reversion 3.3, page 71, under 4.1 >> Using KSP) >> >> "Typically the preconditioning matrix (i.e., the matrix from which the >> preconditioner is to be constructed), Pmat, is the same as the matrix that >> defines the linear system, Amat; however, occasionally these matrices >> differ (for instance, when a preconditioning matrix is obtained from a >> lower order method than that employed to form the linear system matrix)." >> > > I think you are getting confused by mixed notation. A PC in PETSc is an > algorithm that takes a matrix (Pmat in the docs) and does some work to be > able to apply an operation (named "M^{-1}" in your first email). This does > *not* imply that M=Pmat, or that M is ever available or used, there is just > a a linear operation named "M^{-1}". > Hmmm...it's getting more complicated now. I guess I need read the manual more carefully and study the example codes more. By the way, could you redirect me to an example code or a tutorial to understand this better? Ling -------------- next part -------------- An HTML attachment was scrubbed... URL: From lingzou80 at gmail.com Mon Jan 28 14:03:32 2013 From: lingzou80 at gmail.com (Ling Zou) Date: Mon, 28 Jan 2013 13:03:32 -0700 Subject: [petsc-users] newbie questions on preconditioner LU In-Reply-To: References: Message-ID: Ok... So, is it correct to say that, we provide a Pmat matrix, the PC will use this Pmat to do the preconditioning operation (with an operation name M^{-1}, but it is only operations not really a matrix). This Pmat can be obtained, for example, from a lower order method of the problem (as stated in the manual). The PC in PETSc provides different options how to get this M^{-1} operations from the Pmat provide by the user. Ling On Mon, Jan 28, 2013 at 12:50 PM, Jed Brown wrote: > > On Mon, Jan 28, 2013 at 1:46 PM, Ling Zou wrote: > >> As you explained, M is not explicitly computed. However, when setup the >> ksp >> >> KSPSetOperators(KSP ksp,Mat Amat,Mat Pmat,MatStructure flag); >> >> We need a Pmat (the M we are talking about here). In the solver, we >> actually need M^{-1} >> > > The two statements above contradict. 
You can *either* have M=Pmat or you > can have "M^{-1}" is the preconditioning operation used in the Krylov > method. > > >> as the preconditioning matrix, so we don't need to compute the M matrix >> but need PC to get the approximated M^{-1}, right? Even we don't explicitly >> compute M, but we still need provide non-zero entries for this M, is it >> correct? >> > > You need to provide a matrix that will be used to compute a > preconditioning operation. The preconditioning operation will *not* be the > inverse of the matrix you provide, but it may be "close". > -------------- next part -------------- An HTML attachment was scrubbed... URL: From jedbrown at mcs.anl.gov Mon Jan 28 14:04:10 2013 From: jedbrown at mcs.anl.gov (Jed Brown) Date: Mon, 28 Jan 2013 14:04:10 -0600 Subject: [petsc-users] newbie questions on preconditioner LU In-Reply-To: References: Message-ID: On Mon, Jan 28, 2013 at 1:50 PM, Ling Zou wrote: > On Mon, Jan 28, 2013 at 12:43 PM, Jed Brown wrote: > >> >> On Mon, Jan 28, 2013 at 1:36 PM, Ling Zou wrote: >> >>> I guess I got the impression that M is generally the same as A from >>> reading the manual.(PETSc Users Manual, Reversion 3.3, page 71, under 4.1 >>> Using KSP) >>> >>> "Typically the preconditioning matrix (i.e., the matrix from which the >>> preconditioner is to be constructed), Pmat, is the same as the matrix that >>> defines the linear system, Amat; however, occasionally these matrices >>> differ (for instance, when a preconditioning matrix is obtained from a >>> lower order method than that employed to form the linear system matrix)." >>> >> >> I think you are getting confused by mixed notation. A PC in PETSc is an >> algorithm that takes a matrix (Pmat in the docs) and does some work to be >> able to apply an operation (named "M^{-1}" in your first email). This does >> *not* imply that M=Pmat, or that M is ever available or used, there is just >> a a linear operation named "M^{-1}". >> > > Hmmm...it's getting more complicated now. > It's not that bad, the notation "M^{-1}" is just misleading. Let's try writing the preconditioned equation differently T A x = T b T is a linear operator defined as a _function_, not as an assembled matrix. Now suppose there is an operation T = MakePreconditioner(SomeMatrix). Usually T will be somehow "close" to SomeMatrix^{-1}, but this could be very approximate. There are many different algorithms "MakePreconditioner", but Jacobi (i.e., T = diag(SomeMatrix)^{-1}), incomplete factorization, and domain decomposition are common cases. When you call KSPSetOperators(ksp,A,Pmat,...), you're telling PETSc to use T = MakePreconditioner(Pmat). > I guess I need read the manual more carefully and study the example codes > more. By the way, could you redirect me to an example code or a tutorial to > understand this better? > The examples using Pmat different from Amat are more advanced. I recommend just passing Pmat=Amat for now and revisit this topic later. -------------- next part -------------- An HTML attachment was scrubbed... URL: From jedbrown at mcs.anl.gov Mon Jan 28 14:05:05 2013 From: jedbrown at mcs.anl.gov (Jed Brown) Date: Mon, 28 Jan 2013 14:05:05 -0600 Subject: [petsc-users] newbie questions on preconditioner LU In-Reply-To: References: Message-ID: On Mon, Jan 28, 2013 at 2:03 PM, Ling Zou wrote: > So, is it correct to say that, we provide a Pmat matrix, the PC will use > this Pmat to do the preconditioning operation (with an operation name > M^{-1}, but it is only operations not really a matrix). 
> > This Pmat can be obtained, for example, from a lower order method of the > problem (as stated in the manual). > > The PC in PETSc provides different options how to get this M^{-1} > operations from the Pmat provide by the user. > yes -------------- next part -------------- An HTML attachment was scrubbed... URL: From lingzou80 at gmail.com Mon Jan 28 14:16:08 2013 From: lingzou80 at gmail.com (Ling Zou) Date: Mon, 28 Jan 2013 13:16:08 -0700 Subject: [petsc-users] newbie questions on preconditioner LU In-Reply-To: References: Message-ID: Jed, thank you so much. Ling On Mon, Jan 28, 2013 at 1:05 PM, Jed Brown wrote: > > On Mon, Jan 28, 2013 at 2:03 PM, Ling Zou wrote: > >> So, is it correct to say that, we provide a Pmat matrix, the PC will use >> this Pmat to do the preconditioning operation (with an operation name >> M^{-1}, but it is only operations not really a matrix). >> >> This Pmat can be obtained, for example, from a lower order method of the >> problem (as stated in the manual). >> >> The PC in PETSc provides different options how to get this M^{-1} >> operations from the Pmat provide by the user. >> > > yes > -------------- next part -------------- An HTML attachment was scrubbed... URL: From mike.hui.zhang at hotmail.com Mon Jan 28 14:27:18 2013 From: mike.hui.zhang at hotmail.com (Hui Zhang) Date: Mon, 28 Jan 2013 21:27:18 +0100 Subject: [petsc-users] can't find PETSC_i Message-ID: http://www.mcs.anl.gov/petsc/petsc-current/docs/manualpages/singleindex.html I can not find PETSC_i from the above link, is it renamed? From jedbrown at mcs.anl.gov Mon Jan 28 14:30:39 2013 From: jedbrown at mcs.anl.gov (Jed Brown) Date: Mon, 28 Jan 2013 14:30:39 -0600 Subject: [petsc-users] can't find PETSC_i In-Reply-To: References: Message-ID: It's there, just missing a man page. I'll add one to petsc-dev. On Mon, Jan 28, 2013 at 2:27 PM, Hui Zhang wrote: > > http://www.mcs.anl.gov/petsc/petsc-current/docs/manualpages/singleindex.html > > I can not find PETSC_i from the above link, is it renamed? > -------------- next part -------------- An HTML attachment was scrubbed... URL: From bhelenbr at clarkson.edu Mon Jan 28 15:37:45 2013 From: bhelenbr at clarkson.edu (Brian Helenbrook) Date: Mon, 28 Jan 2013 16:37:45 -0500 Subject: [petsc-users] Rock & Hard Place with SuperLU Message-ID: Dear Petsc-Users-List, I recently upgraded to petsc3.3-p5 from petsc3.2-p7 and the results from my code have changed. I am using superLU with the following options: -ksp_type preonly -pc_type lu -pc_factor_mat_solver_package superlu_dist Everything was working with petsc3.2 but now I get totally different answers and the iteration doesn't converge. My build configuration is ./config/configure.py --prefix=${HOME}/Packages --with-fortran=0 --download-superlu_dist=1 --with-x=0 --download-parmetis=1 --download-metis=1 --with-mpi-dir=${HOME}/Packages --with-valgrind-dir=${HOME}/Packages I am running on OS X 10.8.2 with openmpi-1.6.3. I have run valgrind on my code and it is clean (except for start-up issues with mpi which occur before my code is entered.) I'm not very sure how to go about debugging this. 
What I've tried is to re-install pets-3.2-p7, but now I am having trouble getting that to build: ./config/configure.py --prefix=${HOME}/Packages --with-fortran=0 --download-superlu_dist=1 --with-x=0 --download-parmetis=1 --download-metis=1 --with-mpi-dir=${HOME}/Packages --with-valgrind-dir=${HOME}/Packages =============================================================================== Configuring PETSc to compile on your system =============================================================================== =============================================================================== Compiling & installing Metis; this may take several minutes =============================================================================== ******************************************************************************* UNABLE to CONFIGURE with GIVEN OPTIONS (see configure.log for details): ------------------------------------------------------------------------------- Error running make on Metis: Could not execute "cd /Users/bhelenbr/Packages/petsc-3.2-p7/externalpackages/metis-4.0.3 && make clean && make library && make minstall && make clean": Any ideas what direction to go with this? Thanks, Brian Brian Helenbrook Associate Professor 362 CAMP Mech. and Aero. Eng. Dept. Clarkson University Potsdam, NY 13699-5725 work: 315-268-2204 fax: 315-268-6695 From balay at mcs.anl.gov Mon Jan 28 15:40:33 2013 From: balay at mcs.anl.gov (Satish Balay) Date: Mon, 28 Jan 2013 15:40:33 -0600 (CST) Subject: [petsc-users] Rock & Hard Place with SuperLU In-Reply-To: References: Message-ID: On Mon, 28 Jan 2013, Brian Helenbrook wrote: > Dear Petsc-Users-List, > > I recently upgraded to petsc3.3-p5 from petsc3.2-p7 and the results from my code have changed. I am using superLU with the following options: > > -ksp_type preonly -pc_type lu -pc_factor_mat_solver_package superlu_dist > > Everything was working with petsc3.2 but now I get totally different answers and the iteration doesn't converge. My build configuration is > > ./config/configure.py --prefix=${HOME}/Packages --with-fortran=0 --download-superlu_dist=1 --with-x=0 --download-parmetis=1 --download-metis=1 --with-mpi-dir=${HOME}/Packages --with-valgrind-dir=${HOME}/Packages > > I am running on OS X 10.8.2 with openmpi-1.6.3. > > I have run valgrind on my code and it is clean (except for start-up issues with mpi which occur before my code is entered.) > > I'm not very sure how to go about debugging this. What I've tried is to re-install pets-3.2-p7, but now I am having trouble getting that to build: remove option --download-metis=1. 
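For petsc-3.2, the configure line from the original post with only that flag dropped would then read:

./config/configure.py --prefix=${HOME}/Packages --with-fortran=0 --download-superlu_dist=1 --with-x=0 --download-parmetis=1 --with-mpi-dir=${HOME}/Packages --with-valgrind-dir=${HOME}/Packages
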
Its not needed for petsc-3.2 satish > > ./config/configure.py --prefix=${HOME}/Packages --with-fortran=0 --download-superlu_dist=1 --with-x=0 --download-parmetis=1 --download-metis=1 --with-mpi-dir=${HOME}/Packages --with-valgrind-dir=${HOME}/Packages > =============================================================================== > Configuring PETSc to compile on your system > =============================================================================== > =============================================================================== > Compiling & installing Metis; this may take several minutes =============================================================================== ******************************************************************************* > UNABLE to CONFIGURE with GIVEN OPTIONS (see configure.log for details): > ------------------------------------------------------------------------------- > Error running make on Metis: Could not execute "cd /Users/bhelenbr/Packages/petsc-3.2-p7/externalpackages/metis-4.0.3 && make clean && make library && make minstall && make clean": > > > Any ideas what direction to go with this? > > Thanks, > > Brian > > > > Brian Helenbrook > Associate Professor > 362 CAMP > Mech. and Aero. Eng. Dept. > Clarkson University > Potsdam, NY 13699-5725 > > work: 315-268-2204 > fax: 315-268-6695 > > > > From jedbrown at mcs.anl.gov Mon Jan 28 15:42:23 2013 From: jedbrown at mcs.anl.gov (Jed Brown) Date: Mon, 28 Jan 2013 15:42:23 -0600 Subject: [petsc-users] Rock & Hard Place with SuperLU In-Reply-To: References: Message-ID: Send -ksp_monitor_true_residual -ksp_view output for both cases so we can try to identify the source of the different convergence behavior. On Mon, Jan 28, 2013 at 3:37 PM, Brian Helenbrook wrote: > Dear Petsc-Users-List, > > I recently upgraded to petsc3.3-p5 from petsc3.2-p7 and the results from > my code have changed. I am using superLU with the following options: > > -ksp_type preonly -pc_type lu -pc_factor_mat_solver_package superlu_dist > > Everything was working with petsc3.2 but now I get totally different > answers and the iteration doesn't converge. My build configuration is > > ./config/configure.py --prefix=${HOME}/Packages --with-fortran=0 > --download-superlu_dist=1 --with-x=0 --download-parmetis=1 > --download-metis=1 --with-mpi-dir=${HOME}/Packages > --with-valgrind-dir=${HOME}/Packages > > I am running on OS X 10.8.2 with openmpi-1.6.3. > > I have run valgrind on my code and it is clean (except for start-up issues > with mpi which occur before my code is entered.) > > I'm not very sure how to go about debugging this. 
What I've tried is to > re-install pets-3.2-p7, but now I am having trouble getting that to build: > > ./config/configure.py --prefix=${HOME}/Packages --with-fortran=0 > --download-superlu_dist=1 --with-x=0 --download-parmetis=1 > --download-metis=1 --with-mpi-dir=${HOME}/Packages > --with-valgrind-dir=${HOME}/Packages > > =============================================================================== > Configuring PETSc to compile on your system > > =============================================================================== > > =============================================================================== > Compiling & installing Metis; this may take several minutes > =============================================================================== > > > ******************************************************************************* > UNABLE to CONFIGURE with GIVEN OPTIONS (see configure.log for > details): > > ------------------------------------------------------------------------------- > Error running make on Metis: Could not execute "cd > /Users/bhelenbr/Packages/petsc-3.2-p7/externalpackages/metis-4.0.3 && make > clean && make library && make minstall && make clean": > > > Any ideas what direction to go with this? > > Thanks, > > Brian > > > > Brian Helenbrook > Associate Professor > 362 CAMP > Mech. and Aero. Eng. Dept. > Clarkson University > Potsdam, NY 13699-5725 > > work: 315-268-2204 > fax: 315-268-6695 > > > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From kenway at utias.utoronto.ca Mon Jan 28 15:58:29 2013 From: kenway at utias.utoronto.ca (Gaetan Kenway) Date: Mon, 28 Jan 2013 16:58:29 -0500 Subject: [petsc-users] Rock & Hard Place with SuperLU In-Reply-To: References: Message-ID: Hi everyone I have the exactly same issue actually. When I updated to petsc-3.3, SuperLU_dist was giving me random answers to KSPSolve(). Maybe half of the time you would get the same result as 3.2, other times it was a little off and other times widely differnet. I am using SuperLU_dist with a PREONLY ksp object. I haven't tracked down what is causing it and reverted back to petsc-3.2 that still works. Also, to fix the issue with the configure below, just drop out the download-metis. You need it for 3.3 but not 3.2 Gaetan On Mon, Jan 28, 2013 at 4:42 PM, Jed Brown wrote: > Send -ksp_monitor_true_residual -ksp_view output for both cases so we can > try to identify the source of the different convergence behavior. > > > On Mon, Jan 28, 2013 at 3:37 PM, Brian Helenbrook wrote: > >> Dear Petsc-Users-List, >> >> I recently upgraded to petsc3.3-p5 from petsc3.2-p7 and the results from >> my code have changed. I am using superLU with the following options: >> >> -ksp_type preonly -pc_type lu -pc_factor_mat_solver_package superlu_dist >> >> Everything was working with petsc3.2 but now I get totally different >> answers and the iteration doesn't converge. My build configuration is >> >> ./config/configure.py --prefix=${HOME}/Packages --with-fortran=0 >> --download-superlu_dist=1 --with-x=0 --download-parmetis=1 >> --download-metis=1 --with-mpi-dir=${HOME}/Packages >> --with-valgrind-dir=${HOME}/Packages >> >> I am running on OS X 10.8.2 with openmpi-1.6.3. >> >> I have run valgrind on my code and it is clean (except for start-up >> issues with mpi which occur before my code is entered.) >> >> I'm not very sure how to go about debugging this. 
What I've tried is to >> re-install pets-3.2-p7, but now I am having trouble getting that to build: >> >> ./config/configure.py --prefix=${HOME}/Packages --with-fortran=0 >> --download-superlu_dist=1 --with-x=0 --download-parmetis=1 >> --download-metis=1 --with-mpi-dir=${HOME}/Packages >> --with-valgrind-dir=${HOME}/Packages >> >> =============================================================================== >> Configuring PETSc to compile on your system >> >> =============================================================================== >> >> =============================================================================== >> Compiling & installing Metis; this may take several minutes >> =============================================================================== >> >> >> ******************************************************************************* >> UNABLE to CONFIGURE with GIVEN OPTIONS (see configure.log for >> details): >> >> ------------------------------------------------------------------------------- >> Error running make on Metis: Could not execute "cd >> /Users/bhelenbr/Packages/petsc-3.2-p7/externalpackages/metis-4.0.3 && make >> clean && make library && make minstall && make clean": >> >> >> Any ideas what direction to go with this? >> >> Thanks, >> >> Brian >> >> >> >> Brian Helenbrook >> Associate Professor >> 362 CAMP >> Mech. and Aero. Eng. Dept. >> Clarkson University >> Potsdam, NY 13699-5725 >> >> work: 315-268-2204 >> fax: 315-268-6695 >> >> >> >> > -------------- next part -------------- An HTML attachment was scrubbed... URL: From jedbrown at mcs.anl.gov Mon Jan 28 17:09:34 2013 From: jedbrown at mcs.anl.gov (Jed Brown) Date: Mon, 28 Jan 2013 17:09:34 -0600 Subject: [petsc-users] Rock & Hard Place with SuperLU In-Reply-To: References: Message-ID: Vague "random answers" isn't very helpful. If there is a real problem, we'd like a test case so we can track it down. On Mon, Jan 28, 2013 at 3:58 PM, Gaetan Kenway wrote: > Hi everyone > > I have the exactly same issue actually. When I updated to petsc-3.3, > SuperLU_dist was giving me random answers to KSPSolve(). Maybe half of the > time you would get the same result as 3.2, other times it was a little off > and other times widely differnet. I am using SuperLU_dist with a PREONLY > ksp object. > > I haven't tracked down what is causing it and reverted back to petsc-3.2 > that still works. > > Also, to fix the issue with the configure below, just drop out the > download-metis. You need it for 3.3 but not 3.2 > > Gaetan > > > On Mon, Jan 28, 2013 at 4:42 PM, Jed Brown wrote: > >> Send -ksp_monitor_true_residual -ksp_view output for both cases so we can >> try to identify the source of the different convergence behavior. >> >> >> On Mon, Jan 28, 2013 at 3:37 PM, Brian Helenbrook wrote: >> >>> Dear Petsc-Users-List, >>> >>> I recently upgraded to petsc3.3-p5 from petsc3.2-p7 and the results from >>> my code have changed. I am using superLU with the following options: >>> >>> -ksp_type preonly -pc_type lu -pc_factor_mat_solver_package superlu_dist >>> >>> Everything was working with petsc3.2 but now I get totally different >>> answers and the iteration doesn't converge. My build configuration is >>> >>> ./config/configure.py --prefix=${HOME}/Packages --with-fortran=0 >>> --download-superlu_dist=1 --with-x=0 --download-parmetis=1 >>> --download-metis=1 --with-mpi-dir=${HOME}/Packages >>> --with-valgrind-dir=${HOME}/Packages >>> >>> I am running on OS X 10.8.2 with openmpi-1.6.3. 
>>> >>> I have run valgrind on my code and it is clean (except for start-up >>> issues with mpi which occur before my code is entered.) >>> >>> I'm not very sure how to go about debugging this. What I've tried is >>> to re-install pets-3.2-p7, but now I am having trouble getting that to >>> build: >>> >>> ./config/configure.py --prefix=${HOME}/Packages --with-fortran=0 >>> --download-superlu_dist=1 --with-x=0 --download-parmetis=1 >>> --download-metis=1 --with-mpi-dir=${HOME}/Packages >>> --with-valgrind-dir=${HOME}/Packages >>> >>> =============================================================================== >>> Configuring PETSc to compile on your system >>> >>> =============================================================================== >>> >>> =============================================================================== >>> Compiling & installing Metis; this may take several minutes >>> >>> =============================================================================== >>> >>> >>> ******************************************************************************* >>> UNABLE to CONFIGURE with GIVEN OPTIONS (see configure.log >>> for details): >>> >>> ------------------------------------------------------------------------------- >>> Error running make on Metis: Could not execute "cd >>> /Users/bhelenbr/Packages/petsc-3.2-p7/externalpackages/metis-4.0.3 && make >>> clean && make library && make minstall && make clean": >>> >>> >>> Any ideas what direction to go with this? >>> >>> Thanks, >>> >>> Brian >>> >>> >>> >>> Brian Helenbrook >>> Associate Professor >>> 362 CAMP >>> Mech. and Aero. Eng. Dept. >>> Clarkson University >>> Potsdam, NY 13699-5725 >>> >>> work: 315-268-2204 >>> fax: 315-268-6695 >>> >>> >>> >>> >> > -------------- next part -------------- An HTML attachment was scrubbed... URL: From s_g at berkeley.edu Mon Jan 28 17:37:42 2013 From: s_g at berkeley.edu (Sanjay Govindjee) Date: Mon, 28 Jan 2013 15:37:42 -0800 Subject: [petsc-users] Rock & Hard Place with SuperLU In-Reply-To: References: Message-ID: <51070BC6.9020804@berkeley.edu> I have seen similar behavior on my mac (works fine on Linux) -- I reported this to the mailing list a few weeks back. I eventually tracked it down to a BLAS issue but gave up on finding the exact cause as I needed to move on -- moved over to MUMPS. But the problem is not in your imagination. If I have the time, I will try to get back to it (especially since I finally learned that you have to use dsymutil to get the line numbers in the debugger/valgrind). -sanjay On 1/28/13 3:09 PM, Jed Brown wrote: > Vague "random answers" isn't very helpful. If there is a real problem, > we'd like a test case so we can track it down. > > > On Mon, Jan 28, 2013 at 3:58 PM, Gaetan Kenway > > wrote: > > Hi everyone > > I have the exactly same issue actually. When I updated to > petsc-3.3, SuperLU_dist was giving me random answers to > KSPSolve(). Maybe half of the time you would get the same result > as 3.2, other times it was a little off and other times widely > differnet. I am using SuperLU_dist with a PREONLY ksp object. > > I haven't tracked down what is causing it and reverted back to > petsc-3.2 that still works. > > Also, to fix the issue with the configure below, just drop out the > download-metis. You need it for 3.3 but not 3.2 > > Gaetan > > > On Mon, Jan 28, 2013 at 4:42 PM, Jed Brown > wrote: > > Send -ksp_monitor_true_residual -ksp_view output for both > cases so we can try to identify the source of the different > convergence behavior. 
> > > On Mon, Jan 28, 2013 at 3:37 PM, Brian Helenbrook > > wrote: > > Dear Petsc-Users-List, > > I recently upgraded to petsc3.3-p5 from petsc3.2-p7 and > the results from my code have changed. I am using superLU > with the following options: > > -ksp_type preonly -pc_type lu > -pc_factor_mat_solver_package superlu_dist > > Everything was working with petsc3.2 but now I get totally > different answers and the iteration doesn't converge. My > build configuration is > > ./config/configure.py --prefix=${HOME}/Packages > --with-fortran=0 --download-superlu_dist=1 --with-x=0 > --download-parmetis=1 --download-metis=1 > --with-mpi-dir=${HOME}/Packages > --with-valgrind-dir=${HOME}/Packages > > I am running on OS X 10.8.2 with openmpi-1.6.3. > > I have run valgrind on my code and it is clean (except for > start-up issues with mpi which occur before my code is > entered.) > > I'm not very sure how to go about debugging this. What > I've tried is to re-install pets-3.2-p7, but now I am > having trouble getting that to build: > > ./config/configure.py --prefix=${HOME}/Packages > --with-fortran=0 --download-superlu_dist=1 --with-x=0 > --download-parmetis=1 --download-metis=1 > --with-mpi-dir=${HOME}/Packages > --with-valgrind-dir=${HOME}/Packages > =============================================================================== > Configuring PETSc to compile on your system > =============================================================================== > =============================================================================== > Compiling & installing Metis; this may take several > minutes > =============================================================================== > ******************************************************************************* > UNABLE to CONFIGURE with GIVEN OPTIONS (see > configure.log for details): > ------------------------------------------------------------------------------- > Error running make on Metis: Could not execute "cd > /Users/bhelenbr/Packages/petsc-3.2-p7/externalpackages/metis-4.0.3 > && make clean && make library && make minstall && make clean": > > > Any ideas what direction to go with this? > > Thanks, > > Brian > > > > Brian Helenbrook > Associate Professor > 362 CAMP > Mech. and Aero. Eng. Dept. > Clarkson University > Potsdam, NY 13699-5725 > > work: 315-268-2204 > fax: 315-268-6695 > > > > > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From bhelenbr at clarkson.edu Mon Jan 28 18:46:43 2013 From: bhelenbr at clarkson.edu (Brian Helenbrook) Date: Mon, 28 Jan 2013 19:46:43 -0500 Subject: [petsc-users] Rock & Hard Place with SuperLU Message-ID: Hi Again, Thanks for everybody's prompt help. I got petsc3.2-p7 running again and that works so at least I am able to get results again. I ran my test case with petsc3.2-p7 and petsc3.3-p5 and turned on ksp_view output. 
The file "output" is identical except for one line: tolerance for zero pivot 2.22045e-14 for petsc3.3-p5 and tolerance for zero pivot 1e-12 for petsc3.2-p7 This is the file "output" from ksp_view KSP Object: 2 MPI processes type: preonly maximum iterations=10000, initial guess is zero tolerances: relative=1e-05, absolute=1e-50, divergence=10000 left preconditioning using NONE norm type for convergence test PC Object: 2 MPI processes type: lu LU: out-of-place factorization tolerance for zero pivot 2.22045e-14 matrix ordering: natural factor fill ratio given 0, needed 0 Factored matrix follows: Matrix Object: 2 MPI processes type: mpiaij rows=6957, cols=6957 package used to perform factorization: superlu_dist total: nonzeros=0, allocated nonzeros=0 total number of mallocs used during MatSetValues calls =0 SuperLU_DIST run parameters: Process grid nprow 2 x npcol 1 Equilibrate matrix TRUE Matrix input mode 1 Replace tiny pivots TRUE Use iterative refinement FALSE Processors in row 2 col partition 1 Row permutation LargeDiag Column permutation METIS_AT_PLUS_A Parallel symbolic factorization FALSE Repeated factorization SamePattern_SameRowPerm linear system matrix = precond matrix: Matrix Object: 2 MPI processes type: mpiaij rows=6957, cols=6957 total: nonzeros=611043, allocated nonzeros=0 total number of mallocs used during MatSetValues calls =0 using I-node (on process 0) routines: found 1407 nodes, limit used is 5 The residual vector going into the KSP inversion is the same, but the inversion gives a different answer petsc3.3-p5: jacobian made 4.628e-01 seconds matrix inverted 9.669e-01 seconds # iterations 1 residual0 1.824e-05 du 4.290e-05 solve time: 1.821e-02 seconds petsc3.2-p7: jacobian made 4.279e-01 seconds matrix inverted 6.854e-01 seconds # iterations 1 residual0 1.824e-05 du 1.885e-05 solve time: 1.284e-02 seconds Where the output is calculated as: double resmax; VecNorm(petsc_f, NORM_2, &resmax ); PetscGetTime(&time1); err = KSPSolve(ksp,petsc_f,petsc_du); CHKERRABORT(MPI_COMM_WORLD,err); double resmax2; VecNorm(petsc_du, NORM_2, &resmax2 ); KSPGetIterationNumber(ksp,&its); PetscGetTime(&time2); *gbl->log << "# iterations " << its << " residual0 " << resmax << " du " << resmax2 << " solve time: " << time2-time1 << " seconds" << endl; I can output the jacobian and make sure that is the same, but I'm guessing that it is. Any other suggestions of things to try? I'd be surprised if the zero-pivot thing had much to do with it although I could be wrong. I've arranged the matrices to try and make them diagonally dominant going in. Does anyone know how to change that setting of the top of their head (i.e. without me reading the manual). Thanks again, Brian -------------- next part -------------- An HTML attachment was scrubbed... URL: From knepley at gmail.com Mon Jan 28 19:20:43 2013 From: knepley at gmail.com (Matthew Knepley) Date: Mon, 28 Jan 2013 20:20:43 -0500 Subject: [petsc-users] Rock & Hard Place with SuperLU In-Reply-To: References: Message-ID: On Mon, Jan 28, 2013 at 7:46 PM, Brian Helenbrook wrote: > Hi Again, > > Thanks for everybody's prompt help. I got petsc3.2-p7 running again and > that works so at least I am able to get results again. > > I ran my test case with petsc3.2-p7 and petsc3.3-p5 and turned on ksp_view > output. 
The file "output" is identical except for one line: > > tolerance for zero pivot 2.22045e-14 for petsc3.3-p5 and > tolerance for zero pivot 1e-12 for petsc3.2-p7 > > This is the file "output" from ksp_view > > KSP Object: 2 MPI processes > type: preonly > maximum iterations=10000, initial guess is zero > tolerances: relative=1e-05, absolute=1e-50, divergence=10000 > left preconditioning > using NONE norm type for convergence test > PC Object: 2 MPI processes > type: lu > LU: out-of-place factorization > tolerance for zero pivot 2.22045e-14 > matrix ordering: natural > factor fill ratio given 0, needed 0 > Factored matrix follows: > Matrix Object: 2 MPI processes > type: mpiaij > rows=6957, cols=6957 > package used to perform factorization: superlu_dist > total: nonzeros=0, allocated nonzeros=0 > total number of mallocs used during MatSetValues calls =0 > SuperLU_DIST run parameters: > Process grid nprow 2 x npcol 1 > Equilibrate matrix TRUE > Matrix input mode 1 > Replace tiny pivots TRUE > Use iterative refinement FALSE > Processors in row 2 col partition 1 > Row permutation LargeDiag > Column permutation METIS_AT_PLUS_A > Parallel symbolic factorization FALSE > Repeated factorization SamePattern_SameRowPerm > linear system matrix = precond matrix: > Matrix Object: 2 MPI processes > type: mpiaij > rows=6957, cols=6957 > total: nonzeros=611043, allocated nonzeros=0 > total number of mallocs used during MatSetValues calls =0 > using I-node (on process 0) routines: found 1407 nodes, limit used > is 5 > > The residual vector going into the KSP inversion is the same, but the > inversion gives a different answer > > petsc3.3-p5: > > jacobian made 4.628e-01 seconds > matrix inverted 9.669e-01 seconds > # iterations 1 residual0 1.824e-05 du 4.290e-05 solve time: 1.821e-02 > seconds > > petsc3.2-p7: > > jacobian made 4.279e-01 seconds > matrix inverted 6.854e-01 seconds > # iterations 1 residual0 1.824e-05 du 1.885e-05 solve time: 1.284e-02 > seconds > > Where the output is calculated as: > > double resmax; > VecNorm(petsc_f, NORM_2, &resmax ); > > > PetscGetTime(&time1); > err = KSPSolve(ksp,petsc_f,petsc_du); > CHKERRABORT(MPI_COMM_WORLD,err); > > > double resmax2; > VecNorm(petsc_du, NORM_2, &resmax2 ); > > > KSPGetIterationNumber(ksp,&its); > PetscGetTime(&time2); > *gbl->log << "# iterations " << its << " residual0 " << resmax << " du "<< resmax2 << " > solve time: " << time2-time1 << " seconds" << endl; > > > I can output the jacobian and make sure that is the same, but I'm guessing > that it is. Any other suggestions of things to try? I'd be surprised > if the zero-pivot thing had much to do with it although I could be wrong. > I've arranged the matrices to try and make them diagonally dominant going > in. Does anyone know how to change that setting of the top of their head > (i.e. without me reading the manual). > Excellent! 1) Output the matrix and rhs in PETSc binary format, either with a call to Vec/MatView(), or with -ksp_view_binary 2) Run your setup using KSP ex10 3) Send them (or post them) to petsc-maint at mcs.anl.gov and we can also do that and find the bug Thanks, Matt > Thanks again, > > Brian > > > > > > > > > -- What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. -- Norbert Wiener -------------- next part -------------- An HTML attachment was scrubbed... 
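To make step 1) concrete, a minimal sketch (the file name, and the assumption that A and b are the already-assembled Mat and Vec from the application, are illustrative only):

#include <petscksp.h>

/* Write the system matrix and then the right-hand side into one PETSc binary
   file, in that order, so the KSP tutorial ex10 can replay the solve. */
static PetscErrorCode DumpSystem(Mat A, Vec b, const char fname[])
{
  PetscViewer    viewer;
  PetscErrorCode ierr;

  PetscFunctionBegin;
  ierr = PetscViewerBinaryOpen(PETSC_COMM_WORLD, fname, FILE_MODE_WRITE, &viewer);CHKERRQ(ierr);
  ierr = MatView(A, viewer);CHKERRQ(ierr);
  ierr = VecView(b, viewer);CHKERRQ(ierr);
  ierr = PetscViewerDestroy(&viewer);CHKERRQ(ierr);
  PetscFunctionReturn(0);
}

The dump can then be replayed with ex10 (src/ksp/ksp/examples/tutorials/ex10.c in this release) using the same solver options, e.g.

mpirun -np 2 ./ex10 -f0 system.bin -ksp_type preonly -pc_type lu -pc_factor_mat_solver_package superlu_dist -ksp_monitor_true_residual
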
URL: From hzhang at mcs.anl.gov Mon Jan 28 23:09:30 2013 From: hzhang at mcs.anl.gov (Hong Zhang) Date: Mon, 28 Jan 2013 23:09:30 -0600 Subject: [petsc-users] Rock & Hard Place with SuperLU In-Reply-To: References: Message-ID: Brian: Add option '-mat_superlu_dist_equil false'. Do you still get same behavior? The matrix 'rows=6957, cols=6957' is very small. Run it sequentially using superlu with option '-mat_superlu_conditionnumber'. Let us know the estimated 'Recip. condition number'. It seems your matrix is very ill-conditioned. Hong > Hi Again, > > Thanks for everybody's prompt help. I got petsc3.2-p7 running again and > that works so at least I am able to get results again. > > I ran my test case with petsc3.2-p7 and petsc3.3-p5 and turned on ksp_view > output. The file "output" is identical except for one line: > > tolerance for zero pivot 2.22045e-14 for petsc3.3-p5 and > tolerance for zero pivot 1e-12 for petsc3.2-p7 > > This is the file "output" from ksp_view > > KSP Object: 2 MPI processes > type: preonly > maximum iterations=10000, initial guess is zero > tolerances: relative=1e-05, absolute=1e-50, divergence=10000 > left preconditioning > using NONE norm type for convergence test > PC Object: 2 MPI processes > type: lu > LU: out-of-place factorization > tolerance for zero pivot 2.22045e-14 > matrix ordering: natural > factor fill ratio given 0, needed 0 > Factored matrix follows: > Matrix Object: 2 MPI processes > type: mpiaij > rows=6957, cols=6957 > package used to perform factorization: superlu_dist > total: nonzeros=0, allocated nonzeros=0 > total number of mallocs used during MatSetValues calls =0 > SuperLU_DIST run parameters: > Process grid nprow 2 x npcol 1 > Equilibrate matrix TRUE > Matrix input mode 1 > Replace tiny pivots TRUE > Use iterative refinement FALSE > Processors in row 2 col partition 1 > Row permutation LargeDiag > Column permutation METIS_AT_PLUS_A > Parallel symbolic factorization FALSE > Repeated factorization SamePattern_SameRowPerm > linear system matrix = precond matrix: > Matrix Object: 2 MPI processes > type: mpiaij > rows=6957, cols=6957 > total: nonzeros=611043, allocated nonzeros=0 > total number of mallocs used during MatSetValues calls =0 > using I-node (on process 0) routines: found 1407 nodes, limit used is > 5 > > The residual vector going into the KSP inversion is the same, but the > inversion gives a different answer > > petsc3.3-p5: > > jacobian made 4.628e-01 seconds > matrix inverted 9.669e-01 seconds > # iterations 1 residual0 1.824e-05 du 4.290e-05 solve time: 1.821e-02 > seconds > > petsc3.2-p7: > > jacobian made 4.279e-01 seconds > matrix inverted 6.854e-01 seconds > # iterations 1 residual0 1.824e-05 du 1.885e-05 solve time: 1.284e-02 > seconds > > Where the output is calculated as: > > double resmax; > VecNorm(petsc_f, NORM_2, &resmax ); > > > PetscGetTime(&time1); > err = KSPSolve(ksp,petsc_f,petsc_du); > CHKERRABORT(MPI_COMM_WORLD,err); > > > double resmax2; > VecNorm(petsc_du, NORM_2, &resmax2 ); > > > KSPGetIterationNumber(ksp,&its); > PetscGetTime(&time2); > *gbl->log << "# iterations " << its << " residual0 " << resmax << " du " << > resmax2 << " solve time: " << time2-time1 << " seconds" << endl; > > > I can output the jacobian and make sure that is the same, but I'm guessing > that it is. Any other suggestions of things to try? I'd be surprised > if the zero-pivot thing had much to do with it although I could be wrong. > I've arranged the matrices to try and make them diagonally dominant going > in. 
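Spelled out as command lines, Hong's two suggestions above would look roughly like this (the executable name is only a placeholder for the user's application):

mpirun -np 2 ./myapp -ksp_type preonly -pc_type lu -pc_factor_mat_solver_package superlu_dist -mat_superlu_dist_equil false

./myapp -ksp_type preonly -pc_type lu -pc_factor_mat_solver_package superlu -mat_superlu_conditionnumber
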
Does anyone know how to change that setting of the top of their head > (i.e. without me reading the manual). > > Thanks again, > > Brian > > > > > > > > From lyh03259.aps at gmail.com Tue Jan 29 13:48:01 2013 From: lyh03259.aps at gmail.com (Yonghui) Date: Tue, 29 Jan 2013 13:48:01 -0600 Subject: [petsc-users] how does the srand48/drand48 works in windows build PETSc-3.3? Message-ID: <002401cdfe59$8ca3c560$a5eb5020$@gmail.com> Dear PETSc users, I am start to use PETSc-3.3 and trying to build a Cygwin free windows version (I just don't want to have Cygwin installed). Thanks for the effort that the developers made for those macros (PETSC_HAVE__FINITE, PETSC_HAVE__ISNAN, etc). Here is a question: how does srand48/drand48 works in windows? Can I replace them with other random number generator (that's the last option since I am not sure whether they will be used in other functions)? I don't see any equivalent definition in any headers in windows so far. There is a windows build tutorial but need Cygwin installed. Does that mean I have to use srand48/drand48 provided by Cygwin (not sure but maybe)? I am using MSVS 2010+intel compiler+mpich. Any comment will be appreciated. Thanks, Yonghui -------------- next part -------------- An HTML attachment was scrubbed... URL: From balay at mcs.anl.gov Tue Jan 29 13:55:00 2013 From: balay at mcs.anl.gov (Satish Balay) Date: Tue, 29 Jan 2013 13:55:00 -0600 (CST) Subject: [petsc-users] how does the srand48/drand48 works in windows build PETSc-3.3? In-Reply-To: <002401cdfe59$8ca3c560$a5eb5020$@gmail.com> References: <002401cdfe59$8ca3c560$a5eb5020$@gmail.com> Message-ID: cygwin is required primarily for the build tools. But compilation is done directly with MS compiler/s [so no cygwin.dll is used] wrt rand stuff - cl supports PETSC_HAVE_RAND. For eg: you can check all the flags that get set for MS compiler/[Comapq DVF] at: ftp://ftp.mcs.anl.gov/pub/petsc/nightlylogs/build_arch-mswin_ps3.log Satish On Tue, 29 Jan 2013, Yonghui wrote: > Dear PETSc users, > > > > I am start to use PETSc-3.3 and trying to build a Cygwin free windows > version (I just don't want to have Cygwin installed). > > Thanks for the effort that the developers made for those macros > (PETSC_HAVE__FINITE, PETSC_HAVE__ISNAN, etc). > > > > Here is a question: how does srand48/drand48 works in windows? Can I replace > them with other random number generator (that's the last option since I am > not sure whether they will be used in other functions)? I don't see any > equivalent definition in any headers in windows so far. > > There is a windows build tutorial but need Cygwin installed. Does that mean > I have to use srand48/drand48 provided by Cygwin (not sure but maybe)? > > > > I am using MSVS 2010+intel compiler+mpich. > > > > Any comment will be appreciated. > > > > Thanks, > > Yonghui > > From balay at mcs.anl.gov Tue Jan 29 13:59:13 2013 From: balay at mcs.anl.gov (Satish Balay) Date: Tue, 29 Jan 2013 13:59:13 -0600 (CST) Subject: [petsc-users] how does the srand48/drand48 works in windows build PETSc-3.3? In-Reply-To: References: <002401cdfe59$8ca3c560$a5eb5020$@gmail.com> Message-ID: BTW: You can: - install cygwin - build PETSc - delete cygwin This will satisfy your criteria - and avoid the time sink [wrt building petsc without cygwin] Satish On Tue, 29 Jan 2013, Satish Balay wrote: > cygwin is required primarily for the build tools. But compilation is > done directly with MS compiler/s [so no cygwin.dll is used] > > wrt rand stuff - cl supports PETSC_HAVE_RAND. 
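On the original question of substituting another generator: a crude stand-in built on the C library rand() could look like the sketch below. This is purely illustrative and not part of PETSc's sources; it only mimics the [0,1) range of drand48().

#include <stdlib.h>

/* Hypothetical replacement for drand48()/srand48() on compilers without the
   *rand48 family; combines two rand() draws to get more usable bits. */
static double my_drand48(void)
{
  return ((double)rand() * ((double)RAND_MAX + 1.0) + (double)rand())
         / (((double)RAND_MAX + 1.0) * ((double)RAND_MAX + 1.0));
}

static void my_srand48(long seed)
{
  srand((unsigned int)seed);
}
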
For eg: you can check > all the flags that get set for MS compiler/[Comapq DVF] at: > > ftp://ftp.mcs.anl.gov/pub/petsc/nightlylogs/build_arch-mswin_ps3.log > > Satish > > On Tue, 29 Jan 2013, Yonghui wrote: > > > Dear PETSc users, > > > > > > > > I am start to use PETSc-3.3 and trying to build a Cygwin free windows > > version (I just don't want to have Cygwin installed). > > > > Thanks for the effort that the developers made for those macros > > (PETSC_HAVE__FINITE, PETSC_HAVE__ISNAN, etc). > > > > > > > > Here is a question: how does srand48/drand48 works in windows? Can I replace > > them with other random number generator (that's the last option since I am > > not sure whether they will be used in other functions)? I don't see any > > equivalent definition in any headers in windows so far. > > > > There is a windows build tutorial but need Cygwin installed. Does that mean > > I have to use srand48/drand48 provided by Cygwin (not sure but maybe)? > > > > > > > > I am using MSVS 2010+intel compiler+mpich. > > > > > > > > Any comment will be appreciated. > > > > > > > > Thanks, > > > > Yonghui > > > > > > From gaurish108 at gmail.com Tue Jan 29 19:11:21 2013 From: gaurish108 at gmail.com (Gaurish Telang) Date: Tue, 29 Jan 2013 20:11:21 -0500 Subject: [petsc-users] Why am I getting bad scalability for least squares problems? Message-ID: Hello, I am trying to solve some over-determined / under-determined least-squares systems in parallel using the LSQR routine of PETSc. In spite of being able to solve the least squares problem || Ax-b || _correctly_ with this Krylov method on multiple processors, I don't seem to be acheiving scalability with respect to wall-clock time for the set of matrices that I am interested in. Below, I have listed the matrix sizes, the number of non-zeros in each matrix and the wall-clock times to required for 10 iterations of the LSQR krylov method. The timings have been amortized over 50 iterations of the solver i.e. I solve the least-square problem 50 times ( where each time 10 iterations of LSQR are carried out) and obtain the arithmetic mean of wall-clock times so recorded (i.e (sum-of-clock-times)/50). Matrix size: 28407 x 19899 Non-zeros: 725363 Wall clock times 1 proc: 0.094 s 2 proc: 0.087 s 3 proc: 0.089 s 4 proc: 0.093 s Matrix size: 95194 x 155402 Non-zeros: 3877164 Wall clock times : 1 proc: 0.23 s 2 proc: 0.21 s 3 proc: 0.22 s 4 proc: 0.23 s Matrix size: 125878 x 207045 Non-zeros: 3717995 Wall clock times 1 proc: 0.24 s 2 proc: 0.22 s 3 proc: 0.23 s 4 proc: 0.24 s I have other matrices which show similar bad scaling as I increase the processor count. Please let me know what I can do to improve the performance after I increase the processor count. I feel it is a data-layout problem i.e. I need to chose some other data-structure with PETSc to represent the matrix and vector of my least-squares problem. Currently my Matrix type is "mpiaij" and Vec type is "mpi" which I set at the command-line while running the executable. I use the terminal command : mpirun -np ./test_parallel -fmat -frhs -vec_type mpi -mat_type mpiaij -ksp_type lsqr -ksp_max_it 10 I have pasted the code I am using below for your reference. Thank you, Gaurish. 
============================================================================================================================== #include #include #include #include #undef __FUNCT__ #define __FUNCT__ "main" int main(int argc,char **args) { Vec x, b, residue; /* approx solution, RHS, residual */ Mat A; /* linear system matrix */ KSP ksp; /* linear solver context */ PC pc; /* preconditioner context */ PetscErrorCode ierr; PetscInt m,n ; /* # number of rows and columns of the matrix read in*/ PetscViewer fd ; PetscInt size; //tscInt its; //PetscScalar norm, tol=1.e-5; PetscBool flg; char fmat[PETSC_MAX_PATH_LEN]; /* input file names */ char frhs[PETSC_MAX_PATH_LEN]; /* input file names */ PetscInitialize(&argc,&args,(char *)0,help); ierr = MPI_Comm_size(PETSC_COMM_WORLD,&size);CHKERRQ(ierr); ierr = PetscOptionsGetString(PETSC_NULL,"-fmat",fmat,PETSC_MAX_PATH_LEN,&flg); ierr = PetscOptionsGetString(PETSC_NULL,"-frhs",frhs,PETSC_MAX_PATH_LEN,&flg); /* Read in the matrix */ ierr = PetscViewerBinaryOpen(PETSC_COMM_WORLD, fmat, FILE_MODE_READ, &fd ); CHKERRQ(ierr); ierr = MatCreate(PETSC_COMM_WORLD,&A); // ierr = MatSetType(A,MATSEQAIJ); ierr = MatSetFromOptions(A);CHKERRQ(ierr); ierr = MatLoad(A,fd); ierr = PetscViewerDestroy(&fd); /*Read in the right hand side*/ ierr = PetscViewerBinaryOpen(PETSC_COMM_WORLD, frhs, FILE_MODE_READ, &fd ); CHKERRQ(ierr); ierr = VecCreate(PETSC_COMM_WORLD,&b); ierr = VecSetFromOptions(b);CHKERRQ(ierr); ierr = VecLoad(b,fd); ierr = PetscViewerDestroy(&fd); /* Get the matrix size. Used for setting the vector sizes*/ ierr = MatGetSize(A , &m, &n); //printf("The size of the matrix read in is %d x %d\n", m , n); VecCreate(PETSC_COMM_WORLD, &x); VecSetSizes(x, PETSC_DECIDE, n); VecSetFromOptions(x); VecCreate(PETSC_COMM_WORLD, &residue); VecSetSizes(residue, PETSC_DECIDE, m); VecSetFromOptions(residue); /* Set the solver type at the command-line */ ierr = KSPCreate(PETSC_COMM_WORLD,&ksp);CHKERRQ(ierr); ierr = KSPSetOperators(ksp,A,A,DIFFERENT_NONZERO_PATTERN);CHKERRQ(ierr); ierr = KSPGetPC(ksp,&pc);CHKERRQ(ierr); ierr = PCSetType(pc,PCNONE);CHKERRQ(ierr); ierr = KSPSetTolerances(ksp,1.e-5,PETSC_DEFAULT,PETSC_DEFAULT,PETSC_DEFAULT);CHKERRQ(ierr); /* Set runtime options, e.g., -ksp_type -pc_type -ksp_monitor -ksp_rtol These options will override those specified above as long as KSPSetFromOptions() is called _after_ any other customization routines */ ierr = KSPSetFromOptions(ksp);CHKERRQ(ierr); /* Initial guess for the krylov method set to zero*/ PetscScalar p = 0; ierr = VecSet(x,p);CHKERRQ(ierr); ierr = KSPSetInitialGuessNonzero(ksp,PETSC_TRUE);CHKERRQ(ierr); /* Solve linear system */ PetscLogDouble v1,v2,elapsed_time[50]; int i; for (i = 0; i < 50; ++i) { ierr = PetscGetTime(&v1);CHKERRQ(ierr); ierr = KSPSolve(ksp,b,x);CHKERRQ(ierr); ierr = PetscGetTime(&v2);CHKERRQ(ierr); elapsed_time[i] = v2 - v1; PetscPrintf(PETSC_COMM_WORLD,"[%d] Time for the solve: %g s\n", i , elapsed_time[i]); } PetscLogDouble sum=0,amortized_time ; for ( i = 0; i < 50; ++i) { sum += elapsed_time[i]; } amortized_time = sum/50; PetscPrintf(PETSC_COMM_WORLD,"\n\n***********************\nAmortized Time for the solve: %g s\n***********************\n\n", amortized_time); /* View solver info; we could instead use the option -ksp_view to print this info to the screen at the conclusion of KSPSolve().*/ ierr = KSPView(ksp,PETSC_VIEWER_STDOUT_WORLD);CHKERRQ(ierr); /* Clean up */ ierr = VecDestroy(&x);CHKERRQ(ierr); ierr = VecDestroy(&residue);CHKERRQ(ierr); ierr = VecDestroy(&b);CHKERRQ(ierr); ierr = 
MatDestroy(&A);CHKERRQ(ierr); ierr = KSPDestroy(&ksp);CHKERRQ(ierr); ierr = PetscFinalize(); return 0; } -------------- next part -------------- An HTML attachment was scrubbed... URL: From knepley at gmail.com Tue Jan 29 19:57:30 2013 From: knepley at gmail.com (Matthew Knepley) Date: Tue, 29 Jan 2013 20:57:30 -0500 Subject: [petsc-users] Why am I getting bad scalability for least squares problems? In-Reply-To: References: Message-ID: On Tue, Jan 29, 2013 at 8:11 PM, Gaurish Telang wrote: > Hello, > > I am trying to solve some over-determined / under-determined least-squares > systems in parallel using the LSQR routine of PETSc. > In spite of being able to solve the least squares problem || Ax-b || > _correctly_ with this Krylov method on multiple processors, I don't seem > to be acheiving scalability with respect to wall-clock time > for the set of matrices that I am interested in. > > Below, I have listed the matrix sizes, the number of non-zeros in each > matrix and the wall-clock times to required for 10 iterations of the LSQR > krylov method. > The timings have been amortized over 50 iterations of the solver i.e. I > solve the least-square problem 50 times ( where each time 10 iterations of > LSQR are carried out) > and obtain the arithmetic mean of wall-clock times so recorded (i.e > (sum-of-clock-times)/50). > Never ask performance questions without sending the output of -log_summary. Also: http://www.mcs.anl.gov/petsc/documentation/faq.html#computers http://www.mcs.anl.gov/petsc/documentation/faq.html#slowerparallel Matt > Matrix size: 28407 x 19899 > Non-zeros: 725363 > Wall clock times > 1 proc: 0.094 s > 2 proc: 0.087 s > 3 proc: 0.089 s > 4 proc: 0.093 s > > Matrix size: 95194 x 155402 > Non-zeros: 3877164 > Wall clock times : > 1 proc: 0.23 s > 2 proc: 0.21 s > 3 proc: 0.22 s > 4 proc: 0.23 s > > Matrix size: 125878 x 207045 > Non-zeros: 3717995 > Wall clock times > 1 proc: 0.24 s > 2 proc: 0.22 s > 3 proc: 0.23 s > 4 proc: 0.24 s > > I have other matrices which show similar bad scaling as I increase the > processor count. > > Please let me know what I can do to improve the performance after I > increase the processor count. > I feel it is a data-layout problem i.e. I need to chose some other > data-structure with PETSc to represent the matrix and vector of my > least-squares problem. > > Currently my Matrix type is "mpiaij" and Vec type is "mpi" which I set at > the command-line while running the executable. > > I use the terminal command : > mpirun -np ./test_parallel > -fmat > -frhs > -vec_type mpi > -mat_type mpiaij > -ksp_type lsqr > -ksp_max_it 10 > > > I have pasted the code I am using below for your reference. > > Thank you, > Gaurish. 
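As a concrete illustration of the -log_summary suggestion above, a hypothetical addition to the driver quoted below (reusing its ksp, b, x and ierr variables) is to bracket the solves in a named logging stage so the solve phase is reported separately from the matrix and vector loading:

  PetscLogStage solve_stage;
  ierr = PetscLogStageRegister("LSQR solve", &solve_stage);CHKERRQ(ierr);
  ierr = PetscLogStagePush(solve_stage);CHKERRQ(ierr);
  ierr = KSPSolve(ksp, b, x);CHKERRQ(ierr);
  ierr = PetscLogStagePop();CHKERRQ(ierr);
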
> > > > > > > > ============================================================================================================================== > > #include > #include > #include > #include > #undef __FUNCT__ > #define __FUNCT__ "main" > int main(int argc,char **args) > { > Vec x, b, residue; /* approx solution, RHS, residual */ > Mat A; /* linear system matrix */ > KSP ksp; /* linear solver context */ > PC pc; /* preconditioner context */ > PetscErrorCode ierr; > PetscInt m,n ; /* # number of rows and columns of the > matrix read in*/ > PetscViewer fd ; > PetscInt size; > //tscInt its; > //PetscScalar norm, tol=1.e-5; > PetscBool flg; > char fmat[PETSC_MAX_PATH_LEN]; /* input file names */ > char frhs[PETSC_MAX_PATH_LEN]; /* input file names */ > > PetscInitialize(&argc,&args,(char *)0,help); > ierr = MPI_Comm_size(PETSC_COMM_WORLD,&size);CHKERRQ(ierr); > > ierr = > PetscOptionsGetString(PETSC_NULL,"-fmat",fmat,PETSC_MAX_PATH_LEN,&flg); > ierr = > PetscOptionsGetString(PETSC_NULL,"-frhs",frhs,PETSC_MAX_PATH_LEN,&flg); > > > /* Read in the matrix */ > ierr = PetscViewerBinaryOpen(PETSC_COMM_WORLD, fmat, FILE_MODE_READ, &fd > ); CHKERRQ(ierr); > ierr = MatCreate(PETSC_COMM_WORLD,&A); > // ierr = MatSetType(A,MATSEQAIJ); > ierr = MatSetFromOptions(A);CHKERRQ(ierr); > ierr = MatLoad(A,fd); > ierr = PetscViewerDestroy(&fd); > > > /*Read in the right hand side*/ > ierr = PetscViewerBinaryOpen(PETSC_COMM_WORLD, frhs, FILE_MODE_READ, &fd > ); CHKERRQ(ierr); > ierr = VecCreate(PETSC_COMM_WORLD,&b); > ierr = VecSetFromOptions(b);CHKERRQ(ierr); > ierr = VecLoad(b,fd); > ierr = PetscViewerDestroy(&fd); > > > /* Get the matrix size. Used for setting the vector sizes*/ > ierr = MatGetSize(A , &m, &n); > //printf("The size of the matrix read in is %d x %d\n", m , n); > > > VecCreate(PETSC_COMM_WORLD, &x); > VecSetSizes(x, PETSC_DECIDE, n); > VecSetFromOptions(x); > > VecCreate(PETSC_COMM_WORLD, &residue); > VecSetSizes(residue, PETSC_DECIDE, m); > VecSetFromOptions(residue); > > > /* Set the solver type at the command-line */ > ierr = KSPCreate(PETSC_COMM_WORLD,&ksp);CHKERRQ(ierr); > ierr = KSPSetOperators(ksp,A,A,DIFFERENT_NONZERO_PATTERN);CHKERRQ(ierr); > ierr = KSPGetPC(ksp,&pc);CHKERRQ(ierr); > ierr = PCSetType(pc,PCNONE);CHKERRQ(ierr); > ierr = > KSPSetTolerances(ksp,1.e-5,PETSC_DEFAULT,PETSC_DEFAULT,PETSC_DEFAULT);CHKERRQ(ierr); > /* Set runtime options, e.g., > -ksp_type -pc_type -ksp_monitor -ksp_rtol > These options will override those specified above as long as > KSPSetFromOptions() is called _after_ any other customization > routines */ > ierr = KSPSetFromOptions(ksp);CHKERRQ(ierr); > > /* Initial guess for the krylov method set to zero*/ > PetscScalar p = 0; > ierr = VecSet(x,p);CHKERRQ(ierr); > ierr = KSPSetInitialGuessNonzero(ksp,PETSC_TRUE);CHKERRQ(ierr); > /* Solve linear system */ > > PetscLogDouble v1,v2,elapsed_time[50]; > int i; > for (i = 0; i < 50; ++i) > { > ierr = PetscGetTime(&v1);CHKERRQ(ierr); > ierr = KSPSolve(ksp,b,x);CHKERRQ(ierr); > ierr = PetscGetTime(&v2);CHKERRQ(ierr); > elapsed_time[i] = v2 - v1; > PetscPrintf(PETSC_COMM_WORLD,"[%d] Time for the solve: %g s\n", i > , elapsed_time[i]); > } > > PetscLogDouble sum=0,amortized_time ; > for ( i = 0; i < 50; ++i) > { > sum += elapsed_time[i]; > } > amortized_time = sum/50; > PetscPrintf(PETSC_COMM_WORLD,"\n\n***********************\nAmortized > Time for the solve: %g s\n***********************\n\n", amortized_time); > > > /* View solver info; we could instead use the option -ksp_view to print > this info to the screen at the 
conclusion of KSPSolve().*/ > ierr = KSPView(ksp,PETSC_VIEWER_STDOUT_WORLD);CHKERRQ(ierr); > > /* Clean up */ > ierr = VecDestroy(&x);CHKERRQ(ierr); > ierr = VecDestroy(&residue);CHKERRQ(ierr); > ierr = VecDestroy(&b);CHKERRQ(ierr); > ierr = MatDestroy(&A);CHKERRQ(ierr); > ierr = KSPDestroy(&ksp);CHKERRQ(ierr); > > ierr = PetscFinalize(); > return 0; > } > > > > -- What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. -- Norbert Wiener -------------- next part -------------- An HTML attachment was scrubbed... URL: From lyh03259.aps at gmail.com Tue Jan 29 22:39:39 2013 From: lyh03259.aps at gmail.com (Yonghui) Date: Tue, 29 Jan 2013 22:39:39 -0600 Subject: [petsc-users] how does the srand48/drand48 works in windows build PETSc-3.3? In-Reply-To: References: <002401cdfe59$8ca3c560$a5eb5020$@gmail.com> Message-ID: <005801cdfea3$d22b2660$76817320$@gmail.com> I created a Solution (2 projects: Fortran and c) for PETSc-3.3 and spent 2 afternoon for building the library purely with MSVS 2010 and parallel studio 13. The library works OK with ex19 as a test. Adding PETSC_HAVE_RAND into petscconf.h didn't solve my problem since srand48 and drand48 are not replaced by rand. It seems PETSc just provides a wrapper for srand48 and no algorithms depend on them. So I think it should be OK as long as I don't use the wrapper. Thanks Satish's explanation but I can't reach the log (maybe not open to public?). Could you attach it, Satish? Yonghui -----Original Message----- From: petsc-users-bounces at mcs.anl.gov [mailto:petsc-users-bounces at mcs.anl.gov] On Behalf Of Satish Balay Sent: Tuesday, January 29, 2013 1:55 PM To: PETSc users list Subject: Re: [petsc-users] how does the srand48/drand48 works in windows build PETSc-3.3? cygwin is required primarily for the build tools. But compilation is done directly with MS compiler/s [so no cygwin.dll is used] wrt rand stuff - cl supports PETSC_HAVE_RAND. For eg: you can check all the flags that get set for MS compiler/[Comapq DVF] at: ftp://ftp.mcs.anl.gov/pub/petsc/nightlylogs/build_arch-mswin_ps3.log Satish On Tue, 29 Jan 2013, Yonghui wrote: > Dear PETSc users, > > > > I am start to use PETSc-3.3 and trying to build a Cygwin free windows > version (I just don't want to have Cygwin installed). > > Thanks for the effort that the developers made for those macros > (PETSC_HAVE__FINITE, PETSC_HAVE__ISNAN, etc). > > > > Here is a question: how does srand48/drand48 works in windows? Can I > replace them with other random number generator (that's the last > option since I am not sure whether they will be used in other > functions)? I don't see any equivalent definition in any headers in windows so far. > > There is a windows build tutorial but need Cygwin installed. Does that > mean I have to use srand48/drand48 provided by Cygwin (not sure but maybe)? > > > > I am using MSVS 2010+intel compiler+mpich. > > > > Any comment will be appreciated. > > > > Thanks, > > Yonghui > > From balay at mcs.anl.gov Tue Jan 29 23:08:25 2013 From: balay at mcs.anl.gov (Satish Balay) Date: Tue, 29 Jan 2013 23:08:25 -0600 (CST) Subject: [petsc-users] how does the srand48/drand48 works in windows build PETSc-3.3? 
In-Reply-To: <005801cdfea3$d22b2660$76817320$@gmail.com> References: <002401cdfe59$8ca3c560$a5eb5020$@gmail.com> <005801cdfea3$d22b2660$76817320$@gmail.com> Message-ID: Perhaps you have the flag PETSC_HAVE_DRAND48 enabled [which shouldn't be for MS compilers] The logfiles get deleted/recreatd everynight in that location. You can try: ftp://ftp.mcs.anl.gov/pub/petsc/nightlylogs/archive/2013-Jan-29/make_arch-mswin_ps3.log Satish On Tue, 29 Jan 2013, Yonghui wrote: > I created a Solution (2 projects: Fortran and c) for PETSc-3.3 and spent 2 > afternoon for building the library purely with MSVS 2010 and parallel studio > 13. The library works OK with ex19 as a test. Adding PETSC_HAVE_RAND into > petscconf.h didn't solve my problem since srand48 and drand48 are not > replaced by rand. It seems PETSc just provides a wrapper for srand48 and no > algorithms depend on them. So I think it should be OK as long as I don't use > the wrapper. > Thanks Satish's explanation but I can't reach the log (maybe not open to > public?). Could you attach it, Satish? > > Yonghui > > -----Original Message----- > From: petsc-users-bounces at mcs.anl.gov > [mailto:petsc-users-bounces at mcs.anl.gov] On Behalf Of Satish Balay > Sent: Tuesday, January 29, 2013 1:55 PM > To: PETSc users list > Subject: Re: [petsc-users] how does the srand48/drand48 works in windows > build PETSc-3.3? > > cygwin is required primarily for the build tools. But compilation is done > directly with MS compiler/s [so no cygwin.dll is used] > > wrt rand stuff - cl supports PETSC_HAVE_RAND. For eg: you can check all the > flags that get set for MS compiler/[Comapq DVF] at: > > ftp://ftp.mcs.anl.gov/pub/petsc/nightlylogs/build_arch-mswin_ps3.log > > Satish > > On Tue, 29 Jan 2013, Yonghui wrote: > > > Dear PETSc users, > > > > > > > > I am start to use PETSc-3.3 and trying to build a Cygwin free windows > > version (I just don't want to have Cygwin installed). > > > > Thanks for the effort that the developers made for those macros > > (PETSC_HAVE__FINITE, PETSC_HAVE__ISNAN, etc). > > > > > > > > Here is a question: how does srand48/drand48 works in windows? Can I > > replace them with other random number generator (that's the last > > option since I am not sure whether they will be used in other > > functions)? I don't see any equivalent definition in any headers in > windows so far. > > > > There is a windows build tutorial but need Cygwin installed. Does that > > mean I have to use srand48/drand48 provided by Cygwin (not sure but > maybe)? > > > > > > > > I am using MSVS 2010+intel compiler+mpich. > > > > > > > > Any comment will be appreciated. > > > > > > > > Thanks, > > > > Yonghui > > > > > > > From lyh03259.aps at gmail.com Tue Jan 29 23:47:04 2013 From: lyh03259.aps at gmail.com (Yonghui) Date: Tue, 29 Jan 2013 23:47:04 -0600 Subject: [petsc-users] how does the srand48/drand48 works in windows build PETSc-3.3? In-Reply-To: References: <002401cdfe59$8ca3c560$a5eb5020$@gmail.com> <005801cdfea3$d22b2660$76817320$@gmail.com> Message-ID: <005a01cdfead$3d65ee20$b831ca60$@gmail.com> OK. I saw the log file and figured my problem. I should remove rand48.c from the source. I agree that srand48 shouldn't be used for windows build. Thanks for attach the log file. Everything is OK now. -----Original Message----- From: Satish Balay [mailto:balay at mcs.anl.gov] Sent: Tuesday, January 29, 2013 11:08 PM To: Yonghui Cc: 'PETSc users list' Subject: RE: [petsc-users] how does the srand48/drand48 works in windows build PETSc-3.3? 
Perhaps you have the flag PETSC_HAVE_DRAND48 enabled [which shouldn't be for MS compilers] The logfiles get deleted/recreatd everynight in that location. You can try: ftp://ftp.mcs.anl.gov/pub/petsc/nightlylogs/archive/2013-Jan-29/make_arch-ms win_ps3.log Satish On Tue, 29 Jan 2013, Yonghui wrote: > I created a Solution (2 projects: Fortran and c) for PETSc-3.3 and > spent 2 afternoon for building the library purely with MSVS 2010 and > parallel studio 13. The library works OK with ex19 as a test. Adding > PETSC_HAVE_RAND into petscconf.h didn't solve my problem since srand48 > and drand48 are not replaced by rand. It seems PETSc just provides a > wrapper for srand48 and no algorithms depend on them. So I think it > should be OK as long as I don't use the wrapper. > Thanks Satish's explanation but I can't reach the log (maybe not open > to public?). Could you attach it, Satish? > > Yonghui > > -----Original Message----- > From: petsc-users-bounces at mcs.anl.gov > [mailto:petsc-users-bounces at mcs.anl.gov] On Behalf Of Satish Balay > Sent: Tuesday, January 29, 2013 1:55 PM > To: PETSc users list > Subject: Re: [petsc-users] how does the srand48/drand48 works in > windows build PETSc-3.3? > > cygwin is required primarily for the build tools. But compilation is > done directly with MS compiler/s [so no cygwin.dll is used] > > wrt rand stuff - cl supports PETSC_HAVE_RAND. For eg: you can check > all the flags that get set for MS compiler/[Comapq DVF] at: > > ftp://ftp.mcs.anl.gov/pub/petsc/nightlylogs/build_arch-mswin_ps3.log > > Satish > > On Tue, 29 Jan 2013, Yonghui wrote: > > > Dear PETSc users, > > > > > > > > I am start to use PETSc-3.3 and trying to build a Cygwin free > > windows version (I just don't want to have Cygwin installed). > > > > Thanks for the effort that the developers made for those macros > > (PETSC_HAVE__FINITE, PETSC_HAVE__ISNAN, etc). > > > > > > > > Here is a question: how does srand48/drand48 works in windows? Can I > > replace them with other random number generator (that's the last > > option since I am not sure whether they will be used in other > > functions)? I don't see any equivalent definition in any headers in > windows so far. > > > > There is a windows build tutorial but need Cygwin installed. Does > > that mean I have to use srand48/drand48 provided by Cygwin (not sure > > but > maybe)? > > > > > > > > I am using MSVS 2010+intel compiler+mpich. > > > > > > > > Any comment will be appreciated. > > > > > > > > Thanks, > > > > Yonghui > > > > > > > From Wadud.Miah at awe.co.uk Wed Jan 30 12:13:27 2013 From: Wadud.Miah at awe.co.uk (Wadud.Miah at awe.co.uk) Date: Wed, 30 Jan 2013 18:13:27 +0000 Subject: [petsc-users] PETSc documentation in HTML Message-ID: <201301301813.r0UIDVHc025776@msw1.awe.co.uk> Hello PETSc users, Do you know where I can download the PETSc 3.3-p5 help topics and manual pages in HTML? Thanks in advance, -------------------------- Wadud Miah HPC, Design Physics Division Direct: 0118 98 56220 AWE, Aldermaston, Reading, RG7 4PR ___________________________________________________ ____________________________ The information in this email and in any attachment(s) is commercial in confidence. If you are not the named addressee(s) or if you receive this email in error then any distribution, copying or use of this communication or the information in it is strictly prohibited. Please notify us immediately by email at admin.internet(at)awe.co.uk, and then delete this message from your computer. 
While attachments are virus checked, AWE plc does not accept any liability in respect of any virus which is not detected. AWE Plc Registered in England and Wales Registration No 02763902 AWE, Aldermaston, Reading, RG7 4PR -------------- next part -------------- An HTML attachment was scrubbed... URL: From balay at mcs.anl.gov Wed Jan 30 12:26:46 2013 From: balay at mcs.anl.gov (Satish Balay) Date: Wed, 30 Jan 2013 12:26:46 -0600 (CST) Subject: [petsc-users] PETSc documentation in HTML In-Reply-To: <201301301813.r0UIDVHc025776@msw1.awe.co.uk> References: <201301301813.r0UIDVHc025776@msw1.awe.co.uk> Message-ID: All documentation is included in the petsc tarball. https://www.mcs.anl.gov/petsc/download/index.html Satish On Wed, 30 Jan 2013, Wadud.Miah at awe.co.uk wrote: > Hello PETSc users, > > Do you know where I can download the PETSc 3.3-p5 help topics and manual pages in HTML? > > Thanks in advance, > > -------------------------- > Wadud Miah > HPC, Design Physics Division > Direct: 0118 98 56220 > AWE, Aldermaston, Reading, RG7 4PR > > > > > ___________________________________________________ > ____________________________ > > The information in this email and in any attachment(s) is > commercial in confidence. If you are not the named addressee(s) > or > if you receive this email in error then any distribution, copying or > use of this communication or the information in it is strictly > prohibited. Please notify us immediately by email at > admin.internet(at)awe.co.uk, and then delete this message from > your computer. While attachments are virus checked, AWE plc > does not accept any liability in respect of any virus which is not > detected. > > AWE Plc > Registered in England and Wales > Registration No 02763902 > AWE, Aldermaston, Reading, RG7 4PR From ling.zou at inl.gov Wed Jan 30 17:30:17 2013 From: ling.zou at inl.gov (Zou (Non-US), Ling) Date: Wed, 30 Jan 2013 16:30:17 -0700 Subject: [petsc-users] compare snes_mf_operator and snes_fd Message-ID: Hi, All I am testing the performance of snes_mf_operator against snes_fd. I know snes_fd is for test/debugging and extremely slow, which is ok for my testing purpose. I then compared the code performance using snes_mf_operator against snes_fd. Of course, snes_mf_operator uses way less computing time then snes_fd, however, the snes_mf_operator non-linear solver performance is worse than snes_fd, in terms of non linear iteration in each time steps. Here is the PETSc Options Table entries taken from the log_summary when using snes_mf_operator #PETSc Option Table entries: -ksp_converged_reason -ksp_gmres_restart 300 -ksp_monitor_true_residual -log_summary -m pipe_7eqn_2phase_step7_ps.i -mat_fd_type ds -pc_type lu -snes_mf_operator -snes_monitor #End of PETSc Option Table entries Here is the PETSc Options Table entries taken from the log_summary when using snes_fd #PETSc Option Table entries: -ksp_converged_reason -ksp_gmres_restart 300 -ksp_monitor_true_residual -log_summary -m pipe_7eqn_2phase_step7_ps.i -mat_fd_type ds -pc_type lu -snes_fd -snes_monitor #End of PETSc Option Table entries The full code output along with log_summary are attached. I've noticed that when using snes_fd, the non-linear convergence is always good in each time step, around 3-4 non-linear steps with almost quadratic convergence rate. In each non-linear step, it uses only 1 linear step to converge as I used '-pc_type lu' and only 1 linear step is expected. 
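In outline (a sketch of what the two option sets compute, with F the nonlinear residual, h the differencing parameter chosen automatically, e.g. via -mat_fd_type ds as in the options above, and e_j the j-th unit vector):

  -snes_fd assembles the Jacobian explicitly, one column at a time,
      J_{ij} \approx ( F_i(u + h e_j) - F_i(u) ) / h,
  and that assembled matrix is what -pc_type lu factors.

  -snes_mf_operator never assembles J; the Krylov method only needs the products
      J(u) v \approx ( F(u + h v) - F(u) ) / h,
  while the LU preconditioner is factored from the separately supplied preconditioning matrix (Pmat).
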
Here is a piece of output I pulled out from the code output (very nice non-linear, linear performance but of course very expensive): DT: 1.234568e-05 Solving time step 7, time=4.34568e-05... Initial |residual|_2 = 3.547156e+00 NL step 0, |residual|_2 = 3.547156e+00 0 SNES Function norm 3.547155872103e+00 0 KSP unpreconditioned resid norm 3.547155872103e+00 true resid norm 3.547155872103e+00 ||r(i)||/||b|| 1.000000000000e+00 1 KSP unpreconditioned resid norm 3.128472759493e-15 true resid norm 2.343197746412e-15 ||r(i)||/||b|| 6.605849392864e-16 Linear solve converged due to CONVERGED_RTOL iterations 1 NL step 1, |residual|_2 = 4.900005e-04 1 SNES Function norm 4.900004596844e-04 0 KSP unpreconditioned resid norm 4.900004596844e-04 true resid norm 4.900004596844e-04 ||r(i)||/||b|| 1.000000000000e+00 1 KSP unpreconditioned resid norm 5.026229113909e-18 true resid norm 1.400595243895e-17 ||r(i)||/||b|| 2.858354959089e-14 Linear solve converged due to CONVERGED_RTOL iterations 1 NL step 2, |residual|_2 = 1.171419e-06 2 SNES Function norm 1.171419468770e-06 0 KSP unpreconditioned resid norm 1.171419468770e-06 true resid norm 1.171419468770e-06 ||r(i)||/||b|| 1.000000000000e+00 1 KSP unpreconditioned resid norm 5.679448617332e-21 true resid norm 4.763172202015e-21 ||r(i)||/||b|| 4.066154207782e-15 Linear solve converged due to CONVERGED_RTOL iterations 1 NL step 3, |residual|_2 = 1.860041e-08 3 SNES Function norm 1.860041398803e-08 Converged:1 Back to the snes_mf_operator option, it behaviors differently. It generally takes more non-linear and linear steps. The 'KSP unpreconditioned resid norm' drops nicely however the 'true resid norm' seems to be a bit wired to me, drops then increases. DT: 1.524158e-05 Solving time step 9, time=7.24158e-05... Initial |residual|_2 = 3.601003e+00 NL step 0, |residual|_2 = 3.601003e+00 0 SNES Function norm 3.601003423006e+00 0 KSP unpreconditioned resid norm 3.601003423006e+00 true resid norm 3.601003423006e+00 ||r(i)||/||b|| 1.000000000000e+00 1 KSP unpreconditioned resid norm 5.931429724028e-02 true resid norm 5.931429724028e-02 ||r(i)||/||b|| 1.647160257092e-02 2 KSP unpreconditioned resid norm 1.379343811770e-05 true resid norm 5.203950797327e+00 ||r(i)||/||b|| 1.445139086534e+00 3 KSP unpreconditioned resid norm 4.432805478482e-08 true resid norm 5.203984109211e+00 ||r(i)||/||b|| 1.445148337256e+00 Linear solve converged due to CONVERGED_RTOL iterations 3 NL step 1, |residual|_2 = 5.928815e-02 1 SNES Function norm 5.928815267199e-02 0 KSP unpreconditioned resid norm 5.928815267199e-02 true resid norm 5.928815267199e-02 ||r(i)||/||b|| 1.000000000000e+00 1 KSP unpreconditioned resid norm 3.276993782949e-06 true resid norm 3.276993782949e-06 ||r(i)||/||b|| 5.527232061148e-05 2 KSP unpreconditioned resid norm 2.082083269186e-08 true resid norm 1.551766076370e-05 ||r(i)||/||b|| 2.617329106129e-04 Linear solve converged due to CONVERGED_RTOL iterations 2 NL step 2, |residual|_2 = 3.340603e-05 2 SNES Function norm 3.340603450829e-05 0 KSP unpreconditioned resid norm 3.340603450829e-05 true resid norm 3.340603450829e-05 ||r(i)||/||b|| 1.000000000000e+00 1 KSP unpreconditioned resid norm 6.659426858789e-07 true resid norm 6.659426858789e-07 ||r(i)||/||b|| 1.993480207037e-02 2 KSP unpreconditioned resid norm 6.115119674466e-07 true resid norm 2.887921320245e-06 ||r(i)||/||b|| 8.644909109246e-02 3 KSP unpreconditioned resid norm 1.907116539439e-09 true resid norm 1.000874623281e-06 ||r(i)||/||b|| 2.996089293486e-02 4 KSP unpreconditioned resid norm 3.383211446515e-12 
true resid norm 1.005586686459e-06 ||r(i)||/||b|| 3.010194718591e-02 Linear solve converged due to CONVERGED_RTOL iterations 4 NL step 3, |residual|_2 = 2.126180e-05 3 SNES Function norm 2.126179867301e-05 0 KSP unpreconditioned resid norm 2.126179867301e-05 true resid norm 2.126179867301e-05 ||r(i)||/||b|| 1.000000000000e+00 1 KSP unpreconditioned resid norm 2.724944027954e-06 true resid norm 2.724944027954e-06 ||r(i)||/||b|| 1.281615008147e-01 2 KSP unpreconditioned resid norm 7.933800605616e-10 true resid norm 2.776823963042e-06 ||r(i)||/||b|| 1.306015547295e-01 3 KSP unpreconditioned resid norm 6.130449965920e-11 true resid norm 2.777694372634e-06 ||r(i)||/||b|| 1.306424924510e-01 4 KSP unpreconditioned resid norm 2.090637685604e-13 true resid norm 2.777696567814e-06 ||r(i)||/||b|| 1.306425956963e-01 Linear solve converged due to CONVERGED_RTOL iterations 4 NL step 4, |residual|_2 = 2.863517e-06 4 SNES Function norm 2.863517221239e-06 0 KSP unpreconditioned resid norm 2.863517221239e-06 true resid norm 2.863517221239e-06 ||r(i)||/||b|| 1.000000000000e+00 1 KSP unpreconditioned resid norm 2.518692933040e-10 true resid norm 2.518692933039e-10 ||r(i)||/||b|| 8.795801590987e-05 2 KSP unpreconditioned resid norm 2.165272180327e-12 true resid norm 1.136392813468e-09 ||r(i)||/||b|| 3.968520967987e-04 Linear solve converged due to CONVERGED_RTOL iterations 2 NL step 5, |residual|_2 = 9.132390e-08 5 SNES Function norm 9.132390063388e-08 Converged:1 My questions: 1, Is it true? when using snes_fd, the real Jacobian matrix, say J, is explicitly constructed. when combined with -pc_type lu, the problem J (du) = -R is directly solved as (du) = J^{-1} * (-R) where J^{-1} is calculated from this explicitly constructed matrix J, using LU factorization. 2, what's the difference between snes_mf_operator and snes_fd? What I understand (might be wrong) is snes_mf_operator does not *explicitly construct* the matrix J, as it is a matrix free method. Is the finite differencing methods behind the matrix free operator in snes_mf_operator and the matrix construction in snes_fd are the same? 3, It seems that snes_mf_operator is preconditioned, while snes_fd is not. Why it says ' KSP unpreconditioned resid norm ' but I am expecting 'KSP preconditioned resid norm'. Also if it is 'unpreconditioned', should it be identical to the 'true resid norm'? Is it my fault, for example, giving a bad preconditioning matrix, makes the KSP not working well? I'd appreciate your help...there are too many (maybe bad) questions today. And please let me know if you may need more information. Best, Ling -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: snes_fd_output.dat Type: application/octet-stream Size: 54086 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: snes_mf_operator_output.dat Type: application/octet-stream Size: 101695 bytes Desc: not available URL: From irving at naml.us Wed Jan 30 18:32:45 2013 From: irving at naml.us (Geoffrey Irving) Date: Wed, 30 Jan 2013 16:32:45 -0800 Subject: [petsc-users] order dependency in petsc options? Message-ID: I'm setting options via CHECK(PetscOptionsClear()); CHECK(PetscOptionsInsert(&argc,&argv,0)); a while after calling PetscInitialize(). If the options are -ksp_rtol 1e-3 -pc_factor_mat_ordering_type nd -ksp_type cg -pc_type icc -pc_factor_levels 0 -ksp_max_it 100 it successfully caps CG iterations at 100. 
If I use -ksp_max_it 100 -ksp_rtol 1e-3 -pc_factor_mat_ordering_type nd -ksp_type cg -pc_type icc -pc_factor_levels 0 the maximum iterations are 10000. I.e., -ksp_max_it work at the end but not at the beginning. What might be causing this order dependence? I'm using [0]PETSC ERROR: Petsc Release Version 3.3.0, Patch 3, Wed Aug 29 11:26:24 CDT 2012 Thanks, Geoffrey From irving at naml.us Wed Jan 30 18:37:08 2013 From: irving at naml.us (Geoffrey Irving) Date: Wed, 30 Jan 2013 16:37:08 -0800 Subject: [petsc-users] order dependency in petsc options? In-Reply-To: References: Message-ID: Aside to Jed: yes, rcm seems to work a lot better than nested dissection. Geoffrey On Wed, Jan 30, 2013 at 4:32 PM, Geoffrey Irving wrote: > I'm setting options via > > CHECK(PetscOptionsClear()); > CHECK(PetscOptionsInsert(&argc,&argv,0)); > > a while after calling PetscInitialize(). If the options are > > -ksp_rtol 1e-3 -pc_factor_mat_ordering_type nd -ksp_type cg > -pc_type icc -pc_factor_levels 0 -ksp_max_it 100 > > it successfully caps CG iterations at 100. If I use > > -ksp_max_it 100 -ksp_rtol 1e-3 -pc_factor_mat_ordering_type nd > -ksp_type cg -pc_type icc -pc_factor_levels 0 > > the maximum iterations are 10000. I.e., -ksp_max_it work at the end > but not at the beginning. What might be causing this order > dependence? I'm using > > [0]PETSC ERROR: Petsc Release Version 3.3.0, Patch 3, Wed Aug 29 > 11:26:24 CDT 2012 > > Thanks, > Geoffrey From knepley at gmail.com Wed Jan 30 19:38:35 2013 From: knepley at gmail.com (Matthew Knepley) Date: Wed, 30 Jan 2013 20:38:35 -0500 Subject: [petsc-users] order dependency in petsc options? In-Reply-To: References: Message-ID: On Wed, Jan 30, 2013 at 7:32 PM, Geoffrey Irving wrote: > I'm setting options via > > CHECK(PetscOptionsClear()); > CHECK(PetscOptionsInsert(&argc,&argv,0)); > > a while after calling PetscInitialize(). If the options are > > -ksp_rtol 1e-3 -pc_factor_mat_ordering_type nd -ksp_type cg > -pc_type icc -pc_factor_levels 0 -ksp_max_it 100 > > it successfully caps CG iterations at 100. If I use > > -ksp_max_it 100 -ksp_rtol 1e-3 -pc_factor_mat_ordering_type nd > -ksp_type cg -pc_type icc -pc_factor_levels 0 > > the maximum iterations are 10000. I.e., -ksp_max_it work at the end > but not at the beginning. What might be causing this order > dependence? I'm using > If you are using straight up argv, then the first argument is always the program name and gets ignored. Matt > [0]PETSC ERROR: Petsc Release Version 3.3.0, Patch 3, Wed Aug 29 > 11:26:24 CDT 2012 > > Thanks, > Geoffrey > -- What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. -- Norbert Wiener -------------- next part -------------- An HTML attachment was scrubbed... URL: From knepley at gmail.com Wed Jan 30 19:40:24 2013 From: knepley at gmail.com (Matthew Knepley) Date: Wed, 30 Jan 2013 20:40:24 -0500 Subject: [petsc-users] compare snes_mf_operator and snes_fd In-Reply-To: References: Message-ID: On Wed, Jan 30, 2013 at 6:30 PM, Zou (Non-US), Ling wrote: > Hi, All > > I am testing the performance of snes_mf_operator against snes_fd. > You need to give -snes_view so we can see what solver is begin used. Matt > I know snes_fd is for test/debugging and extremely slow, which is ok for > my testing purpose. I then compared the code performance using > snes_mf_operator against snes_fd. 
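Back to the options-ordering question a little further up: the first entry of the array handed to PetscOptionsInsert() is treated as the program name and ignored, so an option placed in slot 0 is silently dropped. A minimal sketch of the workaround (the option values are just the ones from this thread; the 0 file argument matches the petsc-3.3 calling sequence used above):

    #include <petscsys.h>

    int main(int argc, char **argv)
    {
      PetscErrorCode ierr;

      ierr = PetscInitialize(&argc, &argv, 0, 0);CHKERRQ(ierr);
      {
        /* entry 0 is a dummy "program name"; the real options start at entry 1 */
        char *opts[] = {(char *)"dummy",
                        (char *)"-ksp_max_it", (char *)"100",
                        (char *)"-ksp_rtol",   (char *)"1e-3"};
        int   nopts  = 5;
        char **p     = opts;

        ierr = PetscOptionsClear();CHKERRQ(ierr);
        ierr = PetscOptionsInsert(&nopts, &p, 0);CHKERRQ(ierr);
      }
      ierr = PetscFinalize();
      return 0;
    }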
Of course, snes_mf_operator uses way less > computing time then snes_fd, however, the snes_mf_operator non-linear > solver performance is worse than snes_fd, in terms of non linear iteration > in each time steps. > > Here is the PETSc Options Table entries taken from the log_summary when > using snes_mf_operator > #PETSc Option Table entries: > -ksp_converged_reason > -ksp_gmres_restart 300 > -ksp_monitor_true_residual > -log_summary > -m pipe_7eqn_2phase_step7_ps.i > -mat_fd_type ds > -pc_type lu > -snes_mf_operator > -snes_monitor > #End of PETSc Option Table entries > > Here is the PETSc Options Table entries taken from the log_summary when > using snes_fd > #PETSc Option Table entries: > -ksp_converged_reason > -ksp_gmres_restart 300 > -ksp_monitor_true_residual > -log_summary > -m pipe_7eqn_2phase_step7_ps.i > -mat_fd_type ds > -pc_type lu > -snes_fd > -snes_monitor > #End of PETSc Option Table entries > > The full code output along with log_summary are attached. > > I've noticed that when using snes_fd, the non-linear convergence is > always good in each time step, around 3-4 non-linear steps with almost > quadratic convergence rate. In each non-linear step, it uses only 1 linear > step to converge as I used '-pc_type lu' and only 1 linear step is > expected. Here is a piece of output I pulled out from the code output (very > nice non-linear, linear performance but of course very expensive): > > DT: 1.234568e-05 > Solving time step 7, time=4.34568e-05... > Initial |residual|_2 = 3.547156e+00 > NL step 0, |residual|_2 = 3.547156e+00 > 0 SNES Function norm 3.547155872103e+00 > 0 KSP unpreconditioned resid norm 3.547155872103e+00 true resid norm > 3.547155872103e+00 ||r(i)||/||b|| 1.000000000000e+00 > 1 KSP unpreconditioned resid norm 3.128472759493e-15 true resid norm > 2.343197746412e-15 ||r(i)||/||b|| 6.605849392864e-16 > Linear solve converged due to CONVERGED_RTOL iterations 1 > NL step 1, |residual|_2 = 4.900005e-04 > 1 SNES Function norm 4.900004596844e-04 > 0 KSP unpreconditioned resid norm 4.900004596844e-04 true resid norm > 4.900004596844e-04 ||r(i)||/||b|| 1.000000000000e+00 > 1 KSP unpreconditioned resid norm 5.026229113909e-18 true resid norm > 1.400595243895e-17 ||r(i)||/||b|| 2.858354959089e-14 > Linear solve converged due to CONVERGED_RTOL iterations 1 > NL step 2, |residual|_2 = 1.171419e-06 > 2 SNES Function norm 1.171419468770e-06 > 0 KSP unpreconditioned resid norm 1.171419468770e-06 true resid norm > 1.171419468770e-06 ||r(i)||/||b|| 1.000000000000e+00 > 1 KSP unpreconditioned resid norm 5.679448617332e-21 true resid norm > 4.763172202015e-21 ||r(i)||/||b|| 4.066154207782e-15 > Linear solve converged due to CONVERGED_RTOL iterations 1 > NL step 3, |residual|_2 = 1.860041e-08 > 3 SNES Function norm 1.860041398803e-08 > Converged:1 > > Back to the snes_mf_operator option, it behaviors differently. It > generally takes more non-linear and linear steps. The 'KSP unpreconditioned > resid norm' drops nicely however the 'true resid norm' seems to be a bit > wired to me, drops then increases. > > DT: 1.524158e-05 > Solving time step 9, time=7.24158e-05... 
> Initial |residual|_2 = 3.601003e+00 > NL step 0, |residual|_2 = 3.601003e+00 > 0 SNES Function norm 3.601003423006e+00 > 0 KSP unpreconditioned resid norm 3.601003423006e+00 true resid norm > 3.601003423006e+00 ||r(i)||/||b|| 1.000000000000e+00 > 1 KSP unpreconditioned resid norm 5.931429724028e-02 true resid norm > 5.931429724028e-02 ||r(i)||/||b|| 1.647160257092e-02 > 2 KSP unpreconditioned resid norm 1.379343811770e-05 true resid norm > 5.203950797327e+00 ||r(i)||/||b|| 1.445139086534e+00 > 3 KSP unpreconditioned resid norm 4.432805478482e-08 true resid norm > 5.203984109211e+00 ||r(i)||/||b|| 1.445148337256e+00 > Linear solve converged due to CONVERGED_RTOL iterations 3 > NL step 1, |residual|_2 = 5.928815e-02 > 1 SNES Function norm 5.928815267199e-02 > 0 KSP unpreconditioned resid norm 5.928815267199e-02 true resid norm > 5.928815267199e-02 ||r(i)||/||b|| 1.000000000000e+00 > 1 KSP unpreconditioned resid norm 3.276993782949e-06 true resid norm > 3.276993782949e-06 ||r(i)||/||b|| 5.527232061148e-05 > 2 KSP unpreconditioned resid norm 2.082083269186e-08 true resid norm > 1.551766076370e-05 ||r(i)||/||b|| 2.617329106129e-04 > Linear solve converged due to CONVERGED_RTOL iterations 2 > NL step 2, |residual|_2 = 3.340603e-05 > 2 SNES Function norm 3.340603450829e-05 > 0 KSP unpreconditioned resid norm 3.340603450829e-05 true resid norm > 3.340603450829e-05 ||r(i)||/||b|| 1.000000000000e+00 > 1 KSP unpreconditioned resid norm 6.659426858789e-07 true resid norm > 6.659426858789e-07 ||r(i)||/||b|| 1.993480207037e-02 > 2 KSP unpreconditioned resid norm 6.115119674466e-07 true resid norm > 2.887921320245e-06 ||r(i)||/||b|| 8.644909109246e-02 > 3 KSP unpreconditioned resid norm 1.907116539439e-09 true resid norm > 1.000874623281e-06 ||r(i)||/||b|| 2.996089293486e-02 > 4 KSP unpreconditioned resid norm 3.383211446515e-12 true resid norm > 1.005586686459e-06 ||r(i)||/||b|| 3.010194718591e-02 > Linear solve converged due to CONVERGED_RTOL iterations 4 > NL step 3, |residual|_2 = 2.126180e-05 > 3 SNES Function norm 2.126179867301e-05 > 0 KSP unpreconditioned resid norm 2.126179867301e-05 true resid norm > 2.126179867301e-05 ||r(i)||/||b|| 1.000000000000e+00 > 1 KSP unpreconditioned resid norm 2.724944027954e-06 true resid norm > 2.724944027954e-06 ||r(i)||/||b|| 1.281615008147e-01 > 2 KSP unpreconditioned resid norm 7.933800605616e-10 true resid norm > 2.776823963042e-06 ||r(i)||/||b|| 1.306015547295e-01 > 3 KSP unpreconditioned resid norm 6.130449965920e-11 true resid norm > 2.777694372634e-06 ||r(i)||/||b|| 1.306424924510e-01 > 4 KSP unpreconditioned resid norm 2.090637685604e-13 true resid norm > 2.777696567814e-06 ||r(i)||/||b|| 1.306425956963e-01 > Linear solve converged due to CONVERGED_RTOL iterations 4 > NL step 4, |residual|_2 = 2.863517e-06 > 4 SNES Function norm 2.863517221239e-06 > 0 KSP unpreconditioned resid norm 2.863517221239e-06 true resid norm > 2.863517221239e-06 ||r(i)||/||b|| 1.000000000000e+00 > 1 KSP unpreconditioned resid norm 2.518692933040e-10 true resid norm > 2.518692933039e-10 ||r(i)||/||b|| 8.795801590987e-05 > 2 KSP unpreconditioned resid norm 2.165272180327e-12 true resid norm > 1.136392813468e-09 ||r(i)||/||b|| 3.968520967987e-04 > Linear solve converged due to CONVERGED_RTOL iterations 2 > NL step 5, |residual|_2 = 9.132390e-08 > 5 SNES Function norm 9.132390063388e-08 > Converged:1 > > > My questions: > 1, Is it true? when using snes_fd, the real Jacobian matrix, say J, is > explicitly constructed. 
when combined with -pc_type lu, the problem > J (du) = -R > is directly solved as (du) = J^{-1} * (-R) > where J^{-1} is calculated from this explicitly constructed matrix J, > using LU factorization. > > 2, what's the difference between snes_mf_operator and snes_fd? > What I understand (might be wrong) is snes_mf_operator does not > *explicitly construct* the matrix J, as it is a matrix free method. Is the > finite differencing methods behind the matrix free operator > in snes_mf_operator and the matrix construction in snes_fd are the same? > > 3, It seems that snes_mf_operator is preconditioned, while snes_fd is not. > Why it says ' KSP unpreconditioned resid norm ' but I am expecting 'KSP > preconditioned resid norm'. Also if it is 'unpreconditioned', should it > be identical to the 'true resid norm'? Is it my fault, for example, giving > a bad preconditioning matrix, makes the KSP not working well? > > I'd appreciate your help...there are too many (maybe bad) questions today. > And please let me know if you may need more information. > > Best, > > Ling > -- What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. -- Norbert Wiener -------------- next part -------------- An HTML attachment was scrubbed... URL: From irving at naml.us Wed Jan 30 19:44:53 2013 From: irving at naml.us (Geoffrey Irving) Date: Wed, 30 Jan 2013 17:44:53 -0800 Subject: [petsc-users] order dependency in petsc options? In-Reply-To: References: Message-ID: On Wed, Jan 30, 2013 at 5:38 PM, Matthew Knepley wrote: > On Wed, Jan 30, 2013 at 7:32 PM, Geoffrey Irving wrote: >> >> I'm setting options via >> >> CHECK(PetscOptionsClear()); >> CHECK(PetscOptionsInsert(&argc,&argv,0)); >> >> a while after calling PetscInitialize(). If the options are >> >> -ksp_rtol 1e-3 -pc_factor_mat_ordering_type nd -ksp_type cg >> -pc_type icc -pc_factor_levels 0 -ksp_max_it 100 >> >> it successfully caps CG iterations at 100. If I use >> >> -ksp_max_it 100 -ksp_rtol 1e-3 -pc_factor_mat_ordering_type nd >> -ksp_type cg -pc_type icc -pc_factor_levels 0 >> >> the maximum iterations are 10000. I.e., -ksp_max_it work at the end >> but not at the beginning. What might be causing this order >> dependence? I'm using > > > If you are using straight up argv, then the first argument is always the > program name > and gets ignored. Yep, that was it. Thanks. Geoffrey From bsmith at mcs.anl.gov Wed Jan 30 20:50:05 2013 From: bsmith at mcs.anl.gov (Barry Smith) Date: Wed, 30 Jan 2013 20:50:05 -0600 Subject: [petsc-users] compare snes_mf_operator and snes_fd In-Reply-To: References: Message-ID: <675AA81A-E482-4406-A0DB-4D35DFFD57FD@mcs.anl.gov> Also see http://www.mcs.anl.gov/petsc/documentation/faq.html#newton On Jan 30, 2013, at 7:40 PM, Matthew Knepley wrote: > On Wed, Jan 30, 2013 at 6:30 PM, Zou (Non-US), Ling wrote: > Hi, All > > I am testing the performance of snes_mf_operator against snes_fd. > > You need to give -snes_view so we can see what solver is begin used. > > Matt > I know snes_fd is for test/debugging and extremely slow, which is ok for my testing purpose. I then compared the code performance using snes_mf_operator against snes_fd. Of course, snes_mf_operator uses way less computing time then snes_fd, however, the snes_mf_operator non-linear solver performance is worse than snes_fd, in terms of non linear iteration in each time steps. 
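Since the same questions are quoted several times in this thread, a short summary of the distinction may help (this is the usual PETSc behaviour as I understand it; see also the FAQ entry Barry points to):

-snes_fd builds an explicit Jacobian by finite-differencing the residual one column at a time (roughly N extra residual evaluations per Newton step, hence the cost), and that matrix is used both as the operator and as the preconditioning matrix, so -pc_type lu factors the true finite-difference Jacobian and each linear solve is essentially exact.

-snes_mf_operator never assembles the Jacobian. The Krylov method only needs Jacobian-vector products, and these are approximated directly from the residual,

    J(u) v ~= ( F(u + h v) - F(u) ) / h,

with the differencing parameter h chosen automatically. The matrix assembled by the user's Jacobian routine is then used only to build the preconditioner, so if that matrix is a poor approximation of J the linear and nonlinear convergence both degrade, which is consistent with the logs shown above.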
> > Here is the PETSc Options Table entries taken from the log_summary when using snes_mf_operator > #PETSc Option Table entries: > -ksp_converged_reason > -ksp_gmres_restart 300 > -ksp_monitor_true_residual > -log_summary > -m pipe_7eqn_2phase_step7_ps.i > -mat_fd_type ds > -pc_type lu > -snes_mf_operator > -snes_monitor > #End of PETSc Option Table entries > > Here is the PETSc Options Table entries taken from the log_summary when using snes_fd > #PETSc Option Table entries: > -ksp_converged_reason > -ksp_gmres_restart 300 > -ksp_monitor_true_residual > -log_summary > -m pipe_7eqn_2phase_step7_ps.i > -mat_fd_type ds > -pc_type lu > -snes_fd > -snes_monitor > #End of PETSc Option Table entries > > The full code output along with log_summary are attached. > > I've noticed that when using snes_fd, the non-linear convergence is always good in each time step, around 3-4 non-linear steps with almost quadratic convergence rate. In each non-linear step, it uses only 1 linear step to converge as I used '-pc_type lu' and only 1 linear step is expected. Here is a piece of output I pulled out from the code output (very nice non-linear, linear performance but of course very expensive): > > DT: 1.234568e-05 > Solving time step 7, time=4.34568e-05... > Initial |residual|_2 = 3.547156e+00 > NL step 0, |residual|_2 = 3.547156e+00 > 0 SNES Function norm 3.547155872103e+00 > 0 KSP unpreconditioned resid norm 3.547155872103e+00 true resid norm 3.547155872103e+00 ||r(i)||/||b|| 1.000000000000e+00 > 1 KSP unpreconditioned resid norm 3.128472759493e-15 true resid norm 2.343197746412e-15 ||r(i)||/||b|| 6.605849392864e-16 > Linear solve converged due to CONVERGED_RTOL iterations 1 > NL step 1, |residual|_2 = 4.900005e-04 > 1 SNES Function norm 4.900004596844e-04 > 0 KSP unpreconditioned resid norm 4.900004596844e-04 true resid norm 4.900004596844e-04 ||r(i)||/||b|| 1.000000000000e+00 > 1 KSP unpreconditioned resid norm 5.026229113909e-18 true resid norm 1.400595243895e-17 ||r(i)||/||b|| 2.858354959089e-14 > Linear solve converged due to CONVERGED_RTOL iterations 1 > NL step 2, |residual|_2 = 1.171419e-06 > 2 SNES Function norm 1.171419468770e-06 > 0 KSP unpreconditioned resid norm 1.171419468770e-06 true resid norm 1.171419468770e-06 ||r(i)||/||b|| 1.000000000000e+00 > 1 KSP unpreconditioned resid norm 5.679448617332e-21 true resid norm 4.763172202015e-21 ||r(i)||/||b|| 4.066154207782e-15 > Linear solve converged due to CONVERGED_RTOL iterations 1 > NL step 3, |residual|_2 = 1.860041e-08 > 3 SNES Function norm 1.860041398803e-08 > Converged:1 > > Back to the snes_mf_operator option, it behaviors differently. It generally takes more non-linear and linear steps. The 'KSP unpreconditioned resid norm' drops nicely however the 'true resid norm' seems to be a bit wired to me, drops then increases. > > DT: 1.524158e-05 > Solving time step 9, time=7.24158e-05... 
> Initial |residual|_2 = 3.601003e+00 > NL step 0, |residual|_2 = 3.601003e+00 > 0 SNES Function norm 3.601003423006e+00 > 0 KSP unpreconditioned resid norm 3.601003423006e+00 true resid norm 3.601003423006e+00 ||r(i)||/||b|| 1.000000000000e+00 > 1 KSP unpreconditioned resid norm 5.931429724028e-02 true resid norm 5.931429724028e-02 ||r(i)||/||b|| 1.647160257092e-02 > 2 KSP unpreconditioned resid norm 1.379343811770e-05 true resid norm 5.203950797327e+00 ||r(i)||/||b|| 1.445139086534e+00 > 3 KSP unpreconditioned resid norm 4.432805478482e-08 true resid norm 5.203984109211e+00 ||r(i)||/||b|| 1.445148337256e+00 > Linear solve converged due to CONVERGED_RTOL iterations 3 > NL step 1, |residual|_2 = 5.928815e-02 > 1 SNES Function norm 5.928815267199e-02 > 0 KSP unpreconditioned resid norm 5.928815267199e-02 true resid norm 5.928815267199e-02 ||r(i)||/||b|| 1.000000000000e+00 > 1 KSP unpreconditioned resid norm 3.276993782949e-06 true resid norm 3.276993782949e-06 ||r(i)||/||b|| 5.527232061148e-05 > 2 KSP unpreconditioned resid norm 2.082083269186e-08 true resid norm 1.551766076370e-05 ||r(i)||/||b|| 2.617329106129e-04 > Linear solve converged due to CONVERGED_RTOL iterations 2 > NL step 2, |residual|_2 = 3.340603e-05 > 2 SNES Function norm 3.340603450829e-05 > 0 KSP unpreconditioned resid norm 3.340603450829e-05 true resid norm 3.340603450829e-05 ||r(i)||/||b|| 1.000000000000e+00 > 1 KSP unpreconditioned resid norm 6.659426858789e-07 true resid norm 6.659426858789e-07 ||r(i)||/||b|| 1.993480207037e-02 > 2 KSP unpreconditioned resid norm 6.115119674466e-07 true resid norm 2.887921320245e-06 ||r(i)||/||b|| 8.644909109246e-02 > 3 KSP unpreconditioned resid norm 1.907116539439e-09 true resid norm 1.000874623281e-06 ||r(i)||/||b|| 2.996089293486e-02 > 4 KSP unpreconditioned resid norm 3.383211446515e-12 true resid norm 1.005586686459e-06 ||r(i)||/||b|| 3.010194718591e-02 > Linear solve converged due to CONVERGED_RTOL iterations 4 > NL step 3, |residual|_2 = 2.126180e-05 > 3 SNES Function norm 2.126179867301e-05 > 0 KSP unpreconditioned resid norm 2.126179867301e-05 true resid norm 2.126179867301e-05 ||r(i)||/||b|| 1.000000000000e+00 > 1 KSP unpreconditioned resid norm 2.724944027954e-06 true resid norm 2.724944027954e-06 ||r(i)||/||b|| 1.281615008147e-01 > 2 KSP unpreconditioned resid norm 7.933800605616e-10 true resid norm 2.776823963042e-06 ||r(i)||/||b|| 1.306015547295e-01 > 3 KSP unpreconditioned resid norm 6.130449965920e-11 true resid norm 2.777694372634e-06 ||r(i)||/||b|| 1.306424924510e-01 > 4 KSP unpreconditioned resid norm 2.090637685604e-13 true resid norm 2.777696567814e-06 ||r(i)||/||b|| 1.306425956963e-01 > Linear solve converged due to CONVERGED_RTOL iterations 4 > NL step 4, |residual|_2 = 2.863517e-06 > 4 SNES Function norm 2.863517221239e-06 > 0 KSP unpreconditioned resid norm 2.863517221239e-06 true resid norm 2.863517221239e-06 ||r(i)||/||b|| 1.000000000000e+00 > 1 KSP unpreconditioned resid norm 2.518692933040e-10 true resid norm 2.518692933039e-10 ||r(i)||/||b|| 8.795801590987e-05 > 2 KSP unpreconditioned resid norm 2.165272180327e-12 true resid norm 1.136392813468e-09 ||r(i)||/||b|| 3.968520967987e-04 > Linear solve converged due to CONVERGED_RTOL iterations 2 > NL step 5, |residual|_2 = 9.132390e-08 > 5 SNES Function norm 9.132390063388e-08 > Converged:1 > > > My questions: > 1, Is it true? when using snes_fd, the real Jacobian matrix, say J, is explicitly constructed. 
when combined with -pc_type lu, the problem > J (du) = -R > is directly solved as (du) = J^{-1} * (-R) > where J^{-1} is calculated from this explicitly constructed matrix J, using LU factorization. > > 2, what's the difference between snes_mf_operator and snes_fd? > What I understand (might be wrong) is snes_mf_operator does not *explicitly construct* the matrix J, as it is a matrix free method. Is the finite differencing methods behind the matrix free operator in snes_mf_operator and the matrix construction in snes_fd are the same? > > 3, It seems that snes_mf_operator is preconditioned, while snes_fd is not. Why it says ' KSP unpreconditioned resid norm ' but I am expecting 'KSP preconditioned resid norm'. Also if it is 'unpreconditioned', should it be identical to the 'true resid norm'? Is it my fault, for example, giving a bad preconditioning matrix, makes the KSP not working well? > > I'd appreciate your help...there are too many (maybe bad) questions today. And please let me know if you may need more information. > > Best, > > Ling > > > > -- > What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. > -- Norbert Wiener From alejandrocof at gmail.com Thu Jan 31 09:58:26 2013 From: alejandrocof at gmail.com (Alex Salazar) Date: Thu, 31 Jan 2013 09:58:26 -0600 Subject: [petsc-users] dense matrix problem Message-ID: Hello, everyone I want to get the solution of algrebraic system (with a dense zero-diagonal simetric matrix) of a interpolation problem (using radial basis functions) with Petsc. For a few points (around 10 elements) using KSP I get the correct solution, however when I increase the number of points, the result is incorrect. Any Idea ? Regards Alejandro. -------------- next part -------------- An HTML attachment was scrubbed... URL: From jedbrown at mcs.anl.gov Thu Jan 31 10:07:49 2013 From: jedbrown at mcs.anl.gov (Jed Brown) Date: Thu, 31 Jan 2013 10:07:49 -0600 Subject: [petsc-users] dense matrix problem In-Reply-To: References: Message-ID: "result is incorrect" is not helpful. Please include diagnostics. Are you using an iterative or direct solver? Start here if you don't know what to check: http://scicomp.stackexchange.com/questions/513/why-is-my-iterative-linear-solver-not-converging On Thu, Jan 31, 2013 at 9:58 AM, Alex Salazar wrote: > Hello, everyone > > I want to get the solution of algrebraic system (with a dense > zero-diagonal simetric matrix) of a interpolation problem (using radial > basis functions) with Petsc. > > For a few points (around 10 elements) using KSP I get the correct > solution, however when I increase the number of points, the result is > incorrect. > > Any Idea ? > > Regards Alejandro. > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From knepley at gmail.com Thu Jan 31 10:09:48 2013 From: knepley at gmail.com (Matthew Knepley) Date: Thu, 31 Jan 2013 11:09:48 -0500 Subject: [petsc-users] dense matrix problem In-Reply-To: References: Message-ID: On Thu, Jan 31, 2013 at 11:07 AM, Jed Brown wrote: > "result is incorrect" is not helpful. Please include diagnostics. Are you > using an iterative or direct solver? 
Start here if you don't know what to > check: > > > http://scicomp.stackexchange.com/questions/513/why-is-my-iterative-linear-solver-not-converging > And ASM works great for these problems: http://arxiv.org/abs/0909.5413 Matt > > On Thu, Jan 31, 2013 at 9:58 AM, Alex Salazar wrote: > >> Hello, everyone >> >> I want to get the solution of algrebraic system (with a dense >> zero-diagonal simetric matrix) of a interpolation problem (using radial >> basis functions) with Petsc. >> >> For a few points (around 10 elements) using KSP I get the correct >> solution, however when I increase the number of points, the result is >> incorrect. >> >> Any Idea ? >> >> Regards Alejandro. >> >> > -- What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. -- Norbert Wiener -------------- next part -------------- An HTML attachment was scrubbed... URL: From alejandrocof at gmail.com Thu Jan 31 11:31:30 2013 From: alejandrocof at gmail.com (Alex Salazar) Date: Thu, 31 Jan 2013 11:31:30 -0600 Subject: [petsc-users] dense matrix problem In-Reply-To: References: Message-ID: Thanks Jed, sorry I new using Petsc. I am runing my program with default options : mpirun -np 1 ./a.out runing the program with a few points (10) the interpolation is aceptable, using the monitor options ie. mpirun -np 1 ./a.out -ksp_converged_reason -ksp_monitor_true_residual the output is the next 0 KSP preconditioned resid norm 7.415666989515e+05 true resid norm 1.510533668194e+04 ||r(i)||/||b|| 1.000000000000e+00 1 KSP preconditioned resid norm 4.935801698308e+02 true resid norm 1.721762948997e+01 ||r(i)||/||b|| 1.139837519183e-03 2 KSP preconditioned resid norm 1.038493428107e-01 true resid norm 9.406615966569e+00 ||r(i)||/||b|| 6.227346112594e-04 Linear solve converged due to CONVERGED_RTOL iterations 2 But if encrease the number of points to interpolate, the result is incorrect and the time of convergece encrease, the output in this case is: ..... 138 KSP preconditioned resid norm 4.897817095518e-06 true resid norm 5.455886375264e-08 ||r(i)||/||b|| 4.852601039969e-10 139 KSP preconditioned resid norm 3.535973415166e-06 true resid norm 3.717900878947e-08 ||r(i)||/||b|| 3.306793512687e-10 140 KSP preconditioned resid norm 2.450794339359e-06 true resid norm 3.009597005096e-08 ||r(i)||/||b|| 2.676810430480e-10 Linear solve converged due to CONVERGED_RTOL iterations 140 [0] time (sec) 89.940000 the code where the solution is implemented is the next: KSP ksp; KSPCreate(PETSC_COMM_WORLD,&ksp); KSPSetOperators(ksp,A,A,DIFFERENT_NONZERO_PATTERN); KSPSetTolerances(ksp,1.e-5/(N+nt+1),1.e-50,PETSC_DEFAULT,PETSC_DEFAULT); KSPSetFromOptions(ksp); KSPSolve(ksp,b,x); Let me know if you need another information. Regard Alex 2013/1/31 Jed Brown > "result is incorrect" is not helpful. Please include diagnostics. Are you > using an iterative or direct solver? Start here if you don't know what to > check: > > > http://scicomp.stackexchange.com/questions/513/why-is-my-iterative-linear-solver-not-converging > > > > On Thu, Jan 31, 2013 at 9:58 AM, Alex Salazar wrote: > >> Hello, everyone >> >> I want to get the solution of algrebraic system (with a dense >> zero-diagonal simetric matrix) of a interpolation problem (using radial >> basis functions) with Petsc. >> >> For a few points (around 10 elements) using KSP I get the correct >> solution, however when I increase the number of points, the result is >> incorrect. >> >> Any Idea ? >> >> Regards Alejandro. 
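In the same spirit as the request for diagnostics above, rerunning with the solver-reporting options makes this kind of report much easier to act on (the executable name is just the one used in this thread):

    mpirun -np 1 ./a.out -ksp_view -ksp_monitor_true_residual -ksp_converged_reason

and, because the RBF system is small and dense, a direct factorization is a handy correctness baseline to compare the interpolant against (assuming the matrix is stored as a sequential dense matrix on one process):

    mpirun -np 1 ./a.out -ksp_type preonly -pc_type lu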
>> >> > -------------- next part -------------- An HTML attachment was scrubbed... URL: From alejandrocof at gmail.com Thu Jan 31 11:34:04 2013 From: alejandrocof at gmail.com (Alex Salazar) Date: Thu, 31 Jan 2013 11:34:04 -0600 Subject: [petsc-users] dense matrix problem In-Reply-To: References: Message-ID: Thanks Matthew !, It sound very interesting I going to check it. Regard Alex... 2013/1/31 Matthew Knepley > On Thu, Jan 31, 2013 at 11:07 AM, Jed Brown wrote: > >> "result is incorrect" is not helpful. Please include diagnostics. Are you >> using an iterative or direct solver? Start here if you don't know what to >> check: >> >> >> http://scicomp.stackexchange.com/questions/513/why-is-my-iterative-linear-solver-not-converging >> > > And ASM works great for these problems: > > http://arxiv.org/abs/0909.5413 > > Matt > > >> >> On Thu, Jan 31, 2013 at 9:58 AM, Alex Salazar wrote: >> >>> Hello, everyone >>> >>> I want to get the solution of algrebraic system (with a dense >>> zero-diagonal simetric matrix) of a interpolation problem (using radial >>> basis functions) with Petsc. >>> >>> For a few points (around 10 elements) using KSP I get the correct >>> solution, however when I increase the number of points, the result is >>> incorrect. >>> >>> Any Idea ? >>> >>> Regards Alejandro. >>> >>> >> > > > -- > What most experimenters take for granted before they begin their > experiments is infinitely more interesting than any results to which their > experiments lead. > -- Norbert Wiener > -------------- next part -------------- An HTML attachment was scrubbed... URL: From knepley at gmail.com Thu Jan 31 12:12:20 2013 From: knepley at gmail.com (Matthew Knepley) Date: Thu, 31 Jan 2013 13:12:20 -0500 Subject: [petsc-users] dense matrix problem In-Reply-To: References: Message-ID: On Thu, Jan 31, 2013 at 12:31 PM, Alex Salazar wrote: > Thanks Jed, sorry I new using Petsc. > > I am runing my program with default options : > mpirun -np 1 ./a.out > runing the program with a few points (10) the interpolation is aceptable, > > > using the monitor options ie. > mpirun -np 1 ./a.out -ksp_converged_reason -ksp_monitor_true_residual > the output is the next > 0 KSP preconditioned resid norm 7.415666989515e+05 true resid norm > 1.510533668194e+04 ||r(i)||/||b|| 1.000000000000e+00 > 1 KSP preconditioned resid norm 4.935801698308e+02 true resid norm > 1.721762948997e+01 ||r(i)||/||b|| 1.139837519183e-03 > 2 KSP preconditioned resid norm 1.038493428107e-01 true resid norm > 9.406615966569e+00 ||r(i)||/||b|| 6.227346112594e-04 > Linear solve converged due to CONVERGED_RTOL iterations 2 > > But if encrease the number of points to interpolate, the result is > incorrect and the time of convergece encrease, the output in this case is: > > ..... > 138 KSP preconditioned resid norm 4.897817095518e-06 true resid norm > 5.455886375264e-08 ||r(i)||/||b|| 4.852601039969e-10 > 139 KSP preconditioned resid norm 3.535973415166e-06 true resid norm > 3.717900878947e-08 ||r(i)||/||b|| 3.306793512687e-10 > 140 KSP preconditioned resid norm 2.450794339359e-06 true resid norm > 3.009597005096e-08 ||r(i)||/||b|| 2.676810430480e-10 > Linear solve converged due to CONVERGED_RTOL iterations 140 > [0] time (sec) 89.940000 > You have an ill-conditioned system, so you will have to lower the tolerance. Also note that your preconditioner is a piece of crap. 
Matt > > the code where the solution is implemented is the next: > KSP ksp; > KSPCreate(PETSC_COMM_WORLD,&ksp); > KSPSetOperators(ksp,A,A,DIFFERENT_NONZERO_PATTERN); > KSPSetTolerances(ksp,1.e-5/(N+nt+1),1.e-50,PETSC_DEFAULT,PETSC_DEFAULT); > KSPSetFromOptions(ksp); > KSPSolve(ksp,b,x); > > Let me know if you need another information. > > Regard Alex > > > 2013/1/31 Jed Brown > >> "result is incorrect" is not helpful. Please include diagnostics. Are you >> using an iterative or direct solver? Start here if you don't know what to >> check: >> >> >> http://scicomp.stackexchange.com/questions/513/why-is-my-iterative-linear-solver-not-converging >> >> >> >> On Thu, Jan 31, 2013 at 9:58 AM, Alex Salazar wrote: >> >>> Hello, everyone >>> >>> I want to get the solution of algrebraic system (with a dense >>> zero-diagonal simetric matrix) of a interpolation problem (using radial >>> basis functions) with Petsc. >>> >>> For a few points (around 10 elements) using KSP I get the correct >>> solution, however when I increase the number of points, the result is >>> incorrect. >>> >>> Any Idea ? >>> >>> Regards Alejandro. >>> >>> >> > -- What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. -- Norbert Wiener -------------- next part -------------- An HTML attachment was scrubbed... URL: From ling.zou at inl.gov Thu Jan 31 12:16:34 2013 From: ling.zou at inl.gov (Zou (Non-US), Ling) Date: Thu, 31 Jan 2013 11:16:34 -0700 Subject: [petsc-users] compare snes_mf_operator and snes_fd In-Reply-To: References: Message-ID: Thank you Matt and Barry. I didn't get a chance to reply you yesterday. Here are the new output files with -snes_view on. Ling On Wed, Jan 30, 2013 at 6:40 PM, Matthew Knepley wrote: > On Wed, Jan 30, 2013 at 6:30 PM, Zou (Non-US), Ling wrote: > >> Hi, All >> >> I am testing the performance of snes_mf_operator against snes_fd. >> > > You need to give -snes_view so we can see what solver is begin used. > > Matt > >> I know snes_fd is for test/debugging and extremely slow, which is ok for >> my testing purpose. I then compared the code performance using >> snes_mf_operator against snes_fd. Of course, snes_mf_operator uses way less >> computing time then snes_fd, however, the snes_mf_operator non-linear >> solver performance is worse than snes_fd, in terms of non linear iteration >> in each time steps. >> >> Here is the PETSc Options Table entries taken from the log_summary when >> using snes_mf_operator >> #PETSc Option Table entries: >> -ksp_converged_reason >> -ksp_gmres_restart 300 >> -ksp_monitor_true_residual >> -log_summary >> -m pipe_7eqn_2phase_step7_ps.i >> -mat_fd_type ds >> -pc_type lu >> -snes_mf_operator >> -snes_monitor >> #End of PETSc Option Table entries >> >> Here is the PETSc Options Table entries taken from the log_summary when >> using snes_fd >> #PETSc Option Table entries: >> -ksp_converged_reason >> -ksp_gmres_restart 300 >> -ksp_monitor_true_residual >> -log_summary >> -m pipe_7eqn_2phase_step7_ps.i >> -mat_fd_type ds >> -pc_type lu >> -snes_fd >> -snes_monitor >> #End of PETSc Option Table entries >> >> The full code output along with log_summary are attached. >> >> I've noticed that when using snes_fd, the non-linear convergence is >> always good in each time step, around 3-4 non-linear steps with almost >> quadratic convergence rate. In each non-linear step, it uses only 1 linear >> step to converge as I used '-pc_type lu' and only 1 linear step is >> expected. 
Here is a piece of output I pulled out from the code output (very >> nice non-linear, linear performance but of course very expensive): >> >> DT: 1.234568e-05 >> Solving time step 7, time=4.34568e-05... >> Initial |residual|_2 = 3.547156e+00 >> NL step 0, |residual|_2 = 3.547156e+00 >> 0 SNES Function norm 3.547155872103e+00 >> 0 KSP unpreconditioned resid norm 3.547155872103e+00 true resid norm >> 3.547155872103e+00 ||r(i)||/||b|| 1.000000000000e+00 >> 1 KSP unpreconditioned resid norm 3.128472759493e-15 true resid norm >> 2.343197746412e-15 ||r(i)||/||b|| 6.605849392864e-16 >> Linear solve converged due to CONVERGED_RTOL iterations 1 >> NL step 1, |residual|_2 = 4.900005e-04 >> 1 SNES Function norm 4.900004596844e-04 >> 0 KSP unpreconditioned resid norm 4.900004596844e-04 true resid norm >> 4.900004596844e-04 ||r(i)||/||b|| 1.000000000000e+00 >> 1 KSP unpreconditioned resid norm 5.026229113909e-18 true resid norm >> 1.400595243895e-17 ||r(i)||/||b|| 2.858354959089e-14 >> Linear solve converged due to CONVERGED_RTOL iterations 1 >> NL step 2, |residual|_2 = 1.171419e-06 >> 2 SNES Function norm 1.171419468770e-06 >> 0 KSP unpreconditioned resid norm 1.171419468770e-06 true resid norm >> 1.171419468770e-06 ||r(i)||/||b|| 1.000000000000e+00 >> 1 KSP unpreconditioned resid norm 5.679448617332e-21 true resid norm >> 4.763172202015e-21 ||r(i)||/||b|| 4.066154207782e-15 >> Linear solve converged due to CONVERGED_RTOL iterations 1 >> NL step 3, |residual|_2 = 1.860041e-08 >> 3 SNES Function norm 1.860041398803e-08 >> Converged:1 >> >> Back to the snes_mf_operator option, it behaviors differently. It >> generally takes more non-linear and linear steps. The 'KSP unpreconditioned >> resid norm' drops nicely however the 'true resid norm' seems to be a bit >> wired to me, drops then increases. >> >> DT: 1.524158e-05 >> Solving time step 9, time=7.24158e-05... 
>> Initial |residual|_2 = 3.601003e+00 >> NL step 0, |residual|_2 = 3.601003e+00 >> 0 SNES Function norm 3.601003423006e+00 >> 0 KSP unpreconditioned resid norm 3.601003423006e+00 true resid norm >> 3.601003423006e+00 ||r(i)||/||b|| 1.000000000000e+00 >> 1 KSP unpreconditioned resid norm 5.931429724028e-02 true resid norm >> 5.931429724028e-02 ||r(i)||/||b|| 1.647160257092e-02 >> 2 KSP unpreconditioned resid norm 1.379343811770e-05 true resid norm >> 5.203950797327e+00 ||r(i)||/||b|| 1.445139086534e+00 >> 3 KSP unpreconditioned resid norm 4.432805478482e-08 true resid norm >> 5.203984109211e+00 ||r(i)||/||b|| 1.445148337256e+00 >> Linear solve converged due to CONVERGED_RTOL iterations 3 >> NL step 1, |residual|_2 = 5.928815e-02 >> 1 SNES Function norm 5.928815267199e-02 >> 0 KSP unpreconditioned resid norm 5.928815267199e-02 true resid norm >> 5.928815267199e-02 ||r(i)||/||b|| 1.000000000000e+00 >> 1 KSP unpreconditioned resid norm 3.276993782949e-06 true resid norm >> 3.276993782949e-06 ||r(i)||/||b|| 5.527232061148e-05 >> 2 KSP unpreconditioned resid norm 2.082083269186e-08 true resid norm >> 1.551766076370e-05 ||r(i)||/||b|| 2.617329106129e-04 >> Linear solve converged due to CONVERGED_RTOL iterations 2 >> NL step 2, |residual|_2 = 3.340603e-05 >> 2 SNES Function norm 3.340603450829e-05 >> 0 KSP unpreconditioned resid norm 3.340603450829e-05 true resid norm >> 3.340603450829e-05 ||r(i)||/||b|| 1.000000000000e+00 >> 1 KSP unpreconditioned resid norm 6.659426858789e-07 true resid norm >> 6.659426858789e-07 ||r(i)||/||b|| 1.993480207037e-02 >> 2 KSP unpreconditioned resid norm 6.115119674466e-07 true resid norm >> 2.887921320245e-06 ||r(i)||/||b|| 8.644909109246e-02 >> 3 KSP unpreconditioned resid norm 1.907116539439e-09 true resid norm >> 1.000874623281e-06 ||r(i)||/||b|| 2.996089293486e-02 >> 4 KSP unpreconditioned resid norm 3.383211446515e-12 true resid norm >> 1.005586686459e-06 ||r(i)||/||b|| 3.010194718591e-02 >> Linear solve converged due to CONVERGED_RTOL iterations 4 >> NL step 3, |residual|_2 = 2.126180e-05 >> 3 SNES Function norm 2.126179867301e-05 >> 0 KSP unpreconditioned resid norm 2.126179867301e-05 true resid norm >> 2.126179867301e-05 ||r(i)||/||b|| 1.000000000000e+00 >> 1 KSP unpreconditioned resid norm 2.724944027954e-06 true resid norm >> 2.724944027954e-06 ||r(i)||/||b|| 1.281615008147e-01 >> 2 KSP unpreconditioned resid norm 7.933800605616e-10 true resid norm >> 2.776823963042e-06 ||r(i)||/||b|| 1.306015547295e-01 >> 3 KSP unpreconditioned resid norm 6.130449965920e-11 true resid norm >> 2.777694372634e-06 ||r(i)||/||b|| 1.306424924510e-01 >> 4 KSP unpreconditioned resid norm 2.090637685604e-13 true resid norm >> 2.777696567814e-06 ||r(i)||/||b|| 1.306425956963e-01 >> Linear solve converged due to CONVERGED_RTOL iterations 4 >> NL step 4, |residual|_2 = 2.863517e-06 >> 4 SNES Function norm 2.863517221239e-06 >> 0 KSP unpreconditioned resid norm 2.863517221239e-06 true resid norm >> 2.863517221239e-06 ||r(i)||/||b|| 1.000000000000e+00 >> 1 KSP unpreconditioned resid norm 2.518692933040e-10 true resid norm >> 2.518692933039e-10 ||r(i)||/||b|| 8.795801590987e-05 >> 2 KSP unpreconditioned resid norm 2.165272180327e-12 true resid norm >> 1.136392813468e-09 ||r(i)||/||b|| 3.968520967987e-04 >> Linear solve converged due to CONVERGED_RTOL iterations 2 >> NL step 5, |residual|_2 = 9.132390e-08 >> 5 SNES Function norm 9.132390063388e-08 >> Converged:1 >> >> >> My questions: >> 1, Is it true? when using snes_fd, the real Jacobian matrix, say J, is >> explicitly constructed. 
when combined with -pc_type lu, the problem >> J (du) = -R >> is directly solved as (du) = J^{-1} * (-R) >> where J^{-1} is calculated from this explicitly constructed matrix J, >> using LU factorization. >> >> 2, what's the difference between snes_mf_operator and snes_fd? >> What I understand (might be wrong) is snes_mf_operator does not >> *explicitly construct* the matrix J, as it is a matrix free method. Is the >> finite differencing methods behind the matrix free operator >> in snes_mf_operator and the matrix construction in snes_fd are the same? >> >> 3, It seems that snes_mf_operator is preconditioned, while snes_fd is >> not. Why it says ' KSP unpreconditioned resid norm ' but I am expecting >> 'KSP preconditioned resid norm'. Also if it is 'unpreconditioned', >> should it be identical to the 'true resid norm'? Is it my fault, for >> example, giving a bad preconditioning matrix, makes the KSP not working >> well? >> >> I'd appreciate your help...there are too many (maybe bad) questions >> today. And please let me know if you may need more information. >> >> Best, >> >> Ling >> > > > > -- > What most experimenters take for granted before they begin their > experiments is infinitely more interesting than any results to which their > experiments lead. > -- Norbert Wiener > -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: snes_fd_output Type: application/octet-stream Size: 97760 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: snes_mf_operator_output Type: application/octet-stream Size: 136639 bytes Desc: not available URL: From knepley at gmail.com Thu Jan 31 12:28:37 2013 From: knepley at gmail.com (Matthew Knepley) Date: Thu, 31 Jan 2013 13:28:37 -0500 Subject: [petsc-users] compare snes_mf_operator and snes_fd In-Reply-To: References: Message-ID: On Thu, Jan 31, 2013 at 1:16 PM, Zou (Non-US), Ling wrote: > Thank you Matt and Barry. I didn't get a chance to reply you yesterday. > Here are the new output files with -snes_view on. > It seems clear that the matrix you are providing to snes_mf_operator is not a good preconditioner for the actual matrix obtained with snes_fd. Maybe you have a bug in your evaluation. Maybe you could try -snes_check_jacobian to see. Matt > Ling > > > On Wed, Jan 30, 2013 at 6:40 PM, Matthew Knepley wrote: > >> On Wed, Jan 30, 2013 at 6:30 PM, Zou (Non-US), Ling wrote: >> >>> Hi, All >>> >>> I am testing the performance of snes_mf_operator against snes_fd. >>> >> >> You need to give -snes_view so we can see what solver is begin used. >> >> Matt >> >>> I know snes_fd is for test/debugging and extremely slow, which is ok for >>> my testing purpose. I then compared the code performance using >>> snes_mf_operator against snes_fd. Of course, snes_mf_operator uses way less >>> computing time then snes_fd, however, the snes_mf_operator non-linear >>> solver performance is worse than snes_fd, in terms of non linear iteration >>> in each time steps. 
>>> >>> Here is the PETSc Options Table entries taken from the log_summary when >>> using snes_mf_operator >>> #PETSc Option Table entries: >>> -ksp_converged_reason >>> -ksp_gmres_restart 300 >>> -ksp_monitor_true_residual >>> -log_summary >>> -m pipe_7eqn_2phase_step7_ps.i >>> -mat_fd_type ds >>> -pc_type lu >>> -snes_mf_operator >>> -snes_monitor >>> #End of PETSc Option Table entries >>> >>> Here is the PETSc Options Table entries taken from the log_summary when >>> using snes_fd >>> #PETSc Option Table entries: >>> -ksp_converged_reason >>> -ksp_gmres_restart 300 >>> -ksp_monitor_true_residual >>> -log_summary >>> -m pipe_7eqn_2phase_step7_ps.i >>> -mat_fd_type ds >>> -pc_type lu >>> -snes_fd >>> -snes_monitor >>> #End of PETSc Option Table entries >>> >>> The full code output along with log_summary are attached. >>> >>> I've noticed that when using snes_fd, the non-linear convergence is >>> always good in each time step, around 3-4 non-linear steps with almost >>> quadratic convergence rate. In each non-linear step, it uses only 1 linear >>> step to converge as I used '-pc_type lu' and only 1 linear step is >>> expected. Here is a piece of output I pulled out from the code output (very >>> nice non-linear, linear performance but of course very expensive): >>> >>> DT: 1.234568e-05 >>> Solving time step 7, time=4.34568e-05... >>> Initial |residual|_2 = 3.547156e+00 >>> NL step 0, |residual|_2 = 3.547156e+00 >>> 0 SNES Function norm 3.547155872103e+00 >>> 0 KSP unpreconditioned resid norm 3.547155872103e+00 true resid norm >>> 3.547155872103e+00 ||r(i)||/||b|| 1.000000000000e+00 >>> 1 KSP unpreconditioned resid norm 3.128472759493e-15 true resid norm >>> 2.343197746412e-15 ||r(i)||/||b|| 6.605849392864e-16 >>> Linear solve converged due to CONVERGED_RTOL iterations 1 >>> NL step 1, |residual|_2 = 4.900005e-04 >>> 1 SNES Function norm 4.900004596844e-04 >>> 0 KSP unpreconditioned resid norm 4.900004596844e-04 true resid norm >>> 4.900004596844e-04 ||r(i)||/||b|| 1.000000000000e+00 >>> 1 KSP unpreconditioned resid norm 5.026229113909e-18 true resid norm >>> 1.400595243895e-17 ||r(i)||/||b|| 2.858354959089e-14 >>> Linear solve converged due to CONVERGED_RTOL iterations 1 >>> NL step 2, |residual|_2 = 1.171419e-06 >>> 2 SNES Function norm 1.171419468770e-06 >>> 0 KSP unpreconditioned resid norm 1.171419468770e-06 true resid norm >>> 1.171419468770e-06 ||r(i)||/||b|| 1.000000000000e+00 >>> 1 KSP unpreconditioned resid norm 5.679448617332e-21 true resid norm >>> 4.763172202015e-21 ||r(i)||/||b|| 4.066154207782e-15 >>> Linear solve converged due to CONVERGED_RTOL iterations 1 >>> NL step 3, |residual|_2 = 1.860041e-08 >>> 3 SNES Function norm 1.860041398803e-08 >>> Converged:1 >>> >>> Back to the snes_mf_operator option, it behaviors differently. It >>> generally takes more non-linear and linear steps. The 'KSP unpreconditioned >>> resid norm' drops nicely however the 'true resid norm' seems to be a bit >>> wired to me, drops then increases. >>> >>> DT: 1.524158e-05 >>> Solving time step 9, time=7.24158e-05... 
>>> Initial |residual|_2 = 3.601003e+00 >>> NL step 0, |residual|_2 = 3.601003e+00 >>> 0 SNES Function norm 3.601003423006e+00 >>> 0 KSP unpreconditioned resid norm 3.601003423006e+00 true resid norm >>> 3.601003423006e+00 ||r(i)||/||b|| 1.000000000000e+00 >>> 1 KSP unpreconditioned resid norm 5.931429724028e-02 true resid norm >>> 5.931429724028e-02 ||r(i)||/||b|| 1.647160257092e-02 >>> 2 KSP unpreconditioned resid norm 1.379343811770e-05 true resid norm >>> 5.203950797327e+00 ||r(i)||/||b|| 1.445139086534e+00 >>> 3 KSP unpreconditioned resid norm 4.432805478482e-08 true resid norm >>> 5.203984109211e+00 ||r(i)||/||b|| 1.445148337256e+00 >>> Linear solve converged due to CONVERGED_RTOL iterations 3 >>> NL step 1, |residual|_2 = 5.928815e-02 >>> 1 SNES Function norm 5.928815267199e-02 >>> 0 KSP unpreconditioned resid norm 5.928815267199e-02 true resid norm >>> 5.928815267199e-02 ||r(i)||/||b|| 1.000000000000e+00 >>> 1 KSP unpreconditioned resid norm 3.276993782949e-06 true resid norm >>> 3.276993782949e-06 ||r(i)||/||b|| 5.527232061148e-05 >>> 2 KSP unpreconditioned resid norm 2.082083269186e-08 true resid norm >>> 1.551766076370e-05 ||r(i)||/||b|| 2.617329106129e-04 >>> Linear solve converged due to CONVERGED_RTOL iterations 2 >>> NL step 2, |residual|_2 = 3.340603e-05 >>> 2 SNES Function norm 3.340603450829e-05 >>> 0 KSP unpreconditioned resid norm 3.340603450829e-05 true resid norm >>> 3.340603450829e-05 ||r(i)||/||b|| 1.000000000000e+00 >>> 1 KSP unpreconditioned resid norm 6.659426858789e-07 true resid norm >>> 6.659426858789e-07 ||r(i)||/||b|| 1.993480207037e-02 >>> 2 KSP unpreconditioned resid norm 6.115119674466e-07 true resid norm >>> 2.887921320245e-06 ||r(i)||/||b|| 8.644909109246e-02 >>> 3 KSP unpreconditioned resid norm 1.907116539439e-09 true resid norm >>> 1.000874623281e-06 ||r(i)||/||b|| 2.996089293486e-02 >>> 4 KSP unpreconditioned resid norm 3.383211446515e-12 true resid norm >>> 1.005586686459e-06 ||r(i)||/||b|| 3.010194718591e-02 >>> Linear solve converged due to CONVERGED_RTOL iterations 4 >>> NL step 3, |residual|_2 = 2.126180e-05 >>> 3 SNES Function norm 2.126179867301e-05 >>> 0 KSP unpreconditioned resid norm 2.126179867301e-05 true resid norm >>> 2.126179867301e-05 ||r(i)||/||b|| 1.000000000000e+00 >>> 1 KSP unpreconditioned resid norm 2.724944027954e-06 true resid norm >>> 2.724944027954e-06 ||r(i)||/||b|| 1.281615008147e-01 >>> 2 KSP unpreconditioned resid norm 7.933800605616e-10 true resid norm >>> 2.776823963042e-06 ||r(i)||/||b|| 1.306015547295e-01 >>> 3 KSP unpreconditioned resid norm 6.130449965920e-11 true resid norm >>> 2.777694372634e-06 ||r(i)||/||b|| 1.306424924510e-01 >>> 4 KSP unpreconditioned resid norm 2.090637685604e-13 true resid norm >>> 2.777696567814e-06 ||r(i)||/||b|| 1.306425956963e-01 >>> Linear solve converged due to CONVERGED_RTOL iterations 4 >>> NL step 4, |residual|_2 = 2.863517e-06 >>> 4 SNES Function norm 2.863517221239e-06 >>> 0 KSP unpreconditioned resid norm 2.863517221239e-06 true resid norm >>> 2.863517221239e-06 ||r(i)||/||b|| 1.000000000000e+00 >>> 1 KSP unpreconditioned resid norm 2.518692933040e-10 true resid norm >>> 2.518692933039e-10 ||r(i)||/||b|| 8.795801590987e-05 >>> 2 KSP unpreconditioned resid norm 2.165272180327e-12 true resid norm >>> 1.136392813468e-09 ||r(i)||/||b|| 3.968520967987e-04 >>> Linear solve converged due to CONVERGED_RTOL iterations 2 >>> NL step 5, |residual|_2 = 9.132390e-08 >>> 5 SNES Function norm 9.132390063388e-08 >>> Converged:1 >>> >>> >>> My questions: >>> 1, Is it true? 
when using snes_fd, the real Jacobian matrix, say J, is >>> explicitly constructed. when combined with -pc_type lu, the problem >>> J (du) = -R >>> is directly solved as (du) = J^{-1} * (-R) >>> where J^{-1} is calculated from this explicitly constructed matrix J, >>> using LU factorization. >>> >>> 2, what's the difference between snes_mf_operator and snes_fd? >>> What I understand (might be wrong) is snes_mf_operator does not >>> *explicitly construct* the matrix J, as it is a matrix free method. Is the >>> finite differencing methods behind the matrix free operator >>> in snes_mf_operator and the matrix construction in snes_fd are the same? >>> >>> 3, It seems that snes_mf_operator is preconditioned, while snes_fd is >>> not. Why it says ' KSP unpreconditioned resid norm ' but I am expecting >>> 'KSP preconditioned resid norm'. Also if it is 'unpreconditioned', >>> should it be identical to the 'true resid norm'? Is it my fault, for >>> example, giving a bad preconditioning matrix, makes the KSP not working >>> well? >>> >>> I'd appreciate your help...there are too many (maybe bad) questions >>> today. And please let me know if you may need more information. >>> >>> Best, >>> >>> Ling >>> >> >> >> >> -- >> What most experimenters take for granted before they begin their >> experiments is infinitely more interesting than any results to which their >> experiments lead. >> -- Norbert Wiener >> > > -- What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. -- Norbert Wiener -------------- next part -------------- An HTML attachment was scrubbed... URL: From ling.zou at inl.gov Thu Jan 31 13:25:39 2013 From: ling.zou at inl.gov (Zou (Non-US), Ling) Date: Thu, 31 Jan 2013 12:25:39 -0700 Subject: [petsc-users] compare snes_mf_operator and snes_fd In-Reply-To: References: Message-ID: On Thu, Jan 31, 2013 at 11:28 AM, Matthew Knepley wrote: > On Thu, Jan 31, 2013 at 1:16 PM, Zou (Non-US), Ling wrote: > >> Thank you Matt and Barry. I didn't get a chance to reply you yesterday. >> Here are the new output files with -snes_view on. >> > > It seems clear that the matrix you are providing to snes_mf_operator is > not a good > preconditioner for the actual matrix obtained with snes_fd. Maybe you have > a bug in > your evaluation. Maybe you could try -snes_check_jacobian to see. > > Matt > Thank you Matt. -snes_check_jacobian options seems not working (I am using PETSc 3.3p4). However, now I got a clue what I need to improve. By the way, as ksp needs the Pmat as the matrix for preconditioning procedure, is there any way let ksp use the finite difference matrix provided by snes? or this is exactly what snes_fd is doing. Also, could you explain a bit more about the wired 'true resid norm' drops and increases behavior? 
0 KSP unpreconditioned resid norm 7.527570931028e-02 true resid norm 7.527570931028e-02 ||r(i)||/||b|| 1.000000000000e+00 1 KSP unpreconditioned resid norm 7.217693018525e-06 true resid norm 7.217693018525e-06 ||r(i)||/||b|| 9.588342753138e-05 2 KSP unpreconditioned resid norm 1.052214184181e-07 true resid norm 1.410618177438e-02 ||r(i)||/||b|| 1.873935417365e-01 3 KSP unpreconditioned resid norm 1.023527631618e-07 true resid norm 1.410612986979e-02 ||r(i)||/||b|| 1.873928522101e-01 4 KSP unpreconditioned resid norm 1.930893544395e-08 true resid norm 1.408238386773e-02 ||r(i)||/||b|| 1.870773984964e-01 Ling > >> Ling >> >> >> On Wed, Jan 30, 2013 at 6:40 PM, Matthew Knepley wrote: >> >>> On Wed, Jan 30, 2013 at 6:30 PM, Zou (Non-US), Ling wrote: >>> >>>> Hi, All >>>> >>>> I am testing the performance of snes_mf_operator against snes_fd. >>>> >>> >>> You need to give -snes_view so we can see what solver is begin used. >>> >>> Matt >>> >>>> I know snes_fd is for test/debugging and extremely slow, which is ok >>>> for my testing purpose. I then compared the code performance using >>>> snes_mf_operator against snes_fd. Of course, snes_mf_operator uses way less >>>> computing time then snes_fd, however, the snes_mf_operator non-linear >>>> solver performance is worse than snes_fd, in terms of non linear iteration >>>> in each time steps. >>>> >>>> Here is the PETSc Options Table entries taken from the log_summary when >>>> using snes_mf_operator >>>> #PETSc Option Table entries: >>>> -ksp_converged_reason >>>> -ksp_gmres_restart 300 >>>> -ksp_monitor_true_residual >>>> -log_summary >>>> -m pipe_7eqn_2phase_step7_ps.i >>>> -mat_fd_type ds >>>> -pc_type lu >>>> -snes_mf_operator >>>> -snes_monitor >>>> #End of PETSc Option Table entries >>>> >>>> Here is the PETSc Options Table entries taken from the log_summary when >>>> using snes_fd >>>> #PETSc Option Table entries: >>>> -ksp_converged_reason >>>> -ksp_gmres_restart 300 >>>> -ksp_monitor_true_residual >>>> -log_summary >>>> -m pipe_7eqn_2phase_step7_ps.i >>>> -mat_fd_type ds >>>> -pc_type lu >>>> -snes_fd >>>> -snes_monitor >>>> #End of PETSc Option Table entries >>>> >>>> The full code output along with log_summary are attached. >>>> >>>> I've noticed that when using snes_fd, the non-linear convergence is >>>> always good in each time step, around 3-4 non-linear steps with almost >>>> quadratic convergence rate. In each non-linear step, it uses only 1 linear >>>> step to converge as I used '-pc_type lu' and only 1 linear step is >>>> expected. Here is a piece of output I pulled out from the code output (very >>>> nice non-linear, linear performance but of course very expensive): >>>> >>>> DT: 1.234568e-05 >>>> Solving time step 7, time=4.34568e-05... 
>>>> Initial |residual|_2 = 3.547156e+00 >>>> NL step 0, |residual|_2 = 3.547156e+00 >>>> 0 SNES Function norm 3.547155872103e+00 >>>> 0 KSP unpreconditioned resid norm 3.547155872103e+00 true resid >>>> norm 3.547155872103e+00 ||r(i)||/||b|| 1.000000000000e+00 >>>> 1 KSP unpreconditioned resid norm 3.128472759493e-15 true resid >>>> norm 2.343197746412e-15 ||r(i)||/||b|| 6.605849392864e-16 >>>> Linear solve converged due to CONVERGED_RTOL iterations 1 >>>> NL step 1, |residual|_2 = 4.900005e-04 >>>> 1 SNES Function norm 4.900004596844e-04 >>>> 0 KSP unpreconditioned resid norm 4.900004596844e-04 true resid >>>> norm 4.900004596844e-04 ||r(i)||/||b|| 1.000000000000e+00 >>>> 1 KSP unpreconditioned resid norm 5.026229113909e-18 true resid >>>> norm 1.400595243895e-17 ||r(i)||/||b|| 2.858354959089e-14 >>>> Linear solve converged due to CONVERGED_RTOL iterations 1 >>>> NL step 2, |residual|_2 = 1.171419e-06 >>>> 2 SNES Function norm 1.171419468770e-06 >>>> 0 KSP unpreconditioned resid norm 1.171419468770e-06 true resid >>>> norm 1.171419468770e-06 ||r(i)||/||b|| 1.000000000000e+00 >>>> 1 KSP unpreconditioned resid norm 5.679448617332e-21 true resid >>>> norm 4.763172202015e-21 ||r(i)||/||b|| 4.066154207782e-15 >>>> Linear solve converged due to CONVERGED_RTOL iterations 1 >>>> NL step 3, |residual|_2 = 1.860041e-08 >>>> 3 SNES Function norm 1.860041398803e-08 >>>> Converged:1 >>>> >>>> Back to the snes_mf_operator option, it behaviors differently. It >>>> generally takes more non-linear and linear steps. The 'KSP unpreconditioned >>>> resid norm' drops nicely however the 'true resid norm' seems to be a bit >>>> wired to me, drops then increases. >>>> >>>> DT: 1.524158e-05 >>>> Solving time step 9, time=7.24158e-05... >>>> Initial |residual|_2 = 3.601003e+00 >>>> NL step 0, |residual|_2 = 3.601003e+00 >>>> 0 SNES Function norm 3.601003423006e+00 >>>> 0 KSP unpreconditioned resid norm 3.601003423006e+00 true resid >>>> norm 3.601003423006e+00 ||r(i)||/||b|| 1.000000000000e+00 >>>> 1 KSP unpreconditioned resid norm 5.931429724028e-02 true resid >>>> norm 5.931429724028e-02 ||r(i)||/||b|| 1.647160257092e-02 >>>> 2 KSP unpreconditioned resid norm 1.379343811770e-05 true resid >>>> norm 5.203950797327e+00 ||r(i)||/||b|| 1.445139086534e+00 >>>> 3 KSP unpreconditioned resid norm 4.432805478482e-08 true resid >>>> norm 5.203984109211e+00 ||r(i)||/||b|| 1.445148337256e+00 >>>> Linear solve converged due to CONVERGED_RTOL iterations 3 >>>> NL step 1, |residual|_2 = 5.928815e-02 >>>> 1 SNES Function norm 5.928815267199e-02 >>>> 0 KSP unpreconditioned resid norm 5.928815267199e-02 true resid >>>> norm 5.928815267199e-02 ||r(i)||/||b|| 1.000000000000e+00 >>>> 1 KSP unpreconditioned resid norm 3.276993782949e-06 true resid >>>> norm 3.276993782949e-06 ||r(i)||/||b|| 5.527232061148e-05 >>>> 2 KSP unpreconditioned resid norm 2.082083269186e-08 true resid >>>> norm 1.551766076370e-05 ||r(i)||/||b|| 2.617329106129e-04 >>>> Linear solve converged due to CONVERGED_RTOL iterations 2 >>>> NL step 2, |residual|_2 = 3.340603e-05 >>>> 2 SNES Function norm 3.340603450829e-05 >>>> 0 KSP unpreconditioned resid norm 3.340603450829e-05 true resid >>>> norm 3.340603450829e-05 ||r(i)||/||b|| 1.000000000000e+00 >>>> 1 KSP unpreconditioned resid norm 6.659426858789e-07 true resid >>>> norm 6.659426858789e-07 ||r(i)||/||b|| 1.993480207037e-02 >>>> 2 KSP unpreconditioned resid norm 6.115119674466e-07 true resid >>>> norm 2.887921320245e-06 ||r(i)||/||b|| 8.644909109246e-02 >>>> 3 KSP unpreconditioned resid norm 1.907116539439e-09 
true resid >>>> norm 1.000874623281e-06 ||r(i)||/||b|| 2.996089293486e-02 >>>> 4 KSP unpreconditioned resid norm 3.383211446515e-12 true resid >>>> norm 1.005586686459e-06 ||r(i)||/||b|| 3.010194718591e-02 >>>> Linear solve converged due to CONVERGED_RTOL iterations 4 >>>> NL step 3, |residual|_2 = 2.126180e-05 >>>> 3 SNES Function norm 2.126179867301e-05 >>>> 0 KSP unpreconditioned resid norm 2.126179867301e-05 true resid >>>> norm 2.126179867301e-05 ||r(i)||/||b|| 1.000000000000e+00 >>>> 1 KSP unpreconditioned resid norm 2.724944027954e-06 true resid >>>> norm 2.724944027954e-06 ||r(i)||/||b|| 1.281615008147e-01 >>>> 2 KSP unpreconditioned resid norm 7.933800605616e-10 true resid >>>> norm 2.776823963042e-06 ||r(i)||/||b|| 1.306015547295e-01 >>>> 3 KSP unpreconditioned resid norm 6.130449965920e-11 true resid >>>> norm 2.777694372634e-06 ||r(i)||/||b|| 1.306424924510e-01 >>>> 4 KSP unpreconditioned resid norm 2.090637685604e-13 true resid >>>> norm 2.777696567814e-06 ||r(i)||/||b|| 1.306425956963e-01 >>>> Linear solve converged due to CONVERGED_RTOL iterations 4 >>>> NL step 4, |residual|_2 = 2.863517e-06 >>>> 4 SNES Function norm 2.863517221239e-06 >>>> 0 KSP unpreconditioned resid norm 2.863517221239e-06 true resid >>>> norm 2.863517221239e-06 ||r(i)||/||b|| 1.000000000000e+00 >>>> 1 KSP unpreconditioned resid norm 2.518692933040e-10 true resid >>>> norm 2.518692933039e-10 ||r(i)||/||b|| 8.795801590987e-05 >>>> 2 KSP unpreconditioned resid norm 2.165272180327e-12 true resid >>>> norm 1.136392813468e-09 ||r(i)||/||b|| 3.968520967987e-04 >>>> Linear solve converged due to CONVERGED_RTOL iterations 2 >>>> NL step 5, |residual|_2 = 9.132390e-08 >>>> 5 SNES Function norm 9.132390063388e-08 >>>> Converged:1 >>>> >>>> >>>> My questions: >>>> 1, Is it true? when using snes_fd, the real Jacobian matrix, say J, is >>>> explicitly constructed. when combined with -pc_type lu, the problem >>>> J (du) = -R >>>> is directly solved as (du) = J^{-1} * (-R) >>>> where J^{-1} is calculated from this explicitly constructed matrix J, >>>> using LU factorization. >>>> >>>> 2, what's the difference between snes_mf_operator and snes_fd? >>>> What I understand (might be wrong) is snes_mf_operator does not >>>> *explicitly construct* the matrix J, as it is a matrix free method. Is the >>>> finite differencing methods behind the matrix free operator >>>> in snes_mf_operator and the matrix construction in snes_fd are the same? >>>> >>>> 3, It seems that snes_mf_operator is preconditioned, while snes_fd is >>>> not. Why it says ' KSP unpreconditioned resid norm ' but I am expecting >>>> 'KSP preconditioned resid norm'. Also if it is 'unpreconditioned', >>>> should it be identical to the 'true resid norm'? Is it my fault, for >>>> example, giving a bad preconditioning matrix, makes the KSP not working >>>> well? >>>> >>>> I'd appreciate your help...there are too many (maybe bad) questions >>>> today. And please let me know if you may need more information. >>>> >>>> Best, >>>> >>>> Ling >>>> >>> >>> >>> >>> -- >>> What most experimenters take for granted before they begin their >>> experiments is infinitely more interesting than any results to which their >>> experiments lead. >>> -- Norbert Wiener >>> >> >> > > > -- > What most experimenters take for granted before they begin their > experiments is infinitely more interesting than any results to which their > experiments lead. > -- Norbert Wiener > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From knepley at gmail.com Thu Jan 31 13:29:15 2013 From: knepley at gmail.com (Matthew Knepley) Date: Thu, 31 Jan 2013 14:29:15 -0500 Subject: [petsc-users] compare snes_mf_operator and snes_fd In-Reply-To: References: Message-ID: On Thu, Jan 31, 2013 at 2:25 PM, Zou (Non-US), Ling wrote: > > > On Thu, Jan 31, 2013 at 11:28 AM, Matthew Knepley wrote: > >> On Thu, Jan 31, 2013 at 1:16 PM, Zou (Non-US), Ling wrote: >> >>> Thank you Matt and Barry. I didn't get a chance to reply you yesterday. >>> Here are the new output files with -snes_view on. >>> >> >> It seems clear that the matrix you are providing to snes_mf_operator is >> not a good >> preconditioner for the actual matrix obtained with snes_fd. Maybe you >> have a bug in >> your evaluation. Maybe you could try -snes_check_jacobian to see. >> >> Matt >> > > Thank you Matt. -snes_check_jacobian options seems not working (I am using > PETSc 3.3p4). However, now I got a clue what I need to improve. By the way, > as ksp needs the Pmat as the matrix for preconditioning procedure, is there > any way let ksp use the finite difference matrix provided by snes? or this > is exactly what snes_fd is doing. > That is what snes_fd is doing. > Also, could you explain a bit more about the wired 'true resid norm' drops > and increases behavior? > > 0 KSP unpreconditioned resid norm 7.527570931028e-02 true resid norm > 7.527570931028e-02 ||r(i)||/||b|| 1.000000000000e+00 > 1 KSP unpreconditioned resid norm 7.217693018525e-06 true resid norm > 7.217693018525e-06 ||r(i)||/||b|| 9.588342753138e-05 > 2 KSP unpreconditioned resid norm 1.052214184181e-07 true resid norm > 1.410618177438e-02 ||r(i)||/||b|| 1.873935417365e-01 > 3 KSP unpreconditioned resid norm 1.023527631618e-07 true resid norm > 1.410612986979e-02 ||r(i)||/||b|| 1.873928522101e-01 > 4 KSP unpreconditioned resid norm 1.930893544395e-08 true resid norm > 1.408238386773e-02 ||r(i)||/||b|| 1.870773984964e-01 > This looks like you are losing orthogonality in the GMRES basis after step 1. Maybe try *-ksp_gmres_modifiedgramschmidt* You have an error in your Jacobian. * *Matt > Ling > > > >> >>> Ling >>> >>> >>> On Wed, Jan 30, 2013 at 6:40 PM, Matthew Knepley wrote: >>> >>>> On Wed, Jan 30, 2013 at 6:30 PM, Zou (Non-US), Ling wrote: >>>> >>>>> Hi, All >>>>> >>>>> I am testing the performance of snes_mf_operator against snes_fd. >>>>> >>>> >>>> You need to give -snes_view so we can see what solver is begin used. >>>> >>>> Matt >>>> >>>>> I know snes_fd is for test/debugging and extremely slow, which is ok >>>>> for my testing purpose. I then compared the code performance using >>>>> snes_mf_operator against snes_fd. Of course, snes_mf_operator uses way less >>>>> computing time then snes_fd, however, the snes_mf_operator non-linear >>>>> solver performance is worse than snes_fd, in terms of non linear iteration >>>>> in each time steps. 
>>>>> >>>>> Here is the PETSc Options Table entries taken from the log_summary >>>>> when using snes_mf_operator >>>>> #PETSc Option Table entries: >>>>> -ksp_converged_reason >>>>> -ksp_gmres_restart 300 >>>>> -ksp_monitor_true_residual >>>>> -log_summary >>>>> -m pipe_7eqn_2phase_step7_ps.i >>>>> -mat_fd_type ds >>>>> -pc_type lu >>>>> -snes_mf_operator >>>>> -snes_monitor >>>>> #End of PETSc Option Table entries >>>>> >>>>> Here is the PETSc Options Table entries taken from the log_summary >>>>> when using snes_fd >>>>> #PETSc Option Table entries: >>>>> -ksp_converged_reason >>>>> -ksp_gmres_restart 300 >>>>> -ksp_monitor_true_residual >>>>> -log_summary >>>>> -m pipe_7eqn_2phase_step7_ps.i >>>>> -mat_fd_type ds >>>>> -pc_type lu >>>>> -snes_fd >>>>> -snes_monitor >>>>> #End of PETSc Option Table entries >>>>> >>>>> The full code output along with log_summary are attached. >>>>> >>>>> I've noticed that when using snes_fd, the non-linear convergence is >>>>> always good in each time step, around 3-4 non-linear steps with almost >>>>> quadratic convergence rate. In each non-linear step, it uses only 1 linear >>>>> step to converge as I used '-pc_type lu' and only 1 linear step is >>>>> expected. Here is a piece of output I pulled out from the code output (very >>>>> nice non-linear, linear performance but of course very expensive): >>>>> >>>>> DT: 1.234568e-05 >>>>> Solving time step 7, time=4.34568e-05... >>>>> Initial |residual|_2 = 3.547156e+00 >>>>> NL step 0, |residual|_2 = 3.547156e+00 >>>>> 0 SNES Function norm 3.547155872103e+00 >>>>> 0 KSP unpreconditioned resid norm 3.547155872103e+00 true resid >>>>> norm 3.547155872103e+00 ||r(i)||/||b|| 1.000000000000e+00 >>>>> 1 KSP unpreconditioned resid norm 3.128472759493e-15 true resid >>>>> norm 2.343197746412e-15 ||r(i)||/||b|| 6.605849392864e-16 >>>>> Linear solve converged due to CONVERGED_RTOL iterations 1 >>>>> NL step 1, |residual|_2 = 4.900005e-04 >>>>> 1 SNES Function norm 4.900004596844e-04 >>>>> 0 KSP unpreconditioned resid norm 4.900004596844e-04 true resid >>>>> norm 4.900004596844e-04 ||r(i)||/||b|| 1.000000000000e+00 >>>>> 1 KSP unpreconditioned resid norm 5.026229113909e-18 true resid >>>>> norm 1.400595243895e-17 ||r(i)||/||b|| 2.858354959089e-14 >>>>> Linear solve converged due to CONVERGED_RTOL iterations 1 >>>>> NL step 2, |residual|_2 = 1.171419e-06 >>>>> 2 SNES Function norm 1.171419468770e-06 >>>>> 0 KSP unpreconditioned resid norm 1.171419468770e-06 true resid >>>>> norm 1.171419468770e-06 ||r(i)||/||b|| 1.000000000000e+00 >>>>> 1 KSP unpreconditioned resid norm 5.679448617332e-21 true resid >>>>> norm 4.763172202015e-21 ||r(i)||/||b|| 4.066154207782e-15 >>>>> Linear solve converged due to CONVERGED_RTOL iterations 1 >>>>> NL step 3, |residual|_2 = 1.860041e-08 >>>>> 3 SNES Function norm 1.860041398803e-08 >>>>> Converged:1 >>>>> >>>>> Back to the snes_mf_operator option, it behaviors differently. It >>>>> generally takes more non-linear and linear steps. The 'KSP unpreconditioned >>>>> resid norm' drops nicely however the 'true resid norm' seems to be a bit >>>>> wired to me, drops then increases. >>>>> >>>>> DT: 1.524158e-05 >>>>> Solving time step 9, time=7.24158e-05... 
>>>>> Initial |residual|_2 = 3.601003e+00 >>>>> NL step 0, |residual|_2 = 3.601003e+00 >>>>> 0 SNES Function norm 3.601003423006e+00 >>>>> 0 KSP unpreconditioned resid norm 3.601003423006e+00 true resid >>>>> norm 3.601003423006e+00 ||r(i)||/||b|| 1.000000000000e+00 >>>>> 1 KSP unpreconditioned resid norm 5.931429724028e-02 true resid >>>>> norm 5.931429724028e-02 ||r(i)||/||b|| 1.647160257092e-02 >>>>> 2 KSP unpreconditioned resid norm 1.379343811770e-05 true resid >>>>> norm 5.203950797327e+00 ||r(i)||/||b|| 1.445139086534e+00 >>>>> 3 KSP unpreconditioned resid norm 4.432805478482e-08 true resid >>>>> norm 5.203984109211e+00 ||r(i)||/||b|| 1.445148337256e+00 >>>>> Linear solve converged due to CONVERGED_RTOL iterations 3 >>>>> NL step 1, |residual|_2 = 5.928815e-02 >>>>> 1 SNES Function norm 5.928815267199e-02 >>>>> 0 KSP unpreconditioned resid norm 5.928815267199e-02 true resid >>>>> norm 5.928815267199e-02 ||r(i)||/||b|| 1.000000000000e+00 >>>>> 1 KSP unpreconditioned resid norm 3.276993782949e-06 true resid >>>>> norm 3.276993782949e-06 ||r(i)||/||b|| 5.527232061148e-05 >>>>> 2 KSP unpreconditioned resid norm 2.082083269186e-08 true resid >>>>> norm 1.551766076370e-05 ||r(i)||/||b|| 2.617329106129e-04 >>>>> Linear solve converged due to CONVERGED_RTOL iterations 2 >>>>> NL step 2, |residual|_2 = 3.340603e-05 >>>>> 2 SNES Function norm 3.340603450829e-05 >>>>> 0 KSP unpreconditioned resid norm 3.340603450829e-05 true resid >>>>> norm 3.340603450829e-05 ||r(i)||/||b|| 1.000000000000e+00 >>>>> 1 KSP unpreconditioned resid norm 6.659426858789e-07 true resid >>>>> norm 6.659426858789e-07 ||r(i)||/||b|| 1.993480207037e-02 >>>>> 2 KSP unpreconditioned resid norm 6.115119674466e-07 true resid >>>>> norm 2.887921320245e-06 ||r(i)||/||b|| 8.644909109246e-02 >>>>> 3 KSP unpreconditioned resid norm 1.907116539439e-09 true resid >>>>> norm 1.000874623281e-06 ||r(i)||/||b|| 2.996089293486e-02 >>>>> 4 KSP unpreconditioned resid norm 3.383211446515e-12 true resid >>>>> norm 1.005586686459e-06 ||r(i)||/||b|| 3.010194718591e-02 >>>>> Linear solve converged due to CONVERGED_RTOL iterations 4 >>>>> NL step 3, |residual|_2 = 2.126180e-05 >>>>> 3 SNES Function norm 2.126179867301e-05 >>>>> 0 KSP unpreconditioned resid norm 2.126179867301e-05 true resid >>>>> norm 2.126179867301e-05 ||r(i)||/||b|| 1.000000000000e+00 >>>>> 1 KSP unpreconditioned resid norm 2.724944027954e-06 true resid >>>>> norm 2.724944027954e-06 ||r(i)||/||b|| 1.281615008147e-01 >>>>> 2 KSP unpreconditioned resid norm 7.933800605616e-10 true resid >>>>> norm 2.776823963042e-06 ||r(i)||/||b|| 1.306015547295e-01 >>>>> 3 KSP unpreconditioned resid norm 6.130449965920e-11 true resid >>>>> norm 2.777694372634e-06 ||r(i)||/||b|| 1.306424924510e-01 >>>>> 4 KSP unpreconditioned resid norm 2.090637685604e-13 true resid >>>>> norm 2.777696567814e-06 ||r(i)||/||b|| 1.306425956963e-01 >>>>> Linear solve converged due to CONVERGED_RTOL iterations 4 >>>>> NL step 4, |residual|_2 = 2.863517e-06 >>>>> 4 SNES Function norm 2.863517221239e-06 >>>>> 0 KSP unpreconditioned resid norm 2.863517221239e-06 true resid >>>>> norm 2.863517221239e-06 ||r(i)||/||b|| 1.000000000000e+00 >>>>> 1 KSP unpreconditioned resid norm 2.518692933040e-10 true resid >>>>> norm 2.518692933039e-10 ||r(i)||/||b|| 8.795801590987e-05 >>>>> 2 KSP unpreconditioned resid norm 2.165272180327e-12 true resid >>>>> norm 1.136392813468e-09 ||r(i)||/||b|| 3.968520967987e-04 >>>>> Linear solve converged due to CONVERGED_RTOL iterations 2 >>>>> NL step 5, |residual|_2 = 9.132390e-08 >>>>> 5 SNES 
Function norm 9.132390063388e-08 >>>>> Converged:1 >>>>> >>>>> >>>>> My questions: >>>>> 1, Is it true? when using snes_fd, the real Jacobian matrix, say J, is >>>>> explicitly constructed. when combined with -pc_type lu, the problem >>>>> J (du) = -R >>>>> is directly solved as (du) = J^{-1} * (-R) >>>>> where J^{-1} is calculated from this explicitly constructed matrix J, >>>>> using LU factorization. >>>>> >>>>> 2, what's the difference between snes_mf_operator and snes_fd? >>>>> What I understand (might be wrong) is snes_mf_operator does not >>>>> *explicitly construct* the matrix J, as it is a matrix free method. Is the >>>>> finite differencing methods behind the matrix free operator >>>>> in snes_mf_operator and the matrix construction in snes_fd are the same? >>>>> >>>>> 3, It seems that snes_mf_operator is preconditioned, while snes_fd is >>>>> not. Why it says ' KSP unpreconditioned resid norm ' but I am expecting >>>>> 'KSP preconditioned resid norm'. Also if it is 'unpreconditioned', >>>>> should it be identical to the 'true resid norm'? Is it my fault, for >>>>> example, giving a bad preconditioning matrix, makes the KSP not working >>>>> well? >>>>> >>>>> I'd appreciate your help...there are too many (maybe bad) questions >>>>> today. And please let me know if you may need more information. >>>>> >>>>> Best, >>>>> >>>>> Ling >>>>> >>>> >>>> >>>> >>>> -- >>>> What most experimenters take for granted before they begin their >>>> experiments is infinitely more interesting than any results to which their >>>> experiments lead. >>>> -- Norbert Wiener >>>> >>> >>> >> >> >> -- >> What most experimenters take for granted before they begin their >> experiments is infinitely more interesting than any results to which their >> experiments lead. >> -- Norbert Wiener >> > > -- What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. -- Norbert Wiener -------------- next part -------------- An HTML attachment was scrubbed... URL: From jedbrown at mcs.anl.gov Thu Jan 31 13:46:57 2013 From: jedbrown at mcs.anl.gov (Jed Brown) Date: Thu, 31 Jan 2013 13:46:57 -0600 Subject: [petsc-users] compare snes_mf_operator and snes_fd In-Reply-To: References: Message-ID: On Thu, Jan 31, 2013 at 1:29 PM, Matthew Knepley wrote: > On Thu, Jan 31, 2013 at 2:25 PM, Zou (Non-US), Ling wrote: > >> >> >> On Thu, Jan 31, 2013 at 11:28 AM, Matthew Knepley wrote: >> >>> On Thu, Jan 31, 2013 at 1:16 PM, Zou (Non-US), Ling wrote: >>> >>>> Thank you Matt and Barry. I didn't get a chance to reply you yesterday. >>>> Here are the new output files with -snes_view on. >>>> >>> >>> It seems clear that the matrix you are providing to snes_mf_operator is >>> not a good >>> preconditioner for the actual matrix obtained with snes_fd. Maybe you >>> have a bug in >>> your evaluation. Maybe you could try -snes_check_jacobian to see. >>> >>> Matt >>> >> >> Thank you Matt. -snes_check_jacobian options seems not working (I am >> using PETSc 3.3p4). >> > That option is in petsc-dev, use -snes_type test or -snes_compare_explicit in petsc-3.3. If you are using MOOSE, then chances are you have not assembled an exact Jacobian thus these will show a difference even if everything is "working fine". Checking convergence with -snes_mf_operator -pc_type lu as you have done is a good test for whether an inexact Jacobian is still a good approximation. You might have to assembly an off-diagonal block, for example. 
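As a rough sketch of why the two options behave so differently (this is the idea, not the exact implementation): -snes_fd builds the whole Jacobian column by column by differencing the residual,

  J(u) e_j ~= [ F(u + h e_j) - F(u) ] / h   for each column j,

and that assembled matrix is then used both as the operator and as the preconditioning matrix, so -pc_type lu gives an essentially exact Newton step. With -snes_mf_operator the Jacobian is never assembled; each Krylov iteration only needs its action on a vector v, applied matrix-free as

  J(u) v ~= [ F(u + h v) - F(u) ] / h,

while the matrix you assemble yourself is used only to build the preconditioner. That is why an inexact hand-coded Jacobian shows up as poor KSP convergence under -snes_mf_operator but is invisible under -snes_fd.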
> However, now I got a clue what I need to improve. By the way, as ksp needs >> the Pmat as the matrix for preconditioning procedure, is there any way let >> ksp use the finite difference matrix provided by snes? or this is exactly >> what snes_fd is doing. >> > > That is what snes_fd is doing. > > >> Also, could you explain a bit more about the wired 'true resid norm' >> drops and increases behavior? >> >> 0 KSP unpreconditioned resid norm 7.527570931028e-02 true resid norm >> 7.527570931028e-02 ||r(i)||/||b|| 1.000000000000e+00 >> 1 KSP unpreconditioned resid norm 7.217693018525e-06 true resid norm >> 7.217693018525e-06 ||r(i)||/||b|| 9.588342753138e-05 >> 2 KSP unpreconditioned resid norm 1.052214184181e-07 true resid norm >> 1.410618177438e-02 ||r(i)||/||b|| 1.873935417365e-01 >> 3 KSP unpreconditioned resid norm 1.023527631618e-07 true resid norm >> 1.410612986979e-02 ||r(i)||/||b|| 1.873928522101e-01 >> 4 KSP unpreconditioned resid norm 1.930893544395e-08 true resid norm >> 1.408238386773e-02 ||r(i)||/||b|| 1.870773984964e-01 >> > > This looks like you are losing orthogonality in the GMRES basis after step > 1. Maybe try *-ksp_gmres_modifiedgramschmidt* > You have an error in your Jacobian. > I would also be concerned about finite differencing error. Does the convergence behavior change at all if you use -mat_mffd_type ds? > * *Matt > > >> Ling >> >> >> >>> >>>> Ling >>>> >>>> >>>> On Wed, Jan 30, 2013 at 6:40 PM, Matthew Knepley wrote: >>>> >>>>> On Wed, Jan 30, 2013 at 6:30 PM, Zou (Non-US), Ling wrote: >>>>> >>>>>> Hi, All >>>>>> >>>>>> I am testing the performance of snes_mf_operator against snes_fd. >>>>>> >>>>> >>>>> You need to give -snes_view so we can see what solver is begin used. >>>>> >>>>> Matt >>>>> >>>>>> I know snes_fd is for test/debugging and extremely slow, which is ok >>>>>> for my testing purpose. I then compared the code performance using >>>>>> snes_mf_operator against snes_fd. Of course, snes_mf_operator uses way less >>>>>> computing time then snes_fd, however, the snes_mf_operator non-linear >>>>>> solver performance is worse than snes_fd, in terms of non linear iteration >>>>>> in each time steps. >>>>>> >>>>>> Here is the PETSc Options Table entries taken from the log_summary >>>>>> when using snes_mf_operator >>>>>> #PETSc Option Table entries: >>>>>> -ksp_converged_reason >>>>>> -ksp_gmres_restart 300 >>>>>> -ksp_monitor_true_residual >>>>>> -log_summary >>>>>> -m pipe_7eqn_2phase_step7_ps.i >>>>>> -mat_fd_type ds >>>>>> -pc_type lu >>>>>> -snes_mf_operator >>>>>> -snes_monitor >>>>>> #End of PETSc Option Table entries >>>>>> >>>>>> Here is the PETSc Options Table entries taken from the log_summary >>>>>> when using snes_fd >>>>>> #PETSc Option Table entries: >>>>>> -ksp_converged_reason >>>>>> -ksp_gmres_restart 300 >>>>>> -ksp_monitor_true_residual >>>>>> -log_summary >>>>>> -m pipe_7eqn_2phase_step7_ps.i >>>>>> -mat_fd_type ds >>>>>> -pc_type lu >>>>>> -snes_fd >>>>>> -snes_monitor >>>>>> #End of PETSc Option Table entries >>>>>> >>>>>> The full code output along with log_summary are attached. >>>>>> >>>>>> I've noticed that when using snes_fd, the non-linear convergence is >>>>>> always good in each time step, around 3-4 non-linear steps with almost >>>>>> quadratic convergence rate. In each non-linear step, it uses only 1 linear >>>>>> step to converge as I used '-pc_type lu' and only 1 linear step is >>>>>> expected. 
Here is a piece of output I pulled out from the code output (very >>>>>> nice non-linear, linear performance but of course very expensive): >>>>>> >>>>>> DT: 1.234568e-05 >>>>>> Solving time step 7, time=4.34568e-05... >>>>>> Initial |residual|_2 = 3.547156e+00 >>>>>> NL step 0, |residual|_2 = 3.547156e+00 >>>>>> 0 SNES Function norm 3.547155872103e+00 >>>>>> 0 KSP unpreconditioned resid norm 3.547155872103e+00 true resid >>>>>> norm 3.547155872103e+00 ||r(i)||/||b|| 1.000000000000e+00 >>>>>> 1 KSP unpreconditioned resid norm 3.128472759493e-15 true resid >>>>>> norm 2.343197746412e-15 ||r(i)||/||b|| 6.605849392864e-16 >>>>>> Linear solve converged due to CONVERGED_RTOL iterations 1 >>>>>> NL step 1, |residual|_2 = 4.900005e-04 >>>>>> 1 SNES Function norm 4.900004596844e-04 >>>>>> 0 KSP unpreconditioned resid norm 4.900004596844e-04 true resid >>>>>> norm 4.900004596844e-04 ||r(i)||/||b|| 1.000000000000e+00 >>>>>> 1 KSP unpreconditioned resid norm 5.026229113909e-18 true resid >>>>>> norm 1.400595243895e-17 ||r(i)||/||b|| 2.858354959089e-14 >>>>>> Linear solve converged due to CONVERGED_RTOL iterations 1 >>>>>> NL step 2, |residual|_2 = 1.171419e-06 >>>>>> 2 SNES Function norm 1.171419468770e-06 >>>>>> 0 KSP unpreconditioned resid norm 1.171419468770e-06 true resid >>>>>> norm 1.171419468770e-06 ||r(i)||/||b|| 1.000000000000e+00 >>>>>> 1 KSP unpreconditioned resid norm 5.679448617332e-21 true resid >>>>>> norm 4.763172202015e-21 ||r(i)||/||b|| 4.066154207782e-15 >>>>>> Linear solve converged due to CONVERGED_RTOL iterations 1 >>>>>> NL step 3, |residual|_2 = 1.860041e-08 >>>>>> 3 SNES Function norm 1.860041398803e-08 >>>>>> Converged:1 >>>>>> >>>>>> Back to the snes_mf_operator option, it behaviors differently. It >>>>>> generally takes more non-linear and linear steps. The 'KSP unpreconditioned >>>>>> resid norm' drops nicely however the 'true resid norm' seems to be a bit >>>>>> wired to me, drops then increases. >>>>>> >>>>>> DT: 1.524158e-05 >>>>>> Solving time step 9, time=7.24158e-05... 
>>>>>> Initial |residual|_2 = 3.601003e+00 >>>>>> NL step 0, |residual|_2 = 3.601003e+00 >>>>>> 0 SNES Function norm 3.601003423006e+00 >>>>>> 0 KSP unpreconditioned resid norm 3.601003423006e+00 true resid >>>>>> norm 3.601003423006e+00 ||r(i)||/||b|| 1.000000000000e+00 >>>>>> 1 KSP unpreconditioned resid norm 5.931429724028e-02 true resid >>>>>> norm 5.931429724028e-02 ||r(i)||/||b|| 1.647160257092e-02 >>>>>> 2 KSP unpreconditioned resid norm 1.379343811770e-05 true resid >>>>>> norm 5.203950797327e+00 ||r(i)||/||b|| 1.445139086534e+00 >>>>>> 3 KSP unpreconditioned resid norm 4.432805478482e-08 true resid >>>>>> norm 5.203984109211e+00 ||r(i)||/||b|| 1.445148337256e+00 >>>>>> Linear solve converged due to CONVERGED_RTOL iterations 3 >>>>>> NL step 1, |residual|_2 = 5.928815e-02 >>>>>> 1 SNES Function norm 5.928815267199e-02 >>>>>> 0 KSP unpreconditioned resid norm 5.928815267199e-02 true resid >>>>>> norm 5.928815267199e-02 ||r(i)||/||b|| 1.000000000000e+00 >>>>>> 1 KSP unpreconditioned resid norm 3.276993782949e-06 true resid >>>>>> norm 3.276993782949e-06 ||r(i)||/||b|| 5.527232061148e-05 >>>>>> 2 KSP unpreconditioned resid norm 2.082083269186e-08 true resid >>>>>> norm 1.551766076370e-05 ||r(i)||/||b|| 2.617329106129e-04 >>>>>> Linear solve converged due to CONVERGED_RTOL iterations 2 >>>>>> NL step 2, |residual|_2 = 3.340603e-05 >>>>>> 2 SNES Function norm 3.340603450829e-05 >>>>>> 0 KSP unpreconditioned resid norm 3.340603450829e-05 true resid >>>>>> norm 3.340603450829e-05 ||r(i)||/||b|| 1.000000000000e+00 >>>>>> 1 KSP unpreconditioned resid norm 6.659426858789e-07 true resid >>>>>> norm 6.659426858789e-07 ||r(i)||/||b|| 1.993480207037e-02 >>>>>> 2 KSP unpreconditioned resid norm 6.115119674466e-07 true resid >>>>>> norm 2.887921320245e-06 ||r(i)||/||b|| 8.644909109246e-02 >>>>>> 3 KSP unpreconditioned resid norm 1.907116539439e-09 true resid >>>>>> norm 1.000874623281e-06 ||r(i)||/||b|| 2.996089293486e-02 >>>>>> 4 KSP unpreconditioned resid norm 3.383211446515e-12 true resid >>>>>> norm 1.005586686459e-06 ||r(i)||/||b|| 3.010194718591e-02 >>>>>> Linear solve converged due to CONVERGED_RTOL iterations 4 >>>>>> NL step 3, |residual|_2 = 2.126180e-05 >>>>>> 3 SNES Function norm 2.126179867301e-05 >>>>>> 0 KSP unpreconditioned resid norm 2.126179867301e-05 true resid >>>>>> norm 2.126179867301e-05 ||r(i)||/||b|| 1.000000000000e+00 >>>>>> 1 KSP unpreconditioned resid norm 2.724944027954e-06 true resid >>>>>> norm 2.724944027954e-06 ||r(i)||/||b|| 1.281615008147e-01 >>>>>> 2 KSP unpreconditioned resid norm 7.933800605616e-10 true resid >>>>>> norm 2.776823963042e-06 ||r(i)||/||b|| 1.306015547295e-01 >>>>>> 3 KSP unpreconditioned resid norm 6.130449965920e-11 true resid >>>>>> norm 2.777694372634e-06 ||r(i)||/||b|| 1.306424924510e-01 >>>>>> 4 KSP unpreconditioned resid norm 2.090637685604e-13 true resid >>>>>> norm 2.777696567814e-06 ||r(i)||/||b|| 1.306425956963e-01 >>>>>> Linear solve converged due to CONVERGED_RTOL iterations 4 >>>>>> NL step 4, |residual|_2 = 2.863517e-06 >>>>>> 4 SNES Function norm 2.863517221239e-06 >>>>>> 0 KSP unpreconditioned resid norm 2.863517221239e-06 true resid >>>>>> norm 2.863517221239e-06 ||r(i)||/||b|| 1.000000000000e+00 >>>>>> 1 KSP unpreconditioned resid norm 2.518692933040e-10 true resid >>>>>> norm 2.518692933039e-10 ||r(i)||/||b|| 8.795801590987e-05 >>>>>> 2 KSP unpreconditioned resid norm 2.165272180327e-12 true resid >>>>>> norm 1.136392813468e-09 ||r(i)||/||b|| 3.968520967987e-04 >>>>>> Linear solve converged due to CONVERGED_RTOL iterations 2 
>>>>>> NL step 5, |residual|_2 = 9.132390e-08 >>>>>> 5 SNES Function norm 9.132390063388e-08 >>>>>> Converged:1 >>>>>> >>>>>> >>>>>> My questions: >>>>>> 1, Is it true? when using snes_fd, the real Jacobian matrix, say J, >>>>>> is explicitly constructed. when combined with -pc_type lu, the problem >>>>>> J (du) = -R >>>>>> is directly solved as (du) = J^{-1} * (-R) >>>>>> where J^{-1} is calculated from this explicitly constructed matrix J, >>>>>> using LU factorization. >>>>>> >>>>>> 2, what's the difference between snes_mf_operator and snes_fd? >>>>>> What I understand (might be wrong) is snes_mf_operator does not >>>>>> *explicitly construct* the matrix J, as it is a matrix free method. Is the >>>>>> finite differencing methods behind the matrix free operator >>>>>> in snes_mf_operator and the matrix construction in snes_fd are the same? >>>>>> >>>>>> 3, It seems that snes_mf_operator is preconditioned, while snes_fd is >>>>>> not. Why it says ' KSP unpreconditioned resid norm ' but I am expecting >>>>>> 'KSP preconditioned resid norm'. Also if it is 'unpreconditioned', >>>>>> should it be identical to the 'true resid norm'? Is it my fault, for >>>>>> example, giving a bad preconditioning matrix, makes the KSP not working >>>>>> well? >>>>>> >>>>>> I'd appreciate your help...there are too many (maybe bad) questions >>>>>> today. And please let me know if you may need more information. >>>>>> >>>>>> Best, >>>>>> >>>>>> Ling >>>>>> >>>>> >>>>> >>>>> >>>>> -- >>>>> What most experimenters take for granted before they begin their >>>>> experiments is infinitely more interesting than any results to which their >>>>> experiments lead. >>>>> -- Norbert Wiener >>>>> >>>> >>>> >>> >>> >>> -- >>> What most experimenters take for granted before they begin their >>> experiments is infinitely more interesting than any results to which their >>> experiments lead. >>> -- Norbert Wiener >>> >> >> > > > -- > What most experimenters take for granted before they begin their > experiments is infinitely more interesting than any results to which their > experiments lead. > -- Norbert Wiener > -------------- next part -------------- An HTML attachment was scrubbed... URL: From ling.zou at inl.gov Thu Jan 31 14:36:52 2013 From: ling.zou at inl.gov (Zou (Non-US), Ling) Date: Thu, 31 Jan 2013 13:36:52 -0700 Subject: [petsc-users] compare snes_mf_operator and snes_fd In-Reply-To: References: Message-ID: Thank you Jed. Yes I am using MOOSE. I've also tried the snes_type test option before and it worked like a charm to help me fix the Jacobian problem (as you may remember, I asked quite a lot of snes_type test questions here before). '-mat_mffd_type ds' is quite helpful as far as I can tell now. Here is a table I made to show both '-mat_mffd_type ds' and '-ksp_gmres_modifiedgramschmidt'; both seem helpful.

'-mat_mffd_type ds' | '-ksp_gmres_modifiedgramschmidt' | # of function calls
        No          |               No                 |       36709
        No          |               Yes                |       36014
        Yes         |               No                 |       22470
        Yes         |               Yes                |       17862

Ultimately, I guess I need to improve my Jacobian anyway. Best, Ling On Thu, Jan 31, 2013 at 12:46 PM, Jed Brown wrote: > On Thu, Jan 31, 2013 at 1:29 PM, Matthew Knepley wrote: > >> On Thu, Jan 31, 2013 at 2:25 PM, Zou (Non-US), Ling wrote: >> >>> >>> >>> On Thu, Jan 31, 2013 at 11:28 AM, Matthew Knepley wrote: >>> >>>> On Thu, Jan 31, 2013 at 1:16 PM, Zou (Non-US), Ling wrote: >>>> >>>>> Thank you Matt and Barry. I didn't get a chance to reply you yesterday. >>>>> Here are the new output files with -snes_view on.
>>>>> >>>> >>>> It seems clear that the matrix you are providing to snes_mf_operator is >>>> not a good >>>> preconditioner for the actual matrix obtained with snes_fd. Maybe you >>>> have a bug in >>>> your evaluation. Maybe you could try -snes_check_jacobian to see. >>>> >>>> Matt >>>> >>> >>> Thank you Matt. -snes_check_jacobian options seems not working (I am >>> using PETSc 3.3p4). >>> >> > That option is in petsc-dev, use -snes_type test or -snes_compare_explicit > in petsc-3.3. If you are using MOOSE, then chances are you have not > assembled an exact Jacobian thus these will show a difference even if > everything is "working fine". Checking convergence with -snes_mf_operator > -pc_type lu as you have done is a good test for whether an inexact Jacobian > is still a good approximation. You might have to assembly an off-diagonal > block, for example. > > >> However, now I got a clue what I need to improve. By the way, as ksp >>> needs the Pmat as the matrix for preconditioning procedure, is there any >>> way let ksp use the finite difference matrix provided by snes? or this is >>> exactly what snes_fd is doing. >>> >> >> That is what snes_fd is doing. >> >> >>> Also, could you explain a bit more about the wired 'true resid norm' >>> drops and increases behavior? >>> >>> 0 KSP unpreconditioned resid norm 7.527570931028e-02 true resid norm >>> 7.527570931028e-02 ||r(i)||/||b|| 1.000000000000e+00 >>> 1 KSP unpreconditioned resid norm 7.217693018525e-06 true resid norm >>> 7.217693018525e-06 ||r(i)||/||b|| 9.588342753138e-05 >>> 2 KSP unpreconditioned resid norm 1.052214184181e-07 true resid norm >>> 1.410618177438e-02 ||r(i)||/||b|| 1.873935417365e-01 >>> 3 KSP unpreconditioned resid norm 1.023527631618e-07 true resid norm >>> 1.410612986979e-02 ||r(i)||/||b|| 1.873928522101e-01 >>> 4 KSP unpreconditioned resid norm 1.930893544395e-08 true resid norm >>> 1.408238386773e-02 ||r(i)||/||b|| 1.870773984964e-01 >>> >> >> This looks like you are losing orthogonality in the GMRES basis after >> step 1. Maybe try *-ksp_gmres_modifiedgramschmidt* >> You have an error in your Jacobian. >> > > I would also be concerned about finite differencing error. Does the > convergence behavior change at all if you use -mat_mffd_type ds? > > >> * *Matt >> >> >>> Ling >>> >>> >>> >>>> >>>>> Ling >>>>> >>>>> >>>>> On Wed, Jan 30, 2013 at 6:40 PM, Matthew Knepley wrote: >>>>> >>>>>> On Wed, Jan 30, 2013 at 6:30 PM, Zou (Non-US), Ling >>>>> > wrote: >>>>>> >>>>>>> Hi, All >>>>>>> >>>>>>> I am testing the performance of snes_mf_operator against snes_fd. >>>>>>> >>>>>> >>>>>> You need to give -snes_view so we can see what solver is begin used. >>>>>> >>>>>> Matt >>>>>> >>>>>>> I know snes_fd is for test/debugging and extremely slow, which is ok >>>>>>> for my testing purpose. I then compared the code performance using >>>>>>> snes_mf_operator against snes_fd. Of course, snes_mf_operator uses way less >>>>>>> computing time then snes_fd, however, the snes_mf_operator non-linear >>>>>>> solver performance is worse than snes_fd, in terms of non linear iteration >>>>>>> in each time steps. 
>>>>>>> >>>>>>> Here is the PETSc Options Table entries taken from the log_summary >>>>>>> when using snes_mf_operator >>>>>>> #PETSc Option Table entries: >>>>>>> -ksp_converged_reason >>>>>>> -ksp_gmres_restart 300 >>>>>>> -ksp_monitor_true_residual >>>>>>> -log_summary >>>>>>> -m pipe_7eqn_2phase_step7_ps.i >>>>>>> -mat_fd_type ds >>>>>>> -pc_type lu >>>>>>> -snes_mf_operator >>>>>>> -snes_monitor >>>>>>> #End of PETSc Option Table entries >>>>>>> >>>>>>> Here is the PETSc Options Table entries taken from the log_summary >>>>>>> when using snes_fd >>>>>>> #PETSc Option Table entries: >>>>>>> -ksp_converged_reason >>>>>>> -ksp_gmres_restart 300 >>>>>>> -ksp_monitor_true_residual >>>>>>> -log_summary >>>>>>> -m pipe_7eqn_2phase_step7_ps.i >>>>>>> -mat_fd_type ds >>>>>>> -pc_type lu >>>>>>> -snes_fd >>>>>>> -snes_monitor >>>>>>> #End of PETSc Option Table entries >>>>>>> >>>>>>> The full code output along with log_summary are attached. >>>>>>> >>>>>>> I've noticed that when using snes_fd, the non-linear convergence is >>>>>>> always good in each time step, around 3-4 non-linear steps with almost >>>>>>> quadratic convergence rate. In each non-linear step, it uses only 1 linear >>>>>>> step to converge as I used '-pc_type lu' and only 1 linear step is >>>>>>> expected. Here is a piece of output I pulled out from the code output (very >>>>>>> nice non-linear, linear performance but of course very expensive): >>>>>>> >>>>>>> DT: 1.234568e-05 >>>>>>> Solving time step 7, time=4.34568e-05... >>>>>>> Initial |residual|_2 = 3.547156e+00 >>>>>>> NL step 0, |residual|_2 = 3.547156e+00 >>>>>>> 0 SNES Function norm 3.547155872103e+00 >>>>>>> 0 KSP unpreconditioned resid norm 3.547155872103e+00 true resid >>>>>>> norm 3.547155872103e+00 ||r(i)||/||b|| 1.000000000000e+00 >>>>>>> 1 KSP unpreconditioned resid norm 3.128472759493e-15 true resid >>>>>>> norm 2.343197746412e-15 ||r(i)||/||b|| 6.605849392864e-16 >>>>>>> Linear solve converged due to CONVERGED_RTOL iterations 1 >>>>>>> NL step 1, |residual|_2 = 4.900005e-04 >>>>>>> 1 SNES Function norm 4.900004596844e-04 >>>>>>> 0 KSP unpreconditioned resid norm 4.900004596844e-04 true resid >>>>>>> norm 4.900004596844e-04 ||r(i)||/||b|| 1.000000000000e+00 >>>>>>> 1 KSP unpreconditioned resid norm 5.026229113909e-18 true resid >>>>>>> norm 1.400595243895e-17 ||r(i)||/||b|| 2.858354959089e-14 >>>>>>> Linear solve converged due to CONVERGED_RTOL iterations 1 >>>>>>> NL step 2, |residual|_2 = 1.171419e-06 >>>>>>> 2 SNES Function norm 1.171419468770e-06 >>>>>>> 0 KSP unpreconditioned resid norm 1.171419468770e-06 true resid >>>>>>> norm 1.171419468770e-06 ||r(i)||/||b|| 1.000000000000e+00 >>>>>>> 1 KSP unpreconditioned resid norm 5.679448617332e-21 true resid >>>>>>> norm 4.763172202015e-21 ||r(i)||/||b|| 4.066154207782e-15 >>>>>>> Linear solve converged due to CONVERGED_RTOL iterations 1 >>>>>>> NL step 3, |residual|_2 = 1.860041e-08 >>>>>>> 3 SNES Function norm 1.860041398803e-08 >>>>>>> Converged:1 >>>>>>> >>>>>>> Back to the snes_mf_operator option, it behaviors differently. It >>>>>>> generally takes more non-linear and linear steps. The 'KSP unpreconditioned >>>>>>> resid norm' drops nicely however the 'true resid norm' seems to be a bit >>>>>>> wired to me, drops then increases. >>>>>>> >>>>>>> DT: 1.524158e-05 >>>>>>> Solving time step 9, time=7.24158e-05... 
>>>>>>> Initial |residual|_2 = 3.601003e+00 >>>>>>> NL step 0, |residual|_2 = 3.601003e+00 >>>>>>> 0 SNES Function norm 3.601003423006e+00 >>>>>>> 0 KSP unpreconditioned resid norm 3.601003423006e+00 true resid >>>>>>> norm 3.601003423006e+00 ||r(i)||/||b|| 1.000000000000e+00 >>>>>>> 1 KSP unpreconditioned resid norm 5.931429724028e-02 true resid >>>>>>> norm 5.931429724028e-02 ||r(i)||/||b|| 1.647160257092e-02 >>>>>>> 2 KSP unpreconditioned resid norm 1.379343811770e-05 true resid >>>>>>> norm 5.203950797327e+00 ||r(i)||/||b|| 1.445139086534e+00 >>>>>>> 3 KSP unpreconditioned resid norm 4.432805478482e-08 true resid >>>>>>> norm 5.203984109211e+00 ||r(i)||/||b|| 1.445148337256e+00 >>>>>>> Linear solve converged due to CONVERGED_RTOL iterations 3 >>>>>>> NL step 1, |residual|_2 = 5.928815e-02 >>>>>>> 1 SNES Function norm 5.928815267199e-02 >>>>>>> 0 KSP unpreconditioned resid norm 5.928815267199e-02 true resid >>>>>>> norm 5.928815267199e-02 ||r(i)||/||b|| 1.000000000000e+00 >>>>>>> 1 KSP unpreconditioned resid norm 3.276993782949e-06 true resid >>>>>>> norm 3.276993782949e-06 ||r(i)||/||b|| 5.527232061148e-05 >>>>>>> 2 KSP unpreconditioned resid norm 2.082083269186e-08 true resid >>>>>>> norm 1.551766076370e-05 ||r(i)||/||b|| 2.617329106129e-04 >>>>>>> Linear solve converged due to CONVERGED_RTOL iterations 2 >>>>>>> NL step 2, |residual|_2 = 3.340603e-05 >>>>>>> 2 SNES Function norm 3.340603450829e-05 >>>>>>> 0 KSP unpreconditioned resid norm 3.340603450829e-05 true resid >>>>>>> norm 3.340603450829e-05 ||r(i)||/||b|| 1.000000000000e+00 >>>>>>> 1 KSP unpreconditioned resid norm 6.659426858789e-07 true resid >>>>>>> norm 6.659426858789e-07 ||r(i)||/||b|| 1.993480207037e-02 >>>>>>> 2 KSP unpreconditioned resid norm 6.115119674466e-07 true resid >>>>>>> norm 2.887921320245e-06 ||r(i)||/||b|| 8.644909109246e-02 >>>>>>> 3 KSP unpreconditioned resid norm 1.907116539439e-09 true resid >>>>>>> norm 1.000874623281e-06 ||r(i)||/||b|| 2.996089293486e-02 >>>>>>> 4 KSP unpreconditioned resid norm 3.383211446515e-12 true resid >>>>>>> norm 1.005586686459e-06 ||r(i)||/||b|| 3.010194718591e-02 >>>>>>> Linear solve converged due to CONVERGED_RTOL iterations 4 >>>>>>> NL step 3, |residual|_2 = 2.126180e-05 >>>>>>> 3 SNES Function norm 2.126179867301e-05 >>>>>>> 0 KSP unpreconditioned resid norm 2.126179867301e-05 true resid >>>>>>> norm 2.126179867301e-05 ||r(i)||/||b|| 1.000000000000e+00 >>>>>>> 1 KSP unpreconditioned resid norm 2.724944027954e-06 true resid >>>>>>> norm 2.724944027954e-06 ||r(i)||/||b|| 1.281615008147e-01 >>>>>>> 2 KSP unpreconditioned resid norm 7.933800605616e-10 true resid >>>>>>> norm 2.776823963042e-06 ||r(i)||/||b|| 1.306015547295e-01 >>>>>>> 3 KSP unpreconditioned resid norm 6.130449965920e-11 true resid >>>>>>> norm 2.777694372634e-06 ||r(i)||/||b|| 1.306424924510e-01 >>>>>>> 4 KSP unpreconditioned resid norm 2.090637685604e-13 true resid >>>>>>> norm 2.777696567814e-06 ||r(i)||/||b|| 1.306425956963e-01 >>>>>>> Linear solve converged due to CONVERGED_RTOL iterations 4 >>>>>>> NL step 4, |residual|_2 = 2.863517e-06 >>>>>>> 4 SNES Function norm 2.863517221239e-06 >>>>>>> 0 KSP unpreconditioned resid norm 2.863517221239e-06 true resid >>>>>>> norm 2.863517221239e-06 ||r(i)||/||b|| 1.000000000000e+00 >>>>>>> 1 KSP unpreconditioned resid norm 2.518692933040e-10 true resid >>>>>>> norm 2.518692933039e-10 ||r(i)||/||b|| 8.795801590987e-05 >>>>>>> 2 KSP unpreconditioned resid norm 2.165272180327e-12 true resid >>>>>>> norm 1.136392813468e-09 ||r(i)||/||b|| 3.968520967987e-04 >>>>>>> 
Linear solve converged due to CONVERGED_RTOL iterations 2 >>>>>>> NL step 5, |residual|_2 = 9.132390e-08 >>>>>>> 5 SNES Function norm 9.132390063388e-08 >>>>>>> Converged:1 >>>>>>> >>>>>>> >>>>>>> My questions: >>>>>>> 1, Is it true? when using snes_fd, the real Jacobian matrix, say J, >>>>>>> is explicitly constructed. when combined with -pc_type lu, the problem >>>>>>> J (du) = -R >>>>>>> is directly solved as (du) = J^{-1} * (-R) >>>>>>> where J^{-1} is calculated from this explicitly constructed matrix >>>>>>> J, using LU factorization. >>>>>>> >>>>>>> 2, what's the difference between snes_mf_operator and snes_fd? >>>>>>> What I understand (might be wrong) is snes_mf_operator does not >>>>>>> *explicitly construct* the matrix J, as it is a matrix free method. Is the >>>>>>> finite differencing methods behind the matrix free operator >>>>>>> in snes_mf_operator and the matrix construction in snes_fd are the same? >>>>>>> >>>>>>> 3, It seems that snes_mf_operator is preconditioned, while snes_fd >>>>>>> is not. Why it says ' KSP unpreconditioned resid norm ' but I am expecting >>>>>>> 'KSP preconditioned resid norm'. Also if it is 'unpreconditioned', >>>>>>> should it be identical to the 'true resid norm'? Is it my fault, for >>>>>>> example, giving a bad preconditioning matrix, makes the KSP not working >>>>>>> well? >>>>>>> >>>>>>> I'd appreciate your help...there are too many (maybe bad) questions >>>>>>> today. And please let me know if you may need more information. >>>>>>> >>>>>>> Best, >>>>>>> >>>>>>> Ling >>>>>>> >>>>>> >>>>>> >>>>>> >>>>>> -- >>>>>> What most experimenters take for granted before they begin their >>>>>> experiments is infinitely more interesting than any results to which their >>>>>> experiments lead. >>>>>> -- Norbert Wiener >>>>>> >>>>> >>>>> >>>> >>>> >>>> -- >>>> What most experimenters take for granted before they begin their >>>> experiments is infinitely more interesting than any results to which their >>>> experiments lead. >>>> -- Norbert Wiener >>>> >>> >>> >> >> >> -- >> What most experimenters take for granted before they begin their >> experiments is infinitely more interesting than any results to which their >> experiments lead. >> -- Norbert Wiener >> > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From irving at naml.us Thu Jan 31 18:25:28 2013 From: irving at naml.us (Geoffrey Irving) Date: Thu, 31 Jan 2013 16:25:28 -0800 Subject: [petsc-users] automatic determination of which libraries petsc wants Message-ID: We have an scons build system linking against PETSc, and it would be nice to have an automatic way of determining the list of libraries that a statically linked, installed version of PETSc wants (e.g., the MacPorts installed version). What's a good way to do such a thing from *outside* the PETSc build system? Thanks, Geoffrey From balay at mcs.anl.gov Thu Jan 31 18:34:16 2013 From: balay at mcs.anl.gov (Satish Balay) Date: Thu, 31 Jan 2013 18:34:16 -0600 (CST) Subject: [petsc-users] automatic determination of which libraries petsc wants In-Reply-To: References: Message-ID: On Thu, 31 Jan 2013, Geoffrey Irving wrote: > We have an scons build system linking against PETSc, and it would be > nice to have an automatic way of determining the list of libraries > that a statically linked, installed version of PETSc wants (e.g., the > MacPorts installed version). What's a good way to do such a thing > from *outside* the PETSc build system? 
One way is for some script [like configure or equivalent] to create a simple petsc makefile and use any of the following targets to get the required info >>>>>>>>>>>> asterix:/home/balay/tmp>cat makefile PETSC_DIR=/home/balay/spetsc PETSC_ARCH=asterix64 include ${PETSC_DIR}/conf/variables include ${PETSC_DIR}/conf/rules asterix:/home/balay/tmp>make getincludedirs -I/home/balay/spetsc/include -I/home/balay/spetsc/asterix64/include -I/home/balay/soft/linux64/mpich2-1.1/include -I/home/balay/soft/mpich2-1.5/include asterix:/home/balay/tmp>make getlinklibs -Wl,-rpath,/home/balay/spetsc/asterix64/lib -Wl,-rpath,/home/balay/spetsc/asterix64/lib -L/home/balay/spetsc/asterix64/lib -lpetsc -llapack -lblas -lX11 -lpthread -lm -Wl,-rpath,/home/balay/soft/mpich2-1.5/lib -L/home/balay/soft/mpich2-1.5/lib -Wl,-rpath,/usr/lib/gcc/x86_64-redhat-linux/4.7.2 -L/usr/lib/gcc/x86_64-redhat-linux/4.7.2 -lmpichf90 -lgfortran -lm -lgfortran -lm -lquadmath -lm -ldl -lmpich -lopa -lmpl -lrt -lgcc_s -ldl asterix:/home/balay/tmp> >>>>>>>>> [more similar targets are listed in conf/rules] The other option is to use pkgconfig file created by configure. It should be in PETSC_ARCH/lib/pkgconfig [in petsc-dev] Satish From knepley at gmail.com Thu Jan 31 18:34:41 2013 From: knepley at gmail.com (Matthew Knepley) Date: Thu, 31 Jan 2013 19:34:41 -0500 Subject: [petsc-users] automatic determination of which libraries petsc wants In-Reply-To: References: Message-ID: On Thu, Jan 31, 2013 at 7:25 PM, Geoffrey Irving wrote: > We have an scons build system linking against PETSc, and it would be > nice to have an automatic way of determining the list of libraries > that a statically linked, installed version of PETSc wants (e.g., the > MacPorts installed version). What's a good way to do such a thing > from *outside* the PETSc build system? > It depends on how much work you want to do. For at least two years I think, our default had been -lpetsc. I would just do this. Matt > Thanks, > Geoffrey > -- What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. -- Norbert Wiener -------------- next part -------------- An HTML attachment was scrubbed... URL: From knepley at gmail.com Thu Jan 31 18:38:27 2013 From: knepley at gmail.com (Matthew Knepley) Date: Thu, 31 Jan 2013 19:38:27 -0500 Subject: [petsc-users] automatic determination of which libraries petsc wants In-Reply-To: References: Message-ID: On Thu, Jan 31, 2013 at 7:34 PM, Matthew Knepley wrote: > On Thu, Jan 31, 2013 at 7:25 PM, Geoffrey Irving wrote: > >> We have an scons build system linking against PETSc, and it would be >> nice to have an automatic way of determining the list of libraries >> that a statically linked, installed version of PETSc wants (e.g., the >> MacPorts installed version). What's a good way to do such a thing >> from *outside* the PETSc build system? >> > > It depends on how much work you want to do. For at least two years I think, > our default had been -lpetsc. I would just do this. > Satish is right, use 'make getlinklibs'. However, if you don't have to waste time on a life or family, you may want to consider getting the info straight from the configure. You can load up the Python module from from $PETSC_ARCH/conf/RDict.db and pull out all these things. There is an example in configVars.py. 
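For the scons side, a sketch of the pkgconfig route Satish mentioned (the path is illustrative -- point PKG_CONFIG_PATH at whatever prefix your install actually puts the .pc file under, assuming it ships one at all; petsc-dev does, older installs may not):

  export PKG_CONFIG_PATH=<petsc-prefix>/lib/pkgconfig:$PKG_CONFIG_PATH
  pkg-config --cflags PETSc
  pkg-config --libs PETSc

and scons can consume both directly with

  env.ParseConfig('pkg-config --cflags --libs PETSc')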
Matt > Matt > > >> Thanks, >> Geoffrey >> > > > > -- > What most experimenters take for granted before they begin their > experiments is infinitely more interesting than any results to which their > experiments lead. > -- Norbert Wiener > -- What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. -- Norbert Wiener -------------- next part -------------- An HTML attachment was scrubbed... URL: From jedbrown at mcs.anl.gov Thu Jan 31 18:39:21 2013 From: jedbrown at mcs.anl.gov (Jed Brown) Date: Thu, 31 Jan 2013 18:39:21 -0600 Subject: [petsc-users] automatic determination of which libraries petsc wants In-Reply-To: References: Message-ID: On Thu, Jan 31, 2013 at 6:34 PM, Matthew Knepley wrote: > It depends on how much work you want to do. For at least two years I think, > our default had been -lpetsc. I would just do this. > It's as though you didn't read the part where Geoff said "statically linked". ;-) -------------- next part -------------- An HTML attachment was scrubbed... URL: From irving at naml.us Thu Jan 31 18:39:55 2013 From: irving at naml.us (Geoffrey Irving) Date: Thu, 31 Jan 2013 16:39:55 -0800 Subject: [petsc-users] automatic determination of which libraries petsc wants In-Reply-To: References: Message-ID: On Thu, Jan 31, 2013 at 4:34 PM, Matthew Knepley wrote: > On Thu, Jan 31, 2013 at 7:25 PM, Geoffrey Irving wrote: >> >> We have an scons build system linking against PETSc, and it would be >> nice to have an automatic way of determining the list of libraries >> that a statically linked, installed version of PETSc wants (e.g., the >> MacPorts installed version). What's a good way to do such a thing >> from *outside* the PETSc build system? > > > It depends on how much work you want to do. For at least two years I think, > our default had been -lpetsc. I would just do this. The MacPorts build of petsc has no option for shared, so just doing -lpetsc spews hundreds of linker errors about yaml, ML, mumps, SCOTCH, etc. Geoffrey From irving at naml.us Thu Jan 31 18:46:53 2013 From: irving at naml.us (Geoffrey Irving) Date: Thu, 31 Jan 2013 16:46:53 -0800 Subject: [petsc-users] automatic determination of which libraries petsc wants In-Reply-To: References: Message-ID: On Thu, Jan 31, 2013 at 4:38 PM, Matthew Knepley wrote: > On Thu, Jan 31, 2013 at 7:34 PM, Matthew Knepley wrote: >> >> On Thu, Jan 31, 2013 at 7:25 PM, Geoffrey Irving wrote: >>> >>> We have an scons build system linking against PETSc, and it would be >>> nice to have an automatic way of determining the list of libraries >>> that a statically linked, installed version of PETSc wants (e.g., the >>> MacPorts installed version). What's a good way to do such a thing >>> from *outside* the PETSc build system? >> >> >> It depends on how much work you want to do. For at least two years I >> think, >> our default had been -lpetsc. I would just do this. > > > Satish is right, use 'make getlinklibs'. > > However, if you don't have to waste time on a life or family, you may want > to consider > getting the info straight from the configure. You can load up the Python > module from > from $PETSC_ARCH/conf/RDict.db and pull out all these things. There is an > example > in configVars.py. 
configVars.py errors out at "import script": > ./bin/configVars.py Traceback (most recent call last): File "./bin/configVars.py", line 7, in import script ImportError: No module named script Indeed, I don't see any file named script.py anywhere underneath /optlocal/lib/petsc, nor any directory named exactly "config" as configVars seems to want. Bad macports, bad? Geoffrey From knepley at gmail.com Thu Jan 31 18:49:52 2013 From: knepley at gmail.com (Matthew Knepley) Date: Thu, 31 Jan 2013 19:49:52 -0500 Subject: [petsc-users] automatic determination of which libraries petsc wants In-Reply-To: References: Message-ID: On Thu, Jan 31, 2013 at 7:46 PM, Geoffrey Irving wrote: > On Thu, Jan 31, 2013 at 4:38 PM, Matthew Knepley > wrote: > > On Thu, Jan 31, 2013 at 7:34 PM, Matthew Knepley > wrote: > >> > >> On Thu, Jan 31, 2013 at 7:25 PM, Geoffrey Irving > wrote: > >>> > >>> We have an scons build system linking against PETSc, and it would be > >>> nice to have an automatic way of determining the list of libraries > >>> that a statically linked, installed version of PETSc wants (e.g., the > >>> MacPorts installed version). What's a good way to do such a thing > >>> from *outside* the PETSc build system? > >> > >> > >> It depends on how much work you want to do. For at least two years I > >> think, > >> our default had been -lpetsc. I would just do this. > > > > > > Satish is right, use 'make getlinklibs'. > > > > However, if you don't have to waste time on a life or family, you may > want > > to consider > > getting the info straight from the configure. You can load up the Python > > module from > > from $PETSC_ARCH/conf/RDict.db and pull out all these things. There is an > > example > > in configVars.py. > > configVars.py errors out at "import script": > > > ./bin/configVars.py > Traceback (most recent call last): > File "./bin/configVars.py", line 7, in > import script > ImportError: No module named script > > Indeed, I don't see any file named script.py anywhere underneath > /optlocal/lib/petsc, nor any directory named exactly "config" as > configVars seems to want. > > Bad macports, bad? Requires config/BuildSystem in PYTHONPATH. This is what configure.py does first. Matt > > Geoffrey > -- What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. -- Norbert Wiener -------------- next part -------------- An HTML attachment was scrubbed... URL: From balay at mcs.anl.gov Thu Jan 31 18:53:54 2013 From: balay at mcs.anl.gov (Satish Balay) Date: Thu, 31 Jan 2013 18:53:54 -0600 (CST) Subject: [petsc-users] automatic determination of which libraries petsc wants In-Reply-To: References: Message-ID: On Thu, 31 Jan 2013, Matthew Knepley wrote: > On Thu, Jan 31, 2013 at 7:46 PM, Geoffrey Irving wrote: > > > On Thu, Jan 31, 2013 at 4:38 PM, Matthew Knepley > > wrote: > > > On Thu, Jan 31, 2013 at 7:34 PM, Matthew Knepley > > wrote: > > >> > > >> On Thu, Jan 31, 2013 at 7:25 PM, Geoffrey Irving > > wrote: > > >>> > > >>> We have an scons build system linking against PETSc, and it would be > > >>> nice to have an automatic way of determining the list of libraries > > >>> that a statically linked, installed version of PETSc wants (e.g., the > > >>> MacPorts installed version). What's a good way to do such a thing > > >>> from *outside* the PETSc build system? > > >> > > >> > > >> It depends on how much work you want to do. For at least two years I > > >> think, > > >> our default had been -lpetsc. 
I would just do this. > > > > > > > > > Satish is right, use 'make getlinklibs'. > > > > > > However, if you don't have to waste time on a life or family, you may > > want > > > to consider > > > getting the info straight from the configure. You can load up the Python > > > module from > > > from $PETSC_ARCH/conf/RDict.db and pull out all these things. There is an > > > example > > > in configVars.py. > > > > configVars.py errors out at "import script": > > > > > ./bin/configVars.py > > Traceback (most recent call last): > > File "./bin/configVars.py", line 7, in > > import script > > ImportError: No module named script > > > > Indeed, I don't see any file named script.py anywhere underneath > > /optlocal/lib/petsc, nor any directory named exactly "config" as > > configVars seems to want. > > > > Bad macports, bad? > > > Requires config/BuildSystem in PYTHONPATH. This is what configure.py does > first. This requires PETSc sources - which would be missing in "prefix install" of PETSc. [which macports presumably uses] Satish From bsmith at mcs.anl.gov Thu Jan 31 19:05:21 2013 From: bsmith at mcs.anl.gov (Barry Smith) Date: Thu, 31 Jan 2013 19:05:21 -0600 Subject: [petsc-users] automatic determination of which libraries petsc wants In-Reply-To: References: Message-ID: <9DC49EE9-EFFB-4C07-BA16-1A9C9F12762B@mcs.anl.gov> Why all the nonstandard suggestions, why not just do what Satish suggested: (if those work for you let us know and we'll improve the pkgconfig file Barrys-MacBook-Pro:power_grid barrysmith$ more ~/Src/petsc-dev/arch-gnu/lib/pkgconfig/PETSc.pc prefix=/Users/barrysmith/Src/petsc-dev exec_prefix=${prefix} includedir=${prefix}/include libdir=/Users/barrysmith/Src/petsc-dev/arch-gnu/lib ccompiler=/Users/barrysmith/Src/petsc-dev/arch-gnu/bin/mpicc fcompiler=/Users/barrysmith/Src/petsc-dev/arch-gnu/bin/mpif90 blaslapacklibs=-llapack -lblas Name: PETSc Description: Library to solve ODEs and algebraic equations Version: 3.3.0 Cflags: -I/Users/barrysmith/Src/petsc-dev/include -I/Users/barrysmith/Src/petsc-dev/arch-gnu/include -I/usr/local/include/libAfterImage/ -I/opt/X11/include -I/Users/barrysmith/Src/ams-dev/include Libs: -L/Users/barrysmith/Src/petsc-dev/arch-gnu/lib -lpetsc -lHYPRE -L/usr/llvm-gcc-4.2/lib/gcc/i686-apple-darwin11/4.2.1/x86_64 -L/Applications/Xcode.app/Contents/Developer/usr/llvm-gcc-4.2/lib/gcc/i686-apple-darwin11/4.2.1/x86_64 -L/usr/llvm-gcc-4.2/lib/gcc/i686-apple-darwin11/4.2.1 -L/usr/llvm-gcc-4.2/lib/gcc -L/Applications/Xcode.app/Contents/Developer/usr/llvm-gcc-4.2/lib/gcc/i686-apple-darwin11/4.2.1 -L/usr/llvm-gcc-4.2/lib -L/Applications/Xcode.app/Contents/Developer/usr/llvm-gcc-4.2/lib -lmpichcxx -lstdc++ -lml -lmpichcxx -lstdc++ -lcmumps -ldmumps -lsmumps -lzmumps -lmumps_common -lpord -lscalapack -lsuperlu_4.3 -lsuperlu_dist_3.2 -llapack -lblas -L/usr/local/include/libAfterImage/../../lib -lAfterImage -ltriangle -L/opt/X11/lib -lX11 -L/Users/barrysmith/Src/ams-dev/lib -lamspub -lparmetis -lmetis -lfftw3_mpi -lfftw3 -lptesmumps -lptscotch -lptscotcherr -lpthread -lyaml -lmpichf90 -lgfortran -L/usr/local/lib/gcc/x86_64-apple-darwin10.7.0/4.6.0 -L/usr/local/lib -lgfortran -lgcc_ext.10.5 -lquadmath -lm -lm -lmpichcxx -lstdc++ -lm -lz -lz -ldl -lpmpich -lmpich -lopa -lmpl -lSystem -ldl On Jan 31, 2013, at 6:34 PM, Satish Balay wrote: > On Thu, 31 Jan 2013, Geoffrey Irving wrote: > >> We have an scons build system linking against PETSc, and it would be >> nice to have an automatic way of determining the list of libraries >> that a statically linked, installed version 
of PETSc wants (e.g., the >> MacPorts installed version). What's a good way to do such a thing >> from *outside* the PETSc build system? > > > One way is for some script [like configure or equivalent] to create a > simple petsc makefile and use any of the following targets to get > the required info > >>>>>>>>>>>>> > asterix:/home/balay/tmp>cat makefile > PETSC_DIR=/home/balay/spetsc > PETSC_ARCH=asterix64 > include ${PETSC_DIR}/conf/variables > include ${PETSC_DIR}/conf/rules > > asterix:/home/balay/tmp>make getincludedirs > -I/home/balay/spetsc/include -I/home/balay/spetsc/asterix64/include -I/home/balay/soft/linux64/mpich2-1.1/include -I/home/balay/soft/mpich2-1.5/include > asterix:/home/balay/tmp>make getlinklibs > -Wl,-rpath,/home/balay/spetsc/asterix64/lib -Wl,-rpath,/home/balay/spetsc/asterix64/lib -L/home/balay/spetsc/asterix64/lib -lpetsc -llapack -lblas -lX11 -lpthread -lm -Wl,-rpath,/home/balay/soft/mpich2-1.5/lib -L/home/balay/soft/mpich2-1.5/lib -Wl,-rpath,/usr/lib/gcc/x86_64-redhat-linux/4.7.2 -L/usr/lib/gcc/x86_64-redhat-linux/4.7.2 -lmpichf90 -lgfortran -lm -lgfortran -lm -lquadmath -lm -ldl -lmpich -lopa -lmpl -lrt -lgcc_s -ldl > asterix:/home/balay/tmp> >>>>>>>>>> > > [more similar targets are listed in conf/rules] > > The other option is to use pkgconfig file created by configure. > It should be in PETSC_ARCH/lib/pkgconfig [in petsc-dev] > > Satish > From irving at naml.us Thu Jan 31 19:10:42 2013 From: irving at naml.us (Geoffrey Irving) Date: Thu, 31 Jan 2013 17:10:42 -0800 Subject: [petsc-users] automatic determination of which libraries petsc wants In-Reply-To: <9DC49EE9-EFFB-4C07-BA16-1A9C9F12762B@mcs.anl.gov> References: <9DC49EE9-EFFB-4C07-BA16-1A9C9F12762B@mcs.anl.gov> Message-ID: That would be nice, but unfortunately MacPorts doesn't seem to have installed a .pc file anywhere: > find /opt/local -iname petsc.pc # produces nothing I'm not very familiar with pkgconfig. Would it be somewhere else? pkg-config --list-all didn't have it either. 
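For reference, this is how the .pc route is meant to work once a build actually ships the file (petsc-dev places it under $PETSC_DIR/$PETSC_ARCH/lib/pkgconfig, per Satish and Barry): pkg-config only reports packages whose directory is on PKG_CONFIG_PATH, so --list-all will not show PETSc until that path is added. A minimal sketch, with the paths taken as assumptions:

export PKG_CONFIG_PATH=$PETSC_DIR/$PETSC_ARCH/lib/pkgconfig:$PKG_CONFIG_PATH
pkg-config --exists PETSc && echo "found PETSc.pc"
pkg-config --cflags PETSc            # include flags
pkg-config --libs PETSc              # link line (add --static for a static libpetsc)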
Geoffrey On Thu, Jan 31, 2013 at 5:05 PM, Barry Smith wrote: > > Why all the nonstandard suggestions, why not just do what Satish suggested: (if those work for you let us know and we'll improve the pkgconfig file > > Barrys-MacBook-Pro:power_grid barrysmith$ more ~/Src/petsc-dev/arch-gnu/lib/pkgconfig/PETSc.pc > prefix=/Users/barrysmith/Src/petsc-dev > exec_prefix=${prefix} > includedir=${prefix}/include > libdir=/Users/barrysmith/Src/petsc-dev/arch-gnu/lib > ccompiler=/Users/barrysmith/Src/petsc-dev/arch-gnu/bin/mpicc > fcompiler=/Users/barrysmith/Src/petsc-dev/arch-gnu/bin/mpif90 > blaslapacklibs=-llapack -lblas > > Name: PETSc > Description: Library to solve ODEs and algebraic equations > Version: 3.3.0 > Cflags: -I/Users/barrysmith/Src/petsc-dev/include -I/Users/barrysmith/Src/petsc-dev/arch-gnu/include -I/usr/local/include/libAfterImage/ -I/opt/X11/include -I/Users/barrysmith/Src/ams-dev/include > Libs: -L/Users/barrysmith/Src/petsc-dev/arch-gnu/lib -lpetsc -lHYPRE -L/usr/llvm-gcc-4.2/lib/gcc/i686-apple-darwin11/4.2.1/x86_64 -L/Applications/Xcode.app/Contents/Developer/usr/llvm-gcc-4.2/lib/gcc/i686-apple-darwin11/4.2.1/x86_64 -L/usr/llvm-gcc-4.2/lib/gcc/i686-apple-darwin11/4.2.1 -L/usr/llvm-gcc-4.2/lib/gcc -L/Applications/Xcode.app/Contents/Developer/usr/llvm-gcc-4.2/lib/gcc/i686-apple-darwin11/4.2.1 -L/usr/llvm-gcc-4.2/lib -L/Applications/Xcode.app/Contents/Developer/usr/llvm-gcc-4.2/lib -lmpichcxx -lstdc++ -lml -lmpichcxx -lstdc++ -lcmumps -ldmumps -lsmumps -lzmumps -lmumps_common -lpord -lscalapack -lsuperlu_4.3 -lsuperlu_dist_3.2 -llapack -lblas -L/usr/local/include/libAfterImage/../../lib -lAfterImage -ltriangle -L/opt/X11/lib -lX11 -L/Users/barrysmith/Src/ams-dev/lib -lamspub -lparmetis -lmetis -lfftw3_mpi -lfftw3 -lptesmumps -lptscotch -lptscotcherr -lpthread -lyaml -lmpichf90 -lgfortran -L/usr/local/lib/gcc/x86_64-apple-darwin10.7.0/4.6.0 -L/usr/local/lib -lgfortran -lgcc_ext.10.5 -lquadmath -lm -lm -lmpichcxx -lstdc++ -lm -lz -lz -ldl -lpmpich -lmpich -lopa -lmpl -lSystem -ldl > > > On Jan 31, 2013, at 6:34 PM, Satish Balay wrote: > >> On Thu, 31 Jan 2013, Geoffrey Irving wrote: >> >>> We have an scons build system linking against PETSc, and it would be >>> nice to have an automatic way of determining the list of libraries >>> that a statically linked, installed version of PETSc wants (e.g., the >>> MacPorts installed version). What's a good way to do such a thing >>> from *outside* the PETSc build system? 
>> >> >> One way is for some script [like configure or equivalent] to create a >> simple petsc makefile and use any of the following targets to get >> the required info >> >>>>>>>>>>>>>> >> asterix:/home/balay/tmp>cat makefile >> PETSC_DIR=/home/balay/spetsc >> PETSC_ARCH=asterix64 >> include ${PETSC_DIR}/conf/variables >> include ${PETSC_DIR}/conf/rules >> >> asterix:/home/balay/tmp>make getincludedirs >> -I/home/balay/spetsc/include -I/home/balay/spetsc/asterix64/include -I/home/balay/soft/linux64/mpich2-1.1/include -I/home/balay/soft/mpich2-1.5/include >> asterix:/home/balay/tmp>make getlinklibs >> -Wl,-rpath,/home/balay/spetsc/asterix64/lib -Wl,-rpath,/home/balay/spetsc/asterix64/lib -L/home/balay/spetsc/asterix64/lib -lpetsc -llapack -lblas -lX11 -lpthread -lm -Wl,-rpath,/home/balay/soft/mpich2-1.5/lib -L/home/balay/soft/mpich2-1.5/lib -Wl,-rpath,/usr/lib/gcc/x86_64-redhat-linux/4.7.2 -L/usr/lib/gcc/x86_64-redhat-linux/4.7.2 -lmpichf90 -lgfortran -lm -lgfortran -lm -lquadmath -lm -ldl -lmpich -lopa -lmpl -lrt -lgcc_s -ldl >> asterix:/home/balay/tmp> >>>>>>>>>>> >> >> [more similar targets are listed in conf/rules] >> >> The other option is to use pkgconfig file created by configure. >> It should be in PETSC_ARCH/lib/pkgconfig [in petsc-dev] >> >> Satish >> > From knepley at gmail.com Thu Jan 31 19:11:07 2013 From: knepley at gmail.com (Matthew Knepley) Date: Thu, 31 Jan 2013 20:11:07 -0500 Subject: [petsc-users] automatic determination of which libraries petsc wants In-Reply-To: <9DC49EE9-EFFB-4C07-BA16-1A9C9F12762B@mcs.anl.gov> References: <9DC49EE9-EFFB-4C07-BA16-1A9C9F12762B@mcs.anl.gov> Message-ID: On Thu, Jan 31, 2013 at 8:05 PM, Barry Smith wrote: > > Why all the nonstandard suggestions, why not just do what Satish > suggested: (if those work for you let us know and we'll improve the > pkgconfig file > I agreed that if all you want to do is link PETSc, then do what Satish said. However, there is a lot more information in the configure, and someone could potentially use it in their build system to be "really smart". I am sure Ray Kurzweil has a team at Google working on it. 
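For anyone who does want to go that route, the import failure reported earlier looks like a path problem rather than a missing feature: configVars.py expects the BuildSystem modules that configure itself puts on sys.path, so it only works against a PETSc source tree (a prefix-only install such as the MacPorts one does not ship them). A rough sketch, with the directory names as assumptions based on the "config/BuildSystem" hint above:

export PYTHONPATH=$PETSC_DIR/config:$PETSC_DIR/config/BuildSystem:$PYTHONPATH
python $PETSC_DIR/bin/configVars.py    # reads the configure data in $PETSC_ARCH/conf/RDict.db

It may also need PETSC_DIR and PETSC_ARCH set in the environment so it can locate the right RDict.db.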
Matt > > Barrys-MacBook-Pro:power_grid barrysmith$ more > ~/Src/petsc-dev/arch-gnu/lib/pkgconfig/PETSc.pc > prefix=/Users/barrysmith/Src/petsc-dev > exec_prefix=${prefix} > includedir=${prefix}/include > libdir=/Users/barrysmith/Src/petsc-dev/arch-gnu/lib > ccompiler=/Users/barrysmith/Src/petsc-dev/arch-gnu/bin/mpicc > fcompiler=/Users/barrysmith/Src/petsc-dev/arch-gnu/bin/mpif90 > blaslapacklibs=-llapack -lblas > > Name: PETSc > Description: Library to solve ODEs and algebraic equations > Version: 3.3.0 > Cflags: -I/Users/barrysmith/Src/petsc-dev/include > -I/Users/barrysmith/Src/petsc-dev/arch-gnu/include > -I/usr/local/include/libAfterImage/ -I/opt/X11/include > -I/Users/barrysmith/Src/ams-dev/include > Libs: -L/Users/barrysmith/Src/petsc-dev/arch-gnu/lib -lpetsc -lHYPRE > -L/usr/llvm-gcc-4.2/lib/gcc/i686-apple-darwin11/4.2.1/x86_64 > -L/Applications/Xcode.app/Contents/Developer/usr/llvm-gcc-4.2/lib/gcc/i686-apple-darwin11/4.2.1/x86_64 > -L/usr/llvm-gcc-4.2/lib/gcc/i686-apple-darwin11/4.2.1 > -L/usr/llvm-gcc-4.2/lib/gcc > -L/Applications/Xcode.app/Contents/Developer/usr/llvm-gcc-4.2/lib/gcc/i686-apple-darwin11/4.2.1 > -L/usr/llvm-gcc-4.2/lib > -L/Applications/Xcode.app/Contents/Developer/usr/llvm-gcc-4.2/lib > -lmpichcxx -lstdc++ -lml -lmpichcxx -lstdc++ -lcmumps -ldmumps -lsmumps > -lzmumps -lmumps_common -lpord -lscalapack -lsuperlu_4.3 -lsuperlu_dist_3.2 > -llapack -lblas -L/usr/local/include/libAfterImage/../../lib -lAfterImage > -ltriangle -L/opt/X11/lib -lX11 -L/Users/barrysmith/Src/ams-dev/lib > -lamspub -lparmetis -lmetis -lfftw3_mpi -lfftw3 -lptesmumps -lptscotch > -lptscotcherr -lpthread -lyaml -lmpichf90 -lgfortran > -L/usr/local/lib/gcc/x86_64-apple-darwin10.7.0/4.6.0 -L/usr/local/lib > -lgfortran -lgcc_ext.10.5 -lquadmath -lm -lm -lmpichcxx -lstdc++ -lm -lz > -lz -ldl -lpmpich -lmpich -lopa -lmpl -lSystem -ldl > > > On Jan 31, 2013, at 6:34 PM, Satish Balay wrote: > > > On Thu, 31 Jan 2013, Geoffrey Irving wrote: > > > >> We have an scons build system linking against PETSc, and it would be > >> nice to have an automatic way of determining the list of libraries > >> that a statically linked, installed version of PETSc wants (e.g., the > >> MacPorts installed version). What's a good way to do such a thing > >> from *outside* the PETSc build system? > > > > > > One way is for some script [like configure or equivalent] to create a > > simple petsc makefile and use any of the following targets to get > > the required info > > > >>>>>>>>>>>>> > > asterix:/home/balay/tmp>cat makefile > > PETSC_DIR=/home/balay/spetsc > > PETSC_ARCH=asterix64 > > include ${PETSC_DIR}/conf/variables > > include ${PETSC_DIR}/conf/rules > > > > asterix:/home/balay/tmp>make getincludedirs > > -I/home/balay/spetsc/include -I/home/balay/spetsc/asterix64/include > -I/home/balay/soft/linux64/mpich2-1.1/include > -I/home/balay/soft/mpich2-1.5/include > > asterix:/home/balay/tmp>make getlinklibs > > -Wl,-rpath,/home/balay/spetsc/asterix64/lib > -Wl,-rpath,/home/balay/spetsc/asterix64/lib > -L/home/balay/spetsc/asterix64/lib -lpetsc -llapack -lblas -lX11 -lpthread > -lm -Wl,-rpath,/home/balay/soft/mpich2-1.5/lib > -L/home/balay/soft/mpich2-1.5/lib > -Wl,-rpath,/usr/lib/gcc/x86_64-redhat-linux/4.7.2 > -L/usr/lib/gcc/x86_64-redhat-linux/4.7.2 -lmpichf90 -lgfortran -lm > -lgfortran -lm -lquadmath -lm -ldl -lmpich -lopa -lmpl -lrt -lgcc_s -ldl > > asterix:/home/balay/tmp> > >>>>>>>>>> > > > > [more similar targets are listed in conf/rules] > > > > The other option is to use pkgconfig file created by configure. 
> > It should be in PETSC_ARCH/lib/pkgconfig [in petsc-dev] > > > > Satish > > > > -- What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. -- Norbert Wiener -------------- next part -------------- An HTML attachment was scrubbed... URL: From bsmith at mcs.anl.gov Thu Jan 31 19:14:58 2013 From: bsmith at mcs.anl.gov (Barry Smith) Date: Thu, 31 Jan 2013 19:14:58 -0600 Subject: [petsc-users] automatic determination of which libraries petsc wants In-Reply-To: References: <9DC49EE9-EFFB-4C07-BA16-1A9C9F12762B@mcs.anl.gov> Message-ID: On Jan 31, 2013, at 7:10 PM, Geoffrey Irving wrote: > That would be nice, but unfortunately MacPorts doesn't seem to have > installed a .pc file anywhere: > >> find /opt/local -iname petsc.pc # produces nothing > > I'm not very familiar with pkgconfig. Would it be somewhere else? > pkg-config --list-all didn't have it either. I probably added it after our last release. We are do for a new release soon. Barry > > Geoffrey > > On Thu, Jan 31, 2013 at 5:05 PM, Barry Smith wrote: >> >> Why all the nonstandard suggestions, why not just do what Satish suggested: (if those work for you let us know and we'll improve the pkgconfig file >> >> Barrys-MacBook-Pro:power_grid barrysmith$ more ~/Src/petsc-dev/arch-gnu/lib/pkgconfig/PETSc.pc >> prefix=/Users/barrysmith/Src/petsc-dev >> exec_prefix=${prefix} >> includedir=${prefix}/include >> libdir=/Users/barrysmith/Src/petsc-dev/arch-gnu/lib >> ccompiler=/Users/barrysmith/Src/petsc-dev/arch-gnu/bin/mpicc >> fcompiler=/Users/barrysmith/Src/petsc-dev/arch-gnu/bin/mpif90 >> blaslapacklibs=-llapack -lblas >> >> Name: PETSc >> Description: Library to solve ODEs and algebraic equations >> Version: 3.3.0 >> Cflags: -I/Users/barrysmith/Src/petsc-dev/include -I/Users/barrysmith/Src/petsc-dev/arch-gnu/include -I/usr/local/include/libAfterImage/ -I/opt/X11/include -I/Users/barrysmith/Src/ams-dev/include >> Libs: -L/Users/barrysmith/Src/petsc-dev/arch-gnu/lib -lpetsc -lHYPRE -L/usr/llvm-gcc-4.2/lib/gcc/i686-apple-darwin11/4.2.1/x86_64 -L/Applications/Xcode.app/Contents/Developer/usr/llvm-gcc-4.2/lib/gcc/i686-apple-darwin11/4.2.1/x86_64 -L/usr/llvm-gcc-4.2/lib/gcc/i686-apple-darwin11/4.2.1 -L/usr/llvm-gcc-4.2/lib/gcc -L/Applications/Xcode.app/Contents/Developer/usr/llvm-gcc-4.2/lib/gcc/i686-apple-darwin11/4.2.1 -L/usr/llvm-gcc-4.2/lib -L/Applications/Xcode.app/Contents/Developer/usr/llvm-gcc-4.2/lib -lmpichcxx -lstdc++ -lml -lmpichcxx -lstdc++ -lcmumps -ldmumps -lsmumps -lzmumps -lmumps_common -lpord -lscalapack -lsuperlu_4.3 -lsuperlu_dist_3.2 -llapack -lblas -L/usr/local/include/libAfterImage/../../lib -lAfterImage -ltriangle -L/opt/X11/lib -lX11 -L/Users/barrysmith/Src/ams-dev/lib -lamspub -lparmetis -lmetis -lfftw3_mpi -lfftw3 -lptesmumps -lptscotch -lptscotcherr -lpthread -lyaml -lmpichf90 -lgfortran -L/usr/local/lib/gcc/x86_64-apple-darwin10.7.0/4.6.0 -L/usr/local/lib -lgfortran -lgcc_ext.10.5 -lquadmath -lm -lm -lmpichcxx -lstdc++ -lm -lz -lz -ldl -lpmpich -lmpich -lopa -lmpl -lSystem -ldl >> >> >> On Jan 31, 2013, at 6:34 PM, Satish Balay wrote: >> >>> On Thu, 31 Jan 2013, Geoffrey Irving wrote: >>> >>>> We have an scons build system linking against PETSc, and it would be >>>> nice to have an automatic way of determining the list of libraries >>>> that a statically linked, installed version of PETSc wants (e.g., the >>>> MacPorts installed version). What's a good way to do such a thing >>>> from *outside* the PETSc build system? 
>>> >>> >>> One way is for some script [like configure or equivalent] to create a >>> simple petsc makefile and use any of the following targets to get >>> the required info >>> >>>>>>>>>>>>>>> >>> asterix:/home/balay/tmp>cat makefile >>> PETSC_DIR=/home/balay/spetsc >>> PETSC_ARCH=asterix64 >>> include ${PETSC_DIR}/conf/variables >>> include ${PETSC_DIR}/conf/rules >>> >>> asterix:/home/balay/tmp>make getincludedirs >>> -I/home/balay/spetsc/include -I/home/balay/spetsc/asterix64/include -I/home/balay/soft/linux64/mpich2-1.1/include -I/home/balay/soft/mpich2-1.5/include >>> asterix:/home/balay/tmp>make getlinklibs >>> -Wl,-rpath,/home/balay/spetsc/asterix64/lib -Wl,-rpath,/home/balay/spetsc/asterix64/lib -L/home/balay/spetsc/asterix64/lib -lpetsc -llapack -lblas -lX11 -lpthread -lm -Wl,-rpath,/home/balay/soft/mpich2-1.5/lib -L/home/balay/soft/mpich2-1.5/lib -Wl,-rpath,/usr/lib/gcc/x86_64-redhat-linux/4.7.2 -L/usr/lib/gcc/x86_64-redhat-linux/4.7.2 -lmpichf90 -lgfortran -lm -lgfortran -lm -lquadmath -lm -ldl -lmpich -lopa -lmpl -lrt -lgcc_s -ldl >>> asterix:/home/balay/tmp> >>>>>>>>>>>> >>> >>> [more similar targets are listed in conf/rules] >>> >>> The other option is to use pkgconfig file created by configure. >>> It should be in PETSC_ARCH/lib/pkgconfig [in petsc-dev] >>> >>> Satish >>> >> From jedbrown at mcs.anl.gov Thu Jan 31 19:15:01 2013 From: jedbrown at mcs.anl.gov (Jed Brown) Date: Thu, 31 Jan 2013 19:15:01 -0600 Subject: [petsc-users] automatic determination of which libraries petsc wants In-Reply-To: <9DC49EE9-EFFB-4C07-BA16-1A9C9F12762B@mcs.anl.gov> References: <9DC49EE9-EFFB-4C07-BA16-1A9C9F12762B@mcs.anl.gov> Message-ID: I agree with using PETSc.pc now that it's available (but it's not in petsc-3.3). FWIW, it should be putting everything but libpetsc into Libs.private (which will be used for static linking). 
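Spelling that out: with the split Jed describes, only libpetsc stays in Libs:, everything it depends on moves to Libs.private:, and pkg-config adds the private part only when asked for a static link. A hypothetical fragment (library names lifted from the dump above purely as illustration):

Libs: -L${libdir} -lpetsc
Libs.private: -lHYPRE -lcmumps -ldmumps -lsmumps -lzmumps -lscalapack -llapack -lblas

pkg-config --libs PETSc             # shared case: just the -L path and -lpetsc
pkg-config --static --libs PETSc    # static case: Libs.private is appended as well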
On Thu, Jan 31, 2013 at 7:05 PM, Barry Smith wrote: > > Why all the nonstandard suggestions, why not just do what Satish > suggested: (if those work for you let us know and we'll improve the > pkgconfig file > > Barrys-MacBook-Pro:power_grid barrysmith$ more > ~/Src/petsc-dev/arch-gnu/lib/pkgconfig/PETSc.pc > prefix=/Users/barrysmith/Src/petsc-dev > exec_prefix=${prefix} > includedir=${prefix}/include > libdir=/Users/barrysmith/Src/petsc-dev/arch-gnu/lib > ccompiler=/Users/barrysmith/Src/petsc-dev/arch-gnu/bin/mpicc > fcompiler=/Users/barrysmith/Src/petsc-dev/arch-gnu/bin/mpif90 > blaslapacklibs=-llapack -lblas > > Name: PETSc > Description: Library to solve ODEs and algebraic equations > Version: 3.3.0 > Cflags: -I/Users/barrysmith/Src/petsc-dev/include > -I/Users/barrysmith/Src/petsc-dev/arch-gnu/include > -I/usr/local/include/libAfterImage/ -I/opt/X11/include > -I/Users/barrysmith/Src/ams-dev/include > Libs: -L/Users/barrysmith/Src/petsc-dev/arch-gnu/lib -lpetsc -lHYPRE > -L/usr/llvm-gcc-4.2/lib/gcc/i686-apple-darwin11/4.2.1/x86_64 > -L/Applications/Xcode.app/Contents/Developer/usr/llvm-gcc-4.2/lib/gcc/i686-apple-darwin11/4.2.1/x86_64 > -L/usr/llvm-gcc-4.2/lib/gcc/i686-apple-darwin11/4.2.1 > -L/usr/llvm-gcc-4.2/lib/gcc > -L/Applications/Xcode.app/Contents/Developer/usr/llvm-gcc-4.2/lib/gcc/i686-apple-darwin11/4.2.1 > -L/usr/llvm-gcc-4.2/lib > -L/Applications/Xcode.app/Contents/Developer/usr/llvm-gcc-4.2/lib > -lmpichcxx -lstdc++ -lml -lmpichcxx -lstdc++ -lcmumps -ldmumps -lsmumps > -lzmumps -lmumps_common -lpord -lscalapack -lsuperlu_4.3 -lsuperlu_dist_3.2 > -llapack -lblas -L/usr/local/include/libAfterImage/../../lib -lAfterImage > -ltriangle -L/opt/X11/lib -lX11 -L/Users/barrysmith/Src/ams-dev/lib > -lamspub -lparmetis -lmetis -lfftw3_mpi -lfftw3 -lptesmumps -lptscotch > -lptscotcherr -lpthread -lyaml -lmpichf90 -lgfortran > -L/usr/local/lib/gcc/x86_64-apple-darwin10.7.0/4.6.0 -L/usr/local/lib > -lgfortran -lgcc_ext.10.5 -lquadmath -lm -lm -lmpichcxx -lstdc++ -lm -lz > -lz -ldl -lpmpich -lmpich -lopa -lmpl -lSystem -ldl > > > On Jan 31, 2013, at 6:34 PM, Satish Balay wrote: > > > On Thu, 31 Jan 2013, Geoffrey Irving wrote: > > > >> We have an scons build system linking against PETSc, and it would be > >> nice to have an automatic way of determining the list of libraries > >> that a statically linked, installed version of PETSc wants (e.g., the > >> MacPorts installed version). What's a good way to do such a thing > >> from *outside* the PETSc build system? 
> > > > > > One way is for some script [like configure or equivalent] to create a > > simple petsc makefile and use any of the following targets to get > > the required info > > > >>>>>>>>>>>>> > > asterix:/home/balay/tmp>cat makefile > > PETSC_DIR=/home/balay/spetsc > > PETSC_ARCH=asterix64 > > include ${PETSC_DIR}/conf/variables > > include ${PETSC_DIR}/conf/rules > > > > asterix:/home/balay/tmp>make getincludedirs > > -I/home/balay/spetsc/include -I/home/balay/spetsc/asterix64/include > -I/home/balay/soft/linux64/mpich2-1.1/include > -I/home/balay/soft/mpich2-1.5/include > > asterix:/home/balay/tmp>make getlinklibs > > -Wl,-rpath,/home/balay/spetsc/asterix64/lib > -Wl,-rpath,/home/balay/spetsc/asterix64/lib > -L/home/balay/spetsc/asterix64/lib -lpetsc -llapack -lblas -lX11 -lpthread > -lm -Wl,-rpath,/home/balay/soft/mpich2-1.5/lib > -L/home/balay/soft/mpich2-1.5/lib > -Wl,-rpath,/usr/lib/gcc/x86_64-redhat-linux/4.7.2 > -L/usr/lib/gcc/x86_64-redhat-linux/4.7.2 -lmpichf90 -lgfortran -lm > -lgfortran -lm -lquadmath -lm -ldl -lmpich -lopa -lmpl -lrt -lgcc_s -ldl > > asterix:/home/balay/tmp> > >>>>>>>>>> > > > > [more similar targets are listed in conf/rules] > > > > The other option is to use pkgconfig file created by configure. > > It should be in PETSC_ARCH/lib/pkgconfig [in petsc-dev] > > > > Satish > > > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From sean at mcs.anl.gov Thu Jan 31 19:29:24 2013 From: sean at mcs.anl.gov (Sean Farley) Date: Thu, 31 Jan 2013 19:29:24 -0600 Subject: [petsc-users] automatic determination of which libraries petsc wants In-Reply-To: References: Message-ID: On Thu, Jan 31, 2013 at 6:46 PM, Geoffrey Irving wrote: > On Thu, Jan 31, 2013 at 4:38 PM, Matthew Knepley wrote: >> On Thu, Jan 31, 2013 at 7:34 PM, Matthew Knepley wrote: >>> >>> On Thu, Jan 31, 2013 at 7:25 PM, Geoffrey Irving wrote: >>>> >>>> We have an scons build system linking against PETSc, and it would be >>>> nice to have an automatic way of determining the list of libraries >>>> that a statically linked, installed version of PETSc wants (e.g., the >>>> MacPorts installed version). What's a good way to do such a thing >>>> from *outside* the PETSc build system? >>> >>> >>> It depends on how much work you want to do. For at least two years I >>> think, >>> our default had been -lpetsc. I would just do this. >> >> >> Satish is right, use 'make getlinklibs'. >> >> However, if you don't have to waste time on a life or family, you may want >> to consider >> getting the info straight from the configure. You can load up the Python >> module from >> from $PETSC_ARCH/conf/RDict.db and pull out all these things. There is an >> example >> in configVars.py. > > configVars.py errors out at "import script": > >> ./bin/configVars.py > Traceback (most recent call last): > File "./bin/configVars.py", line 7, in > import script > ImportError: No module named script > > Indeed, I don't see any file named script.py anywhere underneath > /optlocal/lib/petsc, nor any directory named exactly "config" as > configVars seems to want. > > Bad macports, bad? MacPorts does a lot of things bad ;-) Actually, now that I'm a certified MacPorts developer, I've had my sights on fixing the PETSc port. First, I need to do some massaging to get the other devs to understand the need of a standalone gfortran port. Then, the rest will fall in place (hopefully). 
From irving at naml.us Thu Jan 31 19:53:48 2013 From: irving at naml.us (Geoffrey Irving) Date: Thu, 31 Jan 2013 17:53:48 -0800 Subject: [petsc-users] automatic determination of which libraries petsc wants In-Reply-To: References: Message-ID: On Thu, Jan 31, 2013 at 5:29 PM, Sean Farley wrote: > On Thu, Jan 31, 2013 at 6:46 PM, Geoffrey Irving wrote: >> On Thu, Jan 31, 2013 at 4:38 PM, Matthew Knepley wrote: >>> On Thu, Jan 31, 2013 at 7:34 PM, Matthew Knepley wrote: >>>> >>>> On Thu, Jan 31, 2013 at 7:25 PM, Geoffrey Irving wrote: >>>>> >>>>> We have an scons build system linking against PETSc, and it would be >>>>> nice to have an automatic way of determining the list of libraries >>>>> that a statically linked, installed version of PETSc wants (e.g., the >>>>> MacPorts installed version). What's a good way to do such a thing >>>>> from *outside* the PETSc build system? >>>> >>>> >>>> It depends on how much work you want to do. For at least two years I >>>> think, >>>> our default had been -lpetsc. I would just do this. >>> >>> >>> Satish is right, use 'make getlinklibs'. >>> >>> However, if you don't have to waste time on a life or family, you may want >>> to consider >>> getting the info straight from the configure. You can load up the Python >>> module from >>> from $PETSC_ARCH/conf/RDict.db and pull out all these things. There is an >>> example >>> in configVars.py. >> >> configVars.py errors out at "import script": >> >>> ./bin/configVars.py >> Traceback (most recent call last): >> File "./bin/configVars.py", line 7, in >> import script >> ImportError: No module named script >> >> Indeed, I don't see any file named script.py anywhere underneath >> /optlocal/lib/petsc, nor any directory named exactly "config" as >> configVars seems to want. >> >> Bad macports, bad? > > MacPorts does a lot of things bad ;-) Actually, now that I'm a > certified MacPorts developer, I've had my sights on fixing the PETSc > port. First, I need to do some massaging to get the other devs to > understand the need of a standalone gfortran port. Then, the rest will > fall in place (hopefully). Cool, I look forward to that. For now I ended up just updating our manual list of libraries. Geoffrey
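One way to retire that manual list, until a usable PETSc.pc is shipped, is to let the build ask PETSc's own makefile targets for the flags, along the lines Satish showed. A rough SConstruct sketch; it assumes PETSC_DIR (and PETSC_ARCH, if any) are set and that the install still provides conf/variables and conf/rules, and the helper below is illustrative rather than an established recipe:

import os, subprocess, tempfile

def petsc_make(target):
    # Write a throwaway makefile that includes the PETSc conf files, then
    # run 'make <target>' ('getincludedirs' or 'getlinklibs') and return its output.
    text = ("PETSC_DIR = %s\nPETSC_ARCH = %s\n"
            "include ${PETSC_DIR}/conf/variables\n"
            "include ${PETSC_DIR}/conf/rules\n") % (
                os.environ['PETSC_DIR'], os.environ.get('PETSC_ARCH', ''))
    with tempfile.NamedTemporaryFile('w', suffix='.mk', delete=False) as f:
        f.write(text)
        path = f.name
    try:
        return subprocess.check_output(['make', '-s', '-f', path, target]).decode().strip()
    finally:
        os.unlink(path)

env = Environment()
env.MergeFlags(petsc_make('getincludedirs'))   # -I... include flags
env.MergeFlags(petsc_make('getlinklibs'))      # -L/-l/-Wl,-rpath link flags
env.Program('app', ['app.c'])

If a PETSc.pc does show up later, env.ParseConfig('pkg-config --cflags --libs PETSc') is the shorter equivalent.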